AI agents rewrote the SaaS shared responsibility model. Your backup should catch up.

Rewind | Last updated on May 1, 2026 | 5 minute read

An AI coding agent opened a pull request at 2:14 a.m., merged itself, and kicked off a workflow that rewrote field definitions across a production Jira instance. By 2:31 a.m., 4,200 issues carried the wrong statuses. Nobody at the company was awake. Nobody had approved a single change.

This kind of story used to be rare; now it is routine. It also exposes a quiet problem: the contract that tells your vendor, your CISO, and your auditor who protects your SaaS data was written for a slower world, one where no AI agent had ever touched a production system.

A fifteen-year-old contract, meeting a new operator

AWS formalized the Shared Responsibility Model, or SRM, in 2011, and SaaS vendors have adapted it ever since. The SRM is the document that splits duties between the platform vendor and the customer: the vendor protects the infrastructure; the customer protects the data, the access, and the configuration inside it.

As of April 2026, that model is roughly fifteen years old, and it has held up well. It tells the CISO where the vendor’s obligation ends. It tells infrastructure leads what they own. It tells legal what to draft in the data processing agreement. It tells audit what to scope.

It also predates almost every risk an executive now manages. It predates production AI agents acting inside SaaS tenants. It predates the GDPR-era regulatory fine regime. It predates the finding that 68% of breaches involve a non-malicious human element, a person making an error or falling for social engineering (Verizon 2024 Data Breach Investigations Report). The model is not broken. It is just showing its age against a workload it was never designed to govern.

Why 2011 assumptions strain in the age of AI agents

The 2011 model assumed a human operator. A person with a mouse. A person who could only click so fast. Review gates, approval chains, and change windows were designed around that pace.

An AI agent is not that operator. A single agent can execute thousands of operations per minute. A single workflow can cross permission scopes, platforms, and data surfaces in one run. When an agent makes a bad change, the blast radius is not one record but thousands. And the audit log attributes every action to the human API key the agent borrowed.

Three gaps open up:

  • Undoability. The 2011 model assumed most mistakes could be reversed through platform history. At agent scale, that assumption breaks.
  • Attribution. The 2011 model assumed a human behind every action. Agent activity routed through a human credential needs a different audit posture.
  • Reversibility at pace. If 10,000 operations land in an hour, recovery has to match the speed of the damage.
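The attribution and pace gaps above can be illustrated with a small sketch. Assuming a hypothetical audit log of (API key, timestamp) events, one simple heuristic is to flag any credential whose per-minute operation count exceeds plausible human pace; the schema and threshold here are illustrative, not any platform's real API:

```python
from collections import defaultdict

HUMAN_PACE_LIMIT = 60  # max plausible operations per minute for one person

def flag_agent_pace(events, limit=HUMAN_PACE_LIMIT):
    """Return API keys whose per-minute operation count exceeds `limit`."""
    per_minute = defaultdict(int)
    for key, ts in events:
        per_minute[(key, int(ts) // 60)] += 1  # bucket events by minute
    return sorted({key for (key, _), count in per_minute.items() if count > limit})

# A burst of 500 changes inside one minute from a borrowed service key,
# alongside a human editing once every few seconds:
events = [("svc-jira-bot", 1_700_000_000 + (i % 40)) for i in range(500)]
events += [("alice", 1_700_000_000 + i * 5) for i in range(8)]
print(flag_agent_pace(events))  # → ['svc-jira-bot']
```

A real deployment would look at the identity provider and token scopes, not just pace, but the point stands: agent activity routed through a human credential is detectable only if someone is looking for it.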

The pressure around these gaps is loud. 87% of IT professionals reported experiencing SaaS data loss in 2024, with malicious deletions as the leading cause (Kaseya, 2025 State of Backup and Recovery Report, n=3,000+). 100% of companies surveyed experienced revenue losses due to outages in the previous 12 months (Cockroach Labs, The State of Resilience 2025). 79% of technology leaders admit they are not completely prepared for new regulations like DORA, and 95% of executives are aware of at least one unresolved operational weakness within their tech stack.

The market is already moving. By 2028, 75% of enterprises will prioritize backup of software-as-a-service (SaaS) applications as a critical requirement, compared to 15% in 2024 (Gartner, August 2024).

What a 2026 model actually looks like

A modern SRM should treat platform-level recovery as a floor, not a ceiling. It should assume independent architecture, and it should plan for failover at the pace of an agent, not a human.

Platform-level recovery is the floor. Atlassian, Salesforce, Microsoft 365, Google Workspace, GitHub, QuickBooks Online, and Shopify all publish shared responsibility documentation. Each ships some level of built-in history or vendor-provided coverage such as Atlassian Backup and Restore. That is necessary, but it is not sufficient when an agent can touch every record in scope.

Independent architecture is the design principle. If an agent credential is compromised, any backup that lives on the vendor surface is inside the blast radius. An independent platform sits outside that radius. The old discipline, the 3-2-1 backup rule, calls for three copies of your data, stored in at least two different locations, with at least one copy held outside your SaaS provider. That discipline applies cleanly to the new problem.
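As a sketch, the 3-2-1 discipline can be expressed as a simple policy check over an inventory of backup copies. The field names and providers below are illustrative, not any vendor's schema:

```python
def meets_3_2_1(copies, saas_vendor):
    """Check the 3-2-1 discipline: at least three copies, in at least two
    distinct locations, with at least one copy held off the SaaS vendor."""
    locations = {c["location"] for c in copies}
    off_vendor = [c for c in copies if c["provider"] != saas_vendor]
    return len(copies) >= 3 and len(locations) >= 2 and len(off_vendor) >= 1

copies = [
    {"provider": "atlassian", "location": "us-east"},  # vendor-native history
    {"provider": "rewind", "location": "us-east"},     # independent copy
    {"provider": "rewind", "location": "eu-west"},     # second region
]
print(meets_3_2_1(copies, saas_vendor="atlassian"))  # → True
```

Drop either independent copy and the check fails, which is exactly the posture most teams discover they are in when they pull their SRM diagrams.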

Failover-ready capabilities keep humans in control. These are predefined options that cut disruption without handing the keys to another automated system. Hot Standby for Jira is scheduled for Q2 2026, a pre-synced secondary Jira instance maintained in a different region. Pilot Light for Jira is scheduled for Q3 2026, a Rewind-hosted, read-only reproduction of vital Jira data. Both are publicly announced. Timelines are subject to change.

Rewind is a SaaS resilience platform built on independent architecture, not a plugin. Your data stays accessible even if the SaaS vendor is compromised.

Rewind does not have AI products. Rewind’s backup and restore product protects your SaaS data from errors introduced by your own AI agents, automations, and AI-assisted workflows.

A seven-question AI agent checklist for the CISO

Use these with your platform owner and governance lead.

  1. What operations can our AI agent perform that built-in platform history cannot undo?
  2. Where does the attribution chain break between the human identity and the agent identity?
  3. What is our restore posture at agent pace?
  4. Which third-party integrations sit inside the SaaS vendor’s blast radius?
  5. What retention window meets our longest audit cycle?
  6. Where is our independent recovery path?
  7. Who owns this conversation at the executive level?
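The working-group exercise later in this post asks you to run this checklist against each platform and score it. A minimal scorecard sketch might look like this; the item keys, weights, and the sample answers are illustrative:

```python
# One key per checklist question; a platform earns a point per 'yes'.
CHECKLIST = [
    "undoable_ops_mapped",        # Q1: operations platform history cannot undo
    "attribution_chain_reviewed", # Q2: human-vs-agent identity gaps identified
    "restore_pace_tested",        # Q3: restore posture tested at agent pace
    "blast_radius_inventoried",   # Q4: integrations inside the blast radius listed
    "retention_meets_audit",      # Q5: retention covers the longest audit cycle
    "independent_path_exists",    # Q6: independent recovery path in place
    "executive_owner_named",      # Q7: executive owner assigned
]

def score_platform(answers):
    """One point per checklist item the platform owner can answer 'yes' to."""
    return sum(1 for item in CHECKLIST if answers.get(item))

jira_answers = {"undoable_ops_mapped": True, "independent_path_exists": True}
print(f"Jira: {score_platform(jira_answers)}/{len(CHECKLIST)}")  # → Jira: 2/7
```

Anything below a full score on a high-risk platform is a concrete agenda item for the executive conversation.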

The compliance layer still matters

A 2026 SRM also has to map to the regulatory calendar the 2011 model never saw. Rewind supports compliance with SOC 2, ISO/IEC 27001:2022, GDPR, CCPA/CPRA, PIPEDA, DORA, HIPAA (via BAA), and HITECH. "Software customers are ALWAYS responsible for ensuring data is protected." That line sits at the top of our Shared Responsibility page, and it is the posture every modern SRM should start from.

Eran Polak, R&D Manager at Matics, put the customer side like this: “We knew Atlassian provides limited data backup coverage, but that doesn’t extend to backup of account-level data in GitHub and Jira.” That essentially sums up the shared responsibility conversation.

Three moves for this quarter

The pace problem is solvable with a small working group.

  • Pull your current SRM diagrams. Confirm how many platforms are in scope. Confirm whether each diagram was last updated before production AI agents joined your team.
  • Run the seven-question checklist against your three highest-risk SaaS platforms. Score each.
  • Send this post to the two executives who would need to agree on a 2026 update. Your CISO and your CIO. A five-person conversation in the next sixty days is the fastest path to a policy change.

The SRM is too important to belong to any one vendor. Rewind is trusted by more than 25,000 organizations worldwide, and we see the gap opening up in real accounts every week.

Learn more about Rewind’s shared responsibility approach at rewind.com/shared-responsibility.


Rewind
Rewind is a leading and trusted provider of cloud backup and data recovery solutions, helping businesses safeguard their critical SaaS data from loss, corruption, and cyber threats.