
Amazon Just Wrote the First Rules for AI Agents

Amazon's updated Business Solutions Agreement went into effect today. Buried inside a routine contract update that most sellers will accept without reading is something that has never existed before: a formal, legally binding definition of "Agent" applied to AI systems operating on a major platform, with explicit rules those agents must follow.

The policy is three sentences long and deceptively simple. AI agents must clearly identify themselves as automated systems. They must comply with the new Agent Policy at all times. They must immediately stop accessing Amazon's services if Amazon tells them to.

If you don't sell on Amazon, you might think this doesn't concern you. But Amazon didn't write this policy because it has a seller automation problem. It wrote this policy because it has an AI agent problem -- and every platform that matters to your business is about to write the same rules.

What Amazon actually did

On February 17, Amazon posted an update to its Seller Central forums announcing changes to the BSA effective March 4. The update adds a standalone Agent Policy governing any "automated software or AI agents" that access Amazon's services. It formally defines "Agent" as a new category in the agreement -- the first time Amazon has given AI agents their own legal status.

Three requirements apply to every agent. Self-identification: agents must identify themselves as automated systems at all times. Compliance: agents must follow the Agent Policy without exception. Kill switch: agents must cease access immediately if Amazon requests it.
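Amazon has published no reference implementation for any of these requirements, so how they translate into working software is anyone's guess. As a purely hypothetical sketch of the kill-switch requirement -- every name below is illustrative, and the shutdown signal itself is unspecified as of this writing -- an agent's main loop might check for a cease-access request before each unit of work:

```python
# Hypothetical sketch of the "kill switch" requirement. Amazon has not
# said how a cease-access request would be signaled; one plausible
# mechanism is a dedicated error response on the agent's next API call.

def shutdown_requested() -> bool:
    """Stand-in for however the platform signals 'stop now'
    (e.g. a specific status code -- unspecified as of this writing)."""
    return False

def run_agent(tasks: list[str]) -> list[str]:
    """Process tasks, checking for a shutdown signal before each one."""
    completed = []
    for task in tasks:
        if shutdown_requested():  # cease access immediately on request
            break
        completed.append(f"done: {task}")
    return completed

print(run_agent(["sync inventory", "reprice listings"]))
# → ['done: sync inventory', 'done: reprice listings']
```

The point of the sketch is the ordering: the shutdown check comes before the work, not after, because the policy says "immediately."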

Alongside the Agent Policy, Amazon added a new subsection prohibiting the use of Amazon materials or services for AI model development. The language covers data mining, reverse engineering, and extracting source code or model components. Amazon also renamed its "Developer Site" to the "Solution Provider Portal" -- a small administrative change that signals how it now thinks about the relationship between its platform and the tools that connect to it.

Sellers who continued using Amazon after today automatically accepted the new terms. There was no opt-in. No negotiation. Fifteen days between announcement and enforcement.

The part nobody outside Amazon's seller ecosystem is talking about

The Agent Policy didn't arrive in isolation. It's the legal layer on top of a technical strategy Amazon has been building for months.

In September 2025, Amazon transformed its Seller Assistant from a passive tool into an autonomous system that manages inventory, compliance, and advertising simultaneously. In November 2025, it launched the Ads Agent at its unBoxed conference and unified its Campaign Manager into a single interface. In February 2026, it opened the Amazon Ads MCP Server in beta -- a controlled channel through which AI agents can interact with Amazon's advertising APIs using natural language, on Amazon's infrastructure, with Amazon's visibility into every query.

Then, two weeks later, it updated the BSA to require every external agent to identify itself, comply with rules, and shut down on demand.

Vanessa Hung, CEO of Online Seller Solutions, connected the dots in an analysis published on LinkedIn. She argued that Amazon's review visibility cap (which limits how much data third-party tools can pull from product pages) and the BSA update are coordinated steps to close the data pipeline that allowed external AI tools to build intelligence systems using Amazon's marketplace data without authorization.

The pattern is hard to miss: Amazon is simultaneously opening a controlled front door (the MCP Server) and locking the uncontrolled back doors (the BSA restrictions). AI agents are welcome on Amazon's platform -- as long as they come through Amazon's infrastructure, follow Amazon's rules, and operate with Amazon watching.

The seller confusion is real -- and revealing

Amazon's seller forums tell you everything you need to know about how this kind of policy change actually lands. One seller raised a pointed concern about the breadth of "automated software": they use software to pull pending orders for fulfillment and send tracking back to Amazon. That's not AI -- it's basic order management. But the language is broad enough to cover it.

Another seller asked whether GETIDA, a widely used FBA reimbursement service, would be considered an "Agent" under the new policy. A third pointed out the irony of Amazon writing AI governance rules while aggressively deploying its own AI across the platform.

Nobody on the forum had clear answers, because Amazon hadn't provided them. The company didn't define the technical boundary between "automated software" and "AI agent." It didn't explain how agent self-identification should work for software that communicates through API calls rather than user-facing interfaces. And it gave sellers and software providers fifteen days to figure it out.
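One plausible answer to the self-identification question -- borrowed from how well-behaved web crawlers already announce themselves, and purely an assumption since Amazon has specified no format -- is a declarative User-Agent header attached to every API call:

```python
import urllib.request

# Hypothetical self-identification convention: declare the automated
# nature of the caller in the User-Agent header of every request.
# The agent name, version, and contact field are all illustrative.
AGENT_IDENTITY = "ExampleOrderAgent/1.0 (automated; AI agent; contact=ops@example.com)"

def identified_request(url: str) -> urllib.request.Request:
    """Build an API request that carries the agent's self-identification."""
    return urllib.request.Request(url, headers={"User-Agent": AGENT_IDENTITY})

req = identified_request("https://example.com/api/orders")
print(req.get_header("User-agent"))
# → ExampleOrderAgent/1.0 (automated; AI agent; contact=ops@example.com)
```

Whether Amazon would accept this convention, require a registered agent ID, or mandate something else entirely is exactly the kind of detail the policy leaves open.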

This is what AI agent governance looks like in practice right now: a platform writes broad rules, doesn't clarify the edge cases, and makes compliance the problem of everyone who depends on it.

Every platform is doing the same thing

Amazon is first to put it in a seller agreement. But the same pattern -- restrict uncontrolled access, launch a governed alternative -- is playing out across every major platform simultaneously.

Salesforce locked down Slack's API terms to prohibit bulk data export, persistent archiving, and using Slack data for LLM training. Then it launched a reimagined Slackbot as the approved AI agent for the platform -- powered by Claude, connected to Salesforce data, operating within existing permissions. Slack's CEO framed it explicitly: conversations are becoming infrastructure, and the platform controls who gets to build on that infrastructure.

Shopify's Winter '26 Edition introduced "Agentic Storefronts" -- products that surface inside AI conversations on ChatGPT, Perplexity, and Microsoft Copilot, with transactions happening in the conversation and attribution flowing back through Shopify. The platform is choosing which AI agents can sell on behalf of merchants, and how.

Atlassian launched agents in Jira that operate inside existing project configurations, permissions, and audit trails. The Rovo MCP Server is the approved connection point for external AI clients. Third-party agents are welcome -- through Atlassian's infrastructure.

Google announced the Universal Commerce Protocol to standardize how AI agents execute purchases across retailers, with Visa, Mastercard, Stripe, Shopify, Target, and Walmart as launch partners. Google is positioning itself as the neutral infrastructure layer -- but it's still Google deciding the protocol.

The playbook is identical everywhere. Step one: notice that AI agents are accessing your platform in ways you can't see or control. Step two: restrict that uncontrolled access. Step three: launch your own governed pathway -- an MCP server, an official agent, a sanctioned integration -- that gives you visibility, control, and a cut of the value.

What this means if you're not an Amazon seller

The Amazon policy matters beyond Amazon for three reasons.

First, it establishes precedent. Amazon is the first major platform to formally define "Agent" in its legal terms and require compliance. That definition and those requirements will be copied -- probably within months -- by Shopify, Salesforce, Google Workspace, Microsoft, and every other platform where AI agents interact with business data. If you're building on or using AI agents that touch any major platform, expect similar rules to arrive.

Second, it reveals the platform strategy. Platforms aren't trying to block AI agents. They're trying to control the terms under which agents operate. Amazon wants AI agents managing seller advertising -- it built a whole MCP Server for exactly that. What Amazon doesn't want is AI agents independently scraping its data, training models on its materials, or operating invisibly on its infrastructure. The distinction isn't "AI agents yes or no." It's "AI agents on our terms or not at all."

Third, it raises a real question about who governs AI agents. Right now, it's not governments -- no AI agent legislation exists in the US at the federal level. It's platforms. Amazon, Salesforce, Google, and Atlassian are writing the rules for how AI agents can operate in commercial environments, and those rules are designed to serve platform interests first. That's not inherently wrong -- platforms have legitimate reasons to govern what runs on their infrastructure -- but it means the regulatory framework for AI agents is being written by the companies most motivated to control what agents can do and who benefits.

The uncomfortable implication for AI agent tools

If you're evaluating AI agent tools for your business -- whether that's OpenClaw, Perplexity Computer, or any of the dozens of agent platforms launching in 2026 -- Amazon's policy update introduces a question you should be asking but probably aren't: does this agent operate through sanctioned integration paths, or does it work around them?

An AI agent that accesses Amazon through the official MCP Server will keep working when Amazon tightens its policies. An agent that scrapes Amazon's data or works through undocumented endpoints might wake up one morning without access. The same logic applies to every platform. AI tools that use Atlassian's Rovo MCP Server, Salesforce's Agentforce APIs, or Google's commerce protocols are building on foundations the platforms want to support. Tools that bypass those foundations are building on ground that can shift at any time.

This is the real cost of the self-hosted agent model. When platforms start writing rules for AI agents, agents that operate outside the platform's approved channels become compliance liabilities -- not just technical ones.

What happens next

Amazon's Agent Policy is version one. It's vague, underspecified, and raises more questions than it answers. But it exists, and that's the point. Six months from now, it'll be more specific. A year from now, there will be an enforcement track record. And every other major platform will have its own version.

The window between "AI agents can do whatever they want on platforms" and "AI agents must operate within platform-defined rules" is closing. For most people reading this, the practical takeaway is straightforward: the AI tools that will last are the ones that work with platforms, not around them. That means governed integrations, sanctioned APIs, and agents that operate within the structures your business already depends on -- not standalone systems that promise autonomy but can't guarantee access.

The question isn't whether AI agents need rules. It's who gets to write them. Right now, the answer is Amazon.


This is part of a series on AI agents in 2026. See also: Jira Now Lets You Assign Tickets to AI Agents, Is OpenClaw Safe?, Perplexity Computer Can Build a Bloomberg Terminal, and Best OpenClaw Alternatives That Don't Require Coding.

Last updated: March 2026
