
Google Is Banning OpenClaw Users - No Warnings, No Refunds

Starting around February 12, Google began permanently banning paid AI Pro and Ultra subscribers who had connected OpenClaw to their accounts through Antigravity OAuth. No warning. No grace period. No appeals process. And for annual subscribers who prepaid - no refunds.

This isn't a temporary suspension or a slap on the wrist. These are permanent account terminations, and in some cases they extend beyond the AI subscription itself. Some users report losing access to Gmail and Google Workspace accounts tied to the same credentials.

What happened

OpenClaw users had been routing their Gemini model access through Antigravity - Google's agent-first coding IDE - using OAuth tokens. This let them use Google's AI models inside OpenClaw instead of paying separately for Anthropic or OpenAI API access. Google's system allowed the OAuth connection without any block or warning at the point of integration. Then, weeks later, it flagged those accounts in bulk and terminated them.
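Mechanically, this pattern is simple: the third-party client presents an OAuth bearer token that was issued for a different product. A minimal sketch of what that request construction might look like — the endpoint, model name, and request shape here are illustrative assumptions, not Google's actual API:

```python
import json
import urllib.request

# Hypothetical endpoint -- a stand-in for whatever model API the token unlocks,
# NOT a real Google URL.
MODEL_ENDPOINT = "https://example.googleapis.com/v1/models/gemini:generate"

def build_request(oauth_token: str, prompt: str) -> urllib.request.Request:
    """Build a model-API request authorized with an OAuth token originally
    minted for a different client (the pattern Google later banned)."""
    body = json.dumps({"prompt": prompt}).encode()
    return urllib.request.Request(
        MODEL_ENDPOINT,
        data=body,
        headers={
            # Token issued to Antigravity, reused by a third-party tool.
            "Authorization": f"Bearer {oauth_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

The point the sketch makes clear: at the protocol level the server just sees a valid token, so nothing distinguishes the original client from a third party at connection time — which is why enforcement arrived weeks later rather than at integration.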

Google's engineering team confirmed the reason by email: using Antigravity server credentials to power a non-Antigravity product violates their Terms of Service. The bans aren't limited to OpenClaw - users of OpenCode, the antigravity-cockpit browser extension, and other third-party tools that routed through the same OAuth flow were also hit.

One subscriber reported being banned after a single day of integration. Another spent three weeks escalating through support channels before receiving final confirmation that their account cannot be restored.

The support experience made it worse

The enforcement itself was harsh. The support experience around it was absurd.

One subscriber spent eight days working through Tier 1 support before reaching Google One, which told them on the record that the 403 errors were a "known WAF bug" - a technical issue, not a policy violation. That same subscriber was then routed to Android App Developer support, a team with no access to resolve the problem. Meanwhile, Google's engineering team was simultaneously sending emails confirming it was an intentional, permanent, policy-based termination.

Google also removed forum posts where banned users were trying to understand what happened. One subscriber who politely asked why a moderator's acknowledgment of the ban wave had been deleted had their own forum account terminated.

And through all of this, Google continued billing suspended accounts their monthly subscription fee.

What the companies are saying

Google Antigravity lead Varun Mohan defended the enforcement, citing a "massive increase in malicious usage" that was degrading quality of service. He acknowledged a narrow exception for users genuinely unaware of the restriction, but offered no formal process for claiming it.

OpenClaw creator Peter Steinberger called Google's approach "pretty draconian" and announced he would remove Antigravity support from OpenClaw entirely. His comparison was pointed: "Even Anthropic pings me and is nice about issues. Google just bans?"

It's worth noting that Anthropic took its own action against third-party OAuth usage - blocking OpenCode from Claude Max subscriptions in late January via client fingerprinting. But Anthropic blocked the integration proactively rather than letting it work and then banning users after the fact. That's a meaningful difference.
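Client fingerprinting in this sense generally means the server inspects request attributes — identifying headers, TLS characteristics, traffic shape — and refuses clients it doesn't recognize. A toy illustration of the proactive-blocking approach (the header name and allow-list are assumptions for the sketch, not Anthropic's actual mechanism):

```python
# Hypothetical allow-list of first-party client identifiers.
ALLOWED_CLIENTS = {"claude-desktop", "claude-cli"}

def gate_request(headers: dict) -> int:
    """Return an HTTP status for an incoming request based on a
    client-identifying header. Real fingerprinting also weighs TLS
    details and request patterns; this is the simplest possible form."""
    client = headers.get("X-Client-Name", "").lower()
    if client in ALLOWED_CLIENTS:
        return 200  # recognized first-party client
    return 403      # unknown client refused immediately, before any usage accrues
```

Gating at this layer fails the request on the spot — the "proactive" behavior described above — instead of letting usage accumulate and terminating the account afterward.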

OpenAI, meanwhile, went the opposite direction and explicitly whitelisted OpenCode for consumer plans. Three companies, three completely different approaches to the same problem.

Why this matters beyond the ban itself

The Google ban adds another item to an already long list of ecosystem risks around OpenClaw. When people evaluate whether to use a tool, they typically think about the tool itself: does it work, is it secure, what does it cost? But the Google situation introduces a different kind of risk: the platforms your tool connects to can turn on you without warning.

This isn't a theoretical concern. OpenClaw's entire value proposition is connecting to everything - your email, your messaging apps, your calendar, your AI model providers. Each of those connections is a dependency, and each dependency is controlled by a company that can change its policies at any time. Google just demonstrated what that looks like in practice.

For people already running OpenClaw, the immediate action is straightforward: don't route through Antigravity OAuth if you haven't already, and if you have, check your account status now rather than waiting for the ban to land.
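One low-effort way to check is to make any authenticated API call you normally make and look at the status code — affected users report blanket 403s. A hedged sketch of the idea (the URL is a placeholder you'd swap for a real authenticated endpoint; the classification is heuristic, since a 403 can also mean an expired token or a transient server-side issue):

```python
import urllib.error
import urllib.request

def classify_status(code: int) -> str:
    """Map the HTTP status from an authenticated probe to a rough diagnosis.
    Treat 'forbidden' as a prompt to investigate, not proof of termination."""
    if code == 200:
        return "ok"
    if code == 401:
        return "unauthorized"  # token likely expired; re-authenticate and retry
    if code == 403:
        return "forbidden"     # consistent with the ban symptom users report
    return f"error:{code}"

def probe(url: str, oauth_token: str) -> str:
    """Hit an authenticated endpoint (placeholder URL) and classify the result."""
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {oauth_token}"}
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as e:
        return classify_status(e.code)
```

If the probe keeps returning "forbidden" after a fresh re-authentication, that matches the pattern described above and is worth escalating before the next billing cycle.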

For people evaluating AI agents more broadly, the lesson is worth internalizing. The true cost of a self-hosted AI agent isn't just server bills and API fees. It includes the time spent monitoring these ecosystem shifts, reacting to policy changes you didn't see coming, and cleaning up the consequences when a platform decides your usage pattern no longer fits.

It's one more reason the gap between "powerful but unpredictable" and "limited but safe" keeps widening - and why managed AI platforms that own their own infrastructure look increasingly practical by comparison.


This is part of a series on AI agents in 2026. See also: Claude Cowork vs OpenClaw, Is OpenClaw Safe?, How Much Does OpenClaw Actually Cost?, and OpenClaw Setup Is Just the Beginning.

Last updated: February 2026

Multiply yourself with Sliq

Sliq connects to all your tools and can do anything you can - just ask it in Slack.

Try Sliq Free