The White House just blocked Anthropic from expanding access to its most powerful AI model. And the reason they gave is the same reason every gatekeeper has ever used: "safety."
It isn't safety. It's picking winners.
On April 30, the Wall Street Journal reported that the White House opposes Anthropic's plan to expand Mythos, its frontier cybersecurity AI, to roughly 70 additional companies. That would have brought the total from about 50 organizations to approximately 120. Administration officials blocked it, citing security concerns and, reportedly, worries that Anthropic lacks the compute capacity to serve more users without hampering government access.
Anthropic's response? Silence. The company declined to comment.
But here is the part that makes the whole thing unravel: even the White House's own guy is calling bullshit.
David Sacks, the White House AI czar, posted that Mythos is what AI should have been months ago, and the government is actively withholding it from the people. Sacks, a man literally inside the building, described the situation as "selecting winners and oppressing small businesses." When the administration's own AI czar is publicly siding with the critics, it is not a conspiracy theory anymore. It is a documented disagreement at the highest level of government.
Let's talk about why this matters, because every small business owner in America should be furious.
Key Takeaways
- The Safety Lie: The White House blocked Anthropic's Mythos expansion citing safety, but even its own AI czar David Sacks called it "selecting winners and oppressing small businesses."
- Limited Access Already Failed: A Discord group bypassed the "most sophisticated AI safety perimeter" by guessing a URL. CISA, America's cyber defense agency, was never invited to Project Glasswing.
- Selective Permission: JPMorgan and the NSA get access while small security startups are locked out. The gate is a political instrument, not a security measure.
- Compute Feudalism: Anthropic is worth $900 billion and has $40 billion in revenue but claims it cannot scale access. The real barrier is political, not technical.
The Safety Lie
Anthropic has spent weeks warning that Mythos is too dangerous for public release. The model could supposedly hack any system, find 27-year-old vulnerabilities, and weaponize corporate infrastructure if it falls into the wrong hands.
These apocalyptic warnings justified Project Glasswing, the limited-release initiative that handed access to roughly 40 organizations. Of that group, only about a dozen were publicly named.
But somehow, Mythos is safe enough for JPMorgan? The same JPMorgan that has been breached multiple times? The same banking sector that suffers more cyberattacks than any other industry?
Safety is being selectively applied. It is a convenient smokescreen.
And here is where it gets almost funny. The limited-access model they claimed was for security purposes has already failed catastrophically.
On April 21, Bloomberg reported that an unauthorized group gained access to Mythos through a third-party vendor on the exact same day the model was announced. The group, operating through a private Discord channel, has been using the tool regularly ever since. They even provided screenshots and live demonstrations to prove it.
How did they bypass the most sophisticated AI safety perimeter in the world? They did not use zero-day exploits. They did not execute complex code injections. They guessed the URL based on Anthropic's predictable naming conventions.
Meanwhile, CISA, the Cybersecurity and Infrastructure Security Agency, the government body actually tasked with protecting America's critical tech infrastructure, was not invited to Glasswing at all. A Discord group got Mythos before America's own cyber defense agency did.
If Mythos is too dangerous for legitimate businesses, how did a Discord group steal access through a contractor before the launch dust even settled? The limited-access model already failed. Security through obscurity always does.
The Coding Agent Catastrophe
And if that was not enough, here is the kicker. On April 30, VentureBeat reported that security researchers demonstrated full secret exfiltration from AI coding agents built by Anthropic, Google, and Microsoft.
A crafted GitHub pull request or branch name was enough to steal OAuth tokens, API keys, and credentials in cleartext. Claude Code, GitHub Copilot, and OpenAI Codex were all compromised through the exact same prompt-injection class. Bug bounties were paid. No CVEs were issued. No users were notified.
If these companies cannot secure access for 50 vetted organizations, do not pretend that locking out 50,000 more is for their own good. The perimeter was never secure. The gate was never locked.
The real pattern is not protection. It is selective permission.
Picking Winners in a Boardroom
This was never about safety. It was about deciding who gets to compete.
When Anthropic limits Mythos to Amazon, Google, JPMorgan, and a handful of government agencies while blocking expansion to 70 additional companies, it is not managing risk. It is rationing capability.
The market should decide which businesses thrive with AI. Instead, a boardroom in San Francisco is making that call, with the White House rubber-stamping the guest list.
Consider the hypocrisy. Anthropic restricted Mythos because it might be weaponized. Yet the NSA is reportedly using the model, according to Axios and TechCrunch. This comes despite the Pentagon labeling Anthropic a supply-chain risk just weeks earlier.
The UK's AI Security Institute also confirmed access. The U.S. military wants the tool for its own scanning and exploitation. You just cannot have it.
Sacks himself warned where this leads. Once Chinese open source releases the equivalent, the public will get it anyway. The gatekeeping will not stop progress. It will only hand the advantage to China. This is not security. It is self-sabotage.
The $900 Billion Elephant in the Room
There is another reason the compute limits argument does not hold water. A big one.
Anthropic is currently raising roughly $50 billion at a valuation of approximately $900 billion, per TechCrunch. Its annual revenue run rate has surpassed $30 billion and is reportedly closer to $40 billion. Google is investing up to $40 billion in cash and compute. Amazon and Broadcom deals are already in motion.
A company with nearly a trillion-dollar valuation and $40 billion in yearly revenue does not get to cry poverty when asked to scale access. It gets to decide where to spend its money.
And Anthropic has chosen massive compute partnerships with the same tech giants that already dominate every other layer of the stack. The expansion did not fail because of capacity. It failed because of politics.
Meanwhile, OpenAI is playing the same game. On April 30, CEO Sam Altman confirmed that OpenAI will restrict its competing GPT-5.5 Cyber tool to critical cyber defenders only, requiring an application and credential verification.
This is the same Altman who called Anthropic's Mythos restriction fear-based marketing just nine days earlier. The playbook is identical. Gatekeep the powerful tools, hand the keys to connected institutions, and tell everyone else to wait their turn.
Small businesses do not need permission to compete. They need access.
The Death of American Small Business, Accelerated
This is the part that should genuinely scare you.
Small businesses built America. Five-person agencies out-execute 500-person corporations every single day, because they are faster, more focused, and hungrier. AI was supposed to multiply that advantage. Instead, it is being weaponized against it.
I have seen this playbook before. Different decade, same trick. The names change but the game stays the same: scare the public, write the rules, hand the keys to the few.
When the most transformative technology of our lifetime is rationed by government and corporate gatekeepers, the gap between what a 10-person shop can access and what a megacorp commands grows wider every quarter.
This is not innovation. It is compute feudalism. The lord hands grain to his vassals and tells the peasants starvation is for their own safety.
Chinese open-source models like DeepSeek and Qwen already offer capable alternatives at prices that American closed models cannot match. The U.S. is spending $900 billion valuations and Pentagon court battles to build a wall that Chinese open source will walk through for free.
Why You Should Be Mad
The White House's own relationship with Anthropic confirms how broken this is. President Trump tried to block all federal agencies from using Anthropic products in February. Defense Secretary Pete Hegseth sought to declare the company a supply-chain risk because Anthropic refused to allow unrestricted Pentagon use in autonomous weapons and domestic surveillance.
U.S. District Judge Rita Lin blocked that directive in March. Then, in April, White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent held what they called a productive meeting with Anthropic CEO Dario Amodei. The administration that tried to ban Anthropic is now helping it keep models away from you.
The White House's own AI czar says this is about picking winners and oppressing small businesses. When the guy inside the building is saying the same thing the rest of us are screaming from the outside, it is not a radical opinion. It is a documented, observable fact.
A tool is either on the market or it is not. There is no middle ground where AI is safe enough for JPMorgan's cybersecurity team but too dangerous for a regional bank's.
There is no consistent world where the NSA can scan vulnerabilities with Mythos but a small security startup cannot. Either the model is too dangerous to exist, or it is a product that anyone can buy.
The Closing Argument
No more picking winners. No more safety smokescreens. No more guest lists.
The market decides who thrives. Not the overlords. Not the bureaucrats. Not a handpicked group of corporate partners in a Glasswing initiative that left out the very agency tasked with defending America's cyber infrastructure.
For developers and small businesses watching this unfold, the path forward is the same one we have been advocating for months. Sovereign AI is not a hobby. It is survival.
Local models through Ollama, agent orchestration through OpenClaw, and self-hosted infrastructure are not just cheaper alternatives to the API gatekeepers. They are the only guarantee that your tools cannot be taken away because a White House meeting went the wrong way.
Once one company gets to ration access to the most powerful technology of the decade, every company will try. Anthropic and OpenAI are testing whether the public will accept tiered AI citizenship.
If we do not reject it now, the AI revolution will belong to a handful of companies, a few dozen government agencies, and everyone else will get the budget tier.
That is not a future. That is a country club. And you are not on the list.
Get More Articles Like This
Getting your AI agent setup right is just the start. I'm documenting every mistake, fix, and lesson learned as I build PhantomByte.
Subscribe to receive updates when we publish new content. No spam, just real lessons from the trenches.