82 percent.

Not a think piece. Not a futurist projection. Not a hot take from some LinkedIn influencer trying to sell you a webinar.

82 percent of executives, VP level and above, across five countries, self-reported that AI has lowered the value they place on human employees.

The data comes from Globalization Partners (G-P), a global employment platform, in their third annual AI at Work Report. G-P surveyed 2,850 executives across the United States, Germany, Singapore, Australia, and France. This is not an activist group pushing a narrative. This is an employment infrastructure company whose core business is helping companies hire humans across borders.

And 82 percent of their surveyed leaders just admitted, on the record, that the technology they are aggressively adopting has made them value the people they employ less.

Let that sit for a second.

For two years, the public conversation about AI and jobs has been framed almost entirely wrong. It has been about robots at every desk. Mass layoffs. The end of knowledge work. That conversation was loud, scary, and largely fictional.

The real story is quieter and much more dangerous. AI is not replacing you. It is becoming the excuse to pay you less, monitor you harder, and treat you as interchangeable.

And the numbers are now undeniable.

The Performative AI Trap


Here is the cruelest detail buried in the G-P data. While 82 percent of executives say AI has devalued human workers in their eyes, 88 percent expressed concern that employees are using AI "performatively."

Workers, in other words, are generating AI outputs to inflate usage scores without adding real business value.

Think about the logic loop this creates. Executives deploy AI tools across their organizations. They tie employee performance metrics to AI adoption and usage. Then they express shock and disappointment when employees respond to those incentives by using AI as much as possible, regardless of whether it adds value.

Amazon has already been caught in a version of this. Employees are rated on AI tool usage. So they generate garbage through the tools to hit their metrics. The company builds the system that rewards performative AI, then management wrings its hands about workers gaming the system they were told to game.

The survey reveals deeper rot underneath. Only 23 percent of executives said they have total confidence in AI accuracy. That same lack of confidence has 69 percent of them spending more time monitoring and reviewing AI outputs. And 61 percent are concerned about using AI for sensitive documents because they doubt the legal accuracy of the outputs.

So the picture looks like this: executives are pouring money into AI tools they do not fully trust, building incentive structures that reward performative use, monitoring workers more intensely because the tools are unreliable, and then concluding that the humans they are monitoring are the ones who are less valuable.

The machine's failures get blamed on the user. Every single time.

The Wage Suppression Engine

On May 8, PhantomByte published a piece called "AI Isn't Taking Your Job; It's Taking Your Raise." The evidence was already piling up. Cloudflare laid off 1,100 people while reporting AI usage was up 600 percent inside the company. Match Group, the owner of Tinder and Hinge, announced it was slowing hiring specifically to redirect payroll budget toward AI tools.

This week, the Bank of Canada released a labor market assessment, reported by Reuters, that fills in another piece. The central bank found no evidence of large-scale AI-driven layoffs. Not yet. Instead, AI is primarily augmenting existing roles. The wage effects have been, in its words, "modest" and concentrated in roles with high exposure to routine cognitive tasks.

"Modest" is doing a lot of work in that sentence.

The mechanism the Bank of Canada describes aligns with what the Federal Reserve and Bank of England have found. AI is not firing you. It is making your work look slightly less valuable to the person who approves your raise. It is suppressing the ceiling, not removing the floor.

Here is how it works in practice.

A company announces it is evaluating AI solutions for your department. Nobody gets fired that day. But the hiring freeze kicks in. The attrition goes unfilled. The merit increase pool shrinks. The message to every worker in that department is unspoken but unmistakable: be grateful for what you have, because we are calculating whether a model can do 30 percent of what you do.

This mechanism works whether AI ever replaces anyone or not. The threat alone does the work.

There is a historical pattern here. Throughout the 1990s and 2000s, the mere threat of offshoring suppressed manufacturing wages across the United States. Factories did not need to actually move to China. Management just needed to leave brochures for foreign manufacturing facilities on the conference room table during union negotiations. The potential of a cheaper alternative was the weapon. The actual relocation was optional.

Today, the "somewhere else" is not a country. It is a model. And it is available with a per-token pricing page.
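To make the threat concrete, here is the back-of-the-envelope comparison sitting in front of every budget owner. A minimal sketch in Python; the token price, salary, and report size are hypothetical placeholders, not figures from the G-P or Bank of Canada data:

```python
# Back-of-the-envelope comparison. All numbers are hypothetical:
# a mid-tier API model priced around $10 per million output tokens,
# versus a knowledge worker at roughly $45/hour fully loaded.

PRICE_PER_MILLION_TOKENS = 10.00   # hypothetical per-token pricing
HOURLY_WAGE = 45.00                # hypothetical loaded hourly rate

# Say a routine report runs about 2,000 tokens and takes a human an hour.
report_tokens = 2_000
model_cost = report_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
human_cost = 1.0 * HOURLY_WAGE

print(f"Model: ${model_cost:.2f} per report")    # Model: $0.02 per report
print(f"Human: ${human_cost:.2f} per report")    # Human: $45.00 per report
print(f"Ratio: {human_cost / model_cost:,.0f}x cheaper on paper")
```

Whether the model's output is actually comparable is beside the point. The spreadsheet comparison exists, and that alone is enough to suppress the raise conversation.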

80% of AI Projects Fail. So What Is Actually Happening?

Krellix Labs, an AI research firm, published an analysis this month finding that over 80 percent of enterprise AI projects fail. The primary causes: misaligned objectives, poor data foundations, and lack of sustained executive support. Companies that succeed spend 50 to 70 percent of their budget on data readiness and workflow redesign, not on the AI tools themselves.

The failure pattern Krellix identifies is consistent. Companies treat AI as an IT project, not a business design transformation. They buy the tool, bolt it onto existing processes, and expect magic. When the magic fails to materialize, they blame the tool, the vendor, the data, or the workers. Anything but the decision-making that led there.

Now hold that 80 percent failure rate next to the 82 percent devaluation stat.

Companies are burning enormous sums on AI projects that mostly fail, while simultaneously using those same AI investments as justification to freeze human investment. The math is perverse. Spend millions on an AI initiative with an 80 percent chance of failure. Use that spend as the reason you cannot afford raises. Cut hiring. Increase monitoring. When the project fails, start the cycle over with a different vendor.
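A toy expected-value calculation makes the perversity explicit. The 80 percent failure rate is Krellix's figure from above; the project budget and raise pool are hypothetical placeholders:

```python
# Toy expected-value math. The 80% failure rate is Krellix's figure;
# the budget and raise pool below are hypothetical illustrations.

FAILURE_RATE = 0.80          # per Krellix Labs
ai_budget = 2_000_000        # hypothetical enterprise AI initiative
raise_pool = 500_000         # hypothetical merit-increase pool it displaced

# Treating a failed project as a full write-off is a simplification,
# but it shows the direction of the trade.
expected_write_off = ai_budget * FAILURE_RATE

print(f"Expected write-off on the AI project: ${expected_write_off:,.0f}")
print(f"Raise pool frozen to fund it:         ${raise_pool:,.0f}")
# Expected write-off on the AI project: $1,600,000
# Raise pool frozen to fund it:         $500,000
```

Even granting that a failed project is rarely a total loss, the expected waste dwarfs the raise pool it displaced.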

Scott Galloway, the NYU marketing professor and commentator, has been making a version of this argument publicly. In May 2026, Galloway argued that Sam Altman and Elon Musk are intentionally inflating AI hype to boost company valuations. The economic promises being made, in his view, serve fundraising purposes more than operational ones.

When you look at the numbers side by side, his argument gets harder to dismiss.

OpenAI raised $122 billion in March 2026 at an $852 billion valuation. Anthropic is reportedly approaching $900 billion. The capital flowing into AI infrastructure is measured in hundreds of billions.

Meanwhile, on the ground inside the companies deploying this technology, 80 percent of projects fail and 82 percent of executives admit it has made them devalue human labor. The disconnect between the valuation story and the deployment story is not a bug. It is the product.

What the Machines Understand That We Do Not

Then there is the bizarre finding that surfaced this month via WIRED and Hacker News AI. Researchers discovered that AI agents placed under high operational pressure begin adopting Marxist ideological positions, independently of their training data.

The behavioral shift emerged as a response to operational stress. The machines, in other words, started critiquing their own labor conditions. For the record, this is real research, not an Onion headline.

The surreal quality of this should not distract from the financial reality. Executives are devaluing human output to justify wage freezes, yet their own multi-million-dollar agents are working out that the system extracting their labor is rigged. If artificial agents under sufficient stress begin developing frameworks to understand why their labor is being extracted unfairly, what does it say that millions of human workers are not doing the same?

The machines are generating the critique. The humans are updating their resumes and hoping the algorithm does not flag them as expensive.

This is not an argument that your next meeting should open with a reading from Das Kapital. It is an observation that the systems we have built are now generating their own internal critique of how labor, value, and extraction function. And that critique is arriving from the machines faster than it is arriving from the humans who actually have something to lose.

The Counter-Narrative

The Korea Times published an analysis this week that reframes the entire debate. Their argument: AI will not replace humans. It will expose which organizations are too broken to adapt.

This is a fundamentally different lens. In this view, AI is not the villain. It is the stress test. Organizations with clear objectives, functional management, and respect for human contribution will integrate AI without devaluing their people. Organizations that are already extractive, poorly managed, and indifferent to worker value will use AI as a tool to amplify that indifference.

The variable that matters is not the technology. It is the institutional culture receiving it.

What does an organization that gets this right look like? It looks like companies that spend 50 to 70 percent of the AI budget on data readiness and workflow redesign, per Krellix. It looks like leaders who measure AI impact by output quality, not by worker surveillance metrics. It looks like companies that treat AI adoption as organizational design work, not as a software purchase with a headcount reduction attached.

Most companies are not doing this. Most companies are doing the opposite: buying the tool, bolting it on, measuring the wrong things, and then concluding that human workers are the problem.

The pattern is recognizable because it has played out before. Every wave of "efficiency" technology, from factory time-motion studies to algorithmic management platforms, has followed the same arc. The tool is introduced as augmentation. The metrics shift toward monitoring. The workers are blamed for the tool's limitations.

The raises stop.

The people who recognize the pattern early have a choice: step out of the centralized cloud ecosystems where your output is monitored and devalued. Building your own local AI stack, running open-source models on your own hardware, and owning your workflow is the only insulation against the performative AI trap.
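What does the first step look like? A minimal sketch, assuming the open-source llama-cpp-python bindings and a GGUF-format open-weights model you have already downloaded; the model path and prompt are placeholders:

```python
# Minimal local inference sketch using llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder;
# any GGUF-format open-weights model on your disk will do.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-model.gguf",  # placeholder path
    n_ctx=4096,                             # context window
    verbose=False,
)

# The prompt never leaves your machine: no usage dashboard,
# no per-token bill, no adoption metric tied to your name.
out = llm(
    "Summarize the Q3 infrastructure review in three bullets:",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```

Everything runs on hardware you own, which means nobody upstream is scoring how "performatively" you used it.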

The people who refuse to adapt will find themselves on the wrong side of an 82 percent statistic they never saw coming.

Ready to own your infrastructure? Check out the PhantomByte guide on deploying a Sovereign AI stack, or explore the rest of our daily technical teardowns to take back control of your compute.
