The AI bubble is a feature of neoliberal capitalism, not a bug

Viewpoint: We're facing the logical endpoint of an economic philosophy that treats knowledge production as just another asset class.

Wall Street is finally waking up to what some economists have been warning about for years: The artificial intelligence boom looks increasingly like a speculative bubble. When Michael Burry, the hedge fund manager who predicted the 2008 housing crisis, bets $1.1 billion against AI darlings like Nvidia and Palantir, investors take notice. When Bank of England officials warn of a “sudden correction” and tech CEOs themselves admit people “will overinvest and lose money,” the alarm bells grow louder.

But here’s what the bubble-watchers are missing. This isn’t a glitch in the system; it’s how the system is designed to work.

Philip Mirowski, the noted historian of economics at Notre Dame, has spent decades documenting how neoliberal ideology transformed American science into a marketplace – and turned technological speculation into a permanent feature of our economy. 

His insights suggest that the AI bubble isn’t an aberration or irrational exuberance. It’s the logical endpoint of an economic philosophy that treats markets as the ultimate information processors and scientific progress as just another asset class.

The numbers tell a troubling story. OpenAI’s valuation has soared to $500 billion despite reporting losses of roughly $12 billion in a single quarter — while projecting annual revenues around $13 billion. The company has committed to spending over $1 trillion on infrastructure through 2035, including $60 billion annually just for Oracle data center capacity. That’s nearly five times what it expects to generate in revenue.

Meanwhile, an MIT study found that 95% of companies adopting generative AI see no measurable return on investment. And perhaps most tellingly, U.S. Census Bureau data shows that AI adoption among large firms peaked in July at 13.4%, then declined to 11.7% by October — the first sustained drop since tracking began.

These aren’t the metrics of a healthy technological revolution. They’re the hallmarks of what Mirowski describes as a cycle in which short-term investor optimism substitutes for genuine innovation and sustainable business models, a pattern that operates like a Ponzi scheme: today’s valuations are sustained not by earnings but by the expectation of ever-larger investments tomorrow.

Mirowski’s broader critique helps explain why this keeps happening. In the neoliberal worldview that has dominated policy since the 1980s, the market is imagined as a near-perfect information processor — better than scientists, better than universities, certainly better than government bureaucrats. 

When this philosophy took hold in academia through policies like the Bayh-Dole Act, it fundamentally changed how we produce knowledge. Universities became focused on intellectual property and licensing revenues. Basic research gave way to whatever promised the quickest commercial payoff. Science itself became what Mirowski calls “Science-Mart.”

AI represents the apotheosis of this transformation. The technology embodies the neoliberal faith that all problems can be solved through algorithmic efficiency and market mechanisms. It promises to make everything faster, cheaper, more automated. It reduces human judgment to data processing and transforms social questions into technical ones. This is precisely why AI makes such an attractive investment bubble — it aligns perfectly with an ideology that insists markets always know best.

The circular logic is stunning. AI companies buy services from each other, creating the appearance of a robust ecosystem. Nvidia invests $100 billion in OpenAI, which then uses the funds to purchase Nvidia GPUs. Meta announces $600 billion in AI infrastructure spending through 2028. Money flows in massive circles while actual profitability lags impossibly far behind.

Proponents argue that speculative capital is the price of innovation, that patient money is required for breakthrough technologies to mature. But when the bubble bursts — most observers now say it’s a matter of when, not if — the consequences will extend far beyond shareholder losses. 

Gary Marcus, former head of Uber’s AI division, warns of potential government bailouts approaching $1 trillion. One analyst estimates the AI bubble is 17 times larger than the dot-com crash. The interconnected nature of AI investments could trigger chain reactions similar to 2008.

Yet the deeper damage may be to human knowledge itself. When venture capital and stock valuations become the primary metrics for evaluating technological progress, we lose the patience required for genuine innovation. We get technologies optimized for investor presentations rather than real-world problems. We get AI systems that “hallucinate” false information, autonomous agents that complete tasks only one-third of the time, and business models that charge far less than profitability would require — all of it propped up by faith that growth will eventually justify the hype.

This commercialization can lead to what Mirowski calls, in his book Science-Mart, a “qualitative degradation in the character of the knowledge produced.” The AI bubble offers stark evidence. Rather than carefully developing technologies we understand and can control, we have a gold-rush mentality where being first to market matters more than being right.

The optimists insist that even if there’s a bubble, the infrastructure and innovation will remain — that we’ll be left with valuable technology after the financial wreckage clears. But this misses the point. 

The bubble isn’t incidental to how we develop AI; it determines which kinds of AI get developed, who controls them and what problems they’re designed to solve. The market logic that creates bubbles also shapes which research questions get asked, which applications receive funding, and which uses of AI become dominant.

The AI boom has revealed something fundamental about how neoliberal capitalism treats knowledge production: as another domain for speculation, where the loudest promises attract the most capital, regardless of underlying substance. 

Until we recognize that speculative bubbles aren’t bugs in the system but features of treating science as a marketplace, we’ll continue lurching from one crisis to the next. The question isn’t whether this bubble will burst, but whether we’ll finally understand that the problem runs deeper than any single technology.

Hasani J. Gunn is a Harvard-trained policy analyst and government consultant in Los Angeles.