Beyond the AI Bubble

August 2025 · 4 minute read


I think the question of whether artificial intelligence constitutes a bubble is poorly posed. The bubble metaphor comes from financial history, where asset prices detach from intrinsic value and collapse once expectations revert. That metaphor presumes a static underlying substrate, an asset with relatively fixed fundamentals. In the case of AI, the substrate itself is mutating at a pace too rapid for conventional analogies to capture.

The reality is that official economic measures of AI productivity appear modest while individual adoption is massive and transformative. Studies from MIT and McKinsey report that only a minority of enterprise deployments create measurable profit, yet hundreds of millions of users have incorporated AI tools into their daily cognition. Enterprises treat AI as a capital project whose returns are legible on balance sheets, while workers treat AI as a personal prosthesis whose value is captured in time saved, errors avoided, or skills accelerated. When productivity manifests first at the cognitive margin rather than the organizational margin, it becomes invisible to official measures. This mismatch resembles the early electrification paradox, where individual usage preceded enterprise-wide efficiency gains by decades. The apparent contradiction dissolves once one separates institutional AI adoption from individual augmentation.

Ask yourself: would you give up your AI tools rather than pay a subscription fee of 20 euros per month? What about 40 euros? For most knowledge workers, the answer is no at either price. The value of AI augmentation exceeds its direct cost by an order of magnitude.
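The order-of-magnitude claim can be made concrete with a back-of-envelope calculation. The hours saved and the hourly value below are illustrative assumptions, not measured data:

```python
# Back-of-envelope consumer surplus from AI augmentation.
# All inputs except the subscription fee are assumptions for illustration.
subscription = 20.0   # euros per month (the fee from the thought experiment)
hours_saved = 10      # assumed hours saved per month by a knowledge worker
hourly_value = 30.0   # assumed value of an hour of knowledge work, in euros

gross_value = hours_saved * hourly_value  # value created for the worker
surplus = gross_value - subscription      # value the worker keeps

print(f"gross value: {gross_value:.0f} EUR/month")
print(f"surplus:     {surplus:.0f} EUR/month ({gross_value / subscription:.0f}x the fee)")
```

Even with conservative inputs, the value captured is roughly an order of magnitude above the price paid, and none of it appears in enterprise profit figures.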

The claim that “most AI investments yield zero return” obscures another structural fact. Consumer surplus from AI already exceeds direct revenue flows. What workers would pay not to lose access to their tools dwarfs their subscription fees. That surplus, while difficult to price, represents genuine welfare gains and competitive advantage. The economic narrative of AI as “yielding zero return” is therefore an artifact of measurement. It reflects an asymmetry between the legibility of costs and the illegibility of cognitive transformation.

Incremental progress is another site of misperception. Benchmarks climb slowly, percentages tick upward, demos accumulate. The week-by-week view generates the intuition of stasis. The year-by-year view produces a very different picture: tasks once considered impossible collapse in succession, and abstractions carefully designed to be out of reach are dissolved by new model generations. If the distribution of progress is heavy-tailed, then sampling at the wrong temporal scale produces the illusion of hype cycles that are in fact artifacts of our framing.
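The sampling argument can be illustrated with a toy simulation. Weekly capability gains are drawn from a heavy-tailed (Pareto) distribution; the parameters are arbitrary and not calibrated to any real benchmark:

```python
import random

random.seed(0)

# Toy model: weekly capability gains drawn from a heavy-tailed distribution.
# Most weeks contribute almost nothing; rare weeks contribute huge jumps.
def weekly_gain(alpha=1.5, scale=0.01):
    # paretovariate returns values >= 1, so gains are non-negative
    return scale * (random.paretovariate(alpha) - 1.0)

weeks = [weekly_gain() for _ in range(52 * 5)]  # five years of weekly samples

typical_week = sorted(weeks)[len(weeks) // 2]   # median weekly gain
yearly_totals = [sum(weeks[i * 52:(i + 1) * 52]) for i in range(5)]

print(f"median weekly gain: {typical_week:.4f}")
print(f"yearly totals:      {[round(t, 2) for t in yearly_totals]}")
```

The median week looks like stasis, while each yearly total is dominated by a handful of outlier weeks: the same process reads as hype at one sampling scale and as discontinuity at another.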

The more disquieting feature is not that investors might be misallocating capital but that alignment research may be misallocating attention. Every benchmark we construct as a claim of impossibility appears to have a half-life of less than two years. The danger is not overvaluation but methodological erosion. We build proofs of limitation, and then models climb past them. This dynamic is epistemically corrosive: safety research that rests on claims of incapacity becomes obsolete at the pace of capability growth. Our problem is not that markets are irrational but that our tools for understanding capability trajectories may be brittle.
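If impossibility claims really decay this way, their half-life can be estimated from how long benchmarks survive before saturation. The lifespans below are hypothetical placeholders, not real benchmark data; the sketch only shows the shape of the estimate:

```python
import math

# Hypothetical lifespans (months from release to saturation) for a set of
# "impossible" benchmarks. Illustrative numbers, not real measurements.
lifespans_months = [9, 14, 18, 22, 11, 16, 20, 13]

# If survival is roughly exponential, the median lifespan estimates the half-life.
ordered = sorted(lifespans_months)
n = len(ordered)
half_life = (ordered[n // 2 - 1] + ordered[n // 2]) / 2

# Equivalent decay rate: N(t) = N0 * exp(-lam * t), with half_life = ln(2) / lam
lam = math.log(2) / half_life

print(f"estimated half-life: {half_life:.1f} months")
print(f"implied decay rate:  {lam:.3f} per month")
```

Under these assumed inputs the estimate lands around fifteen months, inside the under-two-years window the text describes; the point is that the decay rate is measurable in principle, not that these numbers are right.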

The bubble frame is therefore misleading. It suggests we must choose between hype and fundamentals, between overvaluation and collapse. The reality is stranger. We are in a regime where capability expansion and institutional adaptation move on divergent timescales. Individual cognitive augmentation races ahead while enterprises lag. Research benchmarks collapse while regulatory discourse ossifies. Financial markets oscillate without anchoring.

The practical question for AI alignment researchers like me is what conceptual tools can replace the bubble narrative. One possibility is capability liquidity, the idea that local improvements in reasoning or perception diffuse through the system and reorganize workflows globally. Another is invisible surplus, the value captured in augmented cognition that remains outside conventional economic measurement. Both point to the same need: understanding the mismatch between where transformation is occurring and where our metrics are looking.

Instead of asking whether AI is a bubble, we should ask how to measure capability diffusion, how to predict the half-life of impossibility proofs, and how to construct epistemic tools that do not collapse as quickly as the benchmarks they rely on. The risk is that safety preparation lags behind a curve of discontinuity that refuses to flatten.