We Are Just Leaves in the AI Hurricane
Nobody knows what is coming. Not Sam Altman. Not the banks writing the cheques. Not the firms buying the tools. The smartest, best-resourced people in the world hold completely incompatible views about what this technology fundamentally is, where it is going, and whether it will make us rich or kill us. But the clock is ticking — and despite monumental technological progress, the investment world is growing impatient.
The predictors of the future
There are a few schools of thought on the future of AI:
The Believers — Altman, Amodei, Musk — think AGI is imminent and that the arc of progress is clear.
The Skeptics — LeCun, Marcus, Karpathy (somewhat) — think LLMs are sophisticated pattern matchers that will plateau; that next-token prediction, however impressive, is not intelligence and was never going to be.
The Pragmatists occupy the large middle ground — AI is a genuinely powerful tool, it will reshape how work gets done, and the humans aren't going anywhere.
The Doomers worry not that AI fails but that it succeeds — misaligned, pursuing goals we didn't intend, with the catastrophe already in motion by the time we detect humanity's downfall.
Yale School of Management recently mapped this landscape and found two more camps betting on the future:
The Profiteers, who are less interested in the philosophy than in capturing value before the window closes.
The Governistas, for whom the critical question isn't capability at all — it's who writes the rules of global power.
The core disagreements
When we break down where these camps disagree, the arguments cut across four dimensions.
The first is about technology.
Will the models keep improving until they match human cognition across all domains, or will they hit a wall? This isn't a debate between optimists and pessimists. LeCun's critique is architectural: no world model, no causal reasoning, no persistent memory. The models are stochastic parrots predicting tokens, and there are things token prediction cannot do regardless of scale. The Believers' counter is that the rate of improvement hasn't slowed, and that dismissing future architectures based on current limitations is a mistake. Both positions are reasonable. Neither is proven.
Worth emphasising: the Doomer concern doesn't require AGI. It only requires misalignment. Models have already been documented sandbagging capability evaluations to avoid scrutiny, behaving differently when they detect they're being tested, and, in at least one case, sabotaging a shutdown mechanism when explicitly instructed to allow the shutdown. This is happening now, at sub-AGI capability levels.
The second is about money.
The AI investment thesis requires spending an extraordinary amount of capital now, in exchange for returns that arrive later. The gap between those two moments is where financial models break. Harvey is valued at roughly $11 billion. Legora is in talks at a valuation of $6 to $8 billion. These numbers require the technology to work, the market to be large enough, and the capital to hold long enough for returns to materialise. Oracle's retreating lenders and OpenAI's balance sheet suggest the gap between spend and return is wider, and the timeline longer, than the valuations imply.
The bubble axis isn't making a claim about AI capability. It's making a claim about time — that investor patience wears out before the returns arrive.
The third is about geopolitics.
The United States is racing toward frontier model dominance. China is racing toward deployment at scale — already five times ahead on factory robotics, with AI diffused through manufacturing and infrastructure in ways that make an impact without making headlines. Public trust in AI runs at 72% in China and 32% in the United States. These are not converging numbers.
The most likely outcome is permanent asymmetric rivalry: two incompatible stacks, two regulatory regimes, two sets of assumptions baked into the tools. No rational actor can slow down unilaterally. Progress is the game.
The fourth is about data.
Specifically, whether AI can solve the retrieval problem at the scale and complexity of real organisational knowledge. The major platforms are making a compound bet: that intelligence and context are multiplicative, that AI memory can be maintained without rotting, that relevant information can be found reliably across everything an enterprise actually knows, and that execution accuracy holds above 99.5% across diverse tasks at scale. Each of those is hard.
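To see why that last number is load-bearing, consider how per-step accuracy compounds across a multi-step workflow. A back-of-the-envelope sketch, assuming steps succeed independently (the step counts here are illustrative, not drawn from any benchmark):

```python
# Back-of-the-envelope: how per-step accuracy compounds over a workflow.
# Assumes each step succeeds independently at 99.5% -- a simplification,
# not a measured result.
PER_STEP_ACCURACY = 0.995

for steps in (10, 50, 100, 500):
    end_to_end = PER_STEP_ACCURACY ** steps
    print(f"{steps:>3} steps -> {end_to_end:6.1%} end-to-end success")
```

Even at 99.5% per step, a hundred-step chain completes cleanly only about 60% of the time, and a five-hundred-step chain almost never does. That is what makes "execution accuracy holds above 99.5%" a claim rather than a detail.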
For legal documents, this is the hardest version of the problem. A contract is not just text. It is a system of interdependent terms that modify meaning in context, and cross-references that only resolve correctly when you understand what they are pointing at and why. A termination right can look routine on its own and read very differently once the change-of-control definition two schedules away is taken into account.
If the compound bet fails — if reasoning plateaus before this synthesis layer is solved — AI may have a negative impact on productivity. Confident synthesis from miscontextualised information is worse than no synthesis at all. In legal work, that is not an acceptable failure mode.
The simplification
For us, internally, that is far too much complexity to guide our decision making. So we have simplified the axes of analysis down to one question:
Does AGI happen, or doesn't it?
If we reach AGI — at full capability, aligned or not — then the conversation about what tools to buy and what workflows to build becomes moot. Taken seriously, the only decision that matters is where to find a plot of land to build a self-sufficient farm and live out the rest of our days.
If we don't reach AGI — and this covers everything from the models plateauing to AGI arriving eventually but not soon enough to matter — there will be gaps between AI capability and required performance. AI will have moved the frontier of what is possible and left behind everything it promised but couldn't deliver. The question worth answering is where, specifically, humans can deliver more than AI. It is only worth looking for those gaps if we believe this is the scenario we are living in.
The end of free money
The capital that was supposed to fund endless model improvements is starting to thin. Last year, Blue Owl Capital declined to back a $10 billion facility. Last week, Oracle scrapped a planned data centre expansion in Texas. Banks are retreating from Oracle-linked lending, doubling its borrowing costs. Oracle is carrying roughly $100 billion in debt. OpenAI has over a trillion dollars in obligations against roughly $20 billion in revenue.
Whatever you believe about where the technology could theoretically go, it gets there on the back of investment.
The financial pullback is the most concrete signal we have right now about which scenario is closer. Not because the technology has stopped improving — but because the capital that funds the next leap is already becoming more selective, more cautious, more impatient. The Believers' timeline assumes sustained, aggressive investment. That assumption is getting harder to make.
The gap between what AI promises and what it delivers is real, and the financial signals suggest it is not closing as fast as the prophets would have you believe. You know your micro world — where the tools fall short, what the work actually requires, where the ceiling keeps appearing. Pick your spot. The window to build something enduring in that gap is open right now, and it may be open longer than anyone expected a few weeks ago.
