The Inevitability of AI Replacement: A Question of When, Not If
Continuing the pattern of our “economic brutalist” analysis, let's forecast a possible endpoint for AI adoption at law firms… and we'll try to do so with as much objectivity as possible. Discard any prejudices you might have about the value of generative AI tools and platforms. Discard any preconceptions about the worthiness of thin wrappers or thick wrappers or anything else. Let's try to forecast the near future based on human tendencies and the anticipated capability of language models.
The Mirror Machine
Let's start with what language models are able to do, and the way that they're trained: they are extremely compelling machines that hold a mirror up to the human experience. They predict tokens. They generate text that is designed to appease the user (a recent meme essentially says it best: the dumbest person you know is being told by GPT right now that they are brilliant and correct, “You are absolutely right”).
That's the construct of the machine. It's designed to tell the user what they want to hear, to continue engagement, to feed their words back to them while simultaneously leveraging the incredible wealth of information that's compressed into its neural network. And that's essentially what the neural network is: a compression of all the information on the internet and its training material.
These machines cause the smart to get smarter and the dumb to get dumber.
The Compounding Effect of LLMs
LLMs operate essentially as force multipliers on expertise. Some interactions with LLMs compound expertise upwards, enhancing the value that is delivered; others compound downwards. The interactions that compound downwards reduce value to zero.
If we measure productivity gains from interacting with LLMs:
Negative compounding reduces productivity to zero, which is a floor. The worst output from an LLM has zero value (assuming we can ignore situations where someone acts on bad output and causes an actual negative consequence).
In contrast, positive compounding has no ceiling. It can continue without upper limit as long as there are resources enough to sustain the compounding effect: enough human capital, enough energy, and enough money.
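To make the asymmetry concrete, here is a minimal sketch of this toy model of compounding. The starting value and multipliers are hypothetical illustrations, not measurements of anything:

```python
# A toy model of compounding: each interaction multiplies the value of the
# work product. Multipliers above 1 compound upward without bound;
# multipliers below 1 decay toward the floor of zero.

def compound(initial_value: float, multiplier: float, interactions: int) -> float:
    value = initial_value
    for _ in range(interactions):
        value *= multiplier
        value = max(value, 0.0)  # worst case: output worth nothing, never negative
    return value

print(compound(100, 1.1, 20))  # expertise amplified: ~672.7
print(compound(100, 0.8, 20))  # expertise eroded: ~1.15, heading toward zero
```

The asymmetry is the whole point: the downside is bounded at zero, while the upside is bounded only by the resources feeding it.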
If lawyers can adopt AI to replace parts of their work, then what stops a consumer of legal services from adopting AI to replace the lawyer altogether?
The Replacement for Human Thinking
The Unit Economics of Human Thinking
We can imagine large language models as generating the equivalent of “human thought”. In other words, the compounded output of LLMs is productionizing human thinking. For many of the use cases where LLMs have been applied at law firms, they already function as a replacement for thoughtful contemplation, as a direct substitute for knowledge and experience. Every time a user asks “please analyze this text” or “draft an appropriate response to this”, the LLM is replacing human thinking.
If robots are a replacement for hands and craftsmanship, then language models are operating as a replacement for human brains.
While humans are still required to verify the outputs of LLMs, economically, there is a point where the value of using an LLM is greater than the value of not using an LLM. Imagine a hypothetical example:
Assume a human worker currently costs twenty dollars an hour, and that human worker can produce four documents per hour. That's a cost of five dollars per document.
Assume it costs less than five dollars of electricity and resources to produce the same document using an LLM — document being the relevant measure here because the LLMs are just spitting out words.
Using the tools of capitalism, we can create a one-to-one measure: do we use a human or do we use a machine for the means of production? In our hypothetical, the rational economic actor will use an LLM because it is cheaper than doing the work with a human (assuming the same quality of output).
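As a sanity check on the arithmetic, here is a minimal sketch of the break-even comparison. All the figures are the hypothetical numbers above (the $4.50 LLM figure is an assumed value under the "less than five dollars" condition), not real cost data:

```python
# Break-even comparison using the hypothetical numbers from the example above.
human_hourly_cost = 20.00  # dollars per hour for the human worker
docs_per_hour = 4          # documents the human produces per hour
human_cost_per_doc = human_hourly_cost / docs_per_hour  # $5.00 per document

llm_cost_per_doc = 4.50    # assumed electricity + compute per document

if llm_cost_per_doc < human_cost_per_doc:
    print(f"Machine wins: ${llm_cost_per_doc:.2f} vs ${human_cost_per_doc:.2f} per document")
else:
    print(f"Human wins: ${human_cost_per_doc:.2f} vs ${llm_cost_per_doc:.2f} per document")
```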
We are ignoring other factors such as:
(implied and other costs) the human worker's years of education and training, compared to a machine where someone may build the equivalent capability much more cheaply
(philosophical questions) “is the machine smart?”, “do we have AGI?”, and “is it genuinely intelligent or not?”
If we reduce LLMs down to a unit of economics, if we reduce AI down to a means of production, then forecasting becomes a much more straightforward exercise. We only need to ask, “at what point is it worthwhile for a machine to replace a human for a given task?”
The Uber Problem: We Are Heavily Subsidized
Before we go on, there is one thing we must note — and this is critical — we don't actually know what AI costs yet.
Right now, the cost of LLMs is heavily subsidized by venture capital. OpenAI, Anthropic, and the rest are burning through billions in funding to offer their services at prices that almost certainly don't reflect the true cost of compute, infrastructure, and model training. This is the exact same playbook used by Uber during the ride-sharing wars.
There was a time when Uber was cheaper than taxis. During this time, a lot of people said taxis were obsolete because the economics were so clearly in Uber's favor. But that was not the real price. That was venture-subsidized customer acquisition pricing. The real price emerged later, when the subsidy tap turned off. Now, Uber costs as much as or more than taxis ever did, and we realize the "economic inevitability" was partially an illusion funded by SoftBank's Vision Fund.
The AI industry is in that same customer acquisition phase right now. Companies are racing to get enterprises and individuals dependent on their platforms, to make AI feel indispensable, to shift workflows and processes to assume AI is always available at current prices. But these aren't sustainable prices. They're customer acquisition costs dressed up as product pricing.
So when we do this economic calculation — $5 of human labor versus $5 of tokens — we have to acknowledge that the $5 of AI might actually be $15 or $50 once the VC subsidy disappears and AI companies need to actually turn a profit.
This doesn't break the forecast, but it does change the timeline significantly. The economic crossover point where AI becomes genuinely cheaper than human cognitive labor might be much further out than it appears. Or it might never come for certain tasks. We won't know until the subsidy ends and we see the real unit economics.
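To see how fragile the crossover is, we can extend the earlier sketch with a hypothetical subsidy multiplier. The 3x and 10x figures are illustrative guesses that roughly correspond to the $15 and $50 scenarios above; nobody outside the AI labs knows the true multiplier:

```python
# How the break-even flips once subsidized pricing ends. The subsidy
# multipliers are illustrative guesses, not measured figures.
human_cost_per_doc = 5.00
quoted_llm_price = 4.50  # today's subsidized price per document, as above

for multiplier in (1, 3, 10):  # if the real cost is 1x, 3x, or 10x the quoted price
    true_cost = quoted_llm_price * multiplier
    verdict = "machine" if true_cost < human_cost_per_doc else "human"
    print(f"{multiplier:>2}x the quoted price: ${true_cost:>5.2f}/doc -> {verdict} is cheaper")
```

At the quoted price the machine wins; at 3x or 10x, the human does. The entire forecast turns on a number we cannot currently observe.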
And just like with Uber, by the time we find out the real costs, we might have already restructured entire industries around the assumption that AI is cheap. That's going to be an interesting reckoning.
The Three Questions Framework
With the cost subsidy caveat in mind, the forecast for when AI will replace humans comes down to three questions:
Can a machine replace a human who's currently doing that task? Assuming a human of average intelligence and competence, is AI better than one person for a specific task?
Can we accept the system constraints? What are the boundaries that you would put in place to ensure that the machine performs no worse than an average human performing the task? Do you have to design parameters around this artificial intelligence in order to make it work properly?
Is there economic incentive to scale? Once you have AI performing a task with the right parameters, functioning reliably and predictably, then is the task something you would want to scale in the first place? Is there enough demand?
If the answer to all of those is yes, then AI can explode in that arena.
The Foregone Conclusion
Applying this framework for economic replacement to legal services is an interesting exercise.
If we hold a mirror up to ourselves and the tasks that lawyers are performing, and we look at the demand from people who need legal services:
the average intelligence of a language model has already surpassed the average intelligence of a lawyer (possibly our most controversial statement in this blog post)
the outputs of models are somewhat reliable most of the time (see our earlier blog post about the 60% rule)
there is more demand than supply for legal services
Then, what happens if AI makes supply of legal services suddenly infinite?
It is no longer a matter of what will happen — the foregone conclusion is that artificial intelligence will do the majority of cognitive labor — it is a matter of when. And when that happens, the economic value is going to be captured by those people who are the future suppliers of these economic production units — the artificial lawyer brains.
The question is how soon? Or at least, that's the forecast if the economics hold.
An Afterthought: Then, There's Always Human Agency
Now here's where the neat economic model starts to break down. We have been treating humans as perfectly rational economic actors, and we demonstrably are not. The entire history of technology adoption is littered with economically superior solutions that failed because of human resistance, institutional inertia, or social preference.
The Refusal to Optimize
The portion of humanity that would be replaced first, i.e. the people performing routine cognitive work, has the most to lose and the least investment in the system. What if they simply refuse to participate?
We already see this with self-checkout machines at supermarkets. They're faster, cheaper, more efficient. Retailers have been pushing them for 20 years. And yet people still queue for human cashiers. Not because they're unaware of the alternative, but because the human assistance and social interaction matter to the buyers and consumers.
The economic model assumes that once AI becomes cheaper per unit of cognitive output, adoption is inevitable. But cheaper doesn't mean preferred. People might choose to transact with other humans, even at higher cost, because the human element is the point. For legal services, “trust” remains at the heart of the value.
Professional Gatekeeping and Regulatory Capture
Now consider the legal professionals, the credentialed class, the ones with institutional power. Legal institutions potentially have sufficient power to prevent their own replacement.
Lawyers don't just perform cognitive labor — they control the regulatory apparatus that defines what counts as legal work. Who can give legal advice? Who can appear in court? Who can certify documents? These aren't natural laws; they're political decisions made by lawyers.
Every other profession with high barriers to entry has this same dynamic. Doctors control medical licensing boards. Accountants define what counts as proper accounting. Engineers stamp their approval on designs. These aren't just technical requirements; they're moats, deliberately constructed to prevent exactly the kind of cost competition our economic model describes.
Our economic model says: "When AI is cheaper and good enough, it will replace humans". But the economic model ignores that the humans being replaced write the rules about what "good enough" means, and who is allowed to compete.
The Moving Target of Cognitive Work
Here's the third possibility: what if humans don't try to compete with AI at existing tasks, but instead constantly create new forms of cognitive work that AI can't do?
This is what happened with previous waves of automation. When spreadsheets replaced accounting clerks, accountants didn't disappear — they moved upstream to financial strategy. When CAD software replaced drafters, engineers didn't disappear — they took on more complex design work.
The economic model assumes a fixed set of cognitive tasks that AI will progressively conquer. But what if the boundary keeps moving? What if the new "smartness" is defined as the ability to do cognitive work that current AI cannot?
This creates a very different forecast: not "AI replaces cognitive workers", but "AI constantly redefines what valuable cognitive work is, and humans keep inventing new ways to be valuable".
Maybe, “It’s Complicated”
The economic framework asks: "When does AI become cheaper than human cognitive labor?"
The economic inevitability is real. But so is human resistance. And so is the possibility that the economics aren't what they seem. And the collision between these forces is going to define the next twenty years of work, value, and what it means to think for a living.