A Tale of Two Eras — The Unbundling of Legal Services
If you tried to sell technology to law firms before 2022 and your software did not perform at 100% accuracy, law firms would invariably and politely show you the door. Then Gen AI arrived with its well-known 20-30% error rate, and law firms started buying immediately.
What happened? Did law firms forget they needed to provide flawless legal work to their clients? Are clients more tolerant of errors now?
Passing the Liability Parcel?
Lawyers sell trust. AI is sold to lawyers as a tool, not as a means of guaranteeing an outcome. So, when AI is wrong, it is still the law firm's malpractice. In the same way, when a paralegal produces a piece of work, it is still the law firm's malpractice.
Firms were always holding all the liability, no matter what software or process they used to help produce the legal work. Firms are always responsible for catching errors, regardless of whether they came from junior associates, offshore contract lawyers, or now AI.
However, the insertion of AI into the workflow creates a problem that most firms haven't grasped yet — Gen AI makes the liability visible.
Paying “Verification Tax” Doesn't Scale
When a senior lawyer reviews a junior's work, they know that junior's blind spots: weak on tax implications, unfamiliar with this jurisdiction, tends to miss commercial context. They know where to look for problems.
With AI, no one knows what it knows or doesn’t know.
To verify AI output, lawyers cannot just spot check. Lawyers have to re-engage with the entire problem to know where errors might hide. Did it miss a key case? Misapply a law? Hallucinate a clause? In order to be sure, lawyers have to essentially start from square one and do the work again.
This is why "AI makes lawyers 10x faster" collapses under scrutiny. If verification requires a deep re-engagement with the problem, AI has not saved lawyers as much time as the vocal proponents might claim. The only wins are either:
where verification isn't needed, or
where the alternative was the work not getting done.
So, why are law firms accepting the increased risk from inaccuracies?
Is This Permanent or Temporary?
We know clients won’t accept error rates of 20-30% in their legal advice, so law firms will have to close this gap. The only way to close the gap is to pay the “verification tax”. This verification tax is increasing with more Gen AI use, but it is currently invisible in most firms' economics.
When lawyers review AI outputs, the cost is absorbed into the hourly billing model — lawyers may produce less original work, but they are billing the same hours because they are supervising AI-generated outputs. That is, until clients start asking law firms to make verification costs visible — itemizing and billing for AI cleanup time. When clients see they are paying for error correction on top of the AI efficiency they were promised, they might start asking: "Why am I paying you to fix your own tools?"
There are two breaking points:
when will clients ask law firms to explicitly disclose AI cleanup time?
how much AI cleanup time will clients tolerate?
So, are either of those points when law firms will decide that software has to go back to 100% accuracy?
AI Is Creating Service Unbundling
The answer may be “never”. We may never go back to the days when law firms insisted on software being 100% accurate… because software never needed to be 100% accurate.
Even without software, law firms could always have done more work, faster, at lower quality, for lower fees. They chose not to because the market expected legal services to be uniformly “premium” and “flawless”.
The increasing adoption of Gen AI is forcing us to ask the question: "How much legal work actually needs the white-glove treatment?" or “How much work can be commoditized and done by software?”
Clients have always known that a portion of legal work doesn't need premium quality. It needs acceptable quality at reasonable cost — much of that is the legal work they already keep in-house, with this balance in mind.
The verification tax makes visible what was always true: firms were bundling premium work into commodity work, and charging both at the same rate. AI unbundles this. Discovery review doesn't need the same rigor as bet-the-company litigation strategy. Standard lease reviews don't need the same attention as complex M&A. Employment handbook updates don't need the same scrutiny as securities filings.
Clients can start to make different buying decisions.
The Market Split Is Not On AI Adoption
There is a common narrative: big firms can afford to manage AI risk carefully while small firms cut corners for efficiency.
We believe that is not happening. The actual split in the market will be between firms whose economics are opaque and firms whose economics are transparent.
Some firms may choose to hide verification costs in their leverage models, blended rates, and undifferentiated billing. A partner spending 15 hours verifying AI output looks identical to a partner spending 15 hours on original strategy work. Clients cannot see the difference.
Other firms will have transparent economics. They don't hide verification time, so they work with clients to decide what's worth verifying. This forces a decision on what is important — which work actually needs a human guarantee, and which work falls into "AI output is better than nothing".
Quality is Permanently Changing
For decades, all legal work was sold as "high quality" regardless of actual stakes or complexity. Brand reputation was uniform: this firm does excellent work, period.
AI forces disaggregation that the profession resisted for years:
Which work actually needs human-generated first drafts?
Which work needs verification at all?
Which work is "good enough" if it gets done versus not happening?
The permanent change isn't AI replacing junior lawyers. It's the death of quality as an undifferentiated market signal.
Legal services are becoming vertically differentiated.
Premium human expertise for high-stakes matters.
Verified AI for standard work.
Unverified AI for truly commodity tasks.
Each with different price points, different service levels, different client expectations.
Clients will soon ask "What am I actually paying for?" And firms will need to give a better answer than "quality legal services."
Fix It Before The Phone Stops Ringing
The verification tax isn't going away. It is getting more expensive for premium work and disappearing for commoditized work.
If implemented with human-in-the-loop safeguards, AI likely won't increase risk to clients. In the interim, AI is making visible what was always buried in invoices — most legal work didn't need premium treatment or premium pricing, and risk should correlate to the work done. Clients have always made this calculation — keep work in-house, send it to a cheaper panel firm, or send it to a top-tier brand name firm. AI is just making the math more explicit.
This is going to shake up internal operations. Clients will start asking their top-tier firms why they're charging so much. Premium rates demand premium work, and AI is exposing what is and isn't premium by revealing which work justifies paying the verification tax on AI output.
The transformation law firms face is more than simply adopting tools with inherent error rates. It is disaggregating their own services based on software capability and client demands — before the work gets sent somewhere else.
Or as we have said elsewhere: firms need to figure out how to use AI properly before their clients just stop calling.