Will Clients Start Buying Legal Services Directly from an AI Chatbot Within 2 Years?

Oz Benamram shared a fascinating idea on LawPunx: law firms have approximately two years to get their knowledge management systems in order before clients begin relying on self-service legal services (https://youtu.be/7SxaDc6iCh0?si=kmyKNZVIKCE3iEil). The release of this video aligns intriguingly with the recent launch of Legora's portal, a platform through which lawyers can make their knowledge and expertise available to clients, who log on, upload documents, ask questions, and tap into a combination of AI models and lawyer-provided prompts and context to get answers to their legal questions (https://legora.com/newsroom/portal-announcement).

If Oz’s idea becomes reality, it would represent a fundamental shift in how legal services are delivered. Instead of calling up lawyers for help, clients would log into a specialized legal version of something like ChatGPT or Claude that draws on curated lawyer expertise and knowledge bases. The legal chatbot would perform tasks by tapping into appropriate sources of knowledge, all while maintaining the professional standards clients expect from legal counsel.
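To make that architecture concrete, here is a minimal sketch of the pattern in Python: retrieve firm-authored guidance relevant to a client question, then wrap it around the question as grounded context for a model. The knowledge snippets, retrieval logic, and prompt wording are all hypothetical illustrations of the idea, not Legora's (or anyone's) actual implementation.

```python
import re

# Hypothetical firm-curated knowledge base: short, lawyer-authored guidance notes.
KNOWLEDGE_BASE = [
    "Firm guidance: NDAs with confidentiality terms longer than three years warrant review.",
    "Firm guidance: in most U.S. states, employment is at-will absent a contract.",
    "Firm guidance: commercial leases usually require landlord consent before assignment.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word set; a real system would use embeddings, not keywords."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank guidance notes by naive keyword overlap with the question."""
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda s: len(tokens(s) & tokens(question)),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(question: str) -> str:
    """Constrain the model to firm guidance, with an explicit escalation path."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the firm guidance below. If the guidance does not "
        "cover the question, say so and recommend speaking to a lawyer.\n\n"
        f"Firm guidance:\n{context}\n\nClient question: {question}"
    )

print(build_prompt("Can my landlord refuse to let me assign my lease?"))
```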

Delivering the Core Function of Lawyers?

This brings us to a fundamental question we've been discussing internally: what do lawyers actually do?

At their core, lawyers help clients control and mitigate legal risk. Lawyers trade on credibility and trust. For a legal chatbot to succeed in this space, does it also need to be trustworthy? And is it even capable of being trustworthy?

Trustworthiness in AI faces two significant hurdles.

  • First, can language models be shaped to give reliable, consistent outputs? Two instances of the same question cannot be allowed to yield dramatically different answers; that erodes trust immediately. Academic research confirms that "LLM performance is currently insufficiently reliable and insufficiently predictable" for complex legal judgments. One paper, "Off-the-Shelf Large Language Models Are Unreliable Judges", found that "minor changes in how a question is phrased" can "completely reverse the model's legal conclusion", and that there was "essentially no agreement at all" when the same questions were posed to eight different open-source LLMs.

  • Second is the hallucination problem. Lawyers in multiple high-profile incidents have faced sanctions for "citing ChatGPT-invented fictional cases" in legal briefs. A study from Stanford Law, "Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools", found that professional, RAG-based AI research tools made by LexisNexis and Thomson Reuters each hallucinate more than 17% of the time. A rough harness for probing both failure modes is sketched after this list.
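Neither failure mode is hard to probe. Below is a rough testing harness, assuming the OpenAI Python SDK and an API key; the paraphrases, the model name, the yes/no extraction, and the one-case "citator" whitelist are placeholder assumptions for illustration, not a method taken from the papers cited above.

```python
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical paraphrases of one legal question; a reliable system should
# return the same conclusion for all three.
PARAPHRASES = [
    "Is a non-compete clause enforceable against an employee in California?",
    "Can a California employee be bound by a non-compete clause?",
    "Does California law allow enforcing a non-compete against an employee?",
]

# Stand-in for a real citator lookup (Shepard's, KeyCite, CourtListener, etc.).
KNOWN_CASES = {"Edwards v. Arthur Andersen LLP"}

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,        # removes sampling variance; phrasing variance remains
        messages=[{"role": "user",
                   "content": question + " Start with yes or no, then cite cases."}],
    )
    return resp.choices[0].message.content

answers = [ask(q) for q in PARAPHRASES]

# Failure mode 1: does rephrasing flip the conclusion? (Crude: leading word only.)
verdicts = {a.split()[0].strip(".,:").lower() for a in answers}
print("consistent" if len(verdicts) == 1 else f"inconsistent verdicts: {verdicts}")

# Failure mode 2: does any answer cite a case we cannot verify?
CASE_PATTERN = r"[A-Z][A-Za-z]+ v\. [A-Z][A-Za-z]+(?: [A-Z][A-Za-z]+){0,2}"
for answer in answers:
    for case in re.findall(CASE_PATTERN, answer):
        if case not in KNOWN_CASES:
            print("unverified citation:", case)
```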

Pricing AI Legal Services?

Assuming we can build trustworthy AI legal tools, the next question becomes: how do lawyers get paid?

The economics of self-service legal platforms remain uncertain. Is it like a Spotify model, where lawyers earn a royalty for their knowledge, perhaps a few dollars, maybe tens of dollars per query? Or does it follow more of a Figma-for-Law model, functioning as a professional B2B collaboration tool? We see potential for the business model to evolve in either direction, or perhaps into something entirely different (we might even see a return to the very olden days when lawyers charged by the word on paper, now reimagined as legal advice charged per token generated).
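For a sense of scale on the per-token idea, here is some back-of-the-envelope arithmetic; every number below is an assumption invented for the example, not a real price from any vendor.

```python
# Hypothetical per-token royalty for lawyer-curated knowledge (all numbers made up).
tokens_per_answer = 800        # assumed length of a typical generated answer
royalty_per_1k_tokens = 2.50   # assumed lawyer royalty in USD per 1,000 tokens
queries_per_month = 10_000     # assumed monthly query volume for one firm's content

royalty_per_answer = tokens_per_answer / 1_000 * royalty_per_1k_tokens
monthly_royalty = royalty_per_answer * queries_per_month
print(f"${royalty_per_answer:.2f} per answer -> ${monthly_royalty:,.0f} per month")
# $2.00 per answer -> $20,000 per month: squarely in "Spotify royalty" territory.
```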

If this self-service model takes hold in whatever form, what then is the point of human lawyers? Will they be relegated to the promised land of “higher-level tasks”? And what would those tasks be?

Performing What Fraction of Legal Services?

If legal knowledge can truly be captured, tapped into, and applied through chatbots, then these tools essentially become a replacement for some fraction of legal services. The key word here is "fraction". What is that fraction? Half? A third? A tenth? What will clients be willing to entrust to a chatbot?

We had a fascinating discussion a couple of months ago with Nick Abrahams at Norton Rose Fulbright, who shared a telling anecdote. When a client was quoted 10% of the usual fees for a legal matter performed by AI, the client declined, saying they would rather pay 100% for humans. While that's only one data point, it is a revealing one. Even with such a dramatic reduction in costs, the client wasn't willing to take on the additional risk.

Here's the paradox, returning to our first point in this post: lawyers' jobs are fundamentally about risk mitigation. So at what point does the risk equation shift? For which tasks and questions will AI-assisted legal services be deemed adequate?

That's a difficult question to answer, but we know the landscape is evolving.

Offering Tiered Legal Services?

Looking at the history of legal services, the profession already has tiered pricing. Tax lawyers typically charge more than other lawyers, even within top-tier law firms. BigLaw attorneys typically charge more than property conveyancing lawyers in suburbia.

Legal expertise is priced differently based on complexity and risk. Perhaps the future follows this model: chatbots and AI workflows will be charged at a fraction of the typical price, allowing clients to receive AI assistance for tasks they would not have taken to a law firm in the past. In this way, rather than losing revenue, law firms might actually gain income from work that would never have come to them otherwise.

Relatedly, this could genuinely open up access to justice by giving people a much lower price point for legal services. This is already happening for the roughly 90% of low-income Americans who lack adequate legal assistance: a field study of 202 legal aid professionals found that 90% reported increased productivity from using generative AI tools, specifically for "lower-risk applications" like document summarization, preliminary research, and translating "legalese" into more accessible formats.

What is the Path to Market?

Of course, all of this is premised on the idea that one of these legal chatbots can be built, publicized, and widely adopted.

Already:

  • Rocket Lawyer has launched "Rocket Copilot", a generative AI assistant, directly targeting small and medium-sized businesses.

  • Even more significantly, LegalZoom has announced a strategic partnership with Perplexity, embedding LegalZoom's services directly into the Perplexity platform.

Ultimately, this is a question of product-market fit: Who is the customer? How do you sell to that customer? What's the risk appetite for that customer?

For now, we're theorizing about a market that's rapidly emerging but not yet fully formed… but we can see signals. We know that a non-trivial share of conversations on ChatGPT seek legal advice or legal information, and OpenAI likely has internal data that has not been made public (https://www.nber.org/papers/w34255). The question is whether users would be willing to pay for this. Only a small fraction of ChatGPT users, reportedly around 5%, have paid accounts: there's a clear gap between what people expect and what they're willing to pay for. Would people be willing to pay for a legal specialist chatbot that taps into the expertise of lawyers, assuming all the previously discussed technical problems can be solved?

Maybe two years is the right timeframe.

Law firms that get their knowledge systems in order, that figure out how to package their expertise for AI consumption, and that can build trust with clients in this new model may find themselves at the forefront of a genuine transformation in legal services. Those that don't adapt may find their clients have already moved on to self-service solutions — or worse, find themselves unable to compete on either cost or value in an increasingly bifurcated market.

Maybe.
