The Three Modes of AI Interaction: From Assistant to Expert
As large language models become increasingly integrated into professional workflows, we're observing fascinating patterns in how people interact with these tools. Through our interviews and observations of user behavior, we have identified three distinct modes of interaction that form a spectrum of AI engagement - each with unique characteristics, benefits, and appropriate use cases.
The Spectrum of AI Interaction
At its core, a language model functions as an information processor: words go in, and words come out. However, the modality of interaction - how users engage with and apply these outputs - reveals three distinct approaches that we've termed Assistant, Servant, and Expert modes.
Mode 1: The Assistant - Your AI Colleague
In Assistant mode, users treat the language model as a collaborative colleague. This is the familiar chatbot experience: "Go find this for me," "Show me that," "Tell me more about this topic."
This interactive approach was likely the first mode users became comfortable with, and for good reason. It enables:
Iterative refinement: Users can guide the AI through multiple rounds of questions and responses
Dynamic exploration: The ability to drill down into specific topics based on emerging insights
Contextual building: Each exchange builds upon previous interactions
The Assistant mode thrives on engagement—it's conversational, exploratory, and highly responsive to user direction.
Mode 2: The Servant - Your AI Junior
In Servant mode, users provide a comprehensive set of instructions and expect the AI to perform a task autonomously. The output is treated as raw material—a first draft that won't see the light of day without significant human refinement.
Think of it as delegating to a junior team member: you provide clear instructions, they deliver a working draft, and you take that output as the foundation for further development. This mode is characterized by:
Task-oriented interactions: Clear instructions with expected deliverables
Output as starting point: Results require human review and refinement
Minimal back-and-forth: Less iterative than Assistant mode
Mode 3: The Expert - Your AI Specialist
In Expert mode, users rely on the language model as a subject matter specialist. They ask questions expecting authoritative answers that can be trusted and potentially shared without extensive verification. This could range from complex research queries to simple requests like "Write this email based on these bullet points."
Users in Expert mode treat the AI as someone more capable than themselves in specific domains, expecting outputs that are:
Reliable and actionable: Results that can be used with minimal modification
Authoritative: Information that carries weight and can be trusted
Publication-ready: Content that may see external use
The Risk-Reward Matrix
Interesting patterns emerge when we map these modes onto a two-axis framework:
engagement on one axis (how actively the user guides and steers the AI); and
reliance on the other (how far the user trusts the output without verification).
High Engagement + Low Reliance (Assistant Mode): Based on anecdotal evidence, this quadrant produces the highest-quality outcomes. The combination of iterative guidance and human oversight maximizes both accuracy and relevance.
Low Engagement + Low Reliance (Servant Mode): This is where many agentic workflows focus - AI performs tasks autonomously, but outputs undergo significant human review before use.
Low Engagement + High Reliance (Expert Mode): This represents the highest risk scenario. While potentially offering high rewards when the AI performs well, it's also where we've seen notable failures - such as lawyers submitting legal briefs with hallucinated case citations.
High Engagement + High Reliance: This quadrant remains largely unexplored in legal practice, representing a theoretical space where users both heavily guide and fully trust AI outputs. (One model for this type of engagement is OpenAI’s Pulse feature, launched in September 2025, in which the AI proactively “thinks” about your needs.)
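As a rough illustration only (treating each axis as a simple high/low flag is our simplification, not part of any formal taxonomy), the quadrant mapping above can be sketched as a small lookup:

```python
# Illustrative sketch of the risk-reward matrix described above.
# The mode names come from the article; reducing each axis to a
# boolean is a deliberate simplification for illustration.

def interaction_mode(high_engagement: bool, high_reliance: bool) -> str:
    """Map the two axes (engagement, reliance) to a quadrant label."""
    if high_engagement and not high_reliance:
        return "Assistant"   # iterative guidance, human oversight
    if not high_engagement and not high_reliance:
        return "Servant"     # autonomous drafts, heavy human review
    if not high_engagement and high_reliance:
        return "Expert"      # trusted answers, highest risk
    return "Unexplored"      # heavily guided AND fully trusted

print(interaction_mode(high_engagement=True, high_reliance=False))  # Assistant
```

In practice the axes are continuous, of course; the point of the sketch is simply that the mode is determined by the combination of the two, not by either axis alone.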
Implications for Legal Professionals
For lawyers and legal professionals, understanding these modes is particularly crucial. The legal field demands accuracy, accountability, and adherence to professional standards - making the choice of interaction mode a strategic decision.
The Assistant mode often proves most valuable for legal work because it:
Allows maximum human expertise injection
Enables real-time fact-checking and verification
Maintains professional responsibility through active oversight
Leverages AI capabilities while preserving human judgment
The other modes have merits, but they also bring risks.
The Smart-Dumb Analogy
Perhaps the most apt description of current AI capabilities is "the smartest dumb person" (or, depending on your view of AI, "the dumbest smart person"). This paradox captures the essence of working with AI: impressive capabilities coupled with unpredictable limitations, and it is actively discussed in critical analyses of AI. Some experts suggest that LLMs are not truly intelligent but rather "anti-intelligent": they perform the function of knowing (coherence) without understanding (comprehension). The models are highly proficient at pattern-matching and at generating fluent, convincing text, but that fluency is ungrounded in memory, context, or intention, which is what makes their occasional "inexplicable errors" so dangerous.
If you had access to the world's smartest person who occasionally made inexplicable errors, how would you work with them? What would you trust them with? How would you structure your interactions to maximize productivity while minimizing risk?
These questions lie at the heart of effective AI integration.
Beyond Interacting with LLMs Alone
The future of AI interaction likely involves sophisticated combinations and integrations of:
Deterministic algorithms for rule-based processes
Human expertise for judgment and creativity
Probabilistic AI for pattern recognition and content generation
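One minimal sketch of such a combination (every name and rule here is hypothetical, chosen only to make the three layers concrete) routes a drafting task through a probabilistic step, a deterministic rule-based gate, and an explicit human-review stage:

```python
# Hypothetical sketch combining the three components listed above:
# probabilistic AI, deterministic rules, and human judgment.
# The stubbed "model" and the validation rule are placeholders,
# not a real pipeline or a real LLM call.

def probabilistic_draft(bullets: list[str]) -> str:
    """Stand-in for an LLM call that drafts prose from bullet points."""
    return "Draft: " + "; ".join(bullets)

def deterministic_check(text: str) -> bool:
    """Rule-based gate, e.g. reject drafts with an unfilled citation slot."""
    return "[CITATION NEEDED]" not in text

def pipeline(bullets: list[str]) -> str:
    draft = probabilistic_draft(bullets)       # probabilistic AI
    if not deterministic_check(draft):         # deterministic rule
        raise ValueError("draft failed rule-based validation")
    return draft + "\n[PENDING HUMAN REVIEW]"  # human judgment gate

print(pipeline(["facts reviewed with client", "filing deadline confirmed"]))
```

The design point is that each layer does what it is best at: the rule is cheap and fully auditable, the generative step supplies fluency, and nothing leaves the pipeline without a human sign-off.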
The challenge is not finding one universal mode of interaction, but rather developing the judgment to select the appropriate mode for each specific task and context.
The Path Forward
We are still in the early stages of understanding optimal AI interaction patterns. What we're observing now are the organic equilibrium points - where users naturally gravitate based on their needs, risk tolerance, and desired outcomes.
As these technologies mature, the most successful implementations will likely be those that:
Clearly define the interaction mode for each use case
Establish appropriate safeguards for each mode
Train users to recognize when to switch between modes
Build systems that can scale across different modes effectively
The key insight is that there is no one-size-fits-all approach to AI interaction. Instead, success comes from understanding the spectrum of possibilities and making informed choices about where to position each interaction based on the specific needs of the task at hand.
Understanding these three prevailing modes of interacting with LLMs - Assistant, Servant, and Expert - provides a framework for thinking strategically about AI integration, helping us harness these powerful tools while maintaining the quality, accuracy, and professional standards legal work demands.