AI in the legal industry Archives | Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/ai-in-the-legal-industry/

Pattern, proof & rights: How AI is reshaping criminal justice | /en-us/posts/ai-in-courts/ai-reshapes-criminal-justice/ | Fri, 10 Apr 2026

Key insights:

      • AI’s greatest strength in criminal justice is pattern recognition — AI can process vast amounts of data quickly, helping law enforcement and legal professionals detect connections, reduce oversight gaps, and improve consistency across investigations and casework.

      • AI should strengthen justice, not substitute for human judgment — Legal professionals are integral to evaluating AI-generated outputs, especially when decisions affect evidence, warrants, and individuals’ constitutional rights.

      • The most effective model is human/AI collaboration — AI handles scale and speed, while judges, attorneys, and investigators provide the context, accountability, and ethical reasoning needed to protect due process.


The law has always been about patterns — patterns of behavior, patterns of evidence, and patterns of justice. Now, courts and law enforcement can leverage a tool powerful enough to see those patterns at a scale and a speed no human mind could match: AI.

At its core, AI works by recognizing patterns. Rather than simply matching keywords, it learns from large amounts of existing text to understand meaning and context and uses that learning to make predictions about what comes next. In the context of law enforcement, that capability is nothing short of transformative.
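To make that concrete with a simple, purely illustrative example: a model trained on millions of legal documents learns that the words “res ipsa” are almost always followed by “loquitur,” and the same next-word prediction, scaled up, lets it anticipate the shape of an entire clause, argument, or incident narrative.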

These themes were front and center in a recent webinar hosted by a joint effort of the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI). The webinar brought together voices from across the justice system, and what emerged was a clear and consistent message: AI is a powerful ally in the pursuit of justice, but only when paired with the judgment, accountability, and constitutional grounding that human professionals can provide.

AI’s pattern recognition is a gamechanger

“AI is excellent,” said Mark Cheatham, Chief of Police in Acworth, Georgia, during the webinar. “It is better than anyone else in your office at recognizing patterns. No doubt about it. It is the smartest, most capable employee that you have.”

That kind of capability, applied to the demands of modern policing, investigation, and prosecution, is a genuine gamechanger. However, the promise of AI extends far beyond the patrol car or the precinct. Indeed, it cascades through the entire arc of justice — from the moment a crime is detected all the way through prosecution and adjudication.

Each step in that chain represents not just an operational and efficiency upgrade, but an opportunity to make the system more fair, more consistent, and more protective of the rights of everyone involved.

Webinar participants considered the practical implications. For example, AI can identify and mitigate human error in decision-making, promoting greater consistency and fairness in outcomes across cases. And by automating labor-intensive tasks such as reviewing body camera footage, AI frees prosecutors and defense attorneys to focus on other aspects of their work that demand professional judgment and legal expertise.

In legal education, the potential of AI is similarly recognized. Hon. Eric DuBois of the 9th Judicial Circuit Court in Florida emphasizes its role as a tool rather than a substitute. “I encourage the law students to use AI as a starting point,” Judge DuBois explained. “But it’s not going to replace us. You’ve got to put the work in, you’ve got to put the effort in.”


AI can never replace the detective, the prosecutor, the judge, or the defense attorney; however, it can work alongside them, handling the volume and velocity of data that no human team could process alone.


Judge DuBois’ perspective aligns with broader judicial sentiment on the responsible integration of AI. In fact, one consistent theme across the webinar was the necessity of maintaining human oversight. The role of the legal professional remains central, participants stressed, because human involvement ensures accuracy, accountability, and ethical judgment. The appropriate placement of human expertise within AI-assisted processes is essential to a fair and effective legal system.

That balance between leveraging AI and preserving human judgment is not just good practice; it’s a cornerstone of justice. While Chief Cheatham praises AI’s pattern recognition, he also cautions that it “will call in sick, frequently and unexpectedly.” In other words, AI is a powerful but imperfect tool, and the professionals who rely on it must always be prepared to intervene when it falls short. Moreover, the technology is improving extremely rapidly, and the models we are using today will likely be the worst models we ever use.

Naturally, that readiness is especially critical when individuals’ rights are on the line. “A human cannot just rely on that machine,” said Joyce King, Deputy State’s Attorney for Frederick County in Maryland. “You need a warrant to open that cyber tip separately, to get human eyes on that for confirmation, that we cannot rely on the machine.” Clearly, as the webinar explained, AI does not replace constitutional obligations; rather, it operates within them, and the professionals who use AI are still the guardians of due process.

The human/AI partnership is where justice is served

Bob Rhodes, Chief Technology Officer for Thomson Reuters Special Services (TRSS), echoed that sentiment with a principle that cuts across every application of AI in the justice system. “The number one thing… is a human should always be in the loop to verify what the systems are giving them,” Rhodes said.

This is not a limitation of AI; instead, it’s the design of a system that works. AI identifies the patterns, and trained, experienced professionals evaluate them, act on them, and are accountable for them.

That partnership is where the real opportunity lives. AI can never replace the detective, the prosecutor, the judge, or the defense attorney. However, it can work alongside them, handling the volume and velocity of data that no human team could process alone. That means the humans in the room can focus on what they do best: applying judgment, upholding the law, and protecting individual rights.

For judicial and law enforcement professionals, this is the moment to lean in. The patterns are there, the technology to read them is here, and the opportunity to use both in service of rights — not against them — has never been greater.


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.

The AI Law Professor: When AI quietly hijacks legal judgment | /en-us/posts/technology/ai-law-professor-first-draft-trap/ | Wed, 08 Apr 2026

Key takeaways:

      • Anchoring distorts judgment before you begin — Research shows a first draft shapes subsequent decisions; and an AI draft is the most seductive anchor imaginable, because it looks exactly like something a lawyer would write.

      • The First Draft Trap inverts legal training — The Socratic method builds the habit of holding multiple possibilities in tension before committing; but an AI first draft collapses that space before the real thinking begins.

      • The fix is to ask for the map, not the draft — Requesting multiple strategic framings before writing keeps judgment where it belongs and uses AI to expand possibilities rather than foreclose them.


Welcome back to The AI Law Professor. Last month, I examined why promised efficiency gains often become a cycle of work intensification. This month, I want to address a subtler challenge. I call it the First Draft Trap, and understanding it may change how you reach for AI the next time a new matter lands on your desk.

We have all heard the pitch: Staring at a blank page? Just prompt the AI. In seconds you have a working draft: structured, coherent, and surprisingly competent. The blank page problem, that ancient enemy of productivity, has thus been vanquished.

Except the blank page itself was never just an obstacle; rather, it was a space of possibility. For lawyers, it was the space in which the most important part of their work actually happens. Now, with AI in the mix, that may be changing.

Welcome to the First Draft Trap.

Simply put, the First Draft Trap is this: The moment you accept an AI-generated draft as your starting point, you have already made the most consequential decision of the entire project — and you made it by not making it. You let the machine choose your direction, your framing, and your theory. Everything that follows is editing; and editing, no matter how rigorous, is not the same as thinking.

The cognitive hijack

There is solid psychology behind why this happens. Daniel Kahneman and Amos Tversky demonstrated in their landmark 1974 paper, “Judgment Under Uncertainty: Heuristics and Biases,” that once people are exposed to an initial value or idea, it becomes a mental anchor that distorts their subsequent judgments. In their experiments, subjects who watched a roulette wheel spin to a random number still let that number influence their estimates of completely unrelated quantities. The anchor held even when people knew it was meaningless.


Please join Tom Martin at the upcoming virtual conference on April 28–29. It’s completely free — two days of keynotes, panels, and workshops on AI and the legal profession.


An AI first draft is the most seductive anchor imaginable. It is not random — it is plausible, and it is well-organized. It sounds like something a lawyer would write. And that is precisely what makes it dangerous. You know intellectually that it is just one of many possible approaches to addressing the matter, but the anchor holds anyway.

That is the First Draft Trap at the cognitive level. The AI draft is not just one option you happen to prefer. It is a filter that prevents you from seeing the other options that were available to you, the roads you never even noticed that you did not take.

Consider what this means for a profession built on the opposite instinct. From the first day of law school, lawyers are trained to resist the obvious answer and to think like a lawyer. The Socratic method exists for exactly this reason. A good professor hears your confident response and asks: What else? What if the facts were different? What is the argument on the other side? The goal is not to arrive at an answer, per se. It is to build the mental habit of holding multiple possibilities in tension before committing to any one of them.

The First Draft Trap is the anti-Socratic method. It delivers a confident answer before you have even formulated the question properly — and instead of interrogating it, you polish it.

The value of the blank page

Think about what a senior partner actually does when a junior associate brings them a memo. The partner’s value is not better writing; rather, it is peripheral vision: the ability to see what the memo does not address, the argument not considered, or the framing that would land differently with this particular judge or this particular jury. That capacity to see beyond the document in front of them is why clients pay senior partners premium rates. And it is precisely the muscle that atrophies when your default workflow begins with the prompt “generate a draft.”


The AI draft is not just one option you happen to prefer. It is a filter that prevents you from seeing the other options that were available to you, the roads you never even noticed that you did not take.


The two-system framework popularized by Kahneman gives us a clean way to describe what is going wrong. System 1 is fast, intuitive, and pattern-matching; System 2 is slow, deliberate, and analytical. The practice of law, at its best, is a System 2 discipline. We, as lawyers, are trained to override gut reactions, challenge assumptions, and think through consequences before acting.

In this way, the AI first draft feels like a System 2 output. It is structured, footnoted, and methodical. However, your decision to accept it as a starting point is pure System 1 — a fast, intuitive grab at the nearest plausible answer. You have used a sophisticated tool to bypass the sophisticated thinking the tool was supposed to support. That uncomfortable period of ambiguity, of not knowing which path is best, is where the real lawyering lives.

What to do instead

None of this means stop using AI. It means stop using AI to skip the hard part that matters.

Before you ever ask for a draft, ask for the map. Describe the matter or document you are working on, then ask the AI for three fundamentally different strategic framings for the problem. For each framing, request the strongest argument in its favor and its most serious vulnerability. Then ask which framing best fits the client’s goals, the audience, or the procedural posture. Close with a clear instruction: Do not write a draft yet.
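For illustration only, such a prompt might read something like this (the matter details are placeholders): “Here is the matter: [summary of facts and procedural posture]. Before writing anything, give me three fundamentally different strategic framings of this problem. For each framing, state the strongest argument in its favor and its most serious vulnerability. Then tell me which framing best fits a risk-averse client facing this procedural posture. Do not write a draft yet.”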

That last instruction is the key. It keeps you in the driver’s seat during the phase that matters most. You are using AI to expand the possibilities before you prune them, not after. And, most importantly, it gives you the opportunity to think for yourself about other important possibilities and add them in.

In Kahneman’s terms, use AI to fuel System 2, not to hand the controls to System 1. Let the machine generate options, and you exercise judgment.

For lawyers, the ability to see what is not there is the whole game.

Do not let the first draft blind you to it.


Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and author of a forthcoming book. He is “The AI Law Professor” and writes this eponymous column for the Thomson Reuters Institute.

Relationship-building and AI fluency key to closing visibility gap, new report shows | /en-us/posts/corporates/closing-ai-visibility-gap/ | Mon, 06 Apr 2026

Key insights:

      • A significant visibility gap persists between legal departments and the C‑Suite — Most general counsel believe their legal department contributes strategically, yet senior executives often fail to see or understand that value.

      • Strong internal relationship‑building is critical (and often underdeveloped) — This capability enables legal teams to spot risks earlier, stay embedded in decision‑making, and make their work more visible across the business.

      • Closing the gap requires communicating legal’s value and increasing true AI fluency — For legal teams to be seen as proactive, strategic partners rather than task executors, communication and strong AI fluency are essential.


General counsel (GCs) have spent years doing more with less, tightening their legal spend, and aligning the law department’s priorities with the wider business. And yet, despite all of this effort, a striking visibility gap persists. While 86% of GCs believe their department is a significant contributor to overall organizational objectives, only 17% of the C-Suite agrees, according to a recent report from the Thomson Reuters Institute based on more than 2,300 interviews with corporate general counsel. Meanwhile, 42% of C-Suite executives say the legal function contributes little or not at all to company performance.

The challenge for GCs is whether their staff have the skills and capabilities to make their work visible, relevant, and understood by the business at large. To address this perception gap in 2026, every GC needs to prioritize building richer internal relationships with business leads, moving from task-based to outcome-focused messaging, and improving the team’s collective AI fluency.

Empower teams to build internal relationships

Nearly half of all GCs surveyed for the report cited staffing and resource constraints as the top barrier to delivering additional value, a concern that has remained stubbornly consistent for years. Beyond headcount, the report underscores that the deeper challenge facing legal departments is relational.

Internal relationship-building is one of the most critical and underrated people skills in a legal department’s collective skill set. Indeed, 68% of GCs rate internal dialogue as their most valuable source of information about emerging risks. In fact, the most successful GCs use a deliberate combination of formal and informal methods to build connections with the internal business units that they serve.


You can learn more about how to assess your legal department’s strategic positioning with the Thomson Reuters Institute’s Value Alignment toolkit, here


Some run structured weekly face-to-face sessions with business departments, complete with schedules, plans, and frameworks. Others rely on walking the halls, open-door policies, and ad-hoc conversations that keep the corporate law department visible and accessible on a human level.

The report offers a five-dimensional framework to help GCs audit where, with whom, and how often legal is in dialogue with other parts of the business.


Use communication tactics that focus on business outcomes

Even when legal departments are doing excellent work, they often describe it in the wrong language. Many in-house lawyers categorize their contributions in task-based terms — such as “We support M&A” or “We analyze contracts” — rather than in value-creating terms.

Some in-house legal leaders have progressed to stakeholder-level framing, such as, “We protect the company from competitive threats” or “We support new business opportunities.” Still, neither of these levels truly communicates value to a C-Suite audience, the report shows.

To effectively align the law department’s priorities with business goals, in-house attorneys need to develop the skill of communicating through a business lens. For example, one GC states that the primary goal of the law department is to “find the fastest and most compliant way for the sales department to sell products.” This response reframes the legal function’s activities as much more business fluent and value-added.

Legal teams are not always good at touting their accomplishments, however, and this is a challenge when a lot of the work can be categorized as invisible. For example, when protecting the company is done right, threats are eliminated before they occur and no one notices. When efficiency is unlocked through process improvement, the C-Suite only sees the outcome if someone connects the dots explicitly. This is why surfacing invisible value is now a business imperative for corporate law departments.

Advancing from AI literacy to AI fluency

The most significant skills challenge facing legal departments in 2026 is how to best use AI strategically. Mentions of AI as a strategic priority among GCs have doubled in the past year, according to the report. In fact, almost half of all GCs now reference AI in their survey interviews. Yet the report draws a sharp distinction between being AI literate and being AI fluent, with most departments being the former but not the latter.

To close that gap, the report recommends a six-layer model covering learning, empowerment, ownership, accountability, usage, and expectations.


At its core, the model asks GCs to start with open encouragement and access to AI tools to build momentum, then shift toward more formal expectations around adoption to make AI use a daily habit.


You can download a full copy of the Thomson Reuters Institute’s report here

Honing legal judgment: The AI era requires changes to how lawyers are trained during and after law school | /en-us/posts/legal/honing-legal-judgment-training-lawyers/ | Thu, 02 Apr 2026

Key takeaways:

      • AI threatens traditional lawyer development — As AI automates entry-level legal tasks like research and writing that historically have honed legal judgment skills, the profession faces a crisis in how new lawyers will develop such judgment abilities.

      • The profession can’t agree on what constitutes “legal judgment” — Unlike other professions, there is no agreed-upon definition of legal judgment or clear standards for when AI should be used.

      • Implementation requires unprecedented coordination and funding — A proposed legal education fund would require an assessment of a small percentage of legal services revenue and coordinated action across law schools, legal employers, and state regulators.


This is the second of a two-part blog series that looks at how lawyer training needs to evolve in the age of AI. The first part of this series looked at how lawyers can keep their skills relevant amid AI utilization.

The key skills that comprise legal judgment have received mixed reviews, according to a recent white paper from the Thomson Reuters Institute that advocated for cultivating practice-ready lawyers. The white paper was based on feedback from thousands of experienced lawyers, judges, and law students and raises questions about how legal judgment forms when AI assistance is used for task completion.

Legal market analyst Jordan Furlong notes that the white paper calls for action “… to accelerate the development of legal judgment early in lawyers’ careers.”

The challenge is that each part of the profession — law schools, employers, state supreme courts (as regulators) — has distinctly separate responsibilities. That means that, in the age of AI, coordination across the entire legal profession is needed, especially as AI reduces the availability of traditional first jobs.

Furlong points out that there is no consensus on what legal judgment is, nor any agreed-upon standards for when AI should be used in legal work. To bring clarity to these issues, the white paper proposed a profession-wide model that integrates three critical elements: i) work-based learning that’s modeled on medical residencies; ii) micro-skill decomposition of legal judgment; and iii) AI-as-thinking-partner throughout pedagogy.

Three pillars for an AI-era lawyer formation system

Not surprisingly, overreliance on AI can erode critical analysis and solid legal judgment skills. Addressing these concerns requires a comprehensive reimagining of how lawyers are educated and trained. One solution lies in three interconnected pillars that together form a cohesive system for developing legal judgment in an AI-integrated world.

Pillar 1: Integrate work experience into legal education

Core skills such as legal research, writing, and document review help develop legal judgment; yet these skills could collapse once AI assumes such tasks. The Brookings Institution recently proposed ways to preserve entry-level professional development in an AI era. This parallels the TRI white paper’s call for mandatory supervised postgraduate practice as a key part of legal licensure.

While implementing a full residency model presents challenges, several law schools have already pioneered approaches that demonstrate the viability of work-integrated legal education that, if scaled appropriately, could improve new lawyer practice and judgment skills. For example, Northeastern Law School guarantees all students nearly a year of full-time work experience before graduation through four quarter-length legal positions. The program integrates supervised practice into the curriculum so graduates can gain substantial hands-on experience alongside their classroom instruction.

Also, another program offers an alternative pathway to bar admission through practice-based assessment rather than the traditional bar exam. The program demonstrates that competency can be evaluated through supervised experiential learning.

Pillar 2: Decompose legal judgment into teachable micro-skills

The legal profession needs to come to a common definition of legal judgment and develop its components to teach the concept effectively. “We can’t teach what we can’t describe,” Furlong says. To develop legal judgment, the profession must define its components, including:

      • Pattern recognition — The ability to identify when different fact patterns are related to similar legal frameworks and distinguish when superficially similar cases are legally distinct.
      • Strategic calibration and proportionality — This means understanding what level of effort, precision, and risk each matter requires and matching responses to the stakes involved.
      • Reasoning through uncertainty — This is the capacity to make defensible decisions and provide sound counsel even when the law is ambiguous, unsettled, or silent on an issue.
      • Source evaluation and authority weighting — This includes knowing which legal authorities are most suitable and being able to assess their persuasive value.
      • Ethical judgment under pressure — This means spotting conflicts, confidentiality issues, and duty-of-candor moments while maintaining competence and knowing when to escalate beyond expertise.

Breaking down legal judgment into these discrete components makes it possible to design targeted teaching interventions. For example, one former law professor who now leads a legal education organization suggests we back into AI-assisted workflows by requiring a short verification log (detailing sources checked, changes made, and why); running attack-the-draft drills (find missing authority, weak inferences, and jurisdictional mismatches); and preserving slow work as formative work (citation chaining, updating, and adversarial research memos).
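By way of a hypothetical illustration, a single verification-log entry under this approach might read: “Checked Smith v. Jones, which the AI cited for the duty-of-care standard; the case actually addresses a different jurisdiction’s standard, so I replaced it with controlling in-state authority and noted the mismatch.” The names here are invented; the point is that every entry records a source checked, the change made, and the reason why.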

With judgment skills clearly defined and work experience integrated into training, the profession must then tackle how AI itself should be incorporated into lawyer development.

Pillar 3: AI-as-thinking-partner throughout a lawyer’s career

Warnings about overreliance on AI are mounting. The legal profession must provide clear standards for when and how AI should be used, along with training in verification and judgment skills. Overreliance on AI could compromise lawyers’ capacity to fulfill their fiduciary duties to clients.

A phased approach in the introduction of AI in legal work helps protect critical thinking while building AI competency. For example, in Year 1, law students could complete core legal reasoning exercises without AI assistance in order to better develop their analytical muscles. In Year 2, students use AI as a research assistant with mandatory verification protocols that teach students to check outputs against authoritative sources. Finally, in Year 3, residencies can immerse students in real-world AI workflows under proper supervision and while providing feedback.

These three pillars form a coherent vision for lawyer formation in the AI era. However, the most well-designed system faces the obstacle of funding.

The challenge of who pays

Perhaps the most difficult part of any overhaul is the cost. The medical residency model works because the federal government heavily subsidizes graduate medical education — up to $15 billion-plus annually — for teaching young medical students to be doctors. Legal education has no equivalent. Without addressing funding, however, even the best reforms will fail.

One idea is to establish a legal education fund that’s supported by an assessment of a small percentage of the legal industry’s gross legal services revenue (while exempting solo practitioners and firms with less than $500,000 in annual revenue). These funds could be used to subsidize thousands of supervised residency placements, fund law school curriculum development, support bar exam alternative assessments, and provide employer training and supervision stipends.


The challenge is that each part of the profession — law schools, employers, state supreme courts — has distinctly separate responsibilities, and that means coordination across the entire legal profession is needed.


This proposal, of course, would require unprecedented coordination and financial commitment from the legal profession. Skeptics might argue that market forces can solve this problem, or that firms will simply create new training pathways, or that AI will prove less disruptive than feared. However, waiting for market forces risks a lost generation of lawyers. The medical profession learned this lesson when the medical industry’s voluntary reform failed. Only later did coordinated regulatory intervention produce the consistent quality standards the medical industry sees now.

What is clear is that inaction is resulting in degradation of lawyering skills. “Maybe… we need catastrophic external intervention to bring about the wholesale changes we can’t manage from the inside,” Furlong suggests.

However, the question is whether the legal profession will wait for a crisis to force change or act proactively to make the needed changes now, before the crisis hits.


You can learn more about the impact of AI on professional services organizations at TRI’s upcoming 2026 Future of AI & Technology Forum here

The 4 Plates: Are you measuring the real value of AI in your legal department? | /en-us/posts/corporates/4-plates-measuring-efficiency/ | Wed, 01 Apr 2026

Key takeaways:

      • Efficiency is a means, not an end — Gains from AI only count when you can show what they enabled: better advice, stronger protection, smarter business support.

      • Narrow measurement invites cuts — Legal departments that measure AI value only through cost savings are telling C-Suites that legal costs less, thereby inviting budget and headcount reductions.

      • Measure across all four plates — A framework that captures effectiveness, risk, and enablement alongside efficiency is what shifts perception of the legal department from cost center to strategic asset.


Your legal department has invested in AI tools, adoption is growing, your team is saving time on routine work and, by most accounts, work operations are running faster. Then your CFO asks a simple question: What has AI delivered for the legal department?

If your answer centers on hours saved and cost reduced, you are not alone. However, you may be leaving your most important value story untold. And in a climate in which legal departments are under more scrutiny than ever to demonstrate the full return on their AI investment, that gap matters.

This is the fourth and final part of our series on the “Four Spinning Plates” model, which frames the GC’s evolving responsibilities as:

      1. delivering effective advice
      2. operating efficiently
      3. protecting the business, and
      4. enabling strategic ambitions.

This article focuses on the Efficient plate and specifically on the risk of letting it do too much of the talking.


The Efficient plate under pressure

For a GC, making the best use of what are often limited resources is a constant pressure. The Efficient plate sits alongside, not above, the other three and must always be kept spinning. Right now, however, for many in-house legal teams the Efficient plate is receiving disproportionate attention, and for understandable reasons.

AI adoption in corporate legal departments is accelerating quickly. According to the Thomson Reuters Institute’s AI in Professional Services Report 2026, nearly half (47%) of corporate legal respondents surveyed said their department has already integrated generative AI (GenAI) into their work — more than double the figure from the previous year. A further 18% reported that they’re already using agentic AI, with more than half expecting agentic AI to be central to their workflow within the next two years.

GCs are genuinely excited about what this makes possible. As one GC said in the survey that underpinned the AI in Professional Services Report: “It presents the promise of getting out of low-value work and into higher-value work that supports the business.” Another described their vision of a legal department that is “boldly digital-first, relentlessly innovative, and tightly woven into business priorities.”

Clearly, the opportunity is real, but so is the risk of measuring it badly.

The measurement trap

Our 2026 research found that only one-quarter of legal departments are currently measuring the ROI of their AI tools. That alone is striking given the pace of adoption, but the follow-up finding is where the real problem lies — of those departments that are measuring ROI, 80% are tracking it in terms of internal cost savings.

Reducing external spend, automating high-volume processes, and bringing more work in-house are all legitimate efficiency gains and worth reporting, of course. However, when cost reduction becomes the only story being told, two things can happen. Your C-Suite learns to associate your department’s value with how little it costs, a frame that is very difficult to escape once it’s established. And the wider value that efficiency enables in terms of sharper risk identification, faster business support, and higher-quality advice goes unmeasured and therefore unrecognized.


If your metrics only capture time saved and cost reduced, and not what that freed-up capacity actually delivered, you are measuring the means and ignoring the end.


Think about what GCs themselves say they want from AI. As several GCs said in the survey, they’re hoping AI will provide them with “better output on more meaningful tasks,” “proactive, strategic insight,” and a way of “getting out of low-value work.” These are not efficiency outcomes, per se; rather, they are effectiveness, protection, and enablement outcomes, made possible by improved efficiency.

So, if your metrics only capture the input (time saved, cost reduced) and not what that freed-up capacity actually delivered, you are measuring the means and ignoring the end. This is the efficiency trap — measuring the plate so narrowly that it starts to work against you.

Reframing how you measure efficiency

Measuring efficiency well does not mean measuring it more. It means measuring it differently, and always in relation to the business you support. A few principles worth applying include:

Present spend in a business context — Legal spend as a percentage of company revenue tells a more credible story than a raw cost figure. It scales with the business and can be benchmarked meaningfully against peers.
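A quick illustration with hypothetical numbers: reporting “$4 million in legal spend” invites only the question of how to cut it, whereas reporting “legal spend of 0.5% of our $800 million revenue, against a peer benchmark” tells the C-Suite the department is both lean and comparable. (The figures are invented for illustration; the benchmark framing is the point.)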

Show what technology investment actually delivered — Time saved through automation is a useful starting point, but the stronger case is what the team did with that time. Tracking the shift from routine to strategic work over a period of time is a far more compelling ROI story.

Connect efficiency gains to business outcomes — An efficiency gain that enabled a faster product launch, prevented a compliance risk, or improved stakeholder satisfaction has a value that no cost metric will capture. Build those connections explicitly into how you report the value of the legal department to the C-Suite.

New resources to help

To support GCs in getting this right, the Thomson Reuters Institute has added two new resources to its Value Alignment Toolkit that directly address this measurement gap.

The Metrics Library brings together more than 100 metrics organized across all four spinning plates. It is a practical starting point for GCs to browse, select, and adapt to the specific goals of their departments, making it easier to build a measurement framework that reflects everything departments do, not just the part that appears in a budget line.

The AI Success Metrics guide addresses the AI measurement gap head-on with a best practice guide and a hands-on worksheet designed specifically for legal departments navigating AI adoption and asking: How do we actually know whether this is working? It looks beyond cost savings to capture the fuller picture of AI value including quality, capacity, strategic contribution, and risk.

Getting the balance right

In today’s environment, every GC needs to consider their answer when their C-Suite asks what the legal department delivers. Are your department’s metrics giving them the full answer or just the part that’s easiest to count?

Efficiency is not the enemy of strategic value. A department that runs well, uses its resources wisely, and embraces technology thoughtfully can in turn create the conditions for everything else the business needs from its legal function. However, that case only lands if your metrics measure across all four plates, not just one.


You can explore the new Metrics Library and AI Success Metrics guide, along with the full Thomson Reuters Institute’s Value Alignment toolkit, here

Helping the legal profession get AI-ready: A new advisory board takes shape | /en-us/posts/legal/ai-advisory-board/ | Thu, 26 Mar 2026

Key insights:

      • AI is already reshaping the legal profession — AI is already embedded in lawyers’ day-to-day legal work, with a significant share of both law firm attorneys and in-house legal teams actively using GenAI tools and many expecting it to become central to their work within the next five years.

      • AIFLP Advisory Board was formed to prepare lawyers for an AI-reshaped profession — TRI convened 21 respected leaders from legal education, private practice, the judiciary, and AI ethics and governance to help ensure lawyers and law students are prepared for a profession reshaped by AI.

      • Human judgment remains central in an AI-enabled legal future — Becoming AI ready is not simply about learning to use new tools; the Advisory Board emphasizes that strengthening irreplaceable human capabilities is critical.


In today’s tech-driven environment, AI is no longer a future concept for the legal profession — it’s already here, and it’s changing how lawyers work, learn, and serve clients. Recognizing just how fast the evolution is moving, the Thomson Reuters Institute (TRI) has launched the AI and the Future of Legal Practice (AIFLP) Advisory Board, bringing together a group of respected leaders from across the legal ecosystem to help guide what comes next.

The board includes 21 accomplished voices from legal education, private practice, the judiciary, and AI ethics and governance. Their shared goal is simple but ambitious: Help ensure that both today’s lawyers and tomorrow’s law students are prepared for a profession being reshaped by AI.

Why now?

Because the shift is already underway. According to TRI’s recent 2026 AI in Professional Services Report, 41% of law firm attorneys say their organizations are already using some form of generative AI (GenAI); and nearly half of those at corporate legal departments report that AI tools are being rolled out there too. Even more telling, most professionals said they expect GenAI to become central to their day‑to‑day work within the next five years.

That pace of change raises big questions about competence, ethics, education, risk, and access to justice. And those questions don’t have easy answers.

What the Advisory Board will focus on

The AIFLP Advisory Board is designed to tackle those challenges head‑on. Its work will center on four key areas that are already under pressure as AI adoption accelerates:

      • Legal education and talent development
      • Ethics, professional competence, and accountability
      • Governance, risk management, and client counseling
      • Access to justice and modern service delivery

The Advisory Board’s early focus areas will look at how AI is actually changing legal practice today, what future‑ready lawyers really need to know, and how legal education and real‑world practice can better align. The emphasis is not just on using AI tools, but on strengthening the human skills that matter most, such as sound judgment, critical thinking, and careful verification of AI‑generated outputs.

Shaping the future, not reacting to it

Citing the critical need for this Advisory Board’s creation, Mike Abbott, Head of the Thomson Reuters Institute, notes that the legal profession is at a crossroads, and it can either react to AI‑driven disruption or take an active role in shaping how these technologies are used to support lawyers, courts, and the public.

“By assembling a board of distinguished leaders, our goal is to help practicing lawyers and the lawyers of the future navigate a rapidly evolving landscape,” Abbott said. “Ensuring that legal education strengthens irreplaceable skills such as critical thinking, human judgment and effective communication helps make AI use safe and effective. The Board’s efforts will ultimately help shape a future-ready profession, leading to better outcomes for all.”

Meet the AIFLP Advisory Board Members

By convening experienced leaders from across the profession, TRI hopes to help lawyers navigate this landscape with confidence. Advisory Board Members include:

      • Michael Abbott, Head of the Thomson Reuters Institute
      • Soledad Atienza, Dean, IE Law School
      • The Honorable Jennifer D. Bailey, (Ret.), Partner, Bass Law
      • Benjamin Barros, Dean, Stetson University College of Law
      • Professor Sara J. Berman, University of Southern California, Gould School of Law
      • Megan Carpenter, Dean Emeritus, University of New Hampshire Franklin Pierce School of Law
      • Ronald S. Flagg, President, Legal Services Corporation
      • Donna Haddad, AI Ethics and Governance expert, and founding member, IBM AI Ethics Board
      • Johanna Kalb, Dean and Professor of Law, University of San Francisco School of Law
      • The Honorable Nelly Khouzam, Florida Second District Court of Appeal
      • The Honorable William Koch, Dean, Nashville School of Law, and former Tennessee Supreme Court Justice
      • Sheldon Krantz, retired partner, DLA Piper, and a founder, DC Affordable Law Firm
      • Stefanie A. Lindquist, Dean, School of Law, Washington University in St. Louis
      • The Honorable Mark Martin, Founding Dean and Professor of Law, Kenneth F. Kahn School of Law at High Point University, and former Chief Justice, Supreme Court of North Carolina
      • Caitlin (Cat) Moon, Professor of the Practice and founding co-director, Vanderbilt AI Law Lab, Vanderbilt Law School
      • Hari Osofsky, Myra and James Bradwell Professor and former Dean, Northwestern Pritzker School of Law; Founding Director, Northwestern University Energy Innovation Lab; and Founding Director, Rule of Law Global Academic Partnership
      • Joanna Penn, Chief Transformation Officer, Husch Blackwell
      • The Honorable Morris Silberman, Florida Second District Court of Appeal
      • The Honorable Samuel A. Thumma, Arizona Court of Appeals, Division One
      • Mark Wasserman, Partner and CEO Emeritus, Eversheds Sutherland
      • Donna E. Young, Founding Dean, Lincoln Alexander School of Law, Toronto Metropolitan University

What’s next?

The Advisory Board held its first meeting in February and will meet quarterly going forward. As the work progresses, TRI plans to publish research findings, best practices, and practical recommendations for legal educators, law firms, and courts.

In a profession built on precedent and careful reasoning, the rise of AI presents both opportunity and responsibility. The AIFLP Advisory Board is an effort to make sure the legal community meets that moment thoughtfully and on its own terms.


You can learn more about the impact of advanced technology on the legal profession here

Honing legal judgment: How professional acumen & fiduciary care can keep lawyers relevant in the age of AI | /en-us/posts/legal/honing-legal-judgment-keeping-lawyers-relevant/ | Wed, 25 Mar 2026

Key highlights:

      • Lawyers excel at semantic legal work while AI excels in syntactic tasks — Syntactic work (document generation, pattern recognition) is where AI excels, but semantic work involving exercising independent judgment, reflecting on consequences, and fulfilling fiduciary duties remains uniquely human.

      • Fiduciary duty as the core of legal relevance — What distinguishes lawyers isn’t just what they do, but how and why they do it. The fiduciary relationship demands human understanding of context, balances competing interests, recognizes unstated concerns, and exercises discretion.

      • 5 hours to deepen or diminish — The five hours lawyers are expected to gain each week by using AI can either accelerate professional obsolescence or deepen lawyers’ relevance, depending on what they do with it.


This is the first of a two-part blog series that looks at how lawyers can keep their skills relevant in the age of AI.

Lawyers expect to gain a full five hours per week of worktime due to the efficiency derived from AI use, according to the Thomson Reuters 2025 Future of Professionals Report. Yet the fear of job loss among lawyers is rising: those viewing AI as a threat or somewhat of a threat grew to almost two-thirds (65%) of those surveyed, according to the Thomson Reuters Institute’s 2026 AI in Professional Services Report.

Many in the legal profession are asking how lawyers are uniquely valuable at a time when machines can process legal information faster and cheaper. The answer lies in understanding the difference between what AI does in processing legal information and what humans do in exercising legal judgment, says Kevin Lee, Founding Director of the Institute for AI & Democratic Governance.

Defining 2 levels of legal work

Understanding what makes lawyers particularly meaningful in this current AI moment requires distinguishing between two different levels of legal work, in an environment in which AI-enabled information systems are compressing humanity and legal judgment into data points and draining away the storytelling and moral nuance that ground both. According to Lee, these two levels are the syntactic and the semantic:

      • Syntactic — Lawyers process information, generate documents, and recognize patterns at the syntactic level, meaning those tasks in which AI excels and delivers promised efficiency gains. “The danger is that we will use this efficiency merely to generate more syntactic volume,” Lee explains, adding that this will result in faster processing of more documents at greater speeds. “If we do that, we will have automated ourselves out of a profession.”
      • Semantic — The semantic aspect of lawyering highlights the irreducible skills of the legal practice, which include exercising independent legal judgment, reflecting on consequences, demonstrating care for clients, and fulfilling fiduciary duties.

This distinction between the syntactic and the semantic is inherent in how the practice of law is defined, Lee says, pointing out that many jurisdictions distinguish between “providing legal information” (not practicing law) and “exercising independent legal judgment” (the essence of legal practice).

He also rightly contends that the existential risk facing lawyers is not in AI completing legal tasks, but rather the temptation to reduce lawyers’ role to verifying machine output and processing legal information. Conflating these two concepts is a challenge for the legal profession and requires increasing the appreciation for the craft of legal reasoning and judgment.

Kevin Lee, Founding Director of the Institute for AI & Democratic Governance

Making this more difficult is that the current information age complicates this picture by challenging society’s assumptions about reality, consciousness, and the moral meaning of human life — all at an exponential rate, Lee says. Similarly, AI and information systems threaten to reduce everything, including human beings and law itself, to processable data by stripping away the narratives and meanings that define humanity, he adds.

Semantic qualities of legal judgment

The question of what makes lawyers especially relevant in the AI era is mainly answered in how and why they do what they do, rather than in what they do. For example, Lee points to skills around executing their fiduciary duty and ensuring legitimacy and meaning as key characteristics of lawyers’ semantic qualities.

Fiduciary duty — When a client seeks legal counsel, it’s legal judgment — not information processing — that the client wants. Lawyers, as part of their fiduciary duty to their clients, demonstrate human and legal understanding of the unique context of each case and the consequences of various legal paths forward. This bond of trust between attorney and client demands reflection, consideration, care, and proper purpose.

The fiduciary duty of the lawyer to the client requires balancing competing interests, recognizing unstated concerns, and exercising discretion in ways that honor both the letter and spirit of the law. At the heart of this balance is legal reasoning and professional judgment, which often involves navigating the critical gap between legal rules as written and their meaningful application to human circumstances.

Legitimacy and meaning — Beyond the fiduciary duty of care exercised in individual client relationships, lawyers serve a broader purpose in their role to safeguard law’s connection to the narratives of justice and human dignity that legitimize its authority. Indeed, lawyers maintain the connection between law and its humanistic foundations; the narratives that give legal authority its legitimacy depend on this connection. “The artwork that one associates with the law (in law schools and courtrooms) connects actions and legal judgment of attorneys to the mythic meaning of justice, equality, and the rule of law,” Lee explains.

How to deepen appreciation for the special relevance of lawyers

The five hours that lawyers said they expect to gain each week through AI-driven efficiency represent a choice point for the profession. These hours can either accelerate lawyers’ obsolescence or deepen their relevance. To ensure the latter, Lee advises lawyers and legal institutions to examine ways to put those hours to good use by, for example:

Collaborating on apprenticeships — Bar associations, practicing lawyers, legal service providers, and law schools should consider apprenticeship models that teach professional norms and values through mentorship, allowing law students to learn the craft of legal reasoning through guided practice.

Recommitting more fully to legal service — Law firms and in-house counsel must reclaim humanistic awareness as central to their professional identity. The efficiency gains from AI should be reinvested into semantic work, which includes counseling clients, exercising moral judgment, and fulfilling fiduciary duties with greater care and reflection.

Improving legal education — Law schools must return to the humanistic formation of lawyers, echoing the vision that prevailed before 2007, before economic pressures reduced legal education to producing commercially exploitable graduates. In addition, AI ethics must be integrated systematically across the curriculum into doctrinal courses rather than being confined to elective courses.

Looking ahead

The five hours gained through AI represent a defining choice for the legal profession. The special relevance of lawyers in the AI age lies precisely in the human components and semantic aspects of lawyering.


In the concluding part of this blog series, we look at how the legal profession needs to rethink how it trains lawyers in order to prevent AI from eroding legal judgment skills

Move over, “Death of the billable hour,” Legalweek 2026 has found a new existential crisis | /en-us/posts/legal/legalweek-2026-new-existential-crisis/ | Thu, 19 Mar 2026

Key takeaways:

      • Structural change in firms — The traditional law firm pyramid, in which junior lawyers perform high-volume work at billable rates, is losing its foundation as AI compresses tasks that once took hours and clients increasingly bring more work in-house.

      • Finding new ways to train — AI-powered simulations are emerging as a concrete answer to the associate training problem, allowing new lawyers to build courtroom skills faster and fail safely behind closed doors.

      • The associate role isn’t dying, it’s being redefined — Those law firms that figure out the right mix of legal training, technological fluency, and management skills will have a significant edge over those that are still debating it.


NEW YORK —ĚýOn more than one occasion, I have written seriously and at length about the death of the billable hour. I’ve argued that alternative fee arrangements (AFAs) are the future, that the economic logic of hourly billing is irreconcilable with AI-driven productivity gains, and that the industry needs to prepare for a fundamentally different pricing model. I meant every word. I still do.

Yet, at last week’s Legalweek 2026, one attendee pointed out they’ve been hearing about the death of the billable hour since the 1990s. At this point, it’s less a prediction and more of a tradition. Indeed, Matthew Kohel, a partner at Saul Ewing, said that despite the legal press coverage connecting AI to the billable hour’s demise, that narrative is now entering its third or fourth decade. And Kohel said his firm simply isn’t seeing meaningful client-driven movement toward AFAs.

So let’s be honest: the billable hour is not dead, and in fact, it may not be even close to dead.

However, if you’re looking for something that is facing a genuine existential reckoning — something the legal industry whispered about in the early days of generative AI (GenAI) and is now discussing openly — Legalweek 2026 may have found it. It turns out the billable hour was never the thing in danger; rather, it’s the person billing the hours.

It’s the associate.

The question nobody wanted to ask out loud

The future of the junior lawyer surfaced in virtually every breakout session across the three-day event, and while Legalweek may not be the point of inception for the question, it was certainly the moment this idea graduated from a half-whispered aside to main-stage conversation.

Moreover, the problem has grown more urgent since its inception in the early GenAI days, when the question was simply whether a firm would need fewer associates. Now, that question hasn’t gone away, but it’s been joined by harder ones concerning training, hiring, and legal and technical skills. For example, what if AI is already better than a junior associate at some of the tasks that defined the role in the past? And what happens if someone says it out loud?

Someone said it out loud.


If you’re looking for something that is facing a genuine existential reckoning, Legalweek 2026 may have found it. It turns out the billable hour was never the thing in danger; rather, it’s the person billing the hours. It’s the associate.


During a panel on Measuring What Matters, the conversation turned to client trust. Clients want to know: How can you be sure AI will catch everything? How do you trust it to find what matters across 5,000 pages of documents?

The response from the panel was direct, and it landed like a brick in the room: it’s 5,000 pages, and someone was already reading those 5,000 pages. That someone is an associate. If that associate — who, more often than not, is one of the least experienced lawyers in the building — is the one reading all those pages, why would you trust them to do it better than a machine?

While that question hung in the air during the panel, it does deserve to sit with you for a moment afterward. Because embedded in it is the uncomfortable arithmetic that drives the entire associate question. The traditional law firm pyramid is built on a base of junior lawyers performing high-volume, lower-complexity work such as document review, due diligence, first-pass research, and doing so at rates that generate revenue while the activity is simultaneously (in theory) training the next generation of partners. If AI can do that base-layer work faster, cheaper, and with accuracy that one panelist described as “beyond very good,” then the pyramid doesn’t just shrink. It loses its foundation.

Barclay Blair, Senior Managing Director of AI Innovation at DLA Piper, noted that tasks like due diligence on some types of financial contracts are already being compressed to two hours, down from 15 to 20 — with zero hours being a realistic possibility in the near future.

Further, as one attendee observed, clients increasingly are adopting AI internally, and they’re bringing work in-house that was previously sent to outside counsel. Clearly, the work that trained generations of associates isn’t just being automated — in some cases, it’s leaving the firm entirely.

Fewer reps, greater weight

Yet here is where it would be easy (and wrong) to write the doom-and-gloom version of the future, in which AI replaces associates, the pipeline collapses, nobody knows how to train lawyers anymore, civilization crumbles, etc. It’s a clean narrative, but it’s also not what Legalweek panels actually said.

Because alongside the anxiety, something else was happening. People were building answers.

In another panel, Developing the Future Lawyer, panelists spent an hour in the weeds of what associate training actually looks like when the old model breaks down — and the conversation was far more concrete than you might expect.


Panelists spent an hour in the weeds of what associate training actually looks like when the old model breaks down — and the conversation was far more concrete than you might expect.


Panelist Abdi Shayesteh, Founder and CEO of AltaClaro, laid out the core problem with precision, noting that there’s a growing gap in critical thinking among associates: templates get copy-pasted without any relevance analysis, and associates often don’t know what they don’t know. Traditional training methods such as videos, lectures, and passive learning don’t fix it. Indeed, those outdated models may be making it worse. Shayesteh’s analogy was blunt: You don’t learn to swim by watching videos — you need to jump into the deep end.

His solution is AI-powered simulations. Not hypothetical ones, but working deposition simulations available today, with real-time AI feedback, in which associates can practice cross-examination, deal with opposing counsel objections, and build the muscle memory that used to require years of live experience.

Kate Orr, Managing Director of Practice Innovation at Orrick, picked up the thread with two observations that reframed the stakes. First, AI simulations allow associates to fail behind closed doors, a radical improvement over the old model, in which blowing it had real consequences because failure often happened directly in front of the partners. Second, the tool isn’t just for juniors. Even experienced lawyers are using simulations to test different approaches, tweak personas, and sharpen arguments. Orrick’s own Supreme Court team had a lawyer use AI to review a draft brief and identify paragraphs that could be tighter.

Todd Heffner, Partner at Smith, Gambrell & Russell, said the real question isn’t whether associates will use AI, but rather whether it gets them to lead at trial in year 10 instead of year 20. Right now, most associates are lucky to see the inside of a courtroom in their first seven years, and even then, they spend most of their time back in the hotel prepping for the more experienced attorneys instead of arguing themselves. If simulations can compress that learning curve, the associate’s career doesn’t disappear, rather, it gets accelerated.

The dinosaur that adapted

During the Measuring What Matters panel, Mitchell Kaplan, Managing Director of Zarwin Baum, introduced himself with a memorable bit of self-deprecation: He’s a dinosaur — but one, he clarified, who understands how AI can revolutionize what he does.

Kaplan’s perspective threaded through both days of programming like a quiet counterweight to the anxiety. He’d seen this before — not AI specifically, but the fear of it. He watched the legal industry transition from physical libraries to digital research tools, and he watched attorneys adapt. And his message was consistent: the work changes, but the need for lawyers doesn’t disappear. Associates may be taking shortcuts, but they still need to read, still need to review, and still need to think.

They’re developing differently than his generation did, Kaplan said, but it’s the same way every generation develops differently from the one before it. And different doesn’t mean wrong.


The work changes, but the need for lawyers doesn’t disappear. Associates may be taking shortcuts, but they still need to read, still need to review, and still need to think.


It’s a perspective that found an unexpected echo in the Enterprise Alignment panel. Mark Brennan, a partner at Hogan Lovells, relayed a comment he heard at a previous AI conference: The next generation of entry-level jobs will be managers — because they’ll be managing agents and other tech tools. Brennan admitted he didn’t have all the answers on what that means for legal training, but the implication was clear. The associate role isn’t dying; instead, it’s being redefined. And the firms that figure out what that redefined role looks like (what mix of legal training, technological fluency, critical thinking, and management skills it requires) will have a significant advantage over firms that are still debating it.

Another panelist, Andrew Medeiros, Managing Director of Innovation at Troutman Pepper Locke, made a prediction that felt like the sharpest version of this idea. He said that at some point, new lawyers are going to be doing simulated matters as a standard part of the development process. Eventually, there’s going to be a generation that walks in as new attorneys and finds themselves litigating right away.

That’s not the death of the associate. Rather, that’s the beginning of a different kind of associate — one who arrives at the courtroom sooner, with different preparation, carrying different tools.

The billable hour, for all the prophecies, refuses to die. The associate, it turns out, has no intention of dying either — just evolving. Mitchell Kaplan called himself a dinosaur, but Legalweek was full of dinosaurs, and every one of them was adapting and, in that adaptation, thriving. The harder question is whether the firms that forged them will be brave enough to follow.


You can find more of our coverage of Legalweek events here

The efficiency imperative: AI as a tool for improving the way lawyers practice /en-us/posts/ai-in-courts/improving-lawyers-practice/ Wed, 18 Mar 2026 17:45:16 +0000 https://blogs.thomsonreuters.com/en-us/?p=70024

Key insights:

      • AI brings improved efficiency — AI accelerates tasks like document review and research, freeing lawyers to pursue more high-value work for clients.

      • AI does the work of a team of lawyers — AI levels the playing field for small law firms and solo practitioners by providing additional capacity without adding headcount, thereby allowing fewer lawyers to do the work of many.

      • Yet, AI still needs guardrails — Lawyers must remain accountable, with human oversight and review to ensure that AI outputs are accurate, thereby preserving nuance and professional judgment.


Already, AI is no longer a theoretical concept for legal professionals, nor is it a nice-to-have for law firms that are seeking to impress their clients with improved efficiency and cost savings. That means the practical question now becomes how to adopt AI in ways that improve lawyers’ speed and capacity without compromising accuracy, confidentiality, or professional judgment.

The strongest near-term value shows up where modern practice is most strained: high-volume inputs and relentless timelines. In that environment, AI can be most helpful as an accelerant for the first pass through large bodies of material.

The possibilities, opportunities, and challenges of using AI in this way were discussed by a panel of experts in a recent webinar from a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

One panelist, Mark Francis, a partner at Holland & Knight, described one way that AI can be an enormous help. “Anything where we’re dealing with large volume of materials that need to be reviewed [such as] large sets of documents, large sets of legal research, large sets of discovery. Obviously, AI can be leveraged in all of those circumstances.” That framing is important because it anchors AI’s utility in a familiar workflow: review, triage, and synthesis at scale.

AI also has a role earlier in the workflow than many attorneys expect. In addition to sorting and summarizing, it can help generate starting structures. For lawyers drafting motions, client advisories, demand letters, contract markups, or internal investigations memos, the hardest step can be getting traction from a blank page. “It’s really good at content or idea generation,” Francis said, adding that lawyers can ask AI to “generate some ideas for me on this topic, or generate an outline of a document to cover a particular issue.”


“AI is definitely going to benefit some of the small law firms who cannot actually afford the workforce. AI can be an extension when it comes to the automation.”


Of course, that does not mean letting an AI model decide what the law is; rather, it means using AI to produce an initial outline, identify possible issues to consider, or propose alternate ways to organize an argument. Then, the attorney should apply their own judgment to accept, reject, refine, and verify the AI’s output.

For legal teams, the ideal mindset is that AI can compress the time between intake and a workable first draft, whether that draft is a research plan, a deposition outline, a set of contract fallback positions, or a motion framework. However, speed is only valuable if it facilitates careful lawyering, not just taking shortcuts.

Efficiency that scales down, not just up

AI’s impact is not limited to large law firms with dedicated tech & innovation budgets. In fact, the benefits may be most transformative for smaller legal organizations that feel every hour of administrative drag and every unstaffed matter. Panelist Ashwini Jarral, a Strategic Advisor at IGIS, underscored how broad AI adoption already is. “AI is already being used in a lot of legal research, contract analysis, and in office operations,” Jarral explained. “Whether that’s in a small law firm or a large law firm, everybody can benefit from that automation with this AI.”

For many practices, that list maps directly onto the work that consumes lawyers’ time without always adding commensurate value: repetitive research steps, first-pass contract review, intake and scheduling, matter administration, and other operational tasks.

Historically, scale favored organizations that could hire more associates, paralegals, and support staff to push volume through the pipeline. Now, AI offers a different form of leverage: additional capacity without adding headcount. “It is definitely going to also benefit some of the small law firms who cannot actually afford the workforce,” Jarral said, adding that “AI can be an extension when it comes to the automation.” For a solo or small firm, that extension can show up as faster first-pass review of contracts, quicker summarization of records, more consistent intake workflows, and reduced time spent on repetitive back-office tasks.

At the same time, it is crucial to be clear-eyed about what is being automated. While AI can help deliver efficiency, it does not offer legal judgment itself. The legal profession still must decide, matter by matter, what level of review is required and what risks are acceptable.


“Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”


And that’s where implementation discipline becomes a strategic differentiator. Law firms that treat AI as a general-purpose shortcut tend to create risk, while firms that treat AI as a workflow component, with guardrails, review steps, and clear accountability, are more likely to capture value without compromising quality.

The non-negotiable: lawyers remain accountable

Any serious conversation about AI in legal practice must address these limits, panelists agreed. The Hon. Linda Kevins, a Justice on the Supreme Court in the 10th Judicial District of New York (Suffolk County), offered the most direct articulation of the boundary line: “Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”

Indeed, legal work is saturated with nuance. The same set of facts can carry different weight depending on jurisdiction, judge, forum, procedural posture, and the client’s goals and risk tolerance. Even when the law is clear, the right action often is not. To strive for true justice requires judgment about timing, framing, business consequences, reputational risk, and settlement dynamics. Those are not merely inputs for an AI to process — they are human decisions that define legal representation.

As the webinar made clear, this is the point at which responsible use becomes practical, not abstract. If AI is used for research support, contract analysis, or document review, lawyers need an explicit approach to verification and oversight. The outputs may look polished and sound confident; however, confidence is not accuracy, and professional responsibility does not shift to a vendor or an AI model. Human review is not a ceremonial or perfunctory step. Rather, it is the core control that protects clients and the court, and it is the inflection point that turns AI from a novelty into a defensible tool.

In practice, the human in the loop means deciding in which instances AI can assist and in which it cannot. It also means reserving an attorney’s time for the decisions that carry legal and ethical consequences and building repeatable habits that prevent teams from drifting into overreliance on AI, especially under deadline pressure.

The legal profession can capture real benefits from AI, including speed, scalability, and improved access, but only if it adopts the technology in a way that preserves what Justice Kevins highlighted: training, nuance, and human accountability.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here

The great AI disconnect: Firms and legal departments are not communicating about AI usage /en-us/posts/technology/great-ai-disconnect/ Wed, 18 Mar 2026 13:39:56 +0000 https://blogs.thomsonreuters.com/en-us/?p=70004

Key insights:

      • There’s an AI awareness gap — Most corporate legal professionals do not know whether their outside legal counsel are using AI in handling their client matters, leaving both law departments and their firms in a state of AI uncertainty.

      • A potential upcoming billing model shift — Efficiencies from AI usage could have a major impact on how many law firms bill matters; value-based billing may need to replace or supplement hourly billing for matters in which AI is used.

      • Transparency builds trust — Lack of visibility and ROI measurement could erode trust between law departments and their outside counsel. Dialogue and measurements can strengthen the firm/client relationship and create scenarios in which both sides can reap the benefits of AI usage.


While the use of AI is increasingly widespread for both corporate legal departments and their outside law firms, there is a considerable lack of dialogue and data-sharing between the two sides on usage, guidelines, and expectations regarding AI. This complicates efforts to maximize the benefits of using AI, and it also may be eroding trust between the two sides.

Significant gaps in visibility and measurement

The Thomson Reuters Institute’s (TRI’s) 2026 AI in Professional Services Report found major gaps in visibility and measurement between law firms and legal departments. The survey found that more than half of law firm respondents said their organizations are currently using or considering using GenAI. And more than half of corporate legal professionals surveyed said they feel that their outside legal firms should use AI on their matters.

However, more than two-thirds (68%) of corporate legal professionals admitted that they currently have no idea if their outside law firms are using AI or not.


In addition, neither side is effectively measuring whether or to what degree their use of AI is improving the delivery of legal services. Indeed, 85% of law firm respondents and 75% of corporate legal department respondents said their organizations are either not collecting ROI data on AI usage or are unsure if they are doing so.

[Chart: Is your organization measuring the ROI of AI tools?]

These visibility and measurement gaps make it difficult for both sides to plan how AI can and should be used in handling client matters. They also raise questions about how potential efficiencies from AI use will affect related factors, such as how much firms charge for their services and how much clients are willing to pay. Half of legal professionals surveyed said they feel that AI is either a major threat or somewhat of a threat to billings and law firm revenues. Not surprisingly, the industry continues to wrestle with how to balance efficiency gains from AI against the limitations of the hourly billing model.

Concerns of corporate law departments

For corporate law departments, the lack of AI usage visibility and ROI measurement is producing a wide variety of responses, ranging from mild but growing concern all the way to outright suspicion about how law firms are using AI on their clients’ behalf. Law department respondents said that while they generally trust their outside counsel to make the right decisions regarding AI use and maintaining quality, most departments have not yet had conversations on those issues with their law firms, including how AI use will affect billing.

“Billing has remained the same as it did before,” noted one corporate legal department attorney. “So, either they are not using AI tools efficiently, or they are just doing double work.”

One corporate CLO was far more blunt in their assessment, especially given the lack of detailed discussions or data from firms: “I fear that firms will use AI to cut time, but continue to bill for the hypothetical amount of time a task would have taken without it. It’s dishonest, but so are many firms.”

One encouraging note is that, according to TRI’s 2025 Future of Professionals Report, 56% of law firm respondents said they are highly or moderately confident in their ability to articulate the value of AI to their clients. Despite law firms’ confidence in explaining the value of AI, however, the visibility gap illustrated in the 2026 AI in Professional Services Report indicates that law firms are not actually having those conversations with clients. Indeed, some corporate law department respondents suggested their outside counsel may be reluctant to discuss AI with them because of concerns about quality and accuracy. One even suggested that firms may feel threatened by AI.

More & better communication is needed

As difficult and complicated as discussions involving AI usage may be, they are also essential. Absent those discussions, trust between firms and clients may be eroding, potentially jeopardizing long-standing relationships.

Here are a few steps that both sides can take to build confidence around the use of AI:

For law firms —

    • Communicate with clients — Hold discussions with clients that allow firms to detail how AI is being or will be used in client matters. Solicit feedback from clients about the instances in which they would accept (or even demand) AI usage on different parts of a matter.
    • Develop an AI billing strategy — Determine not only how AI usage is impacting billable hours, but also how that will interact with the firm’s billing and pricing strategy.
    • Demonstrate and articulate value — Be prepared to explain billings in detail and answer client questions in terms of not only time and rates but also value to the client. This includes both the value that AI brings to the client engagement and the value that the firm brings above and beyond what technology provides, such as more freed-up time for lawyers to pursue value-added work.

For corporate law departments —

    • Lead the conversation, if need be — About three-quarters of both law firm and legal department respondents said it is the firm’s responsibility to initiate discussions around AI usage. However, corporate law departments should not wait for their outside firms to start the conversation. Take the initiative and make sure firms’ delivery models and fee structures are clear regarding AI usage.
    • Set expectations — Provide guidelines, expectations, or mandates on how and when AI will be used in handling client matters. This includes outlining specific use cases, data security protocols, and the human-in-the-loop oversight mechanisms that are used to ensure accuracy.
    • Build an external-facing metrics program — Law departments need to accurately measure the efficiency gains their outside firms are achieving to ensure that they, as the client, are receiving a fair price for value received. Baselines can be established for how long various legal matters took historically and how much they cost; those baselines can then be compared against AI-enabled engagements to evaluate ROI and business impact, as illustrated in the sketch after this list. This also allows legal departments to more thoroughly explain those gains to their own stakeholders.
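To make that baseline comparison concrete, here is a minimal, hypothetical sketch in Python of how a legal operations team might compare a historical matter baseline against an AI-enabled engagement. The matter type, hours, rates, and the simple savings calculation are all illustrative assumptions, not figures or methodology from the TRI report.

```python
# Hypothetical baseline-vs-AI comparison for one matter type.
# All names, figures, and the savings formula are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MatterMetrics:
    matter_type: str
    hours: float          # attorney hours billed
    blended_rate: float   # average hourly rate, USD

    @property
    def cost(self) -> float:
        return self.hours * self.blended_rate

def efficiency_gain(baseline: MatterMetrics, ai_enabled: MatterMetrics) -> dict:
    """Compare a historical baseline against an AI-enabled engagement."""
    savings = baseline.cost - ai_enabled.cost
    return {
        "matter_type": baseline.matter_type,
        "hours_saved": baseline.hours - ai_enabled.hours,
        "cost_savings": savings,
        "pct_reduction": round(100 * savings / baseline.cost, 1),
    }

# Example with made-up numbers, loosely echoing the due diligence
# compression (15-20 hours down to 2) described earlier in this archive.
baseline = MatterMetrics("due_diligence_review", hours=18.0, blended_rate=450.0)
ai_run = MatterMetrics("due_diligence_review", hours=2.0, blended_rate=450.0)
print(efficiency_gain(baseline, ai_run))
# -> {'matter_type': 'due_diligence_review', 'hours_saved': 16.0,
#     'cost_savings': 7200.0, 'pct_reduction': 88.9}
```

Even a simple comparison like this gives a law department a defensible number to bring into fee discussions, rather than relying on a firm’s general assurances about efficiency.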

For both corporate law departments and their outside counsel, it is imperative to engage in thorough discussions and develop data that can inform better decision-making. Such dialogue and measurements can strengthen the firm/client relationship and create scenarios in which both sides can reap the benefits of AI use.


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here
