Legal Technology Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/legal-technology/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology, and human expertise you need to find trusted answers.

Agentic AI following GenAI’s growth trajectory in legal, but with unique oversight challenges, new report shows
/en-us/posts/technology/agentic-ai-oversight-challenges/ (Thu, 09 Apr 2026)

Key takeaways:

      • Agentic AI poised for adoption uptick — Agentic AI is following GenAI’s rapid adoption in the legal industry, with less than 20% of firms currently implementing agentic systems but half planning or considering adoption in the near future, according to a new report.

      • Adoption depends on human oversight answers — Legal professionals are generally optimistic about agentic AI’s potential, but successful adoption depends on explicit guidance about human oversight and the lawyer’s role in maintaining ethical standards.

      • Time to retool AI education? — Agentic AI’s increased autonomy introduces new oversight and ethical challenges for law firms, making targeted education and clear guidance essential to understanding the differences from GenAI.


Over the past several years, law firms and corporate legal departments have turned towards generative AI en masse. At the beginning of 2024, just 14% of all law firms and legal departments featured an enterprise-wide GenAI tool. Just two years later, that number had already risen to 43% of all firms and departments, according to the 2026 AI in Professional Services Report, from the Thomson Reuters Institute (TRI). For large law firms or legal departments, those percentages — not surprisingly — are beginning to approach 100%.

With GenAI adoption now this widespread, legal industry leaders are turning their attention to two primary initiatives. One, of course, is how to get the most out of the AI tools they already have — a task that is proving a bit elusive. Currently, less than 20% of lawyers say their organizations measure AI’s return-on-investment, and most corporate lawyers say they have no idea how their outside law firms are approaching AI. Thus, instituting not just AI tools, but also an AI strategy is the second top priority for law firms and corporate legal departments in 2026 and beyond.

However, even as the legal industry reaches a tipping point in adopting GenAI tools, technology innovation continues unabated. Agentic AI has emerged as the next wave of innovation that could change how lawyers work on a daily basis, offering a way to autonomously complete multi-step tasks. For example, agentic AI systems are already being built for the legal industry that independently research a regulation or law, draft a document based on the findings, identify pitfalls, and revise the document, with stops for human guidance instituted only as desired.

According to the AI in Professional Services Report, the legal industry is already making headway towards implementing agentic AI systems. For agentic AI to truly take hold in legal, however, lawyers still require more education around not only how it differs from the GenAI systems they already have in place, but also when and where human intervention needs to occur within an agentic system.

The early stages of agentic AI

Examining current agentic AI adoption for the legal industry almost takes one back in time — two years, to be exact. Following the public release of GenAI in late-2022, many legal industry organizations spent 2023 evaluating and experimenting with AI systems, usually with a small working group of interested guinea pigs. As a result, only 14% of survey respondents said their law firms or corporate legal departments were engaged in organization-wide GenAI rollouts at the start of 2024. However, more than half of respondents said their organizations expected to be rolling out large-scale GenAI systems over the next 1 to 3 years. The intervening two years since then have proved that prediction to be largely true.

Agentic AI usage in the first half of 2026 looks largely similar to GenAI in 2024. The legal industry started to experiment with agentic AI at the beginning of 2025, with an eye towards actual implementation in 2026 and beyond (particularly as legal software providers began to integrate agentic systems into their own products). As such, less than 20% of recent survey respondents say their organization is engaged in widespread agentic AI adoption, while about half say their organization is either planning to use or considering whether to use agentic AI in the near future.


By and large, lawyers feel positive about the agentic AI movement. When asked about their sentiment towards agentic AI, 51% of legal industry respondents said they felt excited or hopeful, while just 19% said they felt concerned or fearful. Further, about half (47%) said they actively believe agentic AI should be used for legal work, while 22% felt it should not, with the remainder saying they were unsure. These figures largely track with the sentiments expressed about GenAI in 2024, which have only grown over time from about 50% positive two years ago to two-thirds of all legal professionals feeling positive currently.

This all lends further credence to a rise in agentic AI usage similar to what law firms and corporate legal departments experienced with GenAI over the course of 2024 and 2025. Indeed, when asked when they expect agentic AI to be a central part of their workflow, few have baked agentic systems into their daily work currently, but a majority of legal industry respondents expect it to be central within the next 3 to 5 years.


The unique barriers of agentic AI adoption

Agentic AI does differ from GenAI in one crucial area that may limit its growth potential within the legal industry, however — autonomy. By and large, GenAI systems operate on a back-and-forth basis: Users provide the tool a prompt, receive its output, and then iterate back-and-forth from there. Agentic AI is intended to be more automated by design, only requiring human input at pre-determined points in the process. And that makes some lawyers understandably nervous.

When asked why they might feel hesitant about using agentic AI for legal tasks, the most common answer was a general fear of the unknown, but the second most common answer dealt with the need for careful monitoring and oversight. In fact, some respondents said they were excited about GenAI, but more cautious about agentic AI’s potential.

“Agentic AI, while exciting, to me removes oversight a step too far,” said one such lawyer from a US law firm. “I like the idea of prompting and reviewing a result. It is something else to have a machine have so much autonomy in the actual doing of a thing and potentially acting on my behalf without that very concrete review.”


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.


An assistant GC at a US company also pointed to potential privacy and security concerns, adding: “The fact that agentic AI operates in a much more autonomous way, with a lack of control from the user, means there are many unknowns that are hidden beneath the process.”

For law firm and corporate legal department leaders looking to potentially implement agentic AI systems into their practice, this means re-thinking what AI education and training will mean moving forward. Beyond that, however, legal AI educators also will need to make sure to pinpoint and perhaps over-explain those specific instances in which human oversight needs to occur in agentic systems. More autonomous does not mean fully autonomous, and particularly for lawyers with ethical duties to their work product, lawyer oversight will in fact be a necessary part of any agentic system.

For law firm or legal department leaders, that means that finding the right balance between efficient workflows and human intervention will be key to agentic AI adoption. And those organizations that can best communicate human-in-the-loop expectations to their professionals up-front will be rewarded with broader and more reliable adoption.

Clearly, lawyers feel positive about the agentic AI future, after all. They just need the lawyer’s role in this new paradigm spelled out explicitly.

“Agentic AI is powerful, but its moral compass must come from humans,” one UK law firm barrister noted aptly. “Lawyers are trained to safeguard fairness, rights, and the rule of law — principles that should guide how AI is designed, governed, and deployed. Hope lies in our ability to shape AI through these values for fairer values for society as a whole.”


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here.

The AI Law Professor: When AI quietly hijacks legal judgment
/en-us/posts/technology/ai-law-professor-first-draft-trap/ (Wed, 08 Apr 2026)

Key takeaways:

      • Anchoring distorts judgment before you begin — Research shows a first draft shapes subsequent decisions, and an AI draft is the most seductive anchor imaginable because it looks exactly like something a lawyer would write.

      • The First Draft Trap inverts legal training — The Socratic method builds the habit of holding multiple possibilities in tension before committing; but an AI first draft collapses that space before the real thinking begins.

      • The fix is to ask for the map, not the draft — Requesting multiple strategic framings before writing keeps judgment where it belongs and uses AI to expand possibilities rather than foreclose them.


Welcome back to The AI Law Professor. Last month, I examined why promised efficiency gains often become a cycle of work intensification. This month, I want to address a subtler challenge. I call it the First Draft Trap, and understanding it may change how you reach for AI the next time a new matter lands on your desk.

We have all heard the pitch: Staring at a blank page? Just prompt the AI. In seconds you have a working draft: structured, coherent, and surprisingly competent. The blank page problem, that ancient enemy of productivity, thus has been vanquished.

Except the blank page itself was never just an obstacle; rather, it was a space of possibility. For lawyers, it was the space in which the most important part of their work actually happens. Now, with AI in the mix, that may be changing.

Welcome to the First Draft Trap.

Simply put, the First Draft Trap is this: The moment you accept an AI-generated draft as your starting point, you have already made the most consequential decision of the entire project — most importantly, you made it by not making it. You let the machine choose your direction, your framing, and your theory. Everything that follows is editing; and editing, no matter how rigorous, is not the same as thinking.

The cognitive hijack

There is solid psychology behind why this happens. Daniel Kahneman and Amos Tversky demonstrated in their landmark 1974 paper, “Judgment under Uncertainty: Heuristics and Biases,” that once people are exposed to an idea, that first impression becomes a mental anchor that distorts their subsequent judgments. In their experiments, subjects who watched a roulette wheel spin to a random number still let that number influence their estimates of completely unrelated quantities. The anchor held even when people knew it was meaningless.


Please join Tom Martin at the upcoming conference on April 28–29. It’s virtual and completely free — two days of keynotes, panels, and workshops on AI and the legal profession.


An AI first draft is the most seductive anchor imaginable. It is not random — it is plausible, and it is well-organized. It sounds like something a lawyer would write. And that is precisely what makes it dangerous. You know intellectually that it is just one of many possible approaches to addressing the matter, but the anchor holds anyway.

That is the First Draft Trap at the cognitive level. The AI draft is not just one option you happen to prefer. It is a filter that prevents you from seeing the other options that were available to you, the roads you never even noticed that you did not take.

Consider what this means for a profession built on the opposite instinct. From the first day of law school, lawyers are trained to resist the obvious answer and to think like a lawyer. The Socratic method exists for exactly this reason. A good professor hears your confident response and asks: What else? What if the facts were different? What is the argument on the other side? The goal is not to arrive at an answer, per se. It is to build the mental habit of holding multiple possibilities in tension before committing to any one of them.

The First Draft Trap is the anti-Socratic method. It delivers a confident answer before you have even formulated the question properly — and instead of interrogating it, you polish it.

The value of the blank page

Think about what a senior partner actually does when a junior associate brings them a memo. The partner’s value is not better writing; rather, it is peripheral vision: The ability to see what the memo does not address, the argument not considered, or the framing that would land differently with this particular judge or this particular jury. That capacity to see beyond the document in front of them is why clients pay senior partners premium rates. And it is precisely the muscle that atrophies when your default workflow begins with the prompt generate a draft.


The AI draft is not just one option you happen to prefer. It is a filter that prevents you from seeing the other options that were available to you, the roads you never even noticed that you did not take.


The two-system framework popularized by Daniel Kahneman gives us a clean way to describe what is going wrong. System 1 is fast, intuitive, and pattern-matching, while System 2 is slow, deliberate, and analytical. The practice of law, at its best, is a System 2 discipline. We, as lawyers, are trained to override gut reactions, challenge assumptions, and think through consequences before acting.

In this way, the AI first draft feels like a System 2 output. It is structured, footnoted, and methodical. However, your decision to accept it as a starting point is pure System 1 — a fast, intuitive grab at the nearest plausible answer. You have used a sophisticated tool to bypass the sophisticated thinking the tool was supposed to support. That uncomfortable period of ambiguity, of not knowing which path is best, is where the real lawyering lives.

What to do instead

None of this means stop using AI. It means stop using AI to skip the hard part that matters.

Before you ever ask for a draft, ask for the map. Describe the matter or document you are working on, then ask the AI for three fundamentally different strategic framings for the problem. For each framing, request the strongest argument in its favor and its most serious vulnerability. Then ask which framing best fits the client’s goals, the audience, or the procedural posture. Close with a clear instruction: Do not write a draft yet.
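
To make the pattern concrete, here is what such a map-first prompt might look like in practice (the matter and client details below are purely hypothetical):

      “I am advising a commercial tenant in an early-termination lease dispute. Before writing anything, give me three fundamentally different strategic framings of this problem. For each framing, state the strongest argument in its favor and its most serious vulnerability. Then tell me which framing best fits a client whose priority is preserving the landlord relationship. Do not write a draft yet.”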

That last instruction is the key. It keeps you in the driver’s seat during the phase that matters most. You are using AI to expand the possibilities before you prune them, not after. And, most importantly, it gives you the opportunity to think for yourself about other important possibilities and add them in.

In Kahneman’s terms, use AI to fuel System 2, not to hand the controls to System 1. Let the machine generate options, and you exercise judgment.

For lawyers, the ability to see what is not there is the whole game.

Do not let the first draft blind you to it.


Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and author of a forthcoming book. He is “The AI Law Professor” and writes this eponymous column for the Thomson Reuters Institute.

Relationship-building and AI fluency key to closing visibility gap, new report shows
/en-us/posts/corporates/closing-ai-visibility-gap/ (Mon, 06 Apr 2026)

Key insights:

      • A significant visibility gap persists between legal departments and the C‑Suite — Most general counsel believe their legal department contributes strategically, yet senior executives often fail to see or understand that value.

      • Strong internal relationship‑building is critical (and often underdeveloped) — This capability enables legal teams to spot risks earlier, stay embedded in decision‑making, and make their work more visible across the business.

      • Closing the gap requires communicating legal’s value and increasing true AI fluency — For legal teams to be seen as proactive, strategic partners rather than task executors, communication and strong AI fluency are essential.


General counsel (GCs) have spent years doing more with less, tightening their legal spend, and aligning the law department’s priorities with the wider business. And yet, despite all of this effort, a striking visibility gap persists. While 86% of GCs believe their department is a significant contributor to overall organizational objectives, only 17% of the C-Suite agrees, according to the 2026 State of the Corporate Law Department Report, from the Thomson Reuters Institute, which was based on more than 2,300 interviews with corporate general counsel. Meanwhile, 42% of C-Suite executives say the legal function contributes little or not at all to company performance.

The challenge for GCs is whether their staff have the skills and capabilities to make their work visible, relevant, and understood by the business at large. To address this perception gap in 2026, every GC needs to prioritize building richer internal relationships with business leads, moving from task-based to outcome-focused messaging, and improving the team’s collective AI fluency.

Empower teams to build internal relationships

Nearly half of all GCs surveyed for the report cited staffing and resource constraints as the top barrier to delivering additional value, a concern that has remained stubbornly consistent for years. Beyond headcount, the report underscores that the deeper challenge facing legal departments is relational.

Internal relationship-building is one of the most critical and underrated people skills in a legal department’s collective skill set. Indeed, 68% of GCs rate internal dialogue as their most valuable source of information about emerging risks. In fact, the most successful GCs use a deliberate combination of formal and informal methods to build connections with the internal business units that they serve.


You can learn more about how to assess your legal department’s strategic positioning with the Thomson Reuters Institute’s Value Alignment toolkit, here.


Some run structured weekly face-to-face sessions with business departments, complete with schedules, plans, and frameworks. Others rely on walking the halls, open-door policies, and ad-hoc conversations that keep the corporate law department visible and accessible on a human level.

The report offers a five-dimensional framework to help GCs audit where, with whom, and how often legal is in dialogue with other parts of the business.


Use communication tactics that focus on business outcomes

Even when legal departments are doing excellent work, they often describe it in the wrong language. Many in-house lawyers categorize their contributions in task-based terms — such as “We support M&A” or “We analyze contracts” — rather than in value-creating terms.

Some in-house legal leaders have progressed to stakeholder-level framing, such as, “We protect the company from competitive threats” or “We support new business opportunities.” Still, neither of these levels truly communicates value to a C-Suite audience, the report shows.

To effectively align the law department’s priorities with business goals, in-house attorneys need to develop the skill of communicating through a business lens. For example, one GC said that the primary goal of the law department is to “find the fastest and most compliant way for the sales department to sell products.” This response reframes the legal function’s activities in far more business-fluent, value-added terms.

Legal teams are not always good at touting their accomplishments, however, and this is a challenge when a lot of the work can be categorized as invisible. For example, when protecting the company is done right, threats are eliminated before they occur and no one notices. When efficiency is unlocked through process improvement, the C-Suite only sees the outcome if someone connects the dots explicitly. This is why surfacing invisible value is now a business imperative for corporate law departments.

Advancing from AI literacy to AI fluency

The most significant skills challenge facing legal departments in 2026 is how to best use AI strategically. Mentions of AI as a strategic priority among GCs have doubled in the past year, according to the report. In fact, almost half of all GCs now reference AI in their survey interviews. Yet the report draws a sharp distinction between being AI literate and being AI fluent, with most departments being the former but not the latter.

To close that gap, the report recommends a six-layer model covering learning, empowerment, ownership, accountability, usage, and expectations.


At its core, the model asks GCs to start with open encouragement and access to AI tools to build momentum, then shift toward more formal expectations around adoption to make AI use a daily habit.


You can download a full copy of the Thomson Reuters Institute’s 2026 State of the Corporate Law Department Report here.

The 4 Plates: Are you measuring the real value of AI in your legal department?
/en-us/posts/corporates/4-plates-measuring-efficiency/ (Wed, 01 Apr 2026)

Key takeaways:

      • Efficiency is a means, not an end — Gains from AI only count when you can show what they enabled: better advice, stronger protection, smarter business support.

      • Narrow measurement invites cuts — Legal departments that measure AI value only through cost savings are telling C-Suites that legal costs less, thereby inviting budget and headcount reductions.

      • Measure across all four plates — A framework that captures effectiveness, risk, and enablement alongside efficiency is what shifts perception of the legal department from cost center to strategic asset.


Your legal department has invested in AI tools, adoption is growing, your team is saving time on routine work and, by most accounts, work operations are running faster. Then your CFO asks a simple question: What has AI delivered for the legal department?

If your answer centers on hours saved and cost reduced, you are not alone. However, you may be leaving your most important value story untold. And in a climate in which legal departments are under more scrutiny than ever to demonstrate the full return on their AI investment, that gap matters.

This is the fourth and final part of our series on the “Four Spinning Plates” model, which frames the GC’s evolving responsibilities as:

      1. delivering effective advice
      2. operating efficiently
      3. protecting the business, and
      4. enabling strategic ambitions.

This article focuses on the Efficient plate and specifically on the risk of letting it do too much of the talking.


The Efficient plate under pressure

For a GC, making the best use of what are often limited resources is a constant pressure. The Efficient plate sits alongside, not above, the other three plates and must always be kept spinning. Right now, however, for many in-house legal teams the Efficient plate is receiving disproportionate attention, and for understandable reasons.

AI adoption in corporate legal departments is accelerating quickly. According to the Thomson Reuters Institute’s AI in Professional Services Report 2026, nearly half (47%) of corporate legal respondents surveyed said their department has already integrated generative AI (GenAI) into their work — more than double the figure from the previous year. A further 18% reported that they’re already using agentic AI, with more than half expecting agentic AI to be central to their workflow within the next two years.

GCs are genuinely excited about what this makes possible. As one GC said in the survey that underpinned the AI in Professional Services Report: “It presents the promise of getting out of low-value work and into higher-value work that supports the business.” Another described their vision of a legal department that is “boldly digital-first, relentlessly innovative, and tightly woven into business priorities.”

Clearly, the opportunity is real, but so is the risk of measuring it badly.

The measurement trap

Our 2026 research found that only one-quarter of legal departments are currently measuring the ROI of their AI tools. That alone is striking given the pace of adoption, but the follow-up finding is where the real problem lies — of those departments that are measuring ROI, 80% are tracking it in terms of internal cost savings.

Reducing external spend, automating high-volume processes, and bringing more work in-house are all legitimate efficiency gains and worth reporting, of course. However, when cost reduction becomes the only story being told, two things can happen. Your C-Suite learns to associate your department’s value with how little it costs, a frame that is very difficult to escape once it’s established. And the wider value that efficiency enables in terms of sharper risk identification, faster business support, and higher-quality advice goes unmeasured and therefore unrecognized.


If your metrics only capture time saved and cost reduced, and not what that freed-up capacity actually delivered, you are measuring the means and ignoring the end.


Think about what GCs themselves say they want from AI. As several GCs said in the survey, they’re hoping AI will provide them with “better output on more meaningful tasks,” “proactive, strategic insight,” and “getting out of low-value work.” These are not efficiency outcomes, per se; rather, they are effectiveness, protection, and enablement outcomes, made possible by improved efficiency.

So, if your metrics only capture the input (time saved, cost reduced) and not what that freed-up capacity actually delivered, you are measuring the means and ignoring the end. This is the efficiency trap — measuring the plate so narrowly that it starts to work against you.

Reframing how you measure efficiency

Measuring efficiency well does not mean measuring it more. It means measuring it differently, and always in relation to the business you support. A few principles worth applying include:

Present spend in a business context — Legal spend as a percentage of company revenue tells a more credible story than a raw cost figure. It scales with the business and can be benchmarked meaningfully against peers.

Show what technology investment actually delivered — Time saved through automation is a useful starting point, but the stronger case is what the team did with that time. Tracking the shift from routine to strategic work over a period of time is a far more compelling ROI story.

Connect efficiency gains to business outcomes — An efficiency gain that enabled a faster product launch, prevented a compliance risk, or improved stakeholder satisfaction has a value that no cost metric will capture. Build those connections explicitly into how you report the value of the legal department to the C-Suite.

New resources to help

To support GCs in getting this right, the Thomson Reuters Institute has added two new resources to its Value Alignment Toolkit that directly address this measurement gap.

The Metrics Library brings together more than 100 metrics organized across all four spinning plates. It is a practical starting point for GCs to browse, select, and adapt to the specific goals of their departments, making it easier to build a measurement framework that reflects everything departments do, not just the part that appears in a budget line.

The AI Success Metrics guide addresses the AI measurement gap head-on with a best practice guide and a hands-on worksheet designed specifically for legal departments navigating AI adoption and asking: How do we actually know whether this is working? It looks beyond cost savings to capture the fuller picture of AI value including quality, capacity, strategic contribution, and risk.

Getting the balance right

In today’s environment, every GC needs to consider their answer when their C-Suite asks what the legal department delivers. Are your department’s metrics giving them the full answer or just the part that’s easiest to count?

Efficiency is not the enemy of strategic value. A department that runs well, uses its resources wisely, and embraces technology thoughtfully can in turn create the conditions for everything else the business needs from its legal function. However, that case only lands if your metrics measure across all four plates, not just one.


You can explore the new Metrics Library and AI Success Metrics guide, along with the Thomson Reuters Institute’s full Value Alignment Toolkit, here.

Helping the legal profession get AI‑ready: A new advisory board takes shape
/en-us/posts/legal/ai-advisory-board/ (Thu, 26 Mar 2026)

Key insights:

      • AI is already reshaping the legal profession — AI is already embedded in lawyers’ day-to-day legal work, with a significant share of both law firm attorneys and in-house legal teams actively using GenAI tools and many expecting it to become central to their work within the next five years.

      • AIFLP Advisory Board was formed to prepare lawyers for an AI-reshaped profession — TRI convened 21 respected leaders from legal education, private practice, the judiciary, and AI ethics and governance to help ensure lawyers and law students are prepared for a profession reshaped by AI.

      • Human judgment remains central in an AI-enabled legal future — Becoming AI-ready is not simply about learning to use new tools; the Advisory Board emphasizes that strengthening irreplaceable human capabilities is critical.


In today’s tech-driven environment, AI is no longer a future concept for the legal profession — it’s already here, and it’s changing how lawyers work, learn, and serve clients. Recognizing just how fast the evolution is moving, the Thomson Reuters Institute (TRI) has launched the AI and the Future of Legal Practice (AIFLP) Advisory Board, bringing together a group of respected leaders from across the legal ecosystem to help guide what comes next.

The board includes 21 accomplished voices from legal education, private practice, the judiciary, and AI ethics and governance. Their shared goal is simple but ambitious: Help ensure that both today’s lawyers and tomorrow’s law students are prepared for a profession being reshaped by AI.

Why now?

Because the shift is already underway. According to TRI’s recent 2026 AI in Professional Services Report, 41% of law firm attorneys say their organizations are already using some form of generative AI (GenAI), and nearly half of those at corporate legal departments report that AI tools are being rolled out there too. Even more telling, most professionals said they expect GenAI to become central to their day‑to‑day work within the next five years.

That pace of change raises big questions about competence, ethics, education, risk, and access to justice. And those questions don’t have easy answers.

What the Advisory Board will focus on

The AIFLP Advisory Board is designed to tackle those challenges head‑on. Its work will center on four key areas that are already under pressure as AI adoption accelerates:

      • Legal education and talent development
      • Ethics, professional competence, and accountability
      • Governance, risk management, and client counseling
      • Access to justice and modern service delivery

The Advisory Board’s early focus areas will look at how AI is actually changing legal practice today, what future‑ready lawyers really need to know, and how legal education and real‑world practice can better align. The emphasis is not just on using AI tools, but on strengthening the human skills that matter most, such as sound judgment, critical thinking, and careful verification of AI‑generated outputs.

Shaping the future, not reacting to it

Citing the critical need for this Advisory Board’s creation, Mike Abbott, Head of the Thomson Reuters Institute, notes that the legal profession is at a crossroads, and it can either react to AI‑driven disruption or take an active role in shaping how these technologies are used to support lawyers, courts, and the public.

“By assembling a board of distinguished leaders, our goal is to help practicing lawyers and the lawyers of the future navigate a rapidly evolving landscape,” Abbott said. “Ensuring that legal education strengthens irreplaceable skills such as critical thinking, human judgment and effective communication helps make AI use safe and effective. The Board’s efforts will ultimately help shape a future-ready profession, leading to better outcomes for all.”

Meet the AIFLP Advisory Board Members

By convening experienced leaders from across the profession, TRI hopes to help lawyers navigate this landscape with confidence. Advisory Board Members include:

      • Michael Abbott, Head of the Thomson Reuters Institute
      • Soledad Atienza, Dean, IE Law School
      • The Honorable Jennifer D. Bailey, (Ret.), Partner, Bass Law
      • Benjamin Barros, Dean, Stetson University College of Law
      • Professor Sara J. Berman, University of Southern California, Gould School of Law
      • Megan Carpenter, Dean Emeritus, University of New Hampshire Franklin Pierce School of Law
      • Ronald S. Flagg, President, Legal Services Corporation
      • Donna Haddad, AI Ethics and Governance expert, and founding member, IBM AI Ethics Board
      • Johanna Kalb, Dean and Professor of Law, University of San Francisco School of Law
      • The Honorable Nelly Khouzam, Florida Second District Court of Appeal
      • The Honorable William Koch, Dean, Nashville School of Law, and former Tennessee Supreme Court Justice
      • Sheldon Krantz, retired partner, DLA Piper, and a founder, DC Affordable Law Firm
      • Stefanie A. Lindquist, Dean, School of Law, Washington University in St. Louis
      • The Honorable Mark Martin, Founding Dean and Professor of Law, Kenneth F. Kahn School of Law at High Point University, and former Chief Justice, Supreme Court of North Carolina
      • Caitlin (Cat) Moon, Professor of the Practice and founding co-director, Vanderbilt AI Law Lab, Vanderbilt Law School
      • Hari Osofsky, Myra and James Bradwell Professor and former Dean, Northwestern Pritzker School of Law; Founding Director, Northwestern University Energy Innovation Lab; and Founding Director, Rule of Law Global Academic Partnership
      • Joanna Penn, Chief Transformation Officer, Husch Blackwell
      • The Honorable Morris Silberman, Florida Second District Court of Appeal
      • The Honorable Samuel A. Thumma, Arizona Court of Appeals, Division One
      • Mark Wasserman, Partner and CEO Emeritus, Eversheds Sutherland
      • Donna E. Young, Founding Dean, Lincoln Alexander School of Law, Toronto Metropolitan University

What’s next?

The Advisory Board held its first meeting in February and will meet quarterly going forward. As the work progresses, TRI plans to publish research findings, best practices, and practical recommendations for legal educators, law firms, and courts.

In a profession built on precedent and careful reasoning, the rise of AI presents both opportunity and responsibility. The AIFLP Advisory Board is an effort to make sure the legal community meets that moment thoughtfully and on its own terms.


You can learn more about the impact of advanced technology on the legal profession here

2026 State of the Corporate Law Department Report: GCs align strategy to corporate imperatives, but C-Suites want more
/en-us/posts/corporates/state-of-the-corporate-law-department-report-2026/ (Tue, 24 Mar 2026)

Key takeaways:

      • Disconnect between legal departments and C-Suite perceptions — While many general counsel believe their departments are significant contributors to business success, most C-Suite executives do not share this view. Fully 86% of GCs say they believe their department is a significant contributor, but only 17% of C-Suite executives agree.

      • A need to find new ways to demonstrate value — Legal departments are under increasing pressure to do more with less, as nearly half of GCs surveyed cite staffing and resource constraints as their top barrier to delivering additional value. Despite these limitations, expectations from the C-Suite continue to rise.

      • AI adoption accelerates, business strategy comes next — Legal departments are rapidly embracing technology to improve efficiency, manage resources, and address cost pressures. Not surprisingly, the proportion of GCs calling AI a strategic imperative has doubled.


Over the past several years, general counsel and corporate law departments at large have transformed their operations. Many have become more efficient enterprises, leveraging technology, in particular AI, at an increased pace. GCs have adjusted their hiring practices to conform with the modern corporation, taking new ways of working into account. And they have embraced data-driven decision-making, evaluating outside counsel and their own operations alike with a wider suite of new metrics and KPIs.

But do you know who hasn’t yet realized the fruits of that labor? The corporate C-Suite.


The 2026 State of the Corporate Law Department Report, released today by the Thomson Reuters Institute, reveals a disconnect between how GCs and their corporate law departments view their own alignment to the wider business, and what C-Suite executives believe the legal department contributes. Within this gap, the message is clear: GCs not only need to align with their organizations’ overall business strategy, they need to learn how to prove that alignment to the rest of the company.

Indeed, when asked how they view legal’s contribution to the rest of the business, 86% of GCs surveyed said they viewed the legal function as a significant contributor. However, only 17% of other C-Suite executives said the same — and 42% said legal contributes little or not at all.


As the report explains, this disconnect lays the groundwork for the tension facing many GCs today. While GCs are increasingly aiming to align with business standards, the rest of the organization is not recognizing those efforts. Instead, many C-Suites are looking for even more from today’s legal departments to prove their contributions to organizations’ business imperatives.

As in past years, many in-house legal departments are being tasked to do more with less. Nearly half of GCs cited staffing and resource constraints as the top barrier they face to delivering additional value. Indeed, many said they expected outside counsel spend in some key areas — such as regulatory work and mergers & acquisitions — to remain high. As of the fourth quarter of 2025, more than one-third (36%) of GCs said they expect to increase overall spend on outside counsel over the next year, while only 20% said they plan to decrease their spend.


Despite legal departments’ gains, their C-Suites are looking for them to take the next step, turning operational excellence into business success.


Not surprisingly, many GCs said they view technology as one of the primary ways they have to combat these resourcing and cost issues. In fact, the proportion of GCs mentioning technology as a strategic priority entering 2026 doubled over the year prior. Legal departments have begun to feel the positive effects of AI in their own organizations, the report notes, such as increased efficiency or time freed up for strategic work.

Despite these gains, C-Suites are looking for their legal functions to take the next step, turning operational excellence into business success. This can take a number of different forms, such as explicitly tying advice to client business objectives, presenting legal spend in the context of the business by showing it as a percentage of revenue, or approaching risk management with the goal of aiding business imperatives. “When we have a risky legal subject, the company never prefers just to see the legal opinion,” said one retail GC. “They’re also requesting you to drive them how to make a decision.”

AI and technology should also be approached in this same way, the report argues. Although almost half of all corporate legal departments have some type of enterprise-wide GenAI tool, according to the survey, very few are collecting success metrics around AI’s implementation or linking its use to business revenue. Put a different way, many legal departments are focused on unlocking capacity, rather than deploying capacity in a business-centric way — much to the chagrin of their C-Suites.


Although legal departments have established a solid foundation upon which a business can stand, ultimately, C-Suites don’t want just a foundation. They want help building the entire house, the report shows, directly enabling the services that companies provide to customers. In that, GCs and legal departments have more work to do, not only tying strategy to overall business initiatives but actively communicating how the legal function’s work aids the company as a whole.


You can download a full copy of the Thomson Reuters Institute’s “2026 State of the Corporate Law Department Report” here.

Inside the Shift: The AI Adoption Boardgame & why law firm leaders can’t afford to play it safe
/en-us/posts/technology/inside-the-shift-ai-adoption-boardgame/ (Mon, 23 Mar 2026)

You can read TRI’s latest “Inside the Shift” feature, The AI adoption board game: Why law firm leaders can’t afford to play it safe, here.


Let’s be honest: most law firms know AI is a big deal. They’ve read the headlines, attended the conferences, and nodded along when someone says, “AI will change everything.” The problem? Knowing that AI matters and actually doing something strategic about it are two very different things. And according to our latest Inside the Shift feature article, that gap is where many law firms are starting to lose ground.

In our latest Inside the Shift feature, author Michelle Nesbitt-Burrell, Marketing Strategy Director for Thomson Reuters (TR), frames AI adoption as a boardgame that’s already underway. Some law firms are moving confidently across the board, while others are stuck on the starting square, not because they don’t see the future, but because they’re hesitating. The latest TRI research shows that while the majority of lawyers say they believe AI will fundamentally transform the legal industry within the next few years, far fewer expect real change inside their own firms anytime soon. That disconnect is risky — especially when competitors and clients aren’t waiting around.



Here’s what should concern every law firm partner — corporate legal departments aren’t just playing the same AI adoption game, they’re winning it.

 


One of the most uncomfortable truths the article reveals is that corporate legal departments are often further ahead on AI adoption and utilization than their outside counsel. In fact, many corporate legal teams are investing in AI faster and using it more deeply in their day‑to‑day legal work. That means clients are reviewing contracts faster, doing more work internally, and increasingly judging their outside law firms on their technological sophistication. In a world like that, the excuse that “We’re still experimenting” stops sounding reasonable pretty quickly.

The article breaks law firms into three players on the game board:

          1. The laggards — Those firms with no meaningful AI plans and very little ROI to show for it.
          2. The adopters — Those firms that are experimenting with tools but don’t really have a clear strategy. These firms see some efficiency gains but too often hit a ceiling.
          3. The innovators — Those firms with visible, intentional AI strategies. These firms are far more likely to see ROI, revenue growth, and long‑term competitive advantages.

So, what separates the winners from everyone else? The article details the PLAYERS framework: pilot with purpose, leadership that sets the pace, action over perfection, strong ethics, serious education, good data, and — most importantly — strategy before tools. In other words, those law firms that want to become innovators should stop asking, “What AI should we buy?” and start asking, “What are we actually trying to achieve?”

Clearly, AI isn’t a side project anymore. Law firms that treat it like one may save some time, but as the article fully explains, those firms that approach AI adoption and implementation strategically will reshape how legal work gets done. The game is already moving — the only question is whether your firm is playing to win or quietly falling behind.


You can find more Inside the Shift feature articles from the Thomson Reuters Institute here.

The efficiency imperative: AI as a tool for improving the way lawyers practice
/en-us/posts/ai-in-courts/improving-lawyers-practice/ (Wed, 18 Mar 2026)

Key insights:

      • AI brings improved efficiency — AI accelerates tasks like document review and research, freeing lawyers to pursue more high-value work for clients.

      • AI does the work of a team of lawyers — AI levels the playing field for small law firms and solo practitioners by providing additional capacity without adding headcount, thereby allowing fewer lawyers to do the work of many.

      • Yet, AI still needs guardrails — Lawyers must remain accountable, however, with human oversight and review to ensure that AI outputs are accurate and correct, thereby preserving nuance and professional judgment.


Already, AI is no longer a theoretical concept for legal professionals, nor is it a nice-to-have for law firms that are seeking to impress their clients with improved efficiency and cost savings. That means the practical question now becomes how to adopt AI in ways that improve the speed and capacity of lawyers without compromising accuracy, confidentiality, or professional judgment.

The strongest near-term value shows up where modern practice is most strained: high-volume inputs and relentless timelines. In that environment, AI can be most helpful as an accelerant for the first pass through large bodies of material.

The possibilities, opportunities, and challenges of using AI in this way were discussed by a panel of experts in a recent webinar hosted as part of a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

One panelist, Mark Francis, a partner at Holland & Knight, described one way that AI can be an enormous help. “Anything where we’re dealing with large volume of materials that need to be reviewed [such as] large sets of documents, large sets of legal research, large sets of discovery. Obviously, AI can be leveraged in all of those circumstances.” That framing is important because it anchors AI’s utility in a familiar workflow: review, triage, and synthesis at scale.

AI also has a role earlier in the workflow than many attorneys expect. In addition to sorting and summarizing, it can help generate starting structures. For lawyers drafting motions, client advisories, demand letters, contract markups, or internal investigations memos, the hardest step can be getting traction from a blank page. “It’s really good at content or idea generation,” Francis said, adding that lawyers can ask AI to “generate some ideas for me on this topic, or generate an outline of a document to cover a particular issue.”


“AI is definitely going to benefit some of the small law firms who cannot actually afford the workforce. AI can be an extension when it comes to the automation.”


Of course, that does not mean letting an AI model decide what the law is; rather, it means using AI to produce an initial outline, identify possible issues to consider, or propose alternate ways to organize an argument. Then, the attorney should apply their own judgment to accept, reject, refine, and verify the AI’s output.

For legal teams, the ideal mindset is that AI can compress the time between intake and a workable first draft, whether that draft is a research plan, a deposition outline, a set of contract fallback positions, or a motion framework. However, speed is only valuable if it facilitates careful lawyering, not just taking shortcuts.

Efficiency that scales down, not just up

AI’s impact is not limited to large law firms with dedicated tech & innovation budgets. In fact, the benefits may be most transformative for smaller legal organizations that feel every hour of administrative drag and every unstaffed matter. Panelist Ashwini Jarral, a Strategic Advisor at IGIS, underscores how broad the current level of AI adoption already is. “AI is already being used in a lot of legal research, contract analysis, and in office operations,” Jarral explained. “Whether that’s in a small law firm or a large law firm, everybody can benefit from that automation with this AI.”

For many practices, that list maps directly onto the work that consumes lawyers’ time without always adding commensurate value: repetitive research steps, first-pass contract review, intake and scheduling, matter administration, and other operational tasks.

Historically, scale favored organizations that could hire more associates, paralegals, and support staff to push volume through the pipeline. Now, AI offers a different form of leverage: additional capacity without adding headcount. “It is definitely going to also benefit some of the small law firms who cannot actually afford the workforce,” Jarral said, adding that “AI can be an extension when it comes to the automation.” For a solo or small firm, that extension can show up as faster first-pass review of contracts, quicker summarization of records, more consistent intake workflows, and reduced time spent on repetitive back-office tasks.

At the same time, it is crucial to be clear-eyed about what is being automated. While AI can help deliver efficiency, it does not offer legal judgment itself. The legal profession still must decide, matter by matter, what level of review is required and what risks are acceptable.


“Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”


And that’s where implementation discipline becomes a strategic differentiator. Law firms that treat AI as a general-purpose shortcut tend to create risk, while firms that treat AI as a workflow component, with guardrails, review steps, and clear accountability, are more likely to capture value without compromising quality.

The non-negotiable: lawyers remain accountable

Any serious conversation about AI in legal practice must address these limits, panelists agreed. The Hon. Linda Kevins, a Justice on the Supreme Court in the 10th Judicial District of New York (Suffolk County), offered the most direct articulation of the boundary line: “Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”

Indeed, legal work is saturated with nuance. The same set of facts can carry different weight depending on jurisdiction, judge, forum, procedural posture, and the client’s goals and risk tolerance. Even when the law is clear, the right action often is not. To strive for true justice requires judgment about timing, framing, business consequences, reputational risk, and settlement dynamics. Those are not merely inputs for an AI to process — they are human decisions that define legal representation.

As the webinar made clear, this is the point at which responsible use becomes practical, not abstract. If AI is used for research support, contract analysis, or document review, lawyers need an explicit approach for verification and oversight. The outputs may look polished and may sound confident; however, confidence is not accuracy, and professional responsibility does not shift to a vendor or an AI model. Human review is not a ceremonial or perfunctory step, nor is it a formality. Rather, it is the core control that protects clients and the court, and it is the inflection point that turns AI from a novelty into a defensible tool.

In practice, the human in the loop means deciding in which instances AI can assist and in which it cannot. It also means reserving an attorney’s time for the decisions that carry legal and ethical consequences and building repeatable habits that prevent teams from drifting into overreliance on AI, especially under deadline pressure.

The legal profession can capture real benefits from AI, including speed, scalability, and improved access, but only if it adopts the technology in a way that preserves what Justice Kevins highlighted: training, nuance, and human accountability.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here.

The great AI disconnect: Firms and legal departments are not communicating about AI usage
/en-us/posts/technology/great-ai-disconnect/ (Wed, 18 Mar 2026)

Key insights:

      • There’s an AI awareness gap — Most corporate legal professionals do not know whether their outside legal counsel are using AI in handling their client matters, leaving both law departments and their firms in a state of AI uncertainty.

      • A potential upcoming billing model shift — Efficiencies from AI usage could have a major impact on how many law firms bill matters; value-based billing may need to replace or supplement hourly billing for matters in which AI is used.

      • Transparency builds trust — Lack of visibility and ROI measurement could erode trust between law departments and their outside counsel. Dialogue and measurements can strengthen the firm/client relationship and create scenarios in which both sides can reap the benefits of AI usage.


While the use of AI is increasingly widespread for both corporate legal departments and their outside law firms, there is a considerable lack of dialogue and data-sharing between the two sides on usage, guidelines, and expectations regarding AI. This complicates efforts to maximize the benefits of using AI, and it also may be eroding trust between the two sides.

Significant gaps in visibility and measurement

The Thomson Reuters Institute’s (TRI’s) 2026 AI in Professional Services Report found major gaps in visibility and measurement between law firms and legal departments. The survey found that more than half of law firm respondents said their organizations are currently using or considering using GenAI. And more than half of corporate legal professionals surveyed said they feel that their outside legal firms should use AI on their matters.

However, more than two-thirds (68%) of corporate legal professionals admitted that they currently have no idea if their outside law firms are using AI or not.


In addition, neither side is effectively measuring whether or to what degree their use of AI is improving the delivery of legal services. Indeed, 85% of law firm respondents and 75% of corporate legal department respondents said their organizations are either not collecting ROI data on AI usage or are unsure if they are doing so.

Is your organization measuring the ROI of AI tools?

These visibility and measurement gaps make it difficult for both sides to plan how AI can and should be used in handling client matters. It also raises questions about how potential efficiencies from AI use will affect related factors such as how much firms charge for their services and how much clients are willing to pay. Half of legal professionals surveyed said they feel that AI is either a major threat or somewhat of a threat to billings and law firm revenues. Not surprisingly, the industry continues to wrestle with how to balance efficiency gains from AI against the limitations of the hourly billing model.

Concerns of corporate law departments

For corporate law departments, the lack of AI usage visibility and ROI measurement is producing a wide variety of responses, ranging from mild but growing concern all the way to outright suspicion about how law firms are using AI on their clients’ behalf. Law department respondents said that while they generally trust their outside counsel to make the right decisions regarding AI use and maintaining quality, most departments have not yet had conversations on those issues with their law firms, including how AI use will affect billing.

“Billing has remained the same as it did before,” noted one corporate legal department attorney. “So, either they are not using AI tools efficiently, or they are just doing double work.”

One corporate CLO was far more blunt in their assessment, especially given the lack of detailed discussions or data from firms: “I fear that firms will use AI to cut time, but continue to bill for the hypothetical amount of time a task would have taken without it. It’s dishonest, but so are many firms.”

One encouraging note is that, according to TRI’s 2025 Future of Professionals Report, 56% of law firm respondents said they are highly or moderately confident in their ability to articulate the value of AI to their clients. Despite that confidence, the visibility gap illustrated in the 2026 AI in Professional Services Report indicates that law firms are not actually having those conversations with clients. Indeed, some corporate law department respondents suggested their outside counsel may be reluctant to discuss AI with them because of concerns about quality and accuracy. One even suggested that firms may feel threatened by AI.

More & better communication is needed

As difficult and complicated as discussions involving AI usage may be, they are also essential. Absent those discussions, trust between firms and clients may be eroding, potentially jeopardizing long-standing relationships.

Here are a few steps that both sides can take to build confidence around the use of AI:

For law firms —

    • Communicate with clients — Hold discussions with clients that allow firms to detail how AI is being or will be used in client matters. Solicit feedback from clients about when they would accept (or even demand) AI usage on different parts of a matter.
    • Develop an AI billing strategy — Determine not only how AI usage is impacting billable hours, but also how that will interact with the firm’s billing and pricing strategy.
    • Demonstrate and articulate value — Be prepared to explain billings in detail and answer client questions in terms of not only time and rates, but also value to the client. This includes not only the value that AI brings to the engagement, but also the value the firm provides above and beyond the technology, such as freeing up lawyers’ time to pursue value-added work.

For corporate law departments —

    • Lead the conversation, if need be — About three-quarters of both law firm and legal department respondents said it is the firm’s responsibility to initiate discussions around AI usage. However, corporate law departments should not wait for their outside firms to start the conversation. Take the initiative and make sure firms’ delivery models and fee structures are clear regarding AI usage.
    • Set expectations — Provide guidelines, expectations, or mandates on how and when AI will be used in handling client matters. This includes outlining specific use cases, data security protocols, and the human-in-the-loop oversight mechanisms that are used to ensure accuracy.
    • Build an external-facing metrics program — Law departments need to accurately measure the efficiency gains their outside firms are achieving to ensure that they, as the client, are receiving a fair price for value received. Establish baselines for how long various legal matters took historically and how much they cost, then compare those baselines against AI-enabled engagements to evaluate ROI and business impact, as in the sketch after this list. This also allows legal departments to more thoroughly explain those gains to their own stakeholders.
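
As a hypothetical illustration of what such a metrics program might compute, the sketch below compares a historical baseline for one matter type against an AI-enabled engagement. The matter type, hours, and rate are invented for the example; none of these figures come from the TRI report.

    # Hypothetical baseline-vs-AI comparison for an external-facing
    # metrics program. All figures are illustrative.

    def efficiency_gain(baseline_cost: float, ai_cost: float) -> float:
        """Percentage cost reduction relative to the historical baseline."""
        return (baseline_cost - ai_cost) / baseline_cost * 100

    # Historical baseline: average hours and blended rate for a matter type.
    baseline = {"matter": "commercial contract review", "hours": 40, "rate": 450}
    # The same matter type handled with AI-assisted workflows.
    ai_enabled = {"matter": "commercial contract review", "hours": 26, "rate": 450}

    baseline_cost = baseline["hours"] * baseline["rate"]    # $18,000
    ai_cost = ai_enabled["hours"] * ai_enabled["rate"]      # $11,700
    print(f"Baseline: ${baseline_cost:,}  AI-enabled: ${ai_cost:,}")
    print(f"Efficiency gain: {efficiency_gain(baseline_cost, ai_cost):.1f}%")

A department could extend the same comparison across matter types and quarters to produce the stakeholder-facing ROI reporting that, per the report, most organizations currently lack.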

For both corporate law departments and their outside counsel, it is imperative to engage in thorough discussions and develop data that can inform better decision-making. Such dialogue and measurements can strengthen the firm/client relationship and create scenarios in which both sides can reap the benefits of AI use.


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here.

AI case study for law professors: How to build complementary teaching tools /en-us/posts/legal/ai-law-professors/ Tue, 17 Mar 2026 13:30:24 +0000 https://blogs.thomsonreuters.com/en-us/?p=69996

Key insights:

        • Creating prototypes of IP-protected teaching tools — Law school faculty can build working AI teaching tool prototypes in one to two hours without IP worries, because key optional settings enable a closed system that keeps professors’ intellectual property protected.

        • Strong prompting skills create faster prototypes — The best prompts set the AI’s role, explain what the AI needs to accomplish, list which documents to reference exclusively, describe how the response should be formatted, and mention any applicable legal jurisdiction limits.

        • Feedback from students is positive — Students’ responses show AI simulators reduce anxiety and build confidence by providing unlimited low-stakes practice opportunities that make legal concepts more digestible through active dialogue rather than passive reading.


Law schools face a persistent challenge: how to provide individualized skills practice when one professor must serve many students. Traditional legal education offers limited opportunities for students to practice oral arguments, evidentiary objections, and witness examinations. Indeed, the repetition necessary to build authentic courtroom skills does not scale easily with law professors in the classroom alone.

To address this challenge, Prof. Alexandria Serra at the University of Missouri–Kansas City School of Law has built custom AI tools that simulate trial judges, three-panel appellate courts, witnesses, and evidentiary objection scenarios. Prof. Serra has seen firsthand how these tools give students unlimited, low-stakes practice opportunities that reduce their anxiety while building confidence in their legal reasoning and judgment.

Building your first AI learning tool, step by step

Creating custom AI teaching tools requires far less technical expertise than most professors would assume. As Prof. Serra explains, if you have a general idea of what you want the tool to accomplish, then “you can have a working prototype in less than two hours from idea to execution.”

The process begins with choosing a large language model (LLM) platform, such as ChatGPT, Claude, or Gemini, and securing a paid subscription, which most law schools will provide, she explains. During the sign-up process, optional settings enable a closed system to ensure law professors’ intellectual property is not shown to the students and is not used to train the LLMs.

[Photo: Prof. Alexandria Serra]

Next, gather class materials the professor has already created, including slides, case files, manuals, and practice problems. Then define one specific use case, such as an evidentiary objections practice tool, a Socratic method simulator, or a client interview assistant.

The building process itself takes about one to two hours and requires no coding skills. “You just start talking to the LLM like you are training a teaching assistant to do exactly what you want to do,” Prof. Serra adds.

Having built many tools, she highlights three critical components necessary for an efficient, useful, and flexible prototype. These include:

1. Prompting skills

Effective prompting is key to generating a good prototype. According to Prof. Serra, the ideal prompt defines the AI’s role (“You are a trial judge in a federal district court”), specifies the task the AI should deliver, identifies which documents to use exclusively, describes the desired output format, and includes any jurisdictional constraints.
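
To make those five elements concrete, here is a hypothetical prompt of the kind Prof. Serra describes, assembled in a short Python sketch. The wording and the objections-drill use case are assumptions for illustration, not taken from her actual tools.

    # Hypothetical system prompt built from the five elements Prof. Serra
    # lists: role, task, source documents, output format, and jurisdiction.
    prompt_parts = {
        "role": "You are a trial judge in a federal district court.",
        "task": ("Run an evidentiary-objections drill: present a courtroom "
                 "exchange, wait for the student to object, then rule on the "
                 "objection with a one-sentence explanation."),
        "documents": ("Rely exclusively on the uploaded Evidence course "
                      "slides and problem sets; do not use outside materials."),
        "format": ("Structure each round as: Exchange / Student objection / "
                   "Ruling / Explanation."),
        "jurisdiction": "Apply only the Federal Rules of Evidence.",
    }
    system_prompt = "\n".join(prompt_parts.values())
    print(system_prompt)

Pasting a prompt like this into an LLM platform’s custom-tool builder is, in essence, the “training a teaching assistant” conversation Prof. Serra describes.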

2. Multimodal features in AI tools

Most platforms allow for voice-activated chat mode, in addition to typing back and forth, which helps students respond out loud in real time. Custom AI tools also have shareable links, which enable easy deployment to students. Once students engage with the tool, they can send back a transcript of the interaction. Some platforms even allow shareable audio files so students can get feedback from their professors on skills performance, not just content.

3. Verifying reliability

Evaluating the quality of the AI output is important but naturally varies by use case. For classroom tools, Prof. Serra recommends deploying prototypes quickly and using students as testers. If the tool produces outputs with inaccuracies, she encourages students to bring these errors to class for discussion. That way, everyone learns how to critically diagnose problems with AI outputs. A variety of problems cause AI inaccuracies — the AI itself, poor prompting, incorrect legal reasoning, or incomplete training.

For wider deployment without the builder’s direct oversight, Prof. Serra recommends an extended period of testing and iteration. Her tool, MootMentorAI, which simulates a three-judge appellate panel for first-year law students preparing for oral argument, is one example. Because MootMentorAI was developed for use by a colleague, Prof. Serra worked with a research assistant to conduct 80 simulations over the course of a semester — 40 from the plaintiff’s perspective and 40 from the defendant’s perspective — to verify reliability and improve performance before deployment without her supervision.
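
The article does not show what that validation looked like in practice, but a minimal test log in the spirit of Prof. Serra’s 80-simulation process might resemble the sketch below. The run records and issue labels are invented for illustration.

    # Minimal sketch of a pre-deployment test log: simulations per side
    # are tallied and any flagged inaccuracies are queued for iteration.
    from collections import Counter

    simulations = [
        {"side": "plaintiff", "run": 1, "issues": []},
        {"side": "plaintiff", "run": 2, "issues": ["misstated holding"]},
        {"side": "defendant", "run": 1, "issues": []},
        {"side": "defendant", "run": 2, "issues": ["wrong citation format"]},
        # ...in practice, 40 runs per side over a semester
    ]

    runs_per_side = Counter(s["side"] for s in simulations)
    flagged = [s for s in simulations if s["issues"]]
    print(f"Runs per side: {dict(runs_per_side)}")
    print(f"Flagged for iteration: {len(flagged)} of {len(simulations)}")

Reviewing the flagged transcripts, revising the prompt, and re-running the drill is the iteration loop that builds confidence before a tool is handed off for unsupervised use.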

Overcoming adoption barriers among peers

Faculty resistance remains the most significant barrier to deploying AI-enabled teaching tools in legal education. “There’s lots of faculty pushback, distrust, and a healthy dose of skepticism with AI,” Prof. Serra acknowledges, arguing that even so, AI-powered tools are teaching assets for all law school courses. “Even in doctrinal classes that run on traditional Socratic dialogue, professors can still use AI to reinforce learning outside the classroom through tools, such as podcast-style lectures, a multiple-choice practice assistant, tools to enable issue-spotting, and essay practice tied to course fact patterns.”

Common concerns among law school faculty include confidentiality, intellectual property protection, fear of revealing exam content, and perceived lack of technical expertise. However, Prof. Serra points out that these fears often stem from her colleagues’ misunderstanding of how closed systems work. Indeed, if privacy settings are correctly deployed, uploaded materials will not be used to train public models and students cannot access source documents.

The most effective strategy for overcoming resistance is personal demonstration, she says, noting that she frequently sits down with colleagues virtually to build tools based on the colleague’s own use case. She has built everything from a startup CEO simulator for a business course, to an interview assistant for Career Services, to a simulated forensics expert for students to cross-examine. This grassroots approach, combined with speaking at conferences and identifying super fans who can champion the technology, gradually builds institutional buy-in, she adds.

Multifaceted student feedback

Student feedback has been overwhelmingly positive, with learners describing how AI simulators make legal skills training more accessible, more engaging, and less intimidating. In fact, students are often surprised by how convincingly AI tools can simulate judges, witnesses, and other real-world lawyering scenarios. They also appreciate having permission to use AI as a legitimate learning aid.

They also report that real-time interaction makes course concepts more digestible because these tools turn learning into an active dialogue rather than passively staring at a casebook. Finally, students say the simulators reduce anxiety before oral arguments or presentations by enabling unlimited, low-stakes repetition that builds confidence and keeps practice from feeling overwhelming.

Clearly, AI tools are quickly becoming essential learning infrastructure, and legal education cannot afford to treat them as optional add-ons if it expects to stay relevant. As a growing chorus of educators and employers warns that institutions must evolve, the real question is whether schools will build responsible, faculty-guided systems fast enough to meet students where the profession is headed.

When deployed thoughtfully, these platforms can scale individualized skills training, deepen engagement beyond the casebook, and build durable confidence that law students can carry into their future legal practice.


You can download a full copy of the Thomson Reuters Institute’s recent white paper here.
