Corporate Law Departments Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/corporate-law-departments/

The AI Law Professor: When AI quietly hijacks legal judgment
/en-us/posts/technology/ai-law-professor-first-draft-trap/
Wed, 08 Apr 2026 07:56:33 +0000

Key takeaways:

      • Anchoring distorts judgment before you begin — Research shows a first draft shapes subsequent decisions, and an AI draft is the most seductive anchor imaginable because it looks exactly like something a lawyer would write.

      • The First Draft Trap inverts legal training — The Socratic method builds the habit of holding multiple possibilities in tension before committing, but an AI first draft collapses that space before the real thinking begins.

      • The fix is to ask for the map, not the draft — Requesting multiple strategic framings before writing keeps judgment where it belongs and uses AI to expand possibilities rather than foreclose them.


Welcome back to The AI Law Professor. Last month, I examined why promised efficiency gains often become a cycle of work intensification. This month, I want to address a subtler challenge. I call it the First Draft Trap, and understanding it may change how you reach for AI the next time a new matter lands on your desk.

We have all heard the pitch: Staring at a blank page? Just prompt the AI. In seconds you have a working draft: structured, coherent, and surprisingly competent. The blank page problem, that ancient enemy of productivity, has thus been vanquished.

Except the blank page itself was never just an obstacle; rather, it was a space of possibility. For lawyers, it was the space in which the most important part of their work actually happens. Now, with AI in the mix, that may be changing.

Welcome to the First Draft Trap.

Simply put, the First Draft Trap is this: The moment you accept an AI-generated draft as your starting point, you have already made the most consequential decision of the entire project — and, crucially, you made it by not making it. You let the machine choose your direction, your framing, and your theory. Everything that follows is editing, and editing, no matter how rigorous, is not the same as thinking.

The cognitive hijack

There is solid psychology behind why this happens. Daniel Kahneman and Amos Tversky demonstrated in their landmark 1974 paper, "Judgment Under Uncertainty: Heuristics and Biases," that once people are exposed to an idea, this first impression becomes a mental anchor that distorts their subsequent judgments. In their experiments, subjects who watched a roulette wheel spin to a random number still let that number influence their estimates of completely unrelated quantities. The anchor held even when people knew it was meaningless.


Please join Tom Martin at the upcoming virtual conference on April 28–29. It’s virtual and completely free — two days of keynotes, panels, and workshops on AI and the legal profession.


An AI first draft is the most seductive anchor imaginable. It is not random — it is plausible, and it is well-organized. It sounds like something a lawyer would write. And that is precisely what makes it dangerous. You know intellectually that it is just one of many possible approaches to addressing the matter, but the anchor holds anyway.

That is the First Draft Trap at the cognitive level. The AI draft is not just one option you happen to prefer. It is a filter that prevents you from seeing the other options that were available to you, the roads you never even noticed that you did not take.

Consider what this means for a profession built on the opposite instinct. From the first day of law school, lawyers are trained to resist the obvious answer and to think like a lawyer. The Socratic method exists for exactly this reason. A good professor hears your confident response and asks: What else? What if the facts were different? What is the argument on the other side? The goal is not to arrive at an answer, per se. It is to build the mental habit of holding multiple possibilities in tension before committing to any one of them.

The First Draft Trap is the anti-Socratic method. It delivers a confident answer before you have even formulated the question properly — and instead of interrogating it, you polish it.

The value of the blank page

Think about what a senior partner actually does when a junior associate brings them a memo. The partner’s value is not better writing; rather, it is peripheral vision: The ability to see what the memo does not address, the argument not considered, or the framing that would land differently with this particular judge or this particular jury. That capacity to see beyond the document in front of them is why clients pay senior partners premium rates. And it is precisely the muscle that atrophies when your default workflow begins with the prompt “generate a draft.”


The AI draft is not just one option you happen to prefer. It is a filter that prevents you from seeing the other options that were available to you, the roads you never even noticed that you did not take.


The two-system framework popularized by Kahneman gives us a clean way to describe what is going wrong. System 1 is fast, intuitive, and pattern-matching, while System 2 is slow, deliberate, and analytical. The practice of law, at its best, is a System 2 discipline. We, as lawyers, are trained to override gut reactions, challenge assumptions, and think through consequences before acting.

In this way, the AI first draft feels like a System 2 output. It is structured, footnoted, and methodical. However, your decision to accept it as a starting point is pure System 1 — a fast, intuitive grab at the nearest plausible answer. You have used a sophisticated tool to bypass the sophisticated thinking the tool was supposed to support. That uncomfortable period of ambiguity, of not knowing which path is best, is where the real lawyering lives.

What to do instead

None of this means stop using AI. It means stop using AI to skip the hard part that matters.

Before you ever ask for a draft, ask for the map. Describe the matter or document you are working on, then ask the AI for three fundamentally different strategic framings for the problem. For each framing, request the strongest argument in its favor and its most serious vulnerability. Then ask which framing best fits the client’s goals, the audience, or the procedural posture. Close with a clear instruction: Do not write a draft yet.

That last instruction is the key. It keeps you in the driver’s seat during the phase that matters most. You are using AI to expand the possibilities before you prune them, not after. And, most importantly, it gives you the opportunity to think for yourself about other important possibilities and add them in.
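As a concrete illustration, the "ask for the map" workflow above can be captured in a small, reusable prompt template. This is a hypothetical sketch; the matter description, the framing count, and the exact wording are illustrative assumptions to adapt, not a prescribed formula:

```python
# A minimal sketch (not an official workflow) of the "map, not draft" prompt
# described above. The matter text and framing count are placeholders;
# adapt both before sending the prompt to any AI assistant.

MAP_NOT_DRAFT_TEMPLATE = """\
Matter: {matter}

Before any drafting, give me {n} fundamentally different strategic framings
for this problem. For each framing, state:
  1. The strongest argument in its favor.
  2. Its most serious vulnerability.
Then tell me which framing best fits the client's goals, the audience,
and the procedural posture.

Do not write a draft yet."""

def build_map_prompt(matter: str, n: int = 3) -> str:
    """Assemble the 'ask for the map' prompt for a given matter description."""
    return MAP_NOT_DRAFT_TEMPLATE.format(matter=matter, n=n)

prompt = build_map_prompt("Breach-of-warranty dispute over delayed software delivery")
print(prompt)
```

Keeping the closing instruction inside the template means the "do not draft yet" guardrail travels with every request, rather than depending on the user remembering to add it each time.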

In Kahneman’s terms, use AI to fuel System 2, not to hand the controls to System 1. Let the machine generate options, and exercise the judgment yourself.

For lawyers, the ability to see what is not there is the whole game.

Do not let the first draft blind you to it.


Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and author of a forthcoming book. He is “The AI Law Professor” and writes this eponymous column for the Thomson Reuters Institute.

Relationship-building and AI fluency key to closing visibility gap, new report shows
/en-us/posts/corporates/closing-ai-visibility-gap/
Mon, 06 Apr 2026 12:18:00 +0000

Key insights:

      • A significant visibility gap persists between legal departments and the C‑Suite — Most general counsel believe their legal department contributes strategically, yet senior executives often fail to see or understand that value.

      • Strong internal relationship‑building is critical (and often underdeveloped) — This capability enables legal teams to spot risks earlier, stay embedded in decision‑making, and make their work more visible across the business.

      • Closing the gap requires communicating legal’s value and increasing true AI fluency — For legal teams to be seen as proactive, strategic partners rather than task executors, communication and strong AI fluency are essential.


General counsel (GCs) have spent years doing more with less, tightening their legal spend, and aligning the law department’s priorities with the wider business. And yet, despite all of this effort, a striking visibility gap persists. While 86% of GCs believe their department is a significant contributor to overall organizational objectives, only 17% of the C-Suite agrees, according to a recent report from the Thomson Reuters Institute, which was based on more than 2,300 interviews with corporate general counsel. Meanwhile, 42% of C-Suite executives say the legal function contributes little or not at all to company performance.

The challenge for GCs is whether their staff have the skills and capabilities to make their work visible, relevant, and understood by the business at large. To address this perception gap in 2026, every GC needs to prioritize building richer internal relationships with business leads, moving from task-based to outcome-focused messaging, and improving the team’s collective AI fluency.

Empower teams to build internal relationships

Nearly half of all GCs surveyed for the report cited staffing and resource constraints as the top barrier to delivering additional value, a concern that has remained stubbornly consistent for years. Beyond headcount, the report underscores that the deeper challenge facing legal departments is relational.

Internal relationship-building is one of the most critical and underrated people skills in a legal department’s collective skill set. Indeed, 68% of GCs rate internal dialogue as their most valuable source of information about emerging risks. In fact, the most successful GCs use a deliberate combination of formal and informal methods to build connections with the internal business units that they serve.


You can learn more about how to assess your legal department’s strategic positioning with the Thomson Reuters Institute’s Value Alignment toolkit, here


Some run structured weekly face-to-face sessions with business departments, complete with schedules, plans, and frameworks. Others rely on walking the halls, open-door policies, and ad-hoc conversations that keep the corporate law department visible and accessible on a human level.

The report offers a five-dimensional framework to help GCs audit where, with whom, and how often legal is in dialogue with other parts of the business.


Use communication tactics that focus on business outcomes

Even when legal departments are doing excellent work, they often describe it in the wrong language. Many in-house lawyers categorize their contributions in task-based terms — such as “We support M&A” or “We analyze contracts” — rather than in value-creating terms.

Some in-house legal leaders have progressed to stakeholder-level framing, such as, “We protect the company from competitive threats” or “We support new business opportunities.” Still, neither of these levels truly communicates value to a C-Suite audience, the report shows.

To effectively align the law department’s priorities with business goals, in-house attorneys need to develop the skill of communicating through a business lens. For example, one GC states that the primary goal of the law department is to “find the fastest and most compliant way for the sales department to sell products.” This response reframes the legal function’s activities in much more business-fluent, value-added terms.

Legal teams are not always good at touting their accomplishments, however, and this is a challenge when a lot of the work can be categorized as invisible. For example, when protecting the company is done right, threats are eliminated before they occur and no one notices. When efficiency is unlocked through process improvement, the C-Suite only sees the outcome if someone connects the dots explicitly. This is why surfacing invisible value is now a business imperative for corporate law departments.

Advancing from AI literacy to AI fluency

The most significant skills challenge facing legal departments in 2026 is how to best use AI strategically. Mentions of AI as a strategic priority among GCs have doubled in the past year, according to the report. In fact, almost half of all GCs now reference AI in their survey interviews. Yet the report draws a sharp distinction between being AI literate and being AI fluent, with most departments being the former but not the latter.

To close that gap, the report recommends a six-layer model covering learning, empowerment, ownership, accountability, usage, and expectations.


At its core, the model asks GCs to start with open encouragement and access to AI tools to build momentum, then shift toward more formal expectations around adoption to make AI use a daily habit.


You can download a full copy of the Thomson Reuters Institute’s report here

Honing legal judgment: The AI era requires changes to how lawyers are trained during and after law school
/en-us/posts/legal/honing-legal-judgment-training-lawyers/
Thu, 02 Apr 2026 15:36:44 +0000

Key takeaways:

      • AI threatens traditional lawyer development — As AI automates entry-level legal tasks like research and writing that historically have honed legal judgment, the profession faces a crisis in how new lawyers will develop such judgment abilities.

      • The profession can’t agree on what constitutes “legal judgment” — Unlike other professions, there is no agreed-upon definition of legal judgment or clear standards for when AI should be used.

      • Implementation requires unprecedented coordination and funding — A proposed legal education fund would be supported by a small percentage of legal services revenue and would require coordinated action across law schools, legal employers, and state regulators.


This is the second of a two-part blog series that looks at how lawyer training needs to evolve in the age of AI. The first part of this series looked at how lawyers can keep their skills relevant amid AI utilization.

The key skills that make up legal judgment received mixed reviews, according to a recent white paper from the Thomson Reuters Institute that advocates for cultivating practice-ready lawyers. The white paper was based on feedback from thousands of experienced lawyers, judges, and law students and raises questions about how legal judgment forms when AI assistance is used for task completion.

Legal industry analyst Jordan Furlong notes that the report calls for efforts “to accelerate the development of legal judgment early in lawyers’ careers.”

The challenge is that each part of the profession — law schools, employers, state supreme courts (as regulators) — has distinctly separate responsibilities. That means that in the age of AI, coordination across the entire legal profession is needed, especially as AI reduces the availability of traditional first jobs.

Furlong points out that there is no consensus on what legal judgment is, nor any agreed-upon standards for when AI should be used in legal work. To bring clarity to these issues, the white paper proposed a profession-wide model that integrates three critical elements: i) work-based learning modeled on medical residencies; ii) micro-skill decomposition of legal judgment; and iii) AI-as-thinking-partner throughout pedagogy.

Three pillars for an AI-era lawyer formation system

Not surprisingly, overreliance on AI can erode critical analysis and solid legal judgment skills. Addressing these concerns requires a comprehensive reimagining of how lawyers are educated and trained. One solution lies in three interconnected pillars that together form a cohesive system for developing legal judgment in an AI-integrated world.

Pillar 1: Integrate work experience into legal education

Core skills such as legal research, writing, and document review help develop legal judgment; yet these skills could collapse once AI assumes such tasks. The Brookings Institution recently proposed measures to preserve entry-level professional development in an AI era. This parallels the TRI white paper’s call for mandatory supervised postgraduate practice as a key part of legal licensure.

While implementing a full residency model presents challenges, several law schools have already pioneered approaches that demonstrate the viability of work-integrated legal education that, if scaled appropriately, could improve new lawyer practice and judgment skills. For example, Northeastern Law School guarantees all students practice experience before graduation through four quarter-length legal positions. The program integrates supervised practice into the curriculum so graduates can gain substantial hands-on experience alongside their classroom instruction.

Similarly, another pioneering program offers an alternative pathway to bar admission through practice-based assessment rather than the traditional bar exam. The program demonstrates that competency can be evaluated through supervised experiential learning.

Pillar 2: Decompose legal judgment into teachable micro-skills

The legal profession needs to come to a common definition of legal judgment and develop its components to teach the concept effectively. “We can’t teach what we can’t describe,” Furlong says. To develop legal judgment, the profession must define its components, including:

      • Pattern recognition — The ability to identify when different fact patterns are related to similar legal frameworks and distinguish when superficially similar cases are legally distinct.
      • Strategic calibration and proportionality — This means understanding what level of effort, precision, and risk each matter requires and matching responses to the stakes involved.
      • Reasoning through uncertainty — This is the capacity to make defensible decisions and provide sound counsel even when the law is ambiguous, unsettled, or silent on an issue.
      • Source evaluation and authority weighting — This includes knowing which legal authorities are most suitable and being able to assess their persuasive value.
      • Ethical judgment under pressure — This means spotting conflicts, confidentiality issues, and duty-of-candor moments while maintaining competence and knowing when to escalate beyond expertise.

Breaking down legal judgment into these discrete components makes it possible to design targeted teaching interventions. For example, one commentator, a former law professor, suggests baking safeguards into AI-assisted workflows by requiring a short verification log (detailing sources checked, changes made, and why); running attack-the-draft drills (find missing authority, weak inferences, and jurisdictional mismatches); and preserving slow work as formative work (citation chaining, updating, and adversarial research memos).

With judgment skills clearly defined and work experience integrated into training, the profession must then tackle how AI itself should be incorporated into lawyer development.

Pillar 3: AI-as-thinking-partner throughout a lawyer’s career

Warnings about lawyers’ overreliance on AI are mounting. The legal profession must provide clear standards for when and how AI should be used, along with training in verification and judgment skills. Overreliance on AI could compromise lawyers’ capacity to fulfill their fiduciary duties to clients.

A phased approach to introducing AI into legal work helps protect critical thinking while building AI competency. For example, in Year 1, law students could complete core legal reasoning exercises without AI assistance in order to develop their analytical muscles. In Year 2, students could use AI as a research assistant with mandatory verification protocols that teach them to check outputs against authoritative sources. Finally, in Year 3, residencies could immerse students in real-world AI workflows under proper supervision and with regular feedback.

These three pillars form a coherent vision for lawyer formation in the AI era. However, the most well-designed system faces the obstacle of funding.

The challenge of who pays

Perhaps the most difficult part of any overhaul is the cost. The medical residency model works because graduate medical education is publicly subsidized — up to $15 billion-plus annually — for teaching young medical students to be doctors. Legal education has no equivalent. Without addressing funding, however, even the best reforms will fail.

One idea is to establish a legal education fund that’s supported by an assessment of a small percentage of the legal industry’s gross legal services revenue (while exempting solo practitioners and firms with less than $500,000 in annual revenue). These funds could be used to subsidize thousands of supervised residency placements, fund law school curriculum development, support bar exam alternative assessments, and provide employer training and supervision stipends.


The challenge is that each part of the profession — law schools, employers, state supreme courts — has distinctly separate responsibilities, and that means coordination across the entire legal profession is needed.


This proposal, of course, would require unprecedented coordination and financial commitment from the legal profession. Skeptics might argue that market forces can solve this problem, or that firms will simply create new training pathways, or that AI will prove less disruptive than feared. However, waiting for market forces risks a lost generation of lawyers. The medical profession already learned this lesson when voluntary reform failed. Only later did coordinated regulatory intervention produce the consistent quality standards the medical industry sees now.

What is clear is that inaction is resulting in degradation of lawyering skills. “Maybe… we need catastrophic external intervention to bring about the wholesale changes we can’t manage from the inside,” Furlong suggests.

However, the question is whether the legal profession will wait for a crisis to force change or act proactively to make the needed changes now, before the crisis hits.


You can learn more about the impact of AI on professional services organizations at TRI’s upcoming 2026 Future of AI & Technology Forum here

The 4 Plates: Are you measuring the real value of AI in your legal department?
/en-us/posts/corporates/4-plates-measuring-efficiency/
Wed, 01 Apr 2026 13:15:21 +0000

Key takeaways:

      • Efficiency is a means, not an end — Gains from AI only count when you can show what they enabled: better advice, stronger protection, smarter business support.

      • Narrow measurement invites cuts — Legal departments that measure AI value only through cost savings are telling C-Suites that legal costs less, thereby inviting budget and headcount reductions.

      • Measure across all four plates — A framework that captures effectiveness, risk, and enablement alongside efficiency is what shifts perception of the legal department from cost center to strategic asset.


Your legal department has invested in AI tools, adoption is growing, your team is saving time on routine work and, by most accounts, work operations are running faster. Then your CFO asks a simple question: What has AI delivered for the legal department?

If your answer centers on hours saved and cost reduced, you are not alone. However, you may be leaving your most important value story untold. And in a climate in which legal departments are under more scrutiny than ever to demonstrate the full return on their AI investment, that gap matters.

This is the fourth and final part of our series on the “Four Spinning Plates” model, which frames the GC’s evolving responsibilities as:

      1. delivering effective advice
      2. operating efficiently
      3. protecting the business, and
      4. enabling strategic ambitions.

This article focuses on the Efficient plate and specifically on the risk of letting it do too much of the talking.


The Efficient plate under pressure

For a GC, making the best use of what are often limited resources is a constant pressure. The Efficient plate sits alongside, not above, the other three plates and must always be kept spinning. Right now, however, for many in-house legal teams the Efficient plate is receiving disproportionate attention, and for understandable reasons.

AI adoption in corporate legal departments is accelerating quickly. According to the Thomson Reuters Institute’s AI in Professional Services Report 2026, nearly half (47%) of corporate legal respondents surveyed said their department has already integrated generative AI (GenAI) into their work — more than double the figure from the previous year. A further 18% reported that they’re already using agentic AI, with more than half expecting agentic AI to be central to their workflow within the next two years.

GCs are genuinely excited about what this makes possible. As one GC said in the survey that underpinned the AI in Professional Services Report: “It presents the promise of getting out of low-value work and into higher-value work that supports the business.” Another described their vision of a legal department that is “boldly digital-first, relentlessly innovative, and tightly woven into business priorities.”

Clearly, the opportunity is real, but so is the risk of measuring it badly.

The measurement trap

Our 2026 research found that only one-quarter of legal departments are currently measuring the ROI of their AI tools. That alone is striking given the pace of adoption, but the follow-up finding is where the real problem lies — of those departments that are measuring ROI, 80% are tracking it in terms of internal cost savings.

Reducing external spend, automating high-volume processes, and bringing more work in-house are all legitimate efficiency gains and worth reporting, of course. However, when cost reduction becomes the only story being told, two things can happen. Your C-Suite learns to associate your department’s value with how little it costs, a frame that is very difficult to escape once it’s established. And the wider value that efficiency enables in terms of sharper risk identification, faster business support, and higher-quality advice goes unmeasured and therefore unrecognized.


If your metrics only capture time saved and cost reduced, and not what that freed-up capacity actually delivered, you are measuring the means and ignoring the end.


Think about what GCs themselves say they want from AI. As several GCs said in the survey, they’re hoping AI will provide them with “better output on more meaningful tasks,” “proactive, strategic insight,” and a way of “getting out of low-value work.” These are not efficiency outcomes, per se; rather, they are effectiveness, protection, and enablement outcomes, made possible by improved efficiency.

So, if your metrics only capture the input (time saved, cost reduced) and not what that freed-up capacity actually delivered, you are measuring the means and ignoring the end. This is the efficiency trap — measuring the plate so narrowly that it starts to work against you.

Reframing how you measure efficiency

Measuring efficiency well does not mean measuring it more. It means measuring it differently, and always in relation to the business you support. A few principles worth applying include:

Present spend in a business context — Legal spend as a percentage of company revenue tells a more credible story than a raw cost figure. It scales with the business and can be benchmarked meaningfully against peers.

Show what technology investment actually delivered — Time saved through automation is a useful starting point, but the stronger case is what the team did with that time. Tracking the shift from routine to strategic work over time is a far more compelling ROI story.

Connect efficiency gains to business outcomes — An efficiency gain that enabled a faster product launch, prevented a compliance risk, or improved stakeholder satisfaction has a value that no cost metric will capture. Build those connections explicitly into how you report the value of the legal department to the C-Suite.
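To make the first two principles concrete, here is a minimal, hypothetical sketch of the two numeric framings: spend as a share of revenue, and the routine-to-strategic shift in how the team's hours are spent. All figures are invented for illustration and are not benchmarks from the report:

```python
# Illustrative only: hypothetical figures showing two of the reporting
# principles above. None of these numbers are real benchmarks.

def legal_spend_pct(legal_spend: float, company_revenue: float) -> float:
    """Legal spend expressed as a percentage of company revenue."""
    return 100.0 * legal_spend / company_revenue

def strategic_shift(routine_before: float, strategic_before: float,
                    routine_after: float, strategic_after: float) -> float:
    """Percentage-point change in the share of hours spent on strategic work."""
    before = strategic_before / (routine_before + strategic_before)
    after = strategic_after / (routine_after + strategic_after)
    return 100.0 * (after - before)

# Hypothetical department: $4M legal spend against $500M revenue -> 0.8% of revenue
print(f"Spend: {legal_spend_pct(4_000_000, 500_000_000):.1f}% of revenue")

# Hours move from 700/300 routine/strategic to 500/500 -> +20 percentage points
print(f"Strategic shift: {strategic_shift(700, 300, 500, 500):+.0f} percentage points")
```

The point of the second metric is the one the article makes: the ratio of strategic to routine work, tracked over time, tells the C-Suite what the freed-up capacity delivered, which a raw cost-savings figure never can.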

New resources to help

To support GCs in getting this right, the Thomson Reuters Institute has added two new resources to its Value Alignment Toolkit that directly address this measurement gap.

The Metrics Library brings together more than 100 metrics organized across all four spinning plates. It is a practical starting point for GCs to browse, select, and adapt to the specific goals of their departments, making it easier to build a measurement framework that reflects everything departments do, not just the part that appears in a budget line.

The AI Success Metrics guide addresses the AI measurement gap head-on with a best practice guide and a hands-on worksheet designed specifically for legal departments navigating AI adoption and asking: How do we actually know whether this is working? It looks beyond cost savings to capture the fuller picture of AI value including quality, capacity, strategic contribution, and risk.

Getting the balance right

In today’s environment, every GC needs to consider their answer when their C-Suite asks what the legal department delivers. Are your department’s metrics giving them the full answer or just the part that’s easiest to count?

Efficiency is not the enemy of strategic value. A department that runs well, uses its resources wisely, and embraces technology thoughtfully can in turn create the conditions for everything else the business needs from its legal function. However, that case only lands if your metrics measure across all four plates, not just one.


You can explore the new Metrics Library and AI Success Metrics guide, along with the full Thomson Reuters Institute’s Value Alignment toolkit here

Helping the legal profession get AI-ready: A new advisory board takes shape
/en-us/posts/legal/ai-advisory-board/
Thu, 26 Mar 2026 11:31:32 +0000

Key insights:

      • AI is already reshaping the legal profession — AI is already embedded in lawyers’ day-to-day legal work: a significant share of both law firm attorneys and in-house legal teams are actively using GenAI tools, and many expect it to become central to their work within the next five years.

      • AIFLP Advisory Board was formed to prepare lawyers for an AI-reshaped profession — TRI convened 21 respected leaders from legal education, private practice, the judiciary, and AI ethics and governance to help ensure lawyers and law students are prepared for a profession reshaped by AI.

      • Human judgment remains central in an AI-enabled legal future — Becoming AI-ready is not simply about learning to use new tools; the Advisory Board emphasizes that strengthening irreplaceable human capabilities is critical.


In today’s tech-driven environment, AI is no longer a future concept for the legal profession — it’s already here, and it’s changing how lawyers work, learn, and serve clients. Recognizing just how fast the evolution is moving, the Thomson Reuters Institute (TRI) has launched the AI and the Future of Legal Practice (AIFLP) Advisory Board, bringing together a group of respected leaders from across the legal ecosystem to help guide what comes next.

The board includes 21 accomplished voices from legal education, private practice, the judiciary, and AI ethics and governance. Their shared goal is simple but ambitious: Help ensure that both today’s lawyers and tomorrow’s law students are prepared for a profession being reshaped by AI.

Why now?

Because the shift is already underway. According to TRI’s recent 2026 AI in Professional Services Report, 41% of law firm attorneys say their organizations are already using some form of generative AI (GenAI); and nearly half of those at corporate legal departments report that AI tools are being rolled out there too. Even more telling, most professionals said they expect GenAI to become central to their day‑to‑day work within the next five years.

That pace of change raises big questions about competence, ethics, education, risk, and access to justice. And those questions don’t have easy answers.

What the Advisory Board will focus on

The AIFLP Advisory Board is designed to tackle those challenges head‑on. Its work will center on four key areas that are already under pressure as AI adoption accelerates:

      • Legal education and talent development
      • Ethics, professional competence, and accountability
      • Governance, risk management, and client counseling
      • Access to justice and modern service delivery

The Advisory Board’s early focus areas will look at how AI is actually changing legal practice today, what future‑ready lawyers really need to know, and how legal education and real‑world practice can better align. The emphasis is not just on using AI tools, but on strengthening the human skills that matter most, such as sound judgment, critical thinking, and careful verification of AI‑generated outputs.

Shaping the future, not reacting to it

Citing the critical need for this Advisory Board’s creation, Mike Abbott, Head of the Thomson Reuters Institute, notes that the legal profession is at a crossroads, and it can either react to AI‑driven disruption or take an active role in shaping how these technologies are used to support lawyers, courts, and the public.

“By assembling a board of distinguished leaders, our goal is to help practicing lawyers and the lawyers of the future navigate a rapidly evolving landscape,” Abbott said. “Ensuring that legal education strengthens irreplaceable skills such as critical thinking, human judgment and effective communication helps make AI use safe and effective. The Board’s efforts will ultimately help shape a future-ready profession, leading to better outcomes for all.”

Meet the AIFLP Advisory Board Members

By convening experienced leaders from across the profession, TRI hopes to help lawyers navigate this landscape with confidence. Advisory Board Members include:

      • Michael Abbott, Head of the Thomson Reuters Institute
      • Soledad Atienza, Dean, IE Law School
      • The Honorable Jennifer D. Bailey, (Ret.), Partner, Bass Law
      • Benjamin Barros, Dean, Stetson University College of Law
      • Professor Sara J. Berman, University of Southern California, Gould School of Law
      • Megan Carpenter, Dean Emeritus, University of New Hampshire Franklin Pierce School of Law
      • Ronald S. Flagg, President, Legal Services Corporation
      • Donna Haddad, AI Ethics and Governance expert, and founding member, IBM AI Ethics Board
      • Johanna Kalb, Dean and Professor of Law, University of San Francisco School of Law
      • The Honorable Nelly Khouzam, Florida Second District Court of Appeal
      • The Honorable William Koch, Dean, Nashville School of Law, and former Tennessee Supreme Court Justice
      • Sheldon Krantz, retired partner, DLA Piper, and a founder, DC Affordable Law Firm
      • Stefanie A. Lindquist, Dean, School of Law, Washington University in St. Louis
      • The Honorable Mark Martin, Founding Dean and Professor of Law, Kenneth F. Kahn School of Law at High Point University, and former Chief Justice, Supreme Court of North Carolina
      • Caitlin (Cat) Moon, Professor of the Practice and founding co-director, Vanderbilt AI Law Lab, Vanderbilt Law School
      • Hari Osofsky, Myra and James Bradwell Professor and former Dean, Northwestern Pritzker School of Law; Founding Director, Northwestern University Energy Innovation Lab; and Founding Director, Rule of Law Global Academic Partnership
      • Joanna Penn, Chief Transformation Officer, Husch Blackwell
      • The Honorable Morris Silberman, Florida Second District Court of Appeal
      • The Honorable Samuel A. Thumma, Arizona Court of Appeals, Division One
      • Mark Wasserman, Partner and CEO Emeritus, Eversheds Sutherland
      • Donna E. Young, Founding Dean, Lincoln Alexander School of Law, Toronto Metropolitan University

What’s next?

The Advisory Board held its first meeting in February and will meet quarterly going forward. As the work progresses, TRI plans to publish research findings, best practices, and practical recommendations for legal educators, law firms, and courts.

In a profession built on precedent and careful reasoning, the rise of AI presents both opportunity and responsibility. The AIFLP Advisory Board is an effort to make sure the legal community meets that moment thoughtfully and on its own terms.


You can learn more about the impact of advanced technology on the legal profession here

Honing legal judgment: How professional acumen & fiduciary care can keep lawyers relevant in the age of AI /en-us/posts/legal/honing-legal-judgment-keeping-lawyers-relevant/ Wed, 25 Mar 2026 14:21:08 +0000 https://blogs.thomsonreuters.com/en-us/?p=70071

Key highlights:

      • Lawyers excel at semantic legal work while AI excels at syntactic tasks — Syntactic work (document generation, pattern recognition) is where AI excels, but semantic work, which involves exercising independent judgment, reflecting on consequences, and fulfilling fiduciary duties, remains uniquely human.

      • Fiduciary duty as the core of legal relevance — What distinguishes lawyers isn’t just what they do, but how and why they do it. The fiduciary relationship demands a human understanding of context, balancing competing interests, recognizing unstated concerns, and exercising discretion.

      • 5 hours to deepen or diminish — The five hours lawyers are expected to gain each week by using AI can either accelerate professional obsolescence or deepen lawyers’ relevance, depending on what they do with it.


This is the first of a two-part blog series that looks at how lawyers can keep their skills relevant in the age of AI

Lawyers expect to gain a full five hours per week of worktime due to the efficiency derived from AI use, according to the Thomson Reuters 2025 Future of Professionals Report. Yet the fear of job loss among lawyers is rising, as the share of those viewing AI as a threat or somewhat of a threat grew to almost two-thirds (65%) of those surveyed, according to the Thomson Reuters Institute’s 2026 AI in Professional Services Report.

Many in the legal profession are asking how lawyers are uniquely valuable at a time when machines can process legal information faster and cheaper. The answer lies in understanding the difference between what AI does in processing legal information and what humans do in exercising legal judgment, says Kevin Lee, Founding Director of the Institute for AI & Democratic Governance.

Defining 2 levels of legal work

Understanding what makes lawyers particularly meaningful in this current AI moment requires distinguishing between two different levels of legal work, at a time when AI-enabled information systems are compressing humanity and legal judgment into data points and draining away the storytelling and moral nuance that ground both. According to Lee, these levels are the syntactic and the semantic:

      • Syntactic — Lawyers process information, generate documents, and recognize patterns at the syntactic level, meaning those tasks in which AI excels and delivers promised efficiency gains. “The danger is that we will use this efficiency merely to generate more syntactic volume,” Lee explains, adding that this would mean simply processing more documents at greater speed. “If we do that, we will have automated ourselves out of a profession.”
      • Semantic — The semantic aspect of lawyering highlights the irreducible skills of the legal practice, which include exercising independent legal judgment, reflecting on consequences, demonstrating care for clients, and fulfilling fiduciary duties.

This distinction between the syntactic and semantic levels is inherent in the very definition of the practice of law, Lee says, pointing out that many jurisdictions distinguish between “providing legal information” (not practicing law) and “exercising independent legal judgment” (the essence of legal practice).

He also rightly contends that the existential risk facing lawyers lies not in AI completing legal tasks, but rather in the temptation to reduce lawyers’ role to verifying machine output and processing legal information. Conflating these two concepts is a challenge for the legal profession, one that requires a deeper appreciation for the craft of legal reasoning and judgment.

Kevin Lee, Founding Director of the Institute for AI & Democratic Governance

Making this more difficult is that the current information age complicates this picture by challenging society’s assumptions about reality, consciousness, and the moral meaning of human life — all at an exponential rate, Lee says. Similarly, AI and information systems threaten to reduce everything, including human beings and law itself, to processable data by stripping away the narratives and meanings that define humanity, he adds.

Semantic qualities of legal judgment

The question of what makes lawyers especially relevant in the AI era is mainly answered in how and why they do what they do, rather than in what they do. For example, Lee points to skills around executing their fiduciary duty and ensuring legitimacy and meaning as key characteristics of lawyers’ semantic qualities.

Fiduciary duty — When a client seeks legal counsel, it’s legal judgment — not information processing — that the client wants. Lawyers, as part of their fiduciary duty to their clients, demonstrate human and legal understanding of the unique context of each case and the consequences of various legal paths forward. This bond of trust between attorney and client demands reflection, consideration, care, and proper purpose.

The fiduciary duty of the lawyer to the client requires balancing competing interests, recognizing unstated concerns, and exercising discretion in ways that honor both the letter and spirit of the law. At the heart of this balance is legal reasoning and professional judgment, which often involves navigating the critical gap between legal rules as written and their meaningful application to human circumstances.

Legitimacy and meaning — Beyond the fiduciary duty of care exercised in individual client relationships, lawyers serve a broader purpose in safeguarding law’s connection to the narratives of justice and human dignity that legitimize its authority. Indeed, lawyers maintain the connection between law and its humanistic foundations; the narratives that give legal authority its legitimacy depend on that connection. “The artwork that one associates with the law (in law schools and courtrooms) connects actions and legal judgment of attorneys to the mythic meaning of justice, equality, and the rule of law,” Lee explains.

How to deepen appreciation for the special relevance of lawyers

The five hours that lawyers said they expect to gain each week through AI-driven efficiency represents a choice point for the profession. These hours can either accelerate lawyers’ obsolescence or deepen their relevance. To ensure the latter, Lee advises lawyers and legal institutions to examine ways to put those hours to good use by, for example:

Collaborating on apprenticeships — Bar associations, practicing lawyers, legal service providers, and law schools should consider apprenticeship models that teach professional norms and values through mentorship, allowing law students to learn the craft of legal reasoning through guided practice.

Recommitting more fully to legal service — Law firms and in-house counsel must reclaim humanistic awareness as central to their professional identity. The efficiency gains from AI should be reinvested into semantic work, which includes counseling clients, exercising moral judgment, and fulfilling fiduciary duties with greater care and reflection.

Improving legal education — Law schools must return to the humanistic formation of lawyers, echoing the pre-2007 vision of legal education, before economic pressures reduced it to producing commercially exploitable graduates. In addition, AI ethics must be integrated systematically across the curriculum into doctrinal courses rather than being confined to elective courses.

Looking ahead

The five hours gained through AI represent a defining choice for the legal profession. The special relevance of lawyers in the AI age lies precisely in the human components and semantic aspects of lawyering.


In the concluding part of this blog series, we look at how the legal profession needs to rethink how it trains lawyers in order to prevent AI from eroding legal judgment skills

2026 State of the Corporate Law Department Report: GCs align strategy to corporate imperatives, but C-Suites want more /en-us/posts/corporates/state-of-the-corporate-law-department-report-2026/ Tue, 24 Mar 2026 12:09:01 +0000 https://blogs.thomsonreuters.com/en-us/?p=70047

Key takeaways:

      • Disconnect between legal departments and C-Suite perceptions — While many general counsel believe their departments are significant contributors to business success, most C-Suite executives do not share this view. Fully 86% of GCs say they believe their department is a significant contributor, but only 17% of C-Suite executives agree.

      • A need to find new ways to demonstrate value — Legal departments are under increasing pressure to do more with less, as nearly half of GCs surveyed cite staffing and resource constraints as their top barrier to delivering additional value. Despite these limitations, expectations from the C-Suite continue to rise.

      • AI adoption accelerates, business strategy comes next — Legal departments are rapidly embracing technology to improve efficiency, manage resources, and address cost pressures. Not surprisingly, the proportion of GCs calling AI a strategic imperative has doubled.


Over the past several years, general counsel and corporate law departments at large have transformed their operations. Many have become more efficient enterprises, leveraging technology, in particular AI, at an increased pace. GCs have adjusted their hiring practices to conform with the modern corporation, taking new ways of working into account. And they have embraced data-driven decision-making, evaluating outside counsel and their own operations alike with a wider suite of new metrics and KPIs.

But do you know who hasn’t yet realized the fruits of that labor? The corporate C-Suite.


The 2026 State of the Corporate Law Department Report, released today by the Thomson Reuters Institute, reveals a disconnect between how GCs and their corporate law departments view their own alignment to the wider business, and what C-Suite executives believe the legal department contributes. Within this gap, the message is clear: GCs not only need to align with their organizations’ overall business strategy, they need to learn how to prove that alignment to the rest of the company.

Indeed, when asked how they view legal’s contribution to the rest of the business, 86% of GCs surveyed said they viewed the legal function as a significant contributor. However, only 17% of other C-Suite executives said the same — and 42% said legal contributes little or not at all.


As the report explains, this disconnect lays the groundwork for the tension facing many GCs today. While GCs are increasingly aiming to align with business standards, the rest of the organization is not recognizing those efforts. Instead, many C-Suites are looking for even more from today’s legal departments to prove their contributions to organizations’ business imperatives.

As in past years, many in-house legal departments are being tasked to do more with less. Nearly half of GCs cited staffing and resource constraints as the top barrier they face to delivering additional value. Indeed, many said they expected outside counsel spend in some key areas — such as regulatory work and mergers & acquisitions — to remain high. As of the fourth quarter of 2025, more than one-third (36%) of GCs said they expect to increase overall spend on outside counsel over the next year, while only 20% said they plan to decrease their spend.


Despite legal departments’ gains, their C-Suites are looking for them to take the next step, turning operational excellence into business success.


Not surprisingly, many GCs said they view technology as one of the primary ways they have to combat these resourcing and cost issues. In fact, the proportion of GCs mentioning technology as a strategic priority entering 2026 doubled over the year prior. Legal departments have begun to feel positive effects of AI in their own organizations, the report notes, such as increased efficiency or time freed up for strategic work.

Despite these gains, C-Suites are looking for their legal functions to take the next step, turning operational excellence into business success. This can take a number of different forms, such as explicitly tying advice to client business objectives, presenting legal spend in the context of the business by showing it as a percentage of revenue, or approaching risk management with the goal of aiding business imperatives. “When we have a risky legal subject, the company never prefers just to see the legal opinion,” said one retail GC. “They’re also requesting you to drive them how to make a decision.”

AI and technology should also be approached in this same way, the report argues. Although almost half of all corporate legal departments have some type of enterprise-wide GenAI tool, according to the survey, very few are collecting success metrics around AI’s implementation or linking its use to business revenue. Put a different way, many legal departments are focused on unlocking capacity, rather than deploying capacity in a business-centric way — much to the chagrin of their C-Suites.


Although legal departments have established a solid foundation upon which a business can stand, ultimately, C-Suites don’t want just a foundation. They want help building the entire house, the report shows, directly enabling the services that companies provide to customers. In that, GCs and legal departments have more work to do, not only tying strategy to overall business initiatives but actively communicating how the legal function’s work aids the company as a whole.


You can download a full copy of the Thomson Reuters Institute’s 2026 State of the Corporate Law Department Report here

The great AI disconnect: Firms and legal departments are not communicating about AI usage /en-us/posts/technology/great-ai-disconnect/ Wed, 18 Mar 2026 13:39:56 +0000 https://blogs.thomsonreuters.com/en-us/?p=70004

Key insights:

      • There’s an AI awareness gap — Most corporate legal professionals do not know whether their outside legal counsel are using AI in handling their client matters, leaving both law departments and their firms in a state of AI uncertainty.

      • A potential upcoming billing model shift — Efficiencies from AI usage could have a major impact on how many law firms bill matters; value-based billing may need to replace or supplement hourly billing for matters in which AI is used.

      • Transparency builds trust — Lack of visibility and ROI measurement could erode trust between law departments and their outside counsel. Dialogue and measurements can strengthen the firm/client relationship and create scenarios in which both sides can reap the benefits of AI usage.


While the use of AI is increasingly widespread for both corporate legal departments and their outside law firms, there is a considerable lack of dialogue and data-sharing between the two sides on usage, guidelines, and expectations regarding AI. This complicates efforts to maximize the benefits of using AI, and it also may be eroding trust between the two sides.

Significant gaps in visibility and measurement

The Thomson Reuters Institute’s (TRI’s) 2026 AI in Professional Services Report found major gaps in visibility and measurement between law firms and legal departments. The survey found that more than half of law firm respondents said their organizations are currently using or considering using GenAI. And more than half of corporate legal professionals surveyed said they feel that their outside law firms should use AI on their matters.

However, more than two-thirds (68%) of corporate legal professionals admitted that they currently have no idea if their outside law firms are using AI or not.


In addition, neither side is effectively measuring whether or to what degree their use of AI is improving the delivery of legal services. Indeed, 85% of law firm respondents and 75% of corporate legal department respondents said their organizations are either not collecting ROI data on AI usage or are unsure if they are doing so.

Is your organization measuring the ROI of AI tools?


These visibility and measurement gaps make it difficult for both sides to plan how AI can and should be used in handling client matters. It also raises questions about how potential efficiencies from AI use will affect related factors such as how much firms charge for their services and how much clients are willing to pay. Half of legal professionals surveyed said they feel that AI is either a major threat or somewhat of a threat to billings and law firm revenues. Not surprisingly, the industry continues to wrestle with how to balance efficiency gains from AI against the limitations of the hourly billing model.

Concerns of corporate law departments

For corporate law departments, the lack of AI usage visibility and ROI measurement is producing a wide variety of responses, ranging from mild but growing concern all the way to outright suspicion about how law firms are using AI on their clients’ behalf. Law department respondents said that while they generally trust their outside counsel to make the right decisions regarding AI use and maintaining quality, most departments have not yet had conversations on those issues with their law firms, including how AI use will affect billing.

“Billing has remained the same as it did before,” noted one corporate legal department attorney. “So, either they are not using AI tools efficiently, or they are just doing double work.”

One corporate CLO was far more blunt in their assessment, especially given the lack of detailed discussions or data from firms: “I fear that firms will use AI to cut time, but continue to bill for the hypothetical amount of time a task would have taken without it. It’s dishonest, but so are many firms.”

One encouraging note is that, according to TRI’s 2025 Future of Professionals Report, 56% of law firm respondents said they are highly or moderately confident in their ability to articulate the value of AI to their clients. Despite law firms’ confidence in explaining the value of AI, however, the visibility gap illustrated in the 2026 AI in Professional Services Report indicates that law firms are not actually having those conversations with clients. Indeed, some corporate law department respondents suggested their outside counsel may be reluctant to discuss AI with them because of concerns about quality and accuracy. One even suggested that firms may feel threatened by AI.

More & better communication is needed

As difficult and complicated as discussions involving AI usage may be, they are also essential. Absent those discussions, trust between firms and clients may be eroding, potentially jeopardizing long-standing relationships.

Here are a few steps that both sides can take to build confidence around the use of AI:

For law firms —

    • Communicate with clients — Hold discussions with clients that allow firms to detail how AI is being or will be used in client matters. Solicit feedback from clients about the instances in which they would accept (or even demand) AI usage on different parts of a matter.
    • Develop an AI billing strategy — Determine not only how AI usage is impacting billable hours, but also how that will interact with the firm’s billing and pricing strategy.
    • Demonstrate and articulate value — Be prepared to explain billings in detail and answer client questions in terms of not only time and rates, but value to the client. This includes both the value that AI brings to the client engagement and the value that the firm brings above and beyond what technology provides, such as more freed-up time for lawyers to pursue value-added work.

For corporate law departments —

    • Lead the conversation, if need be — About three-quarters of both law firm and legal department respondents said it is the firm’s responsibility to initiate discussions around AI usage. However, corporate law departments should not wait for their outside firms to start the conversation. Take the initiative and make sure firms’ delivery models and fee structures are clear regarding AI usage.
    • Set expectations — Provide guidelines, expectations, or mandates on how and when AI will be used in handling client matters. This includes outlining specific use cases, data security protocols, and the human-in-the-loop oversight mechanisms that are used to ensure accuracy.
    • Build an external-facing metrics program — Law departments need to accurately measure the efficiency gains their outside firms are achieving to ensure that they, as the client, are receiving a fair price for value received. Baselines can be established for how long various legal matters took historically and how much they cost. The baselines then can be compared against AI-enabled engagements to evaluate ROI and business impact. This also allows legal departments to more thoroughly explain those gains to their own stakeholders.
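The baseline-versus-AI comparison described above comes down to simple percentage arithmetic: record what a matter type historically took in hours and cost, then compare those figures against AI-enabled engagements. As a rough illustration only (the matter types, hours, and costs below are hypothetical, not drawn from the report), a legal operations team might sketch the calculation like this:

```python
# Hypothetical sketch of an external-facing metrics baseline.
# All matter names and figures are illustrative assumptions.

def roi_summary(baseline, ai_enabled):
    """Compare historical matter baselines against AI-enabled engagements.

    Both arguments map a matter type to a (hours, cost) tuple. Returns the
    percentage of hours and cost saved per matter type.
    """
    summary = {}
    for matter, (base_hours, base_cost) in baseline.items():
        ai_hours, ai_cost = ai_enabled[matter]
        summary[matter] = {
            "hours_saved_pct": round(100 * (base_hours - ai_hours) / base_hours, 1),
            "cost_saved_pct": round(100 * (base_cost - ai_cost) / base_cost, 1),
        }
    return summary

# Historical baselines vs. AI-enabled engagements (illustrative numbers)
baseline = {"NDA review": (10, 5000), "Contract drafting": (40, 24000)}
ai_enabled = {"NDA review": (4, 2600), "Contract drafting": (28, 19200)}

print(roi_summary(baseline, ai_enabled))
```

A department could feed such a summary into quarterly outside-counsel reviews, comparing the savings firms claim against what the invoices actually show.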

For both corporate law departments and their outside counsel, it is imperative to engage in thorough discussions and develop data that can inform better decision-making. Such dialogue and measurements can strengthen the firm/client relationship and create scenarios in which both sides can reap the benefits of AI use.


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here

Couples counseling at Legalweek 2026: Firms and clients confront the AI value divide /en-us/posts/legal/legalweek-2026-firm-client-divide/ Fri, 13 Mar 2026 13:29:53 +0000 https://blogs.thomsonreuters.com/en-us/?p=69954

Key insights:

      • Client expectations around AI have shifted from curiosity to accountability — Law firms are now being asked not just whether they use GenAI, but to prove how it delivers measurable cost savings on specific matters — a question most firms still cannot answer with hard data.

      • A growing contradiction defines firm/client relationships — As clients simultaneously demand AI adoption, require granular billing transparency, and in some cases refuse to pay for work performed with AI, they’re creating a pricing and value paradox with no clear resolution for their law firms.

      • The ROI challenge around AI is fundamentally a relationship problem — Driven by a widening gap between what clients expect to save and what firms can demonstrate, a rift has developed between clients and firms, which is compounded by the fact that few firms have a coherent GenAI strategy in place.


NEW YORK — Legalweek 2026 opened with a keynote conversation featuring Mindy Kaling, the Emmy-nominated writer, producer, and Tony Award-winning playwright, who reflected on a career built around one enduring fascination: messy relationships. She talked about growing up wanting to write something like Sex and the City, only to end up helping to chronicle the internal politics of a Scranton, Pennsylvania paper company in The Office. She talked about her love of watching people navigate breakups and power struggles and then finding the comedy in it all.

If she’s looking for new material, the three standing-room-only panels that followed could keep her busy for seasons.

Not surprisingly, the relationship between clients and their law firms has always been complicated — bound by mutual need but strained by competing incentives. Now, that tension is starting to reach a rolling boil as many law firms can’t seem to agree on exactly how the gains of their use of AI tools, especially generative AI (GenAI), are going to be split, or even if they’re going to be split at all.


AI is no longer optional or experimental — and many clients simply assume it’s already in use.


Across three Thomson Reuters-sponsored sessions during this week’s Legalweek event, that tension surfaced again and again — not as a future concern, but as a present reality. Today, clients are arriving at the table more informed, more demanding, and more willing to use AI themselves. Firms are investing heavily in AI, but they still are struggling to quantify returns in terms their clients will accept. With the rates that law firms charge increasing — averaging more than 7% growth in 2025, and likely to stay on that pace in 2026 — a collision with savings mandates is taking shape, one that has yet to produce a shared framework for measurement. And underneath all of it, a fault line is building pressure — one that, as Ellen Hudock, GSK’s Chief of Staff Legal and Compliance, observed, is not being resolved.

In 2026, GenAI has become the thing neither side can stop talking about, the thing both sides agree matters, and the thing that neither side can agree on how to handle.

This is not the story of an industry resisting change. Nearly everyone at Legalweek agreed that AI adoption is no longer optional. The harder questions, however, and the ones that echoed through every panel, every audience comment, and every hallway conversation, are who benefits, how much, and who gets to decide.

Proving AI’s path to saving clients money

Three years ago, the client question was simple: Are you using AI, and would you use it on our matters? In 2026, that question has matured, and the new version is much harder to answer.

GSK’s Hudock described the shift bluntly during one panel. GSK is learning as much as it can from its outside law firms about how they’re deploying GenAI, she said, and is always looking to partner on new use cases. However, she noted that the conversation has moved well past curiosity. The pressure to deliver savings — internally and externally — is intense, and the questions have sharpened accordingly: What are you using? How are you using it? How does it generate savings?

Clearly, firms are hearing this message. Matthew Beekhuizen, Chief Pricing and Innovation Officer at Greenberg Traurig, noted that the pace of AI-driven change has accelerated sharply, particularly since October 2025. Clients who had previously said nothing about AI are now asking how it’s being used on their specific legal matters.

Indeed, AI is no longer optional or experimental — and many clients simply assume it’s already in use, said Mark Brennan, a partner at Hogan Lovells.

The trouble is that firms still can’t give clients the answer they most want to hear. When pressed on how much cost savings AI is actually achieving, the response from the firm side is often: We’re still gathering the data. Mitchell Kaplan, Managing Director of Zarwin Baum, acknowledged the industry is still in the anecdotal phase of measuring returns.

Sergey Polak, Director of Technology Innovation at Ropes & Gray, described the current state of ROI measurement as being based more on conventional wisdom rather than hard evidence. Hudock’s response to this was pointed: That’s exactly the situation in which clients want to partner. Supply the work, and let’s figure it out together.

The contradictions in the room

If the evolution in client expectations were the whole story, it would be manageable; however, the reality is messier than that, because clients are not speaking with one voice.

During another panel, Barclay Blair, Senior Managing Director of AI Innovation at DLA Piper, laid out the contradictions in sharp relief. Blair, who introduced himself as “the extremist on the panel,” is seeing clients who expect AI to be used and are asking how it will achieve specific savings targets. At the same time, many law firms are still receiving directives that feel lifted out of 2023, such as demands for warranties that models are unbiased and declarations that firms cannot use AI without explicit permission. In 2026, both postures are arriving in the same inbox.


When pressed on how much cost savings AI is actually achieving, the response from the firm side is often: We’re still gathering the data.


The billing conversation captures this tension perfectly. Polak of Ropes & Gray noted that clients are beginning to ask for line-item transparency on invoices — was AI used on this task, and how much time or money did it save? Simultaneously, as Blair observed, other clients are issuing guidelines stating they won’t pay for certain services if performed by AI. This isn’t clients barring AI outright; rather, it’s clients demanding that firms adopt AI, then using that very adoption as leverage to negotiate lower costs. Not surprisingly, this becomes a self-reinforcing cycle with no obvious exit — at least, not for law firms.

Meanwhile, Zarwin Baum’s Kaplan raised a billing paradox that GenAI is making harder to ignore. As AI compresses work that once took hours into minutes, an itemized hourly bill increasingly tells a story that undersells the value delivered. His proposed answer: a return to the single line-item “services rendered” bill, which actually predated the billable hour. Kaplan then asked whether clients would actually accept it.

The advice to the law firms in the room from DLA Piper’s Blair was more blunt: Don’t wait for the client to set the terms. Lead the conversation about AI ROI and set the meeting. As Blair described, this is now the time to negotiate how value gets shared, while both sides are still figuring out the rules — not after one side has already written them.

The pressure hasn’t yet found a release valve

None of these tensions exist in isolation. They are symptoms of a structural mismatch between what clients need from the economics of legal AI and what firms are currently able to demonstrate — and the numbers suggest the legal industry is less prepared for this conversation than it thinks.

As ¶¶ŇőłÉÄę’ Steven Petrie pointed out, those law firms with a GenAI strategy are 3.9 times more likely to achieve ROI than those without one. Yet only 22% of firms have such a strategy, Petrie said. That gap — between the firms that are thinking systematically about AI’s role in their business and those that aren’t — may turn out to matter less than the gap between what clients expect to save and what firms can show they’ve delivered.

The ROI question, in other words, is not just a measurement challenge; rather, it’s a relationship challenge. And like all the best relationship drama, the tension doesn’t come from disagreement about whether the relationship matters. It comes from both sides wanting something slightly different from it — and neither being quite sure if both sides can get what they want.

If Mindy Kaling is still looking for complicated relationships to write about, she knows where to find them. This one’s going to need a few seasons to work itself out.


You can find more here

The 4 Plates: Why GCs need stakeholder intelligence to be effective in the AI era /en-us/posts/corporates/4-plates-delivering-effective-advice/ Thu, 19 Feb 2026 02:11:03 +0000 https://blogs.thomsonreuters.com/en-us/?p=69466

Key takeaways:

      • Become truly client-centered — Legal departments claim to be client-focused yet frequently make strategic decisions about effectiveness without systematically understanding stakeholder needs.

      • Decide where to automate — As AI transforms legal services delivery, decisions about where to automate versus where to deploy human judgment require evidence, not assumptions.

      • Build intelligence with continuous feedback — Systematic stakeholder intelligence reveals where speed matters more than depth, which services lack visibility, and where relationships can create differentiated value.


Today’s general counsels face a fundamental challenge as AI capabilities expand: determining where to deploy technology and where to deploy human judgment. Getting this formula right can create irreplaceable value for an organization. Yet many GCs may be making these critical decisions based on assumptions about what stakeholders need rather than evidence.

The paradox is that while corporate legal departments consistently say they want to be effective, client-focused, and responsive partners in service of the business, many are making strategic decisions about how to be that way without systematically measuring or understanding the stakeholder experience they’re trying to optimize. It’s like declaring customer satisfaction as your goal while never actually asking customers how satisfied they are. This blind spot doesn’t just undermine service quality; it undermines one of the four core accountabilities of every legal department: being Effective.

This is the third part of our series on the “Four Spinning Plates” model, which frames the GCs’ evolving responsibilities as:

      1. delivering effective advice
      2. operating efficiently
      3. protecting the business, and
      4. enabling strategic ambitions.

This article focuses on the Effective plate.


The information gap

Being Effective as a legal department means delivering high-quality, practical legal advice that is responsive to business needs, and this requires knowing what those needs are. Most legal departments rely on hallway conversations, occasional feedback during business reviews, and organic complaints or praise. While these interactions are valuable and should continue, what they lack is the systematic intelligence needed to inform strategic decisions.

Ad hoc feedback is reactive, incomplete, and reflects the loudest voices rather than the broader reality. You hear from the very satisfied or the very unsatisfied, rarely from the middle majority of stakeholders whose experience shapes overall effectiveness.

As AI transforms legal delivery, this information gap becomes more costly. Without understanding which feedback touchpoints stakeholders prefer as human interactions and which they’d rather handle on their own, how can you decide which legal services to automate and where your team’s judgment and relationship-building are essential?

When legal departments systematically gather stakeholder feedback, they uncover patterns that challenge assumptions about what effectiveness means to the business.

Consider response time, for example. Many legal teams pride themselves on providing thorough, carefully crafted advice. However, stakeholder feedback often reveals that the speed of an initial response matters more than depth, at least for the first touchpoint. What lawyers see as diligence, stakeholders may experience as delay. This insight doesn’t mean the legal team should compromise quality; rather, true effectiveness comes from knowing when a quick acknowledgment is sufficient and when an issue demands thorough analysis right away.

Varied responses needed

Of course, different stakeholders have different expectations of responsiveness. For example, sales colleagues working under targets and time pressure need speed to drive momentum in contract negotiations. Understanding different stakeholder personas can help manage expectations and educate junior lawyers about the different business rhythms that the legal department must respond to.

Or, as another example, take service awareness. It’s common to discover that stakeholders simply don’t know the full extent of what the legal team can offer. Business leaders may not realize their legal team provides training, templates, or advisory services that could prevent issues before they escalate. The problem here isn’t service quality; it’s visibility — and that distinction matters enormously when deciding where to invest limited resources.


You can learn more about how the Thomson Reuters Institute’s Value Alignment toolkit allows you to assess your legal department’s strategic positioning here


More importantly, these insights directly inform AI integration strategy for corporate law departments. Routine, high-volume work in which speed matters is a prime candidate for automation and self-service tools. Complex matters in which stakeholders specifically value a lawyer’s business understanding and strategic judgment are where to protect and focus human capacity.

Perhaps the most valuable output of fostering systematic feedback is when that feedback reveals where satisfaction varies across departments or stakeholder groups. A legal department might assume it delivers consistent service, only to discover that one business unit rates the department highly for responsiveness while another complains that it struggles to receive timely answers. These variations point to either inconsistent delivery or improperly communicated expectations, which are exactly the kinds of problems that process standardization, better intervention systems, or technology can address.

Without this type of intelligence, GCs risk automating services that should stay personalized, or maintaining high-touch approaches for work that stakeholders would happily handle themselves through self-service options.

The human value imperative

As AI handles more legal work, the question becomes: What can legal professionals do that technology cannot? The answer lies in the distinctly human elements of legal service such as judgment, knowledge of the business, relationship building, and strategic counsel.

The challenge for corporate law departments, however, is that without first knowing which touchpoints stakeholders value as human interactions, you can’t strategically deploy your team’s capabilities. Systematic stakeholder feedback allows evidence-based decisions on where the legal team’s relationship adds value and where speed or self-service could better serve stakeholder needs.


The question for every General Counsel then becomes: Are you making decisions on the department’s effectiveness based on systematic stakeholder intelligence, or operating with a blind spot that may be costing you more than you realize?


This then becomes critical intelligence for decision-making around resource allocation and restructuring, as well as for demonstrating the legal team’s value to the C-Suite in terms they can recognize. When a GC can articulate not just what their department does but how effectively it serves broader stakeholder needs, they are speaking the same language as the business they support.

This also allows a GC to shift from defending their department headcount based on workload volume to justifying resources based on stakeholder-defined value — and that’s a fundamentally stronger position.

Understanding the Spinning Plates

The Four Spinning Plates model — Effective, Efficient, Protect, and Enable — represents the complete picture of a legal department’s role and value within the organization. Yet research consistently shows a perception gap. For example, C-Suite executives over-emphasize the Effective plate while under-recognizing Protection and Enablement contributions.

This gap exists partly because legal departments lack metrics that capture effectiveness in business terms. They can report cost savings and matter volumes but struggle to demonstrate how well they’re actually serving stakeholder needs. Stakeholder feedback mechanisms bridge this gap by making effectiveness measurable and visible through the lens of those the department serves.

Indeed, it’s not about running surveys for the sake of feedback. It’s about grounding strategic decisions about AI integration, service design, and where to focus human talent in evidence, not assumptions. For those GCs navigating AI transformation specifically, this isn’t optional. Rather, it’s the difference between guessing where to automate and knowing where automation serves stakeholders.

Leading legal departments are already using stakeholder intelligence as their compass for AI transformation, leveraging that intelligence to best determine where to standardize, where to automate, and where human judgment remains irreplaceable.

The question for every General Counsel then becomes: Are you making decisions on the department’s effectiveness based on systematic stakeholder intelligence, or operating with a blind spot that may be costing you more than you realize?


You can learn more about the challenges that corporate GCs face every day
