Workflow Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/topic/workflow/ Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Looking beyond the bench at the importance of judicial well-being /en-us/posts/government/beyond-the-bench/ Wed, 15 Apr 2026 14:06:38 +0000

Key insights:

      • Well-being is a professional necessity — Judges experience decision fatigue, emotional stress, and personal biases that can affect their rulings, making mental and physical well-being a judicial duty.

      • Community engagement builds better judgment — Staying connected to the communities they serve helps judges develop empathy, recognize bias, and deliver fairer decisions.

      • Diverse experience strengthens the judiciary — Varied backgrounds and ongoing education in areas like restorative justice make courts more responsive, inclusive, and publicly trusted.


Judges play a unique and essential role in society. They are tasked with interpreting the law, resolving disputes, and upholding justice — often under intense scrutiny and pressure. Their decisions shape lives, influence public policy, and reinforce the rule of law.

Indeed, judicial rulings may be the most visible part of the job, but they are not the only measure of a judge’s effectiveness — or of the judiciary’s overall health.

To truly understand and support a robust legal system, it is vital to look beyond the courtroom and examine the broader context in which judges operate. A judiciary that is fair, empathetic, and resilient depends not only on legal expertise, but also on balance, self-awareness, and active engagement with the communities it serves.

The weight of the robe & the value of connection

Despite the solemnity of the judicial office, judges also carry personal experiences, cognitive biases, and emotional responses. The weight of responsibility in adjudicating complex, often emotionally charged cases can lead to stress, burnout, and decision fatigue. Research has shown that judicial decisions can be influenced by factors such as time of day, caseload volume, and even personal well-being.

When judges prioritize their own well-being through physical health, mental resilience, and time away from the bench, they are better equipped to render fair and consistent decisions. Judicial wellness is not a personal luxury; rather, it is a professional imperative.

Equally important is the role of community engagement. The law does not exist in a vacuum but is shaped by social norms, economic realities, and cultural shifts. Judges who remain isolated from the communities that are affected by their rulings risk losing touch with the lived experiences of the people before them.


Judicial rulings may be the most visible part of the job, but they are not the only measure of a judge’s effectiveness — or of the judiciary’s overall health.


Engagement with the public helps judges better understand how the law impacts and operates in people’s lives. It also builds the empathy and contextual awareness needed for interpreting statutes or imposing sentences.

For example, a judge who volunteers with youth programs or participates in community forums on public safety may develop a more nuanced understanding of cases involving juvenile offenders or policing practices. Similarly, a judge who attends local cultural events or listens to community leaders may be better positioned to recognize implicit biases or systemic inequities that may be inherent in the justice system.

Community involvement also strengthens public trust. When citizens see judges as accessible and engaged, rather than distant or aloof, confidence in the judiciary increases. And these ideas of transparency and connection are key to maintaining citizens’ trust in the courts.

These themes are explored in more depth in the Thomson Reuters Institute’s video series, Beyond the Bench. For example, in one episode, Associate Justice Tanya R. Kennedy shares her experience educating youth, participating in civic organizations, and leading legal reform initiatives. The episode also highlights how service beyond judicial duties enhances judges’ decision-making and strengthens community ties.

Another episode of the series examines the personal and professional challenges faced by judges and attorneys alike. It features a candid interview with Judge Mark Pfiffer, who emphasizes the importance of mindfulness, peer support, and institutional policies that promote mental health and sustainable work practices.

A judiciary that reflects society

The same principle applies at the institutional level. A judiciary is strongest when it reflects the range of experiences and perspectives present in the society it serves.

Beyond individual judges, the judiciary can benefit from diversity and inclusion. A bench that reflects the full spectrum of society is more likely to deliver balanced and equitable justice. But diversity is not just about representation — it’s also about perspective.

Judges who have worked in public defense, civil rights advocacy, or rural legal services bring different insights to the bench than those who have spent their careers in corporate law or prosecution. These varied experiences enrich judicial deliberation and help ensure that decisions are informed by a broad understanding of justice.

Encouraging judges and court personnel to engage in lifelong learning, mentorship, and cross-sector collaboration further strengthens the judiciary. Programs that support judicial education on topics like implicit bias, trauma-informed practices, or restorative justice are essential to modern, responsive courts.

Improving judges’ well-being

The quality of justice depends not only on what happens in the courtroom, of course, but on what happens outside of it. Judges who maintain personal balance, engage with their communities, and remain open to diverse perspectives are better equipped to serve the public good.

Legal professionals, court administrators, and policymakers should support the kinds of initiatives that promote judicial wellness, community outreach, and professional development. By fostering a judiciary that looks beyond the bench, we ensure a justice system that is not only legally sound, but also humane, inclusive, and trusted.

In the end, judges and the justice they mete out are not defined by court rulings alone; they also depend on relationships, context, and public trust. Recognizing that reality is essential to preserving the well-being of the judiciary and the integrity of the law.


The “Beyond the Bench” video series is available from the Thomson Reuters Institute.

The 4 Plates: Are you measuring the real value of AI in your legal department? /en-us/posts/corporates/4-plates-measuring-efficiency/ Wed, 01 Apr 2026 13:15:21 +0000 https://blogs.thomsonreuters.com/en-us/?p=70085

Key takeaways:

      • Efficiency is a means, not an end — Gains from AI only count when you can show what they enabled: better advice, stronger protection, smarter business support.

      • Narrow measurement invites cuts — Legal departments that measure AI value only through cost savings are telling C-Suites that legal costs less, thereby inviting budget and headcount reductions.

      • Measure across all four plates — A framework that captures effectiveness, risk, and enablement alongside efficiency is what shifts perception of the legal department from cost center to strategic asset.


Your legal department has invested in AI tools, adoption is growing, your team is saving time on routine work and, by most accounts, work operations are running faster. Then your CFO asks a simple question: What has AI delivered for the legal department?

If your answer centers on hours saved and cost reduced, you are not alone. However, you may be leaving your most important value story untold. And in a climate in which legal departments are under more scrutiny than ever to demonstrate the full return on their AI investment, that gap matters.

This is the fourth and final part of our series on the “Four Spinning Plates” model, which frames the GC’s evolving responsibilities as:

      1. delivering effective advice
      2. operating efficiently
      3. protecting the business, and
      4. enabling strategic ambitions.

This article focuses on the Efficient plate and specifically on the risk of letting it do too much of the talking.


The Efficient plate under pressure

For a GC, making the best use of what are often limited resources is a constant pressure. The Efficient plate sits alongside, not above, the other three plates and must always be kept spinning. Right now, however, for many in-house legal teams the Efficient plate is receiving disproportionate attention, and for understandable reasons.

AI adoption in corporate legal departments is accelerating quickly. According to the Thomson Reuters Institute’s AI in Professional Services Report 2026, nearly half (47%) of corporate legal respondents surveyed said their department has already integrated generative AI (GenAI) into their work — more than double the figure from the previous year. A further 18% reported that they’re already using agentic AI, with more than half expecting agentic AI to be central to their workflow within the next two years.

GCs are genuinely excited about what this makes possible. As one GC said in the survey that underpinned the AI in Professional Services Report: “It presents the promise of getting out of low-value work and into higher-value work that supports the business.” Another described their vision of a legal department that is “boldly digital-first, relentlessly innovative, and tightly woven into business priorities.”

Clearly, the opportunity is real, but so is the risk of measuring it badly.

The measurement trap

Our 2026 research found that only one-quarter of legal departments are currently measuring the ROI of their AI tools. That alone is striking given the pace of adoption, but the follow-up finding is where the real problem lies — of those departments that are measuring ROI, 80% are tracking it in terms of internal cost savings.

Reducing external spend, automating high-volume processes, and bringing more work in-house are all legitimate efficiency gains and worth reporting, of course. However, when cost reduction becomes the only story being told, two things can happen. Your C-Suite learns to associate your department’s value with how little it costs, a frame that is very difficult to escape once it’s established. And the wider value that efficiency enables in terms of sharper risk identification, faster business support, and higher-quality advice goes unmeasured and therefore unrecognized.


If your metrics only capture time saved and cost reduced, and not what that freed-up capacity actually delivered, you are measuring the means and ignoring the end.


Think about what GCs themselves say they want from AI. As several GCs said in the survey, they’re hoping AI will provide them with “better output on more meaningful tasks,” “proactive, strategic insight,” and “getting out of low-value work.” These are not efficiency outcomes, per se; rather, they are effectiveness, protection, and enablement outcomes, made possible by improved efficiency.

So, if your metrics only capture the input (time saved, cost reduced) and not what that freed-up capacity actually delivered, you are measuring the means and ignoring the end. This is the efficiency trap — measuring the plate so narrowly that it starts to work against you.

Reframing how you measure efficiency

Measuring efficiency well does not mean measuring it more. It means measuring it differently, and always in relation to the business you support. A few principles worth applying include:

Present spend in a business context — Legal spend as a percentage of company revenue tells a more credible story than a raw cost figure. It scales with the business and can be benchmarked meaningfully against peers.

Show what technology investment actually delivered — Time saved through automation is a useful starting point, but the stronger case is what the team did with that time. Tracking the shift from routine to strategic work over a period of time is a far more compelling ROI story.

Connect efficiency gains to business outcomes — An efficiency gain that enabled a faster product launch, prevented a compliance risk, or improved stakeholder satisfaction has a value that no cost metric will capture. Build those connections explicitly into how you report the value of the legal department to the C-Suite.
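The first two principles are straightforward arithmetic. As a minimal sketch of how a department might compute them (all figures, function names, and data shapes below are hypothetical illustrations, not drawn from the toolkit):

```python
# Illustrative sketch: the kinds of business-context efficiency metrics the
# principles above describe. All numbers are hypothetical placeholders.

def legal_spend_ratio(legal_spend: float, company_revenue: float) -> float:
    """Legal spend expressed as a percentage of company revenue."""
    return 100.0 * legal_spend / company_revenue

def strategic_work_shift(hours_before: dict, hours_after: dict) -> float:
    """Percentage-point change in the share of hours spent on strategic work."""
    def share(h: dict) -> float:
        return h["strategic"] / (h["strategic"] + h["routine"])
    return 100.0 * (share(hours_after) - share(hours_before))

# Hypothetical example: $12M legal spend against $2B company revenue, and a
# team whose strategic-work share rose after automating routine tasks.
ratio = legal_spend_ratio(12_000_000, 2_000_000_000)
shift = strategic_work_shift(
    {"routine": 700, "strategic": 300},   # hours per quarter, before AI
    {"routine": 450, "strategic": 550},   # hours per quarter, after AI
)
print(f"Legal spend: {ratio:.2f}% of revenue")          # 0.60% of revenue
print(f"Strategic-work share: +{shift:.0f} points")     # +25 percentage points
```

Framing the numbers this way keeps the story about what the capacity enabled (the shift toward strategic work) rather than about the raw cost figure alone.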

New resources to help

To support GCs in getting this right, the Thomson Reuters Institute has added two new resources to its Value Alignment Toolkit that directly address this measurement gap.

The Metrics Library brings together more than 100 metrics organized across all four spinning plates. It is a practical starting point for GCs to browse, select, and adapt to the specific goals of their departments, making it easier to build a measurement framework that reflects everything departments do, not just the part that appears in a budget line.

The AI Success Metrics guide addresses the AI measurement gap head-on with a best practice guide and a hands-on worksheet designed specifically for legal departments navigating AI adoption and asking: How do we actually know whether this is working? It looks beyond cost savings to capture the fuller picture of AI value including quality, capacity, strategic contribution, and risk.

Getting the balance right

In today’s environment, every GC needs to consider their answer when their C-Suite asks what the legal department delivers. Are your department’s metrics giving them the full answer or just the part that’s easiest to count?

Efficiency is not the enemy of strategic value. A department that runs well, uses its resources wisely, and embraces technology thoughtfully can in turn create the conditions for everything else the business needs from its legal function. However, that case only lands if your metrics measure across all four plates, not just one.


You can explore the new Metrics Library and AI Success Metrics guide, along with the full Thomson Reuters Institute’s Value Alignment Toolkit, here

Helping the legal profession get AI‑ready: A new advisory board takes shape /en-us/posts/legal/ai-advisory-board/ Thu, 26 Mar 2026 11:31:32 +0000 https://blogs.thomsonreuters.com/en-us/?p=70080 Key insights:

      • AI is already reshaping the legal profession — AIĚýis already embedded in lawyers’ day-to-day legal work with a significant share of both law firm attorneys and in-house legal teams actively using GenAI tools, with many expecting it to become central to their work within the next five years.

      • AIFLP Advisory Board was formed to prepare lawyers for an AI-reshaped profession — TRI convened 21 respected leaders from legal education, private practice, the judiciary, and AI ethics and governance to help ensure lawyers and law students are prepared for a profession reshaped by AI.

      • Human judgment remains central in an AI-enabled legal future — Becoming AI-ready is not simply about learning to use new tools; the Advisory Board emphasizes that strengthening irreplaceable human capabilities is critical.


In today’s tech-driven environment, AI is no longer a future concept for the legal profession — it’s already here, and it’s changing how lawyers work, learn, and serve clients. Recognizing just how fast the evolution is moving, the Thomson Reuters Institute (TRI) has launched the AI and the Future of Legal Practice (AIFLP) Advisory Board, bringing together a group of respected leaders from across the legal ecosystem to help guide what comes next.

The board includes 21 accomplished voices from legal education, private practice, the judiciary, and AI ethics and governance. Their shared goal is simple but ambitious: Help ensure that both today’s lawyers and tomorrow’s law students are prepared for a profession being reshaped by AI.

Why now?

Because the shift is already underway. According to TRI’s recent 2026 AI in Professional Services Report, 41% of law firm attorneys say their organizations are already using some form of generative AI (GenAI); and nearly half of those at corporate legal departments report that AI tools are being rolled out there too. Even more telling, most professionals said they expect GenAI to become central to their day‑to‑day work within the next five years.

That pace of change raises big questions about competence, ethics, education, risk, and access to justice. And those questions don’t have easy answers.

What the Advisory Board will focus on

The AIFLP Advisory Board is designed to tackle those challenges head‑on. Its work will center on four key areas that are already under pressure as AI adoption accelerates:

      • Legal education and talent development
      • Ethics, professional competence, and accountability
      • Governance, risk management, and client counseling
      • Access to justice and modern service delivery

The Advisory Board’s early focus areas will look at how AI is actually changing legal practice today, what future‑ready lawyers really need to know, and how legal education and real‑world practice can better align. The emphasis is not just on using AI tools, but on strengthening the human skills that matter most, such as sound judgment, critical thinking, and careful verification of AI‑generated outputs.

Shaping the future, not reacting to it

Citing the critical need for this Advisory Board’s creation, Mike Abbott, Head of the Thomson Reuters Institute, notes that the legal profession is at a crossroads, and it can either react to AI‑driven disruption or take an active role in shaping how these technologies are used to support lawyers, courts, and the public.

“By assembling a board of distinguished leaders, our goal is to help practicing lawyers and the lawyers of the future navigate a rapidly evolving landscape,” Abbott said. “Ensuring that legal education strengthens irreplaceable skills such as critical thinking, human judgment and effective communication helps make AI use safe and effective. The Board’s efforts will ultimately help shape a future-ready profession, leading to better outcomes for all.”

Meet the AIFLP Advisory Board Members

By convening experienced leaders from across the profession, TRI hopes to help lawyers navigate this landscape with confidence. Advisory Board Members include:

      • Michael Abbott, Head of the Thomson Reuters Institute
      • Soledad Atienza, Dean, IE Law School
      • The Honorable Jennifer D. Bailey, (Ret.), Partner, Bass Law
      • Benjamin Barros, Dean, Stetson University College of Law
      • Professor Sara J. Berman, University of Southern California, Gould School of Law
      • Megan Carpenter, Dean Emeritus, University of New Hampshire Franklin Pierce School of Law
      • Ronald S. Flagg, President, Legal Services Corporation
      • Donna Haddad, AI Ethics and Governance expert, and founding member, IBM AI Ethics Board
      • Johanna Kalb, Dean and Professor of Law, University of San Francisco School of Law
      • The Honorable Nelly Khouzam, Florida Second District Court of Appeal
      • The Honorable William Koch, Dean, Nashville School of Law, and former Tennessee Supreme Court Justice
      • Sheldon Krantz, retired partner, DLA Piper, and a founder, DC Affordable Law Firm
      • Stefanie A. Lindquist, Dean, School of Law, Washington University in St. Louis
      • The Honorable Mark Martin, Founding Dean and Professor of Law, Kenneth F. Kahn School of Law at High Point University, and former Chief Justice, Supreme Court of North Carolina
      • Caitlin (Cat) Moon, Professor of the Practice and founding co-director, Vanderbilt AI Law Lab, Vanderbilt Law School
      • Hari Osofsky, Myra and James Bradwell Professor and former Dean, Northwestern Pritzker School of Law; Founding Director, Northwestern University Energy Innovation Lab; and Founding Director, Rule of Law Global Academic Partnership
      • Joanna Penn, Chief Transformation Officer, Husch Blackwell
      • The Honorable Morris Silberman, Florida Second District Court of Appeal
      • The Honorable Samuel A. Thumma, Arizona Court of Appeals, Division One
      • Mark Wasserman, Partner and CEO Emeritus, Eversheds Sutherland
      • Donna E. Young, Founding Dean, Lincoln Alexander School of Law, Toronto Metropolitan University

What’s next?

The Advisory Board held its first meeting in February and will meet quarterly going forward. As the work progresses, TRI plans to publish research findings, best practices, and practical recommendations for legal educators, law firms, and courts.

In a profession built on precedent and careful reasoning, the rise of AI presents both opportunity and responsibility. The AIFLP Advisory Board is an effort to make sure the legal community meets that moment thoughtfully and on its own terms.


You can learn more about the impact of advanced technology on the legal profession here

2026 State of the Corporate Law Department Report: GCs align strategy to corporate imperatives, but C-Suites want more /en-us/posts/corporates/state-of-the-corporate-law-department-report-2026/ Tue, 24 Mar 2026 12:09:01 +0000 https://blogs.thomsonreuters.com/en-us/?p=70047

Key takeaways:

      • Disconnect between legal departments and C-Suite perceptions — While many general counsel believe their departments are significant contributors to business success, most C-Suite executives do not share this view. Fully 86% of GCs say they believe their department is a significant contributor, but only 17% of C-Suite executives agree.

      • A need to find new ways to demonstrate value — Legal departments are under increasing pressure to do more with less, as nearly half of GCs surveyed cite staffing and resource constraints as their top barrier to delivering additional value. Despite these limitations, expectations from the C-Suite continue to rise.

      • AI adoption accelerates, business strategy comes next — Legal departments are rapidly embracing technology to improve efficiency, manage resources, and address cost pressures. Not surprisingly, the proportion of GCs calling AI a strategic imperative has doubled.


Over the past several years, general counsel and corporate law departments at large have transformed their operations. Many have become more efficient enterprises, leveraging technology, in particular AI, at an increased pace. GCs have adjusted their hiring practices to conform with the modern corporation, taking new ways of working into account. And they have embraced data-driven decision-making, evaluating outside counsel and their own operations alike with a wider suite of new metrics and KPIs.

But do you know who hasn’t yet realized the fruits of that labor? The corporate C-Suite.


The 2026 State of the Corporate Law Department Report, released today by the Thomson Reuters Institute, reveals a disconnect between how GCs and their corporate law departments view their own alignment to the wider business, and what C-Suite executives believe the legal department contributes. Within this gap, the message is clear: GCs not only need to align with their organizations’ overall business strategy, they need to learn how to prove that alignment to the rest of the company.

Indeed, when asked how they view legal’s contribution to the rest of the business, 86% of GCs surveyed said they viewed the legal function as a significant contributor. However, only 17% of other C-Suite executives said the same — and 42% said legal contributes little or not at all.


As the report explains, this disconnect lays the groundwork for the tension facing many GCs today. While they are increasingly aiming to align with the wider business, the rest of the organization is not recognizing those efforts. Instead, many C-Suites are looking for even more out of today’s legal departments to prove their contributions to organizations’ business imperatives.

As in past years, many in-house legal departments are being tasked to do more with less. Nearly half of GCs cited staffing and resource constraints as the top barrier they face to delivering additional value. Indeed, many said they expected outside counsel spend in some key areas — such as regulatory work and mergers & acquisitions — to remain high. As of the fourth quarter of 2025, more than one-third (36%) of GCs said they expect to increase overall spend on outside counsel over the next year, while only 20% said they plan to decrease their spend.


Despite legal departments’ gains, their C-Suites are looking for them to take the next step, turning operational excellence into business success.


Not surprisingly, many GCs said they view technology as one of the primary ways they have to combat these resourcing and cost issues. In fact, the proportion of GCs mentioning technology as a strategic priority entering 2026 doubled over the year prior. Legal departments have begun to feel the positive effects of AI in their own organizations, the report notes, such as increased efficiency or time freed up for strategic work.

Despite these gains, C-Suites are looking for their legal functions to take the next step, turning operational excellence into business success. This can take a number of different forms, such as explicitly tying advice to client business objectives, presenting legal spend in the context of the business by showing it as a percentage of revenue, or approaching risk management with the goal of aiding business imperatives. “When we have a risky legal subject, the company never prefers just to see the legal opinion,” said one retail GC. “They’re also requesting you to drive them how to make a decision.”

AI and technology should also be approached in this same way, the report argues. Although almost half of all corporate legal departments have some type of enterprise-wide GenAI tool, according to the survey, very few are collecting success metrics around AI’s implementation or linking its use to business revenue. Put a different way, many legal departments are focused on unlocking capacity, rather than deploying capacity in a business-centric way — much to the chagrin of their C-Suites.


Although legal departments have established a solid foundation upon which a business can stand, ultimately, C-Suites don’t want just a foundation. They want help building the entire house, the report shows, directly enabling the services that companies provide to customers. In that, GCs and legal departments have more work to do, not only tying strategy to overall business initiatives but actively communicating how the legal function’s work aids the company as a whole.


You can download a full copy of the Thomson Reuters Institute’s “2026 State of the Corporate Law Department Report” here

Move over, “Death of the billable hour,” Legalweek 2026 has found a new existential crisis /en-us/posts/legal/legalweek-2026-new-existential-crisis/ Thu, 19 Mar 2026 13:25:16 +0000 https://blogs.thomsonreuters.com/en-us/?p=70031

Key takeaways:

      • Structural change in firms — The traditional law firm pyramid, in which junior lawyers perform high-volume work at billable rates, is losing its foundation as AI compresses tasks that once took hours and clients increasingly bring more work in-house.

      • Finding new ways to train — AI-powered simulations are emerging as a concrete answer to the associate training problem, allowing new lawyers to build courtroom skills faster and fail safely behind closed doors.

      • The associate role isn’t dying, it’s being redefined — Those law firms that figure out the right mix of legal training, technological fluency, and management skills will have a significant edge over those that are still debating it.


NEW YORK — On more than one occasion, I have written seriously and at length about the death of the billable hour. I’ve argued that alternative fee arrangements (AFAs) are the future, that the economic logic of hourly billing is irreconcilable with AI-driven productivity gains, and that the industry needs to prepare for a fundamentally different pricing model. I meant every word. I still do.

Yet, at last week’s Legalweek 2026 conference, one attendee pointed out that they’ve been hearing about the death of the billable hour since the 1990s. At this point, it’s less a prediction and more of a tradition. Indeed, Matthew Kohel, a partner at Saul Ewing, said that despite the legal press coverage connecting AI to the billable hour’s demise, that narrative is now entering its third or fourth decade. And Kohel said his firm simply isn’t seeing meaningful client-driven movement toward AFAs.

So let’s be honest: the billable hour is not dead, and in fact, it may not even be close to dead.

However, if you’re looking for something that is facing a genuine existential reckoning — something the legal industry whispered about in the early days of generative AI (GenAI) and is now discussing openly — Legalweek 2026 may have found it. It turns out the billable hour was never the thing in danger, rather it’s the person billing the hours.

It’s the associate.

The question nobody wanted to ask out loud

The future of the junior lawyer surfaced in virtually every breakout session across the three-day event, and while Legalweek may not be where the question originated, it was certainly the moment this idea graduated from a half-whispered aside to a main-stage conversation.

Moreover, the problem has grown more urgent since its inception in the early GenAI days, when the question was simply whether a firm would need fewer associates. Now, that question hasn’t gone away, but it’s been joined by harder ones concerning training, hiring, and legal and technical skills. For example, what if AI is already better than a junior associate at some of the tasks that defined the role in the past? And what happens if someone says it out loud?

Someone said it out loud.


If you’re looking for something that is facing a genuine existential reckoning, Legalweek 2026 may have found it. It turns out the billable hour was never the thing in danger, rather it’s the person billing the hours. It’s the associate.


During a panel on Measuring What Matters, the conversation turned to client trust. Clients want to know: How can you be sure AI will catch everything? How do you trust it to find what matters across 5,000 pages of documents?

The response from the panel was direct, and it landed like a brick in the room: someone has always been reading those 5,000 pages. That someone is an associate. If that associate — who, more often than not, is one of the least experienced lawyers in the building — is the one reading all those pages, why would you trust them to do it better than a machine?

While that question hung in the air during the panel, it does deserve to sit with you for a moment afterward. Because embedded in it is the uncomfortable arithmetic that drives the entire associate question. The traditional law firm pyramid is built on a base of junior lawyers performing high-volume, lower-complexity work such as document review, due diligence, first-pass research, and doing so at rates that generate revenue while the activity is simultaneously (in theory) training the next generation of partners. If AI can do that base-layer work faster, cheaper, and with accuracy that one panelist described as “beyond very good,” then the pyramid doesn’t just shrink. It loses its foundation.

Barclay Blair, Senior Managing Director of AI Innovation at DLA Piper, noted that tasks like due diligence on some types of financial contracts are already being compressed to two hours, down from 15 to 20 — with zero hours being a realistic possibility in the near future.

Further, as one attendee observed, clients increasingly are adopting AI internally, and they’re bringing work in-house that was previously sent to outside counsel. Clearly, the work that trained generations of associates isn’t just being automated — in some cases, it’s leaving the firm entirely.

Fewer reps, greater weight

Yet here is where it would be easy (and wrong) to write the doom-and-gloom version of the future, in which AI replaces associates, the pipeline collapses, nobody knows how to train lawyers anymore, civilization crumbles, etc. It’s a clean narrative, but it’s also not what Legalweek panels actually said.

Because alongside the anxiety, something else was happening. People were building answers.

In another panel, Developing the Future Lawyer, panelists spent an hour in the weeds of what associate training actually looks like when the old model breaks down — and the conversation was far more concrete than you might expect.


Panelists spent an hour in the weeds of what associate training actually looks like when the old model breaks down — and the conversation was far more concrete than you might expect.


Panelist Abdi Shayesteh, Founder and CEO of AltaClaro, laid out the core problem with precision, noting that there’s a growing gap in critical thinking among associates: templates get copy-pasted without any relevance analysis, and associates don’t know what they don’t know. And traditional training methods, such as videos, lectures, and passive learning, don’t fix it. Indeed, those outdated models may be making it worse. Shayesteh’s analogy was blunt: You don’t learn to swim by watching videos — you need to jump into the deep end.

His solution is AI-powered simulations. Not hypothetical ones, but working deposition simulations available today, with real-time AI feedback, in which associates can practice cross-examination, deal with opposing counsel objections, and build the muscle memory that used to require years of live experience.

Kate Orr, Managing Director of Practice Innovation at Orrick, picked up the thread with two observations that reframed the stakes. First, AI simulations allow associates to fail behind closed doors, a radical improvement over the old model, in which blowing it had real consequences because failure often happened directly in front of the partners. Second, the tool isn’t just for juniors. Even experienced lawyers are using simulations to test different approaches, tweak personas, and sharpen arguments. Orrick’s own Supreme Court team had a lawyer use AI to review a draft brief and identify paragraphs that could be tighter.

Todd Heffner, Partner at Smith, Gambrell & Russell, said the real question isn’t whether associates will use AI, but rather whether it gets them to lead at trial in year 10 instead of year 20. Right now, most associates are lucky to see the inside of a courtroom in their first seven years, and even then, they spend most of their time back in the hotel prepping for the more experienced attorneys instead of arguing themselves. If simulations can compress that learning curve, the associate’s career doesn’t disappear; rather, it gets accelerated.

The dinosaur that adapted

During the Measuring What Matters panel, Mitchell Kaplan, Managing Director of Zarwin Baum, introduced himself with a memorable bit of self-deprecation: He’s a dinosaur — but one, he clarified, who understands how AI can revolutionize what he does.

Kaplan’s perspective threaded through both days of programming like a quiet counterweight to the anxiety. He’d seen this before — not AI specifically, but the fear of it. He watched the legal industry transition from physical libraries to digital research tools, and he watched attorneys adapt. And his message was consistent: the work changes, but the need for lawyers doesn’t disappear. Associates may be taking shortcuts, but they still need to read, still need to review, and still need to think.

They’re developing differently than his generation did, Kaplan said, but it’s the same way every generation develops differently from the one before it. And different doesn’t mean wrong.


The work changes, but the need for lawyers doesn’t disappear. Associates may be taking shortcuts, but they still need to read, still need to review, and still need to think.


It’s a perspective that found an unexpected echo in the Enterprise Alignment panel. Mark Brennan, a partner at Hogan Lovells, relayed a comment he heard at a previous AI conference: The next generation of entry-level jobs will be managers — because they’ll be managing agents and other tech tools. Brennan admitted he didn’t have all the answers on what that means for legal training, but the implication was clear. The associate role isn’t dying; instead, it’s being redefined. And the firms that figure out what that redefined role looks like, and what mix of legal training, technological fluency, critical thinking, and management skills it requires, will have a significant advantage over the firms that are still debating it.

Another panelist, Andrew Medeiros, Managing Director of Innovation at Troutman Pepper Locke, made a prediction that felt like the sharpest version of this idea. He said that at some point, new lawyers are going to be doing simulated matters as a standard part of the development process. Eventually, there’s going to be a generation that walks in as new attorneys and finds themselves litigating right away.

That’s not the death of the associate. Rather, that’s the beginning of a different kind of associate — one who arrives at the courtroom sooner, with different preparation, carrying different tools.

The billable hour, for all the prophecies, refuses to die. The associate, it turns out, has no intention of dying either — just evolving. Mitchell Kaplan called himself a dinosaur, but Legalweek was full of dinosaurs, and every one of them was adapting and, in that adaptation, thriving. The harder question is whether the firms that forged them will be brave enough to follow.


You can find more of our coverage of Legalweek events here

Corporate tax teams eager for AI, but frustrated by pace of change, new report shows /en-us/posts/corporates/corporate-tax-department-technology-report-2026/ Mon, 16 Mar 2026 13:06:11 +0000 https://blogs.thomsonreuters.com/en-us/?p=69963

Key insights:

      • Possibilities vs. practicality — There is a growing frustration gap between what corporate tax professionals want to achieve and what their current technological tools will allow.

      • Expectations about AI — Tax professionals have significantly accelerated the timeframe in which they expect AI to become a central part of their workflow.

      • Proactive progress — Automation is enabling a gradual shift toward more strategic, proactive tax work, although not as quickly as many tax professionals would like.


The recently released 2026 Corporate Tax Department Technology Report, from the Thomson Reuters Institute and Tax Executives Institute, reveals that while automation of routine tax functions is indeed enabling a long-desired shift toward more strategic, proactive tax work in some corporate tax departments, a majority of tax leaders surveyed say upgrading their department’s tax technology is still a relatively low priority at their company.


The report surveyed 170 tax leaders from companies of all sizes to find out how corporate tax professionals are using technology, overcoming obstacles, and planning for the future.

A growing “frustration gap”

In general, the report found that while many companies (especially larger ones) are actively upgrading their tax department’s technological capabilities, there is a growing frustration gap between what tax professionals know they can accomplish with more robust technologies and what their current tools allow them to do.

Adding to this frustration is a growing discrepancy between the additional budget and resources tax departments hope to get each year and the harsher reality they often face. Indeed, even though tax leaders remain optimistic that their budgets and capabilities will expand and improve in the coming years, fewer than half of the respondents surveyed said their departments received a budget increase last year, and many saw budget cuts.



Further, the report shows that the prospect of incorporating ever more sophisticated forms of AI and AI-driven tools into tax workflows is also very much on the minds of tax professionals. Even though the actual usage of AI in corporate tax departments is still relatively low, the report reveals that tax professionals now expect AI to become a central part of their workflow within one to two years, much faster than they did in last year’s report.

Indeed, as the report explains, this expectation of more imminent AI adoption represents a significant shift in attitude, because most corporate tax departments are rather circumspect about how, when, and why they incorporate new tech tools into their established routines.

If today’s technological capabilities continue to accelerate, companies that have been slow to invest in the infrastructure necessary to keep pace may soon find themselves struggling to catch up with their more tech-savvy counterparts, the report warns.

Moving toward more proactive work, albeit slowly

For companies that have invested in the technological infrastructure necessary to support advanced tax technologies, the payoff is becoming increasingly evident.

According to the report, about two-thirds (67%) of tax professionals surveyed said their company’s investment in technology had enabled a shift toward more proactive tax work within their departments. This shift is particularly noticeable at large corporations, at which, unsurprisingly, investment in tax technology has been more generous.

The 2026 Corporate Tax Department Technology Report also explores other aspects of corporate tax departments, including their hiring practices, tech training, purchasing strategies, what they see as the most popular tech tools for tax, and numerous other factors that affect how tax departments operate.


You can download a full copy of the Thomson Reuters Institute’s 2026 Corporate Tax Department Technology Report here

Reinventing the data core: The arrival of the adaptable AI data foundry /en-us/posts/technology/reinventing-data-core-adaptable-data-foundry/ Thu, 05 Mar 2026 16:08:59 +0000 https://blogs.thomsonreuters.com/en-us/?p=69795

Key takeaways:

      • There is a widening gap between AI ambition and readiness — The gap between AI ambition and data readiness is widening, making the adoption of an adaptable data foundry essential for scalable, explainable, and compliant AI outcomes.

      • A data foundry model directly addresses the root cause — A data foundry model enables organizations to industrialize data production, automate compliance, and ensure consistent data lineage, thereby overcoming the limitations of brittle, legacy data architectures.

      • Incorporate the data core into your AI planning — Reinventing the data core is now a strategic imperative for those enterprises that aim to thrive in 2026 and beyond, as agentic AI, regulatory demands, and integration complexity accelerate.


This article is the third and final installment in a 3-part blog series exploring how organizations can reset and empower their data core.

A defining theme of this year so far is the widening distance between organizational ambition and data readiness. Leaders want the capabilities they believe agentic AI delivers instantly — automated compliance, predictive integration for M&A, and decision-intelligence pipelines that reduce operational friction.

Without a data foundry, however, much of that will be impossible. Instead, workflows will remain brittle, AI agents will hallucinate under inconsistent semantics, and data lineage will break down across federated sources. Further, without a data foundry, regulatory mappings involved with the Financial Data Transparency Act (FDTA) and the Standard Business Reporting (SBR) framework cannot be automated, cross-functional insights will require manual reconciliation, and auditability will collapse under scrutiny.

This is not a failure of leadership. It is a failure of architectural design: a failure to recognize that a consolidated data foundation must precede these technologies, and that data security, auditability, and lineage are critical priorities.


For decades, organizations built monolithic systems that were optimized for stability and reporting. Today’s world demands modularity, continuous adaptation, and agent-driven interoperability. Architecture has shifted from build and operate to build and evolve. This is precisely what a data foundry enables.

Why reinvention can no longer wait

Throughout 2025 and now into the early months of 2026, data and AI have quietly shifted from innovation topics to enterprise constraints. Leaders across regulated markets are starting to recognize that the obstacles limiting their AI ambitions are neither mysterious nor technical — they are structural. These obstacles sit inside the data core, waiting inside the silent architecture that determines whether any form of automation, intelligence, or compliance can scale beyond a pilot.

The data bears this out. When you examine the work coming from Tier-1 research bodies, supervisory institutions, and global transformation benchmarks, a consistent narrative emerges beneath the headlines: AI is accelerating, regulation is hardening, and integration demands are expanding. Moreover, organizational data remains pinned to assumptions that were forged in static, pre-AI operating environments. This gap is not theoretical; rather, it is measurable, persistent, and directly correlated to business performance.


Let’s look at the AI results first. Across industries, organizations continue to experience a familiar pattern: early promise, limited adoption, and rapid degradation once the model encounters inconsistent semantics or fragmented lineage. Global studies show that the vast majority of enterprise AI initiatives still struggle to reach full production maturity, and among those that do, many encounter performance drift within the first year.

The driver is remarkably consistent. It is not the sophistication of the model nor the skill of the data science team — it is the quality, clarity, and traceability of the data that is feeding the system.

Taken together, these signals deliver a clear message. The gap between AI ambition and data readiness is widening, not narrowing. This is why the data foundry conversation matters now. It is not an abstract architectural concept. It is a response to the full stack of quantitative pressures the market has been telegraphing for years — costs rising, compliance hardening, AI accelerating, and integration straining under inconsistent semantics and fragile lineage.

A data foundry model directly addresses the root cause of this by industrializing the creation of consistent, reusable, explainable data products that can fuel agentic AI, support regulatory defensibility, and accelerate enterprise reinvention.

The numbers point to a simple conclusion. Reinvention is no longer optional, and the window to address the data core before agentic AI becomes standard practice is narrow and closing. The organizations that act now will be the ones that define what compliant, explainable, interoperable AI looks like in the next decade. Those that defer the work will find themselves restructuring under pressure rather than reinventing by choice.

This is the inflection point. In truth, the quantitative signals have made the case more clearly than a multitude of strategy narratives ever could.

The data foundry: A model for continuous alignment

Unsurprisingly, agentic AI introduces new, more demanding requirements, including:

      • machine-interpretable semantics;
      • context-preserving lineage across federated systems;
      • decomposition of enterprise knowledge into reusable data products;
      • dynamic trust-scoring tied to source reliability and timeliness;
      • automated compliance overlays and regulatory logic; and
      • cross-domain metadata orchestration.

These capabilities are non-negotiable. Indeed, they determine whether AI elevates risk or mitigates it, whether it accelerates productivity or introduces unrecoverable inconsistencies, and whether it augments decision quality or produces volatility.

A data foundry shifts organizations from artisanal, one-off data preparation toward industrialized data production, in which patterns replace pipelines, and building blocks replace custom engineering. This shift means that lineage is generated, not documented; semantics are governed, not patched; and compliance is automated, not reconstructed. In this way, reuse becomes the default, not the exception.

In fact, this process is analogous to manufacturing. Instead of producing bespoke components for each need, the enterprise creates standardized, high-fidelity data assets that can be assembled into any workflow, any AI use case, and any reporting requirement.
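The article stays at the conceptual level, but the "generated, not documented" idea can be sketched in a few lines of code. The following Python sketch is purely illustrative and not drawn from any particular data foundry product; the names `DataProduct`, `LineageEntry`, and `apply` are assumptions introduced here for illustration. The point it demonstrates: each transformation of a data asset automatically appends an auditable lineage record as a side effect, so provenance never has to be reconstructed after the fact.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable

@dataclass
class LineageEntry:
    step: str     # name of the transformation that was applied
    at: str       # ISO-8601 timestamp of when it ran
    source: str   # name of the product it was applied to

@dataclass
class DataProduct:
    """A reusable data asset that records its own lineage on every change."""
    name: str
    data: Any
    lineage: list = field(default_factory=list)

    def apply(self, step: str, fn: Callable[[Any], Any]) -> "DataProduct":
        # Lineage is generated by the transformation itself,
        # not documented separately after the fact.
        entry = LineageEntry(
            step=step,
            at=datetime.now(timezone.utc).isoformat(),
            source=self.name,
        )
        return DataProduct(
            name=self.name,
            data=fn(self.data),
            lineage=self.lineage + [entry],
        )

# Usage: each step leaves an auditable trail automatically.
raw = DataProduct(name="contracts_raw", data=["  ACME deal ", "beta DEAL"])
clean = raw.apply("trim_whitespace", lambda rows: [r.strip() for r in rows])
normed = clean.apply("normalize_case", lambda rows: [r.lower() for r in rows])

print([e.step for e in normed.lineage])  # ['trim_whitespace', 'normalize_case']
print(normed.data)                       # ['acme deal', 'beta deal']
```

Because each `apply` returns a new product rather than mutating the old one, earlier versions remain intact for audit, which is one simple way the "explainable, reusable data product" property described above can be made systematic rather than ad hoc.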

A data foundry becomes the quiet architecture behind every future capability, making these capabilities systematic rather than ad-hoc. The chart below showcases the progressive build-up using a data foundry, beginning with data intake and harmonization and ending with AI agent orchestration and reusable data products that learn from their deployment.


Unfortunately, organizations are still building increasingly advanced AI decisioning and efficiency solutions on top of an aging and brittle data foundation. The results are predictable: stalled initiatives, compliance exposure, and stakeholder frustration. Additionally, instead of asking why, organizations keep adding more tools — more dashboards, more cloud services, more AI pilots, and more flavors of transformation.

Clearly, enterprises aren’t dealing with an AI problem. They’re dealing with a data alignment problem disguised as progress within fragmented AI enclosures.

Reinvention starts at the data core

For more than a decade, firms across regulated industries have repeated the same mantra: Data is our most critical asset. When you peel back the layers, however, whether in board review sessions, integration meetings, or regulatory remediation audits, the evidence does not match the rhetoric.

Reinvention is no longer optional. Instead, it is the starting point for meeting the demands of 2026 and beyond. The institutions that thrive will be those that understand that the data core is not a technical asset — it is the operational backbone of the enterprise. Indeed, the institutions that succeed will be those that recognize the truth early: AI is an output, and the data core is the strategy. And the organizations able to industrialize their data — through a foundry model, through AXTent, through repeatable semantic structures — will be the ones leading innovation, reducing compliance risk, accelerating M&A synergies, and achieving enterprise-wide reinvention.

In the end, the real question isn’t whether AI will transform business; the question is whether the data foundation will allow it. And the answer is rebuilding your data core so AI can actually deliver the outcomes your organization needs — and that work begins now.


You can find more blog posts by this author here

The professional judgment gap: Tracing AI’s impact from lecture hall to professional services /en-us/posts/corporates/ai-professional-judgment-gap/ Thu, 05 Mar 2026 12:59:12 +0000 https://blogs.thomsonreuters.com/en-us/?p=69771

Key highlights:

      • Universities face pressure over pedagogy — Academic institutions are adopting AI as a reputational marker that’s driven by market pressure rather than educational need, creating a risk for students who can work with AI but not independently of it.

      • Entry-level roles under threat — AI is being deployed most heavily to automate the grunt work of entry-level positions in which foundational professional skills are traditionally built through struggle and feedback.

      • K-shaped cognitive economy emerging — Experienced professionals with existing expertise are gaining efficiency from AI, while entry-level workers are losing access to skill-building experiences.


According to Harvard University’s Professional & Executive Development division, innovation is defined as a “process that guides businesses through developing products or services that deliver value to customers in new and novel ways.” Along this journey, professional judgment is exercised numerous times to determine next steps at key stages.

Notably, the word technology is nowhere to be found in this definition — an absence that Dr. Heinsfeld, Assistant Professor of Learning Technologies at the University of Minnesota, has long found revealing. Instead, innovation is framed as creative problem-solving, contextual intelligence, and the ability to work across perspectives. Interestingly, Dr. Heinsfeld adds, none of these require constant automation. In fact, many of them are undermined by it.

However, AI adoption has the real potential to automate away the very experiences that build these capabilities, from university lecture halls to corporate offices. With notable data already suggesting as much, the risk that the current approaches to AI use in universities and companies are engineering away innovation and professional judgment skills is real, notes Dr. Tanaka, Group Leader in AI Research at Harvard and NTT Research.

Indeed, some observers view AI as the largest unregulated cognitive engineering experiment in human history. Yet, unlike medical drugs that require years of approval and testing, AI systems are reshaping how millions of students think, learn, and make decisions without a comparable approval process or a shared framework for discussing any potential “side effects,” as Dr. Heinsfeld pointed out.


Most worrisome is that AI is being deployed most heavily to automate precisely the entry-level roles where foundational professional skills are built.


So, what happens when an entire generation of future employees learn to delegate judgment before they develop it? And what actions do universities and companies need to take now to avoid this reality?

Risks of universities adopting AI under pressure

For universities, AI “has become a reputational marker, and not adopting AI is framed as institutional risk, regardless of whether an educational case has been made or not,” says Dr. Heinsfeld, adding that this is being driven, in part, by market pressure rather than pedagogical need.

Already, companies can greatly influence universities as employers of new graduates; and as such, AI systems are currently being optimized for speed, agreeability, and accessibility to stimulate ongoing use. However, as Dr. Heinsfeld contends, as universities race to earn the label AI ready without a careful, cautious, and detailed understanding of how AI may impact students’ cognitive processes, they risk damaging their reputations for pedagogical integrity.

In addition, the “data as truth” paradigm is a complicating factor, she says. Drawing on her research, Dr. Heinsfeld explains how data “is often framed as the idea of being a single source of truth based on the assumption that when collected and analyzed, it can reveal objective, indisputable facts about the world.” Indeed, this ubiquitous mindset across universities and corporations treats data — such as that used to train large and small language models — as objective and indisputable.

Yet this obscures critical decisions about what gets measured, whose perspectives are included, and what forms of knowledge are systematically excluded from AI systems. As Dr. Heinsfeld warns, when data becomes synonymous with truth, “knowledge is what is measurable and optimizable.” This narrows professional judgment to efficiency metrics rather than the interpretive depth, ethical reasoning, and cultural context that are essential for sound decision-making.

Judgment gap widens in workforce downstream

Under the current AI adoption approach, students could leave universities able to work with AI but not independently of it, a distinction emphasized by Dr. Heinsfeld. Like calculators, AI works as a tool only when foundational skills for its use exist first. Without this, graduates enter the workforce with a critical judgment gap that compounds as they move from college campuses into corporate employment.


AI adoption has the real potential to automate away the very experiences that build these capabilities from university lecture halls to corporate offices.


Most worrisome is that AI is being deployed most heavily to automate precisely the entry-level roles where foundational professional skills are built, warns Dr. Tanaka. Indeed, this is exactly the type of grunt work that teaches judgment through struggle and feedback. Over time, overuse of AI will result in quality being sacrificed because critical evaluation skills have atrophied.

Looking into the future, Dr. Tanaka foresees a K-shaped economy of cognitive capacity. Experienced professionals, with existing expertise and contextual judgment built through years of experience, will gain increasing efficiency from AI. Entry-level workers, however, will lose access to the valuable experiences that build professional judgment. This gap widens between professionals who can independently accelerate their workflows using AI and those whose traditional tasks are merely displaced by it.

Intervention may be able to break the cycle

The pattern is not inevitable, as both Dr. Tanaka and Dr. Heinsfeld explain. Drawing on Dr. Heinsfeld’s emphasis on institutional agency, meaningful intervention will depend on conscious, intentional choices made at every level. Both experts share their guidance for how different organizations can manage this:

Academic institutions — Universities must first recognize that AI adoption is a decision rather than an inevitability and make educational need the North Star for decision-making around AI. In her analysis, Dr. Heinsfeld emphasizes that when vendors set defaults, they quietly redefine academic practice. Defaults shape what is made visible or invisible and what becomes normalized. In AI-driven environments, universities often lose control over how models are trained and updated, what data shapes outputs, how knowledge is filtered and ranked, and how student and faculty data circulate beyond institutional boundaries — especially if decision-making is left to vendors. As a result, the intellectual byproducts of teaching and learning increasingly become inputs into external systems that universities do not govern.

Private entities — For organizations, Dr. Tanaka calls for feedback loops and other mechanisms that will promote more open discussion about AI use without stigma. In addition, companies need to proactively redesign entry-level roles to ensure these positions continue to cultivate judgment and foundational skills in an AI-driven environment. Likewise, Dr. Tanaka suggests that companies explicitly provide feedback about cognitive trade-offs to employees, fostering an understanding of possible skill atrophy.

Employees — Similarly, individuals working for organizations bear much of the responsibility for making sure critical thinking is enhanced, not eroded, by AI. Indeed, strategic decisions about when to use AI, made while seeking to preserve cognitive capacity and professional judgment, are key.

Looking ahead

In today’s increasingly AI-driven environment, a new paradigm is needed to combat the current operating assumption that optimization from AI is the sole path to progress. And because the current trajectory sacrifices human development for efficiency, the need for universities and companies to choose a different path is urgent — while they still have the judgment capacity to do so.


You can find out more about how organizations are managing their talent and training issues here

The AI Law Professor: When AI makes lawyers work more, not less /en-us/posts/technology/ai-law-professor-ai-makes-lawyers-work-more-not-less/ Tue, 03 Mar 2026 14:58:48 +0000 https://blogs.thomsonreuters.com/en-us/?p=69696

Key points:

      • The productivity promise is largely wrong — Emerging research shows that AI doesn’t reduce work — it intensifies it. Lawyers work faster, take on broader responsibilities, and extend their hours without recognizing the expansion. Further, because prompting AI feels like chatting rather than laboring, lawyers slip work into evenings and weekends without registering it as additional effort.

      • Self-reinforcing acceleration is the real risk — AI speeds tasks, which raises expectations, which increases reliance, which expands scope, ultimately creating a cycle that drives burnout in a profession already plagued by it.

      • Purposeful integration is the antidote — Legal organizations need to promote intentional governance structures that account for how people actually behave with AI, not how leadership imagines they will or should.


Welcome back to The AI Law Professor. Last month, I examined how AI is forcing us to rethink training for junior lawyers. This month, I examine a question that affects every lawyer: What happens when the efficiency gains we’ve been promised don’t materialize the way we expected? A recent study out of UC-Berkeley suggests the answer is more troubling than most law firm leaders realize.

If you’ve attended a legal technology conference anytime over the past two years, you’ve heard the pitch: Automate the mundane and elevate the meaningful.

A study in the Harvard Business Review by UC-Berkeley researchers Aruna Ranganathan and Xingqi Maggie Ye suggests we should be more skeptical. They tracked how generative AI (GenAI) changed work habits over eight months at a 200-person technology company. Their findings were striking — AI tools didn’t reduce work; rather, they intensified it.

According to the study, the tech employees studied were shown to work faster, take on broader responsibilities, extend their hours into evenings and weekends, and multitask more aggressively — all without being asked to do so. The promise of liberation became a reality of acceleration and overwork.

For those of us in the legal profession, this should be a wake-up call.

Three forms of intensification

The researchers identified three patterns that will sound familiar to anyone watching lawyers adopt GenAI in their work processes.

Task expansion

Because AI filled knowledge gaps, professionals in the study stepped into responsibilities that previously belonged to others. Product managers started writing code, and researchers took on engineering tasks. In legal contexts, the parallel is obvious. Associates use AI to attempt tasks once reserved for senior lawyers. Paralegals draft documents that previously required attorney oversight. Solo practitioners take on matters outside their core expertise because their AI tools make it feel manageable. The result isn’t less work distributed more efficiently; it’s more work concentrated in fewer hands, with less institutional knowledge guiding the output.

Blurred boundaries

AI blurred the boundaries between work and non-work. Because prompting an AI feels more like chatting than labor, lawyers (like the tech workers in the study) may slip work into lunch breaks, evenings, and commutes without registering it as additional effort. The conversational interface is seductive precisely because it doesn’t feel like work. It is work, however, and much more of it.

Pervasive multitasking

Workers managed multiple AI threads simultaneously, generating a sense of momentum that masked increasing cognitive load. For lawyers, this means running parallel research queries, drafting multiple documents at once, and constantly monitoring AI outputs, all while believing they’re saving time.

The productivity trap

The most important insight from the research is that these effects are self-reinforcing. AI accelerates tasks, which raises expectations for speed. Higher speed increases reliance on AI, and greater reliance expands the scope of what people attempt. And expanded scope generates even more work. Rinse and repeat.

Parkinson’s law: “Work expands to fill the time available for its completion.”

In a profession already plagued by burnout, this cycle should alarm us. The legal industry’s adoption of AI is being driven largely by the promise of doing the same work in less time. But if the Berkeley research is any guide, what actually happens is that we do more work in the same amount of time, or more work in more time, while telling ourselves we’re being more productive.

And because the extra effort feels voluntary, firm leadership may not see the problem until it manifests as errors, attrition, or ethical lapses. In law, the cost of impaired judgment isn’t just a missed deadline — it’s a client’s liberty, livelihood, or life savings.

From productivity to purposeful practice

The Berkeley researchers propose what they call an “AI practice” — intentional norms and routines that structure how AI is used, including when to stop and how work should and should not expand. I’d go further. For legal organizations, purposeful AI integration requires more than workplace wellness norms. It requires a strategic framework that aligns AI capabilities with organizational mission, ethical obligations, and sustainable human performance.

This means, first off, being honest about what AI actually does to workloads rather than what we hope it will do. If your firm adopted AI expecting to reduce associate hours, audit whether that has actually happened, or whether associates are simply filling reclaimed time with more work.

Second, it means building governance structures that account for how people actually behave with these tools, rather than how leadership imagines they will. The Berkeley study found that workers expanded their workloads voluntarily, without management direction. Top-down AI policies that focus solely on permissible use will miss the intensification that could be happening in plain sight.


Third, it means preserving space for the distinctly human work that AI cannot replicate, such as judgment, empathy, ethical reasoning, and the kind of creative problem-solving that emerges from genuine human dialogue — not from a conversation with a chatbot. The researchers also found that AI-enabled work became increasingly solitary and continuous, a dangerous trajectory.

The narrative that AI will free lawyers for higher-value work isn’t just optimistic. It’s a misunderstanding of how these tools interact with human psychology. AI doesn’t create leisure. It creates capacity — and without intentional structures, that capacity gets filled, not with strategic thinking, but with more of everything.

While it’s clear that AI will change the legal profession, the real challenge is whether law firms will integrate AI with purpose, shaping it to serve their values, their clients, and their professionals’ well-being, or whether they’ll allow the technology to quietly shape them into something they didn’t intend to become.

Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and author of a forthcoming book. He is “The AI Law Professor” and writes this eponymous column for the Thomson Reuters Institute.


You can find more about the use of AI and GenAI in the legal industry here.

2026 AI in Professional Services Report: AI adoption has hit critical mass, but now comes the tough business questions /en-us/posts/technology/ai-in-professional-services-report-2026/ Mon, 09 Feb 2026 13:05:35 +0000 https://blogs.thomsonreuters.com/en-us/?p=69356

Key findings:

      • AI adoption accelerates across professional services — Organization-wide use of AI in professional services almost doubled to 40% in 2026, with most individual professionals now using GenAI tools, and many preparing for the next wave of tools such as agentic AI.

      • Strategic integration and measurement lag behind usage — While AI use is widespread, only 18% of respondents say their organization tracks ROI of AI tools, and even fewer measure AI’s impact on broader business goals such as client satisfaction or revenue generation.

      • Communication around AI use remains inconsistent — While most corporate departments want their outside firms to use AI on client matters, less than one-third are aware whether their firms are doing so. Meanwhile, firms report receiving conflicting instructions from clients about AI use, highlighting a need for clearer dialogue and shared strategy around AI adoption.


Over the past several years, AI usage within the professional services industries has come into focus. As we enter 2026 in earnest, the early adoption phase of generative AI (GenAI) has come and gone. Today, most professionals have experimented with some form of GenAI, many organizations have integrated it into their workflows, and a number are now preparing for the next wave of technological innovation, such as agentic AI.

Given this, the question for professionals and organizational leaders has now become: What will be AI’s long-term impact on my business?


To delve into this question further, the Thomson Reuters Institute has released its 2026 AI in Professional Services Report, which takes a broad view of current AI usage and planning, sentiment toward AI, and AI’s business impact across legal, tax & accounting, corporate functions, and government agencies. Drawing on a survey of more than 1,500 respondents across 27 countries, the report finds a professional services world that has embraced AI’s use but is continuing to evolve its business strategy around implementation.

For instance, the report shows that organization-wide use of AI almost doubled to 40% in 2026, compared to 22% in 2025 — and for the first time, a majority of individual professionals reported using publicly available tools such as ChatGPT. Additionally, a majority of respondents said they feel either excited or hopeful about GenAI’s prospects in their respective industries, and about two-thirds said they felt GenAI should be applied to their work in some manner.

At the same time, however, many are exploring GenAI tools without much guidance as to how that use will be quantified or measured. Only 18% of respondents said they knew their organization was tracking return-on-investment (ROI) of AI tools in some manner, roughly the same proportion as last year. And even among those tracking AI metrics, most focus mainly on internally oriented operational measures; only a small proportion analyzed AI’s impact on their organization’s larger business goals, such as client satisfaction, external revenue generation, and new business won.


This slow move to strategic thinking also impacts client-firm relationships. Although more than half of both corporate legal departments and corporate tax departments want their outside firms to use AI on client matters, less than one-third said they were aware whether their firms were doing so. From the firm standpoint, meanwhile, confusion reigns: 40% of firm respondents said they have received contradictory instructions from various clients, with some directing them to use AI on matters and others forbidding it.

Indeed, about three-quarters of corporate respondents and firm respondents agreed that firms should be taking the lead in starting these conversations around proper AI use. Yet these discussions have not yet happened en masse. “Firms are reluctant — they claim it would compromise quality and fidelity,” said one U.S.-based corporate chief legal officer. “I think they are threatened by it.”

All the while, technological innovation progresses ever quicker. This year’s version of the report measures agentic AI use for the first time, finding that 15% of organizations have already adopted some type of agentic AI tool. Perhaps more interesting, however, is that an additional 53% report their organizations are either actively planning for agentic AI tools or considering whether to use them, suggesting a pace of adoption that may prove even faster than GenAI’s speedy rise.


Overall, the report makes it clear that most professionals do understand that change, driven by AI in the workplace, is undoubtedly here. Even compared with 2025, a higher proportion of professionals said they believe that AI will have a major impact on jobs, billing and revenue, and even the need for legal or tax & accounting professionals as a whole. The proportion of lawyers who see AI enabling the unauthorized practice of law as a major threat rose to 50% in 2026, from 36% in 2025.

Further, this report paints the picture of a professional services world that has embraced AI, begun to see its impact, and realized that it will have broader business and industry implications than previously imagined. As a result, the time for professionals and organizations to begin planning in earnest for an AI future has already arrived.

As a corporate general counsel from Sweden noted: “We cannot keep up with the modern-day corporations’ demands unless we also develop and adapt our way of working.”

You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here.

