AI literacy Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/topic/ai-literacy/ Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Pattern, proof & rights: How AI is reshaping criminal justice /en-us/posts/ai-in-courts/ai-reshapes-criminal-justice/ Fri, 10 Apr 2026 08:46:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=70255

Key insights:

      • AI’s greatest strength in criminal justice is pattern recognition — AI can process vast amounts of data quickly, helping law enforcement and legal professionals detect connections, reduce oversight gaps, and improve consistency across investigations and casework.

      • AI should strengthen justice, not substitute for human judgment — Legal professionals are integral to evaluating AI-generated outputs, especially when decisions affect evidence, warrants, and individuals’ constitutional rights.

      • The most effective model is human/AI collaboration — AI handles scale and speed, while judges, attorneys, and investigators provide context, accountability, and ethical reasoning needed to protect due process.


The law has always been about patterns — patterns of behavior, patterns of evidence, and patterns of justice. Now, courts and law enforcement can leverage a tool powerful enough to see those patterns at a scale and speed no human mind could match: AI.

At its core, AI works by recognizing patterns. Rather than simply matching keywords, it learns from large amounts of existing text to understand meaning and context and uses that learning to make predictions about what comes next. In the context of law enforcement, that capability is nothing short of transformative.
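For readers curious about the mechanics, here is a deliberately tiny, illustrative sketch (a toy word-pair counter, not how any production legal AI system is built) of that core loop: learn patterns from existing text, then use them to predict what comes next.

```python
from collections import Counter, defaultdict

# Toy illustration of pattern-based prediction: count which word follows
# which in a body of text, then predict the most common continuation.
corpus = (
    "the court granted the motion . "
    "the court denied the motion . "
    "the court granted the appeal ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1          # learn the pattern: nxt followed prev

def predict_next(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("court"))   # -> "granted" (seen twice vs. "denied" once)
print(predict_next("the"))     # -> "court" (the most common pattern)
```

Modern models learn vastly richer patterns than adjacent word pairs, but the underlying principle of pattern-based prediction is the same.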

These themes were front and center in a recent webinar from a joint initiative of the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI). The webinar brought together voices from across the justice system, and what emerged was a clear and consistent message: AI is a powerful ally in the pursuit of justice, but only when paired with the judgment, accountability, and constitutional grounding that human professionals can provide.

AI’s pattern recognition is a gamechanger

“AI is excellent,” said Mark Cheatham, Chief of Police in Acworth, Georgia, during the webinar. “It is better than anyone else in your office at recognizing patterns. No doubt about it. It is the smartest, most capable employee that you have.”

That kind of capability, applied to the demands of modern policing, investigation, and prosecution, is a genuine gamechanger. However, the promise of AI extends far beyond the patrol car or the precinct. Indeed, it cascades through the entire arc of justice — from the moment a crime is detected all the way through prosecution and adjudication.

Each step in that chain represents not just an operational efficiency upgrade, but an opportunity to make the system more fair, more consistent, and more protective of the rights of everyone involved.

Webinar participants considered the practical implications. For example, AI can identify and mitigate human error in decision-making, promoting greater consistency and fairness in outcomes across cases. And by automating labor-intensive tasks such as reviewing body camera footage, AI frees prosecutors and defense attorneys to focus on other aspects of their work that demand professional judgment and legal expertise.

In legal education, the potential of AI is similarly recognized. Hon. Eric DuBois of the 9th Judicial Circuit Court in Florida emphasizes its role as a tool rather than a substitute. “I encourage the law students to use AI as a starting point,” Judge DuBois explained. “But it’s not going to replace us. You’ve got to put the work in, you’ve got to put the effort in.”


AI can never replace the detective, the prosecutor, the judge, or the defense attorney; however, it can work alongside them, handling the volume and velocity of data that no human team could process alone.


Judge DuBois’ perspective aligns with broader judicial sentiment on the responsible integration of AI. In fact, one consistent theme across the webinar was the necessity of maintaining human oversight. The role of the legal professional remains central, participants stressed, because that ensures accuracy, accountability, and ethical judgment. The appropriate placement of human expertise within AI-assisted processes is essential to ensuring a fair and effective legal system.

That balance between leveraging AI and preserving human judgment is not just good practice; it’s a cornerstone of justice. While Chief Cheatham praises AI’s pattern recognition, he also cautions that it “will call in sick, frequently and unexpectedly.” In other words, AI is a powerful but imperfect tool, and those professionals who rely on it must always be prepared to intervene in those situations in which AI falls short. Moreover, the technology is improving extremely rapidly, and the models we are using today will likely be the worst models we ever use.

Naturally, that readiness is especially critical when individuals’ rights are on the line. “A human cannot just rely on that machine,” said Joyce King, Deputy State’s Attorney for Frederick County in Maryland. “You need a warrant to open that cyber tip separately, to get human eyes on that for confirmation, that we cannot rely on the machine.” Clearly, as the webinar explained, AI does not replace constitutional obligations; rather, it operates within them, and the professionals who use AI are still the guardians of due process.

The human/AI partnership is where justice is served

Bob Rhodes, Chief Technology Officer for Thomson Reuters Special Services (TRSS), echoed that sentiment with a principle that cuts across every application of AI in the justice system. “The number one thing… is a human should always be in the loop to verify what the systems are giving them,” Rhodes said.

This is not a limitation of AI; instead, it’s the design of a system that works. AI identifies the patterns, and trained, experienced professionals evaluate them, act on them, and are accountable for them.

That partnership is where the real opportunity lives. AI can never replace the detective, the prosecutor, the judge, or the defense attorney. However, it can work alongside them, handling the volume and velocity of data that no human team could process alone. So that means the humans in the room can focus on what they do best: applying judgment, upholding the law, and protecting an individual’s rights.

For judicial and law enforcement professionals, this is the moment to lean in. The patterns are there, the technology to read them is here, and the opportunity to use both in service of rights — not against them — has never been greater.


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.

Agentic AI following GenAI’s growth trajectory in legal, but with unique oversight challenges, new report shows /en-us/posts/technology/agentic-ai-oversight-challenges/ Thu, 09 Apr 2026 08:45:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=70278

Key takeaways:

      • Agentic AI poised for adoption uptick — Agentic AI is following GenAI’s rapid adoption in the legal industry, with less than 20% of firms currently implementing agentic systems but half planning or considering adoption in the near future, according to a new report.

      • Adoption depends on human oversight answers — Legal professionals are generally optimistic about agentic AI’s potential, but successful adoption depends on explicit guidance about human oversight and the lawyer’s role in maintaining ethical standards.

      • Time to retool AI education? — Agentic AI’s increased autonomy introduces new oversight and ethical challenges for law firms, making targeted education and clear guidance essential to understanding the differences from GenAI.


Over the past several years, law firms and corporate legal departments have turned towards generative AI en masse. At the beginning of 2024, just 14% of all law firms and legal departments featured an enterprise-wide GenAI tool. Just two years later, that number had already risen to 43% of all firms and departments, according to the 2026 AI in Professional Services Report, from the Thomson Reuters Institute (TRI). For large law firms or legal departments, those percentages — not surprisingly — are beginning to approach 100%.

With GenAI adoption now this widespread, legal industry leaders are turning their attention to two primary initiatives. One, of course, is how to get the most out of the AI tools they already have — a task that is proving a bit elusive. Currently, less than 20% of lawyers say their organizations measure AI’s return on investment, and most corporate lawyers say they have no idea how their outside law firms are approaching AI. Thus, instituting not just AI tools but also an AI strategy is the second top priority for law firms and corporate legal departments in 2026 and beyond.

However, even as the legal industry reaches a tipping point in adopting GenAI tools, technology innovation continues unabated. Agentic AI has emerged as the next wave of innovation that could change how lawyers work on a daily basis, offering a way to autonomously complete multi-step tasks. For example, agentic AI systems are already being built for the legal industry that independently research a regulation or law, draft a document based on the findings, identify pitfalls, and revise the document, with stops for human guidance instituted only as desired.
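To illustrate the shape of such a workflow, consider the following purely hypothetical sketch; every function name here is an invented placeholder for illustration, not any vendor’s actual API.

```python
# Hypothetical sketch of an agentic drafting pipeline with pre-determined
# human checkpoints. Every function is an illustrative stub; a real system
# would call an AI model (and a human reviewer) at each step.

def research(matter: str) -> str:
    return f"findings on {matter}"                        # stub: autonomous research

def draft_document(matter: str, findings: str) -> str:
    return f"draft for {matter}, grounded in {findings}"  # stub: autonomous drafting

def identify_pitfalls(document: str) -> list[str]:
    return ["unverified citation in section II"]          # stub: autonomous issue-spotting

def revise(document: str, pitfalls: list[str]) -> str:
    return document + " [revised: " + "; ".join(pitfalls) + "]"

def human_review(artifact: str) -> str:
    print("HUMAN CHECKPOINT, please review:", artifact)   # the lawyer approves or edits here
    return artifact

def run_agentic_draft(matter: str, checkpoints: set[str]) -> str:
    """Chain the steps autonomously, pausing for a human only at the
    checkpoints chosen up front; this is the key contrast with
    prompt-by-prompt GenAI."""
    findings = research(matter)
    if "research" in checkpoints:
        findings = human_review(findings)
    document = draft_document(matter, findings)
    document = revise(document, identify_pitfalls(document))
    if "final" in checkpoints:
        document = human_review(document)
    return document

print(run_agentic_draft("a new data-privacy regulation", checkpoints={"final"}))
```

The point of the sketch is the control flow: the steps chain together automatically, and the lawyer decides up front where the mandatory stops go.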

According to the AI in Professional Services Report, the legal industry is already making headway towards implementing agentic AI systems. For agentic AI to truly take hold in legal, however, lawyers still require more education around not only how it differs from the GenAI systems they already have in place, but also when and where human intervention needs to occur within an agentic system.

The early stages of agentic AI

Examining current agentic AI adoption in the legal industry almost takes one back in time — two years, to be exact. Following the public release of GenAI in late 2022, many legal industry organizations spent 2023 evaluating and experimenting with AI systems, usually with a small working group of interested guinea pigs. As a result, only 14% of survey respondents said their law firms or corporate legal departments were engaged in organization-wide GenAI rollouts at the start of 2024. However, more than half of respondents said their organizations expected to be rolling out large-scale GenAI systems over the next 1 to 3 years. The two years since have proved that prediction largely true.

Agentic AI usage in the first half of 2026 looks largely similar to GenAI in 2024. The legal industry started to experiment with agentic AI at the beginning of 2025, with an eye towards actual implementation in 2026 and beyond (particularly as legal software providers began to integrate agentic systems into their own products). As such, fewer than 20% of recent survey respondents say their organization is engaged in widespread agentic AI adoption, while about half say their organization is either planning to use or considering whether to use agentic AI in the near future.


By and large, lawyers feel positive about the agentic AI movement. When asked about their sentiment towards agentic AI, 51% of legal industry respondents said they felt excited or hopeful, while just 19% said they felt concerned or fearful. Further, about half (47%) said they actively believe agentic AI should be used for legal work, while 22% felt it should not, with the remainder saying they were unsure. These figures largely track the sentiment expressed about GenAI in 2024, and that positivity has only grown over time: from about 50% of legal professionals two years ago to two-thirds currently.

This all lends further credence to a rise in agentic AI usage similar to what law firms and corporate legal departments experienced with GenAI over the course of 2024 and 2025. Indeed, when asked when they expect agentic AI to be a central part of their workflow, few have baked agentic systems into their daily work currently, but a majority of legal industry respondents expect it to be central within the next 3 to 5 years.


The unique barriers of agentic AI adoption

Agentic AI does differ from GenAI in one crucial area that may limit its growth potential within the legal industry, however — autonomy. By and large, GenAI systems operate on a back-and-forth basis: Users provide the tool a prompt, receive its output, and then iterate back-and-forth from there. Agentic AI is intended to be more automated by design, only requiring human input at pre-determined points in the process. And that makes some lawyers understandably nervous.

When asked why they might feel hesitant about using agentic AI for legal tasks, the most common answer was a general fear of the unknown, but the second most common answer dealt with the need for careful monitoring and oversight. In fact, some respondents said they were excited about GenAI, but more cautious about agentic AI’s potential.

“Agentic AI, while exciting, to me removes oversight a step too far,” said one such lawyer from a US law firm. “I like the idea of prompting and reviewing a result. It is something else to have a machine have so much autonomy in the actual doing of a thing and potentially acting on my behalf without that very concrete review.”


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.


An assistant GC at a US company also pointed to potential privacy and security concerns, adding: “The fact that agentic AI operates in a much more autonomous way, with a lack of control from the user, means there are many unknowns that are hidden beneath the process.”

For law firm and corporate legal department leaders looking to potentially implement agentic AI systems into their practice, this means re-thinking what AI education and training will mean moving forward. Beyond that, however, legal AI educators also will need to make sure to pinpoint and perhaps over-explain those specific instances in which human oversight needs to occur in agentic systems. More autonomous does not mean fully autonomous, and particularly for lawyers with ethical duties to their work product, lawyer oversight will in fact be a necessary part of any agentic system.

For law firm or legal department leaders, that means that finding the right balance between efficient workflows and human intervention will be key to agentic AI adoption. And those organizations that can best communicate the human-in-the-loop role to their professionals up front will be rewarded with greater and more reliable adoption.

Clearly, lawyers feel positive about the agentic AI future. They just need the lawyer’s role in this new paradigm spelled out explicitly.

“Agentic AI is powerful, but its moral compass must come from humans,” one UK law firm barrister noted aptly. “Lawyers are trained to safeguard fairness, rights, and the rule of law — principles that should guide how AI is designed, governed, and deployed. Hope lies in our ability to shape AI through these values for fairer outcomes for society as a whole.”


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here

The AI Law Professor: When AI quietly hijacks legal judgment /en-us/posts/technology/ai-law-professor-first-draft-trap/ Wed, 08 Apr 2026 07:56:33 +0000 https://blogs.thomsonreuters.com/en-us/?p=70293

Key takeaways:

      • Anchoring distorts judgment before you begin — Research shows a first draft shapes subsequent decisions, and an AI draft is the most seductive anchor imaginable, because it looks exactly like something a lawyer would write.

      • The First Draft Trap inverts legal training — The Socratic method builds the habit of holding multiple possibilities in tension before committing, but an AI first draft collapses that space before the real thinking begins.

      • The fix is to ask for the map, not the draft — Requesting multiple strategic framings before writing keeps judgment where it belongs and uses AI to expand possibilities rather than foreclose them.


Welcome back to The AI Law Professor. Last month, I examined why promised efficiency gains often become a cycle of work intensification. This month, I want to address a subtler challenge. I call it the First Draft Trap, and understanding it may change how you reach for AI the next time a new matter lands on your desk.

We have all heard the pitch: Staring at a blank page? Just prompt the AI. In seconds you have a working draft: structured, coherent, and surprisingly competent. The blank page problem, that ancient enemy of productivity, thus has been vanquished.

Except the blank page itself was never just an obstacle; rather, it was a space of possibility. For lawyers, it was the space in which the most important part of their work actually happens. Now, with AI in the mix, that may be changing.

Welcome to the First Draft Trap.

Simply put, the First Draft Trap is this: The moment you accept an AI-generated draft as your starting point, you have already made the most consequential decision of the entire project — most importantly, you made it by not making it. You let the machine choose your direction, your framing, and your theory. Everything that follows is editing; and editing, no matter how rigorous, is not the same as thinking.

The cognitive hijack

There is solid psychology behind why this happens. Daniel Kahneman and Amos Tversky demonstrated in their landmark 1974 paper, “Judgment under Uncertainty: Heuristics and Biases,” that once people are exposed to an idea, that first impression becomes a mental anchor that distorts their subsequent judgments. In their experiments, subjects who watched a roulette wheel spin to a random number still let that number influence their estimates of completely unrelated quantities. The anchor held even when people knew it was meaningless.


Please join Tom Martin on April 28–29 for two days of keynotes, panels, and workshops on AI and the legal profession. It’s virtual and completely free.


An AI first draft is the most seductive anchor imaginable. It is not random — it is plausible, and it is well-organized. It sounds like something a lawyer would write. And that is precisely what makes it dangerous. You know intellectually that it is just one of many possible approaches to addressing the matter, but the anchor holds anyway.

That is the First Draft Trap at the cognitive level. The AI draft is not just one option you happen to prefer. It is a filter that prevents you from seeing the other options that were available to you, the roads you never even noticed that you did not take.

Consider what this means for a profession built on the opposite instinct. From the first day of law school, lawyers are trained to resist the obvious answer and to think like a lawyer. The Socratic method exists for exactly this reason. A good professor hears your confident response and asks: What else? What if the facts were different? What is the argument on the other side? The goal is not to arrive at an answer, per se. It is to build the mental habit of holding multiple possibilities in tension before committing to any one of them.

The First Draft Trap is the anti-Socratic method. It delivers a confident answer before you have even formulated the question properly — and instead of interrogating it, you polish it.

The value of the blank page

Think about what a senior partner actually does when a junior associate brings them a memo. The partner’s value is not better writing; rather, it is peripheral vision: The ability to see what the memo does not address, the argument not considered, or the framing that would land differently with this particular judge or this particular jury. That capacity to see beyond the document in front of them is why clients pay senior partners premium rates. And it is precisely the muscle that atrophies when your default workflow begins with the prompt “generate a draft.”


The AI draft is not just one option you happen to prefer. It is a filter that prevents you from seeing the other options that were available to you, the roads you never even noticed that you did not take.


The two-system framework offered by Kahneman and Tversky gives us a clean way to describe what is going wrong. System 1 is fast, intuitive, and pattern-matching, while System 2 is slow, deliberate, and analytical. The practice of law, at its best, is a System 2 discipline. We, as lawyers, are trained to override gut reactions, challenge assumptions, and think through consequences before acting.

In this way, the AI first draft feels like a System 2 output. It is structured, footnoted, and methodical. However, your decision to accept it as a starting point is pure System 1 — a fast, intuitive grab at the nearest plausible answer. You have used a sophisticated tool to bypass the sophisticated thinking the tool was supposed to support. That uncomfortable period of ambiguity, of not knowing which path is best, is where the real lawyering lives.

What to do instead

None of this means stop using AI. It means stop using AI to skip the hard part that matters.

Before you ever ask for a draft, ask for the map. Describe the matter or document you are working on, then ask the AI for three fundamentally different strategic framings for the problem. For each framing, request the strongest argument in its favor and its most serious vulnerability. Then ask which framing best fits the client’s goals, the audience, or the procedural posture. Close with a clear instruction: Do not write a draft yet.
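As a concrete illustration, that map-first request can be captured in a reusable template. The sketch below is hypothetical wording with placeholder details; adapt it to your matter and your tool.

```python
# A minimal template for the "ask for the map, not the draft" workflow
# described above. The wording and placeholders are illustrative only.

MAP_FIRST_PROMPT = """\
Here is the matter I am working on: {matter_description}

Before any drafting:
1. Give me three fundamentally different strategic framings of this problem.
2. For each framing, state the strongest argument in its favor and its
   most serious vulnerability.
3. Tell me which framing best fits the client's goals ({client_goals}),
   the audience ({audience}), and the procedural posture ({posture}).

Do NOT write a draft yet.
"""

print(MAP_FIRST_PROMPT.format(
    matter_description="opposing a motion to compel arbitration",
    client_goals="keep the dispute in federal court",
    audience="a judge skeptical of boilerplate arbitration clauses",
    posture="pre-discovery",
))
```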

That last instruction is the key. It keeps you in the driver’s seat during the phase that matters most. You are using AI to expand the possibilities before you prune them, not after. And, most importantly, it gives you the opportunity to think for yourself about other important possibilities and add them in.

In the terms used by Kahneman and Tversky, use AI to fuel System 2, not to hand the controls to System 1. Let the machine generate options, and you exercise judgment.

For lawyers, the ability to see what is not there is the whole game.

Do not let the first draft blind you to it.


Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and author of a forthcoming book. He is “The AI Law Professor” and writes this eponymous column for the Thomson Reuters Institute.

From emerging player to contender: How Latin America can compete in the global AI race /en-us/posts/technology/latam-ai-investment/ Mon, 06 Apr 2026 11:57:46 +0000 https://blogs.thomsonreuters.com/en-us/?p=70259

Key takeaways:

      • Strategic collaboration is becoming a defining strength for the region — Latin American organizations are realizing that progress in AI accelerates when they combine forces by linking industry expertise, academic talent, and public‑sector support.

      • AI initiatives rooted in real local challenges are gaining global relevance — By developing solutions grounded in the region’s own structural needs, whether in infrastructure, finance, agriculture, education, or mobility, many LatAm firms are producing technologies that are both highly impactful and naturally scalable.

      • Demonstrating clear outcomes is becoming fundamental — Organizations that show concrete operational improvements, measurable efficiencies, or stronger customer outcomes are strengthening their position with investors and partners.


In recent years, Latin America has experienced significant growth in investments related to AI, yet the region’s share of global AI investment remains strikingly low given that it makes up around 6.6% of global GDP, highlighting the opportunity to scale AI initiatives even further. Although there are notable differences among countries, Mexico and Brazil — the two largest LatAm economies — stand out for their volume of AI projects and funding, followed by other nations such as Chile, Colombia, and Argentina.

By recognizing the region’s strengths — which include cost-effective operations, access to data, clean energy, and public support — the region’s businesses can better position themselves and design strategies to draw in international investors that may be increasingly seeking promising locations for AI development.

Lessons from LatAm’s AI success stories

Latin America has produced remarkable AI success stories that can serve as models to build confidence among investors. These cases — involving companies that attracted substantial investment and achieved growth — demonstrate valuable best practices that range from technological innovation to working with governments and corporations. Some of these best practices include:

Building strategic alliances

The journey of innovation rarely unfolds in isolation. At times, the presence of large, established companies, whether local industry leaders or multinationals, has served as a catalyst for AI projects. The experience of Kilimo, a startup that specializes in AI-powered agricultural irrigation, proves it. Now, Kilimo is partnering with EdgeConneX, a data center company based in the United States, on a community initiative.

Academia, too, can be woven into this narrative. Collaborations with research centers or universities offer scientific credibility and connect ventures with emerging talent. In Mexico, AI startups often originate within university settings — such as computer vision projects from the National Autonomous University of Mexico (UNAM), for instance — and maintain agreements that sustain ongoing innovation and technical progress even with modest resources. And academic validations, whether in published papers or conference accolades, tend to resonate with foreign investors. Indeed, the emergence of this ecosystem that features early corporate clients and academic mentors frequently lends a distinctive appeal for those seeking investment.

Focusing on local problems with global impact

Within Latin America, certain issues prove especially relevant in situations in which AI solutions intersect with sectors renowned for regional strengths, such as fintech and financial inclusion, agrotech optimizing agriculture, and foodtech drawing on local ingredients. The experience of Chilean food startup NotCo — which attracted international investment and subsequently exported its AI-driven model — suggests how innovations rooted in local context may generate broader attention.

By addressing needs in urban transport, education, mining and related areas, local LatAm companies can provide access to homegrown data and users, which can further refine technology and open pathways for investors into similar emerging markets. When AI solutions respond to genuine pain points rather than mere novelty, momentum often builds more quickly, and the model finds validation among those who evaluate investments.

Showing results and AI ROI early on

Questions about AI’s return on investment linger for many executives. Evidence of clear metrics like cost savings, sales growth, or error reduction can prove persuasive, especially when complemented by success stories from local clients.

Recent studies show that companies are reporting measurable returns from AI adoption, and such figures tend to reassure those considering investment by illustrating tangible improvements. Testimonials or independent validations, such as a university study, can further illuminate achievements.

The act of quantifying impact — whether in efficiency, revenue, or other relevant KPIs — has a way of transforming perceptions from uncertainty toward clarity.

Leveraging government incentives and collaborations

Many Latin American nations have put forth support programs for AI and tech projects, such as non-repayable funds, soft loans, and tax benefits for innovation, offered through various national initiatives.

Public financing, when present, often acts as a stamp of validation for private investors. For example, such trust has extended to Brazilian startups receiving Finep support for AI health projects, which in turn can shift perceptions among foreign venture capital firms. Engagement in government pilots, such as smart city initiatives or solutions for ministries, provides valuable exposure. In such contexts, public-private partnerships and incentives seem to act as quiet levers for growth and legitimacy.

Seeking smart and diversified financing

Financial strategies in Latin America have been shaped by the interplay of local and foreign capital. Local funds often bring insights and patience, while foreign funds may offer larger investments and global scaling experience. Ownership dilution sometimes accompanies the arrival of strategic investors, whose networks can prove invaluable. Programs like 500 Startups, Y Combinator, MassChallenge, and international competitions have ushered LatAm AI startups such as Heru, Rappi, Bitso, and Clip into new rounds of capital following increased exposure.

Efficiency in capital management, which can be demonstrated with lean burn rates and milestone achievement with limited resources, signals an ability to execute within the realities of LatAm, which may enhance the appeal for future investments. The cultivation of relationships and responsible stewardship of capital frequently matters as much as the funds themselves, suggesting that the value of mentorship, contacts, and reputation is often intertwined with deepening financial support.

Unlocking AI investment

By applying these principles, Latin American companies have achieved a better position to attract AI investments to their projects and help position the region as a viable destination for technology capital. These recent experiences show that when a LatAm company combines innovation, talent, and strategy — while communicating its story well — it can win over global and local investors alike. Each of the best practices noted above is based on real lessons: international alliances (NotCo with US funds), leveraging incentives (Brazilian companies funded by Finep), talent formation (Santander and Microsoft programs), focus on ROI (successful use cases that convince boards), and more.

Latin America has challenges but also unique advantages. Companies that manage to navigate this environment intelligently will increase their chances of securing the financing needed to innovate and grow. By doing so, they will contribute to a virtuous circle in which each new success attracts more investment to the region and opens doors for the next generation of LatAm AI ventures.


You can find more about the challenges and opportunities in the Latin American region here

Honing legal judgment: The AI era requires changes to how lawyers are trained during and after law school /en-us/posts/legal/honing-legal-judgment-training-lawyers/ Thu, 02 Apr 2026 15:36:44 +0000 https://blogs.thomsonreuters.com/en-us/?p=70236

Key takeaways:

      • AI threatens traditional lawyer development — As AI automates entry-level legal tasks like research and writing that have historically honed legal judgment skills, the profession faces a crisis in how new lawyers will develop such judgment abilities.

      • The profession can’t agree on what constitutes “legal judgment” — Unlike other professions, there is no agreed-upon definition of legal judgment or clear standards for when AI should be used.

      • Implementation requires unprecedented coordination and funding — A legal education fund as a proposed solution would require a small percentage of legal services revenue and coordinated action across law schools, legal employers, and state regulators.


This is the second of a two-part blog series that looks at how lawyer training needs to evolve in the age of AI. The first part of this series looked at how lawyers can keep their skills relevant amid AI utilization.

New lawyers’ command of the key skills that comprise legal judgment has received mixed reviews, according to a recent white paper from the Thomson Reuters Institute that advocated for cultivating practice-ready lawyers. The white paper was based on feedback from thousands of experienced lawyers, judges, and law students and raises questions about how legal judgment forms when AI assistance is used for task completion.

Jordan Furlong, a legal market analyst, notes that the white paper calls for ways “… to accelerate the development of legal judgment early in lawyers’ careers.”

The challenge is that each part of the profession — law schools, employers, state supreme courts (as regulators) — has distinctly separate responsibilities. That means that, in the age of AI, coordination across the entire legal profession is needed, especially as AI reduces the availability of traditional first jobs.

Furlong points out that there is no consensus on what legal judgment is, nor any agreed-upon standards for when AI should be used in legal work. To bring clarity to these issues, the white paper proposed a profession-wide model that integrates three critical elements: i) work-based learning that’s modeled on medical residencies; ii) micro-skill decomposition of legal judgment; and iii) AI-as-thinking-partner throughout pedagogy.

Three pillars for an AI-era lawyer formation system

Not surprisingly, overreliance on AI can erode critical analysis and solid legal judgment skills. Addressing these concerns requires a comprehensive reimagining of how lawyers are educated and trained. One solution lies in three interconnected pillars that together form a cohesive system for developing legal judgment in an AI-integrated world.

Pillar 1: Integrate work experience into legal education

Core skills such as legal research, writing, and document review help develop legal judgment; yet these skills could collapse once AI assumes such tasks. The Brookings Institution recently proposed a model to preserve entry-level professional development in an AI era. This parallels the TRI white paper’s calls for mandatory supervised postgraduate practice as a key part of legal licensure.

While implementing a full residency model presents challenges, several law schools have already pioneered approaches that demonstrate the viability of work-integrated legal education that, if scaled appropriately, could improve new lawyers’ practice and judgment skills. For example, Northeastern Law School guarantees all students nearly a year of full-time practice experience before graduation through four quarter-length legal positions. The program integrates supervised practice into the curriculum so graduates can gain substantial hands-on experience alongside their classroom instruction.

Also, one state’s program offers an alternative pathway to bar admission through practice-based assessment rather than the traditional bar exam. The program demonstrates that competency can be evaluated through supervised experiential learning.

Pillar 2: Decompose legal judgment into teachable micro-skills

The legal profession needs to come to a common definition of legal judgment and develop its components to teach the concept effectively. “We can’t teach what we can’t describe,” Furlong says. To develop legal judgment, the profession must define its components, including:

      • Pattern recognition — The ability to identify when different fact patterns are related to similar legal frameworks and distinguish when superficially similar cases are legally distinct.
      • Strategic calibration and proportionality — This means understanding what level of effort, precision, and risk each matter requires and matching responses to the stakes involved.
      • Reasoning through uncertainty — This is the capacity to make defensible decisions and provide sound counsel even when the law is ambiguous, unsettled, or silent on an issue.
      • Source evaluation and authority weighting — This includes knowing which legal authorities are most suitable and being able to assess their persuasive value.
      • Ethical judgment under pressure — This means spotting conflicts, confidentiality issues, and duty-of-candor moments while maintaining competence and knowing when to escalate beyond expertise.

Breaking down legal judgment into these discrete components makes it possible to design targeted teaching interventions. For example, one former law professor and legal education executive suggests we back into AI-assisted workflows by requiring a short verification log (detailing sources checked, changes made, and why); running attack-the-draft drills (find missing authority, weak inferences, and jurisdictional mismatch); and preserving slow work as formative work (citation chaining, updating, and adversarial research memos).
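For teams that want to systematize the verification log, a minimal sketch of what a single entry might capture appears below; the structure and field names are purely illustrative, not a prescribed standard.

```python
from dataclasses import dataclass, field

# Hypothetical structure for the "short verification log" described above.
# The fields mirror its three elements: sources checked, changes made, and why.

@dataclass
class VerificationEntry:
    source_checked: str   # the authority the lawyer actually read
    change_made: str      # what was corrected or confirmed in the AI draft
    rationale: str        # why the change (or confirmation) was warranted

@dataclass
class VerificationLog:
    matter: str
    entries: list[VerificationEntry] = field(default_factory=list)

log = VerificationLog(matter="Summary judgment memo (illustrative matter)")
log.entries.append(VerificationEntry(
    source_checked="Primary authority cited in the draft, read in full",
    change_made="Corrected an overstated holding in the AI draft",
    rationale="The court ruled on narrower grounds than the draft claimed",
))
print(log)
```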

With judgment skills clearly defined and work experience integrated into training, the profession must then tackle how AI itself should be incorporated into lawyer development.

Pillar 3: AI-as-thinking-partner throughout a lawyer’s career

Warnings about overreliance on AI are mounting. The legal profession must provide clear standards for when and how AI should be used, with training in verification and judgment skills. Overreliance on AI could compromise lawyers’ capacity to fulfill their fiduciary duties to clients.

A phased approach to introducing AI in legal work helps protect critical thinking while building AI competency. For example, in Year 1, law students could complete core legal reasoning exercises without AI assistance in order to better develop their analytical muscles. In Year 2, students could use AI as a research assistant with mandatory verification protocols that teach them to check outputs against authoritative sources. Finally, in Year 3, residencies could immerse students in real-world AI workflows under proper supervision while providing feedback.

These three pillars form a coherent vision for lawyer formation in the AI era. However, the most well-designed system faces the obstacle of funding.

The challenge of who pays

Perhaps the most difficult part of any overhaul is the cost. The medical residency model works because the federal government funds graduate medical education — up to $15 billion-plus annually — for teaching young medical students to be doctors. Legal education has no equivalent. Without addressing funding, however, even the best reforms will fail.

One idea is to establish a legal education fund that’s supported by an assessment of a small percentage of the legal industry’s gross legal services revenue (while exempting solo practitioners and firms with less than $500,000 in annual revenue). These funds could be used to subsidize thousands of supervised residency placements, fund law school curriculum development, support bar exam alternative assessments, and provide employer training and supervision stipends.


The challenge is that each part of the profession — law schools, employers, state supreme courts — has distinctly separate responsibilities, and that means coordination across the entire legal profession is needed.


This proposal, of course, would require unprecedented coordination and financial commitment from the legal profession. Skeptics might argue that market forces can solve this problem, or that firms will simply create new training pathways, or that AI will prove less disruptive than feared. However, waiting for market forces risks a lost generation of lawyers. The medical profession already learned this lesson when its voluntary reforms failed; only later did coordinated regulatory intervention produce the consistent quality standards the medical industry sees now.

What is clear is that inaction is resulting in degradation of lawyering skills. “Maybe… we need catastrophic external intervention to bring about the wholesale changes we can’t manage from the inside,” Furlong suggests.

However, the question is whether the legal profession will wait for a crisis to force change or act proactively to make the needed changes now, before the crisis hits.


You can learn more about the impact of AI on professional services organizations at TRI’s upcoming 2026 Future of AI & Technology Forum here

AI use and employee experience: New research reveals guidance gap in professional services /en-us/posts/technology/ai-guidance-gap/ Mon, 30 Mar 2026 11:23:47 +0000 https://blogs.thomsonreuters.com/en-us/?p=70090

Key takeaways:

      • Employees face contradictory messages or none at all — Nearly 40% of professionals surveyed report receiving conflicting directives about AI usage from clients and leadership, while half report no client conversations about AI have occurred at all.

      • Workers lack feedback on whether their AI efforts matter — Professionals who are experimenting with AI tools without knowing if their efforts are valued are left uncertain about whether investing time in developing AI skills is worth it.

      • Job displacement fears are rising — While employees remain cautiously optimistic about AI usage in their workplace, concerns about job displacement have doubled over the past year.


As generative AI (GenAI) tools flood into legal and accounting workplaces, organizations are deploying powerful technology without giving their employees clear directions on how to use it. Worse, some have received no guidance.

New research that underpinned the recent 2026 AI in Professional Services Report from the Thomson Reuters Institute (TRI) reveals a disconnect between AI availability and organizational guidance, which is creating confusion that may undermine both employee experience and the technology’s potential value. (The report’s data was gathered from surveys of more than 1,500 legal, tax, accounting, and compliance professionals across 26 countries.)

Employees navigate inconsistent AI policies or none at all

Approximately 40% of the professionals surveyed said they received contradictory guidance from clients and leadership about AI tool usage, with directives both encouraging and discouraging their use on projects and in RFPs. This ambivalence is slowing down decision-making at the front lines — a place in which AI could deliver the most value.

Equally concerning is the fact that half of professionals indicated that no conversations with clients about AI tool usage have taken place yet. And when discussions do occur, concerns about data protection and accuracy are the main topics.


This confusion extends to external relationships as well. More than two-thirds of corporate and government clients remain unaware of whether their outside professional service providers are even utilizing GenAI. And the majority of clients have provided no direction whatsoever to their outside law firms concerning AI use, respondents said.


Organizations often ignore what employees need to know

Perhaps most revealing is how organizations are measuring — or failing to measure — whether their AI investments are paying off. Almost half of respondents said their organizations are not measuring return on investment (ROI) at all. Among the minority (18%) of respondents who said their organizations do track ROI, the metrics they use tell a story about organizational priorities. The fact that internal cost savings and employee usage rates lead the list suggests a focus on efficiency over innovation or quality improvements.


This measurement vacuum has consequences for employee experience. Without clear success metrics, employees lack feedback on whether their AI experimentation is valued, discouraged, or even noticed. The absence of ROI frameworks also makes it hard to justify training investments or dedicated time that allows employees to develop AI fluency.

AI usage doubles while support systems fall behind

AI usage among professional service organizations has nearly doubled over the past year, and professionals are increasingly integrating these tools into their workflows, the report shows. Yet organizational infrastructure that could support this adoption surge lags badly. Most professionals said they expect GenAI to become central to their work within the next two years — but that may be happening without roadmaps from their employers.

In addition, notable barriers in employees’ usage of AI remain. When asked what barriers could prevent their organization from more widely adopting GenAI and agentic AI, almost 80% of professionals cited concerns over inaccurate responses. Other concerns included worries over data security, privacy, and ethical use. Most of these suggest an ongoing lack of trust in GenAI.


The tool landscape adds another layer of complexity. Publicly available tools dominate current usage, with more than half of respondents (57%) citing their use, while proprietary or industry-specific solutions remain largely in the consideration phase. This suggests employees are often self-provisioning AI tools rather than working within enterprise-supported ecosystems. This potentially opens organizations to increased risk exposure because of security gaps, compliance risks, and inconsistent quality.

Employees’ job displacement fears increasing

Despite these challenges, employee sentiment toward AI remains cautiously optimistic. More than half (57%) of respondents said they are either hopeful or excited about the future of GenAI in their industry. Clearly, employees see AI’s potential to enhance their efficiency, automate routine tasks, and free up their time for higher-value work.

At the same time, hesitation and concern among employees are rising, particularly around accuracy, job displacement fears, and the unknown implications of autonomous AI systems. Notably, concerns about job displacement have doubled over the past year, and this trend demands organizational attention and transparent communication about a workforce strategy to combat this concern.

What organizations need to do now

Organizational leaders who are serious about positive employee AI experiences need to step up their efforts to provide guidance to employees and gain the ROI that AI promises. Specific steps they can take include:

      • Draft clear and consistent guidance — Create explicit policies for employees about in which instances AI use is encouraged, required, or prohibited. This includes client communication protocols, data-handling requirements, and escalation procedures in those situations in which AI outputs seem questionable.
      • Develop and implement meaningful ROI metrics — Organizations must move beyond usage rates and cost savings as key success measurements. Tracking data points that capture quality improvements, time redeployed to strategic work, and client feedback on AI-enhanced deliverables presents a more comprehensive picture. Also, leaders need to share these metrics transparently in order to give employees an understanding of organizational priorities.
      • Invest in structured learning — The survey shows professionals are experimenting with dozens of different tools from ChatGPT to specialized legal tech platforms. Organizations should curate recommended toolsets, provide hands-on training, and create communities of practice in which employees can share effective prompts and use cases with other users.

Our data shows that the employee experience around AI adoption reveals a workforce that is hopeful but hungry for direction and concerned about job impacts. Leaders who implement these actions effectively are more likely to unlock the strategic value that AI promises while building the trust and competence needed for their organizations and their employees to thrive in an automated future.


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here

Helping the legal profession get AI‑ready: A new advisory board takes shape /en-us/posts/legal/ai-advisory-board/ Thu, 26 Mar 2026 11:31:32 +0000 https://blogs.thomsonreuters.com/en-us/?p=70080

Key insights:

      • AI is already reshaping the legal profession — AI is already embedded in lawyers’ day-to-day legal work, with a significant share of both law firm attorneys and in-house legal teams actively using GenAI tools and many expecting it to become central to their work within the next five years.

      • AIFLP Advisory Board was formed to prepare lawyers for an AI-reshaped profession — TRI convened 21 respected leaders from legal education, private practice, the judiciary, and AI ethics and governance to help ensure lawyers and law students are prepared for a profession reshaped by AI.

      • Human judgment remains central in an AI-enabled legal future — Becoming AI-ready is not simply about learning to use new tools; the Advisory Board emphasizes that strengthening irreplaceable human capabilities is critical.


In today’s tech-driven environment, AI is no longer a future concept for the legal profession — it’s already here, and it’s changing how lawyers work, learn, and serve clients. Recognizing just how fast the evolution is moving, the Thomson Reuters Institute (TRI) has launched the AI and the Future of Legal Practice (AIFLP) Advisory Board, bringing together a group of respected leaders from across the legal ecosystem to help guide what comes next.

The board includes 21 accomplished voices from legal education, private practice, the judiciary, and AI ethics and governance. Their shared goal is simple but ambitious: Help ensure that both today’s lawyers and tomorrow’s law students are prepared for a profession being reshaped by AI.

Why now?

Because the shift is already underway. According to TRI’s recent 2026 AI in Professional Services Report, 41% of law firm attorneys say their organizations are already using some form of generative AI (GenAI), and nearly half of those at corporate legal departments report that AI tools are being rolled out there too. Even more telling, most professionals said they expect GenAI to become central to their day‑to‑day work within the next five years.

That pace of change raises big questions about competence, ethics, education, risk, and access to justice. And those questions don’t have easy answers.

What the Advisory Board will focus on

The AIFLP Advisory Board is designed to tackle those challenges head‑on. Its work will center on four key areas that are already under pressure as AI adoption accelerates:

      • Legal education and talent development
      • Ethics, professional competence, and accountability
      • Governance, risk management, and client counseling
      • Access to justice and modern service delivery

The Advisory Board’s early focus areas will look at how AI is actually changing legal practice today, what future‑ready lawyers really need to know, and how legal education and real‑world practice can better align. The emphasis is not just on using AI tools, but on strengthening the human skills that matter most, such as sound judgment, critical thinking, and careful verification of AI‑generated outputs.

Shaping the future, not reacting to it

Citing the critical need for this Advisory Board’s creation, Mike Abbott, Head of the Thomson Reuters Institute, notes that the legal profession is at a crossroads, and it can either react to AI‑driven disruption or take an active role in shaping how these technologies are used to support lawyers, courts, and the public.

“By assembling a board of distinguished leaders, our goal is to help practicing lawyers and the lawyers of the future navigate a rapidly evolving landscape,” Abbott said. “Ensuring that legal education strengthens irreplaceable skills such as critical thinking, human judgment and effective communication helps make AI use safe and effective. The Board’s efforts will ultimately help shape a future-ready profession, leading to better outcomes for all.”

Meet the AIFLP Advisory Board Members

By convening experienced leaders from across the profession, TRI hopes to help lawyers navigate this landscape with confidence. Advisory Board Members include:

      • Michael Abbott, Head of the Thomson Reuters Institute
      • Soledad Atienza, Dean, IE Law School
      • The Honorable Jennifer D. Bailey, (Ret.), Partner, Bass Law
      • Benjamin Barros, Dean, Stetson University College of Law
      • Professor Sara J. Berman, University of Southern California, Gould School of Law
      • Megan Carpenter, Dean Emeritus, University of New Hampshire Franklin Pierce School of Law
      • Ronald S. Flagg, President, Legal Services Corporation
      • Donna Haddad, AI Ethics and Governance expert, and founding member, IBM AI Ethics Board
      • Johanna Kalb, Dean and Professor of Law, University of San Francisco School of Law
      • The Honorable Nelly Khouzam, Florida Second District Court of Appeal
      • The Honorable William Koch, Dean, Nashville School of Law, and former Tennessee Supreme Court Justice
      • Sheldon Krantz, retired partner, DLA Piper, and a founder, DC Affordable Law Firm
      • Stefanie A. Lindquist, Dean, School of Law, Washington University in St. Louis
      • The Honorable Mark Martin, Founding Dean and Professor of Law, Kenneth F. Kahn School of Law at High Point University, and former Chief Justice, Supreme Court of North Carolina
      • Caitlin (Cat) Moon, Professor of the Practice and founding co-director, Vanderbilt AI Law Lab, Vanderbilt Law School
      • Hari Osofsky, Myra and James Bradwell Professor and former Dean, Northwestern Pritzker School of Law; Founding Director, Northwestern University Energy Innovation Lab; and Founding Director, Rule of Law Global Academic Partnership
      • Joanna Penn, Chief Transformation Officer, Husch Blackwell
      • The Honorable Morris Silberman, Florida Second District Court of Appeal
      • The Honorable Samuel A. Thumma, Arizona Court of Appeals, Division One
      • Mark Wasserman, Partner and CEO Emeritus, Eversheds Sutherland
      • Donna E. Young, Founding Dean, Lincoln Alexander School of Law, Toronto Metropolitan University

What’s next?

The Advisory Board held its first meeting in February and will meet quarterly going forward. As the work progresses, TRI plans to publish research findings, best practices, and practical recommendations for legal educators, law firms, and courts.

In a profession built on precedent and careful reasoning, the rise of AI presents both opportunity and responsibility. The AIFLP Advisory Board is an effort to make sure the legal community meets that moment thoughtfully and on its own terms.


You can learn more about the impact of advanced technology on the legal profession here

The efficiency imperative: AI as a tool for improving the way lawyers practice /en-us/posts/ai-in-courts/improving-lawyers-practice/ Wed, 18 Mar 2026 17:45:16 +0000 https://blogs.thomsonreuters.com/en-us/?p=70024

Key insights:

      • AI brings improved efficiency — AI accelerates tasks like document review and research, freeing lawyers to pursue more high-value work for clients.

      • AI does the work of a team of lawyers — AI levels the playing field for small law firms and solo practitioners by providing additional capacity without adding headcount, thereby allowing fewer lawyers to do the work of many.

      • Yet, AI still needs guardrails — Lawyers must remain accountable, with human oversight and review to ensure that AI outputs are accurate and correct, thereby preserving nuance and professional judgment.


Already, AI is no longer a theoretical concept for legal professionals, nor is it a nice-to-have for law firms that are seeking to impress their clients with improved efficiency and cost savings. That means the practical question now becomes how to adopt AI in ways that improve lawyers’ speed and capacity without compromising accuracy, confidentiality, or professional judgment.

The strongest near-term value shows up where modern practice is most strained: high-volume inputs and relentless timelines. In that environment, AI can be most helpful as an accelerant for the first pass through large bodies of material.

The possibilities, opportunities, and challenges of using AI in this way were discussed by a panel of experts in a recent webinar from a joint initiative of the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

One panelist, Mark Francis, a partner at Holland & Knight, described one way that AI can be an enormous help. “Anything where we’re dealing with large volume of materials that need to be reviewed [such as] large sets of documents, large sets of legal research, large sets of discovery. Obviously, AI can be leveraged in all of those circumstances.” That framing is important because it anchors AI’s utility in a familiar workflow: review, triage, and synthesis at scale.

AI also has a role earlier in the workflow than many attorneys expect. In addition to sorting and summarizing, it can help generate starting structures. For lawyers drafting motions, client advisories, demand letters, contract markups, or internal investigations memos, the hardest step can be getting traction from a blank page. “It’s really good at content or idea generation,” Francis said, adding that lawyers can ask AI to “generate some ideas for me on this topic, or generate an outline of a document to cover a particular issue.”


“AI is definitely going to benefit some of the small law firms who cannot actually afford the workforce. AI can be an extension when it comes to the automation.”


Of course, that does not mean letting an AI model decide what the law is; rather, it means using AI to produce an initial outline, identify possible issues to consider, or propose alternate ways to organize an argument. Then, the attorney should apply their own judgment to accept, reject, refine, and verify the AI’s output.

For legal teams, the ideal mindset is that AI can compress the time between intake and a workable first draft, whether that draft is a research plan, a deposition outline, a set of contract fallback positions, or a motion framework. However, speed is only valuable if it facilitates careful lawyering rather than shortcuts.

Efficiency that scales down, not just up

AI’s impact is not limited to large law firms with dedicated tech & innovation budgets. In fact, the benefits may be most transformative for smaller legal organizations that feel every hour of administrative drag and every unstaffed matter. Panelist Ashwini Jarral, a Strategic Advisor at IGIS, underscored how broad AI adoption already is. “AI is already being used in a lot of legal research, contract analysis, and in office operations,” Jarral explained. “Whether that’s in a small law firm or a large law firm, everybody can benefit from that automation with this AI.”

For many practices, that list maps directly onto the work that consumes lawyers’ time without always adding commensurate value: repetitive research steps, first-pass contract review, intake and scheduling, matter administration, and other operational tasks.

Historically, scale favored organizations that could hire more associates, paralegals, and support staff to push volume through the pipeline. Now, AI offers a different form of leverage: additional capacity without adding headcount. “It is definitely going to also benefit some of the small law firms who cannot actually afford the workforce,” Jarral said, adding that “AI can be an extension when it comes to the automation.” For a solo or small firm, that extension can show up as faster first-pass review of contracts, quicker summarization of records, more consistent intake workflows, and reduced time spent on repetitive back-office tasks.

At the same time, it is crucial to be clear-eyed about what is being automated. While AI can help deliver efficiency, it does not offer legal judgment itself. The legal profession still must decide, matter by matter, what level of review is required and what risks are acceptable.


“Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”


And that’s where implementation discipline becomes a strategic differentiator. Law firms that treat AI as a general-purpose shortcut tend to create risk, while firms that treat AI as a workflow component, with guardrails, review steps, and clear accountability, are more likely to capture value without compromising quality.

The non-negotiable: lawyers remain accountable

Any serious conversation about AI in legal practice must address these limits, panelists agreed. The Hon. Linda Kevins, a Justice on the Supreme Court in the 10th Judicial District of New York (Suffolk County), offered the most direct articulation of the boundary line: “Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”

Indeed, legal work is saturated with nuance. The same set of facts can carry different weight depending on jurisdiction, judge, forum, procedural posture, and the client’s goals and risk tolerance. Even when the law is clear, the right action often is not. Striving for true justice requires judgment about timing, framing, business consequences, reputational risk, and settlement dynamics. Those are not merely inputs for an AI to process — they are human decisions that define legal representation.

As the webinar made clear, this is the point at which responsible use becomes practical, not abstract. If AI is used for research support, contract analysis, or document review, lawyers need an explicit approach for verification and oversight. The outputs may look polished and sound confident; however, confidence is not accuracy, and professional responsibility does not shift to a vendor or an AI model. Human review is not a ceremonial or perfunctory step. Rather, it is the core control that protects clients and the court, and it is the inflection point that turns AI from a novelty into a defensible tool.

In practice, keeping the human in the loop means deciding where AI can assist and where it cannot. It also means reserving an attorney’s time for the decisions that carry legal and ethical consequences, and building repeatable habits that prevent teams from drifting into overreliance on AI, especially under deadline pressure.

The legal profession can capture real benefits from AI, including speed, scalability, and improved access, but only if it adopts the technology in a way that preserves what Justice Kevins highlighted: training, nuance, and human accountability.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here

The great AI disconnect: Firms and legal departments are not communicating about AI usage /en-us/posts/technology/great-ai-disconnect/ Wed, 18 Mar 2026 13:39:56 +0000 https://blogs.thomsonreuters.com/en-us/?p=70004

Key insights:

      • There’s an AI awareness gap — Most corporate legal professionals do not know whether their outside legal counsel are using AI in handling their client matters, leaving both law departments and their firms in a state of AI uncertainty.

      • A potential billing model shift — Efficiencies from AI usage could have a major impact on the way many law firms bill matters; value-based billing may need to replace or supplement hourly billing for matters in which AI is used.

      • Transparency builds trust — Lack of visibility and ROI measurement could erode trust between law departments and their outside counsel. Dialogue and measurements can strengthen the firm/client relationship and create scenarios in which both sides can reap the benefits of AI usage.


While the use of AI is increasingly widespread for both corporate legal departments and their outside law firms, there is a considerable lack of dialogue and data-sharing between the two sides on usage, guidelines, and expectations regarding AI. This complicates efforts to maximize the benefits of using AI, and it also may be eroding trust between the two sides.

Significant gaps in visibility and measurement

The Thomson Reuters Institute’s (TRI’s) 2026 AI in Professional Services Report found major gaps in visibility and measurement between law firms and legal departments. The survey found that more than half of law firm respondents said their organizations are currently using or considering using GenAI. And more than half of corporate legal professionals surveyed said they feel that their outside legal firms should use AI on their matters.

However, more than two-thirds (68%) of corporate legal professionals admitted that they currently have no idea if their outside law firms are using AI or not.


In addition, neither side is effectively measuring whether or to what degree their use of AI is improving the delivery of legal services. Indeed, 85% of law firm respondents and 75% of corporate legal department respondents said their organizations are either not collecting ROI data on AI usage or are unsure if they are doing so.

[Chart: Is your organization measuring the ROI of AI tools?]

These visibility and measurement gaps make it difficult for both sides to plan how AI can and should be used in handling client matters. They also raise questions about how potential efficiencies from AI use will affect related factors such as how much firms charge for their services and how much clients are willing to pay. Half of the legal professionals surveyed said they feel that AI is either a major threat or somewhat of a threat to billings and law firm revenues. Not surprisingly, the industry continues to wrestle with how to balance efficiency gains from AI against the limitations of the hourly billing model.

Concerns of corporate law departments

For corporate law departments, the lack of AI usage visibility and ROI measurement is producing a wide variety of responses, ranging from mild but growing concern all the way to outright suspicion about how law firms are using AI on their clients’ behalf. Law department respondents said that while they generally trust their outside counsel to make the right decisions regarding AI use and maintaining quality, most departments have not yet had conversations on those issues with their law firms, including how AI use will affect billing.

“Billing has remained the same as it did before,” noted one corporate legal department attorney. “So, either they are not using AI tools efficiently, or they are just doing double work.”

One corporate CLO was far more blunt in their assessment, especially given the lack of detailed discussions or data from firms: “I fear that firms will use AI to cut time, but continue to bill for the hypothetical amount of time a task would have taken without it. It’s dishonest, but so are many firms.”

One encouraging note is that, according to TRI’s 2025 Future of Professionals Report, 56% of law firm respondents said they are highly or moderately confident in their ability to articulate the value of AI to their clients. Despite law firms’ confidence in explaining the value of AI, the visibility gap illustrated in the 2026 AI in Professional Services Report suggests that many firms are not actually having those conversations with clients. Indeed, some corporate law department respondents suggested their outside counsel may be reluctant to discuss AI with them because of concerns about quality and accuracy. One even suggested that firms may feel threatened by AI.

More & better communication is needed

As difficult and complicated as discussions involving AI usage may be, they are also essential. Absent those discussions, trust between firms and clients may be eroding, potentially jeopardizing long-standing relationships.

Here are a few steps that both sides can take to build confidence around the use of AI:

For law firms —

    • Communicate with clients — Hold discussions with clients that allow firms to detail how AI is being or will be used in client matters. Solicit feedback from clients about when they would accept (or even demand) AI usage on different parts of a matter.
    • Develop an AI billing strategy — Determine not only how AI usage is impacting billable hours, but also how that will interact with the firm’s billing and pricing strategy.
    • Demonstrate and articulate value — Be prepared to explain billings in detail and answer client questions in terms not only of time and rates, but of value to the client. This includes not only the value that AI brings to the engagement, but also the value that the firm provides above and beyond the technology, such as more freed-up time for lawyers to pursue value-added work.

For corporate law departments —

    • Lead the conversation, if need be — About three-quarters of both law firm and legal department respondents said it is the firm’s responsibility to initiate discussions around AI usage. However, corporate law departments should not wait for their outside firms to start the conversation. Take the initiative and make sure firms’ delivery models and fee structures are clear regarding AI usage.
    • Set expectations — Provide guidelines, expectations, or mandates on how and when AI will be used in handling client matters. This includes outlining specific use cases, data security protocols, and the human-in-the-loop oversight mechanisms that are used to ensure accuracy.
    • Build an external-facing metrics program — Law departments need to accurately measure the efficiency gains their outside firms are achieving to ensure that they, as the client, are receiving a fair price for value received. Baselines can be established for how long various legal matters took historically and how much they cost; those baselines can then be compared against AI-enabled engagements to evaluate ROI and business impact (a simple baseline comparison is sketched below). This also allows legal departments to more thoroughly explain those gains to their own stakeholders.
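
As a sketch of what such a metrics program could compute, consider the comparison below. It is a minimal illustration under stated assumptions: the Matter fields, matter types, and comparison logic are hypothetical and are not drawn from the TRI report.

```python
# Hypothetical baseline-vs-AI comparison for an external-facing metrics
# program. All field names and logic are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Matter:
    matter_type: str   # e.g., "first-pass contract review"
    hours: float       # total attorney hours billed
    cost: float        # total fees paid
    ai_enabled: bool   # whether outside counsel used AI on the matter

def efficiency_gain(matters: list[Matter], matter_type: str) -> dict[str, float]:
    """Compare average hours and cost on AI-enabled matters against the baseline."""
    baseline = [m for m in matters if m.matter_type == matter_type and not m.ai_enabled]
    ai = [m for m in matters if m.matter_type == matter_type and m.ai_enabled]
    if not baseline or not ai:
        raise ValueError("Need both baseline and AI-enabled matters to compare.")
    base_hours, ai_hours = mean(m.hours for m in baseline), mean(m.hours for m in ai)
    base_cost, ai_cost = mean(m.cost for m in baseline), mean(m.cost for m in ai)
    return {
        "hours_saved_pct": 100 * (base_hours - ai_hours) / base_hours,
        "cost_saved_pct": 100 * (base_cost - ai_cost) / base_cost,
    }
```

In practice, the baseline would come from historical billing data for comparable matter types, which is exactly the kind of record most legal departments already hold.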

For both corporate law departments and their outside counsel, it is imperative to engage in thorough discussions and develop data that can inform better decision-making. Such dialogue and measurements can strengthen the firm/client relationship and create scenarios in which both sides can reap the benefits of AI use.


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here

Human layer of AI: How to build human-centered AI safety to mitigate harm and misuse /en-us/posts/human-rights-crimes/human-layer-of-ai-building-safety/ Mon, 09 Mar 2026 17:33:34 +0000 https://blogs.thomsonreuters.com/en-us/?p=69789

Key highlights:

      • Map risks before building — Distinguish between foreseeable harms that may be embedded in your product’s design and potential misuse by bad actors.

      • Safety processes need real authority — An AI safety framework is only credible if it has the power to delay launches, halt deployments, or mandate redesigns when human rights risks outweigh business incentives.

      • Triggers enable proactive intervention — Define clear, automatic review triggers such as product updates, geographic expansion, or emerging patterns in user reports to ensure your safety processes adapt as risks evolve rather than react after harm occurs.


In recent months, the human cost of AI has become impossible to ignore. Tragic deaths have been linked to interactions with AI chatbots, while generative AI (GenAI) tools have been weaponized to create deepfake images that digitally undress women and children. These tragedies underscore that the gap between stated values around AI and actual safeguards remains wide, despite major tech companies publishing responsible AI principles.

Richard-Carvajal, a senior associate working at the intersection of technology and human rights, argues that closing this gap requires companies to: i) systematically assess both foreseeable harms from intended AI use and plausible misuse by bad actors; and ii) build safety processes powerful enough to actually stop launches when risks to people outweigh commercial incentives.

Detailing the two-step framework for anticipating and addressing AI risks

To build effective AI safety processes, companies must first understand what they’re protecting against, then establish credible mechanisms to act on that knowledge.

Step 1: Mapping foreseeable harms and intentional misuse

When mapping AI risks during “responsible foresight workshops” with clients, Richard-Carvajal says she takes them through a process that identifies:

    • foreseeable harms that emerge from a product’s design itself. For example, algorithm-driven recommender systems — which social media platforms often use to keep users on the site — are designed to drive engagement through personalized content and have been well documented to amplify sensationalist, polarizing, and emotionally harmful content, according to Richard-Carvajal.
    • intentional misuse, in which bad actors weaponize technology beyond its intended purpose. Richard-Carvajal points to the example of Bluetooth tracking devices, which were initially designed to help people find lost items but were quickly exploited by stalkers, who placed them in victims’ handbags to track their movements and, in some cases, follow them home.

Tactically, Richard-Carvajal and her colleagues role-play “bad actor personas” to help clients imagine misuse scenarios, ensuring that companies anticipate harm before it occurs rather than responding after people have been hurt.

Step 2: Building a credible AI safety process

Once risks are identified, Richard-Carvajal advises companies to establish mechanisms to address them. The components of a legitimate AI safety framework mirror the structure of robust human rights due diligence by centering on the risks to people.

Indeed, Richard-Carvajal identifies core components of this framework, which include: i) hazard analysis to anticipate both foreseeable harms and potential misuse; ii) incident response mechanisms that allow users to report problems; and iii) ongoing review protocols that adapt as risks evolve (a simple sketch of such review triggers appears below).
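
To picture how the ongoing-review component might be made operational, here is a minimal sketch of automatic review triggers, assuming hypothetical deployment data. The triggers simply mirror the examples named earlier (product updates, geographic expansion, and emerging patterns in user reports), and the thresholds are invented for illustration; none of this is taken from Richard-Carvajal’s framework.

```python
# Hypothetical sketch of automatic review triggers for an AI safety
# process. Trigger names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DeploymentState:
    product_version: str
    regions: set[str]            # markets where the product is live
    user_reports_last_30d: int   # harm reports filed by users

def review_triggers(prev: DeploymentState, curr: DeploymentState,
                    report_spike_factor: float = 2.0) -> list[str]:
    """Return the reasons, if any, that a safety review should be opened."""
    reasons = []
    if curr.product_version != prev.product_version:
        reasons.append("product update shipped")
    new_regions = curr.regions - prev.regions
    if new_regions:
        reasons.append(f"expanded into new regions: {sorted(new_regions)}")
    if prev.user_reports_last_30d and (
        curr.user_reports_last_30d / prev.user_reports_last_30d >= report_spike_factor
    ):
        reasons.append("spike in user harm reports")
    return reasons
```

The design point is that each trigger fires automatically on observable events, so a safety review opens before harm accumulates rather than after it is reported in the press.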

Continual evaluation of emerging risks is needed

As AI capabilities advance and deployment contexts expand, companies must continuously reassess whether their existing safeguards remain adequate against evolving threats to privacy, vulnerable populations, human autonomy, and explainability. Richard-Carvajal discusses each one of these factors in depth.

Privacy — Traditional privacy mitigations, such as removing personally identifying information, are no longer sufficient: AI systems can now re-identify individuals by linking supposedly anonymized data back to specific people, or by using synthetic training data that still enables re-identification. The rise of personalized AI — in which sensitive information from emails, calendars, and health data aggregates into comprehensive profiles shared across third-party providers — can create new privacy vulnerabilities.
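
A toy example makes the linkage risk concrete. Everything below is fabricated for illustration: an “anonymized” dataset with names removed is joined to a public record on a handful of quasi-identifiers, and the identities fall right back out.

```python
# Toy re-identification by linkage: records stripped of names are matched
# to a public record on quasi-identifiers. All data here is fabricated.
anonymized = [
    {"zip": "30101", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"zip": "30102", "birth_year": 1990, "sex": "M", "diagnosis": "diabetes"},
]
public_record = [
    {"name": "Jane Roe", "zip": "30101", "birth_year": 1984, "sex": "F"},
    {"name": "John Doe", "zip": "30102", "birth_year": 1990, "sex": "M"},
]
QUASI_IDS = ("zip", "birth_year", "sex")

def reidentify(anon_rows, public_rows):
    """Link each anonymized row to a unique match in the public record."""
    matches = []
    for row in anon_rows:
        key = tuple(row[k] for k in QUASI_IDS)
        candidates = [p for p in public_rows
                      if tuple(p[k] for k in QUASI_IDS) == key]
        if len(candidates) == 1:  # a unique match defeats the anonymization
            matches.append((candidates[0]["name"], row["diagnosis"]))
    return matches

print(reidentify(anonymized, public_record))
# [('Jane Roe', 'asthma'), ('John Doe', 'diabetes')]
```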

Children — Companies must apply a heightened risk lens for vulnerable populations, such as children, because young users lack the same capacity as adults to critically assess AI outputs. Indeed, growing concerns around AI usage and children are warranted because AI-generated deepfakes involving real children are being created without their consent. In fact, Richard-Carvajal says that current guidance calls for specific child rights impact assessments and emphasizes the need to engage children, caregivers, educators, and communities.

Cognitive decay — A growing concern is that too much AI usage can harm human autonomy and contribute to a decline in critical thinking. This occurs when people defer to AI outputs instead of exercising their own judgment, and it has the potential to undermine their human rights with regard to work, education, and informed civic participation.

Meaningful explainability — Honoring explainability as a core tenet of responsible AI programs has always been a challenge for companies. As synthetic, AI-generated data increasingly trains new models, explainability becomes even more critical because engineers may struggle to trace decision-making through these layered systems. To make explainability meaningful in these contexts, companies must disclose AI limitations and appropriate use contexts while maintaining human-in-the-loop oversight for consequential decisions. Likewise, testing explanations should involve engagement with actual rights holders instead of relying solely on internal reviews.

Moving forward safely

While no universal checklist exists for AI safety, the systematic approach itself is non-negotiable. Success means empowering engineers to identify and address human-centered risks early, maintaining ongoing stakeholder engagement, and building safety processes that have genuine authority to delay launches, halt deployments, or mandate redesigns when human rights risks outweigh commercial pressures to ship products.

If your company builds or deploys AI, take action now: Give your engineers and risk teams the authority and resources to identify harms early, maintain continuous engagement with affected people and independent stakeholders, and create governance that has the power to keep harm from happening.

Indeed, companies need to make sure these steps go beyond best practices on paper by making protective processes operational, measurable, and enforceable before their next product release.


You can find more about human rights considerations around AI in our ongoing Human Layer of AI series here
