Talent Development & Management Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/talent-development-management/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Rethinking lawyer development in future AI-enabled law firms /en-us/posts/legal/lawyer-development-ai-enabled-law-firms/ Thu, 16 Apr 2026

Key highlights:

      • Three emerging business models, one unresolved tension — AI is compressing time, which directly threatens the logic of billing by the hour, but the smartest law firms are not waiting for a winner to emerge before building their strategic foundation.

      • Technology strategy and talent strategy are the same conversation — The talent model must be designed in tandem with the business model, even amid uncertainty, because many of the structural conditions of legal work are changing all at once.

      • The next great lawyer will lead with human skills, not tool proficiency — Forward-thinking firms are doubling down on their lawyers’ curiosity, judgment, client skills, and relationship-building, because these are capabilities that AI cannot replicate.


Every law firm is asking how AI will change the way legal work gets done. But Norah Olson Bluvshtein, Chief Legal Operations Officer at Fredrikson & Byron, is asking a more consequential question: How will AI change the way legal work gets paid for?

Planning around 3 law firm business models in the AI era

AI is making law firms more efficient, of course, but efficiency alone does not answer the harder question of how to capture value and how AI-enabled legal services get priced. Olson Bluvshtein sees three paths emerging in law firms:

      1. Billable-hour (still) — The first is the path of least resistance. Firms stay anchored to the billable hour, raise rates, and use AI to move faster and handle more volume, on the theory that added volume will make up the revenue lost to faster work. With this model, however, the client-firm incentive misalignment remains intact, and the fundamental tension between billing for time and AI compressing that time never gets resolved.
      2. Value-based pricing — The fixed fee pathway also is likely to gain further traction, as it’s one that many AI-native law firms are pursuing. In this model, value-based pricing creates a natural meeting point between firm and client interests because when incentives align, everyone wins, Olson Bluvshtein explains.
      3. Frontier models rule — The third scenario is more speculative but worth watching. As foundational models improve, the need for expensive legal-specific tools may diminish. “I could see a scenario in the future in which we don’t necessarily need all the legal-specific tools that are out there,” she says. Even though technology costs historically come down, cheaper tools do not make the business model question disappear, Olson Bluvshtein notes.

Candidly, Olson Bluvshtein admits that “the truth is probably somewhere in the middle,” and the firms best positioned for any of these futures are the ones building the strategic and operational foundation now rather than waiting for the answer to become obvious.

Indeed, the most thoughtfully designed business model will fall short without the right talent foundation to support it. “Technology strategy and people strategy are not separate conversations,” Olson Bluvshtein says, adding that they are key parts of the same strategy.

Legal innovation consultant Boyko reinforces this point, noting that many aspects of the structural foundation under which the legal profession has operated are changing all at once. This means that addressing the technology strategy separately from the human side, slice by slice, does not make sense.

Boyko says she encourages law firms to take a step back and approach the problem by first identifying what the firm will need in the future, and then planning the talent and technology pieces for that reality.

Aligning the talent model to the future business model

Not surprisingly, a key challenge for law firms right now is that the future is uncertain, and it is difficult to design a talent model for an unknown business model. At the same time, some facts are known; what remains unknown is when these certainties will arrive.

More specifically, what is known is that there is mounting pressure on all three possible law firm business models: AI is automating tasks that once went to junior associates, clients do not want to pay for junior associates’ work, and clients are bringing more legal work in-house, often handing outside counsel a nearly final deliverable for final review only.

Norah Olson Bluvshtein of Fredrikson & Byron

To explore the right talent model, one experiment that Boyko suggests is to expand the junior associate experience to include rotations through back-office functions, such as knowledge management, professional development, and technology functions.

At law firm Fredrikson & Byron, Olson Bluvshtein says its associate development program is evolving to prepare for the uncertain future based on three current tactics:

      • Building AI fluency — This is a near-term imperative that will soon become table stakes. The goal is to move past basic adoption into something more sophisticated and durable. To enable this, the litigation and M&A practices at Fredrikson are actively working with a variety of tools to test prompts that they can then share more broadly with other teams, while also identifying how AI policy guidance will evolve.
      • Accelerating the development of legal judgment — Shortening the learning curve for developing legal judgment, which includes the ability to supervise and efficiently validate AI-produced work, is the second essential part of the firm’s talent development framework. Olson Bluvshtein is candid about where things stand. “It has not fully happened yet,” she says. “But building the training infrastructure to operationalize this is a stated goal for the year ahead, including formalized curriculum around effectively and efficiently supervising AI output.”
      • Being hyper-focused on the development and recruiting of human skills — The firm is doubling down on the human skills — including client development, negotiation, relationship-building, and sound judgment — that technology cannot replicate, because these capabilities will define the next generation of great lawyers, regardless of which law firm business model ultimately prevails.

This same philosophy is shaping how Fredrikson recruits. Rather than screening candidates for a checklist of AI tools, the firm is prioritizing curiosity, openness, and the ability to demonstrate human skills. Indeed, the firm is looking for lawyers “who are really good at those human skills” and who bring the kind of judgment and adaptability that compounds over time, explains Olson Bluvshtein.

Boyko underscores a similar approach to skills. “Right now, the skills needed to be a good lawyer are no longer those rote skills that AI can automate,” she explains. “Instead, they are the people skills, the operational skills, and the client skills.”

Of course, moving from broad experimentation to disciplined, firm-wide maturity takes time, and the gap between early movers and late adopters is already widening. The firms that will define the next era of legal services are already asking how AI changes the way they deliver value and what skills their lawyers will most need — not just looking for the next tool to buy.


You can learn more about the challenges facing legal talent here

Agentic AI following GenAI’s growth trajectory in legal, but with unique oversight challenges, new report shows /en-us/posts/technology/agentic-ai-oversight-challenges/ Thu, 09 Apr 2026

Key takeaways:

      • Agentic AI poised for adoption uptick — Agentic AI is following GenAI’s rapid adoption in the legal industry, with less than 20% of firms currently implementing agentic systems but half planning or considering adoption in the near future, according to a new report.

      • Adoption depends on human oversight answers — Legal professionals are generally optimistic about agentic AI’s potential, but successful adoption depends on explicit guidance about human oversight and the lawyer’s role in maintaining ethical standards.

      • Time to retool AI education? — Agentic AI’s increased autonomy introduces new oversight and ethical challenges for law firms, making targeted education and clear guidance essential to understanding the differences from GenAI.


Over the past several years, law firms and corporate legal departments have turned towards generative AI en masse. At the beginning of 2024, just 14% of all law firms and legal departments featured an enterprise-wide GenAI tool. Just two years later, that number had already risen to 43% of all firms and departments, according to the 2026 AI in Professional Services Report, from the Thomson Reuters Institute (TRI). For large law firms or legal departments, those percentages — not surprisingly — are beginning to approach 100%.

With GenAI adoption now this widespread, legal industry leaders are turning their attention to two primary initiatives. One, of course, is how to get the most out of the AI tools they already have — a task that is proving a bit elusive. Currently, less than 20% of lawyers say their organizations measure AI’s return on investment, and most corporate lawyers say they have no idea how their outside law firms are approaching AI. Thus, instituting not just AI tools but also an AI strategy is the second top priority for law firms and corporate legal departments in 2026 and beyond.

However, even as the legal industry reaches a tipping point in adopting GenAI tools, technology innovation continues unabated. Agentic AI has emerged as the next wave of innovation that could change how lawyers work on a daily basis, offering a way to autonomously complete multi-step tasks. For example, agentic AI systems are already being built for the legal industry that independently research a regulation or law, draft a document based on the findings, identify pitfalls, and revise the document, with stops for human guidance instituted only as desired.
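To make that loop concrete, the sketch below shows one possible shape for such a workflow: an agent that runs autonomously between pre-determined human checkpoints. This is a minimal illustration in Python under stated assumptions, not any vendor’s actual product; every function body is a stub standing in for a model call or compliance check, and all of the names are hypothetical.

```python
# A minimal sketch of an agentic legal workflow with human-in-the-loop
# checkpoints. All function bodies are stubs standing in for model calls;
# none of these names correspond to a real product or API.

def research(topic: str) -> str:
    return f"summary of authorities on {topic}"        # stub model call

def draft_document(findings: str) -> str:
    return f"memo grounded in: {findings}"             # stub model call

def find_pitfalls(doc: str) -> list[str]:
    # Stub check: pretend the first draft always has one flagged issue.
    return [] if "[revised]" in doc else ["unverified citation"]

def revise(doc: str, issues: list[str]) -> str:
    return doc + " [revised]"                          # stub model call

def human_checkpoint(stage: str, artifact: str) -> bool:
    """A pre-determined stop: a lawyer must approve before the agent continues."""
    return input(f"[{stage}] {artifact!r}: approve? (y/n) ").strip().lower() == "y"

def agentic_workflow(topic: str) -> str | None:
    findings = research(topic)
    if not human_checkpoint("research", findings):     # optional early stop
        return None
    doc = draft_document(findings)
    issues = find_pitfalls(doc)
    while issues:                                      # autonomous revise loop
        doc = revise(doc, issues)
        issues = find_pitfalls(doc)
    if not human_checkpoint("final review", doc):      # mandatory final stop
        return None
    return doc

if __name__ == "__main__":
    result = agentic_workflow("data-breach notification rules")
    print(result or "workflow halted at a human checkpoint")
```

The point of the sketch is the design choice itself: the revise loop runs unsupervised, while the checkpoints are fixed in advance, which is exactly where firms will need explicit guidance on how many stops to require.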

According to the AI in Professional Services Report, the legal industry is already making headway towards implementing agentic AI systems. For agentic AI to truly take hold in legal, however, lawyers still require more education around not only how it differs from the GenAI systems they already have in place, but also when and where human intervention needs to occur within an agentic system.

The early stages of agentic AI

Examining current agentic AI adoption for the legal industry almost takes one back in time — two years, to be exact. Following the public release of GenAI in late-2022, many legal industry organizations spent 2023 evaluating and experimenting with AI systems, usually with a small working group of interested guinea pigs. As a result, only 14% of survey respondents said their law firms or corporate legal departments were engaged in organization-wide GenAI rollouts at the start of 2024. However, more than half of respondents said their organizations expected to be rolling out large-scale GenAI systems over the next 1 to 3 years. The intervening two years since then have proved that prediction to be largely true.

Agentic AI usage in the first half of 2026 looks much like GenAI usage in 2024. The legal industry started to experiment with agentic AI at the beginning of 2025, with an eye towards actual implementation in 2026 and beyond (particularly as legal software providers began to integrate agentic systems into their own products). As such, less than 20% of recent survey respondents say their organization is engaged in widespread agentic AI adoption, while about half say their organization is either planning to use or considering whether to use agentic AI in the near future.


By and large, lawyers feel positive about the agentic AI movement. When asked about their sentiment towards agentic AI, 51% of legal industry respondents said they felt excited or hopeful, while just 19% said they felt concerned or fearful. Further, about half (47%) said they actively believe agentic AI should be used for legal work, while 22% felt it should not, with the remainder saying they were unsure. These figures largely track with the sentiments expressed about GenAI in 2024, which have only grown over time from about 50% positive two years ago to two-thirds of all legal professionals feeling positive currently.

This all lends further credence to a rise in agentic AI usage similar to what law firms and corporate legal departments experienced with GenAI over the course of 2024 and 2025. Indeed, when asked when they expect agentic AI to be a central part of their workflow, few have baked agentic systems into their daily work currently, but a majority of legal industry respondents expect it to be central within the next 3 to 5 years.


The unique barriers of agentic AI adoption

Agentic AI does differ from GenAI in one crucial area that may limit its growth potential within the legal industry, however — autonomy. By and large, GenAI systems operate on a back-and-forth basis: Users provide the tool a prompt, receive its output, and then iterate back-and-forth from there. Agentic AI is intended to be more automated by design, only requiring human input at pre-determined points in the process. And that makes some lawyers understandably nervous.

When asked why they might feel hesitant about using agentic AI for legal tasks, the most common answer was a general fear of the unknown, but the second most common answer dealt with the need for careful monitoring and oversight. In fact, some respondents said they were excited about GenAI, but more cautious about agentic AI’s potential.

“Agentic AI, while exciting, to me removes oversight a step too far,” said one such lawyer from a US law firm. “I like the idea of prompting and reviewing a result. It is something else to have a machine have so much autonomy in the actual doing of a thing and potentially acting on my behalf without that very concrete review.”


Please add your voice to Thomson Reuters’ flagship Future of Professionals survey, a global study exploring how the professional landscape continues to change.


An assistant GC at a US company also pointed to potential privacy and security concerns, adding: “The fact that agentic AI operates in a much more autonomous way, with a lack of control from the user, means there are many unknowns that are hidden beneath the process.”

For law firm and corporate legal department leaders looking to potentially implement agentic AI systems into their practice, this means re-thinking what AI education and training will mean moving forward. Beyond that, however, legal AI educators also will need to make sure to pinpoint and perhaps over-explain those specific instances in which human oversight needs to occur in agentic systems. More autonomous does not mean fully autonomous, and particularly for lawyers with ethical duties to their work product, lawyer oversight will in fact be a necessary part of any agentic system.

For law firm or legal department leaders, that means that finding the right balance between efficient workflows and human intervention will be key to agentic AI adoption. And the organizations that can best communicate the human-in-the-loop role to their professionals up-front will be rewarded with broader and more reliable adoption.

Clearly, lawyers feel positive about the agentic AI future, after all. They just need the lawyer’s role in this new paradigm spelled out explicitly.

“Agentic AI is powerful, but its moral compass must come from humans,” one UK law firm barrister noted aptly. “Lawyers are trained to safeguard fairness, rights, and the rule of law — principles that should guide how AI is designed, governed, and deployed. Hope lies in our ability to shape AI through these values for fairer outcomes for society as a whole.”


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here

Honing legal judgment: The AI era requires changes to how lawyers are trained during and after law school /en-us/posts/legal/honing-legal-judgment-training-lawyers/ Thu, 02 Apr 2026

Key takeaways:

      • AI threatens traditional lawyer development — As AI automates entry-level legal tasks like research and writing that historically have honed legal judgment skills, the profession faces a crisis in how new lawyers will develop such judgment abilities.

      • The profession can’t agree on what constitutes “legal judgment” — Unlike other professions, there is no agreed-upon definition of legal judgment or clear standards for when AI should be used.

      • Implementation requires unprecedented coordination and funding — A proposed legal education fund would be supported by a small percentage of legal services revenue and would require coordinated action across law schools, legal employers, and state regulators.


This is the second of a two-part blog series that looks at how lawyer training needs to evolve in the age of AI. The first part of this series looked at how lawyers can keep their skills relevant amid AI utilization.

The key skills that comprise legal judgment have received mixed reviews, according to a recent white paper from the Thomson Reuters Institute that advocated for cultivating practice-ready lawyers. The white paper was based on feedback from thousands of experienced lawyers, judges, and law students and raises questions about how legal judgment forms when AI assistance is used for task completion.

Furlong notes that the white paper calls for the profession “… to accelerate the development of legal judgment early in lawyers’ careers.”

The challenge is that each part of the profession — law schools, employers, state supreme courts (as regulators) — has distinctly separate responsibilities. That means that, in the age of AI, coordination across the entire legal profession is needed, especially as AI reduces the availability of traditional first jobs.

Furlong points out that there is no consensus on what legal judgment is, nor any agreed-upon standards for when AI should be used in legal work. To bring clarity to these issues, the white paper proposed a profession-wide model that integrates three critical elements: i) work-based learning that’s modeled on medical residencies; ii) micro-skill decomposition of legal judgment; and iii) AI-as-thinking-partner throughout pedagogy.

Three pillars for an AI-era lawyer formation system

Not surprisingly, overreliance on AI can erode critical analysis and solid legal judgment skills. Addressing these concerns requires a comprehensive reimagining of how lawyers are educated and trained. One solution lies in three interconnected pillars that together form a cohesive system for developing legal judgment in an AI-integrated world.

Pillar 1: Integrate work experience into legal education

Core skills such as legal research, writing, and document review help develop legal judgment; yet these skills could collapse once AI assumes such tasks. The Brookings Institution recently proposed ways to preserve entry-level professional development in an AI era. This parallels the TRI white paper’s call for mandatory supervised postgraduate practice as a key part of legal licensure.

While implementing a full residency model presents challenges, several law schools have already pioneered approaches that demonstrate the viability of work-integrated legal education that, if scaled appropriately, could improve new lawyer practice and judgment skills. For example, Northeastern Law School guarantees all students nearly a year of full-time practice experience before graduation through four quarter-length legal positions. The program integrates supervised practice into the curriculum so graduates can gain substantial hands-on experience alongside their classroom instruction.

Also, another program offers an alternative pathway to bar admission through practice-based assessment rather than the traditional bar exam. The program demonstrates that competency can be evaluated through supervised experiential learning.

Pillar 2: Decompose legal judgment into teachable micro-skills

The legal profession needs to agree on a common definition of legal judgment and break it into components that can be taught effectively. “We can’t teach what we can’t describe,” Furlong says. To develop legal judgment, the profession must define its components, including:

      • Pattern recognition — The ability to identify when different fact patterns are related to similar legal frameworks and distinguish when superficially similar cases are legally distinct.
      • Strategic calibration and proportionality — This means understanding what level of effort, precision, and risk each matter requires and matching responses to the stakes involved.
      • Reasoning through uncertainty — This is the capacity to make defensible decisions and provide sound counsel even when the law is ambiguous, unsettled, or silent on an issue.
      • Source evaluation and authority weighting — This includes knowing which legal authorities are most suitable and being able to assess their persuasive value.
      • Ethical judgment under pressure — This means spotting conflicts, confidentiality issues, and duty-of-candor moments while maintaining competence and knowing when to escalate beyond expertise.

Breaking down legal judgment into these discrete components makes it possible to design targeted teaching interventions. For example, one former law professor and legal education executive suggests building verification into AI-assisted workflows by requiring a short verification log (detailing sources checked, changes made, and why); running attack-the-draft drills (find missing authority, weak inferences, and jurisdictional mismatch); and preserving slow work as formative work (citation chaining, updating, and adversarial research memos).
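As a concrete illustration of the first of those drills, a verification log could be as lightweight as one structured record per AI-assisted edit. The Python sketch below is a hypothetical shape for such an entry; the field names and sample values are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class VerificationEntry:
    """One row of a verification log for an AI-assisted draft."""
    source_checked: str  # the authority the lawyer actually pulled and read
    change_made: str     # what was edited in the AI-produced text
    rationale: str       # why the change was necessary

# Example usage with placeholder values:
log: list[VerificationEntry] = [
    VerificationEntry(
        source_checked="[case citation verified against the official reporter]",
        change_made="removed an AI-suggested case that could not be located",
        rationale="the cited authority does not appear to exist",
    ),
]

for entry in log:
    print(f"checked: {entry.source_checked}")
    print(f"changed: {entry.change_made}")
    print(f"why:     {entry.rationale}")
```

A record this small is enough to make verification auditable: a supervisor reviewing the log can see what was checked and why, rather than taking the AI-assisted draft on faith.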

With judgment skills clearly defined and work experience integrated into training, the profession must then tackle how AI itself should be incorporated into lawyer development.

Pillar 3: AI-as-thinking-partner throughout a lawyer’s career

Warnings about overreliance on AI are mounting. The legal profession must provide clear standards for when and how AI should be used, along with training in verification and judgment skills. Without them, overreliance on AI could compromise lawyers’ capacity to fulfill their fiduciary duties to clients.

A phased approach to introducing AI into legal work helps protect critical thinking while building AI competency. For example, in Year 1, law students could complete core legal reasoning exercises without AI assistance in order to develop their analytical muscles. In Year 2, students could use AI as a research assistant with mandatory verification protocols that teach them to check outputs against authoritative sources. Finally, in Year 3, residencies could immerse students in real-world AI workflows under proper supervision and with regular feedback.

These three pillars form a coherent vision for lawyer formation in the AI era. However, the most well-designed system faces the obstacle of funding.

The challenge of who pays

Perhaps the most difficult part of any overhaul is the cost. The medical residency model works because the federal government subsidizes graduate medical education — up to $15 billion-plus annually — for teaching young medical students to be doctors. Legal education has no equivalent. Without addressing funding, however, even the best reforms will fail.

One idea is to establish a legal education fund that’s supported by an assessment of a small percentage of the legal industry’s gross legal services revenue (while exempting solo practitioners and firms with less than $500,000 in annual revenue). These funds could be used to subsidize thousands of supervised residency placements, fund law school curriculum development, support bar exam alternative assessments, and provide employer training and supervision stipends.


The challenge is that each part of the profession — law schools, employers, state supreme courts — has distinctly separate responsibilities, and that means coordination across the entire legal profession is needed.


This proposal, of course, would require unprecedented coordination and financial commitment from the legal profession. Skeptics might argue that market forces can solve this problem, or that firms will simply create new training pathways, or that AI will prove less disruptive than feared. However, waiting for market forces risks a lost generation of lawyers. The medical profession already learned this lesson when its voluntary reform efforts failed; only later did coordinated regulatory intervention produce the consistent quality standards the medical industry sees now.

What is clear is that inaction is resulting in degradation of lawyering skills. “Maybe… we need catastrophic external intervention to bring about the wholesale changes we can’t manage from the inside,” Furlong suggests.

However, the question is whether the legal profession will wait for a crisis to force change or act proactively to make the needed changes now, before the crisis hits.


You can learn more about the impact of AI on professional services organizations at TRI’s upcoming 2026 Future of AI & Technology Forum here

AI use and employee experience: New research reveals guidance gap in professional services /en-us/posts/technology/ai-guidance-gap/ Mon, 30 Mar 2026

Key takeaways:

      • Employees face contradictory messages or none at all — Nearly 40% of professionals surveyed report receiving conflicting directives about AI usage from clients and leadership, while half report no client conversations about AI have occurred at all.

      • Workers lack feedback on whether their AI efforts matter — Professionals who are experimenting with AI tools without knowing if their efforts are valued are left uncertain about whether investing time in developing AI skills is worth it.

      • Job displacement fears are rising — While employees remain cautiously optimistic about AI usage in their workplace, concerns about job displacement have doubled over the past year.


As generative AI (GenAI) tools flood into legal and accounting workplaces, organizations are deploying powerful technology without giving their employees clear directions on how to use it. Worse, some have received no guidance.

New research that underpinned the recent 2026 AI in Professional Services Report from the Thomson Reuters Institute (TRI) reveals a disconnect between AI availability and organizational guidance, which is creating confusion that may undermine both employee experience and the technology’s potential value. (The report’s data was gathered from surveys of more than 1,500 legal, tax, accounting, and compliance professionals across 26 countries.)

Employees navigate inconsistent AI policies or none at all

Approximately 40% of the professionals surveyed said they received contradictory guidance from clients and leadership about AI tool usage, with directives both encouraging and discouraging their use on projects and in RFPs. This ambivalence is slowing down decision-making at the front lines — a place in which AI could deliver the most value.

Equally concerning is the fact that half of professionals indicated that no conversations with clients about AI tool usage have taken place yet. And when discussions do occur, concerns about data protection and accuracy are the main topics.


This confusion extends to external relationships as well. More than two-thirds of corporate and government clients remain unaware of whether their outside professional service providers are even utilizing GenAI. And the majority of clients have provided no direction whatsoever to their outside law firms concerning AI use, respondents said.


Organizations often ignore what employees need to know

Perhaps most revealing is how organizations are measuring — or failing to measure — whether their AI investments are paying off. Almost half of respondents said their organizations are not measuring return on investment (ROI) at all. Among the minority (18%) of respondents who said their organizations do track ROI, the metrics they use tell a story about organizational priorities. The fact that internal cost savings and employee usage rates lead the list suggests a focus on efficiency over innovation or quality improvements.


This measurement vacuum has consequences for employee experience. Without clear success metrics, employees lack feedback on whether their AI experimentation is valued, discouraged, or even noticed. The absence of ROI frameworks also makes it hard to justify training investments or dedicated time that allows employees to develop AI fluency.

AI usage doubles while support systems fall behind

AI usage among professional service organizations has nearly doubled over the past year, and professionals are increasingly integrating these tools into their workflows, the report shows. Yet organizational infrastructure that could support this adoption surge lags badly. Most professionals said they expect GenAI to become central to their work within the next two years — but that may be happening without roadmaps from their employers.

In addition, notable barriers in employees’ usage of AI remain. When asked what barriers could prevent their organization from more widely adopting GenAI and agentic AI, almost 80% of professionals cited concerns over inaccurate responses. Other concerns included worries over data security, privacy, and ethical use. Most of these suggest an ongoing lack of trust in GenAI.


The tool landscape adds another layer of complexity. Publicly available tools dominate current usage, with more than half of respondents (57%) citing their use, while proprietary or industry-specific solutions remain largely in the consideration phase. This suggests employees are often self-provisioning AI tools rather than working within enterprise-supported ecosystems, potentially exposing organizations to increased risk through security gaps, compliance lapses, and inconsistent quality.

Employees’ job displacement fears increasing

Despite these challenges, employee sentiment toward AI remains cautiously optimistic. More than half (57%) of respondents said they are either hopeful or excited about the future of GenAI in their industry. Clearly, employees see AI’s potential to enhance their efficiency, automate routine tasks, and free up their time for higher-value work.

At the same time, hesitation and concern among employees are rising, particularly around accuracy, job displacement fears, and the unknown implications of autonomous AI systems. Notably, concerns about job displacement have doubled over the past year, and this trend demands organizational attention and transparent communication about a workforce strategy to combat this concern.

What organizations need to do now

Organizational leaders who are serious about positive employee AI experiences need to step up their efforts to provide guidance to employees and gain the ROI that AI promises. Specific steps they can take include:

      • Draft clear and consistent guidance — Create explicit policies that tell employees when AI use is encouraged, required, or prohibited. This includes client communication protocols, data-handling requirements, and escalation procedures for situations in which AI outputs seem questionable.
      • Develop and implement meaningful ROI metrics — Organizations must move beyond usage rates and cost savings as their key success measurements. Tracking data points that capture quality improvements, time redeployed to strategic work, and client feedback on AI-enhanced deliverables presents a more comprehensive picture. Also, leaders need to share these metrics transparently in order to give employees an understanding of organizational priorities.
      • Invest in structured learning — The survey shows professionals are experimenting with dozens of different tools from ChatGPT to specialized legal tech platforms. Organizations should curate recommended toolsets, provide hands-on training, and create communities of practice in which employees can share effective prompts and use cases with other users.

Our data shows that the employee experience around AI adoption reveals a workforce that is hopeful but hungry for direction and concerned about job impacts. Leaders who implement these actions effectively are more likely to unlock the strategic value that AI promises while building the trust and competence their organizations and employees need to thrive in an automated future.


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here

Honing legal judgment: How professional acumen & fiduciary care can keep lawyers relevant in the age of AI /en-us/posts/legal/honing-legal-judgment-keeping-lawyers-relevant/ Wed, 25 Mar 2026

Key highlights:

      • Lawyers excel at semantic legal work while AI excels in syntactic tasks — Syntactic work (document generation, pattern recognition) is where AI excels, but semantic work involving exercising independent judgment, reflecting on consequences, and fulfilling fiduciary duties remains uniquely human.

      • Fiduciary duty as the core of legal relevance — What distinguishes lawyers isn’t just what they do, but how and why they do it. The fiduciary relationship demands a human who understands context, balances competing interests, recognizes unstated concerns, and exercises discretion.

      • 5 hours to deepen or diminish — The five hours lawyers are expected to gain each week by using AI can either accelerate professional obsolescence or deepen lawyers’ relevance, depending on what they do with them.


This is the first of a two-part blog series that looks at how lawyers can keep their skills relevant in the age of AI

Lawyers expect to gain a full five hours per week of worktime due to the efficiency derived from AI use, according to the Thomson Reuters 2025 Future of Professionals Report. Yet the fear of job loss among lawyers is rising: those viewing AI as a threat or somewhat of a threat grew to almost two-thirds (65%) of those surveyed, according to the Thomson Reuters Institute’s 2026 AI in Professional Services Report.

Many in the legal profession are asking how lawyers are uniquely valuable at a time when machines can process legal information faster and cheaper. The answer lies in understanding the difference between what AI does in processing legal information and what humans do in exercising legal judgment, says Kevin Lee, Founding Director of the Institute for AI & Democratic Governance.

Defining 2 levels of legal work

Understanding what makes lawyers particularly meaningful in this current AI moment requires distinguishing between two different levels of legal work, especially in an environment in which AI-enabled information systems are compressing humanity and legal judgment into data points and draining away the storytelling and moral nuance that ground both. According to Lee, these two levels are the syntactic and the semantic:

      • Syntactic — Lawyers process information, generate documents, and recognize patterns at the syntactic level, meaning those tasks in which AI excels and delivers promised efficiency gains. “The danger is that we will use this efficiency merely to generate more syntactic volume,” Lee explains, adding that this will result in faster processing of more documents at greater speeds. “If we do that, we will have automated ourselves out of a profession.”
      • Semantic — The semantic aspect of lawyering highlights the irreducible skills of the legal practice, which include exercising independent legal judgment, reflecting on consequences, demonstrating care for clients, and fulfilling fiduciary duties.

This syntactic-semantic distinction is inherent in the very definition of the practice of law, Lee says, pointing out that many jurisdictions distinguish between “providing legal information” (not practicing law) and “exercising independent legal judgment” (the essence of legal practice).

He also rightly contends that the existential risk facing lawyers is not in AI completing legal tasks, but rather the temptation to reduce lawyers’ role to verifying machine output and processing legal information. Conflating these two concepts is a challenge for the legal profession and requires increasing the appreciation for the craft of legal reasoning and judgment.

Kevin Lee, Founding Director of the Institute for AI & Democratic Governance

Making this more difficult is that the current information age complicates this picture by challenging society’s assumptions about reality, consciousness, and the moral meaning of human life — all at an exponential rate, Lee says. Similarly, AI and information systems threaten to reduce everything, including human beings and law itself, to processable data by stripping away the narratives and meanings that define humanity, he adds.

Semantic qualities of legal judgment

The question of what makes lawyers especially relevant in the AI era is mainly answered in how and why they do what they do, rather than in what they do. For example, Lee points to skills around executing their fiduciary duty and ensuring legitimacy and meaning as key characteristics of lawyers’ semantic qualities.

Fiduciary duty — When a client seeks legal counsel, it’s legal judgment — not information processing — that the client wants. Lawyers, as part of their fiduciary duty to their clients, demonstrate human and legal understanding of the unique context of each case and the consequences of various legal paths forward. This bond of trust between attorney and client demands reflection, consideration, care, and proper purpose.

The fiduciary duty of the lawyer to the client requires balancing competing interests, recognizing unstated concerns, and exercising discretion in ways that honor both the letter and spirit of the law. At the heart of this balance is legal reasoning and professional judgment, which often involves navigating the critical gap between legal rules as written and their meaningful application to human circumstances.

Legitimacy and meaning — Beyond the fiduciary duty of care exercised in individual client relationships, lawyers serve a broader purpose in their role to safeguard law’s connection to the narratives of justice and human dignity that legitimize its authority. Indeed, lawyers maintain the connection between law and its humanistic foundations, and the narratives that give legal authority its legitimacy depend on that connection. “The artwork that one associates with the law (in law schools and courtrooms) connects actions and legal judgment of attorneys to the mythic meaning of justice, equality, and the rule of law,” Lee explains.

How to deepen appreciation for the special relevance of lawyers

The five hours that lawyers said they expect to gain each week through AI-driven efficiency represents a choice point for the profession. These hours can either accelerate lawyers’ obsolescence or deepen their relevance. To ensure the latter, Lee advises lawyers and legal institutions to examine ways to put those hours to good use by, for example:

Collaborating on apprenticeships — Bar associations, practicing lawyers, legal service providers, and law schools should consider apprenticeship models that teach professional norms and values through mentorship, allowing law students to learn the craft of legal reasoning through guided practice.

Recommitting more fully to legal service — Law firms and in-house counsel must reclaim humanistic awareness as central to their professional identity. The efficiency gains from AI should be reinvested into semantic work, which includes counseling clients, exercising moral judgment, and fulfilling fiduciary duties with greater care and reflection.

Improving legal education — Law schools must return to the humanistic formation of lawyers, echoing the pre-2007 vision of legal education, before economic pressures reduced it to producing commercially exploitable graduates. In addition, AI ethics must be integrated systemically across the curriculum into doctrinal courses rather than being confined to electives.

Looking ahead

The five hours gained through AI represent a defining choice for the legal profession. The special relevance of lawyers in the AI age lies precisely in the human, semantic aspects of lawyering.


In the concluding part of this blog series, we look at how the legal profession needs to rethink how it trains lawyers in order to prevent AI from eroding legal judgment skills

Move over, “Death of the billable hour,” Legalweek 2026 has found a new existential crisis /en-us/posts/legal/legalweek-2026-new-existential-crisis/ Thu, 19 Mar 2026

Key takeaways:

      • Structural change in firms — The traditional law firm pyramid, in which junior lawyers perform high-volume work at billable rates, is losing its foundation as AI compresses tasks that once took hours and clients increasingly bring more work in-house.

      • Finding new ways to train — AI-powered simulations are emerging as a concrete answer to the associate training problem, allowing new lawyers to build courtroom skills faster and fail safely behind closed doors.

      • The associate role isn’t dying, it’s being redefined — Those law firms that figure out the right mix of legal training, technological fluency, and management skills will have a significant edge over those that are still debating it.


NEW YORK — On more than one occasion, I have written seriously and at length about the death of the billable hour. I’ve argued that alternative fee arrangements (AFAs) are the future, that the economic logic of hourly billing is irreconcilable with AI-driven productivity gains, and that the industry needs to prepare for a fundamentally different pricing model. I meant every word. I still do.

Yet, at last week’s Legalweek 2026, one attendee pointed out that they’ve been hearing about the death of the billable hour since the 1990s. At this point, it’s less a prediction and more of a tradition. Indeed, Matthew Kohel, a partner at Saul Ewing, said that despite the legal press coverage connecting AI to the billable hour’s demise, that narrative is now entering its third or fourth decade. And Kohel said his firm simply isn’t seeing meaningful client-driven movement toward AFAs.

So let’s be honest: the billable hour is not dead, and in fact, it may not even be close to dead.

However, if you’re looking for something that is facing a genuine existential reckoning — something the legal industry whispered about in the early days of generative AI (GenAI) and is now discussing openly — Legalweek 2026 may have found it. It turns out the billable hour was never the thing in danger, rather it’s the person billing the hours.

It’s the associate.

The question nobody wanted to ask out loud

The future of the junior lawyer surfaced in virtually every breakout session across the three-day event, and while Legalweek may not be the point of inception for the question, it was certainly the moment this idea graduated from a half-whispered aside to main-stage conversation.

Moreover, the problem has grown more urgent since its inception in the early GenAI days, when the question was simply whether a firm would need fewer associates. Now, that question hasn’t gone away, but it’s been joined by harder ones concerning training, hiring, and legal and technical skills. For example, what if AI is already better than a junior associate at some of the tasks that defined the role in the past? And what happens if someone says it out loud?

Someone said it out loud.


If you’re looking for something that is facing a genuine existential reckoning, Legalweek 2026 may have found it. It turns out the billable hour was never the thing in danger, rather it’s the person billing the hours. It’s the associate.


During a panel on Measuring What Matters, the conversation turned to client trust. Clients want to know: How can you be sure AI will catch everything? How do you trust it to find what matters across 5,000 pages of documents?

The response from the panel was direct, and it landed like a brick in the room: it’s 5,000 pages, and someone was already reading those 5,000 pages. That someone is an associate. If that associate — who, more often than not, is one of the least experienced lawyers in the building — is the one reading all those pages, why would you trust them to do it better than a machine?

While that question hung in the air during the panel, it does deserve to sit with you for a moment afterward. Because embedded in it is the uncomfortable arithmetic that drives the entire associate question. The traditional law firm pyramid is built on a base of junior lawyers performing high-volume, lower-complexity work such as document review, due diligence, first-pass research, and doing so at rates that generate revenue while the activity is simultaneously (in theory) training the next generation of partners. If AI can do that base-layer work faster, cheaper, and with accuracy that one panelist described as “beyond very good,” then the pyramid doesn’t just shrink. It loses its foundation.

Barclay Blair, Senior Managing Director of AI Innovation at DLA Piper, noted that tasks like due diligence on some types of financial contracts are already being compressed to two hours, down from 15 to 20 — with zero hours being a realistic possibility in the near future.

Further, as one attendee observed, clients increasingly are adopting AI internally, and they’re bringing work in-house that was previously sent to outside counsel. Clearly, the work that trained generations of associates isn’t just being automated — in some cases, it’s leaving the firm entirely.

Fewer reps, greater weight

Yet here is where it would be easy (and wrong) to write the doom-and-gloom version of the future, in which AI replaces associates, the pipeline collapses, nobody knows how to train lawyers anymore, civilization crumbles, etc. It’s a clean narrative, but it’s also not what Legalweek panels actually said.

Because alongside the anxiety, something else was happening. People were building answers.

In another panel, Developing the Future Lawyer, panelists spent an hour in the weeds of what associate training actually looks like when the old model breaks down — and the conversation was far more concrete than you might expect.


Panelists spent an hour in the weeds of what associate training actually looks like when the old model breaks down — and the conversation was far more concrete than you might expect.


Panelist Abdi Shayesteh, Founder and CEO of AltaClaro, laid out the core problem with precision, noting that there’s a growing gap in critical thinking among associates: templates get copy-pasted without relevance analysis, and associates don’t know what they don’t know. And traditional training methods, such as videos, lectures, and passive learning, don’t fix it. Indeed, those outdated models may be making it worse. Shayesteh’s analogy was blunt: You don’t learn to swim by watching videos — you need to jump into the deep end.

His solution is AI-powered simulations. Not hypothetical ones, but working deposition simulations available today, with real-time AI feedback, in which associates can practice cross-examination, deal with opposing counsel objections, and build the muscle memory that used to require years of live experience.

Kate Orr, Managing Director of Practice Innovation at Orrick, picked up the thread with two observations that reframed the stakes. First, AI simulations allow associates to fail behind closed doors, a radical improvement over the old model, in which blowing it had real consequences because failure often happened directly in front of the partners. Second, the tool isn’t just for juniors. Even experienced lawyers are using simulations to test different approaches, tweak personas, and sharpen arguments. Orrick’s own Supreme Court team had a lawyer use AI to review a draft brief and identify paragraphs that could be tighter.

Todd Heffner, Partner at Smith, Gambrell & Russell, said the real question isn’t whether associates will use AI, but rather whether it gets them to lead at trial in year 10 instead of year 20. Right now, most associates are lucky to see the inside of a courtroom in their first seven years, and even then, they spend most of their time back in the hotel prepping for the more experienced attorneys instead of arguing themselves. If simulations can compress that learning curve, the associate’s career doesn’t disappear, rather, it gets accelerated.

The dinosaur that adapted

During the Measuring What Matters panel, Mitchell Kaplan, Managing Director of Zarwin Baum, introduced himself with a memorable bit of self-deprecation: He’s a dinosaur — but one, he clarified, who understands how AI can revolutionize what he does.

Kaplan’s perspective threaded through both days of programming like a quiet counterweight to the anxiety. He’d seen this before — not AI specifically, but the fear of it. He watched the legal industry transition from physical libraries to digital research tools, and he watched attorneys adapt. And his message was consistent: the work changes, but the need for lawyers doesn’t disappear. Associates may be taking shortcuts, but they still need to read, still need to review, and still need to think.

They’re developing differently than his generation did, Kaplan said, but it’s the same way every generation develops differently from the one before it. And different doesn’t mean wrong.


The work changes, but the need for lawyers doesn’t disappear. Associates may be taking shortcuts, but they still need to read, still need to review, and still need to think.


It’s a perspective that found an unexpected echo in the Enterprise Alignment panel. Mark Brennan, a partner at Hogan Lovells, relayed a comment he heard at a previous AI conference: The next generation of entry-level jobs will be managers — because they’ll be managing agents and other tech tools. Brennan admitted he didn’t have all the answers on what that means for legal training, but the implication was clear. The associate role isn’t dying, instead, it’s being redefined. And the firms that figure out what that redefined role looks like, what mix of legal training, technological fluency, critical thinking, and management skills it requires, will have a significant advantage over those firms that are still debating it.

Another panelist, Andrew Medeiros, Managing Director of Innovation at Troutman Pepper Locke, made a prediction that felt like the sharpest version of this idea. He said that at some point, new lawyers are going to be doing simulated matters as a standard part of the development process. Eventually, there’s going to be a generation that walks in as new attorneys and finds themselves litigating right away.

That’s not the death of the associate. Rather, that’s the beginning of a different kind of associate — one who arrives at the courtroom sooner, with different preparation, carrying different tools.

The billable hour, for all the prophecies, refuses to die. The associate, it turns out, has no intention of dying either — just evolving. Mitchell Kaplan called himself a dinosaur — but Legalweek was full of dinosaurs, and every one of them was adapting, and in that adaptation, thriving. The harder question is whether the firms that forged them will be brave enough to follow.


You can find more ofĚýour coverage of Legalweek eventsĚýhere

The professional judgment gap: Tracing AI’s impact from lecture hall to professional services /en-us/posts/corporates/ai-professional-judgment-gap/ Thu, 05 Mar 2026

Key highlights:

      • Universities face pressure over pedagogy — Academic institutions are adopting AI as a reputational marker that’s driven by market pressure rather than educational need, creating a risk for students who can work with AI but not independently of it.

      • Entry-level roles under threat — AI is being deployed most heavily to automate the grunt work of entry-level positions in which foundational professional skills are traditionally built through struggle and feedback.

      • K-shaped cognitive economy emerging — Experienced professionals with existing expertise are gaining efficiency from AI, while entry-level workers are losing access to skill-building experiences.


According to Harvard University’s Professional & Executive Development division, innovation is defined as a “process that guides businesses through developing products or services that deliver value to customers in new and novel ways.” Along this journey, professional judgment in decision-making is used numerous times to determine next steps at key stages.

Notably, the word technology is nowhere to be found in this definition — an absence that Dr. Heinsfeld, Assistant Professor of Learning Technologies at the University of Minnesota, has long found revealing. Instead, innovation is framed as creative problem-solving, contextual intelligence, and the ability to work across perspectives. Interestingly, Dr. Heinsfeld adds, none of these require constant automation. In fact, many of them are undermined by it.

However, AI adoption has the real potential to automate away the very experiences that build these capabilities, from university lecture halls to corporate offices. With notable data already pointing in that direction, the risk that current approaches to AI use in universities and companies are engineering away innovation and professional judgment skills is real, notes Dr. Tanaka, Group Leader in AI Research at Harvard and NTT Research.

Indeed, some observers view AI as the largest unregulated cognitive engineering experiment in human history. Yet, unlike medical drugs that require years of approval and testing, AI systems are reshaping how millions of students think, learn, and make decisions without a comparable approval process or a shared framework for discussing any potential “side effects,” as Dr. Heinsfeld pointed out.


Most worrisome is that AI is being deployed most heavily to automate precisely the entry-level roles where foundational professional skills are built.


So, what happens when an entire generation of future employees learns to delegate judgment before developing it? And what actions do universities and companies need to take now to avoid this reality?

Risks of universities adopting AI under pressure

For universities, AI “has become a reputational marker, and not adopting AI is framed as institutional risk, regardless of whether an educational case has been made or not,” says Dr. Heinsfeld, adding that this is being driven, in part, by market pressure rather than pedagogical need.

Already, companies, as the employers of new graduates, can greatly influence universities; and AI systems are currently being optimized for speed, agreeability, and accessibility to stimulate ongoing use. However, as Dr. Heinsfeld contends, as universities race to earn the label “AI ready” without a careful, cautious, and detailed understanding of how AI may impact students’ cognitive processes, they risk damaging their reputations for pedagogical integrity.

In addition, the “data as truth” paradigm is a complicating factor, she says. Drawing on her research, Dr. Heinsfeld explains how data “is often framed as the idea of being a single source of truth based on the assumption that when collected and analyzed, it can reveal objective, indisputable facts about the world.” Indeed, this ubiquitous mindset across universities and corporations treats data — such as that used to train large and small language models — as objective and indisputable.

Yet this obscures critical decisions about what gets measured, whose perspectives are included, and what forms of knowledge are systematically excluded from AI systems. As Dr. Heinsfeld warns, when data becomes synonymous with truth, “knowledge is what is measurable and optimizable.” This narrows professional judgment to efficiency metrics rather than the interpretive depth, ethical reasoning, and cultural context that are essential for sound decision-making.

Judgment gap widens in workforce downstream

Under the current approach to AI adoption, students could leave universities able to work with AI but not independently of it, a distinction Dr. Heinsfeld emphasizes. Like calculators, AI works as a tool only when the foundational skills for its use exist first. Without them, graduates enter the workforce with a critical judgment gap that compounds as they move from college campuses into corporate careers.


AI adoption has the real potential to automate away the very experiences that build these capabilities from university lecture halls to corporate offices.


Most worrisome is that AI is being deployed most heavily to automate precisely the entry-level roles where foundational professional skills are built, warns Dr. Tanaka. Indeed, this is exactly the type of grunt work that teaches judgment through struggle and feedback. Over time, overuse of AI will sacrifice quality because critical evaluation skills will have atrophied.

Looking into the future, Dr. Tanaka foresees a K-shaped economy of cognitive capacity. Experienced professionals, with expertise and contextual judgment built over years, will gain increasing efficiency from AI. Entry-level workers, however, will lose access to the valuable experiences that build professional judgment. The gap will widen between professionals who can independently accelerate their workflows using AI and those whose traditional tasks are merely displaced by it.

Intervention may be able to break the cycle

The pattern is not inevitable, as both Dr. Tanaka and Dr. Heinsfeld explain. As Dr. Heinsfeld’s emphasis on institutional agency suggests, meaningful intervention will depend on conscious, intentional choices made at every level. Both experts offer guidance for how different stakeholders can manage this:

Academic institutions — Universities must first recognize that AI adoption is a decision rather than an inevitability and make educational need the North Star for decision-making around AI. In her analysis, Dr. Heinsfeld emphasizes that when vendors set defaults, they quietly redefine academic practice. Defaults shape what is made visible or invisible and what becomes normalized. In AI-driven environments, universities often lose control over how models are trained and updated, what data shapes outputs, how knowledge is filtered and ranked, and how student and faculty data circulate beyond institutional boundaries — especially if decision-making is left to vendors. As a result, the intellectual byproducts of teaching and learning increasingly become inputs into external systems that universities do not govern.

Private entities — For organizations, Dr. Tanaka calls for feedback loops and other mechanisms that promote open discussion about AI use without stigma. In addition, companies need to proactively redesign entry-level roles to ensure these positions continue to cultivate judgment and foundational skills in an AI-driven environment. Likewise, Dr. Tanaka suggests that companies explicitly give employees feedback about cognitive trade-offs, fostering an understanding of possible skill atrophy.

Employees — Similarly, individuals bear much of the responsibility for ensuring that AI enhances, rather than replaces, their critical thinking. Deciding strategically when to use AI, and when to work without it to preserve cognitive capacity and professional judgment, is key.

Looking ahead

In today’s increasingly AI-driven environment, a new paradigm is needed to combat the current operating assumption that optimization from AI is the sole path to progress. And because the current trajectory sacrifices human development for efficiency, the need for universities and companies to choose a different path is urgent — while they still have the judgment capacity to do so.


You can find out more about how organizations are managing their talent and training issues here

The AI Law Professor: When AI makes lawyers work more, not less /en-us/posts/technology/ai-law-professor-ai-makes-lawyers-work-more-not-less/ Tue, 03 Mar 2026 14:58:48 +0000 https://blogs.thomsonreuters.com/en-us/?p=69696

Key points:

      • The productivity promise is largely wrong — Emerging research shows that AI doesn’t reduce work — it intensifies it. Lawyers work faster, take on broader responsibilities, and extend their hours without recognizing the expansion. Further, because prompting AI feels like chatting rather than laboring, lawyers slip work into evenings and weekends without registering it as additional effort.

      • Self-reinforcing acceleration is the real risk — AI speeds tasks, which raises expectations, which increases reliance, which expands scope, ultimately creating a cycle that drives burnout in a profession already plagued by it.

      • Purposeful integration is the antidote — Legal organizations need to promote intentional governance structures that account for how people actually behave with AI, not how leadership imagines they will or should.


Welcome back to The AI Law Professor. Last month, I examined how AI is forcing us to rethink training for junior lawyers. This month, I examine a question that affects every lawyer: What happens when the efficiency gains we’ve been promised don’t materialize the way we expected? A recent study out of UC-Berkeley suggests the answer is more troubling than most law firm leaders realize.

If you’ve attended a legal technology conference anytime over the past two years, you’ve heard the pitch: Automate the mundane and elevate the meaningful.

A recent study published in the Harvard Business Review by UC-Berkeley researchers Aruna Ranganathan and Xingqi Maggie Ye suggests we should be more skeptical. They tracked how generative AI (GenAI) changed work habits over eight months at a 200-person technology company. Their findings were striking — AI tools didn’t reduce work; rather, they intensified it.

According to the study, the tech employees worked faster, took on broader responsibilities, extended their hours into evenings and weekends, and multitasked more aggressively — all without being asked to do so. The promise of liberation became a reality of acceleration and overwork.

For those of us in the legal profession, this should be a wake-up call.

Three forms of intensification

The researchers identified three patterns that will sound familiar to anyone watching lawyers adopt GenAI in their work processes.

Task expansion

Because AI fills knowledge gaps, professionals stepped into responsibilities that previously belonged to others. Product managers started writing code, and researchers took on engineering tasks. In legal contexts, the parallel is obvious. Associates use AI to attempt tasks once reserved for senior lawyers. Paralegals draft documents that previously required attorney oversight. Solo practitioners take on matters outside their core expertise because their AI tools make it feel manageable. The result isn’t less work distributed more efficiently; it’s more work concentrated in fewer hands, with less institutional knowledge guiding the output.

Blurred boundaries

AI blurred the boundaries between work and non-work. Because prompting an AI feels more like chatting than labor, lawyers (like the tech workers in the study) may slip work into lunch breaks, evenings, and commutes without registering it as additional effort. The conversational interface is seductive precisely because it doesn’t feel like work. It is work, however, and much more of it.

Pervasive multitasking

Workers managed multiple AI threads simultaneously, generating a sense of momentum that masked increasing cognitive load. For lawyers, this means running parallel research queries, drafting multiple documents at once, and constantly monitoring AI outputs, all while believing they’re saving time.

The productivity trap

The most important insight from the research is that these effects are self-reinforcing. AI accelerates tasks, which raises expectations for speed. Higher speed increases reliance on AI, and greater reliance expands the scope of what people attempt. And expanded scope generates even more work. Rinse and repeat.

Parkinson’s law: “Work expands to fill the time available for its completion.”

In a profession already plagued by burnout, this cycle should alarm us. The legal industry’s adoption of AI is being driven largely by the promise of doing the same work in less time. But if the Berkeley research is any guide, what actually happens is that we do more work in the same amount of time, or more work in more time, while telling ourselves we’re being more productive.

And because the extra effort feels voluntary, firm leadership may not see the problem until it manifests as errors, attrition, or ethical lapses. In law, the cost of impaired judgment isn’t just a missed deadline — it’s a client’s liberty, livelihood, or life savings.
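
To make the compounding concrete, here is a minimal toy model of that cycle in Python. The speedup and scope-growth figures are illustrative assumptions, not numbers from the Berkeley study; the point is only that per-task time can fall every cycle while total weekly hours hold steady or creep upward.

```python
# A toy model of the self-reinforcing cycle: AI accelerates each task,
# while expectations and scope expand even faster. All figures are
# illustrative assumptions, not data from the Berkeley study.

hours_per_task = 4.0    # effort per task before AI
tasks_per_week = 10     # expected weekly workload before AI
speedup = 0.75          # assume AI cuts per-task time 25% each cycle
scope_growth = 1.35     # assume expectations grow 35% each cycle

print(f"Start: {tasks_per_week} tasks, "
      f"{hours_per_task * tasks_per_week:.0f} hours/week")
for cycle in range(1, 5):
    hours_per_task *= speedup       # tasks get faster...
    tasks_per_week *= scope_growth  # ...but scope expands more
    total = hours_per_task * tasks_per_week
    print(f"Cycle {cycle}: {tasks_per_week:.0f} tasks, {total:.0f} hours/week")

# Task count roughly triples over four cycles while weekly hours stay
# flat or rise slightly -- "more work in the same amount of time."
```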

From productivity to purposeful practice

The Berkeley researchers propose what they call an “AI practice”: intentional norms and routines that structure how AI is used, including when to stop and how work should and should not expand. I’d go further. For legal organizations, purposeful AI integration requires more than workplace wellness norms. It requires a strategic framework that aligns AI capabilities with organizational mission, ethical obligations, and sustainable human performance.

This means, first off, being honest about what AI actually does to workloads rather than what we hope it will do. If your firm adopted AI expecting to reduce associate hours, audit whether that has actually happened, or whether associates are simply filling reclaimed time with more work.
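
As a sketch of what such an audit might look like, the comparison below checks whether average hours actually fell or whether task volume simply grew to absorb the reclaimed time. The field names and figures are hypothetical, not data from any firm or from the study.

```python
# Hypothetical audit sketch: did AI adoption reduce associate hours,
# or are associates refilling reclaimed time with more work?
# Field names and sample figures are illustrative assumptions.

before = {"avg_weekly_hours": 52.0, "avg_weekly_tasks": 18}
after = {"avg_weekly_hours": 53.5, "avg_weekly_tasks": 27}

hours_delta = after["avg_weekly_hours"] - before["avg_weekly_hours"]
tasks_delta = after["avg_weekly_tasks"] - before["avg_weekly_tasks"]

print(f"Hours change: {hours_delta:+.1f}/week; tasks change: {tasks_delta:+d}")
if hours_delta >= 0 and tasks_delta > 0:
    print("Intensification: scope grew while hours did not fall.")
elif hours_delta < 0:
    print("Efficiency captured: hours actually declined.")
else:
    print("Mixed signal: review task mix and after-hours activity.")
```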

Second, it means building governance structures that account for how people actually behave with these tools, rather than how leadership imagines they will. The Berkeley study found that workers expanded their workloads voluntarily, without management direction. Top-down AI policies that focus solely on permissible use will miss the intensification that could be happening in plain sight.


The most important insight from the research is that these effects are self-reinforcing. AI accelerates tasks, which raises expectations for speed. Higher speed increases reliance on AI, and greater reliance expands the scope of what people attempt. And expanded scope generates even more work.


Third, it means preserving space for the distinctly human work that AI cannot replicate, such as judgment, empathy, ethical reasoning, and the kind of creative problem-solving that emerges from genuine human dialogue — not from a conversation with a chatbot. The researchers also found that AI-enabled work became increasingly solitary and continuous, a dangerous trajectory.

The narrative that AI will free lawyers for higher-value work isn’t just optimistic. It’s a misunderstanding of how these tools interact with human psychology. AI doesn’t create leisure. It creates capacity — and without intentional structures, that capacity gets filled, not with strategic thinking, but with more of everything.

While it’s clear that AI will change the legal profession, the real challenge is whether law firms will integrate AI with purpose, shaping it to serve their values, their clients, and their professionals’ well-being, or whether they’ll allow the technology to quietly shape them into something they didn’t intend to become.

Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and author of a forthcoming book. He is “The AI Law Professor” and writes this eponymous column for the Thomson Reuters Institute.


You can find more about the use of AI and GenAI in the legal industry here

Corporate tax departments’ Groundhog Day problem — and the hybrid model that could fix it /en-us/posts/corporates/tax-departments-hybrid-model/ Thu, 26 Feb 2026 15:20:56 +0000 https://blogs.thomsonreuters.com/en-us/?p=69625

Key takeaways:

      • Tax departments lack resources and confidence — More than half (58%) of tax departments are under-resourced, and 59% are not confident that they can upgrade their tax technology over the next two years.

      • Under-resourced departments incur more penalties — At least half of respondents from under-resourced tax departments say their departments incurred penalties over the past year, compared to only about one-third of those from properly resourced departments.

      • Making the shift to proactive planning and value creation — For many tax departments, the winning model blends in-house expertise, targeted external support, and a coherent tech/AI stack that allows teams to shift from tactical compliance to proactive planning and strategic value creation.


Under-resourced corporate tax departments spend more of their budget on external support than well-resourced teams do — yet they’re more likely to incur penalties and less confident in their forecasting, according to a recent Thomson Reuters Institute report.

Given this, the problem isn’t a lack of spending — it’s the operating model. With 58% of tax department respondents saying their departments are under-resourced, 59% saying they lack confidence that they can upgrade their existing tax technology over the next two years, and most spending more than half their time on reactive compliance work when they’d prefer to focus on strategic planning, the gap between ambition and reality has never been wider.

The answer isn’t working harder or throwing more money at consultants, however. It’s building a hybrid ecosystem of people, platforms, and partners designed to shift capacity from firefighting to foresight.

The Groundhog Day problem

Every year feels the same: New tax legislation (such as the One Big Beautiful Bill Act or Pillar 2), new compliance burdens, new geopolitical uncertainty — coupled with the same old constraints. Too much work, not enough time, and technology that lags.

When deadlines hit, under-resourced teams rely on two blunt levers: overtime and reactive outsourcing. Internal staff end up working longer hours, and external providers plug the gaps at short notice. This model is breaking departments, and it is itself breaking down.

Under-resourced departments are significantly more likely to incur penalties: 50% of respondents from under-resourced departments say they had been penalized in the past year, compared to just 34% of respondents from well-resourced departments, according to the report.

Further, respondents from under-resourced departments were less confident in their forecasting, with just 26% rating their ability to forecast accurately as “very likely,” compared to 43% of well-resourced department respondents. Ironically, under-resourced departments also spend more on external support as a percentage of budget (44%, versus 37% for well-resourced departments). Clearly, spending more doesn’t solve structural problems — it often masks them.

Meanwhile, tax professionals report spending more than half their time on tactical or reactive work, even though they would prefer to spend up to two-thirds of their time on strategic analysis. Not surprisingly, when the team is locked into manual reconciliations and last-minute fixes, it’s nearly impossible to influence business decisions or shape strategy.

Why “all in-house” or “all outsourced” no longer works

When more work is moved onto the internal tax team’s plate, “all in-house” can often come to mean all heroics — talented people drowning in compliance volume with no time to use the analytical tools already on their desks. Conversely, “all outsourced” risks hollowing out the department’s institutional knowledge and weakening its seat at the table.

A hybrid model asks better questions: What kind of work is this, and where does it create the most leverage? These questions can be used to determine where and to whom work should go. For example, high-volume, rule-based, recurring tasks are prime candidates for automation, shared services, or managed services under strong tax oversight; while complex, judgment-heavy, strategically sensitive work should remain anchored in-house, with external advisors extending capacity and offering specialized insight.
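
As a rough illustration, a department could encode those routing questions in a simple decision rule. The categories and destinations below are a hypothetical sketch, not a framework taken from the report.

```python
# Hypothetical sketch of the hybrid-model triage described above.
# Categories and routing destinations are illustrative assumptions.

def route_work(high_volume: bool, rule_based: bool,
               judgment_heavy: bool, strategically_sensitive: bool) -> str:
    """Suggest where a class of tax work creates the most leverage."""
    if judgment_heavy or strategically_sensitive:
        # Complex, sensitive work stays anchored in-house, with
        # external advisors extending capacity and insight as needed.
        return "in-house, with targeted external advisors"
    if high_volume and rule_based:
        # Recurring, rule-based tasks are prime candidates for automation
        # or shared/managed services under strong tax oversight.
        return "automation / shared or managed services"
    return "in-house, reviewed case by case"

print(route_work(high_volume=True, rule_based=True,
                 judgment_heavy=False, strategically_sensitive=False))
print(route_work(high_volume=False, rule_based=False,
                 judgment_heavy=True, strategically_sensitive=True))
```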

Thus, the best model for a modern corporate tax department is a hybrid ecosystem — not a fixed organizational chart, but a deliberate blend of internal expertise, enabling technology, and external capability partners.

Four layers of the hybrid ecosystem

This hybrid ecosystem can be delineated into four layers, each bringing its own insight and value:

      1. People and roles redesigned — High-performing tax functions invest in analyst and tax-tech roles that connect tax to enterprise resource planning (ERP) systems, data hubs, and analytics, thus freeing technical experts from manual data work. Senior professionals then become embedded advisors to finance, treasury, and the business, not just compliance reviewers.
      2. Processes segmented into “run” and “change” — The biggest barriers to strategic work are excessive volume, heavy compliance burdens, limited resources, and time pressure. Modern tax departments respond by explicitly segmenting work: “run the business” processes are documented, standardized, and increasingly automated or pushed into shared or managed service models, while “change the business” work remains tightly linked to senior tax staff.
      3. Technology becomes the data spine — More than half of respondents say they expect above-normal increases in their tax technology budgets, and more than half say their main resourcing strategy is introducing more automation. The goal isn’t collecting point solutions; rather, it’s building a coherent data spine that includes ERP integration, tax-specific data models, consistent workflow tooling, and strategic platforms that flex as regulations shift.
      4. AI acts as an accelerator — Two-thirds of tax departments aren’t yet using generative AI (GenAI), according to the report. And among the one-third that are, usage clusters around research, document summarization, drafting, and some analytical support. The next step up the AI chain is for departments to move from individual experiments to standardized, governed workflows that scan legislation, prepare first drafts of memos, or interrogate large data sets for anomalies.

What high-performing hybrid tax departments do next

Departments that feel well-resourced, allocate more time for their professionals to conduct proactive work, and invest deliberately in technology and skills are significantly more confident in their ability to forecast accurately, avoid penalties, and minimize tax liabilities, the report shows.

Indeed, these high-performing hybrid tax departments:

      • invest ahead of crises in people, tech, and processes
      • treat external providers as capability partners, not emergency relief
      • actively protect time for strategic work by automating or outsourcing routine tasks
      • insist on a durable seat at the strategy table, not just one for compliance reporting
      • experiment with automation and AI in focused, repeatable use cases

It is worth noting that smaller companies (those under $50 million in annual revenue) and the largest ones (those with more than $5 billion in revenue) are leading the way by securing leadership buy-in early and leveraging specialized external expertise rather than trying to build everything in-house. Midsize companies, by contrast, are more likely to rely on in-house teams to lead automation efforts and less likely to use third-party vendors — a cautious approach that risks having them fall too far behind to catch up.

The message: Design the ecosystem, don’t just work harder

For corporate tax professionals, the message may be harsh but hopeful: You cannot work your way out of structural constraints by effort alone. Rather, a well-designed hybrid ecosystem can turn those constraints into a catalyst that will allow the department to deliver more value to the business. In fact, the modern corporate tax department is hybrid by necessity; but the question is whether it’s hybrid by design — or just by accident.


You can learn more about the challenges facing modern corporate tax departments here

Inside the Shift: What happens in the professional workplace when AI does too much? /en-us/posts/sustainability/inside-the-shift-ai-overuse/ Wed, 25 Feb 2026 16:21:23 +0000 https://blogs.thomsonreuters.com/en-us/?p=69610

You can read TRI’s latest “Inside the Shift” feature, The human side of AI: The growing risks of ubiquitous use of AI on talent, here


It’s no exaggeration to say that AI is everywhere in our workplaces right now. It writes our emails, summarizes our meetings, generates slides, and even helps us think through problems. On the surface, this may sound like progress — and in many ways, it is.

However, our latest Inside the Shift feature, The human side of AI: The growing risks of ubiquitous use of AI on talent, by Natalie Runyon, Content Strategist for Sustainability and Human Rights Crimes for the Thomson Reuters Institute, makes a clear and timely point: When AI use becomes excessive and unchecked, it can quietly undermine the very people it’s meant to help.


One major consequence of cognitive decay is the weakening of the brain’s capacity to engage deeply, question systematically, and — somewhat ironically — resist the potential manipulation of AI.


As the article details, the harms caused by AI overuse can include a slow erosion of human connections, a loss of professional purpose, and a general sense of being overwhelmed in the workplace.

Of course, the solution isn’t to reject AI; it’s to use it better. To this end, the article makes a strong case for organizations to foster hybrid intelligence, a process in which human judgment and creativity work alongside AI capabilities.

In today’s workplace, AI can be a powerful advantage, but only if organizational leaders remember that technology should enhance the human experience, not replace the parts of professional life that workers value.


To examine this and many more situations, the Thomson Reuters Institute (TRI) has launched a new feature segment, Inside the Shift, which leverages our expert analysis and supporting data to tell some of the most compelling stories in professional services today.
