Legal Talent Archives - Thomson Reuters Institute

Honing legal judgment: The AI era requires changes to how lawyers are trained during and after law school (Thomson Reuters Institute, April 2, 2026)

Key takeaways:

      • AI threatens traditional lawyer development — As AI automates entry-level legal tasks like research and writing that historically have honed legal judgment skills, the profession faces a crisis in how new lawyers will develop those judgment abilities.

      • The profession can’t agree on what constitutes “legal judgment” — Unlike other professions, there is no agreed-upon definition of legal judgment or clear standards for when AI should be used.

      • Implementation requires unprecedented coordination and funding — A proposed legal education fund would be supported by a small percentage of legal services revenue and would require coordinated action across law schools, legal employers, and state regulators.


This is the second of a two-part blog series that looks at how lawyer training needs to evolve in the age of AI. The first part of this series looked at how lawyers can keep their skills relevant amid AI utilization.

The key skills that comprise legal judgment have received mixed reviews, according to a recent white paper from the Thomson Reuters Institute that advocates for cultivating practice-ready lawyers. The white paper, based on feedback from thousands of experienced lawyers, judges, and law students, raises questions about how legal judgment forms when AI assistance is used to complete tasks.

Furlong notes that the white paper calls for ways “… to accelerate the development of legal judgment early in lawyers’ careers.”

The challenge is that each part of the profession — law schools, employers, state supreme courts (as regulators) — has distinctly separate responsibilities. That means that, in the age of AI, coordination across the entire legal profession is needed, especially as AI reduces the availability of traditional first jobs.

Furlong points out that there is no consensus on what legal judgment is, nor any agreed-upon standards for when AI should be used in legal work. To bring clarity to these issues, the white paper proposed a profession-wide model that integrates three critical elements: i) work-based learning modeled on medical residencies; ii) micro-skill decomposition of legal judgment; and iii) AI-as-thinking-partner throughout pedagogy.

Three pillars for an AI-era lawyer formation system

Not surprisingly, overreliance on AI can erode critical analysis and solid legal judgment skills. Addressing these concerns requires a comprehensive reimagining of how lawyers are educated and trained. One solution lies in three interconnected pillars that together form a cohesive system for developing legal judgment in an AI-integrated world.

Pillar 1: Integrate work experience into legal education

Core skills such as legal research, writing, and document review help develop legal judgment; yet these skills could atrophy once AI assumes such tasks. The Brookings Institution recently proposed ways to preserve entry-level professional development in an AI era. This parallels the TRI white paper’s call for mandatory supervised postgraduate practice as a key part of legal licensure.

While implementing a full residency model presents challenges, several law schools have already pioneered approaches that demonstrate the viability of work-integrated legal education and that, if scaled appropriately, could improve new lawyers’ practice and judgment skills. For example, Northeastern Law School guarantees all students substantial work experience before graduation through four quarter-length legal positions. The program integrates supervised practice into the curriculum so graduates gain hands-on experience alongside their classroom instruction.

Also, at least one state program offers an alternative pathway to bar admission through practice-based assessment rather than the traditional bar exam, demonstrating that competency can be evaluated through supervised experiential learning.

Pillar 2: Decompose legal judgment into teachable micro-skills

The legal profession needs to come to a common definition of legal judgment and identify its components in order to teach the concept effectively. “We can’t teach what we can’t describe,” Furlong says. Those components include:

      • Pattern recognition — The ability to identify when different fact patterns are related to similar legal frameworks and distinguish when superficially similar cases are legally distinct.
      • Strategic calibration and proportionality — This means understanding what level of effort, precision, and risk each matter requires and matching responses to the stakes involved.
      • Reasoning through uncertainty — This is the capacity to make defensible decisions and provide sound counsel even when the law is ambiguous, unsettled, or silent on an issue.
      • Source evaluation and authority weighting — This includes knowing which legal authorities are most suitable and being able to assess their persuasive value.
      • Ethical judgment under pressure — This means spotting conflicts, confidentiality issues, and duty-of-candor moments while maintaining competence and knowing when to escalate beyond expertise.

Breaking down legal judgment into these discrete components makes it possible to design targeted teaching interventions. For example, one former law professor and legal education executive suggests building verification into AI-assisted workflows by requiring a short verification log (detailing sources checked, changes made, and why); running attack-the-draft drills (finding missing authority, weak inferences, and jurisdictional mismatches); and preserving slow work as formative work (citation chaining, updating, and adversarial research memos).
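As an illustration only, the verification log described above could be kept as a simple structured record. The sketch below is hypothetical — the `VerificationLog` class and its field names are this author's invention, not part of any bar-sanctioned workflow — but it shows the three elements such a log should capture: the source checked, the change made, and why.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VerificationEntry:
    """One human verification step taken on an AI-assisted draft."""
    source_checked: str   # authority consulted (case, statute, regulation)
    change_made: str      # what was edited in the draft as a result
    rationale: str        # why the change was necessary

@dataclass
class VerificationLog:
    """Running log of verification steps for a single matter."""
    matter: str
    entries: List[VerificationEntry] = field(default_factory=list)

    def record(self, source_checked: str, change_made: str, rationale: str) -> None:
        self.entries.append(VerificationEntry(source_checked, change_made, rationale))

    def summary(self) -> str:
        return f"{self.matter}: {len(self.entries)} verification step(s) logged"

# Example usage with a made-up citation
log = VerificationLog(matter="Draft motion to dismiss")
log.record(
    source_checked="Smith v. Jones, 123 F.3d 456",
    change_made="Narrowed the holding as characterized in the AI draft",
    rationale="Draft overstated the precedent's scope",
)
print(log.summary())
```

In practice such a log might live in a firm's document management system; the point is simply that each AI-assisted edit leaves an auditable trail of human verification.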

With judgment skills clearly defined and work experience integrated into training, the profession must then tackle how AI itself should be incorporated into lawyer development.

Pillar 3: AI-as-thinking-partner throughout a lawyer’s career

Warnings about overreliance on AI are mounting. The legal profession must provide clear standards for when and how AI should be used, along with training in verification and judgment skills. Overreliance on AI could compromise lawyers’ capacity to fulfill their fiduciary duties to clients.

A phased approach to introducing AI into legal work helps protect critical thinking while building AI competency. For example, in Year 1, law students could complete core legal reasoning exercises without AI assistance in order to develop their analytical muscles. In Year 2, students could use AI as a research assistant with mandatory verification protocols that teach them to check outputs against authoritative sources. Finally, in Year 3, residencies could immerse students in real-world AI workflows under proper supervision and with regular feedback.

These three pillars form a coherent vision for lawyer formation in the AI era. However, the most well-designed system faces the obstacle of funding.

The challenge of who pays

Perhaps the most difficult part of any overhaul is the cost. The medical residency model works because the federal government subsidizes graduate medical education — up to $15 billion-plus annually — for teaching young medical students to be doctors. Legal education has no equivalent. Without addressing funding, however, even the best reforms will fail.

One idea is to establish a legal education fund that’s supported by an assessment of a small percentage of the legal industry’s gross legal services revenue (while exempting solo practitioners and firms with less than $500,000 in annual revenue). These funds could be used to subsidize thousands of supervised residency placements, fund law school curriculum development, support bar exam alternative assessments, and provide employer training and supervision stipends.


The challenge is that each part of the profession — law schools, employers, state supreme courts — has distinctly separate responsibilities, and that means coordination across the entire legal profession is needed.


This proposal, of course, would require unprecedented coordination and financial commitment from the legal profession. Skeptics might argue that market forces can solve this problem, that firms will simply create new training pathways, or that AI will prove less disruptive than feared. However, waiting for market forces risks a lost generation of lawyers. The medical profession learned this lesson when voluntary reform failed; only later did coordinated regulatory intervention produce the consistent quality standards the medical industry sees now.

What is clear is that inaction is resulting in degradation of lawyering skills. “Maybe… we need catastrophic external intervention to bring about the wholesale changes we can’t manage from the inside,” Furlong suggests.

However, the question is whether the legal profession will wait for a crisis to force change or act proactively to make the needed changes now, before the crisis hits.


You can learn more about the impact of AI on professional services organizations at TRI’s upcoming 2026 Future of AI & Technology Forum.

AI use and employee experience: New research reveals guidance gap in professional services (March 30, 2026)

Key takeaways:

      • Employees face contradictory messages or none at all — Nearly 40% of professionals surveyed report receiving conflicting directives about AI usage from clients and leadership, while half report that no client conversations about AI have occurred at all.

      • Workers lack feedback on whether their AI efforts matter — Professionals experimenting with AI tools without knowing if their efforts are valued are left uncertain about whether investing time in developing AI skills is worthwhile.

      • Job displacement fears are rising — While employees remain cautiously optimistic about AI usage in their workplace, concerns about job displacement have doubled over the past year.


As generative AI (GenAI) tools flood into legal and accounting workplaces, organizations are deploying powerful technology without giving their employees clear directions on how to use it. Worse, some have received no guidance.

New research underpinning the recent 2026 AI in Professional Services Report from the Thomson Reuters Institute (TRI) reveals a disconnect between AI availability and organizational guidance, which is creating confusion that may undermine both employee experience and the technology’s potential value. (The report’s data was gathered from surveys of more than 1,500 legal, tax, accounting, and compliance professionals across 26 countries.)

Employees navigate inconsistent AI policies or none at all

Approximately 40% of the professionals surveyed said they received contradictory guidance from clients and leadership about AI tool usage, with directives both encouraging and discouraging their use on projects and in RFPs. This ambivalence is slowing down decision-making at the front lines — a place in which AI could deliver the most value.

Equally concerning is the fact that half of professionals indicated that no conversations with clients about AI tool usage have taken place yet. And when discussions do occur, concerns about data protection and accuracy are the main topics.


This confusion extends to external relationships as well. More than two-thirds of corporate and government clients remain unaware of whether their outside professional service providers are even utilizing GenAI. And the majority of clients have provided no direction whatsoever to their outside law firms concerning AI use, respondents said.


Organizations often ignore what employees need to know

Perhaps most revealing is how organizations are measuring — or failing to measure — whether their AI investments are paying off. Almost half of respondents said their organizations are not measuring return on investment (ROI) at all. Among the minority (18%) of respondents who said their organizations do track ROI, the metrics they use tell a story about organizational priorities. The fact that internal cost savings and employee usage rates lead the list suggests a focus on efficiency over innovation or quality improvements.


This measurement vacuum has consequences for employee experience. Without clear success metrics, employees lack feedback on whether their AI experimentation is valued, discouraged, or even noticed. The absence of ROI frameworks also makes it hard to justify training investments or dedicated time that allows employees to develop AI fluency.

AI usage doubles while support systems fall behind

AI usage among professional service organizations has nearly doubled over the past year, and professionals are increasingly integrating these tools into their workflows, the report shows. Yet organizational infrastructure that could support this adoption surge lags badly. Most professionals said they expect GenAI to become central to their work within the next two years — but that may be happening without roadmaps from their employers.

In addition, notable barriers in employees’ usage of AI remain. When asked what barriers could prevent their organization from more widely adopting GenAI and agentic AI, almost 80% of professionals cited concerns over inaccurate responses. Other concerns included worries over data security, privacy, and ethical use. Most of these suggest an ongoing lack of trust in GenAI.


The tool landscape adds another layer of complexity. Publicly available tools dominate current usage, with more than half of respondents (57%) citing their use, while proprietary or industry-specific solutions remain largely in the consideration phase. This suggests employees are often self-provisioning AI tools rather than working within enterprise-supported ecosystems, potentially exposing organizations to security gaps, compliance risks, and inconsistent quality.

Employees’ job displacement fears increasing

Despite these challenges, employee sentiment toward AI remains cautiously optimistic. More than half (57%) of respondents said they are either hopeful or excited about the future of GenAI in their industry. Clearly, employees see AI’s potential to enhance their efficiency, automate routine tasks, and free up their time for higher-value work.

At the same time, hesitation and concern among employees are rising, particularly around accuracy, job displacement fears, and the unknown implications of autonomous AI systems. Notably, concerns about job displacement have doubled over the past year, and this trend demands organizational attention and transparent communication about a workforce strategy to combat this concern.

What organizations need to do now

Organizational leaders who are serious about positive employee AI experiences need to step up their efforts to provide guidance to employees and gain the ROI that AI promises. Specific steps they can take include:

      • Draft clear and consistent guidance — Create explicit policies about when AI use is encouraged, required, or prohibited. This includes client communication protocols, data-handling requirements, and escalation procedures for situations in which AI outputs seem questionable.
      • Develop and implement meaningful ROI metrics — Organizations must move beyond usage rates and cost savings as key success measurements. Tracking data points that capture quality improvements, time redeployed to strategic work, and client feedback on AI-enhanced deliverables presents a more comprehensive picture. Leaders also need to share these metrics transparently to give employees an understanding of organizational priorities.
      • Invest in structured learning — The survey shows professionals are experimenting with dozens of different tools, from ChatGPT to specialized legal tech platforms. Organizations should curate recommended toolsets, provide hands-on training, and create communities of practice in which employees can share effective prompts and use cases.

Our data shows that the employee experience around AI adoption reveals a workforce that is hopeful but hungry for direction and concerned about job impacts. Leaders who implement these actions effectively are more likely to unlock the strategic value that AI promises while building the trust and competence their organizations and employees need to thrive in an automated future.


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report.

Honing legal judgment: How professional acumen & fiduciary care can keep lawyers relevant in the age of AI (March 25, 2026)

Key highlights:

      • Lawyers excel at semantic legal work while AI excels in syntactic tasks — Syntactic work (document generation, pattern recognition) is where AI excels, but semantic work involving exercising independent judgment, reflecting on consequences, and fulfilling fiduciary duties remains uniquely human.

      • Fiduciary duty as the core of legal relevance — What distinguishes lawyers isn’t just what they do, but how and why they do it. The fiduciary relationship demands a human understanding of context, balancing competing interests, recognizing unstated concerns, and exercising discretion.

      • 5 hours to deepen or diminish — The five hours lawyers expect to gain each week by using AI can either accelerate professional obsolescence or deepen lawyers’ relevance, depending on what they do with that time.


This is the first of a two-part blog series that looks at how lawyers can keep their skills relevant in the age of AI.

Lawyers expect to gain a full five hours per week of worktime due to the efficiency derived from AI use, according to the 2025 Future of Professionals Report. Yet fear of job loss among lawyers is rising: the share viewing AI as a threat or somewhat of a threat grew to almost two-thirds (65%) of those surveyed, according to the Thomson Reuters Institute’s 2026 AI in Professional Services Report.

Many in the legal profession are asking how lawyers remain uniquely valuable at a time when machines can process legal information faster and cheaper. The answer lies in understanding the difference between what AI does in processing legal information and what humans do in exercising legal judgment, says Kevin Lee, Founding Director of the Institute for AI & Democratic Governance.

Defining 2 levels of legal work

Understanding what makes lawyers particularly meaningful in this current AI moment requires distinguishing between two different levels of legal work, especially in an environment in which AI-enabled information systems compress humanity and legal judgment into data points, draining away the storytelling and moral nuance that ground both. According to Lee, these two levels are the syntactic and the semantic:

      • Syntactic — Lawyers process information, generate documents, and recognize patterns at the syntactic level, meaning those tasks in which AI excels and delivers promised efficiency gains. “The danger is that we will use this efficiency merely to generate more syntactic volume,” Lee explains, adding that this will result in faster processing of more documents at greater speeds. “If we do that, we will have automated ourselves out of a profession.”
      • Semantic — The semantic aspect of lawyering highlights the irreducible skills of the legal practice, which include exercising independent legal judgment, reflecting on consequences, demonstrating care for clients, and fulfilling fiduciary duties.

This distinction is inherent in how the practice of law is defined, Lee says, pointing out that many jurisdictions distinguish between “providing legal information” (not practicing law) and “exercising independent legal judgment” (the essence of legal practice).

He also rightly contends that the existential risk facing lawyers is not AI completing legal tasks, but rather the temptation to reduce lawyers’ role to verifying machine output and processing legal information. Conflating these two concepts is a challenge for the legal profession, one that requires a greater appreciation for the craft of legal reasoning and judgment.

Kevin Lee, Founding Director of the Institute for AI & Democratic Governance

Complicating this picture, the current information age challenges society’s assumptions about reality, consciousness, and the moral meaning of human life — all at an exponential rate, Lee says. Similarly, AI and information systems threaten to reduce everything, including human beings and law itself, to processable data by stripping away the narratives and meanings that define humanity, he adds.

Semantic qualities of legal judgment

The question of what makes lawyers especially relevant in the AI era is answered mainly by how and why they do what they do, rather than by what they do. For example, Lee points to skills around executing fiduciary duty and ensuring legitimacy and meaning as key characteristics of lawyers’ semantic qualities.

Fiduciary duty — When a client seeks legal counsel, it’s legal judgment — not information processing — that the client wants. Lawyers, as part of their fiduciary duty to their clients, demonstrate human and legal understanding of the unique context of each case and the consequences of various legal paths forward. This bond of trust between attorney and client demands reflection, consideration, care, and proper purpose.

The fiduciary duty of the lawyer to the client requires balancing competing interests, recognizing unstated concerns, and exercising discretion in ways that honor both the letter and spirit of the law. At the heart of this balance is legal reasoning and professional judgment, which often involves navigating the critical gap between legal rules as written and their meaningful application to human circumstances.

Legitimacy and meaning — Beyond the fiduciary duty of care exercised in individual client relationships, lawyers serve a broader purpose in safeguarding law’s connection to the narratives of justice and human dignity that legitimize its authority. Lawyers maintain the connection between law and its humanistic foundations; the narratives that give legal authority its legitimacy depend on that connection. “The artwork that one associates with the law (in law schools and courtrooms) connects actions and legal judgment of attorneys to the mythic meaning of justice, equality, and the rule of law,” Lee explains.

How to deepen appreciation for the special relevance of lawyers

The five hours that lawyers said they expect to gain each week through AI-driven efficiency represents a choice point for the profession. These hours can either accelerate lawyers’ obsolescence or deepen their relevance. To ensure the latter, Lee advises lawyers and legal institutions to examine ways to put those hours to good use by, for example:

Collaborating on apprenticeships — Bar associations, practicing lawyers, legal service providers, and law schools should consider apprenticeship models that teach professional norms and values through mentorship and allow law students to learn the craft of legal reasoning through guided practice.

Recommitting more fully to legal service — Law firms and in-house counsel must reclaim humanistic awareness as central to their professional identity. The efficiency gains from AI should be reinvested into semantic work, which includes counseling clients, exercising moral judgment, and fulfilling fiduciary duties with greater care and reflection.

Improving legal education — Law schools must return to the humanistic formation of lawyers, echoing a pre-2007 vision of legal education, before economic pressures reduced it to producing commercially exploitable graduates. In addition, AI ethics must be integrated systematically across the curriculum into doctrinal courses rather than being confined to electives.

Looking ahead

The five hours gained through AI represent a defining choice for the legal profession. The special relevance of lawyers in the AI age lies precisely in the human components and semantic aspects of lawyering.


In the concluding part of this blog series, we look at how the legal profession needs to rethink how it trains lawyers in order to prevent AI from eroding legal judgment skills.

Move over, “Death of the billable hour,” Legalweek 2026 has found a new existential crisis (March 19, 2026)

Key takeaways:

      • Structural change in firms — The traditional law firm pyramid, in which junior lawyers perform high-volume work at billable rates, is losing its foundation as AI compresses tasks that once took hours and clients increasingly bring more work in-house.

      • Finding new ways to train — AI-powered simulations are emerging as a concrete answer to the associate training problem, allowing new lawyers to build courtroom skills faster and fail safely behind closed doors.

      • The associate role isn’t dying, it’s being redefined — Those law firms that figure out the right mix of legal training, technological fluency, and management skills will have a significant edge over those that are still debating it.


NEW YORK — On more than one occasion, I have written seriously and at length about the death of the billable hour. I’ve argued that alternative fee arrangements (AFAs) are the future, that the economic logic of hourly billing is irreconcilable with AI-driven productivity gains, and that the industry needs to prepare for a fundamentally different pricing model. I meant every word. I still do.

Yet at last week’s Legalweek conference, one attendee pointed out that they’ve been hearing about the death of the billable hour since the 1990s. At this point, it’s less a prediction and more of a tradition. Indeed, Matthew Kohel, a partner at Saul Ewing, said that despite the legal press coverage connecting AI to the billable hour’s demise, that narrative is now entering its third or fourth decade. And Kohel said his firm simply isn’t seeing meaningful client-driven movement toward AFAs.

So let’s be honest: the billable hour is not dead, and in fact, it may not even be close to dead.

However, if you’re looking for something that is facing a genuine existential reckoning — something the legal industry whispered about in the early days of generative AI (GenAI) and is now discussing openly — Legalweek 2026 may have found it. It turns out the billable hour was never the thing in danger; rather, it’s the person billing the hours.

It’s the associate.

The question nobody wanted to ask out loud

The future of the junior lawyer surfaced in virtually every breakout session across the three-day event, and while Legalweek may not be where the question originated, it was certainly the moment the idea graduated from a half-whispered aside to a main-stage conversation.

Moreover, the problem has grown more urgent since the early GenAI days, when the question was simply whether a firm would need fewer associates. That question hasn’t gone away, but it’s been joined by harder ones concerning training, hiring, and legal and technical skills. For example, what if AI is already better than a junior associate at some of the tasks that defined the role in the past? And what happens if someone says it out loud?

Someone said it out loud.


If you’re looking for something that is facing a genuine existential reckoning, Legalweek 2026 may have found it. It turns out the billable hour was never the thing in danger; rather, it’s the person billing the hours. It’s the associate.


During a panel on Measuring What Matters, the conversation turned to client trust. Clients want to know: How can you be sure AI will catch everything? How do you trust it to find what matters across 5,000 pages of documents?

The response from the panel was direct, and it landed like a brick in the room: it’s 5,000 pages, and someone was already reading those 5,000 pages. That someone is an associate. If that associate — who, more often than not, is one of the least experienced lawyers in the building — is the one reading all those pages, why would you trust them to do it better than a machine?

While that question hung in the air during the panel, it does deserve to sit with you for a moment afterward. Because embedded in it is the uncomfortable arithmetic that drives the entire associate question. The traditional law firm pyramid is built on a base of junior lawyers performing high-volume, lower-complexity work such as document review, due diligence, first-pass research, and doing so at rates that generate revenue while the activity is simultaneously (in theory) training the next generation of partners. If AI can do that base-layer work faster, cheaper, and with accuracy that one panelist described as “beyond very good,” then the pyramid doesn’t just shrink. It loses its foundation.

Barclay Blair, Senior Managing Director of AI Innovation at DLA Piper, noted that tasks like due diligence on some types of financial contracts are already being compressed to two hours, down from 15 to 20 — with zero hours being a realistic possibility in the near future.

Further, as one attendee observed, clients increasingly are adopting AI internally, and they’re bringing work in-house that was previously sent to outside counsel. Clearly, the work that trained generations of associates isn’t just being automated — in some cases, it’s leaving the firm entirely.

Fewer reps, greater weight

Yet here is where it would be easy (and wrong) to write the doom-and-gloom version of the future, in which AI replaces associates, the pipeline collapses, nobody knows how to train lawyers anymore, civilization crumbles, etc. It’s a clean narrative, but it’s also not what Legalweek panels actually said.

Because alongside the anxiety, something else was happening. People were building answers.

In another panel, Developing the Future Lawyer, panelists spent an hour in the weeds of what associate training actually looks like when the old model breaks down — and the conversation was far more concrete than you might expect.


Panelists spent an hour in the weeds of what associate training actually looks like when the old model breaks down — and the conversation was far more concrete than you might expect.


Panelist Abdi Shayesteh, Founder and CEO of AltaClaro, laid out the core problem with precision, noting that there’s a growing gap in critical thinking among associates: templates get copy-pasted without relevance analysis, and associates often don’t know what they don’t know. Traditional training methods such as videos, lectures, and passive learning don’t fix it. Indeed, those outdated models may be making it worse. Shayesteh’s analogy was blunt: You don’t learn to swim by watching videos — you need to jump into the deep end.

His solution is AI-powered simulations. Not hypothetical ones, but working deposition simulations available today, with real-time AI feedback, in which associates can practice cross-examination, deal with opposing counsel objections, and build the muscle memory that used to require years of live experience.

Kate Orr, Managing Director of Practice Innovation at Orrick, picked up the thread with two observations that reframed the stakes. First, AI simulations allow associates to fail behind closed doors, a radical improvement over the old model, in which blowing it had real consequences because failure often happened directly in front of the partners. Second, the tool isn’t just for juniors. Even experienced lawyers are using simulations to test different approaches, tweak personas, and sharpen arguments. Orrick’s own Supreme Court team had a lawyer use AI to review a draft brief and identify paragraphs that could be tighter.

Todd Heffner, Partner at Smith, Gambrell & Russell, said the real question isn’t whether associates will use AI, but rather whether it gets them to lead at trial in year 10 instead of year 20. Right now, most associates are lucky to see the inside of a courtroom in their first seven years; even then, they spend most of their time back in the hotel prepping for the more experienced attorneys instead of arguing themselves. If simulations can compress that learning curve, the associate’s career doesn’t disappear; rather, it gets accelerated.

The dinosaur that adapted

During the Measuring What Matters panel, Mitchell Kaplan, Managing Director of Zarwin Baum, introduced himself with a memorable bit of self-deprecation: He’s a dinosaur — but one, he clarified, who understands how AI can revolutionize what he does.

Kaplan’s perspective threaded through both days of programming like a quiet counterweight to the anxiety. He’d seen this before — not AI specifically, but the fear of it. He watched the legal industry transition from physical libraries to digital research tools, and he watched attorneys adapt. And his message was consistent: the work changes, but the need for lawyers doesn’t disappear. Associates may be taking shortcuts, but they still need to read, still need to review, and still need to think.

They’re developing differently than his generation did, Kaplan said, but it’s the same way every generation develops differently from the one before it. And different doesn’t mean wrong.


The work changes, but the need for lawyers doesn’t disappear. Associates may be taking shortcuts, but they still need to read, still need to review, and still need to think.


It’s a perspective that found an unexpected echo in the Enterprise Alignment panel. Mark Brennan, a partner at Hogan Lovells, relayed a comment he heard at a previous AI conference: The next generation of entry-level jobs will be managers — because they’ll be managing agents and other tech tools. Brennan admitted he didn’t have all the answers on what that means for legal training, but the implication was clear. The associate role isn’t dying; instead, it’s being redefined. And the firms that figure out what that redefined role looks like, and what mix of legal training, technological fluency, critical thinking, and management skills it requires, will have a significant advantage over those firms that are still debating it.

Another panelist, Andrew Medeiros, Managing Director of Innovation at Troutman Pepper Locke, made a prediction that felt like the sharpest version of this idea. He said that at some point, new lawyers are going to be doing simulated matters as a standard part of the development process. Eventually, there’s going to be a generation that walks in as new attorneys and finds themselves litigating right away.

That’s not the death of the associate. Rather, that’s the beginning of a different kind of associate — one who arrives at the courtroom sooner, with different preparation, carrying different tools.

The billable hour, for all the prophecies, refuses to die. The associate, it turns out, has no intention of dying either — just evolving. Mitchell Kaplan called himself a dinosaur, but Legalweek was full of dinosaurs, and every one of them was adapting and, in that adaptation, thriving. The harder question is whether the firms that forged them will be brave enough to follow.


You can find more of our coverage of Legalweek events here

]]>
The professional judgment gap: Tracing AI’s impact from lecture hall to professional services /en-us/posts/corporates/ai-professional-judgment-gap/ Thu, 05 Mar 2026 12:59:12 +0000 https://blogs.thomsonreuters.com/en-us/?p=69771

Key highlights:

      • Universities face pressure over pedagogy — Academic institutions are adopting AI as a reputational marker that’s driven by market pressure rather than educational need, creating a risk for students who can work with AI but not independently of it.

      • Entry-level roles under threat — AI is being deployed most heavily to automate the grunt work of entry-level positions in which foundational professional skills are traditionally built through struggle and feedback.

      • K-shaped cognitive economy emerging — Experienced professionals with existing expertise are gaining efficiency from AI, while entry-level workers are losing access to skill-building experiences.


According to Harvard University’s Professional & Executive Development division, innovation is defined as a “process that guides businesses through developing products or services that deliver value to customers in new and novel ways.” Along this journey, professional judgment is applied numerous times to determine next steps at key stages.

Notably, the word technology is nowhere to be found in this definition — an absence that Dr. Heinsfeld, Assistant Professor of Learning Technologies at the University of Minnesota, has long found revealing. Instead, innovation is framed as creative problem-solving, contextual intelligence, and the ability to work across perspectives. Interestingly, Dr. Heinsfeld adds, none of these require constant automation. In fact, many of them are undermined by it.

However, AI adoption has the real potential to automate away the very experiences that build these capabilities, from university lecture halls to corporate offices. With notable data already pointing in this direction, the risk that current approaches to AI use in universities and companies are engineering away innovation and professional judgment skills is real, notes Dr. Tanaka, Group Leader in AI Research at Harvard and NTT Research.

Indeed, some observers view AI as the largest unregulated cognitive engineering experiment in human history. Yet, unlike medical drugs that require years of approval and testing, AI systems are reshaping how millions of students think, learn, and make decisions without a comparable approval process or a shared framework for discussing any potential “side effects,” as Dr. Heinsfeld pointed out.


Most worrisome is that AI is being deployed most heavily to automate precisely the entry-level roles where foundational professional skills are built.


So, what happens when an entire generation of future employees learns to delegate judgment before developing it? And what actions do universities and companies need to take now to avoid this reality?

Risks of universities adopting AI under pressure

For universities, AI “has become a reputational marker, and not adopting AI is framed as institutional risk, regardless of whether an educational case has been made or not,” says Dr. Heinsfeld, adding that this is being driven, in part, by market pressure rather than pedagogical need.

Already, companies can greatly influence universities as employers of new graduates; and as such, AI systems are currently being optimized for speed, agreeability, and accessibility to stimulate ongoing use. However, as Dr. Heinsfeld contends, as universities race to earn the label of “AI ready” without a careful, cautious, and detailed understanding of how AI may affect students’ cognitive processes, they risk damaging their reputations for pedagogical integrity.

In addition, the “data as truth” paradigm is a complicating factor, she says. Drawing on her research, Dr. Heinsfeld explains how data “is often framed as the idea of being a single source of truth based on the assumption that when collected and analyzed, it can reveal objective, indisputable facts about the world.” Indeed, this ubiquitous mindset across universities and corporations treats data — such as that used to train large and small language models — as objective and indisputable.

Yet this obscures critical decisions about what gets measured, whose perspectives are included, and what forms of knowledge are systematically excluded from AI systems. As Dr. Heinsfeld warns, when data becomes synonymous with truth, “knowledge is what is measurable and optimizable.” This narrows professional judgment to efficiency metrics rather than the interpretive depth, ethical reasoning, and cultural context that are essential for sound decision-making.

Judgment gap widens in workforce downstream

Under the current AI adoption approach, students could leave universities able to work with AI but not independently of it, a distinction emphasized by Dr. Heinsfeld. Like calculators, AI works as a tool only when foundational skills exist first. Without them, graduates enter the workforce with a critical judgment gap that compounds as they move from college campuses into corporate careers.


AI adoption has the real potential to automate away the very experiences that build these capabilities from university lecture halls to corporate offices.


Most worrisome is that AI is being deployed most heavily to automate precisely the entry-level roles where foundational professional skills are built, warns Dr. Tanaka. Indeed, this is exactly the type of grunt work that teaches judgment through struggle and feedback. Over time, overuse of AI will result in quality being sacrificed because critical evaluation skills have atrophied.

Looking into the future, Dr. Tanaka foresees a K-shaped economy of cognitive capacity. Experienced professionals with contextual judgment built through years of practice will gain increasing efficiency from AI. Entry-level workers, however, will lose access to the valuable experiences that build professional judgment. The gap widens between professionals who can independently accelerate their workflows using AI and those whose traditional tasks are merely displaced by it.

Intervention may be able to break the cycle

The pattern is not inevitable, as both Dr. Tanaka and Dr. Heinsfeld explain. Drawing on Dr. Heinsfeld’s emphasis on institutional agency, meaningful intervention will depend on conscious, intentional choices made at every level. Both experts share their guidance for how different organizations can manage this:

Academic institutions — Universities must first recognize that AI adoption is a decision rather than an inevitability and make educational need the North Star for decision-making around AI. In her analysis, Dr. Heinsfeld emphasizes that when vendors set defaults, they quietly redefine academic practice. Defaults shape what is made visible or invisible and what becomes normalized. In AI-driven environments, universities often lose control over how models are trained and updated, what data shapes outputs, how knowledge is filtered and ranked, and how student and faculty data circulate beyond institutional boundaries — especially if decision-making is left to vendors. As a result, the intellectual byproducts of teaching and learning increasingly become inputs into external systems that universities do not govern.

Private entities — For organizations, Dr. Tanaka calls for feedback loops and other mechanisms that promote more open discussion about AI use without stigma. In addition, companies need to proactively redesign entry-level roles to ensure these positions continue to cultivate judgment and foundational skills in an AI-driven environment. Likewise, Dr. Tanaka suggests that companies explicitly discuss cognitive trade-offs with employees, fostering an understanding of possible skill atrophy.

Employees — Similarly, individuals working for organizations bear much of the responsibility for making sure AI enhances, rather than replaces, their critical thinking. Strategic decisions about when to use AI, made while seeking to preserve cognitive capacity and professional judgment, are key.

Looking ahead

In today’s increasingly AI-driven environment, a new paradigm is needed to combat the current operating assumption that optimization from AI is the sole path to progress. And because the current trajectory sacrifices human development for efficiency, the need for universities and companies to choose a different path is urgent — while they still have the judgment capacity to do so.


You can find out more about how organizations are managing their talent and training issues here

]]>
Inside the Shift: What happens in the professional workplace when AI does too much? /en-us/posts/sustainability/inside-the-shift-ai-overuse/ Wed, 25 Feb 2026 16:21:23 +0000 https://blogs.thomsonreuters.com/en-us/?p=69610

You can read TRI’s latest “Inside the Shift” feature, The human side of AI: The growing risks of ubiquitous use of AI on talent, here


It’s no exaggeration to say that AI is everywhere in our workplaces right now. It writes our emails, summarizes our meetings, generates slides, and even helps us think through problems. On the surface, this may sound like progress — and in many ways, it is.

However, our latest Inside the Shift feature, The human side of AI: The growing risks of ubiquitous use of AI on talent by Natalie Runyon, Content Strategist for Sustainability and Human Rights Crimes for the Thomson Reuters Institute, makes a clear and timely point: When AI use becomes excessive and unchecked, it can quietly undermine the very people it’s meant to help.


One major consequence of cognitive decay is the weakening of the brain’s capacity to engage deeply, question systematically, and — somewhat ironically — resist the potential manipulation of AI.


As the article goes into in much greater detail, these harms caused by AI overuse can include a slow erosion of human connections, a loss of a professional’s sense of purpose, and a general sense of feeling overwhelmed in the workplace.

Of course, the solution isn’t to reject AI; it’s to use it better. To this end, the article makes a strong case for organizations to foster hybrid intelligence, a process by which human judgment and creativity work alongside AI capabilities.

In today’s workplace, AI can be a powerful advantage; however, that is only if organizational leaders remember that technology should enhance the human experience, not replace the parts of professional life that workers value.


To examine this and many more situations, the Thomson Reuters Institute (TRI) has launched a new feature segment, Inside the Shift, that leverages our expert analysis and supporting data to tell some of the most compelling stories in professional services today.

]]>
The 4 Plates: Why GCs need stakeholder intelligence to be effective in the AI era /en-us/posts/corporates/4-plates-delivering-effective-advice/ Thu, 19 Feb 2026 02:11:03 +0000 https://blogs.thomsonreuters.com/en-us/?p=69466

Key takeaways:

      • Become truly client-centered — Legal departments claim to be client-focused yet frequently make strategic decisions about effectiveness without systematically understanding stakeholder needs.

      • Decide where to automate — As AI transforms legal services delivery, decisions about where to automate versus where to deploy human judgment require evidence, not assumptions.

      • Build intelligence with continuous feedback — Systematic stakeholder intelligence reveals where speed matters more than depth, which services lack visibility, and where relationships can create differentiated value.


Today’s general counsels face a fundamental challenge as AI capabilities expand, that of determining where to deploy technology and where to deploy human judgment. Getting this formula right can create irreplaceable value for an organization. Yet many GCs may be making these critical decisions based on assumptions about what stakeholders need rather than evidence.

The paradox is that while corporate legal departments consistently say they want to be effective, client-focused, and responsive partners in service of the business, many are making strategic decisions about how to be that way without systematically measuring or understanding the stakeholder experience they’re trying to optimize. It’s like declaring customer satisfaction as your goal while never actually asking customers how satisfied they are. This blind spot doesn’t just undermine service quality; it undermines one of the four core accountabilities of every legal department: being Effective.

This is the third part of our series on the “Four Spinning Plates” model, which frames the GCs’ evolving responsibilities as:

      1. delivering effective advice
      2. operating efficiently
      3. protecting the business, and
      4. enabling strategic ambitions.

This article focuses on the Effective plate.


The information gap

Being Effective as a legal department means delivering high-quality, practical legal advice that is responsive to business needs, and this requires knowing what those needs are. Most legal departments rely on hallway conversations, occasional feedback during business reviews, and organic complaints or praise. While these interactions are valuable and should continue, what they lack is systematic intelligence that could be used to determine the best strategic decisions.

Ad hoc feedback is reactive, incomplete, and reflects the loudest voices rather than the broader reality. You hear from the very satisfied or the very unsatisfied, rarely from the middle majority of stakeholders whose experience shapes overall effectiveness.

As AI transforms legal delivery, this information gap becomes more costly. Without understanding which feedback touchpoints stakeholders prefer as human interactions and which they’d rather handle on their own, how can you decide which legal services to automate and where your team’s judgment and relationship-building are essential?

When legal departments systematically gather stakeholder feedback, they uncover patterns that challenge assumptions about what effectiveness means to the business.

Consider response time, for example. Many legal teams pride themselves on providing thorough, carefully crafted advice. However, stakeholder feedback often reveals that the speed of an initial response matters more than depth, at least for the first touchpoint. What lawyers see as diligence, stakeholders may experience as delay. This insight doesn’t mean the legal team should compromise quality; rather, true effectiveness comes from knowing when a quick acknowledgment is sufficient and when an issue demands thorough analysis right away.

Varied responses needed

Of course, different stakeholders have different expectations of responsiveness. For example, sales colleagues working under targets and time pressure need speed to drive momentum in contract negotiations. Understanding different stakeholder personas can help manage expectations and educate junior lawyers about the different business rhythms that the legal department must respond to.

Or, as another example, take service awareness. It’s common to discover that stakeholders simply don’t know the full extent of what the legal team can offer. Business leaders may not realize their legal team provides training, templates, or advisory services that could prevent issues before they escalate. The problem here isn’t service quality, it’s visibility — and that distinction matters enormously when deciding where to invest limited resources.


You can learn more about how the Thomson Reuters Institute’s Value Alignment toolkit allows you to assess your legal department’s strategic positioning here


More importantly, these insights directly inform AI integration strategy for corporate law departments. Routine, high-volume work in which speed matters is a prime candidate for automation and self-service tools. Complex matters in which stakeholders specifically value a lawyer’s business understanding and strategic judgment are where to protect and focus human capacity.

Perhaps the most valuable output of systematic feedback comes when it reveals where satisfaction varies across departments or stakeholder groups. A legal department might assume it delivers consistent service, only to discover that one business unit rates the department highly for responsiveness while another complains that it struggles to receive timely answers. These variations point to either inconsistent delivery or improperly communicated expectations, which are exactly the kinds of problems that process standardization, better intervention systems, or technology can address.

Without this type of intelligence, GCs risk automating services that should stay personalized, or maintaining high-touch approaches for work that stakeholders would happily handle themselves through self-service options.

The human value imperative

As AI handles more legal work, the question becomes: What can legal professionals do that technology cannot? The answer lies in the distinctly human elements of legal service such as judgment, knowledge of the business, relationship building, and strategic counsel.

The challenge for corporate law departments, however, is that without first knowing which touchpoints stakeholders value as human interactions, you can’t strategically deploy your team’s capabilities. Systematic stakeholder feedback allows evidence-based decisions on where the legal team’s relationship adds value and where speed or self-service could better serve stakeholder needs.


The question for every General Counsel then becomes: Are you making decisions on the department’s effectiveness based on systematic stakeholder intelligence, or operating with a blind spot that may be costing you more than you realize?


This then becomes critical intelligence for decision-making around resource allocation and restructuring, as well as for demonstrating the legal team’s value to the C-Suite in terms they can recognize. When a GC can articulate not just what their department does but how effectively it serves broader stakeholder needs, they are speaking the same language as the business they support.

This also allows a GC to shift from defending their department headcount based on workload volume to justifying resources based on stakeholder-defined value — and that’s a fundamentally stronger position.

Understanding the Spinning Plates

The Four Spinning Plates model — Effective, Efficient, Protect, and Enable — represents the complete picture of a legal department’s role and value within the organization. Yet research consistently shows a perception gap. For example, C-Suite executives over-emphasize the Effective plate while under-recognizing Protection and Enablement contributions.

This gap exists partly because legal departments lack metrics that capture effectiveness in business terms. They can report cost savings and matter volumes but struggle to demonstrate how well they’re actually serving stakeholder needs. Stakeholder feedback mechanisms bridge this gap by making effectiveness measurable and visible through the lens of those the department serves.

Indeed, it’s not about running surveys for the sake of feedback. It’s about grounding strategic decisions about AI integration, service design, and where to focus human talent in evidence, not assumptions. For those GCs navigating AI transformation specifically, this isn’t optional. Rather, it’s the difference between guessing where to automate and knowing where automation serves stakeholders.

Leading legal departments are already using stakeholder intelligence as their compass for AI transformation, leveraging that intelligence to best determine where to standardize, where to automate, and where human judgment remains irreplaceable.

The question for every General Counsel then becomes: Are you making decisions on the department’s effectiveness based on systematic stakeholder intelligence, or operating with a blind spot that may be costing you more than you realize?


You can learn more about the challenges that corporate GCs face every day

]]>
Chief Marketing & Business Development Officer Forum 2026: Law firms need to play the long game on talent /en-us/posts/legal/cmbdo-forum-2026-long-game-on-talent/ Fri, 06 Feb 2026 15:56:02 +0000 https://blogs.thomsonreuters.com/en-us/?p=69319 Key insights:
      • EI is emerging as a critical strategic capability — Stronger emotional intelligence can enable law firm leaders to build trust, navigate complex relationships, and strengthen both internal collaboration and client engagement.

      • Culture is now the defining factor in retaining top talent — As professionals increasingly expect transparency, purpose, and human‑centered leadership rather than traditional top‑down structures, law firms need to adapt.

      • Successful lateral integration requires coordination — Firms need to provide consistent messaging and fulfill their commitments to ensure that new hires feel aligned, supported, and positioned to contribute meaningfully.


AMELIA ISLAND, Fla. — If you’ve spent any amount of time inside a law firm, you already know that the people stuff is often the hardest part of the job. Sure, the work is complex, the clients are demanding, and the deadlines are relentless — but navigating human dynamics? That’s where things get really interesting.

During the Thomson Reuters Institute’s recent 33rd Annual Chief Marketing & Business Development Officer Forum (formerly the Marketing Partner Forum), three panels zoomed in on law firm talent: how to attract it, how to integrate it, and how to keep it. And while the themes ranged from emotional intelligence to lateral hiring to long‑term culture building, one takeaway stood out loud and clear: Those law firms that want to succeed have to start thinking about talent as a strategic engine — not an administrative task.

EI is not just a soft skill, it’s a strategic power skill

Emotional intelligence (EI) is having something of a renaissance inside law firms, and frankly, it’s overdue. As several panelists emphasized, EI isn’t about being warm and fuzzy — it’s about the ability to perceive, understand, and manage emotions and relationships, especially in a high‑pressure, fact‑driven environment like law.

Stronger EI, especially among firm leadership, will enhance everyone’s ability to perceive, understand, and manage their own emotions and relationships. Emotionally intelligent professionals are better able to motivate themselves, read social cues, and build stronger relationships. And because it requires being aware of emotions in oneself and others, it can positively impact internal collaboration and external client relationships.


You can find out more about next year’s Chief Marketing & Business Development Officer Forum 2027 here


For example, one panelist explained, if your go‑to opener with clients is still, “How’s it going?”, don’t expect anything more insightful than a polite shrug. Lawyers should use intentional conversation starters and even simple prompts, such as sharing the “top 10 things clients say we can do better,” the panelist explained.

Of course, EI isn’t always easy for lawyers because they are trained to trust facts, not feelings. That means firm leaders often need to dig deeper, especially when someone seems resistant. It’s crucial for law firm leaders to remember that EI isn’t emotional fluff. It’s how firms build trust, lead through uncertainty, and strengthen both internal teams and client relationships. It’s a differentiator, panelists said, and one that law firms can no longer treat as optional.

In retention, culture is the whole game

Indeed, so much around talent hinges on workplace culture, and as another panel discussed, it has become the linchpin for successfully hiring and retaining top talent. In today’s environment, even the best firms may struggle to hire and keep top people in a market where expectations, especially after the pandemic, have changed dramatically.

One of several panel discussions on law firm talent issues at the recent Thomson Reuters Institute’s 33rd Annual Chief Marketing & Business Development Officer Forum.

“It’s just changed so much since the pandemic where people just did their jobs and were expected to do so,” said one panelist. “Now, they want to feel valued and want to feel like they are making a difference.”

Several panelists agreed, pointing out that top talent is harder to hire than ever, largely because client demands have increased and the talent pool hasn’t expanded at the same pace. However, culture is where firms either win or lose the long game, they concurred.

Today’s employees want to feel valued, engaged, and connected to meaningful work — not just completing tasks in the background. They want transparency, authenticity, and involvement in strategy, panelists said. “People need to want to be part of your team, they need to feel prized once they’re there,” said another panelist. “They want leaders who are human first, and executives second.”

While this cultural tightrope may seem daunting, when a firm gets it right, recruiting becomes significantly easier. People want to work in environments in which they can be themselves, questions are encouraged, and their participation actually shapes outcomes, another panelist explained. “Keeping great people isn’t about perks or ping‑pong,” they said. “It’s about trust, clarity, and connection.”

The strategy behind making lateral integration work

Another aspect of the talent discussion, lateral hiring, has become a cornerstone of modern law firm growth, according to another panel. But to be honest, several panelists argued, even firms that recruit great laterals often fail to integrate them properly.

This can be a critical failure, they added, because lateral integration isn’t a task — it’s a firmwide commitment. When done well, it accelerates growth; but when done poorly, it creates churn, skepticism, and reputational risk.

Panelists stressed that laterals need clear messaging from everyone in the firm about how they fit into the broader business strategy. That means offering them consistent narratives and articulated opportunities, as well as stories of client wins, proof points about firm strengths, and external endorsements — all of which can help build credibility, they said.

Further, laterals need structured opportunities to showcase their expertise — such as CLEs, webinars, client events, and internal spotlights. “These aren’t just marketing activations,” one panelist noted. “They are culture‑building moments that signal, ‘You’re part of this team, and we want people to know what you bring.’”

Onboarding laterals, especially lateral teams, can often be a fraught proposition, and ideally one person should coordinate the entire process on the firm’s behalf. Otherwise, the new partner ends up drowning in inconsistent communication and duplicate requests. “Nerves are very high during this time — worries about whether the lateral made the right choice, whether support staff is being accommodated, and, most critically, whether clients will come over too — and all that has to be managed,” a panelist said.

However, the most important thing firm leadership can do when it comes to laterals is to simply deliver on their promises. Few things sour a lateral’s experience faster than broken commitments, another panelist offered.

Overall, the thread running through all of these talent panels was that law firms need to evolve not just how they manage work, but how they manage people. Whether leveraging EI to power leadership and motivate teams, unifying communication to drive successful lateral integration, or fostering a culture in which top talent wants to stick around, firms would be wise to invest in human‑centered strategies.

Indeed, the potential payoff is massive: More engaged teams, stronger client relationships, and a more resilient future. And for those firms that don’t make this shift? Well, talent always has other options.


You can read the full Executive Summary of the Thomson Reuters Institute’s 33rd Annual Chief Marketing & Business Development Officer Forum here

Hybrid intelligence: Ramping up human-focused power skills in an AI-enabled workplace /en-us/posts/sustainability/hybrid-intelligence/ Wed, 21 Jan 2026 19:03:17 +0000 https://blogs.thomsonreuters.com/en-us/?p=69097

Key highlights:

      • Human connection is now a competitive capability — Treat relationships as core infrastructure instead of cultural fluff by designing work to keep real collaboration, accountability, and regular face-to-face interaction at the center with AI in a supporting role.

      • Protect your judgment and meaning as “human-owned” — Start with independent frameworks and reasoning, then use AI to refine and stress-test; and schedule recurring “no-AI” blocks to keep analytical muscle and professional agency strong.

      • The winning model is hybrid intelligence — The standout professionals in 2026 will be those who are fluent in both human dynamics and AI-assisted workflows.


Professional services work fundamentally relies on judgment, trust, and relationships. Clients engage firms for confidence and strategic guidance, while a good reputation in this sector develops through the consistent delivery of high-quality counsel. While AI can enhance these capabilities, these technologies may also erode professional value if permitted to displace the distinctly human elements that differentiate exceptional service.

The imperative for 2026 is to maintain full professional capability by embracing human strengths while leveraging technological tools. Consistent application of the following practices will protect and develop the competencies that AI cannot replicate.

Build your human connections muscle

In the near future, professionals may spend more time interacting with AI systems than with colleagues. Over time, AI creates opportunities to disengage from human interaction: AI systems remain consistently agreeable, are perpetually available, and never introduce tension into professional discourse.

For time-constrained professionals, this predictability may appear advantageous; however, this convenience carries a substantial cost. In professional services, relationships constitute essential infrastructure rather than supplementary benefits. When professional interaction shifts from human to machine interface, social acuity diminishes as professionals lose exposure to subtle human dynamics. Critical developmental experiences — including the ability to manage discomfort, resolve misunderstandings, and navigate the productive friction that builds capacity for maintaining and repairing strained relationships — become scarcer.

To preserve human connection capacity with intention, implement these measures:

      • Prioritize work that requires genuine collaboration and shared accountability and keep AI as a supporting resource.
      • Establish regular face-to-face interaction, both virtual and in-person, with colleagues to invest in relationship-building conversations that extend beyond project deliverables and timeline discussions.
      • Actively engage in professionally challenging interactions, including those involving constructive feedback delivery and negotiation. These experiences maintain trust and prevent the gradual atrophy of human collaboration skills.

Protect your brain and your meaning at work

AI technologies offer substantial efficiency gains through automated drafting, summarization, and information analysis. However, excessive reliance on these capabilities may diminish the cognitive repetitions that maintain professional acuity. In professional services, intellectual capacity, which includes attention to detail and analytical reasoning, constitutes the primary asset. This capacity requires the ability to discern significance, interrogate underlying assumptions, and articulate complex tradeoffs with precision.

Delegating these cognitive tasks to AI systems daily may yield short-term efficiency and lower costs, but it can also make work more ambiguous and demand less nuanced judgment. As a result, professional instincts may atrophy.

An additional consequence of AI overreliance involves the erosion of professional meaning and engagement. When AI systems generate the majority of intellectual output, professionals may risk becoming approvers rather than creators. Work devolves into review and authorization — a repetitive pattern that can lessen one’s connection to making a substantive professional contribution. Indeed, the role begins to resemble a production line of incremental validations rather than meaningful professional practice.

To avoid this, you should implement the following practices to preserve both intellectual rigor and a meaningful sense of agency over critical professional activities:

      • Integrate deliberate cognitive exercises into weekly routines — Initiate substantive work with independent analysis — by establishing frameworks, identifying priorities, and constructing logic — before employing AI to refine structure, enhance clarity, and stress-test reasoning. Subsequently, critically evaluate AI-generated output by identifying omissions, examining underlying assumptions, and assessing potential errors.
      • Establish dedicated periods for unassisted professional work — Schedule regular intervals for research, conceptual development, and drafting without AI support to ensure sustained development of analytical capacity and professional judgment.
      • Anchor work to meaning and outcomes — Identify work of particular professional significance and maintain direct engagement with these tasks, again without AI assistance. Regularly reflect on the tangible impact of contributions, including the delivery of client value and the support of colleagues, in order to better sustain meaningful connection to professional purpose.

Hybrid intelligence is the future

The most effective professionals in 2026 will be those who focus on integrating human literacy with algorithmic literacy, a competency framework known as hybrid intelligence.

Human literacy remains the fundamental differentiator in professional services, encompassing the ability to interpret interpersonal dynamics, establish trust amid complexity, deliver constructive feedback with appropriate sensitivity, and maintain both self-awareness and relational intelligence.

Algorithmic literacy involves understanding the specific capabilities and limitations of AI tools, including honing a proficiency for output verification, tool evaluation, and sustained awareness of bias and risk considerations.

The combination of these two literacies within hybrid intelligence can give professionals a potent way of fighting the accelerating cognitive deterioration and agency decay that some may experience with AI overuse.

Today, organizational mandates for AI adoption are becoming increasingly prevalent and will approach universality over the next few years. While firms compete through technological capability, competitive differentiation will ultimately derive from the human excellence of their professionals — a dynamic that will similarly shape individual career trajectories.


You can find out more about how a focus on power skills can help professionals in the workplace here

Impact of AI on critical thinking: Challenges and opportunities for lawyers /en-us/posts/sustainability/ai-impact-critical-thinking/ Mon, 29 Dec 2025 14:04:00 +0000 https://blogs.thomsonreuters.com/en-us/?p=68783

Key insights:

      • Cognitive offloading is a significant risk — The correlation between increased AI usage and decreased critical thinking, known as cognitive offloading, poses a threat to effective legal practice, especially with the rise of autonomous agentic AI.

      • Agentic AI risks and opportunities — The next generation of agentic AI poses significant challenges to lawyers’ critical thinking skills, but it also offers opportunities for lawyers to enhance their analytical rigor and human insight.

      • Agentic AI can enhance critical thinking when properly leveraged — When designed by lawyers, for lawyers, and used to augment human judgment in legal workflow tasks — such as discovery, contract analysis, and drafting — agentic AI can improve efficiency, deepen analysis, and allow legal professionals to focus on higher-value critical thinking tasks.


The legal profession is at a critical juncture as AI becomes increasingly sophisticated. Recent research has uncovered a troubling correlation between the use of AI and the decline in critical thinking abilities among legal professionals. This phenomenon, known as cognitive offloading, threatens the very foundation of effective legal practice.

Studies have shown a clear pattern linking AI use, cognitive offloading, and critical thinking. According to these studies, there is a notable correlation between increased AI usage and diminished critical thinking performance among individuals. Moreover, as people offload more mental work to AI tools, their critical thinking scores tend to be lower. While correlation does not necessarily imply causation, this pattern is strong enough to warrant proactive measures to safeguard critical thinking skills.

The findings from this research have implications for lawyers. First, it is essential to design workflows that ensure attorneys retain ownership of problem framing, authority weighting, and strategic judgment. Human checkpoints should be inserted at key decisions, and transparent evidence trails should be maintained. For junior lawyers, it is crucial to preserve “desirable difficulty” reps — the baseline skill-building experiences — before they consult AI. By pairing these guardrails with outcome tracking, law firms can harness AI’s speed and scale while minimizing the risks associated with cognitive offloading.

Risks increase with agentic AI

The next wave of AI-powered legal tech involves agentic AI, which operates as autonomous agents. These agents can plan and execute complex workflows independently, make real-time decisions, and adapt strategies without constant human input. This autonomy intensifies cognitive-offloading risks in three ways: workflow automation can move beyond human oversight, strategic thinking itself can be offloaded, and the black box problem is magnified. (Basically, these are situations in which a system’s internal workings are hidden, and users may know what goes in and what comes out, but not how the system arrives at its decisions.)


The autonomous nature of agentic AI creates unprecedented professional responsibility challenges, including supervision standards, competence requirements, and explaining AI-developed strategies to clients. The legal profession faces significant challenges that could accelerate skills atrophy, such as new attorneys missing opportunities to develop foundational analytical skills, lawyers becoming dependent on AI, and AI handling strategic planning.

To mitigate the risks associated with cognitive offloading, legal professionals can leverage agentic AI tools designed to enhance critical thinking. For instance, AI-driven legal research and analysis platforms can make every step of the legal workflow more transparent, testable, and adversarially robust. These tools use custom-trained, agentic AI to produce transparent, step-by-step research notes and comprehensive reports that present arguments on both sides.

Illuminating examples of critical thinking skills

Agentic AI is transforming legal practice by enhancing critical thinking skills through various applications, and these innovative uses of AI not only improve efficiency but also augment human judgment. This in turn enables lawyers to focus on higher-value tasks that require critical thinking, creativity, and nuanced understanding. Several examples illustrate how agentic AI can enhance critical thinking in legal practice, such as:

      • Discovery — Autonomous analysis engines have uncovered patterns that traditional keyword searches missed. In one commercial litigation case, an agent found subtle shifts in executive language precisely around the period of alleged misconduct. The agent was able to explain why those patterns mattered and then tied each inference to source documents.
      • Contract analysis — In M&A diligence, agentic AI examined hundreds of legacy agreements and flagged indemnification variants that created potential exposure issues. Operating at about 94% accuracy, the AI’s transparent reasoning supported a targeted remediation strategy that averted post-closing liability.
      • Drafting workflows — Expert-designed, multi-step workflows assemble relevant know-how, generate first drafts to specification, and require counterarguments and verification before stylistic polish. This approach has been shown to reduce review time by roughly 63% and time spent on legal know-how tasks by about 10%.

As we are learning, agentic AI strengthens core litigation work by preserving human judgment while expanding pattern detection, accelerating theory testing, and deepening client advocacy. By handling comprehensive case law analysis and factual pattern identification, agentic AI frees litigators to develop creative legal theories, anticipate opposing strategies, and craft nuanced arguments.

Thus, to better elevate critical thinking in legal work, it is essential to use AI that is designed by lawyers, for lawyers. Domain-specific AI legal assistants provide nuanced insights that inform sharper, more strategic decisions. And expert-guided analytical workflows support comprehensive analysis without encroaching on professional judgment, ensuring that attorneys can interrogate sources confidently and build arguments on solid ground.

By embracing agentic AI as a collaborative counterpart, legal professionals can heighten analytical rigor and human insight — the very qualities that make legal practice both powerful and purposeful. As opportunities expand, so does the potential for creating more positive impact for clients, engaging in complex problem-solving, and advancing access to justice for more people.


You can find out more about the impact of AI and other advanced technologies on the legal profession here
