Technology training Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/technology-training/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Rethinking lawyer development in future AI-enabled law firms — April 16, 2026

Key highlights:

      • Three emerging business models, one unresolved tension — AI is compressing time, which directly threatens the logic of billing by the hour, but the smartest law firms are not waiting for a winner to emerge before building their strategic foundation.

      • Technology strategy and talent strategy are the same conversation — The talent model must be designed in tandem with the business model, even amid uncertainty, because many of the structural conditions of legal work are changing all at once.

      • The next great lawyer will lead with human skills, not tool proficiency — Forward-thinking firms are doubling down on their lawyers’ curiosity, judgment, client skills, and relationship-building because these are capabilities that AI cannot replicate.


Every law firm is asking how AI will change the way legal work gets done; but Norah Olson Bluvshtein, Chief Legal Operations Officer at Fredrikson & Byron, is asking a more consequential question: How will AI change the way legal work gets paid for?

Planning around 3 law firm business models in the AI era

AI is making law firms more efficient, of course, but efficiency alone does not answer the harder question of how to capture value and how AI-enabled legal services get priced. Olson Bluvshtein sees three paths emerging in law firms:

      1. Billable-hour (still) — The first is the path of least resistance. Firms stay anchored to the billable hour, raise rates, and use AI to move faster and handle more volume, with the idea that more volume will make up for the revenue lost to faster work. With this model, however, the client-firm incentive misalignment remains intact, and the fundamental tension between billing for time and AI compressing that time never gets resolved.
      2. Value-based pricing — The fixed fee pathway also is likely to gain further traction, as it’s one that many AI-native law firms are pursuing. In this model, value-based pricing creates a natural meeting point between firm and client interests because when incentives align, everyone wins, Olson Bluvshtein explains.
      3. Frontier models rule — The third scenario is more speculative but worth watching. As foundational models improve, the need for expensive legal-specific tools may diminish. “I could see a scenario in the future in which we don’t necessarily need all the legal-specific tools that are out there,” she says. Even though technology costs historically have come down, cheaper tools do not make the business model question disappear, Olson Bluvshtein notes.

Candidly, Olson Bluvshtein admits that “the truth is probably somewhere in the middle,” and the firms best positioned for any of these futures are the ones building the strategic and operational foundation now rather than waiting for the answer to become obvious.

Indeed, the most thoughtfully designed business model will fall short without the right talent foundation to support it. “Technology strategy and people strategy are not separate conversations,” Olson Bluvshtein says, adding that they are key parts of the same strategy.

Legal innovation consultant Boyko reinforces this point, noting that many aspects of the structural foundation on which the legal profession has operated are changing all at once. This means that addressing technology strategy separately from the human side, slice by slice, does not make sense.

Boyko says she encourages law firms to take a step back and approach the problem by first identifying what the firm will need in the future and then planning the talent and technology strategies for that reality.

Aligning the talent model to the future business model

Not surprisingly, a key challenge for law firms right now is that the future is uncertain, and it is difficult to design a talent model for an uncertain future and an unknown business model. At the same time, some facts are known; what remains unknown is when these certainties will arrive.

More specifically, what is known is that there is mounting pressure on the three possible law firm business models: AI is automating tasks that once fell to junior associates, clients do not want to pay for tasks completed by junior associates, and clients are bringing more legal work in-house, often handing over a near-final deliverable to outside counsel only for final review.

Norah Olson Bluvshtein of Fredrikson & Byron

To explore the right talent model, one experiment that Boyko suggests is to expand the junior associate experience to include rotations through back-office functions, such as knowledge management, professional development, and technology functions.

At law firm Fredrikson & Byron, Olson Bluvshtein says its associate development program is evolving to prepare for the uncertain future based on three current tactics:

      • Building AI fluency — This is a near-term imperative that will soon become table stakes. The goal is to move past basic adoption into something more sophisticated and durable. To enable this, the litigation and M&A practices at Fredrikson are actively working with a variety of tools to test prompts that they can then share more broadly with other teams, while also identifying how AI policy guidance will evolve.
      • Accelerating the development of legal judgment — Shortening the learning curve for developing legal judgment, which includes the ability to supervise and efficiently validate AI-produced work, is the second essential part of the firm’s talent development framework. Olson Bluvshtein is candid about where things stand. “It has not fully happened yet,” she says. “But building the training infrastructure to operationalize this is a stated goal for the year ahead, including formalized curriculum around effectively and efficiently supervising AI output.”
      • Being hyper-focused on the development and recruiting of human skills — The firm is doubling down on the human skills — including client development, negotiation, relationship-building, and sound judgment — that technology cannot replicate, because these are the capabilities that will define the next generation of great lawyers, regardless of which law firm business model ultimately prevails.

This same philosophy is shaping how Fredrikson recruits. Rather than screening candidates for a checklist of AI tools, the firm is prioritizing curiosity, openness, and the ability to demonstrate human skills. Indeed, the firm is looking for lawyers “who are really good at those human skills” and who bring the kind of judgment and adaptability that compounds over time, explains Olson Bluvshtein.

Boyko underscores a similar approach to skills. “Right now, the skills needed to be a good lawyer are no longer those rote skills that AI can automate,” she explains. “Instead, they are the people skills, the operational skills, and the client skills.”

Of course, moving from broad experimentation to disciplined, firm-wide maturity takes time, and the gap between early movers and late adopters is already widening. The firms that will define the next era of legal services already are asking how AI changes the way they deliver value and what skills their lawyers will most need — not just looking for the next tool to buy.


You can learn more about the challenges facing legal talent here

Scaling Justice: AI is scaling faster than justice, revealing a dangerous governance gap — April 13, 2026

Key takeaways:

      • AI frameworks need to keep up with implementation — While AI governance frameworks are being developed and enacted globally, their effectiveness depends on enforceable mechanisms within domestic justice systems.

      • Access to justice is essential for trustworthy AI regulation — Rights and protections are only meaningful if individuals can understand, challenge, and seek remedies for AI-driven decisions. Without operational access, governance frameworks risk remaining theoretical.

      • People-centered justice and human rights must anchor AI governance — Embedding human rights standards and ensuring equal access to justice in AI regulation strengthens public trust, accountability, and the credibility of both public institutions and private companies.


AI governance is accelerating across global, national, and local levels. As public investment in AI infrastructure expands, new oversight bodies are emerging to assess safety, risk, and accountability. The global policy conversation has shifted from principles to the implementation of meaningful guardrails and AI governance frameworks, which legislators now are drafting and enacting.

These developments reflect growing recognition that AI systems demand structured oversight and a shift from voluntary safeguards and standards to institutionalized governance. One critical dimension remains underdeveloped, however: how do these frameworks function in practice? Are they enforceable? Do they provide accountability? Do they ensure equal access?

AI governance will not succeed on the strength of international declarations or regulatory design alone; rather, domestic justice systems will determine whether it works. At this intersection, the connection between AI governance and access to justice becomes real.

In early February, leaders across government, the legal sector, international organizations, industry, and civil society convened for an expert discussion. The following reflections attempt to build on that dialogue and its urgency.

From principles to enforcement

Over the past decade, AI governance has evolved from hypothetical ethical guidelines to voluntary commitments, binding regulatory frameworks, and risk-based approaches. Even with these advancements, however, many attempts to provide structure and governance have been quickly outpaced by the technology and remain insufficient without enforcement mechanisms. As Anoush Rima Tatevossian of The Future Society observed: “The judicial community should have a role to play not only in shaping policies, but in how they are implemented.”

Frameworks establish expectations, while courts and dispute resolution mechanisms interpret rules, test rights, evaluate harm, assign responsibility, and determine remedies. If individuals are not empowered to safeguard their rights and cannot access these mechanisms, governance frameworks remain theoretical or are casually ignored.

This challenge reflects a broader structural constraint. Even without AI, legal systems struggle to meet demand. In the United States alone, 92% of people do not receive the help they need in accessing their rights in the justice system. Introducing AI into this environment without strengthening access can risk widening, rather than narrowing, the justice gap.


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.


Justice systems serve as the operational core of AI governance. By inserting the rule of law into unregulated areas, they provide the infrastructure that enables accountability by interpreting regulatory provisions in specific cases, assessing whether AI-related harms violate legal standards, allocating responsibility across public and private actors, and providing accessible pathways for redress.

These frameworks also generate critical feedback. Disputes involving AI systems expose gaps in transparency, fairness, and accountability. Legal professionals see where governance frameworks first break down in real-world conditions, often long before policymakers do. As a result, these frameworks function as an early signal of policy effectiveness and rights protections.

Importantly, AI governance does not require entirely new legal foundations. Human rights frameworks already provide standards for legality, non-discrimination, due process, and access to remedy, and these standards apply directly to AI-enabled decision-making. “AI can assist judges but must never replace human judgment, accountability, or due process,” said Kate Fox Principi, Lead on the Administration of Justice at the United Nations (UN) Office of the High Commissioner for Human Rights (OHCHR), during the February panel.

Clearly, rights are only meaningful when individuals can exercise them — this constraint is not conceptual, it’s operational. Systems must be understandable, affordable, and responsive, and institutions should be capable of evaluating complex, technology-enabled disputes.

Trust, markets & accountability

Governance frameworks that do not account for these dynamics risk entrenching inequities rather than mitigating them. An individual’s ability to understand, challenge, and seek a remedy for automated decisions determines whether governance is credible. A people-centered justice approach asks whether individuals can meaningfully engage with the system, not just whether rules exist. For example, women face documented barriers to accessing justice in virtually every jurisdiction, and AI systems trained on biased data can replicate or amplify existing disparities in employment, financial services, healthcare, and criminal justice.

“Institutional agreement rings hollow when billions of people experience governance as remote, technocratic, and unresponsive to their actual lives,” said Alfredo Pizarro of the Permanent Mission of Costa Rica to the UN. “People-centered justice becomes essential.”

AI systems already shape outcomes across employment, financial services, housing, and justice. Entrepreneurs, law schools, courts, and legal services organizations are already building AI-enabled tools that help people navigate legal processes and assert their rights more effectively. Governance design will determine whether these tools help spread access to justice or introduce new barriers.

Private companies play a central role in developing and deploying AI systems. Their products shape economic and social outcomes at scale. For them, trust is not abstract; it is a success metric. “Innovation depends on trust,” explained Iain Levine, formerly of Meta’s Human Rights Policy Team. “Without trust, products will not be adopted.” And trust, in turn, depends on enforceability and equal access to remedy.

AI governance will succeed or fail based on access

As Pizarro also noted, justice provides “normative continuity across technological rupture.” Indeed, these principles already exist within international human rights law and people-centered justice; although they precede the advent of autonomous systems, they provide standards for evaluating discrimination, surveillance, and procedural fairness, and remain durable as new challenges to upholding justice and the rule of law emerge.

People-centered justice was not designed for legal systems addressing AI-related harms, but its outcome-driven orientation remains durable as new justice problems emerge.

The current stage presents an opportunity to align AI governance with access to justice from the outset. Beyond well-drafted rules, we need systems that people can use. And that means that any effective governance requires coordination between policymakers, legal professionals, and the public.


You can find other installments of our Scaling Justice blog series here

Agentic AI following GenAI’s growth trajectory in legal, but with unique oversight challenges, new report shows — April 9, 2026

Key takeaways:

      • Agentic AI poised for adoption uptick — Agentic AI is following GenAI’s rapid adoption in the legal industry, with less than 20% of firms currently implementing agentic systems but half planning or considering adoption in the near future, according to a new report.

      • Adoption depends on human oversight answers — Legal professionals are generally optimistic about agentic AI’s potential, but successful adoption depends on explicit guidance about human oversight and the lawyer’s role in maintaining ethical standards.

      • Time to retool AI education? — Agentic AI’s increased autonomy introduces new oversight and ethical challenges for law firms, making targeted education and clear guidance essential to understanding the differences from GenAI.


Over the past several years, law firms and corporate legal departments have turned towards generative AI en masse. At the beginning of 2024, just 14% of all law firms and legal departments featured an enterprise-wide GenAI tool. Just two years later, that number had already risen to 43% of all firms and departments, according to the 2026 AI in Professional Services Report, from the Thomson Reuters Institute (TRI). For large law firms or legal departments, those percentages — not surprisingly — are beginning to approach 100%.

With GenAI adoption this widespread, legal industry leaders are now turning their attention to two primary initiatives. One, of course, is how to get the most out of the AI tools they already have — a task that is proving a bit elusive. Currently, less than 20% of lawyers say their organizations measure AI’s return on investment, and most corporate lawyers say they have no idea how their outside law firms are approaching AI. Thus, instituting not just AI tools but an actual AI strategy is the second top priority for law firms and corporate legal departments in 2026 and beyond.

However, even as the legal industry reaches a tipping point in adopting GenAI tools, technology innovation continues unabated. Agentic AI has emerged as the next wave of innovation that could change how lawyers work on a daily basis, offering a way to autonomously complete multi-step tasks. For example, agentic AI systems are already being built for the legal industry that independently research a regulation or law, draft a document based on the findings, identify pitfalls, and revise the document, with stops for human guidance instituted only as desired.
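To make that pattern concrete, here is a minimal sketch of such a pipeline in Python. Every function name below is a hypothetical stub standing in for a model call — not a reference to any real product’s API — and the checkpoint placement is purely illustrative:

    # Minimal sketch of an agentic legal-drafting loop with human checkpoints.
    # All step functions are hypothetical stubs standing in for model calls.

    def research_regulation(matter: str) -> str:
        return f"[research notes on the law governing: {matter}]"

    def draft_document(matter: str, findings: str) -> str:
        return f"[draft for {matter}, based on {findings}]"

    def identify_pitfalls(draft: str) -> str:
        return f"[weak points found in {draft}]"

    def revise_document(draft: str, issues: str) -> str:
        return f"[{draft}, revised to address {issues}]"

    # Human guidance occurs only at pre-determined stops, configured up front.
    CHECKPOINTS = {"after_research", "after_revision"}

    def human_review(stage: str, artifact: str) -> str:
        """Pause the agent so a lawyer can approve or correct the work product."""
        print(f"--- {stage} ---\n{artifact}")
        edits = input("Press Enter to approve, or type corrections: ")
        return edits or artifact

    def run_matter(matter: str) -> str:
        findings = research_regulation(matter)                     # autonomous step
        if "after_research" in CHECKPOINTS:
            findings = human_review("Research findings", findings)
        draft = draft_document(matter, findings)                   # autonomous step
        draft = revise_document(draft, identify_pitfalls(draft))   # autonomous steps
        if "after_revision" in CHECKPOINTS:
            draft = human_review("Revised draft", draft)           # lawyer sign-off
        return draft

The point of the sketch is the CHECKPOINTS set: widening or narrowing it is the human-oversight dial that, as discussed below, firms will need to learn to tune.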

According to the AI in Professional Services Report, the legal industry is already making headway towards implementing agentic AI systems. For agentic AI to truly take hold in legal, however, lawyers still require more education around not only how it differs from the GenAI systems they already have in place, but also when and where human intervention needs to occur within an agentic system.

The early stages of agentic AI

Examining current agentic AI adoption for the legal industry almost takes one back in time — two years, to be exact. Following the public release of GenAI in late-2022, many legal industry organizations spent 2023 evaluating and experimenting with AI systems, usually with a small working group of interested guinea pigs. As a result, only 14% of survey respondents said their law firms or corporate legal departments were engaged in organization-wide GenAI rollouts at the start of 2024. However, more than half of respondents said their organizations expected to be rolling out large-scale GenAI systems over the next 1 to 3 years. The intervening two years since then have proved that prediction to be largely true.

Agentic AI usage in the first half of 2026 looks largely similar to GenAI in 2024. The legal industry started to experiment with agentic AI at the beginning of 2025, with an eye towards actual implementation in 2026 and beyond (particularly as legal software providers began to integrate agentic systems into their own products). As such, less than 20% of recent survey respondents say their organization is engaged in widespread agentic AI adoption, while about half say their organization is either planning to use or considering whether to use agentic AI in the near future.


By and large, lawyers feel positive about the agentic AI movement. When asked about their sentiment towards agentic AI, 51% of legal industry respondents said they felt excited or hopeful, while just 19% said they felt concerned or fearful. Further, about half (47%) said they actively believe agentic AI should be used for legal work, while 22% felt it should not, with the remainder saying they were unsure. These figures largely track with the sentiments expressed about GenAI in 2024, which have only grown over time from about 50% positive two years ago to two-thirds of all legal professionals feeling positive currently.

This all lends further credence to a rise in agentic AI usage similar to what law firms and corporate legal departments experienced with GenAI over the course of 2024 and 2025. Indeed, when asked when they expect agentic AI to be a central part of their workflow, few have baked agentic systems into their daily work currently, but a majority of legal industry respondents expect it to be central within the next 3 to 5 years.


The unique barriers of agentic AI adoption

Agentic AI does differ from GenAI in one crucial area that may limit its growth potential within the legal industry, however — autonomy. By and large, GenAI systems operate on a back-and-forth basis: Users provide the tool a prompt, receive its output, and then iterate back-and-forth from there. Agentic AI is intended to be more automated by design, only requiring human input at pre-determined points in the process. And that makes some lawyers understandably nervous.

When asked why they might feel hesitant about using agentic AI for legal tasks, the most common answer was a general fear of the unknown, but the second most common answer dealt with the need for careful monitoring and oversight. In fact, some respondents said they were excited about GenAI, but more cautious about agentic AI’s potential.

“Agentic AI, while exciting, to me removes oversight a step too far,” said one such lawyer from a US law firm. “I like the idea of prompting and reviewing a result. It is something else to have a machine have so much autonomy in the actual doing of a thing and potentially acting on my behalf without that very concrete review.”


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.


An assistant GC at a US company also pointed to potential privacy and security concerns, adding: “The fact that agentic AI operates in a much more autonomous way, with a lack of control from the user, means there are many unknowns that are hidden beneath the process.”

For law firm and corporate legal department leaders looking to potentially implement agentic AI systems into their practice, this means re-thinking what AI education and training will mean moving forward. Beyond that, however, legal AI educators also will need to make sure to pinpoint and perhaps over-explain those specific instances in which human oversight needs to occur in agentic systems. More autonomous does not mean fully autonomous, and particularly for lawyers with ethical duties to their work product, lawyer oversight will in fact be a necessary part of any agentic system.

For law firm or legal department leaders, that means that finding the right balance between efficient workflows and human intervention will be key to agentic AI adoption. And those organizations that can best communicate human-in-the-loop expectations to their professionals up-front will be rewarded with broader and more reliable adoption.

Lawyers, after all, clearly feel positive about the agentic AI future. They just need the lawyer’s role in this new paradigm spelled out explicitly.

“Agentic AI is powerful, but its moral compass must come from humans,” one UK law firm barrister noted aptly. “Lawyers are trained to safeguard fairness, rights, and the rule of law — principles that should guide how AI is designed, governed, and deployed. Hope lies in our ability to shape AI through these values for fairer values for society as a whole.”


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here

The AI Law Professor: When AI quietly hijacks legal judgment — April 8, 2026

Key takeaways:

      • Anchoring distorts judgment before you begin — Research shows a first draft shapes subsequent decisions; and an AI draft is the most seductive anchor imaginable, because it looks exactly like something a lawyer would write.

      • The First Draft Trap inverts legal training — The Socratic method builds the habit of holding multiple possibilities in tension before committing; but an AI first draft collapses that space before the real thinking begins.

      • The fix is to ask for the map, not the draft — Requesting multiple strategic framings before writing keeps judgment where it belongs and uses AI to expand possibilities rather than foreclose them.


Welcome back to The AI Law Professor. Last month, I examined why promised efficiency gains often become a cycle of work intensification. This month, I want to address a subtler challenge. I call it the First Draft Trap, and understanding it may change how you reach for AI the next time a new matter lands on your desk.

We have all heard the pitch: Staring at a blank page? Just prompt the AI. In seconds you have a working draft: structured, coherent, and surprisingly competent. The blank page problem, that ancient enemy of productivity, thus has been vanquished.

Except the blank page itself was never just an obstacle; rather, it was a space of possibility. For lawyers, it was the space in which the most important part of their work actually happens. Now, with AI in the mix, that may be changing.

Welcome to the First Draft Trap.

Simply put, the First Draft Trap is this: The moment you accept an AI-generated draft as your starting point, you have already made the most consequential decision of the entire project — and you made it by not making it. You let the machine choose your direction, your framing, and your theory. Everything that follows is editing; and editing, no matter how rigorous, is not the same as thinking.

The cognitive hijack

There is solid psychology behind why this happens. Daniel Kahneman and Amos Tversky demonstrated in their landmark 1974 paper, “Judgment Under Uncertainty: Heuristics and Biases,” that once people are exposed to an idea, this first impression distorts their subsequent judgments and becomes a mental anchor. In their experiments, subjects who watched a roulette wheel spin to a random number still let that number influence their estimates of completely unrelated quantities. The anchor held even when people knew it was meaningless.


Please join Tom Martin at the conference on April 28–29. It’s virtual and completely free — two days of keynotes, panels, and workshops on AI and the legal profession.


An AI first draft is the most seductive anchor imaginable. It is not random — it is plausible, and it is well-organized. It sounds like something a lawyer would write. And that is precisely what makes it dangerous. You know intellectually that it is just one of many possible approaches to addressing the matter, but the anchor holds anyway.

That is the First Draft Trap at the cognitive level. The AI draft is not just one option you happen to prefer. It is a filter that prevents you from seeing the other options that were available to you, the roads you never even noticed that you did not take.

Consider what this means for a profession built on the opposite instinct. From the first day of law school, lawyers are trained to resist the obvious answer and to think like a lawyer. The Socratic method exists for exactly this reason. A good professor hears your confident response and asks: What else? What if the facts were different? What is the argument on the other side? The goal is not to arrive at an answer, per se. It is to build the mental habit of holding multiple possibilities in tension before committing to any one of them.

The First Draft Trap is the anti-Socratic method. It delivers a confident answer before you have even formulated the question properly — and instead of interrogating it, you polish it.

The value of the blank page

Think about what a senior partner actually does when a junior associate brings them a memo. The partner’s value is not better writing; rather, it is peripheral vision: the ability to see what the memo does not address, the argument not considered, or the framing that would land differently with this particular judge or this particular jury. That capacity to see beyond the document in front of them is why clients pay senior partners premium rates. And it is precisely the muscle that atrophies when your default workflow begins with the prompt “generate a draft.”




The two-system framework offered by Kahneman and Tversky gives us a clean way to describe what is going wrong. System 1 is fast, intuitive, and pattern-matching, while System 2 is slow, deliberate, and analytical. The practice of law, at its best, is a System 2 discipline. We, as lawyers, are trained to override gut reactions, challenge assumptions, and think through consequences before acting.

In this way, the AI first draft feels like a System 2 output. It is structured, footnoted, and methodical. However, your decision to accept it as a starting point is pure System 1 — a fast, intuitive grab at the nearest plausible answer. You have used a sophisticated tool to bypass the sophisticated thinking the tool was supposed to support. That uncomfortable period of ambiguity, of not knowing which path is best, is where the real lawyering lives.

What to do instead

None of this means stop using AI. It means stop using AI to skip the hard part that matters.

Before you ever ask for a draft, ask for the map. Describe the matter or document you are working on, then ask the AI for three fundamentally different strategic framings for the problem. For each framing, request the strongest argument in its favor and its most serious vulnerability. Then ask which framing best fits the client’s goals, the audience, or the procedural posture. Close with a clear instruction: Do not write a draft yet.
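For instance — purely as an illustration, with the matter details invented — such a “map first” prompt might read:

      “I represent a commercial landlord whose anchor tenant has stopped paying rent and claims a force majeure defense. Before writing anything, give me three fundamentally different strategic framings of this problem. For each framing, state the strongest argument in its favor and its most serious vulnerability. Then tell me which framing best fits a client whose priority is preserving the tenancy rather than litigating. Do not write a draft yet.”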

That last instruction is the key. It keeps you in the driver’s seat during the phase that matters most. You are using AI to expand the possibilities before you prune them, not after. And, most importantly, it gives you the opportunity to think for yourself about other important possibilities and add them in.

In the terms used by Kahneman and Tversky, use AI to fuel System 2, not to hand the controls to System 1. Let the machine generate options, and you exercise judgment.

For lawyers, the ability to see what is not there is the whole game.

Do not let the first draft blind you to it.


Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and author of a forthcoming book. He is “The AI Law Professor” and writes this eponymous column for the Thomson Reuters Institute.

From emerging player to contender: How Latin America can compete in the global AI race — April 6, 2026

Key takeaways:

      • Strategic collaboration is becoming a defining strength for the region — Latin American organizations are realizing that progress in AI accelerates when they combine forces by linking industry expertise, academic talent, and public‑sector support.

      • AI initiatives rooted in real local challenges are gaining global relevance — By developing solutions grounded in the region’s own structural needs, whether in infrastructure, finance, agriculture, education, or mobility, many LatAm firms are producing technologies that are both highly impactful and naturally scalable.

      • Demonstrating clear outcomes is becoming fundamental — Organizations that show concrete operational improvements, measurable efficiencies, or stronger customer outcomes are strengthening their position with investors and partners.


In recent years, Latin America has experienced significant growth in investments related to AI, yet the region still accounts for only a small share of global AI investment. This is strikingly low given that the region makes up around 6.6% of global GDP, which highlights the opportunity to scale AI initiatives even further. Although there are notable differences among countries, Mexico and Brazil — the two largest LatAm economies — stand out for their volume of AI projects and funding, followed by other nations such as Chile, Colombia, and Argentina.

By recognizing the region’s strengths — which include cost-effective operations, access to data, clean energy, and public support — LatAm businesses can better position themselves and design strategies to draw in international investors that may be increasingly seeking promising locations for AI development.

Lessons from LatAm’s AI success stories

Latin America has produced remarkable AI success stories that can serve as models to build confidence among investors. These cases — involving companies that attracted substantial investment and achieved growth — demonstrate valuable best practices that range from technological innovation to working with governments and corporations. Some of these best practices include:

Building strategic alliances

The journey of innovation rarely unfolds in isolation. At times, the presence of large, established companies, whether local industry leaders or multinationals, has served as a catalyst for AI projects. The experience of Kilimo, a startup that specializes in AI-powered agricultural irrigation, proves it. Kilimo now is partnering with EdgeConneX, a data center company based in the United States, on a community initiative.

Academia, too, can be woven into this narrative. Collaborations with research centers or universities offer scientific credibility and connect ventures with emerging talent. In Mexico, AI startups often originate within university settings — such as computer vision projects from the National Autonomous University of Mexico (UNAM), for instance — and maintain agreements that sustain ongoing innovation and technical progress even with modest resources. And academic validations, whether in published papers or conference accolades, tend to resonate with foreign investors. Indeed, the emergence of this ecosystem that features early corporate clients and academic mentors frequently lends a distinctive appeal for those seeking investment.

Focusing on local problems with global impact

Within Latin America, certain issues prove especially relevant in situations in which AI solutions intersect with sectors renowned for regional strengths, such as fintech and financial inclusion, agrotech optimizing agriculture, and foodtech drawing on local ingredients. The experience of Chilean food startup NotCo — which attracted US investment and subsequently exported its products — suggests how innovations rooted in local context may generate broader attention.

By addressing needs in urban transport, education, mining, and related areas, local LatAm companies can provide access to homegrown data and users, which can further refine technology and open pathways for investors into similar emerging markets. When AI solutions respond to genuine pain points rather than mere novelty, momentum often builds more quickly, and the model finds validation among the investors that evaluate such ventures.

Showing results and AI ROI early on

Questions linger for many executives about whether AI investments actually pay off. Evidence of clear metrics like cost savings, sales growth, or error reduction can prove persuasive, especially when complemented by success stories from local clients.

Recent studies show that companies are reporting measurable returns from AI adoption, and such figures tend to reassure those considering investment by illustrating tangible improvements. Testimonials or independent validations, such as a university study, can further illuminate achievements.

The act of quantifying impact — whether in efficiency, revenue, or other relevant KPIs — has a way of transforming perceptions from uncertainty toward clarity.

Leveraging government incentives and collaborations

Many Latin American nations have put forth support programs for AI and tech projects — such as non-repayable funds, soft loans, and tax benefits for innovation — through national innovation agencies and incentive schemes across the region.

Public financing, when present, often acts as a stamp of validation for private investors. For example, this trust extended to Brazilian startups receiving Finep support for AI health projects, which in turn can shift perceptions among foreign venture capital firms. Engagement in government pilots, such as smart city initiatives or solutions for ministries, provides valuable exposure. In such contexts, public-private partnerships and incentives seem to act as quiet levers for growth and legitimacy.

Seeking smart and diversified financing

Financial strategies in Latin America have been shaped by the interplay of local and foreign capital. Local funds often bring insights and patience, while foreign funds may offer larger investments and global scaling experience. Ownership dilution sometimes accompanies the arrival of strategic investors, whose networks can prove invaluable. Programs like 500 Startups, Y Combinator, MassChallenge, and international competitions have ushered LatAm AI startups such as Heru, Rappi, Bitso, and Clip into new rounds of capital following increased exposure.

Efficiency in capital management, which can be demonstrated with lean burn rates and milestone achievement with limited resources, signals an ability to execute within the realities of LatAm, which may enhance the appeal for future investments. The cultivation of relationships and responsible stewardship of capital frequently matters as much as the funds themselves, suggesting that the value of mentorship, contacts, and reputation is often intertwined with deepening financial support.

Unlocking AI investment

By applying these principles, Latin American companies have achieved a better position to attract AI investments to their projects and help position the region as a viable destination for technology capital. These recent experiences show that when a LatAm company combines innovation, talent, and strategy — while communicating its story well — it can win over global and local investors alike. Each of the best practices noted above is based on real lessons: international alliances (NotCo with US funds), leveraging incentives (Brazilian companies funded by Finep), talent formation (Santander and Microsoft programs), focus on ROI (successful use cases that convince boards), and more.

Latin America has challenges but also unique advantages. Companies that manage to navigate this environment intelligently will increase their chances of securing the financing needed to innovate and grow. By doing so, they will contribute to a virtuous circle in which each new success attracts more investment to the region and opens doors for the next generation of LatAm AI ventures.


You can find more about the challenges and opportunities in the Latin American region here

Honing legal judgment: The AI era requires changes to how lawyers are trained during and after law school — April 2, 2026

Key takeaways:

      • AI threatens traditional lawyer development — As AI automates entry-level legal tasks like research and writing that historically have honed legal judgment skills, the profession faces a crisis in how new lawyers will develop such judgment abilities.

      • The profession can’t agree on what constitutes “legal judgment” — Unlike other professions, there is no agreed-upon definition of legal judgment or clear standards for when AI should be used.

      • Implementation requires unprecedented coordination and funding — A legal education fund as a proposed solution would require a small percentage of legal services revenue and coordinated action across law schools, legal employers, and state regulators.


This is the second of a two-part blog series that looks at how lawyer training needs to evolve in the age of AI. The first part of this series looked at how lawyers can keep their skills relevant amid AI utilization.

The key skills that comprise legal judgment have received mixed reviews, according to a recent white paper from the Thomson Reuters Institute that advocated for cultivating practice-ready lawyers. The white paper was based on feedback from thousands of experienced lawyers, judges, and law students and raises questions about how legal judgment forms when AI assistance is used for task completion.

Furlong notes that the white paper calls for reforms “… to accelerate the development of legal judgment early in lawyers’ careers.”

The challenge is that each part of the profession — law schools, employers, state supreme courts (as regulators) — has distinctly separate responsibilities. That means that, in the age of AI, coordination across the entire legal profession is needed, especially as AI reduces the availability of traditional first jobs.

Furlong points out that there is no consensus on what legal judgment is, nor any agreed-upon standards for when AI should be used in legal work. To bring clarity to these issues, the white paper proposed a profession-wide model that integrates three critical elements: i) work-based learning that’s modeled on medical residencies; ii) micro-skill decomposition of legal judgment; and iii) AI-as-thinking-partner throughout pedagogy.

Three pillars for an AI-era lawyer formation system

Not surprisingly, overreliance on AI can erode critical analysis and solid legal judgment skills. Addressing these concerns requires a comprehensive reimagining of how lawyers are educated and trained. One solution lies in three interconnected pillars that together form a cohesive system for developing legal judgment in an AI-integrated world.

Pillar 1: Integrate work experience into legal education

Core skills such as legal research, writing, and document review help develop legal judgment; yet this development pathway could collapse once AI assumes such tasks. The Brookings Institution recently proposed measures to preserve entry-level professional development in an AI era. This parallels the TRI white paper’s calls for mandatory supervised postgraduate practice as a key part of legal licensure.

While implementing a full residency model presents challenges, several law schools have already pioneered approaches that demonstrate the viability of work-integrated legal education that, if scaled appropriately, could improve new lawyers’ practice and judgment skills. For example, Northeastern Law School guarantees all students nearly a year of full-time work experience before graduation through four quarter-length legal positions. The program integrates supervised practice into the curriculum so graduates can gain substantial hands-on experience alongside their classroom instruction.

Also, at least one state’s licensing program offers an alternative pathway to bar admission through practice-based assessment rather than the traditional bar exam. The program demonstrates that competency can be evaluated through supervised experiential learning.

Pillar 2: Decompose legal judgment into teachable micro-skills

The legal profession needs to come to a common definition of legal judgment and develop its components to teach the concept effectively. “We can’t teach what we can’t describe,” Furlong says. To develop legal judgment, the profession must define its components, including:

      • Pattern recognition — The ability to identify when different fact patterns are related to similar legal frameworks and distinguish when superficially similar cases are legally distinct.
      • Strategic calibration and proportionality — This means understanding what level of effort, precision, and risk each matter requires and matching responses to the stakes involved.
      • Reasoning through uncertainty — This is the capacity to make defensible decisions and provide sound counsel even when the law is ambiguous, unsettled, or silent on an issue.
      • Source evaluation and authority weighting — This includes knowing which legal authorities are most suitable and being able to assess their persuasive value.
      • Ethical judgment under pressure — This means spotting conflicts, confidentiality issues, and duty-of-candor moments while maintaining competence and knowing when to escalate beyond expertise.

Breaking down legal judgment into these discrete components makes it possible to design targeted teaching interventions. For example, one former law professor and legal education executive suggests we back into AI-assisted workflows by requiring a short verification log (detailing sources checked, changes made, and why); running attack-the-draft drills (finding missing authority, weak inferences, and jurisdictional mismatches); and preserving slow work as formative work (citation chaining, updating, and adversarial research memos).

With judgment skills clearly defined and work experience integrated into training, the profession must then tackle how AI itself should be incorporated into lawyer development.

Pillar 3: AI-as-thinking-partner throughout a lawyer’s career

Warnings about overreliance on AI are mounting. The legal profession must provide clear standards for when and how AI should be used, along with training in verification and judgment skills. Overreliance on AI could compromise lawyers’ capacity to fulfill their fiduciary duties to clients.

A phased approach to introducing AI into legal work helps protect critical thinking while building AI competency. For example, in Year 1, law students could complete core legal reasoning exercises without AI assistance in order to better develop their analytical muscles. In Year 2, students would use AI as a research assistant with mandatory verification protocols that teach them to check outputs against authoritative sources. Finally, in Year 3, residencies could immerse students in real-world AI workflows under proper supervision while providing feedback.

These three pillars form a coherent vision for lawyer formation in the AI era. However, the most well-designed system faces the obstacle of funding.

The challenge of who pays

Perhaps the most difficult part of any overhaul is the cost. The medical residency model works because the US government subsidizes graduate medical education — up to $15 billion-plus annually — for teaching young medical students to be doctors. Legal education has no equivalent. Without addressing funding, however, even the best reforms will fail.

One idea is to establish a legal education fund that’s supported by an assessment of a small percentage of the legal industry’s gross legal services revenue (while exempting solo practitioners and firms with less than $500,000 in annual revenue). These funds could be used to subsidize thousands of supervised residency placements, fund law school curriculum development, support bar exam alternative assessments, and provide employer training and supervision stipends.




This proposal, of course, would require unprecedented coordination and financial commitment from the legal profession. Skeptics might argue that market forces can solve this problem, or that firms will simply create new training pathways, or that AI will prove less disruptive than feared. However, waiting for market forces risks a lost generation of lawyers. The medical profession already faced this dilemma when its voluntary reform efforts failed; only later did coordinated regulatory intervention produce the consistent quality standards the medical industry sees now.

What is clear is that inaction is resulting in degradation of lawyering skills. “Maybe… we need catastrophic external intervention to bring about the wholesale changes we can’t manage from the inside,” Furlong suggests.

However, the question is whether the legal profession will wait for a crisis to force change or act proactively to make the needed changes now, before the crisis hits.


You can learn more about the impact of AI on professional services organizations at TRI’s upcoming 2026 Future of AI & Technology Forum here

AI use and employee experience: New research reveals guidance gap in professional services — March 30, 2026

Key takeaways:

      • Employees face contradictory messages or none at all — Nearly 40% of professionals surveyed report receiving conflicting directives about AI usage from clients and leadership, while half report no client conversations about AI have occurred at all.

      • Workers lack feedback on whether their AI efforts matter — Professionals who are experimenting with AI tools without knowing if their efforts are valued are left uncertain about whether investing time in developing AI skills is worth it.

      • Job displacement fears are rising — While employees remain cautiously optimistic about AI usage in their workplace, concerns about job displacement have doubled over the past year.


As generative AI (GenAI) tools flood into legal and accounting workplaces, organizations are deploying powerful technology without giving their employees clear directions on how to use it. Worse, some have received no guidance.

New research that underpinned the recent 2026 AI in Professional Services Report from the Thomson Reuters Institute (TRI) reveals a disconnect between AI availability and organizational guidance, a gap that is creating confusion and may undermine both the employee experience and the technology’s potential value. (The report’s data was gathered from surveys of more than 1,500 legal, tax, accounting, and compliance professionals across 26 countries.)

Employees navigate inconsistent AI policies or none at all

Approximately 40% of the professionals surveyed said they received contradictory guidance from clients and leadership about AI tool usage, with directives both encouraging and discouraging their use on projects and in RFPs. This ambivalence is slowing down decision-making at the front lines — a place in which AI could deliver the most value.

Equally concerning is the fact that half of professionals indicated that no conversations with clients about AI tool usage have taken place yet. And when discussions do occur, concerns about data protection and accuracy are the main topics.


This confusion extends to external relationships as well. More than two-thirds of corporate and government clients remain unaware of whether their outside professional service providers are even utilizing GenAI. And the majority of clients have provided no direction whatsoever to their outside law firms concerning AI use, respondents said.


Organizations often ignore what employees need to know

Perhaps most revealing is how organizations are measuring — or failing to measure — whether their AI investments are paying off. Almost half of respondents said their organizations are not measuring return on investment (ROI) at all. Among the minority (18%) of respondents who said their organizations do track ROI, the metrics they use tell a story about organizational priorities. The fact that internal cost savings and employee usage rates lead the list suggests a focus on efficiency over innovation or quality improvements.


This measurement vacuum has consequences for employee experience. Without clear success metrics, employees lack feedback on whether their AI experimentation is valued, discouraged, or even noticed. The absence of ROI frameworks also makes it hard to justify training investments or dedicated time that allows employees to develop AI fluency.

AI usage doubles while support systems fall behind

AI usage among professional service organizations has nearly doubled over the past year, and professionals are increasingly integrating these tools into their workflows, the report shows. Yet organizational infrastructure that could support this adoption surge lags badly. Most professionals said they expect GenAI to become central to their work within the next two years — but that may be happening without roadmaps from their employers.

In addition, notable barriers in employees’ usage of AI remain. When asked what barriers could prevent their organization from more widely adopting GenAI and agentic AI, almost 80% of professionals cited concerns over inaccurate responses. Other concerns included worries over data security, privacy, and ethical use. Most of these suggest an ongoing lack of trust in GenAI.


The tool landscape adds another layer of complexity. Publicly available tools dominate current usage, with more than half of respondents (57%) citing their use, while proprietary or industry-specific solutions remain largely in the consideration phase. This suggests employees are often self-provisioning AI tools rather than working within enterprise-supported ecosystems. This potentially opens organizations to increased risk exposure because of security gaps, compliance risks, and inconsistent quality.

Employees’ job displacement fears increasing

Despite these challenges, employee sentiment toward AI remains cautiously optimistic. More than half (57%) of respondents said they are either hopeful or excited about the future of GenAI in their industry. Clearly, employees see AI’s potential to enhance their efficiency, automate routine tasks, and free up their time for higher-value work.

At the same time, hesitation and concern among employees are rising, particularly around accuracy, job displacement fears, and the unknown implications of autonomous AI systems. Notably, concerns about job displacement have doubled over the past year, and this trend demands organizational attention and transparent communication about a workforce strategy to combat this concern.

What organizations need to do now

Organizational leaders who are serious about positive employee AI experiences need to step up their efforts to provide guidance to employees and gain the ROI that AI promises. Specific steps they can take include:

      • Draft clear and consistent guidance — Create explicit policies for employees about when AI use is encouraged, required, or prohibited. This includes client communication protocols, data-handling requirements, and escalation procedures for situations in which AI outputs seem questionable.
      • Develop and implement meaningful ROI metrics — Organizations must move beyond usage rates and cost savings as key success measurements. Tracking data points that capture quality improvements, time redeployed to strategic work, and client feedback on AI-enhanced deliverables presents a more comprehensive picture. Also, leaders need to share these metrics transparently in order to give employees an understanding of organizational priorities.
      • Invest in structured learning — The survey shows professionals are experimenting with dozens of different tools, from ChatGPT to specialized legal tech platforms. Organizations should curate recommended toolsets, provide hands-on training, and create communities of practice in which employees can share effective prompts and use cases with colleagues.

Our data shows that the employee experience around AI adoption reveals a workforce that is hopeful but hungry for direction and concerned about job impacts. Leaders who implement these actions effectively are more likely to unlock the strategic value that AI promises while building the trust and competence needed for their organizations and their employees to thrive in an automated future.


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here

Honing legal judgment: How professional acumen & fiduciary care can keep lawyers relevant in the age of AI — March 25, 2026

Key highlights:

      • Lawyers excel at semantic legal work while AI excels in syntactic tasks — Syntactic work (document generation, pattern recognition) is where AI excels, but semantic work involving exercising independent judgment, reflecting on consequences, and fulfilling fiduciary duties remains uniquely human.

      • Fiduciary duty as the core of legal relevance — What distinguishes lawyers isn’t just what they do, but how and why they do it. The fiduciary relationship demands human understanding of context and the ability to balance competing interests, recognize unstated concerns, and exercise discretion.

      • 5 hours to deepen or diminish — The five hours lawyers are expected to gain each week by using AI can either accelerate professional obsolescence or deepen lawyers’ relevance, depending on what they do with them.


This is the first of a two-part blog series that looks at how lawyers can keep their skills relevant in the age of AI

Lawyers expect to gain a full five hours per week of worktime due to the efficiency derived from AI use, according to the Thomson Reuters 2025 Future of Professionals Report. Yet the fear of job loss among lawyers is rising, with those viewing AI as a threat or somewhat of a threat growing to almost two-thirds (65%) of those surveyed, according to the Thomson Reuters Institute’s 2026 AI in Professional Services Report.

Many in the legal profession are asking how lawyers are uniquely valuable at a time when machines can process legal information faster and cheaper. The answer lies in understanding the difference between what AI does in processing legal information and what humans do in exercising legal judgment, says Kevin Lee, Founding Director of the Institute for AI & Democratic Governance.

Defining 2 levels of legal work

Understanding what makes lawyers particularly meaningful in this AI moment requires distinguishing between two different levels of legal work, especially at a time when AI-enabled information systems are compressing humanity and legal judgment into data points and draining away the storytelling and moral nuance that ground both. According to Lee, these two levels are the syntactic and the semantic:

      • Syntactic — Lawyers process information, generate documents, and recognize patterns at the syntactic level, meaning those tasks in which AI excels and delivers promised efficiency gains. “The danger is that we will use this efficiency merely to generate more syntactic volume,” Lee explains, adding that this would simply mean processing more documents at greater speed. “If we do that, we will have automated ourselves out of a profession.”
      • Semantic — The semantic aspect of lawyering highlights the irreducible skills of the legal practice, which include exercising independent legal judgment, reflecting on consequences, demonstrating care for clients, and fulfilling fiduciary duties.

This distinction between the syntactic and the semantic is inherent in the very definition of the practice of law, Lee says, pointing out that many jurisdictions distinguish between “providing legal information” (not practicing law) and “exercising independent legal judgment” (the essence of legal practice).

He also rightly contends that the existential risk facing lawyers lies not in AI completing legal tasks, but rather in the temptation to reduce lawyers’ role to verifying machine output and processing legal information. Avoiding that conflation is a challenge for the legal profession, and it requires a deeper appreciation for the craft of legal reasoning and judgment.

Kevin Lee, Founding Director of the Institute for AI & Democratic Governance

The current information age makes this more difficult by challenging society’s assumptions about reality, consciousness, and the moral meaning of human life, all at an exponential rate, Lee says. Similarly, AI and information systems threaten to reduce everything, including human beings and law itself, to processable data by stripping away the narratives and meanings that define humanity, he adds.

Semantic qualities of legal judgment

The question of what makes lawyers especially relevant in the AI era is mainly answered in how and why they do what they do, rather than in what they do. For example, Lee points to skills around executing their fiduciary duty and ensuring legitimacy and meaning as key characteristics of lawyers’ semantic qualities.

Fiduciary duty — When a client seeks legal counsel, it’s legal judgment — not information processing — that the client wants. Lawyers, as part of their fiduciary duty to their clients, demonstrate human and legal understanding of the unique context of each case and the consequences of various legal paths forward. This bond of trust between attorney and client demands reflection, consideration, care, and proper purpose.

The fiduciary duty of the lawyer to the client requires balancing competing interests, recognizing unstated concerns, and exercising discretion in ways that honor both the letter and spirit of the law. At the heart of this balance is legal reasoning and professional judgment, which often involves navigating the critical gap between legal rules as written and their meaningful application to human circumstances.

Legitimacy and meaning — Beyond the fiduciary duty of care exercised in individual client relationships, lawyers serve a broader purpose in their role to safeguard law’s connection to the narratives of justice and human dignity that legitimize its authority. Indeed, lawyers maintain the connection between law and its humanistic foundations, because the narratives that give legal authority its legitimacy depend on that connection. “The artwork that one associates with the law (in law schools and courtrooms) connects actions and legal judgment of attorneys to the mythic meaning of justice, equality, and the rule of law,” Lee explains.

How to deepen appreciation for the special relevance of lawyers

The five hours that lawyers said they expect to gain each week through AI-driven efficiency represent a choice point for the profession. These hours can either accelerate lawyers’ obsolescence or deepen their relevance. To ensure the latter, Lee advises lawyers and legal institutions to examine ways to put those hours to good use by, for example:

Collaborating on apprenticeships — Bar associations, practicing lawyers, legal service providers, and law schools should consider apprenticeship models that teach professional norms and values through mentorship, allowing law students to learn the craft of legal reasoning through guided practice.

Recommitting more fully to legal service — Law firms and in-house counsel must reclaim humanistic awareness as central to their professional identity. The efficiency gains from AI should be reinvested into semantic work, which includes counseling clients, exercising moral judgment, and fulfilling fiduciary duties with greater care and reflection.

Improving legal education — Law schools must return to the humanistic formation of lawyers, echoing the vision of the pre-2007 , before economic pressures reduced legal education to producing commercially exploitable graduates. In addition, AI ethics must be integrated systematically across the curriculum into doctrinal courses rather than being confined to electives.

Looking ahead

The five hours gained through AI represent a defining choice for the legal profession. The special relevance of lawyers in the AI age lies precisely in the human components and semantic aspects of lawyering.


In the concluding part of this blog series, we look at how the legal profession needs to rethink how it trains lawyers in order to prevent AI from eroding legal judgment skills

SALT changes in 2026 and beyond: What indirect tax teams need to know /en-us/posts/corporates/salt-changes-indirect-tax-teams/ Fri, 20 Mar 2026 13:27:08 +0000 https://blogs.thomsonreuters.com/en-us/?p=70037 Key takeaways:

      • Changing the balance of taxes — Budget‑driven tax swaps and incentive reforms are changing the balance between income, property, and sales taxes, forcing large companies to revisit their multistate footprint.

      • How revenue is sourced is changing, too — Rapidly evolving digital and AI‑related taxes are creating new nexus, sourcing, and base‑definition issues for businesses that rely on revenue from digital advertising, social platforms, data monetization, and automated tools.

      • Planning amid continued uncertainty — New federal tax regulations, tariff‑related uncertainty, and even the elimination of the penny are all amplifying state‑by‑state complexity for in‑house tax departments.


WASHINGTON, DC — Tax industry experts who gathered at to provide updates on the current landscape of state and local tax (SALT) policy and offer insights for corporate tax departments found, not surprisingly, that they had a lot to talk about in the current economic environment.

Mapping the new SALT frontier

For starters, this year’s SALT agenda is not just an abstract policy story for large, multistate businesses; rather, it’s a direct driver of cash taxes, effective tax rate (ETR) volatility, and audit exposure. Indeed, several state legislatures are advancing new taxes on digital advertising and data, revisiting incentives and data center exemptions, and using conformity to federal law — especially the tax provisions in the One Big Beautiful Bill Act (OBBBA) — as a policy lever, all against the backdrop of slowing revenues and contentious elections.

“Tax swaps” and incentives — States that are facing budget pressure are, unsurprisingly, looking at tax swaps to reduce income or property taxes while broadening the sales & use tax base and trimming exemptions. For example, on March 3, the state of Florida — which already doesn’t have a state income tax — passed legislation that in the state.

Moreover, with the rapid expansion of AI comes an extensive need for data centers. Several states are reassessing data center exemptions and credits, either tightening qualification standards, requiring centers to supply more of their own power, or repealing incentives outright. A decision in Virginia to , for example, is viewed as a potential template for other states, particularly in those areas in which energy and environmental concerns are priorities. At the same time, proposals targeting include expanded corporate tax disclosures, CEO compensation surcharges, and enhanced reporting on apportionment and group filing methods.

What companies should consider — Large companies operating across multiple states should consider building an inventory of their credits and incentives by jurisdiction, including sunset dates and political risk indicators.

Companies should also build forward‑looking models that show how any sales tax base expansion would interact with their supply chain and their procurement of digital and professional services.

New exposure for tech, marketing & data

Legislators in several states are continuing, on a bipartisan basis, to target the digital economy as a source of revenue and policy leverage. For example, Maryland continues to lead with its digital advertising tax, while Washington state’s expansion of its sales tax to include certain digital and IT services and Chicago’s social media taxes illustrate the variety of approaches that state and local jurisdictions are exploring to expand their tax base and raise revenue.

Data and “digital resource” taxes — Proposals in states such as New York would tax companies that derive income from resident data, treating data as a natural resource. While no state has fully implemented a comprehensive data tax, large platforms and data‑driven enterprises are monitoring these bills closely.

AI‑related SALT rules — Many states still classify AI solutions under existing Software as a Service (SaaS) or data‑processing categories, but some — including New York — are exploring surcharges tied to AI‑driven workforce reductions. And at least two states are explicitly taxing AI, similarly to the way software is taxed.

For corporate tax leaders, practical next steps include mapping those areas in which the group has digital ad spending, user bases, data monetization, or AI deployments, and then overlaying that map with current and pending digital tax proposals. In parallel, it is increasingly critical for the tax team to partner with IT and marketing teams to understand how contracts, invoicing structures, and platform design will affect nexus, tax base definition, and sourcing.

Federal shifts magnify multistate complexity

The OBBBA made permanent several of , while expanding SALT relief on the individual side and creating new interactions for multinational groups. Because most states start from federal taxable income — either on a rolling, static, or selective conformity basis — OBBBA changes reverberate across state corporate income tax bases, especially in those states that have decoupled themselves from interest limits, R&D expensing, or new production‑related incentives.

Corporate tax departments must now juggle different conformity dates and selective decoupling rules across rolling and static states, including jurisdictions that automatically decouple when a federal change exceeds a revenue impact threshold. This requires more granular state‑by‑state modeling of OBBBA impacts on apportionable income, deferred tax balances, and cash tax forecasts. It also heightens the risk that political disputes — such as — produce mid‑cycle changes that complicate provision and compliance processes.

Penny elimination — With federal , states now are moving toward symmetrical rounding for cash transactions, rounding the final tax‑inclusive total to the nearest five cents while attempting not to alter the underlying tax computation. For retailers and consumer‑facing enterprises, this shifts the focus to point of sale (POS) configuration, consumer‑protection exposure, and class‑action risk if rounding is implemented incorrectly.
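To make the rounding mechanics concrete, here is a minimal sketch in Python of the nearest-nickel approach described above; the 6% tax rate and half-up rounding convention are assumptions for illustration only, as actual state rules on rounding direction and applicability vary:

    from decimal import Decimal, ROUND_HALF_UP

    def round_to_nickel(total: Decimal) -> Decimal:
        # Symmetrical rounding: move the tax-inclusive cash total to the
        # nearest $0.05 (1-2 cents round down, 3-4 cents round up).
        nickels = (total / Decimal("0.05")).quantize(Decimal("1"), rounding=ROUND_HALF_UP)
        return nickels * Decimal("0.05")

    # Hypothetical example: tax is computed on the unrounded price first, so
    # the underlying tax computation is unchanged; only the final cash total
    # is rounded at the point of sale.
    price = Decimal("19.99")
    tax = (price * Decimal("0.06")).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)  # 1.20
    total = price + tax                  # 21.19
    print(round_to_nickel(total))        # 21.20

The key design point in such a POS configuration is sequencing: compute and record tax on exact amounts, then round only the cash-tender total, so card transactions and tax remittance are unaffected.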

Tariffs and refunds — The U.S. Supreme Court’s Learning Resources, Inc. v. Trump decision under the International Emergency Economic Powers Act in February leaves open how more than $100 billion in and what that means for prior sales & use tax treatment. Streamlined guidance generally treats tariffs embedded in product prices as part of the taxable sales price but excludes tariffs paid directly by a consumer‑importer from the tax base, raising complex questions if tariff refunds reduce costs or sales prices retroactively.
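As a worked illustration of that base-definition distinction, here is a short, hedged Python sketch; the 7% rate and dollar amounts are invented for the example, and the function simply encodes the treatment described above of embedded versus directly paid tariffs:

    from decimal import Decimal

    RATE = Decimal("0.07")  # hypothetical sales tax rate for illustration

    def taxable_base(product_price: Decimal, embedded_tariff: Decimal) -> Decimal:
        # Per the streamlined guidance described above: tariffs the seller
        # embeds in the product price stay in the taxable sales price, while
        # tariffs a consumer-importer pays directly are excluded from the base.
        return product_price + embedded_tariff

    # Seller embeds a $10 tariff in a $100 item: tax applies to $110.
    print(taxable_base(Decimal("100"), Decimal("10")) * RATE)  # 7.70
    # Same tariff paid directly by the consumer-importer: tax applies to $100 only.
    print(taxable_base(Decimal("100"), Decimal("0")) * RATE)   # 7.00

A retroactive tariff refund then raises exactly the question flagged above: whether the taxable base, and the tax already collected on it, must be recomputed.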

For indirect tax teams, given the confluence of the 2026 SALT changes — including impacts on everything from data center credits to the recent Supreme Court tariff decision — the need to rely on internal partners across the business has never been stronger. Combining that with a greater reliance on technologies, including dedicated research tools to stay abreast of state-by-state tax changes, may be the best way for corporate tax teams to keep up with compliance requirements and avoid penalties.


You can download a full copy of here

Move over, “Death of the billable hour,” Legalweek 2026 has found a new existential crisis /en-us/posts/legal/legalweek-2026-new-existential-crisis/ Thu, 19 Mar 2026 13:25:16 +0000 https://blogs.thomsonreuters.com/en-us/?p=70031

Key takeaways:

      • Structural change in firms — The traditional law firm pyramid, in which junior lawyers perform high-volume work at billable rates, is losing its foundation as AI compresses tasks that once took hours and clients increasingly bring more work in-house.

      • Finding new ways to train — AI-powered simulations are emerging as a concrete answer to the associate training problem, allowing new lawyers to build courtroom skills faster and fail safely behind closed doors.

      • The associate role isn’t dying, it’s being redefined — Those law firms that figure out the right mix of legal training, technological fluency, and management skills will have a significant edge over those that are still debating it.


NEW YORK — On more than one occasion, I have written seriously and at length about the death of the billable hour. I’ve argued that alternative fee arrangements (AFAs) are the future, that the economic logic of hourly billing is irreconcilable with AI-driven productivity gains, and that the industry needs to prepare for a fundamentally different pricing model. I meant every word. I still do.

Yet, at last week’s one attendee pointed out they’ve been hearing about the death of the billable hour since the 1990s. At this point, it’s less a prediction and more of a tradition. Indeed, Matthew Kohel, a partner at Saul Ewing, said that, despite the legal press coverage connecting AI to the billable hour’s demise, that narrative is now entering its third or fourth decade. And Kohel said his firm simply isn’t seeing meaningful client-driven movement toward AFAs.

So let’s be honest: the billable hour is not dead, and in fact, it may not even be close to dead.

However, if you’re looking for something that is facing a genuine existential reckoning — something the legal industry whispered about in the early days of generative AI (GenAI) and is now discussing openly — Legalweek 2026 may have found it. It turns out the billable hour was never the thing in danger; rather, it’s the person billing the hours.

It’s the associate.

The question nobody wanted to ask out loud

The future of the junior lawyer surfaced in virtually every breakout session across the three-day event, and while Legalweek may not be where the question originated, it was certainly the moment this idea graduated from a half-whispered aside to a main-stage conversation.

Moreover, the problem has grown more urgent since its inception in the early GenAI days, when the question was simply whether a firm would need fewer associates. Now, that question hasn’t gone away, but it’s been joined by harder ones concerning training, hiring, and legal and technical skills. For example, what if AI is already better than a junior associate at some of the tasks that defined the role in the past? And what happens if someone says it out loud?

Someone said it out loud.


If you’re looking for something that is facing a genuine existential reckoning, Legalweek 2026 may have found it. It turns out the billable hour was never the thing in danger; rather, it’s the person billing the hours. It’s the associate.


During a panel on Measuring What Matters, the conversation turned to client trust. Clients want to know: How can you be sure AI will catch everything? How do you trust it to find what matters across 5,000 pages of documents?

The response from the panel was direct, and it landed like a brick in the room: it’s 5,000 pages, and someone was already reading those 5,000 pages. That someone is an associate. If that associate — who, more often than not, is one of the least experienced lawyers in the building — is the one reading all those pages, why would you trust them to do it better than a machine?

While that question hung in the air during the panel, it deserves to sit with you for a moment afterward. Because embedded in it is the uncomfortable arithmetic that drives the entire associate question. The traditional law firm pyramid is built on a base of junior lawyers performing high-volume, lower-complexity work such as document review, due diligence, and first-pass research, and doing so at rates that generate revenue while the activity (in theory) simultaneously trains the next generation of partners. If AI can do that base-layer work faster, cheaper, and with accuracy that one panelist described as “beyond very good,” then the pyramid doesn’t just shrink. It loses its foundation.

Barclay Blair, Senior Managing Director of AI Innovation at DLA Piper, noted that tasks like due diligence on some types of financial contracts are already being compressed to two hours, down from 15 to 20 — with zero hours being a realistic possibility in the near future.

Further, as one attendee observed, clients increasingly are adopting AI internally, and they’re bringing work in-house that was previously sent to outside counsel. Clearly, the work that trained generations of associates isn’t just being automated — in some cases, it’s leaving the firm entirely.

Fewer reps, greater weight

Yet here is where it would be easy (and wrong) to write the doom-and-gloom version of the future, in which AI replaces associates, the pipeline collapses, nobody knows how to train lawyers anymore, civilization crumbles, etc. It’s a clean narrative, but it’s also not what Legalweek panels actually said.

Because alongside the anxiety, something else was happening. People were building answers.

In another panel, Developing the Future Lawyer, panelists spent an hour in the weeds of what associate training actually looks like when the old model breaks down — and the conversation was far more concrete than you might expect.


Panelists spent an hour in the weeds of what associate training actually looks like when the old model breaks down — and the conversation was far more concrete than you might expect.


Panelist Abdi Shayesteh, Founder and CEO of AltaClaro, laid out the core problem with precision, noting that there’s a growing gap in critical thinking among associates: templates get copy-pasted without any relevance analysis, and associates often don’t know what they don’t know. And traditional training methods, such as videos, lectures, and passive learning, don’t fix it. Indeed, those outdated models may be making it worse. Shayesteh’s analogy was blunt: You don’t learn to swim by watching videos — you need to jump into the deep end.

His solution is AI-powered simulations. Not hypothetical ones, but working deposition simulations available today, with real-time AI feedback, in which associates can practice cross-examination, deal with opposing counsel objections, and build the muscle memory that used to require years of live experience.

Kate Orr, Managing Director of Practice Innovation at Orrick, picked up the thread with two observations that reframed the stakes. First, AI simulations allow associates to fail behind closed doors, a radical improvement over the old model, in which blowing it had real consequences because failure often happened directly in front of the partners. Second, the tool isn’t just for juniors. Even experienced lawyers are using simulations to test different approaches, tweak personas, and sharpen arguments. Orrick’s own Supreme Court team had a lawyer use AI to review a draft brief and identify paragraphs that could be tighter.

Todd Heffner, Partner at Smith, Gambrell & Russell, said the real question isn’t whether associates will use AI, but rather whether it gets them to lead at trial in year 10 instead of year 20. Right now, most associates are lucky to see the inside of a courtroom in their first seven years, and even then, they spend most of their time back in the hotel prepping for the more experienced attorneys instead of arguing themselves. If simulations can compress that learning curve, the associate’s career doesn’t disappear, rather, it gets accelerated.

The dinosaur that adapted

During the Measuring What Matters panel, Mitchell Kaplan, Managing Director of Zarwin Baum, introduced himself with a memorable bit of self-deprecation: He’s a dinosaur — but one, he clarified, who understands how AI can revolutionize what he does.

Kaplan’s perspective threaded through both days of programming like a quiet counterweight to the anxiety. He’d seen this before — not AI specifically, but the fear of it. He watched the legal industry transition from physical libraries to digital research tools, and he watched attorneys adapt. And his message was consistent: the work changes, but the need for lawyers doesn’t disappear. Associates may be taking shortcuts, but they still need to read, still need to review, and still need to think.

They’re developing differently than his generation did, Kaplan said, but it’s the same way every generation develops differently from the one before it. And different doesn’t mean wrong.


The work changes, but the need for lawyers doesn’t disappear. Associates may be taking shortcuts, but they still need to read, still need to review, and still need to think.


It’s a perspective that found an unexpected echo in the Enterprise Alignment panel. Mark Brennan, a partner at Hogan Lovells, relayed a comment he heard at a previous AI conference: The next generation of entry-level jobs will be managers — because they’ll be managing agents and other tech tools. Brennan admitted he didn’t have all the answers on what that means for legal training, but the implication was clear. The associate role isn’t dying; instead, it’s being redefined. And the firms that figure out what that redefined role looks like, and what mix of legal training, technological fluency, critical thinking, and management skills it requires, will have a significant advantage over those firms that are still debating it.

Another panelist, Andrew Medeiros, Managing Director of Innovation at Troutman Pepper Locke, made a prediction that felt like the sharpest version of this idea. He said that at some point, new lawyers are going to be doing simulated matters as a standard part of the development process. Eventually, there’s going to be a generation that walks in as new attorneys and finds themselves litigating right away.

That’s not the death of the associate. Rather, that’s the beginning of a different kind of associate — one who arrives at the courtroom sooner, with different preparation, carrying different tools.

The billable hour, for all the prophecies, refuses to die. The associate, it turns out, has no intention of dying either — just evolving. Mitchell Kaplan called himself a dinosaur — but Legalweek was full of dinosaurs, and every one of them was adapting and, in that adaptation, thriving. The harder question is whether the firms that forged them will be brave enough to follow.


You can find more of our coverage of Legalweek events here
