Emerging Technologies Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/topic/emerging-technologies/ Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology, and human expertise you need to find trusted answers. From emerging player to contender: How Latin America can compete in the global AI race /en-us/posts/technology/latam-ai-investment/ Mon, 06 Apr 2026 11:57:46 +0000 https://blogs.thomsonreuters.com/en-us/?p=70259

Key takeaways:

      • Strategic collaboration is becoming a defining strength for the region — Latin American organizations are realizing that progress in AI accelerates when they combine forces by linking industry expertise, academic talent, and public‑sector support.

      • AI initiatives rooted in real local challenges are gaining global relevance — By developing solutions grounded in the region’s own structural needs, whether in infrastructure, finance, agriculture, education, or mobility, many LatAm firms are producing technologies that are both highly impactful and naturally scalable.

      • Demonstrating clear outcomes is becoming fundamental — Organizations that show concrete operational improvements, measurable efficiencies, or stronger customer outcomes are strengthening their position with investors and partners.


In recent years, Latin America has experienced significant growth in AI-related investment, accounting for . This share is strikingly low given that the region makes up around 6.6% of global GDP, a gap that highlights the opportunity to scale AI initiatives even further. Although there are notable differences among countries, Mexico and Brazil — the two largest LatAm economies — stand out for their volume of AI projects and funding, followed by nations such as Chile, Colombia, and Argentina.

By recognizing the region’s strengths — cost-effective operations, access to data, clean energy, and public support — its businesses can better position themselves and design strategies to attract international investors who are increasingly seeking promising locations for AI development.

Lessons from LatAm’s AI success stories

Latin America has produced remarkable AI success stories that can serve as models to build confidence among investors. These cases — involving companies that attracted substantial investment and achieved growth — demonstrate valuable best practices that range from technological innovation to working with governments and corporations. Some of these best practices include:

Building strategic alliances

The journey of innovation rarely unfolds in isolation. At times, the presence of large, established companies, whether local industry leaders or multinationals, has served as a catalyst for AI projects. The experience of Kilimo, which specializes in AI-powered agricultural irrigation, proves it. Now, Kilimo is partnering with EdgeConneX, a data center company based in the United States, on a community .

Academia, too, can be woven into this narrative. Collaborations with research centers or universities offer scientific credibility and connect ventures with emerging talent. In Mexico, AI startups often originate within university settings — such as computer vision projects from the National Autonomous University of Mexico (UNAM) — and maintain agreements that sustain ongoing innovation and technical progress even with modest resources. And academic validation, whether in published papers or conference accolades, tends to resonate with foreign investors. Indeed, an ecosystem that features early corporate clients and academic mentors frequently lends distinctive appeal to those seeking investment.

Focusing on local problems with global impact

Within Latin America, certain issues prove especially relevant where AI solutions intersect with sectors of renowned regional strength, such as fintech and financial inclusion, agrotech optimizing agriculture, and foodtech drawing on local ingredients. The experience of Chilean food startup NotCo — in which and subsequently exported — suggests how innovations rooted in local context may generate broader attention.

By addressing needs in urban transport, education, mining and related areas, local LatAm companies can provide access to homegrown data and users, which can further refine technology and open pathways for investors into similar emerging markets. When AI solutions respond to genuine pain points rather than mere novelty, momentum often builds more quickly, and the model finds validation among that evaluate investments.

Showing results and AI ROI early on

Questions linger for many executives. Evidence of clear metrics, such as cost savings, sales growth, or error reduction, can prove persuasive, especially when complemented by success stories from local clients.

Recent studies show that companies ; and such figures tend to reassure those considering investment by illustrating tangible improvements. Testimonials or independent validations, such as a university study, can further illuminate achievements.

The act of quantifying impact — whether in efficiency, revenue, or other relevant KPIs — has a way of transforming perceptions from uncertainty toward clarity.

Leveraging government incentives and collaborations

Many Latin American nations have put forth support programs for AI and tech projects, such as non-repayable funds, soft loans, and tax benefits for innovation illustrated in , , , or the .

Public financing, when present, often acts as a stamp of validation for private investors. This trust extended, for example, to Brazilian startups receiving Finep support for AI health projects, which in turn can shift perceptions among foreign venture capital firms. Engagement in government pilots, such as smart city initiatives or solutions for ministries, provides valuable exposure. In such contexts, public-private partnerships and incentives act as quiet levers for growth and legitimacy.

Seeking smart and diversified financing

Financial strategies in Latin America have been shaped by the interplay of local and foreign capital. Local funds often bring insights and patience, while foreign funds may offer larger investments and global scaling experience. Ownership dilution sometimes accompanies the arrival of strategic investors, whose networks can prove invaluable, such as . Programs like 500 Startups, Y Combinator, MassChallenge, and international competitions have ushered LatAm AI startups such as Heru, Rappi, Bitso, and Clip into new rounds of capital following increased exposure.

Efficiency in capital management, which can be demonstrated with lean burn rates and milestone achievement with limited resources, signals an ability to execute within the realities of LatAm, which may enhance the appeal for future investments. The cultivation of relationships and responsible stewardship of capital frequently matters as much as the funds themselves, suggesting that the value of mentorship, contacts, and reputation is often intertwined with deepening financial support.

Unlocking AI Investment

By applying these principles, Latin American companies are better positioned to attract AI investment to their projects and to help establish the region as a viable destination for technology capital. These recent experiences show that when a LatAm company combines innovation, talent, and strategy — while communicating its story well — it can win over global and local investors alike. Each of the best practices noted above is based on real lessons: international alliances (NotCo with US funds), leveraging incentives (Brazilian companies funded by Finep), talent formation (Santander and Microsoft programs), focus on ROI (successful use cases that convince boards), and more.

Latin America has challenges but also unique advantages. Companies that manage to navigate this environment intelligently will increase their chances of securing the financing needed to innovate and grow. By doing so, they will contribute to a virtuous circle in which each new success attracts more investment to the region and opens doors for the next generation of LatAm AI ventures.


You can find more about the challenges and opportunities in the Latin American region here

Honing legal judgment: The AI era requires changes to how lawyers are trained during and after law school /en-us/posts/legal/honing-legal-judgment-training-lawyers/ Thu, 02 Apr 2026 15:36:44 +0000 https://blogs.thomsonreuters.com/en-us/?p=70236

Key takeaways:

      • AI threatens traditional lawyer development — As AI automates entry-level legal tasks like research and writing that historically have honed legal judgment, the profession faces a crisis in how new lawyers will develop that judgment.

      • The profession can’t agree on what constitutes “legal judgment” — Unlike other professions, there is no agreed-upon definition of legal judgment or clear standards for when AI should be used.

      • Implementation requires unprecedented coordination and funding — The proposed solution, a legal education fund, would be financed by a small percentage of legal services revenue and would require coordinated action across law schools, legal employers, and state regulators.


This is the second of a two-part blog series that looks at how lawyer training needs to evolve in the age of AI. The first part of this series looked at how lawyers can keep their skills relevant amid AI utilization.

The key skills that comprise legal judgment have received mixed reviews, according to a recent white paper from the Thomson Reuters Institute that advocated for cultivating practice-ready lawyers. The white paper was based on feedback from thousands of experienced lawyers, judges, and law students and raises questions about how legal judgment forms when AI assistance is used for task completion.

notes that calls for “… to accelerate the development of legal judgment early in lawyers’ careers.”

The challenge is that each part of the profession — law schools, employers, state supreme courts (as regulators) — has distinctly separate responsibilities. That means that, in the age of AI, coordination across the entire legal profession is needed, especially as AI reduces the availability of traditional first jobs.

Furlong points out that there is no consensus on what legal judgment is, nor any agreed-upon standards for when AI should be used in legal work. To bring clarity to these issues, the white paper proposed a profession-wide model that integrates three critical elements: i) work-based learning modeled on medical residencies; ii) micro-skill decomposition of legal judgment; and iii) AI-as-thinking-partner throughout pedagogy.

Three pillars for an AI-era lawyer formation system

Not surprisingly, overreliance on AI can erode critical analysis and solid legal judgment skills. Addressing these concerns requires a comprehensive reimagining of how lawyers are educated and trained. One solution lies in three interconnected pillars that together form a cohesive system for developing legal judgment in an AI-integrated world.

Pillar 1: Integrate work experience into legal education

Core skills such as legal research, writing, and document review help develop legal judgment; yet the development of those skills could collapse once AI assumes such tasks. The Brookings Institution recently proposed to preserve entry-level professional development in an AI era. This parallels the TRI white paper’s call for mandatory supervised postgraduate practice as a key part of legal licensure.

While implementing a full residency model presents challenges, several law schools have already pioneered approaches that demonstrate the viability of work-integrated legal education that, if scaled appropriately, could improve new lawyer practice and judgment skills. For example, Northeastern Law School guarantees all students nearly before graduation through four quarter-length legal positions. The program integrates supervised practice into the curriculum so graduates can gain substantial hands-on experience alongside their classroom instruction.

Also, program offers an alternative pathway to bar admission through practice-based assessment rather than the traditional bar exam. The program demonstrates that competency can be evaluated through supervised experiential learning.

Pillar 2: Decompose legal judgment into teachable micro-skills

The legal profession needs to come to a common definition of legal judgment and develop its components to teach the concept effectively. “We can’t teach what we can’t describe,” Furlong says. To develop legal judgment, the profession must define its components, including:

      • Pattern recognition — The ability to identify when different fact patterns are related to similar legal frameworks and distinguish when superficially similar cases are legally distinct.
      • Strategic calibration and proportionality — This means understanding what level of effort, precision, and risk each matter requires and matching responses to the stakes involved.
      • Reasoning through uncertainty — This is the capacity to make defensible decisions and provide sound counsel even when the law is ambiguous, unsettled, or silent on an issue.
      • Source evaluation and authority weighting — This includes knowing which legal authorities are most suitable and being able to assess their persuasive value.
      • Ethical judgment under pressure — This means spotting conflicts, confidentiality issues, and duty-of-candor moments while maintaining competence and knowing when to escalate beyond expertise.

Breaking down legal judgment into these discrete components makes it possible to design targeted teaching interventions. For example, , former law professor and executive director of , suggests we back into AI-assisted workflows by requiring a short verification log (detailing sources checked, changes made, and why); running attack-the-draft drills (find missing authority, weak inferences, and jurisdictional mismatch); and preserving slow work as formative work (citation chaining, updating, and adversarial research memos).

With judgment skills clearly defined and work experience integrated into training, the profession must then tackle how AI itself should be incorporated into lawyer development.

Pillar 3: AI-as-thinking-partner throughout a lawyer’s career

Warnings that are mounting. The legal profession must provide clear standards for when and how AI should be used, along with training in verification and judgment skills. Overreliance on AI could compromise lawyers’ capacity to fulfill their fiduciary duties to clients.

A phased approach to introducing AI into legal work helps protect critical thinking while building AI competency. For example, in Year 1, law students complete core legal reasoning exercises without AI assistance in order to develop their analytical muscles. In Year 2, students use AI as a research assistant with mandatory verification protocols that teach them to check outputs against authoritative sources. Finally, in Year 3, residencies immerse students in real-world AI workflows under proper supervision and with regular feedback.

These three pillars form a coherent vision for lawyer formation in the AI era. However, the most well-designed system faces the obstacle of funding.

The challenge of who pays

Perhaps the most difficult part of any overhaul is the cost. The medical residency model works because — up to $15 billion-plus annually — for teaching young medical students to be doctors. Legal education has no equivalent. Without addressing funding, however, even the best reforms will fail.

One idea is to establish a legal education fund that’s supported by an assessment of a small percentage of the legal industry’s gross legal services revenue (while exempting solo practitioners and firms with less than $500,000 in annual revenue). These funds could be used to subsidize thousands of supervised residency placements, fund law school curriculum development, support bar exam alternative assessments, and provide employer training and supervision stipends.


The challenge is that each part of the profession — law schools, employers, state supreme courts — has distinctly separate responsibilities, which means coordination across the entire legal profession is needed.


This proposal, of course, would require unprecedented coordination and financial commitment from the legal profession. Skeptics might argue that market forces can solve this problem, or that firms will simply create new training pathways, or that AI will prove less disruptive than feared. However, waiting for market forces risks a lost generation of lawyers. The medical profession already when the medical industry’s voluntary reform failed. Only later did coordinated regulatory intervention produce the consistent quality standards the medical industry sees now.

What is clear is that inaction is resulting in degradation of lawyering skills. “Maybe… we need catastrophic external intervention to bring about the wholesale changes we can’t manage from the inside,” Furlong suggests.

However, the question is whether the legal profession will wait for a crisis to force change or act proactively to make the needed changes now, before the crisis hits.


You can learn more about the impact of AI on professional services organizations at TRI’s upcoming 2026 Future of AI & Technology Forum here

Honing legal judgment: How professional acumen & fiduciary care can keep lawyers relevant in the age of AI /en-us/posts/legal/honing-legal-judgment-keeping-lawyers-relevant/ Wed, 25 Mar 2026 14:21:08 +0000 https://blogs.thomsonreuters.com/en-us/?p=70071

Key highlights:

      • Lawyers excel at semantic legal work while AI excels at syntactic tasks — Syntactic work (document generation, pattern recognition) is where AI shines, but semantic work, which involves exercising independent judgment, reflecting on consequences, and fulfilling fiduciary duties, remains uniquely human.

      • Fiduciary duty as the core of legal relevance — What distinguishes lawyers isn’t just what they do, but how and why they do it. The fiduciary relationship demands a human who understands context, balances competing interests, recognizes unstated concerns, and exercises discretion.

      • 5 hours to deepen or diminish — The five hours lawyers expect to gain each week by using AI can either accelerate professional obsolescence or deepen lawyers’ relevance, depending on how those hours are used.


This is the first of a two-part blog series that looks at how lawyers can keep their skills relevant in the age of AI

Lawyers expect to gain a full five hours per week of worktime due to the efficiency derived from AI use, according to the Thomson Reuters 2025 Future of Professionals Report. Yet the fear of job loss among lawyers is rising, as the share viewing AI as a threat or somewhat of a threat grew from to almost two-thirds (65%) of those surveyed, according to the Thomson Reuters Institute’s 2026 AI in Professional Services Report.

Many in the legal profession are asking how lawyers are uniquely valuable at a time when machines can process legal information faster and cheaper. The answer lies in understanding the difference between what AI does in processing legal information and what humans do in exercising legal judgment, says Kevin Lee, Founding Director of the Institute for AI & Democratic Governance.

Defining 2 levels of legal work

Understanding what makes lawyers particularly meaningful in this AI moment requires distinguishing between two levels of legal work, at a time when AI-enabled information systems are compressing humanity and legal judgment into data points and draining away the storytelling and moral nuance that ground both. According to Lee, these levels are the syntactic and the semantic:

      • Syntactic — Lawyers process information, generate documents, and recognize patterns at the syntactic level, meaning the tasks in which AI excels and delivers its promised efficiency gains. “The danger is that we will use this efficiency merely to generate more syntactic volume,” Lee explains, warning that this would mean simply processing more documents at greater speed. “If we do that, we will have automated ourselves out of a profession.”
      • Semantic — The semantic aspect of lawyering highlights the irreducible skills of the legal practice, which include exercising independent legal judgment, reflecting on consequences, demonstrating care for clients, and fulfilling fiduciary duties.

This distinction is inherent in how the practice of law is defined, Lee says, pointing out that many jurisdictions distinguish between “providing legal information” (not practicing law) and “exercising independent legal judgment” (the essence of legal practice).

He also rightly contends that the existential risk facing lawyers is not AI completing legal tasks, but rather the temptation to reduce lawyers’ role to verifying machine output and processing legal information. Conflating these two roles is a challenge for the legal profession, one that requires a deeper appreciation for the craft of legal reasoning and judgment.

Kevin Lee, Founding Director of the Institute for AI & Democratic Governance

Making this more difficult, the current information age challenges society’s assumptions about reality, consciousness, and the moral meaning of human life, all at an exponential rate, Lee says. Similarly, AI and information systems threaten to reduce everything, including human beings and law itself, to processable data by stripping away the narratives and meanings that define humanity, he adds.

Semantic qualities of legal judgment

The question of what makes lawyers especially relevant in the AI era is mainly answered in how and why they do what they do, rather than in what they do. For example, Lee points to skills around executing their fiduciary duty and ensuring legitimacy and meaning as key characteristics of lawyers’ semantic qualities.

Fiduciary duty — When a client seeks legal counsel, it’s legal judgment — not information processing — that the client wants. Lawyers, as part of their fiduciary duty to their clients, demonstrate human and legal understanding of the unique context of each case and the consequences of various legal paths forward. This bond of trust between attorney and client demands reflection, consideration, care, and proper purpose.

The fiduciary duty of the lawyer to the client requires balancing competing interests, recognizing unstated concerns, and exercising discretion in ways that honor both the letter and spirit of the law. At the heart of this balance is legal reasoning and professional judgment, which often involves navigating the critical gap between legal rules as written and their meaningful application to human circumstances.

Legitimacy and meaning — Beyond the fiduciary duty of care exercised in individual client relationships, lawyers serve a broader purpose: safeguarding law’s connection to the narratives of justice and human dignity that legitimize its authority. Indeed, lawyers maintain the connection between law and its humanistic foundations, and the narratives that give legal authority its legitimacy depend on this connection. “The artwork that one associates with the law (in law schools and courtrooms) connects actions and legal judgment of attorneys to the mythic meaning of justice, equality, and the rule of law,” Lee explains.

How to deepen appreciation for the special relevance of lawyers

The five hours that lawyers said they expect to gain each week through AI-driven efficiency represents a choice point for the profession. These hours can either accelerate lawyers’ obsolescence or deepen their relevance. To ensure the latter, Lee advises lawyers and legal institutions to examine ways to put those hours to good use by, for example:

Collaborating on apprenticeships — Bar associations, practicing lawyers, legal service providers, and law schools should consider apprenticeship models that teach professional norms and values through mentorship, allowing law students to learn the craft of legal reasoning through guided practice.

Recommitting more fully to legal service — Law firms and in-house counsel must reclaim humanistic awareness as central to their professional identity. The efficiency gains from AI should be reinvested into semantic work, which includes counseling clients, exercising moral judgment, and fulfilling fiduciary duties with greater care and reflection.

Improving legal education — Law schools must return to the humanistic formation of lawyers, echoing the vision of the pre-2007 , before economic pressures reduced legal education to producing commercially exploitable graduates. In addition, AI ethics must be integrated systemically across the curriculum into doctrinal courses rather than being confined to elective courses.

Looking ahead

The five hours gained through AI represent a defining choice for the legal profession. The special relevance of lawyers in the AI age lies precisely in the human and semantic aspects of lawyering.


In the concluding part of this blog series, we look at how the legal profession needs to rethink how it trains lawyers in order to prevent AI from eroding legal judgment skills

Inside the Shift: The AI Adoption Boardgame & why law firm leaders can’t afford to play it safe /en-us/posts/technology/inside-the-shift-ai-adoption-boardgame/ Mon, 23 Mar 2026 13:00:33 +0000 https://blogs.thomsonreuters.com/en-us/?p=70057

You can read TRI’s latest “Inside the Shift” feature, The AI adoption board game: Why law firm leaders can’t afford to play it safe, here


Let’s be honest: most law firms know AI is a big deal. They’ve read the headlines, attended the conferences, and nodded along when someone says, “AI will change everything.” The problem? Knowing that AI matters and actually doing something strategic about it are two very different things. And according to our latest Inside the Shift feature article, that gap is where many law firms are starting to lose ground.

In our latest Inside the Shift feature, author Michelle Nesbitt-Burrell, Marketing Strategy Director for Thomson Reuters (TR), frames AI adoption as a board game that’s already underway. Some law firms are moving confidently across the board, while others are stuck on the starting square, not because they don’t see the future, but because they’re hesitating. The latest TRI research shows that while the majority of lawyers say they believe AI will fundamentally transform the legal industry within the next few years, far fewer expect real change inside their own firms anytime soon. That disconnect is risky — especially when competitors and clients aren’t waiting around.


Inside the Shift

Here’s what should concern every law firm partner — corporate legal departments aren’t just playing the same AI adoption game, they’re winning it.

 


One of the most uncomfortable truths the article reveals is that corporate legal departments are often further ahead on AI adoption and utilization than their outside counsel. In fact, many corporate legal teams are investing in AI faster and using it more deeply in their day‑to‑day legal work. That means clients are reviewing contracts faster, doing more work internally, and increasingly judging their outside law firms on their technological sophistication. In a world like that, the excuse that “We’re still experimenting” stops sounding reasonable pretty quickly.

The article breaks law firms into three players on the game board:

          1. The laggards — Those firms with no meaningful AI plans and very little ROI to show for it.
          2. The adopters — Those firms that are experimenting with tools but don’t really have a clear strategy. These firms see some efficiency gains but too often hit a ceiling.
          3. The innovators — Those firms with visible, intentional AI strategies. These firms are far more likely to see ROI, revenue growth, and long‑term competitive advantages.

So, what separates the winners from everyone else? The article details the PLAYERS framework: pilot with purpose, leadership that sets the pace, action over perfection, strong ethics, serious education, good data, and — most importantly — strategy before tools. In other words, law firms that want to become innovators should stop asking, “What AI should we buy?” and start asking, “What are we actually trying to achieve?”

Clearly, AI isn’t a side project anymore. Law firms that treat it like one may save some time, but as the article fully explains, those firms that approach AI adoption and implementation strategically will reshape how legal work gets done. The game is already moving — the only question is whether your firm is playing to win or quietly falling behind.


You can find more Inside the Shift feature articles from the Thomson Reuters Institute here

The efficiency imperative: AI as a tool for improving the way lawyers practice /en-us/posts/ai-in-courts/improving-lawyers-practice/ Wed, 18 Mar 2026 17:45:16 +0000 https://blogs.thomsonreuters.com/en-us/?p=70024

Key insights:

      • AI brings improved efficiency — AI accelerates tasks like document review and research, freeing lawyers to pursue more high-value work for clients.

      • AI does the work of a team of lawyers — AI levels the playing field for small law firms and solo practitioners by providing additional capacity without adding headcount, thereby allowing fewer lawyers to do the work of many.

      • Yet, AI still needs guardrails — Lawyers must remain accountable, however, with human oversight and review to ensure that AI outputs are accurate, thereby preserving nuance and professional judgment.


Already, AI is no longer a theoretical concept for legal professionals, nor is it a nice-to-have for law firms seeking to impress their clients with improved efficiency and cost savings. That means the practical question now is how to adopt AI in ways that improve lawyers’ speed and capacity without compromising accuracy, confidentiality, or professional judgment.

The strongest near-term value shows up where modern practice is most strained: high-volume inputs and relentless timelines. In that environment, AI can be most helpful as an accelerant for the first pass through large bodies of material.

The possibilities, opportunities, and challenges of using AI in this way were discussed by a panel of experts in a recent webinar, , from the , a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

One panelist, Mark Francis, a partner at Holland & Knight, described one way that AI can be an enormous help. “Anything where we’re dealing with large volume of materials that need to be reviewed [such as] large sets of documents, large sets of legal research, large sets of discovery. Obviously, AI can be leveraged in all of those circumstances.” That framing is important because it anchors AI’s utility in a familiar workflow: review, triage, and synthesis at scale.

AI also has a role earlier in the workflow than many attorneys expect. In addition to sorting and summarizing, it can help generate starting structures. For lawyers drafting motions, client advisories, demand letters, contract markups, or internal investigations memos, the hardest step can be getting traction from a blank page. “It’s really good at content or idea generation,” Francis said, adding that lawyers can ask AI to “generate some ideas for me on this topic, or generate an outline of a document to cover a particular issue.”


“AI is definitely going to benefit some of the small law firms who cannot actually afford the workforce. AI can be an extension when it comes to the automation.”


Of course, that does not mean letting an AI model decide what the law is; rather, it means using AI to produce an initial outline, identify possible issues to consider, or propose alternate ways to organize an argument. Then, the attorney should apply their own judgment to accept, reject, refine, and verify the AI’s output.

For legal teams, the ideal mindset is that AI can compress the time between intake and a workable first draft, whether that draft is a research plan, a deposition outline, a set of contract fallback positions, or a motion framework. However, speed is only valuable if it enables careful lawyering rather than shortcuts.

Efficiency that scales down, not just up

AI’s impact is not limited to large law firms with dedicated tech and innovation budgets. In fact, the benefits may be most transformative for smaller legal organizations that feel every hour of administrative drag and every unstaffed matter. Panelist Ashwini Jarral, a Strategic Advisor at IGIS, underscored how broad the current level of AI adoption already is. “AI is already being used in a lot of legal research, contract analysis, and in office operations,” Jarral explained. “Whether that’s in a small law firm or a large law firm, everybody can benefit from that automation with this AI.”

For many practices, that list maps directly onto the work that consumes lawyers’ time without always adding commensurate value: repetitive research steps, first-pass contract review, intake and scheduling, matter administration, and other operational tasks.

Historically, scale favored organizations that could hire more associates, paralegals, and support staff to push volume through the pipeline. Now, AI offers a different form of leverage: additional capacity without adding headcount. “It is definitely going to also benefit some of the small law firms who cannot actually afford the workforce,” Jarral said, adding that “AI can be an extension when it comes to the automation.” For a solo or small firm, that extension can show up as faster first-pass review of contracts, quicker summarization of records, more consistent intake workflows, and reduced time spent on repetitive back-office tasks.

At the same time, it is crucial to be clear-eyed about what is being automated. While AI can help deliver efficiency, it does not offer legal judgment itself. The legal profession still must decide, matter by matter, what level of review is required and what risks are acceptable.


“Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”


And that’s where implementation discipline becomes a strategic differentiator. Law firms that treat AI as a general-purpose shortcut tend to create risk; firms that treat AI as a workflow component, with guardrails, review steps, and clear accountability, are more likely to capture value without compromising quality.

The non-negotiable: lawyers remain accountable

Any serious conversation about AI in legal practice must address these limits, panelists agreed. The Hon. Linda Kevins, a Justice on the Supreme Court in the 10th Judicial District of New York (Suffolk County), offered the most direct articulation of the boundary line: “Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”

Indeed, legal work is saturated with nuance. The same set of facts can carry different weight depending on jurisdiction, judge, forum, procedural posture, and the client’s goals and risk tolerance. Even when the law is clear, the right action often is not. To strive for true justice requires judgment about timing, framing, business consequences, reputational risk, and settlement dynamics. Those are not merely inputs for an AI to process — they are human decisions that define legal representation.

As the webinar made clear, this is the point at which responsible use becomes practical, not abstract. If AI is used for research support, contract analysis, or document review, lawyers need an explicit approach to verification and oversight. Outputs may look polished and sound confident; however, confidence is not accuracy, and professional responsibility does not shift to a vendor or an AI model. Human review is not a perfunctory formality. Rather, it is the core control that protects clients and the court, and it is what turns AI from a novelty into a defensible tool.

In practice, the human in the loop means deciding in which instances AI can assist and in what instances it cannot. It also means reserving an attorney’s time for the decisions that carry legal and ethical consequences and building repeatable habits that prevent teams from drifting into overreliance on AI, especially under deadline pressure.

The legal profession can capture real benefits from AI, including speed, scalability, and improved access, but only if it adopts the technology in a way that preserves what Justice Kevins highlighted: training, nuance, and human accountability.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here

Corporate tax teams eager for AI, but frustrated by pace of change, new report shows /en-us/posts/corporates/corporate-tax-department-technology-report-2026/ Mon, 16 Mar 2026 13:06:11 +0000 https://blogs.thomsonreuters.com/en-us/?p=69963

Key insights:

      • Possibilities vs. practicality — There is a growing frustration gap between what corporate tax professionals want to achieve and what their current technological tools will allow.

      • Expectations about AI — Tax professionals have significantly accelerated the timeframe in which they expect AI to become a central part of their workflow.

      • Proactive progress — Automation is enabling a gradual shift toward more strategic, proactive tax work, although not as quickly as many tax professionals would like.


The recently released , from the Thomson Reuters Institute and Tax Executives Institute, reveals that while automation of routine tax functions is indeed enabling a long-desired shift toward more strategic, proactive tax work in some corporate tax departments, a majority of tax leaders surveyed say upgrading their department’s tax technology is still a relatively low priority at their company.


The report surveyed 170 tax leaders from companies of all sizes to find out how corporate tax professionals are using technology, overcoming obstacles, and planning for the future.

A growing “frustration gap”

In general, the report found that while many companies (especially larger ones) are actively upgrading their tax department’s technological capabilities, there is a growing frustration gap between what tax professionals know they can accomplish with more robust technologies and what their current tools allow them to do.

Adding to this frustration is a growing discrepancy between the additional budget and resources tax departments hope to get each year and the harsher reality they often face. Indeed, even though tax leaders remain optimistic that their budgets and capabilities will expand and improve in the coming years, fewer than half of the respondents surveyed said their departments received a budget increase last year, and many saw budget cuts.



Further, the report shows that the prospect of incorporating ever more sophisticated forms of AI and AI-driven tools into tax workflows is also very much on the minds of tax professionals. Even though the actual usage of AI in corporate tax departments is still relatively low, the report reveals that tax professionals now expect AI to become a central part of their workflow within one to two years, much faster than they did in last year’s report.

Indeed, as the report explains, this expectation of more imminent AI adoption represents a significant shift in attitude, because most corporate tax departments are rather circumspect about how, when, and why they incorporate new tech tools into their established routines.

If today’s technological capabilities continue to accelerate, companies that have been slow to invest in the infrastructure necessary to keep pace may soon find themselves struggling to catch up with their more tech-savvy counterparts, the report warns.

Moving toward more proactive work, albeit slowly

For companies that have invested in the technological infrastructure necessary to support advanced tax technologies, the payoff is becoming increasingly evident.

According to the report, about two-thirds (67%) of tax professionals surveyed said their company’s investment in technology had enabled a shift toward more proactive tax work within their departments. This shift is particularly noticeable at large corporations, at which, unsurprisingly, investment in tax technology has been more generous.

The 2026 Corporate Tax Department Technology Report also explores other aspects of corporate tax departments, including their hiring practices, tech training, purchasing strategies, what they see as the most popular tech tools for tax, and numerous other factors that affect how tax departments operate.


You can download a full copy of the Thomson Reuters Institute’s 2026 Corporate Tax Department Technology Report here

Human layer of AI: How to build human-centered AI safety to mitigate harm and misuse /en-us/posts/human-rights-crimes/human-layer-of-ai-building-safety/ Mon, 09 Mar 2026 17:33:34 +0000 https://blogs.thomsonreuters.com/en-us/?p=69789

Key highlights:

      • Map risks before building — Distinguish between foreseeable harms that may be embedded in your product’s design and potential misuse by bad actors.

      • Safety processes need real authority — An AI safety framework is only credible if it has the power to delay launches, halt deployments, or mandate redesigns when human rights risks outweigh business incentives.

      • Triggers enable proactive intervention — Define clear, automatic review triggers, such as product updates, geographic expansion, or emerging patterns in user reports, to ensure your safety processes adapt as risks evolve rather than reacting after harm occurs.


In recent months, the human cost of AI has become impossible to ignore. after interacting with AI chatbots, while generative AI (GenAI) tools have been weaponized to create that digitally undress women and children. These tragedies underscore that the gap between stated values around AI and actual safeguards remains wide, despite major tech companies publishing responsible AI principles.

, a senior associate at , who works at the intersection of technology and human rights, argues that closing this gap requires companies to: i) systematically assess both foreseeable harms from intended AI use and plausible misuse by bad actors; and ii) build safety processes powerful enough to actually stop launches when risks to people outweigh commercial incentives.

Detailing the two-step framework for anticipating and addressing AI risks

To build effective AI safety processes, companies must first understand what they’re protecting against, then establish credible mechanisms to act on that knowledge.

Step 1: Mapping foreseeable harms and intentional misuse

When mapping AI risks during “responsible foresight workshops” with clients, Richard-Carvajal says she takes them through a process that identifies:

    • foreseeable harms that emerge from a product’s design itself. For example, algorithm-driven recommender systems — which often are used by social media platforms to keep users on the site — are designed to drive engagement through personalized content and have been well documented to amplify sensationalist, polarizing, and emotionally harmful content, according to Richard-Carvajal.
    • intentional misuse that involves bad actors who may weaponize technology beyond its purpose. Richard-Carvajal points to the example of Bluetooth tracking devices, which initially were designed to help people find lost items, but were quickly exploited by stalkers, who placed them in victims’ handbags in order to track their movements and in some cases, to follow them home.

Tactically, Richard-Carvajal and her colleagues use role-played “bad actor personas” to help clients imagine misuse scenarios and to ensure companies anticipate harm before it occurs, rather than responding after people have been hurt.

Step 2: Building a credible AI safety process

Once risks are identified, Richard-Carvajal says she advises that companies identify mechanisms to address them. The components of a legitimate AI safety framework mirror the structure of robust human rights due diligence by centering on the risks to people.

Indeed, Richard-Carvajal identifies core components of this framework, which include: i) hazard analysis and to anticipate both foreseeable harms and potential misuse; ii) incident response mechanisms that allow users to report problems; and iii) ongoing review protocols that adapt as risks evolve.
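As a purely illustrative sketch of what the ongoing review protocols and automatic triggers described above could look like in practice, the following Python fragment encodes a few hypothetical trigger rules. The event names and thresholds here are assumptions for exposition, not part of any framework described in the article:

```python
# Illustrative sketch only: hypothetical automatic review triggers for an
# AI safety process. Event field names and thresholds are assumptions.

REVIEW_TRIGGERS = {
    # A product update that changes the underlying model warrants review.
    "product_update": lambda e: e.get("changes_model", False),
    # Expanding into a new region changes the risk context.
    "geographic_expansion": lambda e: e.get("new_region") is not None,
    # A spike in user harm reports should force a re-assessment.
    "user_reports": lambda e: e.get("reports_last_30d", 0) >= 25,
}

def reviews_due(event: dict) -> list:
    """Return the names of safety reviews triggered by an event."""
    return [name for name, rule in REVIEW_TRIGGERS.items() if rule(event)]

# A model update that also expands to a new region triggers two reviews.
event = {"changes_model": True, "new_region": "EU", "reports_last_30d": 3}
print(reviews_due(event))  # ['product_update', 'geographic_expansion']
```

The point of such a rules table is that reviews fire automatically on defined events — a product update, expansion into a new region — rather than waiting for harm reports to accumulate after the fact.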

Continual evaluation of new emerging risks is needed

As AI capabilities advance and deployment contexts expand, companies must continuously reassess whether their existing safeguards remain adequate against evolving threats to privacy, vulnerable populations, human autonomy, and explainability. Richard-Carvajal discusses each one of these factors in depth.

Privacy — Traditional privacy mitigations, such as removing information that leads to identifying specific individuals, are no longer sufficient as AI systems can now re-identify individuals by linking supposedly anonymized data back to specific people or using synthetic training data that still enables re-identification. The rise of personalized AI — in which sensitive information from emails, calendars, and health data aggregates into comprehensive profiles shared across third-party providers — can create new privacy vulnerabilities.

Children — Companies must apply a heightened risk lens for vulnerable populations, such as children, because young users lack the same capacity as adults to critically assess AI outputs. Indeed, the growing concerns around AI usage and children are warranted because AI-generated deepfakes involving real children are being created without their consent. In fact, Richard-Carvajal says that current guidance calls for specific child rights impact assessments and emphasizes the need to engage children, caregivers, educators, and communities.

Cognitive decay — A growing concern is that too much AI usage can harm human autonomy and contribute to a decline in critical thinking. This occurs when , and it has the potential to undermine their human rights in regard to work, education, and informed civic participation.

Meaningful explainability — Delivering on explainability as a core tenet of responsible AI programs has always been a challenge. As synthetic AI-generated data increasingly trains new models, explainability becomes even more critical because engineers may struggle to trace decision-making through these layered systems. To make explainability meaningful in these contexts, companies must disclose AI limitations and appropriate use contexts, while maintaining human-in-the-loop oversight for consequential decisions. Likewise, testing explanations should require engagement with actual rights holders instead of just relying on internal reviews.

Moving forward safely

While no universal checklist exists for AI safety, the systematic approach itself is non-negotiable. Success means empowering engineers to identify and address human-centered risks early, maintaining ongoing stakeholder engagement, and building safety processes that have genuine authority to delay launches, halt deployments, or mandate redesigns when human rights outweigh commercial pressures to ship products.

If your company builds or deploys AI, take action now: Give your engineers and risk teams the authority and resources to identify harms early, maintain continuous engagement with affected people and independent stakeholders, and create governance structures with the power to keep harm from happening.

Indeed, companies need to make sure these steps go beyond simple best practices on paper and make these protective processes operational, measurable, and enforceable before their next product release.


You can find more about human rights considerations around AI in our ongoing Human Layer of AI series here

Chief Marketing & Business Development Officer Forum 2026: The most important aspect of AI may be talking to your clients about it /en-us/posts/legal/cmbdo-forum-2026-talking-to-your-clients-about-ai/ Wed, 18 Feb 2026 14:47:26 +0000 https://blogs.thomsonreuters.com/en-us/?p=69455

Key insights:

      • AI can help lawyers prepare better, more relevant client conversations — AI’s real value lies in synthesizing news, regulatory updates, client activity, and relationship data so lawyers have timely, tailored insights that make outreach easier and more meaningful.

      • AI works best as a foundation for client discussions, not a script — Panelists at a recent Forum repeatedly stressed that AI-generated briefs and opportunity matrices should guide lawyers, but that authenticity, experience, and interpretation are still what make client conversations effective.

      • Firms must actively and clearly talk to clients about their AI capabilities — Clients increasingly expect AI-savvy law firms; those that can confidently explain how AI improves their service offerings while keeping humans at the center will stand out, and silence or vague messaging is a missed opportunity.


AMELIA ISLAND, Fla. — During the Thomson Reuters Institute’s recent 33rd Annual Chief Marketing & Business Development Officer Forum (formerly the Marketing Partner Forum), one concept became clear very quickly: When it comes to AI in law firms, the technology itself isn’t the hard part anymore. The real challenge — and the real opportunity — is how firms use AI to deepen client relationships and, just as importantly, how they talk to clients about what they’re doing.

Indeed, more than three-quarters of respondents (77%) say they believe law firms should take the initiative to begin these talks with clients around AI usage, according to the Thomson Reuters Institute’s recent 2026 AI in Professional Services Report.

Across multiple Forum panel discussions, speakers returned again and again to the same idea: AI is becoming a powerful business development engine, but only if lawyers and law firm business development teams are willing to use it proactively and communicate its value in human terms.

AI as an assistant, not a replacement

One of the most practical discussions at the Forum centered on using AI to make client outreach less painful and more effective. Too often, panelists contended, senior lawyers don’t send regular client notes — but it’s not because they don’t care. These notes get put on the backburner because crafting them takes time away from billable work and is hard to prioritize.


You can find out more about next year’s Chief Marketing & Business Development Officer Forum 2027 here


Several panelists talked about how AI can change that equation by pulling together information from news coverage, regulatory developments, earnings calls, relationship data, and even what clients are actively reading. Instead of staring at a blank page, partners can walk into a meeting or send a note armed with relevant, timely insights that actually matter to the client, they explained.

“We can plant things in our lawyers’ and partners’ minds to move the needle with clients so they can open conversations with clients that will make a difference,” said one panelist.

Of course, the point isn’t to automate relationships, rather it’s to give lawyers a smarter starting point — a short list of clients to contact, paired with concrete conversation openers that feel tailored rather than generic. “Those conversations and what results from those conversations will be revolutionary for your firm,” the panelist added.

Another theme that resonated at the Forum was the idea of matching client needs with firm capabilities in a much more structured way. AI can help generate documents that clearly show what a client is dealing with and where the firm can help — essentially an opportunity matrix that’s built from real data.

Strong need for lawyer training around AI

Several speakers were quick to stress, however, that this doesn’t mean that AI should be left on autopilot. The best results come when firms train their partners before client meetings, using AI-generated briefs as a foundation, not a script. That balance — between automation and authenticity — came up repeatedly throughout the Forum. As several panelists described, AI can bring insights to the surface, but lawyers still need to interpret those insights, contextualize them, and deliver them in a way that feels personal.

“AI might get you 90% of the way there, but that last 10% still depends on human judgment, experience, and relationship skills,” said one law firm technology specialist.

If there was one clear takeaway from the Forum, it’s that AI adoption rises or falls on training: not broad, one-size-fits-all sessions, but bespoke, one-on-one training that shows lawyers exactly how AI helps them prepare for client conversations. Several panelists argued that firms must educate their attorneys on how to use these tools effectively or give them very specific guidance — anything less will lead to hesitation, confusion, or outright resistance.

One of several panels discussing AI issues at the recent Chief Marketing & Business Development Officer Forum.

Of course, the problem is that AI adoption isn’t waiting for everyone to catch up. As one speaker noted, the train is already leaving the station, and those firms that fail to bring partners along — especially by showing clear, practical benefits of AI use — risk falling behind quickly.

In fact, several panelists discussed how the excitement around agentic AI is real, but so are the risks. They warned against assuming these more advanced tools are smarter or more autonomous than they really are. In fact, AI agents are still constrained by the data and tools they’re given, and a flawed understanding at the leadership level can lead to poor decisions and misplaced expectations.

That said, business development was repeatedly described as an ideal starting point for experimenting with agentic AI. The workflows are less rigid and lower stakes than agentic use in legal work, the feedback loops are faster, and early wins are easier to spot.

Talking to clients about AI matters

Overall, perhaps the most important takeaway from the Forum wasn’t technical at all. It was strategic.

Because clients increasingly expect their law firms to be AI-savvy, firms have to be proactive in their response. Firms must not just use AI internally but also understand how the technology improves their service, efficiency, and insight. Those firms that can clearly and confidently explain to their own partners and clients how AI supports their best efforts — and where humans still play a critical role — will stand out. Staying silent about AI, or worse, being vague and generic about its value, is a missed opportunity, several panelists explained.

Those law firms that thrive, especially around business development and client service, will be the ones that treat AI not as a back-office experiment, but as a client-facing capability — something to be discussed openly, thoughtfully, and authentically.


You can read the full Executive Summary of the Thomson Reuters Institute’s 33rd Annual Chief Marketing & Business Development Officer Forum here

Understanding the data core: From legacy debt to enterprise acceleration /en-us/posts/technology/understanding-data-core-enterprise-acceleration/ Tue, 03 Feb 2026 14:47:41 +0000 https://blogs.thomsonreuters.com/en-us/?p=69255

Key takeaways:

      • The real bottleneck for AI is the data core — AI is advancing rapidly, but most organizations’ data architectures, governance, and legacy assumptions can’t keep up. Without a repeatable, business-aligned data foundation, AI initiatives will struggle to scale and deliver reliable results.

      • AI success relies on explainable, traceable, and reusable data — For AI to be reliable and compliant, organizations must design data environments that emphasize lineage, semantics, and trust; and that means that compliance and auditability need to be built into the data core, not added on later.

      • Business should shift from tool-centric upgrades to business-driven, data-centric reinvention — Efforts focused only on modernizing tools or platforms miss the root issue: legacy data structures. Leaders must prioritize building a cohesive, reusable data core that aligns with business strategy.


This article is the first in a 3-part blog series exploring how organizations can reset and empower their data core.

Across boardrooms, regulatory briefings, and strategic off-sites, leaders are asking with growing urgency some variation of the same question: How do we make AI reliable, scalable, auditable, and economically defensible? The surprising answer is not in the AI technology, nor in the cloud stack, nor in another round of system upgrades.

It is in the data. Not the data we store, not the data we report, and not the data we move across our pipelines. It is in the data that we must now explain, contextualize, trace, validate, and reuse continuously as agentic AI becomes embedded in every workflow, every decision system, and every regulatory outcome.

The stark reality across industries is that AI is maturing faster than our data cores can support it. For the first time, technology is not the bottleneck — architecture is, organizational assumptions are, and governance strategies are. More importantly, the lack of a repeatable, business-aligned data foundry has become the strategic inhibitor standing between today’s operations and tomorrow’s autonomy-ready enterprises.

The realities of 2026

As 2026 gets underway, the pressures of regulation, AI adoption, data lineage requirements, and cross-system consistency have converged into a single strategic reality: We can’t keep modernizing data at the edges. The data core itself must be reimagined and compartmentalized.

For leaders across highly regulated industries, the challenge is recognizing that our data architectures were never designed for the world we’re moving into. Historically, solutions were built for predictable siloed-data systems, linear programmatic processes, and dashboard reporting. Today’s demands are continuous, variable, cross-domain, and machine-interpreted and not bound by traditional methods and techniques of process efficiency and system adaptability. Tomorrow’s systems will be comprehensively trained by data. To properly frame these realities, leaders must understand:

      • Agentic AI exposes weak data architecture immediately — Models may scale, but data debt does not. This is a new, priority constraint.
      • Lineage, semantics, and trust scoring — not models — will determine enterprise readiness — AI will only be as reliable as the meaning and traceability of enterprise data.
      • Compliance cannot be retrofitted; rather, it must be designed into the data core — Compliance no longer ends in reporting; it must exist upstream and be addressed continuously.
      • Return on investment in AI is impossible without composable, modular, and reusable data products — Data that cannot be composed, traced, and made consistent cannot be automated.
      • The bottleneck is not talent or tools, it is the absence of a data foundry — Without robust, industrial-grade data production, AI will remain fragmented and experimental.

By delivering a practical, business-first path integrated with a data-centric design, organizations enable reuse, compliance, and measurable ROI. AI is accelerating, but data readiness is not. This mismatch is where many transformation efforts die.

Agentic AI demands a data environment that simply does not exist with most legacy solutions. It requires decision-aligned semantics, federated trust scoring, cross-domain lineage, dynamic compliance overlays, and consistent interpretability. No model, no matter how advanced, can compensate for data environments that have been engineered for static reporting and linear process logic. We are entering a cycle of reinvention in which data becomes the organizing principle.
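To make the idea of composable, trust-scored data products more concrete, here is a minimal illustrative sketch; the field names and the readiness rule are assumptions for exposition, not a prescribed schema:

```python
# Illustrative sketch only: a hypothetical "data product" record that
# carries the lineage, semantics, and trust metadata the article argues
# agentic AI requires. All names and the readiness rule are assumptions.

from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str
    owner: str                                            # accountable business domain
    sources: list = field(default_factory=list)           # upstream lineage
    semantics: dict = field(default_factory=dict)         # column -> business meaning
    trust_score: float = 0.0                              # 0.0 (untrusted) .. 1.0

    def is_ai_ready(self, threshold: float = 0.8) -> bool:
        """Usable by AI only if it is traceable, explained, and trusted."""
        return bool(self.sources) and bool(self.semantics) and self.trust_score >= threshold

p = DataProduct(
    name="customer_risk_profile",
    owner="credit-risk",
    sources=["crm.accounts", "ledger.transactions"],
    semantics={"risk_band": "internal credit risk rating, A-E"},
    trust_score=0.92,
)
print(p.is_ai_ready())  # True
```

The design choice worth noting is that lineage (`sources`), semantics, and trust travel with the data product itself, so any downstream AI agent can check readiness before acting instead of relying on out-of-band documentation.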

The business need, not the engineering myth

Executives are rightfully fatigued by transformation programs. They have seen modernization initiatives expand scope, escalate cost, and ultimately underdeliver. They have heard the promises of clean data, enterprise data platforms, microservices, cloud migration, and AI-readiness. However, when agentic AI begins interacting with these ecosystems, the fragility of the entire operation becomes instantly visible.

Why? Because most data modernization initiatives have been driven by tool-centric solutions rather than architecture-centric capabilities. Traditional data governance was about oversight, not the enablement and reuse now demanded by emerging AI designs. Often, legacy methods kept audit and lineage contained within siloed processes, bridging them with replicated data warehouses, extract-transform-load (ETL) systems, and application programming interface (API) protocols.

However, this tool-centric, legacy-enabled approach is the problem. We keep optimizing the wrong layers, and we keep modernizing the components.

As a result, we too often see that AI pilots succeed, but enterprise scaling fails. Or, that regulatory reporting improves marginally, but compliance costs increase. Or M&A integrations appear straightforward, but post-close data convergence drags on for years.

The gap between ambition and reality

As a solution, a data foundry approach corrects that imbalance by formalizing the factory-grade patterns required to support agentic AI systems. It becomes the production line for reusable data products, compliant semantics, and decision-aligned datasets. It also eliminates reinvention by institutionalizing repeatable structures; and, most importantly, it restores business leadership over AI outcomes, rather than relegating decision logic to engineering workstreams and emerging technologies.

As illustrated below, AI requirements and realities need to be tempered with business demands, organizational risks, and data agility capabilities (including skill sets) to achieve realistic roadmaps of action — not strategic aspirations.


Today, the question isn’t whether organizations understand the importance of data, it’s whether leaders know how to build environments in which data becomes reusable, trustworthy, and ready for agentic AI. The issue, however, continues to be that our data cores — the architectural, operational, and standards ecosystems beneath all this — were not designed for continuous change.

Before they mobilize and execute against AI plans, business leaders need to answer the question: What business decisions are we trying to improve — and what data do those decisions actually require today, and tomorrow?

The organizations that will lead in the coming decade will do so not because they found the perfect technology stack, but because they built a reusable, continuously improving data foundation that can support AI, regulation, risk, and innovation simultaneously.

The question for leaders then becomes: Are we prepared to reinvent?

The work begins now — quietly and deliberately, across the data core where tomorrow’s competitive advantages will be created. The chart below illustrates the business-driven AI elements that must be addressed, and how the old sequence of system provisioning must be replaced, beginning with outcomes and ending with engineered AI tools.

[Chart: the data core]

AI is an output — a capability that’s unlocked after the underlying data foundation becomes coherent, traceable, explainable, and aligned with business decisions. For leaders, the data core is no longer a back-office concern or one-off IT initiative. It is a strategic asset that can shape speed, resilience, and trust across the organization.


In the next post in this series, the author will explain how to architect an integrated data core, particularly through the AXTent architectural framework for regulated organizations. You can find more blog posts by this author here.

The AI Law Professor: When AI forces us to rethink how we train junior lawyers /en-us/posts/legal/ai-law-professor-train-junior-lawyers/ Mon, 02 Feb 2026 14:48:39 +0000 https://blogs.thomsonreuters.com/en-us/?p=69248

Key takeaways:

      • The training crisis is a category error — Fears about junior lawyer obsolescence assume AI will simply replace existing tasks rather than transform the nature of legal work itself.

      • New operational roles are emerging — Positions like AI Compliance Specialists and Legal Data Analysts represent transitional pathways that didn’t exist five years ago.

      • The transition requires patience — Firms that thoughtfully redesign junior workflows will develop talent pipelines that outcompete those of firms still clinging to traditional models.


Welcome back to my The AI Law Professor column. Last month, I examined how agentic AI is transforming lawyers from reactive firefighters into proactive strategic partners. This month, I’m tackling a question that keeps law students and junior lawyers awake at night: What happens to junior lawyer development when AI handles the foundational tasks that traditionally built legal expertise?

When people say, “AI will eliminate junior training,” they’re making a category error and confusing the specific tasks that junior lawyers perform today with the underlying purpose of having juniors at all.

Junior lawyer work has never been a timeless set of tasks. It’s a bundle of functions that firms needed done at a particular moment in the history of information. When legal knowledge lived in books, juniors found it and copied it. When knowledge moved into databases, juniors learned how to query it. When email replaced dictation and secretaries, juniors typed more and seniors reviewed more. The traditional workflow is just the current snapshot of a role that has been continuously changing over time.

The purpose of junior lawyers isn’t to suffer through busy work for character-building or misplaced professional hazing. Rather, it’s to i) expand capacity, ii) reduce risk through additional eyes, and iii) create a talent pipeline by giving novices progressively harder judgment calls to make under supervision.

Generative AI (GenAI) doesn’t remove that purpose — it forces us to rethink and redesign how we accomplish it.

The AI-accelerated apprenticeship

The most important shift isn’t that juniors will do less; it’s that juniors will do different work earlier — work that looks operational, technical, and strategic, because that’s where the bottlenecks move when drafting and research become cheaper and easier to accomplish.

Today’s law firms should expect to see first- and second-year lawyers rotating through new AI-enabled roles, such as:

      • AI compliance specialist — Not a software engineer, this is a lawyer who understands what an AI model is doing well enough to manage risk. In this role, they would help set usage policies, evaluate vendor claims, document audit trails, and ensure the firm’s AI use aligns with professional responsibility duties, such as confidentiality, competence, supervision, and candor.
      • Legal data analyst — This is a junior who can turn messy matter history into usable structure by tagging outcomes, mapping issues to fact patterns, building internal playbooks, and working with knowledge management to make firm experience retrievable, so that AI can draft with your institutional memory.
      • Knowledge operations curator — This person ensures the reliability of your data by updating clause libraries, flagging suspect precedent, harmonizing templates with new local rules, and maintaining the firm’s internal source of truth so the AI doesn’t confidently resurrect a brief from 2014 that cites a law that was nullified in 2019.
      • Vibe coder — Yes, this is a lawyer, because someone has to translate legal workflows into software prototypes and agentic processes. Juniors are often better positioned than senior lawyers to do this because they actually touch the steps in which friction lives.

These transitional operational roles serve a crucial function — they provide entry points for junior lawyers to develop expertise while the profession reorganizes around AI capabilities. They’re not permanent destinations, but rather, pathways toward the strategic roles that will define legal practice in the coming decade.

In this way, the junior becomes a hybrid of lawyer, analyst, builder, and quality controller — someone who understands both the legal reasoning and the system producing it. That is not a degradation of training; rather, it is training with the boring parts stripped out and the responsibility to engage with interesting work brought forward.

The transition won’t be instant

Of course, none of this will happen overnight. There will be a messy period in which firms use AI inconsistently, partners trust it too much or not at all, and juniors are asked to double-check outputs without being taught how to do that systematically. Some law firms will treat AI as a time-saver while keeping the old apprenticeship model intact, until they realize they’ve removed the work that used to teach judgment and replaced it with… nothing.

To manage this better, law firms must redesign training programs, adjust compensation structures, and develop new metrics for evaluating junior performance. Law schools must rethink curricula built around skills that AI increasingly handles. Bar examiners must consider what competencies actually matter at a time when AI itself can pass the bar.

The long-term path is clear: AI will make legal production faster and cheaper, and that efficiency will push lawyers toward higher-value work — strategy, prevention, client-centered design, and complex advocacy. Juniors won’t be trained by copying and pasting the past.

When AI can produce a first draft in minutes, someone must evaluate whether that draft actually serves the client’s objectives. When machine learning surfaces relevant precedents from thousands of cases, someone must assess which precedents matter for this particular argument before this particular judge.

Juniors will be trained by building and supervising systems that generate the first drafts of tomorrow. Indeed, the future of junior training isn’t less training. It’s less busy work that pretends to be training, and more deliberate apprenticeship in verification and judgment.

And for those law firms willing to redesign how juniors learn, that future looks not only efficient, but better — better for clients, for partners, and especially for the next generation of lawyers.


For further help getting started on your organization’s AI journey, see here.
