Establishing GenAI literacy in courts — Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/establishing-genai-literacy-in-courts/

Scaling Justice: AI is scaling faster than justice, revealing a dangerous governance gap
/en-us/posts/ai-in-courts/scaling-justice-governance-gap/ (Mon, 13 Apr 2026)

Key takeaways:

      • AI frameworks need to keep up with implementation — While AI governance frameworks are being developed and enacted globally, their effectiveness depends on enforceable mechanisms within domestic justice systems.

      • Access to justice is essential for trustworthy AI regulation — Rights and protections are only meaningful if individuals can understand, challenge, and seek remedies for AI-driven decisions. Without operational access, governance frameworks risk remaining theoretical.

      • People-centered justice and human rights must anchor AI governance — Embedding human rights standards and ensuring equal access to justice in AI regulation strengthens public trust, accountability, and the credibility of both public institutions and private companies.


AI governance is accelerating across global, national, and local levels. As public investment in AI infrastructure expands, new oversight bodies are emerging to assess safety, risk, and accountability. The global policy conversation has shifted from principles to the implementation of meaningful guardrails and AI governance frameworks, which legislators are now drafting and enacting.

These developments reflect growing recognition that AI systems demand structured oversight and a shift from voluntary safeguards and standards to institutionalized governance. One critical dimension remains underdeveloped, however: how do these frameworks function in practice? Are they enforceable? Do they provide accountability? Do they ensure equal access?

AI governance will not succeed on the strength of international declarations or regulatory design alone; rather, domestic justice systems will determine whether it works. At this intersection, the connection between AI governance and access to justice becomes real.

In early February, leaders across government, the legal sector, international organizations, industry, and civil society convened for an expert discussion. The following reflections attempt to build on that dialogue and its urgency.

From principles to enforcement

Over the past decade, AI governance has evolved from hypothetical ethical guidelines to voluntary commitments, binding regulatory frameworks, and risk-based approaches. Even so, many past attempts to provide structure and governance have been quickly outpaced by the technology and are insufficient without enforcement mechanisms. As Anoush Rima Tatevossian of The Future Society observed: “The judicial community should have a role to play not only in shaping policies, but in how they are implemented.”

Frameworks establish expectations, while courts and dispute resolution mechanisms interpret rules, test rights, evaluate harm, assign responsibility, and determine remedies. If individuals are not empowered to safeguard their rights and cannot access these mechanisms, governance frameworks remain theoretical or are casually ignored.

This challenge reflects a broader structural constraint. Even without AI, legal systems struggle to meet demand. In the United States alone, 92% of people do not receive the help they need to access their rights in the justice system. Introducing AI into this environment without strengthening access risks widening, rather than narrowing, the justice gap.


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.


Justice systems serve as the operational core of AI governance. By inserting the rule of law into unregulated areas, they provide the infrastructure that enables accountability by interpreting regulatory provisions in specific cases, assessing whether AI-related harms violate legal standards, allocating responsibility across public and private actors, and providing accessible pathways for redress.

These frameworks also generate critical feedback. Disputes involving AI systems expose gaps in transparency, fairness, and accountability. Legal professionals see where governance frameworks first break down in real-world conditions, often long before policymakers do. As a result, these frameworks function as an early signal of policy effectiveness and rights protections.

Importantly, AI governance does not require entirely new legal foundations. Human rights frameworks already provide standards for legality, non-discrimination, due process, and access to remedy, and these standards apply directly to AI-enabled decision-making. “AI can assist judges but must never replace human judgment, accountability, or due process,” said Kate Fox Principi, Lead on the Administration of Justice at the United Nations (UN) Office of the High Commissioner for Human Rights (OHCHR), during the February panel.

Clearly, rights are only meaningful when individuals can exercise them — this constraint is not conceptual, it’s operational. Systems must be understandable, affordable, and responsive, and institutions should be capable of evaluating complex, technology-enabled disputes.

Trust, markets & accountability

Governance frameworks that do not account for these dynamics risk entrenching inequities rather than mitigating them. An individual’s ability to understand, challenge, and seek a remedy for automated decisions determines whether governance is credible. A people-centered justice approach asks whether individuals can meaningfully engage with the system, not just whether rules exist. For example, women face documented barriers to accessing justice in virtually every jurisdiction. AI systems trained on biased data can replicate or amplify existing disparities in employment, financial services, healthcare, and criminal justice.

“Institutional agreement rings hollow when billions of people experience governance as remote, technocratic, and unresponsive to their actual lives,” said Alfredo Pizarro of the Permanent Mission of Costa Rica to the UN. “People-centered justice becomes essential.”

AI systems already shape outcomes across employment, financial services, housing, and justice. Entrepreneurs, law schools, courts, and legal services organizations are already building AI-enabled tools that help people navigate legal processes and assert their rights more effectively. Governance design will determine whether these tools help spread access to justice or introduce new barriers.

Private companies play a central role in developing and deploying AI systems. Their products shape economic and social outcomes at scale. For them, trust is not abstract; it is a success metric. “Innovation depends on trust,” explained Iain Levine, formerly of Meta’s Human Rights Policy Team. “Without trust, products will not be adopted.” And trust, in turn, depends on enforceability and equal access to remedy.

AI governance will succeed or fail based on access

As Pizarro also noted, justice provides “normative continuity across technological rupture.” Indeed, these principles already exist within international human rights law and people-centered justice; although they precede the advent of autonomous systems, they provide standards for evaluating discrimination, surveillance, and procedural fairness, and remain durable as new challenges to upholding justice and the rule of law emerge.

People-centered justice was not designed for legal systems addressing AI-related harms, but its outcome-driven orientation remains durable as new justice problems emerge.

The current stage presents an opportunity to align AI governance with access to justice from the outset. Beyond well-drafted rules, we need systems that people can use. And that means that any effective governance requires coordination between policymakers, legal professionals, and the public.


You can find other installments of our Scaling Justice blog series here

Pattern, proof & rights: How AI is reshaping criminal justice
/en-us/posts/ai-in-courts/ai-reshapes-criminal-justice/ (Fri, 10 Apr 2026)

Key insights:

      • AI’s greatest strength in criminal justice is pattern recognition — AI can process vast amounts of data quickly, helping law enforcement and legal professionals detect connections, reduce oversight gaps, and improve consistency across investigations and casework.

      • AI should strengthen justice, not substitute for human judgment — Legal professionals are integral to evaluating AI-generated outputs, especially when decisions affect evidence, warrants, and individuals’ constitutional rights.

      • The most effective model is human/AI collaboration — AI handles scale and speed, while judges, attorneys, and investigators provide the context, accountability, and ethical reasoning needed to protect due process.


The law has always been about patterns — patterns of behavior, patterns of evidence, and patterns of justice. Now, courts and law enforcement can leverage a tool powerful enough to see those patterns at a scale and speed no human mind could match: AI.

At its core, AI works by recognizing patterns. Rather than simply matching keywords, it learns from large amounts of existing text to understand meaning and context and uses that learning to make predictions about what comes next. In the context of law enforcement, that capability is nothing short of transformative.
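
To make that concrete, here is a minimal, illustrative sketch of next-word prediction, the core mechanism described above. This is a toy frequency model, not any vendor's actual system; production models use neural networks trained on vastly more data, but the principle of learning patterns from text and using them to predict what comes next is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the huge text collections real models learn from.
corpus = (
    "the court granted the motion . the court denied the appeal . "
    "the court granted the request ."
).split()

# Learn patterns: for each word, count the words that follow it (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in the training text."""
    return following[word].most_common(1)[0][0]

print(predict_next("court"))  # -> "granted" (seen twice, vs. "denied" once)
print(predict_next("the"))    # -> "court" (the most common word after "the")
```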

These themes were front and center in a recent webinar from a joint initiative of the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI). The webinar brought together voices from across the justice system, and what emerged was a clear and consistent message: AI is a powerful ally in the pursuit of justice, but only when paired with the judgment, accountability, and constitutional grounding that human professionals can provide.

AI’s pattern recognition is a gamechanger

“AI is excellent,” said Mark Cheatham, Chief of Police in Acworth, Georgia, during the webinar. “It is better than anyone else in your office at recognizing patterns. No doubt about it. It is the smartest, most capable employee that you have.”

That kind of capability, applied to the demands of modern policing, investigation, and prosecution, is a genuine gamechanger. However, the promise of AI extends far beyond the patrol car or the precinct. Indeed, it cascades through the entire arc of justice — from the moment a crime is detected all the way through prosecution and adjudication.

Each step in that chain represents not just an operational and efficiency upgrade, but an opportunity to make the system more fair, more consistent, and more protective of the rights of everyone involved.

Webinar participants considered the practical implications. For example, AI can identify and mitigate human error in decision-making, promoting greater consistency and fairness in outcomes across cases. And by automating labor-intensive tasks such as reviewing body camera footage, AI frees prosecutors and defense attorneys to focus on other aspects of their work that demand professional judgment and legal expertise.

In legal education, the potential of AI is similarly recognized. Hon. Eric DuBois of the 9th Judicial Circuit Court in Florida emphasizes its role as a tool rather than a substitute. “I encourage the law students to use AI as a starting point,” Judge DuBois explained. “But it’s not going to replace us. You’ve got to put the work in, you’ve got to put the effort in.”


AI can never replace the detective, the prosecutor, the judge, or the defense attorney; however, it can work alongside them, handling the volume and velocity of data that no human team could process alone.


Judge DuBois’ perspective aligns with broader judicial sentiment on the responsible integration of AI. In fact, one consistent theme across the webinar was the necessity of maintaining human oversight. The role of the legal professional remains central, participants stressed, because that oversight ensures accuracy, accountability, and ethical judgment. The appropriate placement of human expertise within AI-assisted processes is essential to ensuring a fair and effective legal system.

That balance between leveraging AI and preserving human judgment is not just good practice; rather, it’s a cornerstone of justice. While Chief Cheatham praises AI’s pattern recognition, he also cautions that it “will call in sick, frequently and unexpectedly.” In other words, AI is a powerful but imperfect tool, and professionals who rely on it must always be prepared to intervene when it falls short. Moreover, the technology is improving extremely rapidly, and the models we are using today will likely be the worst models we ever use.

Naturally, that readiness is especially critical when individuals’ rights are on the line. “A human cannot just rely on that machine,” said Joyce King, Deputy State’s Attorney for Frederick County in Maryland. “You need a warrant to open that cyber tip separately, to get human eyes on that for confirmation, that we cannot rely on the machine.” Clearly, as the webinar explained, AI does not replace constitutional obligations; rather, it operates within them, and the professionals who use AI are still the guardians of due process.

The human/AI partnership is where justice is served

Bob Rhodes, Chief Technology Officer for Thomson Reuters Special Services (TRSS), echoed that sentiment with a principle that cuts across every application of AI in the justice system. “The number one thing… is a human should always be in the loop to verify what the systems are giving them,” Rhodes said.

This is not a limitation of AI; instead, it’s the design of a system that works. AI identifies the patterns, and trained, experienced professionals evaluate them, act on them, and are accountable for them.

That partnership is where the real opportunity lives. AI can never replace the detective, the prosecutor, the judge, or the defense attorney. However, it can work alongside them, handling the volume and velocity of data that no human team could process alone. So that means the humans in the room can focus on what they do best: applying judgment, upholding the law, and protecting an individual’s rights.

For judicial and law enforcement professionals, this is the moment to lean in. The patterns are there, the technology to read them is here, and the opportunity to use both in service of rights — not against them — has never been greater.


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.

Scaling Justice: Unlocking the $3.3 trillion ethical capital market
/en-us/posts/ai-in-courts/scaling-justice-ethical-capital/ (Mon, 23 Mar 2026)

Key takeaways:

      • An additional funding stream, not a replacement — Ethical capital has the potential to supplement existing access to justice infrastructure by introducing a justice finance mechanism that can fund cases with measurable social and environmental impact.

      • Technology as trust infrastructure — AI and smart technologies can provide the governance scaffolding required for ethical capital to flow at scale, including standardizing assessment, impact measurement, and oversight.

      • Capital is not scarce; allocation is — The true bottleneck is not the availability of funds; rather it’s the disciplined, investment-grade legal judgment required to evaluate risk, ensure compliance, and measure impact in a way that makes justice outcomes investable.


Kayee Cheung & Melina Gisler, Co-Founders of justice finance platform Edenreach, are co-authors of this blog post

Access to justice is typically framed as a resource problem — the idea that there are too few legal aid lawyers, too little philanthropic funding, and too many people navigating civil disputes alone. The result is that the majority of individuals who face civil legal challenges do so without representation, often because they cannot afford it.

Yet this crisis exists alongside a striking paradox. While 5.1 billion people worldwide face unmet justice needs, an estimated $3.3 trillion in mission-aligned capital — held in donor-advised funds, philanthropic portfolios, private foundations, and impact investment vehicles — remains largely disconnected from solutions.

Unlocking even a fraction of this capital could introduce a meaningful parallel funding stream — one that’s capable of supporting cases with potential impacts that currently fall outside traditional funding models. Rather than depending on charity or contingency, what if justice also attracted disciplined, impact-aligned investment in cases themselves, in addition to funding that could support technology?

Recent efforts have expanded investor awareness of justice-related innovation. Programs like those from Village Capital have helped demystify the sector and catalyze funding for technology serving justice-impacted communities. Justice tech, or impact-driven direct-to-consumer legal tech, has grown exponentially in the last few years along with increased investor interest and user awareness.

Litigation finance has also grown, but its structure is narrowly optimized for high-value commercial claims with a strong financial upside. Traditional funders typically seek 5- to 10-times returns, prioritizing large corporate disputes and excluding cases with significant social value but lower monetary recovery, such as consumer protection claims, housing code enforcement, environmental accountability, or systemic health negligence.

Justice finance offers a different approach. By channeling capital from the impact investment market toward the justice system and aligning legal case funding with established impact measurement frameworks, it reframes certain categories of legal action as dual-return opportunities, delivering both financial and social returns.

This is not philanthropy repackaged. It’s the idea that measurable justice outcomes can form the basis of an investable asset class, if they’re properly structured, governed, and evaluated.

Technology as trust infrastructure

While mission-aligned capital is widely available, the ability to evaluate legal matters with the necessary rigor remains limited. Responsibly allocating funds to legal matters requires complex expertise, including legal merit assessment, financial risk modeling, regulatory compliance, and impact evaluation. Cases must be considered not only for their likelihood of success and recovery potential, but also for measurable social or environmental outcomes.

Today, that assessment is largely manual and capacity-bound by small teams. The result is a structural bottleneck as capital waits on scalable, trusted evaluation and allocation.

Without a way to standardize and responsibly scale analysis of the double bottom line, however, justice funding remains bespoke, even when resources are available.

AI-enabled systems can play a transformative role by standardizing assessment frameworks and supporting disciplined capital allocation at scale. By encoding assessment criteria, decision pathways, and compliance safeguards and then mapping case characteristics to impact metrics, technology can enable consistency and allow legal and financial experts to evaluate exponentially more matters without lowering their standards.
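
As an illustration of what encoding assessment criteria could look like in code, the sketch below screens a hypothetical case against weighted legal, financial, and impact criteria. Every field name, weight, and threshold here is invented for illustration; a real platform's rubric would be far richer, and, as noted above, human experts would remain responsible for final determinations.

```python
from dataclasses import dataclass

@dataclass
class CaseProfile:
    # Hypothetical reviewer inputs; every field here is illustrative.
    legal_merit: float        # 0-1: assessed likelihood of success
    recovery_ratio: float     # 0-1: expected recovery relative to funding need
    impact_alignment: float   # 0-1: fit with the chosen impact framework
    compliance_cleared: bool  # regulatory and conflict checks passed

# Illustrative weights; a real rubric would be set by legal and financial experts.
WEIGHTS = {"legal_merit": 0.4, "recovery_ratio": 0.3, "impact_alignment": 0.3}
REVIEW_THRESHOLD = 0.65  # cases above this go to human experts for final review

def screen(case: CaseProfile) -> str:
    """Apply the encoded criteria consistently; humans make the final call."""
    if not case.compliance_cleared:
        return "decline: failed compliance safeguards"
    score = (WEIGHTS["legal_merit"] * case.legal_merit
             + WEIGHTS["recovery_ratio"] * case.recovery_ratio
             + WEIGHTS["impact_alignment"] * case.impact_alignment)
    return ("advance to expert review" if score >= REVIEW_THRESHOLD
            else "hold: below review threshold")

print(screen(CaseProfile(0.8, 0.5, 0.9, True)))  # -> advance to expert review
```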

And by integrating legal assessment, financial modeling, and impact alignment within a governed tech framework, justice finance platforms like Edenreach can function as the connective tissue. Through the platform, impact metrics are applied consistently while human experts remain responsible for final determinations, thereby reducing friction, increasing transparency, and supporting auditability.

When incentives align

It’s no coincidence that many of the leaders exploring justice finance models are women. Globally, women experience legal problems at disproportionately higher rates than men yet are less likely to obtain formal assistance. Women also control significant pools of global wealth. Indeed, 75% of women believe investing responsibly is more important than returns alone, and female investors are almost twice as likely as their male counterparts to prioritize environmental, social, and corporate governance (ESG) factors when making investment decisions.

When those most affected by systemic barriers also shape capital allocation decisions, structural change becomes more feasible. Despite facing steep barriers in legal tech funding (just 2% goes to female founders), women represent a markedly higher share of founders in access-to-justice legal tech, compared to just 13.8% across legal tech overall.

This alignment between lived experience, innovation leadership, and capital stewardship creates an opportunity to reconfigure incentives in favor of meaningful change.

Expanding funding and impact

Justice financing will not resolve the justice gap on its own. Mission-focused tools for self-represented parties, legal aid, and court reform remain essential components of a functioning justice ecosystem. However, ethical capital represents an additional structural layer that can expand the range of cases and remedies that receive financial support.

Impact orientation can accommodate longer time horizons, alternative dispute resolution pathways, and remedies that extend beyond monetary damages. In certain matters, particularly those involving environmental harm, systemic consumer violations, or community-wide injustice, capital structured around impact metrics may identify and enable solutions that traditional litigation finance models do not prioritize.

For example, capital aligned with defined impact frameworks may support outcomes that include remediation programs, compliance reforms, or community investments alongside financial recovery. These approaches can create durable benefits that outlast a single judgment or settlement.

Of course, solving deep-rooted inequities and legal system complexity requires more than new tools and new investors. It requires designing capital pathways that are repeatable, accountable, and aligned with measurable public benefit.

Although justice finance may not be a fit for every case and has yet to see widespread uptake, it does have the potential to reach cases that currently fall through the cracks — cases that have merit, despite falling outside traditional litigation finance models and legal aid or impact litigation eligibility criteria.


You can find other installments of our Scaling Justice blog series here

Scaling Justice: Easing the UK’s employee rights crisis
/en-us/posts/ai-in-courts/scaling-justice-uk-employee-rights-crisis/ (Tue, 24 Feb 2026)

Key takeaways:

      • An emerging employment tribunal crisis — The UK’s employment tribunal system is facing unprecedented backlogs, long wait times, and unaffordable legal representation, leaving many workers and small businesses unable to effectively resolve workplace disputes.

      • Process-oriented barriers to justice — Most claims are dismissed not because they lack merit, but due to claimants disengaging from a slow and complex process, with legal costs often exceeding the value of claims and legal aid unable to meet rising demand.

      • A potential role for legal technology — Mission-driven legal tech platforms are emerging to provide affordable, scalable support and help claimants stay engaged by offering a practical solution to improve access to justice.


When a worker in the United Kingdom is unfairly dismissed or denied wages, their path to resolution runs through employment tribunals, a specialized court system separate from civil courts. As in the United States, many workers and small businesses cannot afford legal representation and must navigate the process on their own.

With backlogs at all-time highs and affordable legal services at all-time lows, this system is coming under increasing pressure. Fortunately, mission-driven technology and data analysis are emerging to level the playing field and increase access to justice.

Current state by the numbers

According to an analysis of the UK Ministry of Justice’s Tribunal Statistics Quarterly and other data sources,* in the second quarter of 2025, employment tribunals resolved just 45% of incoming claims, adding 18,000 cases to the backlog in that quarter alone. In the past year, the open caseload has surged by 244%. This pressure is set to intensify as the incoming Employment Rights Act 2025 — the UK’s most significant overhaul of workplace protections in decades — extends protections to six million more workers in 2027.
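
The arithmetic behind those figures can be sketched out explicitly. Only the 45% resolution rate, the 18,000-case quarterly growth, and the government's 15% claims forecast come from the sources cited in this post; the implied intake and the projection below are simplifications that assume resolutions are counted against the same quarter's intake.

```python
# Back-of-the-envelope illustration of the tribunal backlog arithmetic.
resolution_rate = 0.45            # share of incoming claims resolved (Q2 2025)
net_growth_per_quarter = 18_000   # cases added to the backlog in that quarter

# If 45% of incoming claims are resolved, the other 55% accumulate as backlog,
# implying a quarterly intake of roughly 32,700 claims:
implied_intake = net_growth_per_quarter / (1 - resolution_rate)
print(f"Implied quarterly intake: ~{implied_intake:,.0f} claims")

# The UK government forecasts ~15% more claims once eligibility expands in 2027.
expanded_intake = implied_intake * 1.15
expanded_growth = expanded_intake * (1 - resolution_rate)
print(f"Net backlog growth per quarter after expansion: ~{expanded_growth:,.0f}")
# Unless resolution capacity rises in step, backlog growth scales with intake.
```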

As the backlog increases, so do wait times. In 2025, the average wait for resolution reached 25 weeks, more than double that of 2024, with some claim types like equal pay and discrimination claims reaching up to 37 weeks. Some more complex cases are reported to have their final hearings scheduled as far out as 2029.

With only 8% of cases reaching a final hearing and the majority resolved through settlement or withdrawal, the growing backlog raises concerns about whether lengthy wait times influence how claimants choose to resolve their cases.

In the UK, a common threshold for legal affordability is a salary of £55,000, meaning around 65% of workers cannot afford legal representation. Legal aid and pro bono services exist to support those in need, but with growing funding constraints and rising demand, these services cannot reach nearly two-thirds of claimants.


You can find more insights about how courts are managing the impact of advanced technology from our Scaling Justice series here


Tribunal awards are largely calculated from salary. This often results in a claim’s value being lower than the cost of legal representation to pursue it. In a typical hospitality case, for example, a worker owed £1,500 in unpaid wages (equivalent to 3½ weeks of pay) has a 92% chance of representing themselves and will wait on average six months for resolution — without pay owed, legal support, or outcome certainty.

The cost, both in time and resources, also falls on employers. In lower-margin industries such as hospitality, default judgments, in which an employer does not engage with proceedings, can reach as high as 37%, compared with a national average of around 6%. For these employers and for smaller businesses more broadly, the cost of legal support may also exceed the value of defending a claim.

With rising costs and growing delays, the risk for both employers and employees is that the system becomes inaccessible, leading to outcomes shaped by who can afford to sustain the process rather than by the strength of each case.

Where justice tech fits

The conventional assumption is that self-represented claimants are at a significant disadvantage when they go to court; yet the data is more nuanced. Self-represented claimants who reach a hearing prevail 44% of the time, compared to 52% for those with legal representation — a gap of just eight percentage points.

The greater risk is not losing at hearing but never actually reaching one. Analysis of more than 2,700 struck-out, or dismissed, cases by employment rights platform Yerty found that the majority were dismissed not for lack of merit, but because claimants stopped engaging with the process. Only 6% were struck out for having no reasonable prospect of success. This suggests that the primary barrier may not be the absence of legal representation, but the ability to sustain engagement with a slow, complex, and often opaque process.

Increasing numbers of UK workers turning to AI tools like ChatGPT for legal support highlight not only the demand for affordable access but also the risks of general-purpose tools being used in legal contexts. Fabricated case law in tribunal submissions, for example, harms users and adds further pressure to an already overstretched system.


The conventional assumption is that self-represented claimants are at a significant disadvantage when they go to court; yet the data is more nuanced.


A new generation of legal technology platforms is emerging to fill this gap, with tools purpose-built for the specific circumstances of employment law. Yerty and Valla, among others, offer AI-powered guidance tailored to the UK tribunal process, providing affordable, scalable support previously out of reach for most workers. Government organizations are also moving in this direction. For example, ACAS, in its recent five-year strategy, committed to exploring new digital services that offer faster, more accessible support.

Technology alone cannot address underfunding, judicial capacity, or fundamental power imbalances. However, if the majority of dismissed claims stem from disengagement rather than weak cases, and self-represented claimants prevail at comparable rates to those with lawyers, then the answer isn’t more lawyers — it’s better support upstream. Mission-driven legal technology can provide consistent, scalable guidance that helps both parties manage the process and avoid falling through the cracks.

The UK government’s own assessment of the Employment Rights Bill forecasts a 15% increase in claims by 2027 due to expanded eligibility. As noted above, the system is already under significant pressure before these reforms take effect, and traditional responses — more judges, more funding — too often take years to deliver.

While not a complete answer, justice tech can help address a real, measurable problem, that of keeping people engaged in a process that too often disengages them. For a hospitality worker owed back pay, a healthcare worker facing unfair dismissal, or a retail employee navigating a discrimination claim alone, that support could mean the difference between a case heard and one abandoned — and justice delayed or justice denied.


*Sources: Ministry of Justice Tribunal Statistics Quarterly (July-September 2025); Yerty analysis of 2,721 struck-out tribunal decisions and 8,761 case outcomes; ACAS Strategy 2025-2030; 2024 UK Judicial Attitude Survey, UCL Judicial Institute / UK Judiciary, February 2025.

When courts meet GenAI: Guiding self-represented litigants through the AI maze
/en-us/posts/ai-in-courts/guiding-self-represented-litigants/ (Thu, 19 Feb 2026)

Key insights:

      • Considering courts’ approach — Although many courts do not interact with litigants prior to filings, courts can explore how to help court staff discuss AI use with litigants.

      • Risk of generic AI tools — AI use in legal settings can’t be simply categorized as safe or risky; jurisdiction, timing, and procedure are vital factors, making generic AI tools unreliable for court-specific needs.

      • Specialty AI tools require testing — Purpose-built court AI tools offer a safer alternative for litigants, yet these require development and extensive testing.


Self-represented litigants have always pieced together legal help from whatever sources they can access. Now that AI is part of that mix, courts are working to help people use this advanced technology responsibly without implying an endorsement of any particular tool or even the use of AI.

Many litigants cannot afford an attorney; others may distrust the representation they have or may not know where to begin. In any case, people need a meaningful way to interact with the legal system. Used carefully and responsibly, AI can support access to justice by helping self-represented litigants understand their options, organize information, and draft documents, while still requiring litigants to verify their information and consult official court rules and resources.

These issues were discussed in a recent webinar. The panel explored the potential benefits of AI for access to justice and the operational challenges of integrating AI into public-facing guidance for litigants.

The problem with “Just ask AI”

Angela Tripp of the Legal Services Corporation noted that people handling legal matters on their own have long relied on a mix of resources, “some of which were designed for that purpose, and some of which were not.” AI is simply a new tool in that environment, she added. The primary challenge is that court processes are rule-based and time-sensitive, and a mistake can mean missing a deadline, submitting the wrong document, or misunderstanding a requirement that affects the case.

Access to justice also requires more than just access to information in general. Court users need information that is relevant, complete, accurate, and up to date. Generic AI systems, such as most public-facing tools, are trained on broad internet text and may not reliably deliver that level of specificity for a particular court, case type, or stage of a proceeding. Jurisdiction, timing, and procedure all matter. Unfortunately, AI can omit key steps or emphasize the wrong issues, and self-represented litigants may not have the legal experience to recognize what is missing.

At the same time, AI offers several potential benefits to self-represented litigants. It can explain concepts in plain language, help users structure a narrative, and produce a first draft faster than many people can on their own. The challenge is aligning those strengths with the precision that court processes demand.

A strategic pivot: from teaching litigants to equipping staff

In the webinar, Stacey Marz, Administrative Director of the Alaska Court System, described her team’s early efforts to give self-represented litigants clear guidance about safer and riskier uses of AI, including examples of how to properly prompt generative AI queries.

The team tried to create traffic-light categories that would simplify decision-making; however, they found this approach very challenging despite several drafts of the guidance. AI use can shift from low-risk to high-risk depending on context, and it was hard to provide examples without sounding as though the court was endorsing a tool or sending people down a path for which the court could not guarantee results.

The group ultimately shifted to a more practical approach — training the people who already help litigants. The new guidance targets public-facing staff such as clerks, librarians, and self-help center workers. Instead of teaching litigants how to prompt AI, it equips staff to have informed, consistent conversations when litigants bring AI-generated drafts or AI-based questions to the counter.

The framework emphasizes acknowledgment without endorsement. It suggests language such as:

“Many people are exploring AI tools right now. I’m happy to talk with you about how they may or may not fit with court requirements.”

From there, staff can explain why court filings require extra caution and direct users to court-specific resources.

This approach also assumes good faith. A flawed filing is often a sincere attempt to comply, and a litigant may not realize that an AI output is incomplete or incorrect.

Purpose-built tools take time

The webinar also discussed how courts are exploring purpose-built AI tools, including judicial chatbots designed around court procedures and grounded in verified information. Done well, these tools can reduce problems common with generic AI systems, such as jurisdiction mismatch, outdated requirements, and fabricated or hallucinated citations.
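
A simplified sketch of that grounding approach appears below. The idea is that the system answers only from vetted, court-specific content and declines when nothing relevant matches, rather than letting a general-purpose model guess. The knowledge-base entries and keyword matching here are toy stand-ins for the curated content and retrieval pipelines real court projects build and test.

```python
# Toy illustration of "grounded" answering: respond only from vetted content.
# The entries and the keyword matching are stand-ins for a real court's
# curated knowledge base and retrieval system.

VETTED_KNOWLEDGE = {
    "filing fee waiver": "Ask the clerk for the court's fee waiver form and "
                         "file it with your initial documents.",
    "response deadline": "Deadlines vary by case type; check the case-specific "
                         "schedule issued by this court.",
}

def grounded_answer(question: str) -> str:
    """Answer only when a vetted entry matches; otherwise decline and refer."""
    q = question.lower()
    for topic, guidance in VETTED_KNOWLEDGE.items():
        if all(word in q for word in topic.split()):
            return f"{guidance} (Source: vetted court self-help content.)"
    # No verified match: decline rather than guess, avoiding fabricated answers.
    return ("I can't answer that from verified court information. "
            "Please contact the self-help center.")

print(grounded_answer("How do I get a filing fee waiver?"))
print(grounded_answer("What will the judge decide in my case?"))
```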

However, building reliable court-facing AI demands significant time and testing. Marz shared Alaska’s experience, noting that what the team expected to take three months took more than a year because of extensive refinement and evaluation. The reason is straightforward: Court guidance must be highly accurate, and errors can materially harm someone’s legal interests. In fact, even after careful testing, Alaska still included cautionary language, recognizing that no system can guarantee perfect answers in every situation.

The path forward

Legal Services’ Tripp highlighted a central risk: Modern AI tools can be clear, confident, and easy to trust, which can lead people to over-rely on them. And courts have to recognize this balance. Courts are not trying to prevent AI use; rather, many are working toward realistic norms that treat AI as a drafting and organizing aid but require litigants to verify claims against official court sources and seek human support when possible.

Marz also emphasized that courts should generally assume filings reflect a litigant’s best effort, including in those cases in which AI contributed to confusion. The goal is education and correction rather than punishment, especially for people navigating complex processes without representation.

Some observers describe this moment as an early AOL phase of AI, akin to the very early days of the World Wide Web — widely used, evolving quickly, and uneven in its reliability. That reality makes clear guidance and consistent messaging more important, not less.

This shift among courts from teaching litigants to use AI to teaching court staff and other helpers how to talk to litigants about AI reflects a practical effort on the part of courts to reduce the risk of harm while expanding access to understandable information.

As is becoming clearer every day, AI can make legal processes feel more navigable by helping self-represented litigants draft, summarize, and prepare. For courts, realizing that value requires clear guardrails, court-specific verification, and careful implementation, especially when a missed detail can change the outcome of a case.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here

How AI-powered access to justice is impacting unauthorized practice of law regulations
/en-us/posts/government/ai-impacts-unauthorized-practice-of-law/ (Mon, 02 Feb 2026)

Key insights:

      • Courts and the legal profession need to show leadership — Given their specialized knowledge of the needs of litigants and of courts, courts need to take the lead in determining definitions of the unauthorized practice of law.

      • 3 paths forward to workable regulatory solutions — Recent discussions and research around this subject offered three paths toward modernizing UPL definitions.

      • Uncertainty harms users and innovation — Fear of UPL can drive self-censorship and market exits, even as litigants continue to use publicly available GenAI tools.


Today, many Americans experience legal issues but lack proper access to legal representation. At the same time, AI tools capable of providing legal information are rapidly evolving and already in widespread use. Between these two facts lies a critical definitional problem that courts and state bars must urgently address: How to define the unauthorized practice of law (UPL) in a way that doesn’t further curtail access to justice.

This discussion is not theoretical. It directly determines whether AI-based legal services can operate, how they should be regulated, and ultimately whether AI can help unrepresented or self-represented litigants gain meaningful access to justice. This issue was explored in more depth during a recent webinar from a joint initiative of the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

The need for clear definitions

During the webinar, Alaska Supreme Court Administrative Director Stacey Marz noted that “there is no uniform definition of the practice of law” and that UPL regulations represent “a real varied continuum of scope and clarity.” This variation makes compliance challenging for technology providers, especially as they navigate 50 different state standards.

UPL generally occurs when someone “not licensed as an attorney attempts to represent or perform legal work on behalf of another person,” explained Cathy Cunningham, Senior Specialist Legal Editor at Thomson Reuters Practical Law.

Marz added that such legal advice typically involves “applying the law, rules, principles, and processes to specific facts and circumstances of that individual client — and then recommending a course of action.”

The challenge, however, is that AI can appear to do exactly this, yet the regulatory framework remains unclear about whether and how this should be permitted and how consumers can be protected.

3 paths forward

During the recent webinar, panelists discussed several different approaches to UPL regulation and outlined three paths that state courts could take:

Path 1: Explicitly enabling tools with regulatory framework — UPL statutes can be revisited to explicitly allow purpose-built AI legal tools to operate without threat of UPL enforcement, provided they meet certain requirements. Prof. Dyane O’Leary, Director of Legal Innovation & Technology at Suffolk University, emphasized that consumer-facing AI legal tools are already being used for tailored legal advice, arguing that some oversight is better than “just letting these tools continue to operate and hoping consumers aren’t harmed by them.”

Path 2: Creating regulatory sandboxes — Courts could establish temporary experimental zones in which AI legal service providers can operate under controlled conditions while regulators gather data about efficacy and safety through feedback and research, with an eye toward informing future regulation reform.

Path 3: Narrowing UPL to human conduct — Clarifying that existing UPL rules apply only to humans who hold themselves out as attorneys in tribunals or courtrooms, or who create legal documents under the guise of being a human attorney, would effectively leave AI-powered legal tools clearly outside UPL restrictions and open up a “new pocket of the free market” for consumers.

Utah Courts Self-Help Center Director Nathanael Player referenced Utah Supreme Court Standing Order Number 15, which established their regulatory sandbox using a fundamentally different standard: Not whether services match what lawyers provide, but rather “is this better than the absolute nothing that people currently have available to them?”

Prof. O’Leary reframed the comparison itself, suggesting that instead of comparing consumers who use AI tools to consumers with an attorney, the framework should be “consumers that use legal AI tools, and maybe consumers that otherwise have no support whatsoever.”

The personhood puzzle

“AI, at this time, does not have legal personhood status,” said Practical Law’s Cunningham. “So, AI can’t commit unauthorized practice of law because AI is not a person.”

However, Player pushed back on this reasoning, clarifying that “AI does have a corporate personhood. There is a corporation that made the AI, [and] the corporation providing that does have corporate personhood.” He added, however, that “it’s not clear, I don’t think we know whether or not there is… some sort of consequence for the provision of ChatGPT providing legal services.”


You can view the full webinar here


This ambiguity creates what might be called the personhood gap, a zone of legal uncertainty with serious consequences for both innovation and access to justice.

Colin Rule, CEO at online dispute resolution platform ODR.com, explained that “one of the major impacts of UPL is, actually self-censorship.” After receiving a UPL letter from a state bar years ago, he immediately exited that market. This pattern repeats across the legal tech landscape, leaving companies hesitant to innovate.

Rule’s bottom line resonates with anyone trying to build solutions in this space. “As a solution provider, what I want is guidance,” Rule explained. “Clarity is what I need most… that’s my number one priority.”

Moving forward: Clarity over perfection

The legal profession needs to lead on this issue, and that means state bars and state supreme courts must take action now. The tools are already in use, and the question is not whether AI will play a role in legal services, but rather whether that role will be defined by thoughtful regulation or by default.

The solution is for the judiciary to provide clear guidance on what services can be offered, by whom, and under what conditions. To do that, courts must first acknowledge that for most people, the choice is not between an AI tool and a lawyer but between an AI tool and nothing. Given that, states must walk a path that will both encourage innovation and protect consumers.

To this end, legal professionals and courts should experiment with these tools, understand their trajectory as well as their current limitations, and work collaboratively with developers to create frameworks that prioritize consumer protection without stifling innovation that could genuinely expand access to justice.


You can find out more about how courts and legal professionals are dealing with the unauthorized practice of law here

Responsible AI use for courts: Minimizing and managing hallucinations and ensuring veracity
/en-us/posts/ai-in-courts/hallucinations-report-2026/ (Wed, 28 Jan 2026)

Key insights:

      • AI usage in courts needs verifiable reliability — Unlike other fields, errors and hallucinations caused by AI in a court setting can create due-process issues.

      • Skepticism is professional responsibility — Judges’ interrogation of AI sources and accountability concerns are vital guardrails for minimizing these problems.

      • Governance over perfection — Courts and legal professionals should focus on systematic management of AI hallucinations through clear protocols, human oversight, and mandatory verification to ensure veracity.


AI hallucinations have become one of the most urgent and most misunderstood issues in professional work today; and as generative AI (GenAI) moves from an interesting experiment to common usage in many workplace infrastructures, these issues can cause significant problems, especially for courts and the professionals and individuals who use them.

Today, AI can be used in everything from assisted research to guided drafting of documents, court briefs, and even court orders. With the development of tools supported by GenAI and agentic AI, the very infrastructure of professional work has shifted to include these offerings.

Yet, in most business settings, a wrong answer is an inconvenience. It requires minor corrections and has minimal impact. In the justice system, a wrong answer can be a due-process problem that strongly underscores the need for courts and legal professionals to ensure that their AI use is verifiably reliable when it counts.

At the same time, the direction of travel is clear: AI adoption isn’t a fad we can simply wait out, and it isn’t inherently at odds with high-stakes decision-making. Used well, these tools can reduce administrative burden, speed up access to relevant information, and help court professionals navigate large volumes of material more efficiently. The real question is not whether courts will encounter AI in their workflows, but how they will define responsible use, especially in moments in which accuracy isn’t a feature, it’s the foundation.


“Whether you are a judge [or] an attorney, credibility is everything, particularly when you come before the court.”

— Justice Tanya R. Kennedy, Associate Justice of the Appellate Division, First Judicial Department of New York


To examine these issues more deeply, the Thomson Reuters Institute has published a new report, which frames hallucinations not as a sensationalistic gotcha, but as a practical risk that must be managed with policy, process, and professional judgment. The report also features valuable insight on this subject from judges and court stakeholders who today are evaluating AI in the real operating environment of legal proceedings, courtroom expectations, and the daily administration of justice.

This perspective is essential. Technical teams can explain how models generate language and why they sometimes produce confident-sounding errors. However, judges and court staff can explain something equally important — what accuracy actually means in practice. In courts, accuracy isn’t just about getting the gist right; rather, it’s about precise citations, faithful characterization of the record, correct procedural posture, and language that withstands scrutiny. As the report points out, relied-upon hallucinated information isn’t merely bad output; it can distort justice.

Managing AI as professional responsibility

Crucially, the report reflects that judicial skepticism about AI is not simple technophobia — it’s professional responsibility. Judges are trained to interrogate sources, weigh credibility, and understand the downstream consequences of errors. Judges may ask, What is the provenance of this information? Can I reproduce it independently? And who is accountable if it’s wrong? These questions aren’t barriers to innovation; indeed, they are the guardrails that this innovation requires.

What emerges is a pragmatic middle ground that embraces the upside of AI use in courts while treating hallucinations as a predictable occurrence that can be managed systematically. Rather than concluding AI hallucinates, therefore AI can’t be used, the more workable conclusion is AI can hallucinate, therefore AI outputs must be designed, handled, and verified accordingly, likely with other advanced tech tools. As the report points out, courts don’t need a perfect AI; rather, they need repeatable protocols that keep human decision-makers in control and keep the record clean.
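
One concrete shape such a protocol can take is a verification gate: no AI-cited authority is relied on until it has been checked against a trusted source, with a human resolving every flag. The sketch below is illustrative only; the citation pattern is greatly simplified, and the verified set is a stand-in for whatever authoritative citator or docket system a court actually uses.

```python
import re

# Stand-in for a trusted citator or docket lookup; a real workflow would query
# an authoritative service rather than a local set.
VERIFIED_CITATIONS = {"410 U.S. 113", "347 U.S. 483"}

# Greatly simplified pattern for U.S. Reports citations; real formats vary widely.
CITE_PATTERN = re.compile(r"\b\d{1,4} U\.S\. \d{1,4}\b")

def verification_gate(ai_draft: str) -> list:
    """Return citations that failed verification; a human resolves every flag."""
    found = CITE_PATTERN.findall(ai_draft)
    return [cite for cite in found if cite not in VERIFIED_CITATIONS]

draft = ("Compare 347 U.S. 483 with the holding in 999 U.S. 999, "
         "which controls here.")
for cite in verification_gate(draft):
    print(f"UNVERIFIED: {cite}. Confirm independently before relying on it.")
```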

As the report ultimately demonstrates, managing hallucinations in courts isn’t about chasing perfection, it’s about protecting veracity. It’s about using the right advanced tech tools to build workflows in which the technology consistently supports the truth-finding process instead of quietly eroding it. And it’s about recognizing that in the legal system, responsibility doesn’t disappear when a new tool arrives — it becomes even more important to ensure the new tool doesn’t erode that either.


You can download a full copy of the Thomson Reuters Institute’s report here

Scaling Justice: How technology is reshaping support for self-represented litigants
/en-us/posts/ai-in-courts/scaling-justice-technology-self-represented-litigants/ (Fri, 23 Jan 2026)

Key takeaways:

      • From scarcity to abundance — Technology has shifted the challenge in access to justice from scarcity of legal help to issues of accuracy, governance, and effective support. AI and digital tools now provide abundant legal information to self-represented litigants, but they raise new questions about reliability, oversight, and alignment with human needs.

      • The necessity of human-in-the-loop — Human involvement remains essential for meaningful resolution. While AI can explain procedures and guide users, real support often requires relational and institutional human guidance, especially for vulnerable populations facing anxiety, low literacy, or systemic bias.

      • One part of a bigger question — Systemic reform and broader approaches are needed beyond technological fixes because technology alone cannot solve deep-rooted inequities or the complexity of the legal system. Efforts should include prevention, alternative dispute resolution, and redesigning systems to prioritize just outcomes and accessibility.


Access to justice has long been framed as a problem of scarcity, with too few legal aid lawyers and insufficient funding forcing systems to be built in triage mode. This framing carried the unspoken assumption that most people navigating civil legal problems would do so without meaningful help, often because their issues were not compelling or lucrative enough to justify legal representation.

This framing no longer holds, however. Legal information, once tightly controlled by legal professionals, publishers, and institutions, is now abundantly available. Large language models, search-based AI systems, and consumer-facing legal tools can explain civil procedure, identify relevant statutes, translate dense legalese into plain language, and generate step-by-step guidance in seconds.

Increasingly, self-represented litigants are actively using these tools, whether courts or legal aid organizations endorse them or not. Katherine Alteneder, principal at Access to Justice Innovation and former Director of the Self-Represented Litigation Network, notes: “This reality cannot be fully controlled, regulated out of existence, or ignored.”

And as Demetrios Karis, HFID and UX instructor at Bentley University, argues: “Withholding today’s AI tools from self-represented litigants is like withholding life-saving medicine because it has potential side effects. These systems can already help people avoid eviction, protect themselves from abuse, keep custody of their children, and understand their rights. Doing nothing is not a neutral choice.”

Thus, the central question is no longer whether technology can help self-represented litigants, but rather how it should be deployed — and with what expectations, safeguards, and institutional responsibilities.

Accuracy, error & tradeoffs

The baseline capabilities of general-purpose AI systems have advanced dramatically in a matter of months. For common use cases that self-represented litigants most likely seek — such as understanding process, identifying next steps, preparing for hearings, and locating authoritative resources — today’s frontier models routinely outperform well-funded legal chatbots developed at significant cost just a year or two ago.


The central question is no longer whether technology can help self-represented litigants, but rather how it should be deployed — and with what expectations, safeguards, and institutional responsibilities.


These performance gains raise important questions about the continued call for extensive customization to deliver basic legal information. However, performance improvements do not eliminate the need for careful design. Tom Martin, CEO and founder of LawDroid (and columnist for this blog), emphasizes that “minor tweaking” is subjective, and that grounding AI tools in high-quality sources, appropriate tone, and clear audience alignment remains essential, particularly when an organization takes responsibility and assumes liability for the tool’s voice and output.

Not surprisingly, few topics in the legal tech community generate more debate than AI accuracy, but it cannot be evaluated in isolation. Human lawyers make mistakes, static self-help materials become outdated, and informal advice from friends, family, or online forums is often wrong. Models should be evaluated against realistic alternatives, especially when the alternative is no help at all.

Off-the-shelf tools now perform surprisingly well at generating plain-language explanations, often drawing on primary law, court websites, and legal aid resources. In limited testing, inaccuracies tend to reflect misunderstandings or overgeneralizations rather than pure fabrication. And while these errors are still serious, they may be easier to detect and correct with review.

Still, caution is key, in part because AI tools tend to tell people what they want to hear in order to keep them on the platform. Claudia Johnson of Western Washington University’s Law, Diversity, and Justice Center asks what an acceptable error rate is when tools are deployed to vulnerable populations and reminds organizations of their duty of care. Mistakes, especially those that are known and uncorrected, can carry legal, ethical, and liability consequences that cannot be ignored.

Knowledge bases are infrastructure, but more is needed

Vetted, purpose-built, and mission-focused solution ecosystems are emerging to fill the gap between infrastructure and problem-solving. The Justice Tech Directory from the Legal Services National Technology Assistance Project (LSNTAP) provides legal aid organizations, courts, and self-help centers with visibility into curated tools that incorporate guardrails, human review, and consumer protection in ways that general-purpose AI platforms do not.

Of course, this infrastructure does not exist in a vacuum; these systems address the real needs of real people. While calls for human-in-the-loop systems are often framed as safeguards against technical failure, some of the most important reasons for human involvement are relational and institutional. Even accurate information frequently fails to resolve legal problems without human support, particularly for people experiencing anxiety, shame, low literacy, or systemic bias within courts.


A human in the loop can improve how self-represented litigants are treated by clerks, judges, and opposing parties. Institutional review models often provide this interaction through pre-filing document clinics, navigator-supported pipelines, and structured AI review workshops that integrate human judgment and augment, rather than replace, human effort.

Abundance and the limits of technology

Information does not automatically produce equity. Technology cannot make up for existing, persistent systemic issues, and several prominent voices caution against treating AI as a workaround for deeper system failures. Richard Schauffler of Principal Justice Solutions notes that the underlying problem with the use of AI in the legal world is that our legal process is overly complicated, mystified in jargon, inefficient, expensive, and deeply unsatisfying in terms of justice and fairness — and using AI to automate that process does not alter this fact.

Without changes at the courthouse level, upstream technological improvements may not translate into just outcomes. Bias, discrimination, and resource constraints cannot be solved by technology alone. Even perfect information from a lawyer does not equal power when structural inequities persist.

Further, abundance fundamentally changes the problem. As Alteneder notes, rather than access, the primary problem now is “governance, trust, filtering, and alignment with human values.” Similar patterns are seen in healthcare, journalism, and education. Without scaffolding, technology often widens gaps, benefiting those with greater capacity to interpret, prioritize, and act. For self-represented litigants, the most valuable support is often not answers but navigation: what matters most now, which paths are realistic, when to escalate, and when legal action may not serve broader life needs.

Focusing solely on court-based self-help misses an opportunity to intervene earlier, especially on behalf of self-represented litigants. AI-enabled tools have the potential to identify upstream legal risk and connect people to mediation, benefits, or social services before disputes harden.


You can find more insights about how courts are managing the impact of advanced technology from our Scaling Justice series here

Between hype and fear: Why I have not issued a standing order on AI /en-us/posts/ai-in-courts/standing-order-on-ai/ Thu, 15 Jan 2026 19:31:57 +0000 https://blogs.thomsonreuters.com/en-us/?p=69072

Key insights:

      • The legal system should avoid both overhyping and over-fearing AI — Instead, it should adopt a balanced approach that emphasizes careful, deliberate engagement and responsible experimentation.

      • Mandatory AI disclosure or certification orders do not necessarily improve the reliability of legal filings — In addition, they run the risk of creating confusion, false assurance, and additional hurdles, especially for smaller law firms and self-represented litigants.

      • Rather than imposing a restrictive order, the author issued guidance — This guidance is designed to promote responsible AI use, focusing on verification and accountability while allowing space for lawyers to engage with AI as a tool for augmentation rather than automation.


The legal system is being pulled in two directions when it comes to AI: On one side is overconfidence, the idea that AI will quickly solve legal work by automating it; and on the other side, fear — the feeling that AI is so risky that the safest response is to restrict it, discourage its use, or fence it off with new rules.

Both reactions are understandable, but neither is getting us where we need to go.

In a recent interview, Erik Brynjolfsson, the Director of the Stanford Digital Economy Lab and a lead voice at the Stanford Institute for Human-Centered AI, explains why both hype and too much skepticism miss the mark.

First, those caught up in the hype are moving too quickly toward automation. Tools work best when they support people, not when they try to stand in for them. Second, skeptics are overreacting to early stumbles. Early failures do not mean AI is a dead end. More often, they mean institutions are still learning how to use it well.

There is a middle ground. It’s not about rushing ahead, and it’s not about slamming the brakes. It’s about careful but deliberate use while testing tools, learning their limits, and moving forward with intention.

That perspective informs my approach.

Standing orders on AI

After well-publicized AI mistakes, it makes sense to look for something concrete that signals seriousness, and disclosure and certification orders do that. They tell the public and the bar that courts are paying attention. However, I don’t think disclosure does the work people hope it does, and I worry it pulls attention away from things that matter much more. I’ll explain.

Disclosure does not make filings more reliable — Knowing whether a lawyer used AI to help draft a filing does not tell me whether that filing is accurate, complete, or well supported. Long before modern AI entered the picture, courts had to guard against overstated arguments, bad citations, and unsupported claims. Knowing which tools were used to prepare a filing did not make those filings or the tools more reliable then, and it does not make them more reliable now.

Certifications and disclosures may offer false assurance — The spotlight is on hallucinations (AI-generated fake cases or citations), but courts already have ways to identify and address those problems. The more concerning risks are quieter: bias, AI over-reliance, or subtle framing that influences how an argument is presented. I’m also extremely concerned about deepfakes, which are much more difficult to detect. Disclosure about AI use in briefs does not address any of those risks, and it may distract us from the far bigger risks. It also creates a false sense that a filing is more careful or reliable than it actually may be.

Additional orders can add confusion — AI standing orders are growing in number, and they take very different approaches. Some require disclosure, some require certifications, some impose limits, and some are outright bans. Definitions vary or are missing altogether. Lawyers can comply, but it takes time and careful reading, and as noted already, it doesn’t necessarily improve the quality of what reaches the court.

Early in my time as a United States Magistrate Judge, I made it a point to seek feedback from the legal community about what made legal practice more difficult than it needed to be. One theme came up repeatedly — keeping track of multiple, overlapping judicial practice standards was tough. In response, I worked with my colleagues to consolidate standards into a single, uniform set. I see a similar risk emerging with AI standing orders. Well-intentioned but divergent approaches can splinter practices and create new hurdles, particularly for smaller law firms and self-represented litigants. I don’t want to issue a standing order that adds another layer of complexity without meaningfully improving the quality of what comes before me.

The rules already cover the landscape — I already have tools to deal with inaccurate or misleading filings. Lawyers are responsible for the work they submit, and Rule 11 doesn’t stop working because AI was involved. If something is wrong or misleading, I already have ways to address it.

Certification or disclosure could be misinterpreted as discouraging AI use, and I worry about who gets left out — When new tools are treated as suspect or off-limits, those with the most resources find ways to keep moving forward. However, smaller firms and individual litigants fall further behind. A system that chills responsible experimentation risks widening access gaps instead of narrowing them. In my view, everyone should be exploring ways to, as Brynjolfsson says, “augment” themselves. So long as we remain accountable for the result, augmentation is how lawyers, judges, and other professionals will retain their value in a legal system that is becoming more AI-integrated every day.

Rather than issue a standing order that limits AI use or requires certification or disclosure, I offer guidance: Check your work, protect confidential information, and take responsibility for what you submit. I published this guidance for those interested in my perspective, but it is deliberately not an order, so as to avoid the concerns described above.

We shouldn’t fear AI — we should shape it

Some warn that AI is coming for the legal profession; however, I’m more optimistic (and perhaps more idealistic).

In my view, the justice system depends on human judgment. Empathy, discretion, humility, moral reasoning, and uncertainty are not bugs in the system; rather, they’re an essential part of the program. If we want to preserve human judgment in the age of AI, we must be involved in how AI is used. And we can’t do that from a distance. We have to engage with AI, understand its limits, and model responsible use.

Used carefully, AI can help judges:

      • organize large records,
      • identify gaps or inconsistencies,
      • spot issues that need a closer look,
      • identify and locate key information,
      • translate legal jargon to help self-represented litigants better understand what is being asked of them, and
      • reduce administrative drag so more of a judge’s time is spent on decision-making.

This kind of use does not replace us; rather, it supports us. It augments us so we can do our work as well as possible, help as many people as possible, and still keep human judgment at the center of everything.

Why this moment matters

The AI conversation in law will remain noisy for a while. Some legal professionals will promise too much. Others will warn against everything. The better path is in the middle — engage, test, verify, and adjust.

As the Newsweek article suggests, this is a watershed moment. Not because AI will decide the future of our institutions, but because we will. The choices we make now will shape what AI does in the justice system, and just as importantly, what it does not do.

We should not be afraid of AI. We should help shape how it is used so it strengthens, rather than replaces, the human judgment at the heart of the legal justice system.


You can find out more about how courts and the legal system are managing AI here

AI literacy: The courtroom’s next essential skillset /en-us/posts/ai-in-courts/ai-literacy-court-skillset/ Fri, 12 Dec 2025 14:04:03 +0000 https://blogs.thomsonreuters.com/en-us/?p=68733

Key insights:

      • AI literacy is role-specific and essential — Courts need to move beyond general AI conversations and focus on concrete, role-based strategies that support AI readiness.

      • Balanced AI adoption is crucial — The goal for courts is not to automate blindly but rather to adopt a balanced, AI-forward mindset.

      • Ongoing education and adaptability are vital — AI literacy requires continuous learning and upskilling that focus on building managers’ comfort and capability to lead their teams.


For today’s court system, AI literacy is quickly becoming a core professional skill, not just a technical curiosity. In the recent webinar AI Literacy for Courts: A New Framework for Role-Specific Education, panelists emphasized that courts need to move from holding abstract conversations around AI to enacting concrete, role-based strategies that support judicial officers and court professionals throughout their AI journey.

The webinar is part of a series from the AI Policy Consortium, a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

The need for AI literacy is great

Courts are being urged to treat AI literacy as a foundational pillar of AI readiness, not as an optional add-on training. AI literacy is “the knowledge, attitudes, and skills needed to effectively interact with, critically evaluate, and responsibly use AI systems,” said a panelist from the NCSC, adding that it cannot be one-size-fits-all. “The important thing to know about the definition of AI literacy is it’s going to be different for every single personnel role.”

Building a serious AI literacy strategy therefore begins with defining what success looks like for each role, and then aligning recruitment, training, and evaluation practices around those expectations.




To support this, policy and security concerns must come before (and alongside) AI use. Webinar panelist Griffin, Chief Human Resources Officer at Los Angeles County Superior Court, described how the court started by clarifying the sandbox for safe AI use. First, the court’s generative AI (GenAI) policy sets parameters, such as prohibiting staff from using court usernames or passwords to create accounts on external AI tools. Only then, after those guardrails were in place, did the training really lean into the technical how-to of writing prompts and experimenting with tools. Policy development and skills development happened in tandem, Griffin explained.

To make space for learning in an already overloaded environment, her team lit a creativity spark with managers first, she said, giving them concrete use cases — such as drafting performance evaluations, coaching documents, and job aids. These managers, in turn, feel motivated to create room for their teams to experiment.

This, Griffin added, is all anchored in a clear, people-centered message from leadership: “We have a lot of work to do, and not enough people to do our work — and so AI is going to help us serve the court users and help us provide access to justice.”




How to make AI “work”

On the webinar, the conversation repeatedly returned to what lawyers and court professionals are actually doing with AI tools today and where they’re getting stuck. Leonard, Founder of Creative Lawyers, noted that despite AI’s rapid advance, many professionals are still at a surprisingly basic stage in how they use it. For example, Leonard said that users tend to treat AI as a one-way question-and-answer box instead of using it as an expertise extractor that asks them targeted questions. To combat this, she suggested that users ask the AI to ask them questions that draw out their expertise.

When thinking about how to interact with AI generally, users should treat it like a smart colleague and ask themselves (and implicitly the AI) these questions (a minimal sketch of this pattern follows the list below):

      • What information would this colleague need from me to do the assignment well?

      • What questions would I want them to ask me?

      • What specific task do I actually want them to execute?

      • What feedback would I give them to make the work product better?
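
For readers who want to operationalize this pattern, here is a minimal sketch of the “expertise extractor” interaction. It assumes the OpenAI Python SDK purely for illustration; the model name, prompts, and task are hypothetical and not drawn from the webinar:

```python
# A minimal sketch of the "expertise extractor" pattern: instruct the model
# to interview the user before drafting, mirroring the "smart colleague"
# questions listed above. Assumes the OpenAI Python SDK (pip install openai);
# the model name and prompt text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You are assisting a court professional with a drafting task. "
    "Before producing any draft, ask the user three to five targeted "
    "questions to extract the context, constraints, and expertise a smart "
    "colleague would need. Only draft once the user has answered."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any capable chat model works
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Help me draft a performance evaluation "
                                    "for a courtroom clerk."},
    ],
)

# The first reply should be the model's clarifying questions, not a draft.
print(response.choices[0].message.content)
```

The design choice is simply to move the model from answering first to asking first, which is what the “extract from their expertise” advice amounts to in practice.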

As the webinar made clear, leadership messaging needs to be explicit. AI is being adopted to augment human work, reduce burnout, and expand access to justice — not to eliminate jobs, particularly in courts that are already understaffed. For example, LA Superior Court has been meeting with unions around its GenAI policy, repeatedly affirming that it is not using AI to replace court staff, Griffin said. Instead, the court demonstrates concrete use cases and shows how AI can offload repetitive tasks, making the remaining work more meaningful.

At the same time, managers themselves often feel unprepared to talk about AI, which is why building their comfort and capability — especially around explaining where the court is going — is becoming a critical managerial competency, panelists noted.

Supporting the journey

To support all of this, the TRI/NCSC AI Policy Consortium has built practical training resources that courts can plug into their own strategies. For example, the consortium offers curated materials mapped to specific roles such as judges, court administrators, court reporters, clerks, and interpreters. Courts can use these resources as targeted supplements when rolling out AI projects to better prepare staff members who are just starting their AI journey.

Complementing this is a sandbox, an environment in which staff can safely experiment with GenAI tools without sending data back to the open internet. This gives judges and staff a place to practice prompt-writing, ask follow-up questions, and give feedback, all while staying inside a controlled environment and within the bounds of most court AI policies.

Looking ahead, the panelists argued that the most durable “future skills” may not be specific technical proficiencies but human capabilities, such as adaptability, creativity, critical thinking, and change leadership. In fact, HR leaders across industries largely agree that they cannot predict exactly which tools or skill sets will dominate in a few years, Griffin said; instead, courts should focus on helping managers craft better prompts, interpret outputs critically, and lead their teams through repeated waves of technological change.

Leonard similarly urged legal organizations to move beyond basic adoption use cases — such as document summarization and email refinement — and start exploring more creative, transformative uses that could redesign legal services and court systems to be more responsive to the public.

Finally, the webinar stressed that AI literacy cannot be a one-and-done initiative. Related guidance published by the NCSC encourages courts to treat AI projects as catalysts for revisiting their overall literacy strategy and HR practices.


You can find out more about the work that NCSC is doing to improve courts here
