State Courts Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/topic/state-courts/ Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Looking beyond the bench at the importance of judicial well-being /en-us/posts/government/beyond-the-bench/ Wed, 15 Apr 2026 14:06:38 +0000

Key insights:

      • Well-being is a professional necessity — Judges experience decision fatigue, emotional stress, and personal biases that can affect their rulings, making mental and physical well-being a judicial duty.

      • Community engagement builds better judgment — Staying connected to the communities they serve helps judges develop empathy, recognize bias, and deliver fairer decisions.

      • Diverse experience strengthens the judiciary — Varied backgrounds and ongoing education in areas like restorative justice make courts more responsive, inclusive, and publicly trusted.


Judges play a unique and essential role in society. They are tasked with interpreting the law, resolving disputes, and upholding justice — often under intense scrutiny and pressure. Their decisions shape lives, influence public policy, and reinforce the rule of law.

Indeed, judicial rulings may be the most visible part of the job, but they are not the only measure of a judge’s effectiveness — or of the judiciary’s overall health.

To truly understand and support a robust legal system, it is vital to look beyond the courtroom and examine the broader context in which judges operate. A judiciary that is fair, empathetic, and resilient depends not only on legal expertise, but also on balance, self-awareness, and active engagement with the communities it serves.

The weight of the robe & the value of connection

Despite the solemnity of the judicial office, judges also carry personal experiences, cognitive biases, and emotional responses into their work. The weight of responsibility in adjudicating complex, often emotionally charged cases can lead to stress, burnout, and decision fatigue. Research suggests that judicial decisions can be influenced by factors such as time of day, caseload volume, and even personal well-being.

When judges prioritize their own well-being through physical health, mental resilience, and time away from the bench, they are better equipped to render fair and consistent decisions. Judicial wellness is not a personal luxury; rather, it is a professional imperative.

Equally important is the role of community engagement. The law does not exist in a vacuum but is shaped by social norms, economic realities, and cultural shifts. Judges who remain isolated from the communities that are affected by their rulings risk losing touch with the lived experiences of the people before them.


Judicial rulings may be the most visible part of the job, but they are not the only measure of a judge’s effectiveness — or of the judiciary’s overall health.


Engagement with the public helps judges better understand how the law impacts and operates in people’s lives. It also builds the empathy and contextual awareness needed for interpreting statutes or imposing sentences.

For example, a judge who volunteers with youth programs or participates in community forums on public safety may develop a more nuanced understanding of cases involving juvenile offenders or policing practices. Similarly, a judge who attends local cultural events or listens to community leaders may be better positioned to recognize implicit biases or systemic inequities that may be inherent in the justice system.

Community involvement also strengthens public trust. When citizens see judges as accessible and engaged, rather than distant or aloof, confidence in the judiciary increases. And these ideas of transparency and connection are key to maintaining citizens’ trust in the courts.

These themes are explored in more depth in the Thomson Reuters Institute’s video series, Beyond the Bench. In one episode, Associate Justice Tanya R. Kennedy shares her experience educating youth, participating in civic organizations, and leading legal reform initiatives. The episode also highlights how service beyond judicial duties enhances judges’ decision-making and strengthens community ties.

Another episode of the series examines the personal and professional challenges faced by judges and attorneys alike. It features a candid interview with Judge Mark Pfiffer, who emphasizes the importance of mindfulness, peer support, and institutional policies that promote mental health and sustainable work practices.

A judiciary that reflects society

The same principle applies at the institutional level. A judiciary is strongest when it reflects the range of experiences and perspectives present in the society it serves.

Beyond individual judges, the judiciary can benefit from diversity and inclusion. A bench that reflects the full spectrum of society is more likely to deliver balanced and equitable justice. But diversity is not just about representation — it’s also about perspective.

Judges who have worked in public defense, civil rights advocacy, or rural legal services bring different insights to the bench than those who have spent their careers in corporate law or prosecution. These varied experiences enrich judicial deliberation and help ensure that decisions are informed by a broad understanding of justice.

Encouraging judges and court personnel to engage in lifelong learning, mentorship, and cross-sector collaboration further strengthens the judiciary. Programs that support judicial education on topics like implicit bias, trauma-informed practices, or restorative justice are essential to modern, responsive courts.

Improving judges’ well-being

The quality of justice depends not only on what happens in the courtroom, of course, but on what happens outside of it. Judges who maintain personal balance, engage with their communities, and remain open to diverse perspectives are better equipped to serve the public good.

Legal professionals, court administrators, and policymakers should support the kinds of initiatives that promote judicial wellness, community outreach, and professional development. By fostering a judiciary that looks beyond the bench, we ensure a justice system that is not only legally sound, but also humane, inclusive, and trusted.

In the end, judges and the justice they mete out are not defined by court rulings alone; they also depend on relationships, context, and public trust. Recognizing that reality is essential to preserving the well-being of the judiciary and the integrity of the law.


The “Beyond the Bench” video series is available online.

Scaling Justice: AI is scaling faster than justice, revealing a dangerous governance gap /en-us/posts/ai-in-courts/scaling-justice-governance-gap/ Mon, 13 Apr 2026 16:57:55 +0000

Key takeaways:

      • AI frameworks need to keep up with implementation — While AI governance frameworks are being developed and enacted globally, their effectiveness depends on enforceable mechanisms within domestic justice systems.

      • Access to justice is essential for trustworthy AI regulation — Rights and protections are only meaningful if individuals can understand, challenge, and seek remedies for AI-driven decisions. Without operational access, governance frameworks risk remaining theoretical.

      • People-centered justice and human rights must anchor AI governance — Embedding human rights standards and ensuring equal access to justice in AI regulation strengthens public trust, accountability, and the credibility of both public institutions and private companies.


AI governance is accelerating across global, national, and local levels. As public investment in AI infrastructure expands, new oversight bodies are emerging to assess safety, risk, and accountability. The global policy conversation has shifted from principles to the implementation of meaningful guardrails and AI governance frameworks, which legislators now are drafting and enacting.

These developments reflect growing recognition that AI systems demand structured oversight and a shift from voluntary safeguards and standards to institutionalized governance. One critical dimension remains underdeveloped, however: how do these frameworks function in practice? Are they enforceable? Do they provide accountability? Do they ensure equal access?

AI governance will not succeed on the strength of international declarations or regulatory design alone; rather, domestic justice systems will determine whether it works. At this intersection, the connection between AI governance and access to justice becomes real.

In early February, leaders across government, the legal sector, international organizations, industry, and civil society convened for an expert discussion. The following reflections attempt to build on that dialogue and its urgency.

From principles to enforcement

Over the past decade, AI governance has evolved from hypothetical ethical guidelines to voluntary commitments, binding regulatory frameworks, and risk-based approaches. Even with these advancements, however, many past attempts to provide structure and governance have been quickly outpaced by the technology and are insufficient without enforcement mechanisms. As Anoush Rima Tatevossian of The Future Society observed: “The judicial community should have a role to play not only in shaping policies, but in how they are implemented.”

Frameworks establish expectations, while courts and dispute resolution mechanisms interpret rules, test rights, evaluate harm, assign responsibility, and determine remedies. If individuals are not empowered to safeguard their rights and cannot access these mechanisms, governance frameworks remain theoretical or are casually ignored.

This challenge reflects a broader structural constraint. Even without AI, legal systems struggle to meet demand. In the United States alone, 92% of people do not receive the help they need in accessing their rights in the justice system. Introducing AI into this environment without strengthening access can risk widening, rather than narrowing, the justice gap.


Please add your voice to Thomson Reuters’ flagship global study exploring how the professional landscape continues to change.


Justice systems serve as the operational core of AI governance. By inserting the rule of law into unregulated areas, they provide the infrastructure that enables accountability by interpreting regulatory provisions in specific cases, assessing whether AI-related harms violate legal standards, allocating responsibility across public and private actors, and providing accessible pathways for redress.

These frameworks also generate critical feedback. Disputes involving AI systems expose gaps in transparency, fairness, and accountability. Legal professionals see where governance frameworks first break down in real-world conditions, often long before policymakers do. As a result, these frameworks function as an early signal of policy effectiveness and rights protections.

Importantly, AI governance does not require entirely new legal foundations. Human rights frameworks already provide standards for legality, non-discrimination, due process, and access to remedy, and these standards apply directly to AI-enabled decision-making. “AI can assist judges but must never replace human judgment, accountability, or due process,” said Kate Fox Principi, Lead on the Administration of Justice at the United Nations (UN) Office of the High Commissioner for Human Rights (OHCHR), during the February panel.

Clearly, rights are only meaningful when individuals can exercise them — this constraint is not conceptual, it’s operational. Systems must be understandable, affordable, and responsive, and institutions should be capable of evaluating complex, technology-enabled disputes.

Trust, markets & accountability

Governance frameworks that do not account for these dynamics risk entrenching inequities rather than mitigating them. An individual’s ability to understand, challenge, and seek a remedy for automated decisions determines whether governance is credible. A people-centered justice approach asks whether individuals can meaningfully engage with the system, not just whether rules exist. For example, women face documented barriers to accessing justice in many jurisdictions, and AI systems trained on biased data can replicate or amplify existing disparities in employment, financial services, healthcare, and criminal justice.

“Institutional agreement rings hollow when billions of people experience governance as remote, technocratic, and unresponsive to their actual lives,” said Alfredo Pizarro of the Permanent Mission of Costa Rica to the UN. “People-centered justice becomes essential.”

AI systems already shape outcomes across employment, financial services, housing, and justice. Entrepreneurs, law schools, courts, and legal services organizations are already building AI-enabled tools that help people navigate legal processes and assert their rights more effectively. Governance design will determine whether these tools help spread access to justice or introduce new barriers.

Private companies play a central role in developing and deploying AI systems. Their products shape economic and social outcomes at scale. For them, trust is not abstract; it is a success metric. “Innovation depends on trust,” explained Iain Levine, formerly of Meta’s Human Rights Policy Team. “Without trust, products will not be adopted.” And trust, in turn, depends on enforceability and equal access to remedy.

AI governance will succeed or fail based on access

As Pizarro also noted, justice provides “normative continuity across technological rupture.” Indeed, these principles already exist within international human rights law and people-centered justice; although they precede the advent of autonomous systems, they provide standards for evaluating discrimination, surveillance, and procedural fairness, and remain durable as new challenges to upholding justice and the rule of law emerge.

People-centered justice was not designed for legal systems addressing AI-related harms, but its outcome-driven orientation remains durable as new justice problems emerge.

The current stage presents an opportunity to align AI governance with access to justice from the outset. Beyond well-drafted rules, we need systems that people can use. And that means that any effective governance requires coordination between policymakers, legal professionals, and the public.


You can find other installments of our Scaling Justice blog series here

The shadow over the bench: Legalweek 2026’s most important session had nothing to do with AI /en-us/posts/government/legalweek-2026-judicial-threats/ Thu, 26 Mar 2026 17:12:25 +0000

Key takeaways:

      • Violence against judges is escalating — Targeted shootings, coordinated harassment campaigns, and threats now routinely follow judges to their homes and families.

      • The rhetoric driving the escalation is coming from the highest levels of government — The absence of any public denunciation from the Department of Justice underscores the source of the problem.

      • Will the violence itself become part of judicial rulings? — The endgame of judicial intimidation isn’t that judges stop ruling, it’s that the threat of violence becomes a silent presence in the deliberation itself.


NEW YORK — Attendees who came to the recent Legalweek conference to talk about AI, agentic workflows, and the business of legal technology also were treated to a session that will likely stay with them, one that had nothing to do with AI.

In that session, four federal judges took the stage; but they were not there to talk about pricing models or AI adoption. They were there to talk about staying alive.

Setting the stage

Jason Wareham, CEO of IPSA Intelligent Systems and a former U.S. Marine Corps judge advocate, introduced the session — a panel of four sitting United States District Court judges — by speaking of how the rule of law once seemed resolute, yet how faith in it has been shaken, year after year. He worked hard to frame his observations as nonpartisan, a matter of institutional fragility rather than political allegiance. It was a generous framing, but one that would not survive the weight of the ensuing discussion.

The Honorable Esther Salas of the District of New Jersey said that the reason she was there has a name. On July 19, 2020, a disgruntled, extremist attorney who had a case before her court arrived at her home during a birthday celebration. He shot and killed her twenty-year-old son, Daniel Anderl. He shot and critically wounded her husband. She has spent the years since on a mission to protect her judicial colleagues from the same fate.

The new normal

Next, the Honorable Kenly Kiya Kato of the Central District of California described what has changed. Judges’ rulings are still based on the Constitution, on precedent, and on the facts; what’s different is the small voice in the back of a judge’s head. That voice, which often comes after a judge has issued a decision likely to draw backlash, asks: What will happen after this? It is now expected, Judge Kato explained, that a high-profile order will bring threats. When two colleagues in her district issued prominent decisions, her first thought was for their safety. That is not how it used to be.

The Honorable Mia Roberts Perez of the Eastern District of Pennsylvania asked how we got here, pointing to language from the highest levels of government: judges called monsters, a U.S. Department of Justice declaring war on rogue judges, and, more recently, politicians bringing justices’ families into the conversation.

Judge Salas pushed even further. She acknowledged the instinct to frame the problem as bipartisan, but said the current moment is not apples to apples. It is apples to watermelons. The spike in threats since 2015, she argued, traces directly to rhetoric from political leaders using language never before deployed against the bench.


The federal judiciary is looking to break annual records for threats [against judges], and there is an absence of any public denunciation from the Attorney General or the DOJ.


The evidence is not abstract, nor are the victims, and the panel walked through it. Judge John Roemer of Wisconsin, zip-tied to a chair and assassinated in his home. Associate Judge Andrew Wilkinson of Maryland shot dead in his driveway while his family was inside. Judge Steven Meyer of Indiana and his wife Kimberly, shot through their own front door after attackers first posed as a food delivery, then returned days later claiming to have found the couple’s dog. Judge Meyer has just undergone his fifth surgery since the attack.

All of these incidents happened at the judges’ homes.

Judge Salas then played a voicemail, one of thousands that federal judges receive. It was less than 30 seconds long, but it did not need to be longer. While names had been redacted, what remained was a torrent of threats and obscenities, graphic, sexual and violent, delivered with the confidence of someone who does not expect consequences. Some judges receive hundreds of these after a single ruling, often from people with no case before them at all.

The shadow over the courts

Throughout the session, there was a presence the panelists circled but rarely named directly. A shadow that shaped every observation about escalating threats, every reference to rhetoric from the top down, every mention of language never before used by political leaders, of action or inaction the likes of which would have been unthinkable just several years ago. The specifics were spoken. The name, largely, was not.

It didn’t have to be.

Judge Kato said that what was perhaps the most disheartening aspect of all this is that these threats are getting worse. The people who know better are not doing better. Indeed, she said her children think about these problems every day. What will happen to mom today? Will someone come to the house? These are questions children should not have to carry. They did not sign up for this, and neither did the judges.

In 2026, Judge Salas noted, the federal judiciary is looking to break annual records for threats. She also noted the absence of any public denunciation from the Attorney General or the DOJ. The silence, she said, says a lot.

Not surprisingly, the implications extend beyond the judges themselves. As Judge Salas noted, if judges have to weigh their safety alongside the law, ordinary people don’t stand a chance. If one party is stronger, better funded, or more willing to threaten, then the scales tip.

That is the endgame of judicial intimidation. It’s not that judges stop ruling, but that the violent and the powerful — indeed, the people least fit to hold the scales — can tilt them at will.

That concern echoed an earlier warning from Judge Karoline Mehalchick of the Middle District of Pennsylvania. Judge Mehalchick said that judicial intimidation feeds on misunderstanding. When the public no longer grasps why judges must be insulated from pressure, or conversely mistakes independence for partisanship, the threat environment becomes easier to justify, easier to ignore, and harder to reverse.


What is perhaps the most disheartening aspect of all this is that these threats are getting worse, and the people who know better are not doing better.


In his 2024 year-end report, U.S. Supreme Court Chief Justice John Roberts identified four threats to judicial independence: violence, intimidation, disinformation, and threats to defy lawfully entered judgments. The panel discussed this report as prophecy fulfilled. Public confidence in the judiciary has plummeted since 2021, and the reasons are complex. The judges insisted they are still doing their jobs the right way, but the violence is spreading anyway.

What survives

Judge Salas asked the audience to watch their thoughts. Are they negative and destructive, or positive and uplifting? Can we start loving more? She ended by sending love and light to everyone in the room.

The judges were visibly emotional on the stage.

The words were beautiful. They were also, in the context of everything that had just been described — the killings, the voicemails, the zip ties, the pizza deliveries masking a threat under a murdered son’s name — resting in a shadow that no amount of love and light could fully dispel.

The room responded with a standing ovation.

Thousands of people came to Legalweek 2026 to talk about the future of legal technology. For one morning, four judges reminded them that none of it matters if the people charged with administering justice cannot do so safely.

So, while the billable hour may survive and the associate will adapt, the harder question, the one that should keep the legal industry awake at night, is whether the bench will hold.


You can find more of our coverage of Legalweek events here

How AI-powered access to justice is impacting unauthorized practice of law regulations /en-us/posts/government/ai-impacts-unauthorized-practice-of-law/ Mon, 02 Feb 2026 17:55:20 +0000

Key insights:

      • Courts and the legal profession need to show leadership — Given their specialized knowledge of the needs of litigants and of court operations, courts need to take the lead in defining the unauthorized practice of law.

      • 3 paths forward to workable regulatory solutions — Recent discussions and research around this subject offered three paths toward modernizing UPL definitions.

      • Uncertainty harms users and innovation — Fear of UPL can drive self-censorship and market exits, even as litigants continue to use publicly available GenAI tools.


Today, many Americans experience legal issues but lack proper access to legal representation. At the same time, AI tools capable of providing legal information are rapidly evolving and already in widespread use. Between these two facts lies a critical definitional problem that courts and state bars must urgently address: How to define the unauthorized practice of law (UPL) in a way that doesn’t further curtail access to justice.

This discussion is not theoretical. It directly determines whether AI-based legal services can operate, how they should be regulated, and ultimately whether AI can help unrepresented or self-represented litigants gain meaningful access to justice. This issue was explored in more depth during a recent webinar from the AI Policy Consortium, a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

The need for clear definitions

During the webinar, Alaska Supreme Court Administrative Director Stacey Marz noted that “there is no uniform definition of the practice of law” and that UPL regulations represent “a real varied continuum of scope and clarity.” This variation makes compliance challenging for technology providers, especially as they navigate 50 different state standards.

UPL generally occurs when someone “not licensed as an attorney attempts to represent or perform legal work on behalf of another person,” explained Cathy Cunningham, Senior Specialist Legal Editor at ¶¶ŇőłÉÄę Practical Law.

Marz added that such legal advice typically involves “applying the law, rules, principles, and processes to specific facts and circumstances of that individual client — and then recommending a course of action.”

The challenge, however, is that AI can appear to do exactly this, yet the regulatory framework remains unclear about whether and how this should be permitted and how consumers can be protected.

3 paths forward

During the recent webinar, panelists discussed several different approaches to UPL regulations, noting that recent research has outlined three approaches that state courts could take:

Path 1: Explicitly enabling tools with regulatory framework — UPL statutes can be revisited to explicitly allow purpose-built AI legal tools to operate without threat of UPL enforcement, provided they meet certain requirements. Prof. Dyane O’Leary, Director of Legal Innovation & Technology at Suffolk University, emphasized that consumer-facing AI legal tools are already being used for tailored legal advice, arguing that some oversight is better than “just letting these tools continue to operate and hoping consumers aren’t harmed by them.”

Path 2: Creating regulatory sandboxes — Courts could establish temporary experimental zones in which AI legal service providers can operate under controlled conditions while regulators gather data about efficacy and safety through feedback and research, with an eye toward informing future regulation reform.

Path 3: Narrowing UPL to human conduct — Clarifying that existing UPL rules apply only to humans who may hold themselves out as attorneys in tribunals or courtrooms or creating legal documents under the guise of being a human attorney, effectively would leave AI-powered legal tools clearly outside UPL restrictions and open up a “new pocket of the free market” for consumers.

Utah Courts Self-Help Center Director Nathanael Player referenced Utah Supreme Court Standing Order Number 15, which established the state’s regulatory sandbox using a fundamentally different standard: not whether services match what lawyers provide, but rather “is this better than the absolute nothing that people currently have available to them?”

Prof. O’Leary reframed the comparison itself, suggesting that instead of comparing consumers who use AI tools to consumers with an attorney, the framework should be “consumers that use legal AI tools, and maybe consumers that otherwise have no support whatsoever.”

The personhood puzzle

“AI, at this time, does not have legal personhood status,” said Practical Law’s Cunningham. “So, AI can’t commit unauthorized practice of law because AI is not a person.”

However, Player pushed back on this reasoning, clarifying that “AI does have a corporate personhood. There is a corporation that made the AI, [and] the corporation providing that does have corporate personhood.” He added, however, that “it’s not clear, I don’t think we know whether or not there is… some sort of consequence for the provision of ChatGPT providing legal services.”


You can view the full webinar here


This ambiguity creates what might be called the personhood gap, a zone of legal uncertainty with serious consequences for both innovation and access to justice.

Colin Rule, CEO at online dispute resolution platform ODR.com, explained that “one of the major impacts of UPL is, actually self-censorship.” After receiving a UPL letter from a state bar years ago, he immediately exited that market. This pattern repeats across the legal tech landscape, leaving companies hesitant to innovate.

Rule’s bottom line resonates with anyone trying to build solutions in this space. “As a solution provider, what I want is guidance,” Rule explained. “Clarity is what I need most… that’s my number one priority.”

Moving forward: Clarity over perfection

The legal profession needs to lead on this issue, and that means state bars and state supreme courts must take action now. The tools are already in use, and the question is not whether AI will play a role in legal services, but rather whether that role will be defined by thoughtful regulation or by default.

The solution is for the judiciary to provide clear guidance on what services can be offered, by whom, and under what conditions. To do that, courts must first acknowledge that for most people, the choice is not between an AI tool and a lawyer but between an AI tool and nothing. Given that, states must walk a path that both encourages innovation and protects consumers.

To this end, legal professionals and courts should experiment with these tools, understand their trajectory as well as their current limitations, and work collaboratively with developers to create frameworks that prioritize consumer protection without stifling innovation that could genuinely expand access to justice.


You can find out more about how courts and legal professionals are dealing with the unauthorized practice of law here

Responsible AI use for courts: Minimizing and managing hallucinations and ensuring veracity /en-us/posts/ai-in-courts/hallucinations-report-2026/ Wed, 28 Jan 2026 10:51:10 +0000

Key insights:

      • AI usage in courts needs verifiable reliability — Unlike in other fields, errors and hallucinations caused by AI in a court setting can create due-process issues.

      • Skepticism is professional responsibility — Judges’ interrogation of AI sources and accountability concerns are vital guardrails to minimizing these problems.

      • Governance over perfection — Courts and legal professionals should focus on systematic management of AI hallucinations through clear protocols, human oversight, and mandatory verification to ensure veracity.


AI hallucinations have become one of the most urgent and most misunderstood issues in professional work today; and as generative AI (GenAI) moves from an interesting experiment to common usage in many workplace infrastructures, these issues can cause significant problems, especially for courts and the professionals and individuals who use them.

Today, AI can be used in everything from assisted research to guided drafting of documents, court briefs, and even court orders. With the development of tools supported by GenAI and agentic AI, the very infrastructure of professional work has shifted to include these offerings.

Yet, in most business settings, a wrong answer is an inconvenience. It requires minor corrections and has minimal impact. In the justice system, a wrong answer can be a due-process problem that strongly underscores the need for courts and legal professionals to ensure that their AI use is verifiably reliable when it counts.

At the same time, the direction of travel is clear: AI adoption isn’t a fad we can simply wait out, and it isn’t inherently at odds with high-stakes decision-making. Used well, these tools can reduce administrative burden, speed up access to relevant information, and help court professionals navigate large volumes of material more efficiently. The real question is not whether courts will encounter AI in their workflows, but how they will define responsible use, especially in moments in which accuracy isn’t a feature, it’s the foundation.


“Whether you are a judge [or] an attorney, credibility is everything, particularly when you come before the court.”

— Justice Tanya R. Kennedy, Associate Justice of the Appellate Division, First Judicial Department of New York


To examine these issues more deeply, the Thomson Reuters Institute has published a new report, which frames hallucinations not as a sensationalistic gotcha, but as a practical risk that must be managed with policy, process, and professional judgment. The report also features valuable insight on this subject from judges and court stakeholders who today are evaluating AI in the real operating environment of legal proceedings, courtroom expectations, and the daily administration of justice.

This perspective is essential. Technical teams can explain how models generate language and why they sometimes produce confident-sounding errors. However, judges and court staff can explain something equally important — what accuracy actually means in practice. In courts, accuracy isn’t just about getting the gist right; rather, it’s about precise citations, faithful characterization of the record, correct procedural posture, and language that withstands scrutiny. As the report points out, relied-upon hallucinated information isn’t merely bad output, it can lead to a potential distortion of justice.

Managing AI as professional responsibility

Crucially, the report reflects that judicial skepticism about AI is not simple technophobia — it’s professional responsibility. Judges are trained to interrogate sources, weigh credibility, and understand the downstream consequences of errors. Judges may ask, What is the provenance of this information? Can I reproduce it independently? And who is accountable if it’s wrong? These questions aren’t barriers to innovation; indeed, they are the guardrails that this innovation requires.

What emerges is a pragmatic middle ground that embraces the upside of AI use in courts while treating hallucinations as a predictable occurrence that can be managed systematically. Rather than concluding AI hallucinates, therefore AI can’t be used, the more workable conclusion is AI can hallucinate, therefore AI outputs must be designed, handled, and verified accordingly, likely with other advanced tech tools. As the report points out, courts don’t need a perfect AI; rather, they need repeatable protocols that keep human decision-makers in control and keep the record clean.
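
To make that idea concrete, here is a minimal sketch, in Python, of what one such repeatable verification step might look like: the authorities cited in an AI-assisted draft are checked against a human-verified list, and anything that does not match is routed to a person before it reaches the record. This is an illustrative assumption, not a tool described in the report, and the case names below are invented placeholders.

```python
# Hypothetical verification step (illustrative only): compare the authorities cited
# in an AI-assisted draft against a human-verified list and flag any mismatch for
# human review before filing. The case names below are invented placeholders.

def flag_unverified_citations(cited: list[str], verified: set[str]) -> list[str]:
    """Return any cited authorities that are not in the verified set."""
    return [citation for citation in cited if citation.strip() not in verified]

if __name__ == "__main__":
    draft_citations = [
        "Smith v. Jones, 123 F.3d 456",  # placeholder: present in the verified set
        "Doe v. Roe, 999 F.9d 999",      # placeholder: unverified, so it gets flagged
    ]
    verified_set = {"Smith v. Jones, 123 F.3d 456"}
    print(flag_unverified_citations(draft_citations, verified_set))
    # Output: ['Doe v. Roe, 999 F.9d 999'] -- route to a human reviewer before filing
```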

As the report ultimately demonstrates, managing hallucinations in courts isn’t about chasing perfection, it’s about protecting veracity. It’s about using the right advanced tech tools to build workflows in which the technology consistently supports the truth-finding process instead of quietly eroding it. And it’s about recognizing that in the legal system, responsibility doesn’t disappear when a new tool arrives — it becomes even more important to ensure the new tool doesn’t erode that either.


You can download a full copy of the Thomson Reuters Institute’s report here

Scaling Justice: How technology is reshaping support for self-represented litigants /en-us/posts/ai-in-courts/scaling-justice-technology-self-represented-litigants/ Fri, 23 Jan 2026 15:31:24 +0000

Key takeaways:

      • From scarcity to abundance — Technology has shifted the challenge in access to justice from scarcity of legal help to issues of accuracy, governance, and effective support. AI and digital tools now provide abundant legal information to self-represented litigants, but they raise new questions about reliability, oversight, and alignment with human needs.

      • The necessity of human-in-the-loop — Human involvement remains essential for meaningful resolution. While AI can explain procedures and guide users, real support often requires relational and institutional human guidance, especially for vulnerable populations facing anxiety, low literacy, or systemic bias.

      • One part of a bigger question — Systemic reform and broader approaches are needed beyond technological fixes because technology alone cannot solve deep-rooted inequities or the complexity of the legal system. Efforts should include prevention, alternative dispute resolution, and redesigning systems to prioritize just outcomes and accessibility.


Access to justice has long been framed as a problem of scarcity, with too few legal aid lawyers and insufficient funding forcing systems to be built in triage mode. This has been underscored with the unspoken assumption that most people navigating civil legal problems would do so without meaningful help, often because their issues were not compelling or lucrative enough to justify legal representation.

This framing no longer holds, however. Legal information, once tightly controlled by legal professionals, publishers, and institutions, is now abundantly available. Large language models, search-based AI systems, and consumer-facing legal tools can explain civil procedure, identify relevant statutes, translate dense legalese into plain language, and generate step-by-step guidance in seconds.

Increasingly, self-represented litigants are actively using these tools, whether courts or legal aid organizations endorse them or not. Katherine Alteneder, principal at Access to Justice Innovation and former Director of the Self-Represented Litigation Network, notes: “This reality cannot be fully controlled, regulated out of existence, or ignored.”

And as Demetrios Karis, HFID and UX instructor at Bentley University, argues: “Withholding today’s AI tools from self-represented litigants is like withholding life-saving medicine because it has potential side effects. These systems can already help people avoid eviction, protect themselves from abuse, keep custody of their children, and understand their rights. Doing nothing is not a neutral choice.”

Thus, the central question is no longer whether technology can help self-represented litigants, but rather how it should be deployed — and with what expectations, safeguards, and institutional responsibilities.

Accuracy, error & tradeoffs

The baseline capabilities of general-purpose AI systems have advanced dramatically in a matter of months. For common use cases that self-represented litigants most likely seek — such as understanding process, identifying next steps, preparing for hearings, and locating authoritative resources — today’s frontier models routinely outperform well-funded legal chatbots developed at significant cost just a year or two ago.


The central question is no longer whether technology can help self-represented litigants, but rather how it should be deployed — and with what expectations, safeguards, and institutional responsibilities.


These performance gains raise important questions about the continued call for extensive customization to deliver basic legal information. However, performance improvements do not eliminate the need for careful design. Tom Martin, CEO and founder of LawDroid (and columnist for this blog), emphasizes that “minor tweaking” is subjective, and that grounding AI tools in high-quality sources, appropriate tone, and clear audience alignment remains essential, particularly when an organization takes responsibility and assumes liability for the tool’s voice and output.

Not surprisingly, few topics in the legal tech community generate more debate than AI accuracy, but it cannot be evaluated in isolation. Human lawyers make mistakes, static self-help materials become outdated, and informal advice from friends, family, or online forums is often wrong. Models should be evaluated against realistic alternatives, especially when the alternative is no help at all.

Off-the-shelf tools now perform surprisingly well at generating plain-language explanations, often drawing on primary law, court websites, and legal aid resources. In limited testing, inaccuracies tend to reflect misunderstandings or overgeneralizations rather than pure fabrication. And while these are errors that are still serious, they may be easier to detect and correct with review.

Still, caution is key, often because AI tells people what they want to hear in order to keep them on the platform. Claudia Johnson of Western Washington University’s Law, Diversity, and Justice Center asks what an acceptable error rate is when tools are deployed to vulnerable populations and reminds organizations of their duty of care. Mistakes, especially those known and uncorrected, can carry legal, ethical, and liability consequences that cannot be ignored.

Knowledge bases are infrastructure, but more is needed

Vetted, purpose-built, and mission-focused solution ecosystems are emerging to fill the gap between infrastructure and problem-solving. The Justice Tech Directory from the Legal Services National Technology Assistance Project (LSNTAP) provides legal aid organizations, courts, and self-help centers with visibility into curated tools that incorporate guardrails, human review, and consumer protection in ways that general-purpose AI platforms do not.

Of course, this infrastructure does not exist in a vacuum. Indeed, these systems address the real needs of real people. While calls for human-in-the-loop systems are often framed as safeguards against technical failure, some of the most important reasons for human involvement are often relational and institutional. Even accurate information frequently fails to resolve legal problems without human support, particularly for people experiencing anxiety, shame, low literacy, or systemic bias within courts.


Not surprisingly, few topics in the legal tech community generate more debate than AI accuracy, but it cannot be evaluated in isolation.


A human in the loop can improve how self-represented litigants are treated by clerks, judges, and opposing parties. Institutional review models often provide this interaction at pre-filing document clinics, navigator-supported pipelines, and structured AI review workshops that integrate human judgment and augment human effort rather than replacing it.

Abundance and the limits of technology

Information does not automatically produce equity. Technology cannot make up for existing, persistent systemic issues, and several prominent voices caution against treating AI as a workaround for deeper system failures. Richard Schauffler of Principal Justice Solutions notes that the underlying problem with the use of AI in the legal world is that our legal process is overly complicated, mystified in jargon, inefficient, expensive, and deeply unsatisfying in terms of justice and fairness — and using AI to automate that process does not alter this fact.

Without changes at the courthouse level, upstream technological improvements may not translate into just outcomes. Bias, discrimination, and resource constraints cannot be solved by technology alone. Even perfect information from a lawyer does not equal power when structural inequities persist.

Further, abundance fundamentally changes the problem. As Alteneder notes, rather than access, the primary problem now is “governance, trust, filtering, and alignment with human values.” Similar patterns are seen in healthcare, journalism, and education. Without scaffolding, technology often widens gaps, benefiting those with greater capacity to interpret, prioritize, and act. For self-represented litigants, the most valuable support is often not answers, but navigation: What matters most now, which paths are realistic, how to understand when to escalate and when legal action may not serve broader life needs.

Focusing solely on court-based self-help misses an opportunity to intervene earlier, especially on behalf of self-represented litigants. AI-enabled tools have the potential to identify upstream legal risk and connect people to mediation, benefits, or social services before disputes harden.


You can find more insights about how courts are managing the impact of advanced technology from our Scaling Justice series here

Between hype and fear: Why I have not issued a standing order on AI /en-us/posts/ai-in-courts/standing-order-on-ai/ Thu, 15 Jan 2026 19:31:57 +0000

Key insights:

      • The legal system should avoid both overhyping and over-fearing AI — Instead, it should adopt a balanced approach that emphasizes careful, deliberate engagement and responsible experimentation.

      • Mandatory AI disclosure or certification orders do not necessarily improve the reliability of legal filings — In addition, they run the risk of creating confusion, false assurance, and additional hurdles, especially for smaller law firms and self-represented litigants.

      • Rather than imposing a restrictive order, the author issued guidance — This guidance is designed to promote responsible AI use, focusing on verification and accountability while allowing space for lawyers to engage with AI as a tool for augmentation rather than automation.


The legal system is being pulled in two directions when it comes to AI: On one side is overconfidence, the idea that AI will quickly solve legal work by automating it; and on the other side, fear — the feeling that AI is so risky that the safest response is to restrict it, discourage its use, or fence it off with new rules.

Both reactions are understandable; but neither is getting us where we need to go.

In a recent interview, Erik Brynjolfsson, the Director of the Stanford Digital Economy Lab and lead voice for the Stanford Institute for Human-Centered AI, makes the case for why both hype and too much skepticism miss the mark.

First, those caught up in the hype are moving too quickly toward automation. Tools work best when they support people, not when they try to stand in for them. Second, skeptics are overreacting to early stumbles. Early failures do not mean AI is a dead end. More often, they mean institutions are still learning how to use it well.

There is a middle ground. It’s not about rushing ahead, and it’s not about slamming the brakes. It’s about careful but deliberate use while testing tools, learning their limits, and moving forward with intention.

That perspective informs my approach.

Standing orders on AI

After well-publicized AI mistakes, it makes sense to look for something concrete that signals seriousness, and disclosure and certification orders do that. They tell the public and the bar that courts are paying attention. However, I don’t think disclosure does the work people hope it does, and I worry it pulls attention away from things that matter much more. I’ll explain.

Disclosure does not make filings more reliable — Knowing whether a lawyer used AI to help draft a filing does not tell me whether that filing is accurate, complete, or well supported. Long before modern AI entered the picture, courts had to guard against overstated arguments, bad citations, and unsupported claims. Knowing which tools were used to prepare a filing did not make those filings or the tools more reliable then, and it does not make them more reliable now.

Certifications and disclosures may offer false assurance — The spotlight is on hallucinations (AI-generated fake cases or citations), but courts already have ways to identify and address those problems. The more concerning risks are quieter: bias, AI over-reliance, or subtle framing that influences how an argument is presented. I’m also extremely concerned about deepfakes, which are much more difficult to detect. Disclosure about AI use in briefs does not address any of those risks, and it may distract us from the far bigger risks. It also creates a false sense that a filing is more careful or reliable than it actually may be.

Additional orders can add confusion — AI standing orders are growing in number, and they take very different approaches. Some require disclosure, some certifications, some limits, some are outright bans. Definitions vary or are missing altogether. Lawyers can comply, but it takes time and careful reading, and as noted already, it doesn’t necessarily improve the quality of what reaches the court.

Early in my time as a United States Magistrate Judge, I made it a point to seek feedback from the legal community about what made legal practice more difficult than it needed to be. One theme came up repeatedly — keeping track of multiple, overlapping judicial practice standards was tough. In response, I worked with my colleagues to consolidate standards into a single, uniform set. I see a similar risk emerging with AI standing orders. Well-intentioned but divergent approaches can splinter practices and create new hurdles, particularly for smaller law firms and self-represented litigants. I don’t want to issue a standing order that adds another layer of complexity without meaningfully improving the quality of what comes before me.

The rules already cover the landscape — I already have tools to deal with inaccurate or misleading filings. Lawyers are responsible for the work they submit, and Rule 11 doesn’t stop working because AI was involved. If something is wrong or misleading, I already have ways to address it.

Certification or disclosure could be misinterpreted as discouraging AI use, and I worry about who gets left out — When new tools are treated as suspect or off-limits, those with the most resources find ways to keep moving forward. However, smaller firms and individual litigants fall further behind. A system that chills responsible experimentation risks widening access gaps instead of narrowing them. In my view, everyone should be exploring ways to, as Brynjolfsson says, “augment” themselves. So long as we remain accountable for the result, augmentation is how lawyers, judges, and other professionals will retain their value in a legal system that is becoming more AI-integrated every day.

Rather than issue a standing order that limits AI use or requires certification or disclosure, I offer guidance: Check your work, protect confidential information, and take responsibility for what you submit. I published this guidance for those interested in my perspective, but it is deliberately not an order, so as to avoid the concerns described above.

We shouldn’t fear AI — we should shape it

Some warn that AI is coming for the legal profession; however, I’m more optimistic (and perhaps more idealistic).

In my view, the justice system depends on human judgment. Empathy, discretion, humility, moral reasoning, and uncertainty are not bugs in the system; rather, they’re an essential part of the program. If we want to preserve human judgment in the age of AI, we must be involved in how AI is used. And we can’t do that from a distance. We have to engage with AI, understand its limits, and model responsible use.

Used carefully, AI can help judges:

      • organize large records,
      • identify gaps or inconsistencies,
      • spot issues that need a closer look,
      • identify and locate key information,
      • translate legal jargon to help self-represented litigants better understand what is being asked of them, and
      • reduce administrative drag so more of a judge’s time is spent on decision-making.

This kind of use does not replace us; rather, it supports us. It augments us so we do our work as well as we can, help as many people as possible, and still keep human judgment at the center of everything.

Why this moment matters

The AI conversation in law will remain noisy for a while. Some legal professionals will promise too much. Others will warn against everything. The better path is in the middle — engage, test, verify, and adjust.

As the Newsweek article suggests, this is a watershed moment. Not because AI will decide the future of our institutions, but because we will. The choices we make now will shape what AI does in the justice system, and just as importantly, what it does not do.

We should not be afraid of AI. We should help shape how it is used so it strengthens, rather than replaces, the human judgment at the heart of the legal justice system.


You can find out more about how courts and the legal system are managing AI here

AI literacy: The courtroom’s next essential skillset /en-us/posts/ai-in-courts/ai-literacy-court-skillset/ Fri, 12 Dec 2025 14:04:03 +0000

Key insights:

      • AI literacy is role-specific and essential — Courts need to move beyond general AI conversations and focus on concrete, role-based strategies that support AI readiness.

      • Balanced AI adoption is crucial — The goal for courts is not to automate blindly but rather to adopt a balanced, AI-forward mindset.

      • Ongoing education and adaptability are vital — AI literacy requires continuous learning and upskilling that focus on building managers’ comfort and capability to lead their teams.


For today’s court system, AI literacy is quickly becoming a core professional skill, not just a technical curiosity. In the recent webinar AI Literacy for Courts: A New Framework for Role-Specific Education, panelists emphasized that courts need to move from holding abstract conversations around AI to enacting concrete, role-based strategies that support judicial officers and court professionals throughout their AI journey.

The webinar is part of a series from the AI Policy Consortium, a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

The need for AI literacy is great

Courts are being urged to treat AI literacy as a foundational pillar of AI readiness, not as an optional add-on training. AI literacy is “the knowledge, attitudes, and skills needed to effectively interact with, critically evaluate, and responsibly use AI systems,” said a panelist from the NCSC, adding that it cannot be one-size-fits-all. “The important thing to know about the definition of AI literacy is it’s going to be different for every single personnel role.”

Building a serious AI literacy strategy therefore begins with defining what success looks like for each role, and then aligning recruitment, training, and evaluation practices around those expectations.


You can find out more here


To support this, policy and security concerns must come before (and alongside) AI use. Webinar panelist Griffin, Chief Human Resources Officer at Los Angeles County Superior Court, described how the court started by clarifying the sandbox for safe AI use. First, the court’s generative AI (GenAI) policy sets parameters, such as prohibiting staff from using court usernames or passwords to create accounts on external AI tools. Only then, after those guardrails were in place, did the training really lean into the technical how-to of writing prompts and experimenting with tools. Policy development and skills development happened in tandem, Griffin explained.

To make space for learning in an already overloaded environment, her team lit a creativity spark with managers first, she said, giving them concrete use cases — such as drafting performance evaluations, coaching documents, and job aids. As a result, these managers, in turn, feel motivated to create room for their teams to experiment.

This, Griffin added, is all anchored in a clear, people-centered message from leadership: “We have a lot of work to do, and not enough people to do our work — and so AI is going to help us serve the court users and help us provide access to justice.”


You can register here


How to make AI “work”

On the webinar, the conversation repeatedly returned to what lawyers and court professionals are actually doing with AI tools today and where they’re getting stuck. Leonard, Founder of Creative Lawyers, noted that despite AI’s rapid advance, many professionals are still at a surprisingly basic stage in how they use it. For example, Leonard said that users tend to treat AI as a one-way question-and-answer box instead of using it as an expertise extractor that asks them targeted questions. To combat this, she suggested that users ask the AI to ask them questions that draw out their own expertise.

When thinking about how to interact with AI generally, users should treat it like a smart colleague and ask themselves (and, implicitly, the AI) the following questions; a minimal prompt sketch follows the list:

      • What information would this colleague need from me to do the assignment well?

      • What questions would I want them to ask me?

      • What specific task do I actually want them to execute?

      • What feedback would I give them to make the work product better?
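To make this concrete, here is a minimal sketch, in Python, of the expertise-extractor pattern Leonard described: asking the tool to interview you before it drafts anything. The function name and the `ask_model` placeholder are illustrative only, not part of any specific product; any real call should go through whatever tool or sandbox a court has approved.

```python
# A minimal sketch of the "expertise extractor" pattern: ask the model to
# interview you, colleague-style, before it produces anything.
# `ask_model` is a hypothetical placeholder, not a real API.

def build_interview_prompt(task: str) -> str:
    """Build a prompt that asks the model to question the expert first."""
    return (
        "You are assisting a court professional with the following task:\n"
        f"{task}\n\n"
        "Before producing anything, ask me the 3-5 questions a capable "
        "colleague would need answered to do this assignment well "
        "(missing facts, audience, constraints, deadlines). "
        "Wait for my answers, then propose a draft I can give feedback on."
    )

def ask_model(prompt: str) -> str:
    """Placeholder for a call to whatever GenAI tool the court has approved."""
    raise NotImplementedError("Wire this to your approved tool or sandbox.")

if __name__ == "__main__":
    prompt = build_interview_prompt(
        "Draft a plain-language notice explaining a new e-filing deadline."
    )
    print(prompt)  # review the prompt before sending it anywhere
    # reply = ask_model(prompt)
```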

As the webinar examined, leadership messaging needs to be explicit: AI is being adopted to augment human work, reduce burnout, and expand access to justice — not to eliminate jobs, particularly in courts that are already understaffed. For example, LA Superior Court has been meeting with unions about its GenAI policy, repeatedly affirming that it is not using AI to replace court staff, Griffin said. Instead, the court demonstrates use cases and shows how AI can offload repetitive tasks, making the remaining work more meaningful.

At the same time, managers themselves often feel unprepared to talk about AI, which is why building their comfort and capability — especially around explaining where the court is going — is becoming a critical managerial competency, panelists noted.

Supporting the journey

To support all of this, the TRI/NCSC AI Policy Consortium has built practical training resources that courts can plug into their own strategies. For example, the offers curated materials mapped to specific roles such as judges, court administrators, court reporters, clerks, and interpreters. Courts can use these resources as targeted supplements when rolling out AI projects to better prepare staff members who are just starting their AI journey.

Complementing this is the , an environment in which staff can safely experiment with GenAI tools without sending data back to the open internet. This gives judges and staff a place to practice prompt-writing, ask follow-up questions, and give feedback, all while staying inside a controlled environment and within the bounds of most court AI policies.

Looking ahead, the panelists argued that the most durable “future skills” may not be specific technical proficiencies but human capabilities, such as adaptability, creativity, critical thinking, and change leadership. In fact, HR leaders across industries largely agree they cannot predict exactly which tools or skill sets will dominate in a few years, Griffin said, and instead, courts should focus on helping managers to craft better prompts, interpret outputs critically, and lead their teams through repeated waves of technological change.

Leonard similarly urged legal organizations to move beyond basic adoption use cases — such as document summarization and email refinement — and start exploring more creative, transformative uses that could redesign legal services and court systems to be more responsive to the public.

Finally, the webinar stressed that AI literacy cannot be a one-and-done initiative. The , published by NCSC, encourages courts to treat AI projects as catalysts for revisiting their overall literacy strategy and HR practices.


You can find out more about the work that NCSC is doing to improve courts here

Scaling Justice: Unauthorized practice of law and the risk of AI over-regulation /en-us/posts/ai-in-courts/scaling-justice-unauthorized-practice-of-law/ Mon, 01 Dec 2025 19:35:29 +0000 https://blogs.thomsonreuters.com/en-us/?p=68596

Key insights:

      • Are regulations choking innovation? — Current regulatory efforts may be stifling innovation in AI-driven legal solutions, exacerbating the access to justice crisis and prioritizing lawyer business model protection over consumer needs.

      • Some safeguards already in place — Existing consumer protection laws and product liability laws already provide robust safeguards against potential AI-related harm, making it unnecessary to impose additional restrictive policies on AI-driven legal services.

      • A balanced regulatory approach is best — An approach that encourages responsible innovation, prioritizes consumer protection, and fosters a data-driven mindset can best unlock the transformative potential of AI in addressing critical gaps in access to justice.


As AI-driven legal solutions gain traction, calls for regulation have grown apace. Some are thoughtful, others ill-informed or protectionist, and many focus on the issue of unauthorized practice of law (UPL). While protecting the public is crucial, shielding the legal profession from competition is not. A large majority (92%) of low-income people currently receive no legal assistance or not enough of it, and the ongoing uncertainty in the legal AI and UPL regulatory landscape is chilling innovation that could support them.

The legal profession has always struggled to provide affordable, accessible services, even as it attempts to block those working ethically to bridge the gap with technology. Done right, legal industry regulation balances protection with progress, so that it neither stifles innovation nor exacerbates the access to justice crisis.

Consumer protection laws already provide robust safeguards against potential AI-related harms. Existing product liability laws and enforcement actions by state attorneys general ensure that consumers have recourse if AI legal tools cause harm. Despite these safeguards, concerns about unregulated AI filling the gaps in legal services persist.

It is time to upend the calculus of consumer harm and examine the motives behind regulation. Rather than forcing tech-based legal services to prove they cause no harm in order to avoid charges of UPL, regulators should be required to demonstrate, with data, that legal technology companies cause harm, and to weigh whether any ruling would constrain supply in the face of a catastrophic lack of access to justice.

Uneven regulatory efforts raise questions

Current regulatory efforts tend to focus on companies that directly serve legal consumers, while leaving broader AI models largely unchecked. This raises uncomfortable questions: Are we truly protecting the public, or merely constraining competition and thereby reinforcing barriers to innovation in the process?


You can find out more about here


“If UPL’s purpose is protecting the public from nonlawyers giving legal advice — and if regulators define legal advice as applying law to facts — how many legal questions are asked of these Big Tech tools every day?” asks Damien Riehl, a veteran lawyer and innovator. “And if we won’t go after Big Tech, will regulators prosecute Small Legal Tech, which in turn utilizes Big Tech tools? If Big Tech isn’t violating UPL, then neither is Small Tech [by using Big Tech’s tools].”

Efforts to regulate the use of AI-based legal services are, de facto, another path to market constraint. Any attempt to regulate AI should be rooted in actual consumer experience. Justice tech companies, by definition, pursue mission-driven work to benefit consumers, but if an AI-driven tool causes harm, it should certainly be investigated and regulated. State bar associations are not waiting for harm to occur before considering regulating AI-driven legal help — and we must wonder why.

The risks of premature regulation

We must enable, not obstruct, AI-driven legal solutions and ensure that innovation remains a driving force in modernizing legal services. If restrictive policies make it difficult to develop cost-effective legal solutions, fewer consumers — particularly those with limited resources — will have access to legal assistance.

AI is developing far too quickly for a slower regulatory trajectory to keep up — any contemplated regulation would be evaluating last year’s technology, which is at best half as good as the latest iterations. Regulating AI-driven legal services now is akin to prior restraint, in which published or broadcast material anticipated to cause future problems is suppressed or prohibited before it can be released. That approach should not be applied to new technology; we can already look to product liability for evidence of actual harm.

If regulators prioritized consumers rather than the protection of lawyers’ business models, AI-enabled legal support would be monitored for potential harm, with data collected and analyzed to bring any issues to light. Regulations could then be built around that defined, data-backed harm. For instance, we might require certification protocols for privacy or security if those issues prove problematic.

Forward-thinking states are going further

In July, the Institute for the Advancement of the American Legal System (IAALS) released a new report, , which advocated for a phased approach to regulation, beginning with experimentation, education, and consumer protection, while gathering and evaluating data. Later phases could involve potential regulation based on what is learned. In this way, innovation is encouraged while consumer needs and public trust remain paramount.

Also this year, Colorado cut the proverbial Gordian Knot by releasing a — consistent with existing analysis of UPL complaints in the state — for AI tools focused on improving access to justice. Guiding principles include ensuring consumers have clarity about the services they receive and their limits, educating consumers on the risks inherent in relying on advice from non-lawyer sources, and including a lawyer in the loop. Utah, Washington, and Minnesota all have considered similar policies. And IAALS now is collaborating with Duke University’s Center on Law & Tech to create a toolkit and templates to make it easier for other states to adopt UPL non-prosecution or similar policies.

Yet some regulators seek the opposite, looking to define the exact types of business activity that will lead to UPL prosecution. While this framework is likely to become obsolete more quickly, it serves a similar purpose: providing clear guardrails that allow innovation to flourish while protecting consumers by clearly indicating the limitations of the software. The to specifically exclude tech products from UPL enforcement, provided they are accompanied by adequate disclosures that they are not a substitute for the advice of a licensed lawyer. Such policies are essential, and they can encourage entrepreneurs aiming to ameliorate the justice gap.

What’s next?

The legal and justice tech industries should aim for a regulatory framework that encourages responsible, iterative innovation — and participants should take some proactive steps, including: i) justice tech companies should participate in the discussion and share their business- and mission-focused perspectives to help shape any new regulations; and ii) regulators with internal non-prosecution policies should consider making them public to encourage entrepreneurs in their state.

These approaches would enable positive change for state residents, support overburdened legal aid organizations and courts, and foster a flourishing tech ecosystem aimed at serving unrepresented and under-represented parties.

The legal profession has not been able to ensure justice for all, making it even harder for low-income and unrepresented parties to find the help they need. Now, AI-driven legal service providers are moving forward on addressing critical gaps in access to justice.

With a measured and equitable approach to regulation that neither ignores AI’s risks nor overlooks its transformative potential, the legal industry and regulators must keep pace with today’s technology — and such efforts should not obstruct those legal providers who can bring the law closer to that ideal and help close the justice gap.


You can learn more about the challenges faced by justice tech providers here

Generative AI in legal: A risk-based framework for courts /en-us/posts/ai-in-courts/genai-risk-based-framework/ Fri, 21 Nov 2025 13:57:31 +0000 https://blogs.thomsonreuters.com/en-us/?p=68524

Key highlights:

      • Risk varies by workflow and context — Practitioners should apply risk ratings based on workflow and context, such as low for productivity, moderate for research, moderate to high for drafting and public‑facing tools, and high for decision-support.

      • Courts need their own developed benchmarks — Courts should develop and regularly review their own independent benchmarks and evaluation datasets instead of relying solely on vendor claims, because vendors may optimize systems for known tests.

      • Need for benchmarking to detect drift, degradation, and bias — Continuous, rigorous benchmarking of AI models is essential for courts and legal professionals to maintain confidence in these systems, since both the law and AI models change over time.


AI is not a monolithic technology, and using it calls for a risk-based assessment process. Indeed, courts and legal professionals must scale their scrutiny to match risk levels.

This approach — which balances innovation with accountability, along with other essential best practices — is detailed in a recent publication, , created as part of .

In a recent webinar, , one of the co-authors of the document, explained its purpose: “The central aim of what we were thinking about in these best practices is to give courts and legal professionals a principle-based architecture when you’re thinking about the adoption of GenAI tools.”

Risk and human judgment serve as central elements

What is unique about this framework is that it categorizes risk based on key workflow actions of lawyering, for example:

      • Productivity tools carry minimal to moderate risk
      • Research tools are assigned moderate risk
      • Drafting tools range from moderate to high risk
      • Public-facing tools carry moderate to high risk
      • Decision-support tools pose high risk

The framework holds that risk is dynamic rather than static, and there can be shifts in risk levels based on use cases. For example, a scheduling tool typically poses minimal risk; however, the same tool becomes high risk when used for urgent national security cases. And translation tools can shift from lower risk research support to high-risk decision-support depending on their use.
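As a rough illustration of how the tiers above might be expressed in practice, the sketch below maps each workflow category to a baseline tier and escalates one step when the context is sensitive, echoing the scheduling-tool example. The labels and the one-step escalation rule are illustrative assumptions, not taken from the framework document.

```python
# Illustrative only: baseline risk tiers per workflow, with a simple
# context-based escalation to reflect that risk is dynamic, not static.

BASELINE_RISK = {
    "productivity":     "minimal-to-moderate",
    "research":         "moderate",
    "drafting":         "moderate-to-high",
    "public_facing":    "moderate-to-high",
    "decision_support": "high",
}

TIERS = ["minimal-to-moderate", "moderate", "moderate-to-high", "high"]

def assess_risk(workflow: str, sensitive_context: bool = False) -> str:
    """Return a risk tier, escalating one step for sensitive contexts
    (e.g., a scheduling tool used in an urgent national security case)."""
    tier = BASELINE_RISK[workflow]
    if sensitive_context:
        tier = TIERS[min(TIERS.index(tier) + 1, len(TIERS) - 1)]
    return tier

print(assess_risk("productivity"))                          # minimal-to-moderate
print(assess_risk("productivity", sensitive_context=True))  # moderate
```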

Similarly, when tools range from moderate to high risk, users need to be especially discerning in order to understand the underlying risks — and to consider whether the task should be delegated to AI at all.

“You can’t just rely on categories,” explains Judge Kwon from the IP High Court of Korea. “You need to understand the underlying risks and ask yourself: Would I delegate this task to another person? Am I comfortable delegating it publicly? If the answer is no, then you probably shouldn’t be delegating it to an AI either.”

In addition, for judicial use there are clear red lines: situations in which AI should never be used and is classified as an unacceptable risk. “I believe the clear red line is automated final decisions or AI systems that assess a person’s credibility or determine fundamental rights involving incarceration, housing, family,” says Judge Kwon, adding that fundamental rights require human judgment.


“You can’t just rely on categories. You need to understand the underlying risks and ask yourself: Would I delegate this task to another person? Am I comfortable delegating it publicly? If the answer is no, then you probably shouldn’t be delegating it to an AI either.”


The extent of human judgment also has layers. Greenberg, a Shareholder at Greenberg Traurig, says he believes that AI for any legal use currently requires human oversight. “The human supervision piece… is utterly critical in the real world of practicing lawyers and law firms,” Greenberg says. “You have to supervise the lawyers in the firm that are using the technology, including young lawyers.”

To help distinguish which type of human oversight is appropriate, the framework in the Key Considerations document defines two forms of such oversight: i) human in the loop, which means active human involvement in decisions; and ii) human on the loop, which means monitoring automated processes and intervening when needed.

In a court setting, a human in the loop might be a law clerk who uses AI to research relevant case law and checks that the references are legally sound, while a human on the loop might be a clerk who monitors an established robotic process that extracts data for the case management system and spot-checks it for accuracy.
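A short, hypothetical sketch may help make the distinction concrete. The `approve` and `spot_check` callables below stand in for a clerk’s review steps; nothing here describes an actual court workflow.

```python
# Illustrative contrast between the two oversight modes defined above.
from typing import Callable, Optional

def human_in_the_loop(draft: str, approve: Callable[[str], bool]) -> Optional[str]:
    """Active involvement: the draft is used only after a person approves it."""
    return draft if approve(draft) else None

def human_on_the_loop(records: list[str], spot_check: Callable[[str], bool],
                      sample_every: int = 10) -> list[str]:
    """Monitoring: the automated process runs, and a person spot-checks a
    sample, intervening on anything that fails the check."""
    return [r for i, r in enumerate(records)
            if i % sample_every == 0 and not spot_check(r)]

# Example: a clerk approves AI-assisted research; extracted records get spot-checked.
print(human_in_the_loop("Memo citing Smith v. Jones", approve=lambda d: "Smith" in d))
print(human_on_the_loop([f"record {i}" for i in range(30)],
                        spot_check=lambda r: "1" not in r))
```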

Practical guidance for courts

In addition to considering the risk level of AI tools, Judge Kwon, Greenberg, and Carpenter noted the importance of technical AI competence as part of lawyers’ and judges’ ethical duty, especially around verification, transparency, and independent benchmarks as part of accountability, as well as the need for understandable documentation to maintain public trust. To reinforce the latter point, Carpenter, Director in Government Practice for ¶¶ŇőłÉÄę Practical Law, states: “It’s very vital, especially as we usher in the age of AI, that the public be informed as much as they can be about how that decision-making process is taking place.”

Judge Kwon, Greenberg, and Carpenter also highlighted guidance on the criticality of benchmarking, including:

      • Court-developed benchmarks prevent overreliance on vendor data — Courts should develop their own benchmarks and independent evaluation datasets rather than relying entirely on vendor claims, and they should review evaluation scenarios regularly. Vendors may optimize their systems for known tests, which leads to overfitting, in which a model learns patterns specific to its training data so well that it performs poorly on new, unseen data. This gives a misleading impression of reliability.
      • Ongoing rigorous benchmarking to detect model drift & degradation — To build confidence in AI models, courts and legal professionals must approach AI model evaluation with rigor and ongoing vigilance. Continuous benchmarking is essential, and it cannot be a one-time process because the law evolves constantly and precedents shift. In addition, AI models themselves update regularly, and courts need to monitor performance over time to detect degradation or bias drift. (A minimal benchmarking sketch follows this list.)
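As a rough sketch of what that ongoing benchmarking might look like, the snippet below scores a model against a small, court-curated evaluation set and flags drift when the score falls well below a stored baseline. The `run_model` callable, the sample questions, and the five-point tolerance are all illustrative assumptions, not part of any published court framework.

```python
# Illustrative benchmarking loop for detecting drift or degradation over time.
from typing import Callable

def score(run_model: Callable[[str], str], eval_set: list[dict]) -> float:
    """Percentage of evaluation items the model answers as expected."""
    correct = sum(1 for item in eval_set
                  if run_model(item["prompt"]).strip().lower() == item["expected"])
    return 100.0 * correct / len(eval_set)

def drifted(current: float, baseline: float, tolerance: float = 5.0) -> bool:
    """Flag drift or degradation when the score falls well below the baseline."""
    return (baseline - current) > tolerance

# Stand-in model and a tiny evaluation set so the sketch runs end to end.
eval_set = [
    {"prompt": "Is a filing fee waiver available for indigent parties?", "expected": "yes"},
    {"prompt": "Can a clerk give legal advice?", "expected": "no"},
]
fake_model = lambda prompt: "yes" if "waiver" in prompt else "no"

current = score(fake_model, eval_set)
print(f"score: {current:.0f}%, drift detected: {drifted(current, baseline=100.0)}")
```

Re-running a loop like this on a schedule, and whenever the underlying model or the evaluation scenarios change, is what turns benchmarking from a one-time check into the continuous process the framework calls for.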

Adopting a thoughtful, risk-informed approach to GenAI in legal practice and courts will help realize its benefits for efficiency and access to justice while protecting ethical obligations, due process, and public trust in the legal system.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here
