Access to Justice Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/topic/access-to-justice/ Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Looking beyond the bench at the importance of judicial well-being /en-us/posts/government/beyond-the-bench/ Wed, 15 Apr 2026 14:06:38 +0000 https://blogs.thomsonreuters.com/en-us/?p=70384

Key insights:

      • Well-being is a professional necessity — Judges experience decision fatigue, emotional stress, and personal biases that can affect their rulings, making mental and physical well-being a judicial duty.

      • Community engagement builds better judgment — Staying connected to the communities they serve helps judges develop empathy, recognize bias, and deliver fairer decisions.

      • Diverse experience strengthens the judiciary — Varied backgrounds and ongoing education in areas like restorative justice make courts more responsive, inclusive, and publicly trusted.


Judges play a unique and essential role in society. They are tasked with interpreting the law, resolving disputes, and upholding justice — often under intense scrutiny and pressure. Their decisions shape lives, influence public policy, and reinforce the rule of law.

Indeed, judicial rulings may be the most visible part of the job, but they are not the only measure of a judge’s effectiveness — or of the judiciary’s overall health.

To truly understand and support a robust legal system, it is vital to look beyond the courtroom and examine the broader context in which judges operate. A judiciary that is fair, empathetic, and resilient depends not only on legal expertise, but also on balance, self-awareness, and active engagement with the communities it serves.

The weight of the robe & the value of connection

Despite the solemnity of the judicial office, judges also carry personal experiences, cognitive biases, and emotional responses. The weight of responsibility in adjudicating complex, often emotionally charged cases can lead to stress, burnout, and decision fatigue. Research has shown that judicial decisions can be influenced by factors such as time of day, caseload volume, and even personal well-being.

When judges prioritize their own well-being through physical health, mental resilience, and time away from the bench, they are better equipped to render fair and consistent decisions. Judicial wellness is not a personal luxury; rather, it is a professional imperative.

Equally important is the role of community engagement. The law does not exist in a vacuum but is shaped by social norms, economic realities, and cultural shifts. Judges who remain isolated from the communities that are affected by their rulings risk losing touch with the lived experiences of the people before them.


Judicial rulings may be the most visible part of the job, but they are not the only measure of a judge’s effectiveness — or of the judiciary’s overall health.


Engagement with the public helps judges better understand how the law operates in people’s lives and the impact it has. It also builds the empathy and contextual awareness needed for interpreting statutes or imposing sentences.

For example, a judge who volunteers with youth programs or participates in community forums on public safety may develop a more nuanced understanding of cases involving juvenile offenders or policing practices. Similarly, a judge who attends local cultural events or listens to community leaders may be better positioned to recognize implicit biases or systemic inequities that may be inherent in the justice system.

Community involvement also strengthens public trust. When citizens see judges as accessible and engaged, rather than distant or aloof, confidence in the judiciary increases. And these ideas of transparency and connection are key to maintaining citizens’ trust in the courts.

These themes are explored in more depth in the Thomson Reuters Institute’s video series, Beyond the Bench. In one episode, Associate Justice Tanya R. Kennedy shares her experience educating youth, participating in civic organizations, and leading legal reform initiatives. The episode also highlights how service beyond judicial duties enhances judges’ decision-making and strengthens community ties.

Another episode of the series examines the personal and professional challenges faced by judges and attorneys alike. It features a candid interview with Judge Mark Pfiffer, who emphasizes the importance of mindfulness, peer support, and institutional policies that promote mental health and sustainable work practices.

A judiciary that reflects society

The same principle applies at the institutional level. A judiciary is strongest when it reflects the range of experiences and perspectives present in the society it serves.

Beyond individual judges, the judiciary can benefit from diversity and inclusion. A bench that reflects the full spectrum of society is more likely to deliver balanced and equitable justice. But diversity is not just about representation — it’s also about perspective.

Judges who have worked in public defense, civil rights advocacy, or rural legal services bring different insights to the bench than those who have spent their careers in corporate law or prosecution. These varied experiences enrich judicial deliberation and help ensure that decisions are informed by a broad understanding of justice.

Encouraging judges and court personnel to engage in lifelong learning, mentorship, and cross-sector collaboration further strengthens the judiciary. Programs that support judicial education on topics like implicit bias, trauma-informed practices, or restorative justice are essential to modern, responsive courts.

Improving judges’ well-being

The quality of justice depends not only on what happens in the courtroom, of course, but on what happens outside of it. Judges who maintain personal balance, engage with their communities, and remain open to diverse perspectives are better equipped to serve the public good.

Legal professionals, court administrators, and policymakers should support the kinds of initiatives that promote judicial wellness, community outreach, and professional development. By fostering a judiciary that looks beyond the bench, we ensure a justice system that is not only legally sound, but also humane, inclusive, and trusted.

In the end, judges and the justice they mete out are not defined by court rulings alone; they also depend on relationships, context, and public trust. Recognizing that reality is essential to preserving the well-being of the judiciary and the integrity of the law.


The “Beyond the Bench” video series is available online.

Scaling Justice: AI is scaling faster than justice, revealing a dangerous governance gap /en-us/posts/ai-in-courts/scaling-justice-governance-gap/ Mon, 13 Apr 2026 16:57:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=70330

Key takeaways:

      • AI frameworks need to keep up with implementation — While AI governance frameworks are being developed and enacted globally, their effectiveness depends on enforceable mechanisms within domestic justice systems.

      • Access to justice is essential for trustworthy AI regulation — Rights and protections are only meaningful if individuals can understand, challenge, and seek remedies for AI-driven decisions. Without operational access, governance frameworks risk remaining theoretical.

      • People-centered justice and human rights must anchor AI governance — Embedding human rights standards and ensuring equal access to justice in AI regulation strengthens public trust, accountability, and the credibility of both public institutions and private companies.


AI governance is accelerating across global, national, and local levels. As public investment in AI infrastructure expands, new oversight bodies are emerging to assess safety, risk, and accountability. The global policy conversation has shifted from principles to the implementation of meaningful guardrails and AI governance frameworks, which legislators now are drafting and enacting.

These developments reflect growing recognition that AI systems demand structured oversight and a shift from voluntary safeguards and standards to institutionalized governance. One critical dimension remains underdeveloped, however: how do these frameworks function in practice? Are they enforceable? Do they provide accountability? Do they ensure equal access?

AI governance will not succeed on the strength of international declarations or regulatory design alone; rather, domestic justice systems will determine whether it works. At this intersection, the connection between AI governance and access to justice becomes real.

In early February, leaders across government, the legal sector, international organizations, industry, and civil society convened for an expert discussion. The following reflections attempt to build on that dialogue and its urgency.

From principles to enforcement

Over the past decade, AI governance has evolved from hypothetical ethical guidelines to voluntary commitments, binding regulatory frameworks, and risk-based approaches. Even amid these advancements, however, many past attempts to provide structure and governance have been quickly outpaced by the technology and are insufficient without enforcement mechanisms. As Anoush Rima Tatevossian of The Future Society observed: “The judicial community should have a role to play not only in shaping policies, but in how they are implemented.”

Frameworks establish expectations, while courts and dispute resolution mechanisms interpret rules, test rights, evaluate harm, assign responsibility, and determine remedies. If individuals are not empowered to safeguard their rights and cannot access these mechanisms, governance frameworks remain theoretical or are casually ignored.

This challenge reflects a broader structural constraint. Even without AI, legal systems struggle to meet demand. In the United States alone, 92% of people do not receive the help they need in accessing their rights in the justice system. Introducing AI into this environment without strengthening access can risk widening, rather than narrowing, the justice gap.


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.


Justice systems serve as the operational core of AI governance. By inserting the rule of law into unregulated areas, they provide the infrastructure that enables accountability by interpreting regulatory provisions in specific cases, assessing whether AI-related harms violate legal standards, allocating responsibility across public and private actors, and providing accessible pathways for redress.

Justice systems also generate critical feedback. Disputes involving AI systems expose gaps in transparency, fairness, and accountability. Legal professionals see where governance frameworks first break down in real-world conditions, often long before policymakers do. As a result, justice systems function as an early signal of policy effectiveness and rights protections.

Importantly, AI governance does not require entirely new legal foundations. Human rights frameworks already provide standards for legality, non-discrimination, due process, and access to remedy, and these standards apply directly to AI-enabled decision-making. “AI can assist judges but must never replace human judgment, accountability, or due process,” said Kate Fox Principi, Lead on the Administration of Justice at the United Nations (UN) Office of the High Commissioner for Human Rights (OHCHR), during the February panel.

Clearly, rights are only meaningful when individuals can exercise them — this constraint is not conceptual, it’s operational. Systems must be understandable, affordable, and responsive, and institutions should be capable of evaluating complex, technology-enabled disputes.

Trust, markets & accountability

Governance frameworks that do not account for these dynamics risk entrenching inequities rather than mitigating them. An individual’s ability to understand, challenge, and seek a remedy for automated decisions determines whether governance is credible. A people-centered justice approach asks whether individuals can meaningfully engage with the system, not just whether rules exist. For example, women face documented barriers to accessing justice in jurisdictions around the world, and AI systems trained on biased data can replicate or amplify existing disparities in employment, financial services, healthcare, and criminal justice.

“Institutional agreement rings hollow when billions of people experience governance as remote, technocratic, and unresponsive to their actual lives,” said Alfredo Pizarro of the Permanent Mission of Costa Rica to the UN. “People-centered justice becomes essential.”

AI systems already shape outcomes across employment, financial services, housing, and justice. Entrepreneurs, law schools, courts, and legal services organizations are already building AI-enabled tools that help people navigate legal processes and assert their rights more effectively. Governance design will determine whether these tools broaden access to justice or introduce new barriers.

Private companies play a central role in developing and deploying AI systems. Their products shape economic and social outcomes at scale. For them, trust is not abstract; it is a success metric. “Innovation depends on trust,” explained Iain Levine, formerly of Meta’s Human Rights Policy Team. “Without trust, products will not be adopted.” And trust, in turn, depends on enforceability and equal access to remedy.

AI governance will succeed or fail based on access

As Pizarro also noted, justice provides “normative continuity across technological rupture.” Indeed, these principles already exist within international human rights law and people-centered justice; although they precede the advent of autonomous systems, they provide standards for evaluating discrimination, surveillance, and procedural fairness, and remain durable as new challenges to upholding justice and the rule of law emerge.

People-centered justice was not designed for legal systems addressing AI-related harms, but its outcome-driven orientation remains durable as new justice problems emerge.

The current stage presents an opportunity to align AI governance with access to justice from the outset. Beyond well-drafted rules, we need systems that people can use. And that means that any effective governance requires coordination between policymakers, legal professionals, and the public.


You can find other installments of our Scaling Justice blog series here

Pattern, proof & rights: How AI is reshaping criminal justice /en-us/posts/ai-in-courts/ai-reshapes-criminal-justice/ Fri, 10 Apr 2026 08:46:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=70255

Key insights:

      • AI’s greatest strength in criminal justice is pattern recognition — AI can process vast amounts of data quickly, helping law enforcement and legal professionals detect connections, reduce oversight gaps, and improve consistency across investigations and casework.

      • AI should strengthen justice, not substitute for human judgment — Legal professionals are integral to evaluating AI-generated outputs, especially when decisions affect evidence, warrants, and individuals’ constitutional rights.

      • The most effective model is human/AI collaboration — AI handles scale and speed, while judges, attorneys, and investigators provide the context, accountability, and ethical reasoning needed to protect due process.


The law has always been about patterns — patterns of behavior, patterns of evidence, and patterns of justice. Now, courts and law enforcement can leverage a tool powerful enough to see those patterns at a scale and speed no human mind could match: AI.

At its core, AI works by recognizing patterns. Rather than simply matching keywords, it learns from large amounts of existing text to understand meaning and context and uses that learning to make predictions about what comes next. In the context of law enforcement, that capability is nothing short of transformative.

These themes were front and center in a recent webinar from a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI). The webinar brought together voices from across the justice system, and what emerged was a clear and consistent message: AI is a powerful ally in the pursuit of justice, but only when paired with the judgment, accountability, and constitutional grounding that human professionals can provide.

AI’s pattern recognition is a gamechanger

“AI is excellent,” said Mark Cheatham, Chief of Police in Acworth, Georgia, during the webinar. “It is better than anyone else in your office at recognizing patterns. No doubt about it. It is the smartest, most capable employee that you have.”

That kind of capability, applied to the demands of modern policing, investigation, and prosecution, is a genuine gamechanger. However, the promise of AI extends far beyond the patrol car or the precinct. Indeed, it cascades through the entire arc of justice — from the moment a crime is detected all the way through prosecution and adjudication.

Each step in that chain represents not just an operational efficiency upgrade, but an opportunity to make the system fairer, more consistent, and more protective of the rights of everyone involved.

Webinar participants considered the practical implications. For example, AI can identify and mitigate human error in decision-making, promoting greater consistency and fairness in outcomes across cases. And by automating labor-intensive tasks such as reviewing body camera footage, AI frees prosecutors and defense attorneys to focus on other aspects of their work that demand professional judgment and legal expertise.

In legal education, the potential of AI is similarly recognized. Hon. Eric DuBois of the 9th Judicial Circuit Court in Florida emphasizes its role as a tool rather than a substitute. “I encourage the law students to use AI as a starting point,” Judge DuBois explained. “But it’s not going to replace us. You’ve got to put the work in, you’ve got to put the effort in.”


AI can never replace the detective, the prosecutor, the judge, or the defense attorney; however, it can work alongside them, handling the volume and velocity of data that no human team could process alone.


Judge DuBois’ perspective aligns with broader judicial sentiment on the responsible integration of AI. In fact, one consistent theme across the webinar was the necessity of maintaining human oversight. The role of the legal professional remains central, participants stressed, because it ensures accuracy, accountability, and ethical judgment. The appropriate placement of human expertise within AI-assisted processes is essential to a fair and effective legal system.

That balance between leveraging AI and preserving human judgment is not just good practice; it’s a cornerstone of justice. While Chief Cheatham praises AI’s pattern recognition, he also cautions that it “will call in sick, frequently and unexpectedly.” In other words, AI is a powerful but imperfect tool, and the professionals who rely on it must always be prepared to intervene when it falls short. Moreover, the technology is improving extremely rapidly, and the models we are using today will likely be the worst models we ever use.

Naturally, that readiness is especially critical when individuals’ rights are on the line. “A human cannot just rely on that machine,” said Joyce King, Deputy State’s Attorney for Frederick County in Maryland. “You need a warrant to open that cyber tip separately, to get human eyes on that for confirmation, that we cannot rely on the machine.” Clearly, as the webinar explained, AI does not replace constitutional obligations; rather, it operates within them, and the professionals who use AI are still the guardians of due process.

The human/AI partnership is where justice is served

Bob Rhodes, Chief Technology Officer for Thomson Reuters Special Services (TRSS), echoed that sentiment with a principle that cuts across every application of AI in the justice system. “The number one thing… is a human should always be in the loop to verify what the systems are giving them,” Rhodes said.

This is not a limitation of AI; instead, it’s the design of a system that works. AI identifies the patterns, and trained, experienced professionals evaluate them, act on them, and are accountable for them.

That partnership is where the real opportunity lives. AI can never replace the detective, the prosecutor, the judge, or the defense attorney. However, it can work alongside them, handling the volume and velocity of data that no human team could process alone. That means the humans in the room can focus on what they do best: applying judgment, upholding the law, and protecting individuals’ rights.

For judicial and law enforcement professionals, this is the moment to lean in. The patterns are there, the technology to read them is here, and the opportunity to use both in service of rights — not against them — has never been greater.


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.

The shadow over the bench: Legalweek 2026’s most important session had nothing to do with AI /en-us/posts/government/legalweek-2026-judicial-threats/ Thu, 26 Mar 2026 17:12:25 +0000 https://blogs.thomsonreuters.com/en-us/?p=70142

Key takeaways:

      • Violence against judges is escalating — Targeted shootings, coordinated harassment campaigns, and threats now routinely follow judges to their homes and families.

      • The rhetoric driving the escalation is coming from the highest levels of government — The absence of any public denunciation from the Department of Justice underscores the source of the problem.

      • Will the violence itself become part of judicial rulings? — The endgame of judicial intimidation isn’t that judges stop ruling, it’s that the threat of violence becomes a silent presence in the deliberation itself.


NEW YORK — Attendees who came to the recent Legalweek 2026 conference to talk about AI, agentic workflows, and the business of legal technology were also treated to a session that will likely stay with them, one that had nothing to do with AI.

In that session, four federal judges took the stage; but they were not there to talk about pricing models or AI adoption. They were there to talk about staying alive.

Setting the stage

Jason Wareham, CEO of IPSA Intelligent Systems and a former U.S. Marine Corps judge advocate, introduced the session — a panel of four sitting United States District Court judges — by speaking of how the rule of law once seemed resolute, yet how faith in it has been shaken, year after year. He worked hard to frame his observations as nonpartisan, a matter of institutional fragility rather than political allegiance. It was a generous framing, but one that would not survive the weight of the ensuing discussion.

The Honorable Esther Salas of the District of New Jersey said that the reason she was there has a name. On July 19, 2020, a disgruntled, extremist attorney who had a case before her court arrived at her home during a birthday celebration. He shot and killed her twenty-year-old son, Daniel Anderl. He shot and critically wounded her husband. She has spent the years since on a mission to protect her judicial colleagues from the same fate.

The new normal

Next, the Honorable Kenly Kiya Kato of the Central District of California described what has changed. Judges’ rulings are still based on the Constitution, on precedent, and on the facts; what’s different is the small voice in the back of a judge’s head. That voice, which often surfaces after a judge has issued a contested decision, asks: What will happen after this? It is now expected, Judge Kato explained, that a high-profile order will bring threats. When two colleagues in her district issued prominent decisions, her first thought was for their safety. That is not how it was historically.

The Honorable Mia Roberts Perez of the Eastern District of Pennsylvania asked how we got here, pointing to language from the highest levels of government: judges called monsters, a U.S. Department of Justice declaring war on rogue judges, and, recently, politicians bringing judges’ families into the conversation.

Judge Salas pushed even further. She acknowledged the instinct to frame the problem as bipartisan, but said the current moment is not apples to apples. It is apples to watermelons. The spike in threats since 2015, she argued, traces directly to rhetoric from political leaders using language never before deployed against the bench.


The federal judiciary is looking to break annual records for threats [against judges], and there is an absence of any public denunciation from the Attorney General or the DOJ.


The evidence is not abstract, nor are the victims, and the panel walked through it. Judge John Roemer of Wisconsin, zip-tied to a chair and assassinated in his home. Associate Judge Andrew Wilkinson of Maryland shot dead in his driveway while his family was inside. Judge Steven Meyer of Indiana and his wife Kimberly, shot through their own front door after attackers first posed as a food delivery, then returned days later claiming to have found the couple’s dog. Judge Meyer has just undergone his fifth surgery since the attack.

All of these incidents happened at the judges’ homes.

Judge Salas then played a voicemail, one of thousands that federal judges receive. It was less than 30 seconds long, but it did not need to be longer. While names had been redacted, what remained was a torrent of threats and obscenities, graphic, sexual and violent, delivered with the confidence of someone who does not expect consequences. Some judges receive hundreds of these after a single ruling, often from people with no case before them at all.

The shadow over the courts

Throughout the session, there was a presence the panelists circled but rarely named directly. A shadow that shaped every observation about escalating threats, every reference to rhetoric from the top down, every mention of language never before used by political leaders, of action or inaction the likes of which would have been unthinkable just several years ago. The specifics were spoken. The name, largely, was not.

It didn’t have to be.

Judge Kato said that perhaps the most disheartening aspect of all this is that these threats are getting worse. The people who know better are not doing better. Indeed, she said her children think about these problems every day. What will happen to mom today? Will someone come to the house? These are questions children should not have to carry. They did not sign up for this, and neither did the judges.

In 2026, Judge Salas noted, the federal judiciary is looking to break annual records for threats. She also noted the absence of any public denunciation from the Attorney General or the DOJ. The silence, she said, says a lot.

Not surprisingly, the implications extend beyond the judges themselves. As Judge Salas noted, if judges have to weigh their safety alongside the law, ordinary people don’t stand a chance. If one party is stronger, better funded, or more willing to threaten, then the scales tip.

That is the endgame of judicial intimidation. It’s not that judges stop ruling, but that the violent and the powerful — indeed, the people least fit to hold the scales — can tilt them at will.

That concern echoed an earlier warning from Judge Karoline Mehalchick of the Middle District of Pennsylvania. Judge Mehalchick said that judicial intimidation feeds on misunderstanding. When the public no longer grasps why judges must be insulated from pressure or, conversely, mistakes independence for partisanship, the threat environment becomes easier to justify, easier to ignore, and harder to reverse.


What is perhaps the most disheartening aspect of all this is that these threats are getting worse, and the people who know better are not doing better.


In his 2024 year-end report, U.S. Supreme Court Chief Justice John Roberts identified four threats to judicial independence: violence, intimidation, disinformation, and threats to defy lawfully entered judgments. The panel discussed this report as prophecy fulfilled. Public confidence in the judiciary has plummeted since 2021, and the reasons are complex. The judges insisted they are still doing their jobs the right way, but the violence is spreading anyway.

What survives

Judge Salas asked the audience to watch their thoughts. Are they negative and destructive, or positive and uplifting? Can we start loving more? She ended by sending love and light to everyone in the room.

The judges were visibly emotional on the stage.

The words were beautiful. They were also, in the context of everything that had just been described — the killings, the voicemails, the zip ties, the pizza deliveries masking a threat under a murdered son’s name — resting in a shadow that no amount of love and light could fully dispel.

The room responded with a standing ovation.

Thousands of people came to Legalweek 2026 to talk about the future of legal technology. For one morning, four judges reminded them that none of it matters if the people charged with administering justice cannot do so safely.

So, while the billable hour may survive and the associate will adapt, the harder question, the one that should keep the legal industry awake at night, is whether the bench will hold.


You can find more of our coverage of Legalweek events here

Scaling Justice: Unlocking the $3.3 trillion ethical capital market /en-us/posts/ai-in-courts/scaling-justice-ethical-capital/ Mon, 23 Mar 2026 17:12:28 +0000 https://blogs.thomsonreuters.com/en-us/?p=70042

Key takeaways:

      • An additional funding stream, not a replacement — Ethical capital has the potential to supplement existing access to justice infrastructure by introducing a justice finance mechanism that can fund cases with measurable social and environmental impact.

      • Technology as trust infrastructure — AI and smart technologies can provide the governance scaffolding required for ethical capital to flow at scale, including standardizing assessment, impact measurement, and oversight.

      • Capital is not scarce; allocation is — The true bottleneck is not the availability of funds; rather, it’s the disciplined, investment-grade legal judgment required to evaluate risk, ensure compliance, and measure impact in a way that makes justice outcomes investable.


Kayee Cheung & Melina Gisler, Co-Founders of justice finance platform Edenreach, are co-authors of this blog post

Access to justice is typically framed as a resource problem — too few legal aid lawyers, too little philanthropic funding, and too many people navigating civil disputes alone. As a result, the majority of individuals who face civil legal challenges do so without representation, often because they cannot afford it.

Yet this crisis exists alongside a striking paradox. While 5.1 billion people worldwide face unmet justice needs, an estimated $3.3 trillion in mission-aligned capital — held in donor-advised funds, philanthropic portfolios, private foundations, and impact investment vehicles — remains largely disconnected from solutions.

Unlocking even a fraction of this capital could introduce a meaningful parallel funding stream — one capable of supporting impactful cases that currently fall outside traditional funding models. Rather than depending on charity or contingency, what if justice also attracted disciplined, impact-aligned investment in cases themselves, in addition to funding that supports technology?

Recent efforts have expanded investor awareness of justice-related innovation. Programs like Village Capital’s have helped demystify the sector and catalyze funding for the technology serving justice-impacted communities. Justice tech, or impact-driven direct-to-consumer legal tech, has grown exponentially in the last few years along with increased investor interest and user awareness.

Litigation finance has also grown, but its structure is narrowly optimized for high-value commercial claims with a strong financial upside. Traditional funders typically seek 5- to 10-times returns, prioritizing large corporate disputes and excluding cases with significant social value but lower monetary recovery, such as consumer protection claims, housing code enforcement, environmental accountability, or systemic health negligence.

Justice finance offers a different approach. By channeling capital from the impact investment market toward the justice system and aligning legal case funding with established impact measurement frameworks, it reframes certain categories of legal action as dual-return opportunities that deliver both financial and social returns.

This is not philanthropy repackaged. It’s the idea that measurable justice outcomes can form the basis of an investable asset class, if they’re properly structured, governed, and evaluated.

Technology as trust infrastructure

While mission-aligned capital is widely available, the ability to evaluate legal matters with the necessary rigor remains limited. Responsibly allocating funds to legal matters requires complex expertise, including legal merit assessment, financial risk modeling, regulatory compliance, and impact evaluation. Cases must be considered not only for their likelihood of success and recovery potential, but also for measurable social or environmental outcomes.

Today, that assessment is largely manual and capacity-bound by small teams. The result is a structural bottleneck as capital waits on scalable, trusted evaluation and allocation.

Without a way to standardize and responsibly scale analysis of the double bottom line, however, justice funding remains bespoke, even when resources are available.

AI-enabled systems can play a transformative role by standardizing assessment frameworks and supporting disciplined capital allocation at scale. By encoding assessment criteria, decision pathways, and compliance safeguards and then mapping case characteristics to impact metrics, technology can enable consistency and allow legal and financial experts to evaluate exponentially more matters without lowering their standards.

And by integrating legal assessment, financial modeling, and impact alignment within a governed tech framework, justice finance platforms such as Edenreach can function as the connective tissue. Through such a platform, impact metrics are applied consistently while human experts remain responsible for final determinations, thereby reducing friction, increasing transparency, and supporting auditability.

When incentives align

It’s no coincidence that many of the leaders exploring justice finance models are women. Globally, women experience legal problems at disproportionately higher rates than men yet are less likely to obtain formal assistance. Women also control significant pools of global wealth. Indeed, 75% of women believe investing responsibly is more important than returns alone, and female investors are almost twice as likely as their male counterparts to prioritize environmental, social, and corporate governance (ESG) factors when making investment decisions.

When those most affected by systemic barriers also shape capital allocation decisions, structural change becomes more feasible. Despite facing steep barriers in legal tech funding (just 2% goes to female founders), women represent a notably larger share of founders in access-to-justice legal tech than the 13.8% they represent across legal tech overall.

This alignment between lived experience, innovation leadership, and capital stewardship creates an opportunity to reconfigure incentives in favor of meaningful change.

Expanding funding and impact

Justice financing will not resolve the justice gap on its own. Mission-focused tools for self-represented parties, legal aid, and court reform remain essential components of a functioning justice ecosystem. However, ethical capital represents an additional structural layer that can expand the range of cases and remedies that receive financial support.

Impact orientation can accommodate longer time horizons, alternative dispute resolution pathways, and remedies that extend beyond monetary damages. In certain matters, particularly those involving environmental harm, systemic consumer violations, or community-wide injustice, capital structured around impact metrics may identify and enable solutions that traditional litigation finance models do not prioritize.

For example, capital aligned with defined impact frameworks may support outcomes that include remediation programs, compliance reforms, or community investments alongside financial recovery. These approaches can create durable benefits that outlast a single judgment or settlement.

Of course, solving deep-rooted inequities and legal system complexity requires more than new tools and new investors. It requires designing capital pathways that are repeatable, accountable, and aligned with measurable public benefit.

Although justice finance may not be a fit for every case and has yet to see widespread uptake, it does have the potential to reach cases that currently fall through the cracks — cases that have merit, despite falling outside traditional litigation finance models and legal aid or impact litigation eligibility criteria.


You can find other installments of our Scaling Justice blog series here

]]>
The efficiency imperative: AI as a tool for improving the way lawyers practice /en-us/posts/ai-in-courts/improving-lawyers-practice/ Wed, 18 Mar 2026 17:45:16 +0000 https://blogs.thomsonreuters.com/en-us/?p=70024

Key insights:

      • AI brings improved efficiency — AI accelerates tasks like document review and research, freeing lawyers to pursue more high-value work for clients.

      • AI does the work of a team of lawyers — AI levels the playing field for small law firms and solo practitioners by providing additional capacity without adding headcount, thereby allowing fewer lawyers to do the work of many.

      • Yet, AI still needs guardrails — Lawyers must remain accountable, with human oversight and review to ensure that AI outputs are accurate, thereby preserving nuance and professional judgment.


AI is no longer a theoretical concept for legal professionals, nor is it a nice-to-have for law firms seeking to impress their clients with improved efficiency and cost savings. The practical question now is how to adopt AI in ways that improve lawyers’ speed and capacity without compromising accuracy, confidentiality, or professional judgment.

The strongest near-term value shows up where modern practice is most strained: high-volume inputs and relentless timelines. In that environment, AI can be most helpful as an accelerant for the first pass through large bodies of material.

The possibilities, opportunities, and challenges of using AI in this way were discussed by a panel of experts in a recent webinar hosted jointly by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

One panelist, Mark Francis, a partner at Holland & Knight, described one way that AI can be an enormous help. “Anything where we’re dealing with large volume of materials that need to be reviewed [such as] large sets of documents, large sets of legal research, large sets of discovery. Obviously, AI can be leveraged in all of those circumstances.” That framing is important because it anchors AI’s utility in a familiar workflow: review, triage, and synthesis at scale.

AI also has a role earlier in the workflow than many attorneys expect. In addition to sorting and summarizing, it can help generate starting structures. For lawyers drafting motions, client advisories, demand letters, contract markups, or internal investigations memos, the hardest step can be getting traction from a blank page. “It’s really good at content or idea generation,” Francis said, adding that lawyers can ask AI to “generate some ideas for me on this topic, or generate an outline of a document to cover a particular issue.”


“AI is definitely going to benefit some of the small law firms who cannot actually afford the workforce. AI can be an extension when it comes to the automation.”


Of course, that does not mean letting an AI model decide what the law is; rather, it means using AI to produce an initial outline, identify possible issues to consider, or propose alternate ways to organize an argument. Then, the attorney should apply their own judgment to accept, reject, refine, and verify the AI’s output.

For legal teams, the ideal mindset is that AI can compress the time between intake and a workable first draft, whether that draft is a research plan, a deposition outline, a set of contract fallback positions, or a motion framework. However, speed is only valuable if it facilitates careful lawyering, not just taking shortcuts.

Efficiency that scales down, not just up

AI’s impact is not limited to large law firms with dedicated tech & innovation budgets. In fact, the benefits may be most transformative for smaller legal organizations that feel every hour of administrative drag and every unstaffed matter. Panelist Ashwini Jarral, a Strategic Advisor at IGIS, underscored how broad the current level of AI adoption already is. “AI is already being used in a lot of legal research, contract analysis, and in office operations,” Jarral explained. “Whether that’s in a small law firm or a large law firm, everybody can benefit from that automation with this AI.”

For many practices, that list maps directly onto the work that consumes lawyers’ time without always adding commensurate value: repetitive research steps, first-pass contract review, intake and scheduling, matter administration, and other operational tasks.

Historically, scale favored organizations that could hire more associates, paralegals, and support staff to push volume through the pipeline. Now, AI offers a different form of leverage: additional capacity without adding headcount. “It is definitely going to also benefit some of the small law firms who cannot actually afford the workforce,” Jarral said, adding that “AI can be an extension when it comes to the automation.” For a solo or small firm, that extension can show up as faster first-pass review of contracts, quicker summarization of records, more consistent intake workflows, and reduced time spent on repetitive back-office tasks.

At the same time, it is crucial to be clear-eyed about what is being automated. While AI can help deliver efficiency, it does not offer legal judgment itself. The legal profession still must decide, matter by matter, what level of review is required and what risks are acceptable.


“Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”


And that’s where implementation discipline becomes a strategic differentiator. Law firms that treat AI as a general-purpose shortcut tend to create risk, while firms that treat AI as a workflow component, with guardrails, review steps, and clear accountability, are more likely to capture value without compromising quality.

The non-negotiable: lawyers remain accountable

Any serious conversation about AI in legal practice must address these limits, panelists agreed. The Hon. Linda Kevins, a Justice on the Supreme Court in the 10th Judicial District of New York (Suffolk County), offered the most direct articulation of the boundary line: “Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”

Indeed, legal work is saturated with nuance. The same set of facts can carry different weight depending on jurisdiction, judge, forum, procedural posture, and the client’s goals and risk tolerance. Even when the law is clear, the right action often is not. To strive for true justice requires judgment about timing, framing, business consequences, reputational risk, and settlement dynamics. Those are not merely inputs for an AI to process — they are human decisions that define legal representation.

As the webinar made clear, this is the point at which responsible use becomes practical, not abstract. If AI is used for research support, contract analysis, or document review, lawyers need an explicit approach for verification and oversight. The outputs may look polished and sound confident; however, confidence is not accuracy, and professional responsibility does not shift to a vendor or an AI model. Human review is not a ceremonial or perfunctory step. Rather, it is the core control that protects clients and the court, and it is the inflection point that turns AI from a novelty into a defensible tool.

In practice, the human in the loop means deciding in which instances AI can assist and in what instances it cannot. It also means reserving an attorney’s time for the decisions that carry legal and ethical consequences and building repeatable habits that prevent teams from drifting into overreliance on AI, especially under deadline pressure.

The legal profession can capture real benefits from AI, including speed, scalability, and improved access, but only if it adopts the technology in a way that preserves what Justice Kevins highlighted: training, nuance, and human accountability.


You can find out more about how AI and other advanced technologies are impactingĚýbest practices in courts and administration here

]]>
When courts meet GenAI: Guiding self-represented litigants through the AI maze /en-us/posts/ai-in-courts/guiding-self-represented-litigants/ Thu, 19 Feb 2026 18:20:08 +0000 https://blogs.thomsonreuters.com/en-us/?p=69532

Key insights:

      • Considering courts’ approach — Although many courts do not interact with litigants prior to filings, they can explore how to equip court staff to discuss AI use with litigants.

      • Risk of generic AI tools — AI use in legal settings can’t be simply categorized as safe or risky; jurisdiction, timing, and procedure are vital factors, making generic AI tools unreliable for court-specific needs.

      • Specialty AI tools require testing — Purpose-built court AI tools offer a safer alternative for litigants, yet these require development and extensive testing.


Self-represented litigants have always pieced together legal help from whatever sources they can access. Now that AI is part of that mix, courts are working to help people use this advanced technology responsibly without implying an endorsement of any particular tool or even the use of AI.

Many litigants cannot afford an attorney; others may distrust the representation they have or may not know where to begin. In any case, people need a meaningful way to interact with the legal system. Used carefully and responsibly, AI can support access to justice by helping self-represented litigants understand their options, organize information, and draft documents, while still requiring litigants to verify their information and consult official court rules and resources.

These issues were discussed in a recent webinar. The panel explored the potential benefits of AI for access to justice and the operational challenges of integrating AI into public-facing guidance for litigants.

The problem with “Just ask AI”

Angela Tripp of the Legal Services Corporation noted that people handling legal matters on their own have long relied on a mix of resources, “some of which were designed for that purpose, and some of which were not.” AI is simply a new tool in that environment, she added. The primary challenge is that court processes are rule-based and time-sensitive, and a mistake can mean missing a deadline, submitting the wrong document, or misunderstanding a requirement that affects the case.

Access to justice also requires more than access to information in general. Court users need information that is relevant, complete, accurate, and up to date. Generic AI systems, such as most public-facing tools, are trained on broad internet text and may not reliably deliver that level of specificity for a particular court, case type, or stage of a proceeding. Jurisdiction, timing, and procedure all matter. Unfortunately, AI can omit key steps or emphasize the wrong issues, and self-represented litigants may not have the legal experience to recognize what is missing.

At the same time, AI offers several potential benefits to self-represented litigants. It can explain concepts in plain language, help users structure a narrative, and produce a first draft faster than many people can on their own. The challenge is aligning those strengths with the precision that court processes demand.

A strategic pivot: from teaching litigants to equipping staff

In the webinar, Stacey Marz, Administrative Director of the Alaska Court System, described her team’s early efforts to give self-represented litigants clear guidance about safer and riskier uses of AI, including examples of how to properly prompt generative AI queries.

The team tried to create traffic light categories that would simplify decision-making; however, they found this approach very challenging despite several draft efforts to create useful guidance. Indeed, AI use can shift from low-risk to high-risk depending on context, and it was hard to provide examples without sounding like the court was endorsing a tool or sending people down a path to which the court could not guarantee results.

The group ultimately shifted to a more practical approach — training the people who already help litigants. The new guidance targets public-facing staff such as clerks, librarians, and self-help center workers. Instead of teaching litigants how to prompt AI, it equips staff to have informed, consistent conversations when litigants bring AI-generated drafts or AI-based questions to the counter.

The framework emphasizes acknowledgment without endorsement. It suggests language such as:

“Many people are exploring AI tools right now. I’m happy to talk with you about how they may or may not fit with court requirements.”

From there, staff can explain why court filings require extra caution and direct users to court-specific resources.

This approach also assumes good faith. A flawed filing is often a sincere attempt to comply, and a litigant may not realize that an AI output is incomplete or incorrect.

Purpose-built tools take time

The webinar also discussed how courts are exploring purpose-built AI tools, including judicial chatbots designed around court procedures and grounded in verified information. Done well, these tools can reduce common problems associated with generic AI systems, such as jurisdiction mismatch, outdated requirements, and fabricated or hallucinated citations.

However, building reliable court-facing AI demands significant time and testing. Marz shared Alaska’s experience, noting that what the team expected to take three months took more than a year because of extensive refinement and evaluation. The reason is straightforward: Court guidance must be highly accurate, and errors can materially harm someone’s legal interests. In fact, even after careful testing, Alaska still included cautionary language, recognizing that no system can guarantee perfect answers in every situation.

The path forward

Legal Services’ Tripp highlighted a central risk: Modern AI tools can be clear, confident, and easy to trust, which can lead people to over-rely on them. Courts must recognize this tension. Courts are not trying to prevent AI use; rather, many are working toward realistic norms that treat AI as a drafting and organizing aid but require litigants to verify claims against official court sources and seek human support when possible.

Marz also emphasized that courts should generally assume filings reflect a litigant’s best effort, including in those cases in which AI contributed to confusion. The goal is education and correction rather than punishment, especially for people navigating complex processes without representation.

Some observers describe this moment as an early AOL phase of AI, akin to the very early days of the World Wide Web — widely used, evolving quickly, and uneven in its reliability. That reality makes clear guidance and consistent messaging more important, not less.

This shift among courts from teaching litigants to use AI to teaching court staff and other helpers how to talk to litigants about AI reflects a practical effort on the part of courts to reduce the risk of harm while expanding access to understandable information.

As is becoming clearer every day, AI can make legal processes feel more navigable by helping self-represented litigants draft, summarize, and prepare. Realizing that value, however, requires clear guardrails, court-specific verification, and careful implementation, especially when a missed detail can change the outcome of a case.


You can find out more about how AI and other advanced technologies are impactingĚýbest practices in courts and administrationĚýhere

]]>
How AI-powered access to justice is impacting unauthorized practice of law regulations /en-us/posts/government/ai-impacts-unauthorized-practice-of-law/ Mon, 02 Feb 2026 17:55:20 +0000 https://blogs.thomsonreuters.com/en-us/?p=69263

Key insights:

      • Courts and the legal profession need to show leadership — Given their specialized knowledge of the needs of litigants and of the court system, courts need to take the lead in defining the unauthorized practice of law.

      • 3 paths forward to workable regulatory solutions — Recent discussions and research around this subject offered three paths toward modernizing UPL definitions.

      • Uncertainty harms users and innovation — Fear of UPL can drive self-censorship and market exits, even as litigants continue to use publicly available GenAI tools.


Today, many Americans experience legal issues but lack proper access to legal representation. At the same time, AI tools capable of providing legal information are rapidly evolving and already in widespread use. Between these two facts lies a critical definitional problem that courts and state bars must urgently address: How to define the unauthorized practice of law (UPL) in a way that doesn’t further curtail access to justice.

This discussion is not theoretical. It directly determines whether AI-based legal services can operate, how they should be regulated, and ultimately whether AI can help unrepresented or self-represented litigants gain meaningful access to justice. This issue was explored in more depth during a recent webinar hosted jointly by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

The need for clear definitions

During the webinar, Alaska Supreme Court Administrative Director Stacey Marz noted that “there is no uniform definition of the practice of law” and that UPL regulations represent “a real varied continuum of scope and clarity.” This variation makes compliance challenging for technology providers, especially as they navigate 50 different state standards.

UPL generally occurs when someone “not licensed as an attorney attempts to represent or perform legal work on behalf of another person,” explained Cathy Cunningham, Senior Specialist Legal Editor at ¶¶ŇőłÉÄę Practical Law.

Marz added that such legal advice typically involves “applying the law, rules, principles, and processes to specific facts and circumstances of that individual client — and then recommending a course of action.”

The challenge, however, is that AI can appear to do exactly this, yet the regulatory framework remains unclear about whether and how this should be permitted and how consumers can be protected.

3 paths forward

During the recent webinar, panelists discussed several different approaches to UPL regulations, outlining three paths that state courts could take:

Path 1: Explicitly enabling tools with regulatory framework — UPL statutes can be revisited to explicitly allow purpose-built AI legal tools to operate without threat of UPL enforcement, provided they meet certain requirements. Prof. Dyane O’Leary, Director of Legal Innovation & Technology at Suffolk University, emphasized that consumer-facing AI legal tools are already being used for tailored legal advice, arguing that some oversight is better than “just letting these tools continue to operate and hoping consumers aren’t harmed by them.”

Path 2: Creating regulatory sandboxes — Courts could establish temporary experimental zones in which AI legal service providers can operate under controlled conditions while regulators gather data about efficacy and safety through feedback and research, with an eye toward informing future regulation reform.

Path 3: Narrowing UPL to human conduct — Clarifying that existing UPL rules apply only to humans who hold themselves out as attorneys in tribunals or courtrooms, or who create legal documents under the guise of being a human attorney, would effectively leave AI-powered legal tools clearly outside UPL restrictions and open up a “new pocket of the free market” for consumers.

Utah Courts Self-Help Center Director Nathanael Player referenced Utah Supreme Court Standing Order Number 15, which established the state’s regulatory sandbox using a fundamentally different standard: not whether services match what lawyers provide, but rather “is this better than the absolute nothing that people currently have available to them?”

Prof. O’Leary reframed the comparison itself, suggesting that instead of comparing consumers who use AI tools to consumers with an attorney, the framework should be “consumers that use legal AI tools, and maybe consumers that otherwise have no support whatsoever.”

The personhood puzzle

“AI, at this time, does not have legal personhood status,” said Practical Law’s Cunningham. “So, AI can’t commit unauthorized practice of law because AI is not a person.”

However, Player pushed back on this reasoning, clarifying that “AI does have a corporate personhood. There is a corporation that made the AI, [and] the corporation providing that does have corporate personhood.” He added, however, that “it’s not clear, I don’t think we know whether or not there is… some sort of consequence for the provision of ChatGPT providing legal services.”


You can view the full webinar here


This ambiguity creates what might be called the personhood gap, a zone of legal uncertainty with serious consequences for both innovation and access to justice.

Colin Rule, CEO at online dispute resolution platform ODR.com, explained that “one of the major impacts of UPL is, actually self-censorship.” After receiving a UPL letter from a state bar years ago, he immediately exited that market. This pattern repeats across the legal tech landscape, leaving companies hesitant to innovate.

Rule’s bottom line resonates with anyone trying to build solutions in this space. “As a solution provider, what I want is guidance,” Rule explained. “Clarity is what I need most… that’s my number one priority.”

Moving forward: Clarity over perfection

The legal profession needs to lead on this issue, and that means state bars and state supreme courts must take action now. The tools are already in use, and the question is not whether AI will play a role in legal services, but rather whether that role will be defined by thoughtful regulation or by default.

The solution is for the judiciary to provide clear guidance on what services can be offered, by whom, and under what conditions. To do that, courts must first acknowledge that for most people, the choice is not between an AI tool and a lawyer but between an AI tool and nothing. Given that, states must walk a path that both encourages innovation and protects consumers.

To this end, legal professionals and courts should experiment with these tools, understand their trajectory as well as their current limitations, and work collaboratively with developers to create frameworks that prioritize consumer protection without stifling innovation that could genuinely expand access to justice.


You can find out more about how courts and legal professionals are dealing with the unauthorized practice of law here

]]>
Responsible AI use for courts: Minimizing and managing hallucinations and ensuring veracity /en-us/posts/ai-in-courts/hallucinations-report-2026/ Wed, 28 Jan 2026 10:51:10 +0000 https://blogs.thomsonreuters.com/en-us/?p=69181

Key insights:

      • AI usage in courts needs verifiable reliability — Unlike other fields, errors and hallucinations caused by AI in a court setting can create due-process issues.

      • Skepticism is professional responsibility — Judges’ interrogation of AI sources and accountability concerns are vital guardrails for minimizing these problems.

      • Governance over perfection — Courts and legal professionals should focus on systematic management of AI hallucinations through clear protocols, human oversight, and mandatory verification to ensure veracity.


AI hallucinations have become one of the most urgent and most misunderstood issues in professional work today; and as generative AI (GenAI) moves from an interesting experiment to common usage in many workplace infrastructures, these issues can cause significant problems, especially for courts and the professionals and individuals who use them.


Today, AI can be used in everything from assisted research to guided drafting of documents, court briefs, and even court orders. With the development of tools supported by GenAI and agentic AI, the very infrastructure of professional work has shifted to include these offerings.

Yet, in most business settings, a wrong answer is an inconvenience. It requires minor corrections and has minimal impact. In the justice system, a wrong answer can be a due-process problem that strongly underscores the need for courts and legal professionals to ensure that their AI use is verifiably reliable when it counts.

At the same time, the direction of travel is clear: AI adoption isn’t a fad we can simply wait out, and it isn’t inherently at odds with high-stakes decision-making. Used well, these tools can reduce administrative burden, speed up access to relevant information, and help court professionals navigate large volumes of material more efficiently. The real question is not whether courts will encounter AI in their workflows, but how they will define responsible use, especially in moments in which accuracy isn’t a feature, it’s the foundation.


“Whether you are a judge [or] an attorney, credibility is everything, particularly when you come before the court.”

— Justice Tanya R. Kennedy, Associate Justice of the Appellate Division, First Judicial Department of New York


To examine these issues more deeply, the Thomson Reuters Institute has published a new report, which frames hallucinations not as a sensationalistic gotcha, but as a practical risk that must be managed with policy, process, and professional judgment. The report also features valuable insight on this subject from judges and court stakeholders who today are evaluating AI in the real operating environment of legal proceedings, courtroom expectations, and the daily administration of justice.

This perspective is essential. Technical teams can explain how models generate language and why they sometimes produce confident-sounding errors. Judges and court staff, however, can explain something equally important: what accuracy actually means in practice. In courts, accuracy isn't just about getting the gist right; it's about precise citations, faithful characterization of the record, correct procedural posture, and language that withstands scrutiny. As the report points out, hallucinated information that is relied upon isn't merely bad output; it can distort justice itself.

Managing AI as a professional responsibility

Crucially, the report reflects that judicial skepticism about AI is not simple technophobia; it is professional responsibility. Judges are trained to interrogate sources, weigh credibility, and understand the downstream consequences of errors. They may ask: What is the provenance of this information? Can I reproduce it independently? Who is accountable if it's wrong? These questions aren't barriers to innovation; they are the guardrails that innovation requires.

What emerges is a pragmatic middle ground that embraces the upside of AI use in courts while treating hallucinations as a predictable occurrence that can be managed systematically. Rather than concluding that because AI hallucinates it cannot be used, the more workable conclusion is that because AI can hallucinate, its outputs must be designed, handled, and verified accordingly, likely with the help of other advanced tools. As the report points out, courts don't need a perfect AI; they need repeatable protocols that keep human decision-makers in control and keep the record clean.

As the report ultimately demonstrates, managing hallucinations in courts isn't about chasing perfection; it's about protecting veracity. It's about using the right tools to build workflows in which the technology consistently supports the truth-finding process instead of quietly eroding it. And it's about recognizing that in the legal system, responsibility doesn't disappear when a new tool arrives; it becomes even more important to ensure the new tool doesn't erode it.


You can download a full copy of the Thomson Reuters Institute's report here.

Scaling Justice: How technology is reshaping support for self-represented litigants /en-us/posts/ai-in-courts/scaling-justice-technology-self-represented-litigants/ Fri, 23 Jan 2026 15:31:24 +0000 https://blogs.thomsonreuters.com/en-us/?p=69124

Key takeaways:

      • From scarcity to abundance — Technology has shifted the challenge in access to justice from scarcity of legal help to issues of accuracy, governance, and effective support. AI and digital tools now provide abundant legal information to self-represented litigants, but they raise new questions about reliability, oversight, and alignment with human needs.

      • The necessity of human-in-the-loop — Human involvement remains essential for meaningful resolution. While AI can explain procedures and guide users, real support often requires relational and institutional human guidance, especially for vulnerable populations facing anxiety, low literacy, or systemic bias.

      • One part of a bigger question — Systemic reform and broader approaches are needed beyond technological fixes because technology alone cannot solve deep-rooted inequities or the complexity of the legal system. Efforts should include prevention, alternative dispute resolution, and redesigning systems to prioritize just outcomes and accessibility.


Access to justice has long been framed as a problem of scarcity, with too few legal aid lawyers and insufficient funding forcing systems to be built in triage mode. Underlying this framing has been the unspoken assumption that most people navigating civil legal problems would do so without meaningful help, often because their issues were not compelling or lucrative enough to justify legal representation.

This framing no longer holds, however. Legal information, once tightly controlled by legal professionals, publishers, and institutions, is now abundantly available. Large language models, search-based AI systems, and consumer-facing legal tools can explain civil procedure, identify relevant statutes, translate dense legalese into plain language, and generate step-by-step guidance in seconds.

Increasingly, self-represented litigants are actively using these tools, whether courts or legal aid organizations endorse them or not. Katherine Alteneder, principal at Access to Justice Innovation and former Director of the Self-Represented Litigation Network, notes: “This reality cannot be fully controlled, regulated out of existence, or ignored.”

And as Demetrios Karis, HFID and UX instructor at Bentley University, argues: “Withholding today’s AI tools from self-represented litigants is like withholding life-saving medicine because it has potential side effects. These systems can already help people avoid eviction, protect themselves from abuse, keep custody of their children, and understand their rights. Doing nothing is not a neutral choice.”

Thus, the central question is no longer whether technology can help self-represented litigants, but rather how it should be deployed — and with what expectations, safeguards, and institutional responsibilities.

Accuracy, error & tradeoffs

The baseline capabilities of general-purpose AI systems have advanced dramatically in a matter of months. For common use cases that self-represented litigants most likely seek — such as understanding process, identifying next steps, preparing for hearings, and locating authoritative resources — today’s frontier models routinely outperform well-funded legal chatbots developed at significant cost just a year or two ago.


The central question is no longer whether technology can help self-represented litigants, but rather how it should be deployed — and with what expectations, safeguards, and institutional responsibilities.


These performance gains raise important questions about the continued call for extensive customization to deliver basic legal information. However, performance improvements do not eliminate the need for careful design. Tom Martin, CEO and founder of LawDroid (and columnist for this blog), emphasizes that “minor tweaking” is subjective, and that grounding AI tools in high-quality sources, appropriate tone, and clear audience alignment remains essential, particularly when an organization takes responsibility and assumes liability for the tool’s voice and output.

Not surprisingly, few topics in the legal tech community generate more debate than AI accuracy, but it cannot be evaluated in isolation. Human lawyers make mistakes, static self-help materials become outdated, and informal advice from friends, family, or online forums is often wrong. Models should be evaluated against realistic alternatives, especially when the alternative is no help at all.

Off-the-shelf tools now perform surprisingly well at generating plain-language explanations, often drawing on primary law, court websites, and legal aid resources. In limited testing, inaccuracies tend to reflect misunderstandings or overgeneralizations rather than pure fabrication. While these errors are still serious, they may be easier to detect and correct with review.

Still, caution is key, not least because AI systems tend to tell people what they want to hear in order to keep them on the platform. Claudia Johnson of Western Washington University's Law, Diversity, and Justice Center asks what an acceptable error rate is when tools are deployed to vulnerable populations and reminds organizations of their duty of care. Mistakes, especially those that are known and left uncorrected, can carry legal, ethical, and liability consequences that cannot be ignored.

Knowledge bases are infrastructure, but more is needed

Vetted, purpose-built, and mission-focused solution ecosystems are emerging to fill the gap between infrastructure and problem-solving. The Justice Tech Directory from the Legal Services National Technology Assistance Project (LSNTAP) provides legal aid organizations, courts, and self-help centers with visibility into curated tools that incorporate guardrails, human review, and consumer protection in ways that general-purpose AI platforms do not.

Of course, this infrastructure does not exist in a vacuum. Indeed, these systems address the real needs of real people. While calls for human-in-the-loop systems are often framed as safeguards against technical failure, some of the most important reasons for human involvement are often relational and institutional. Even accurate information frequently fails to resolve legal problems without human support, particularly for people experiencing anxiety, shame, low literacy, or systemic bias within courts.


Not surprisingly, few topics in the legal tech community generate more debate than AI accuracy, but it cannot be evaluated in isolation.


A human in the loop can improve how self-represented litigants are treated by clerks, judges, and opposing parties. Institutional review models often provide this interaction at pre-filing document clinics, navigator-supported pipelines, and structured AI review workshops that integrate human judgment and augment human effort rather than replacing it.

Abundance and the limits of technology

Information does not automatically produce equity. Technology cannot make up for existing, persistent systemic issues, and several prominent voices caution against treating AI as a workaround for deeper system failures. Richard Schauffler of Principal Justice Solutions notes that the underlying problem with AI in the legal world is that the legal process itself is overly complicated, mystified in jargon, inefficient, expensive, and deeply unsatisfying in terms of justice and fairness, and that using AI to automate that process does not change this fact.

Without changes at the courthouse level, upstream technological improvements may not translate into just outcomes. Bias, discrimination, and resource constraints cannot be solved by technology alone. Even perfect information from a lawyer does not equal power when structural inequities persist.

Further, abundance fundamentally changes the problem. As Alteneder notes, the primary problem now is not access but "governance, trust, filtering, and alignment with human values." Similar patterns appear in healthcare, journalism, and education: without scaffolding, technology often widens gaps, benefiting those with greater capacity to interpret, prioritize, and act. For self-represented litigants, the most valuable support is often not answers but navigation: what matters most now, which paths are realistic, when to escalate, and when legal action may not serve broader life needs.

Focusing solely on court-based self-help misses an opportunity to intervene earlier, especially on behalf of self-represented litigants. AI-enabled tools have the potential to identify upstream legal risk and connect people to mediation, benefits, or social services before disputes harden.


You can find more insights about how courts are managing the impact of advanced technology in our Scaling Justice series here.
