Self-represented litigants Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/self-represented-litigants/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology, and human expertise you need to find trusted answers.

Scaling Justice: AI is scaling faster than justice, revealing a dangerous governance gap
/en-us/posts/ai-in-courts/scaling-justice-governance-gap/ (April 13, 2026)

Key takeaways:

      • AI frameworks need to keep up with implementation — While AI governance frameworks are being developed and enacted globally, their effectiveness depends on enforceable mechanisms within domestic justice systems.

      • Access to justice is essential for trustworthy AI regulation — Rights and protections are only meaningful if individuals can understand, challenge, and seek remedies for AI-driven decisions. Without operational access, governance frameworks risk remaining theoretical.

      • People-centered justice and human rights must anchor AI governance — Embedding human rights standards and ensuring equal access to justice in AI regulation strengthens public trust, accountability, and the credibility of both public institutions and private companies.


AI governance is accelerating across global, national, and local levels. As public investment in AI infrastructure expands, new oversight bodies are emerging to assess safety, risk, and accountability. The global policy conversation has shifted from principles to the implementation of meaningful guardrails and AI governance frameworks, which legislators are now drafting and enacting.

These developments reflect growing recognition that AI systems demand structured oversight and a shift from voluntary safeguards and standards to institutionalized governance. One critical dimension remains underdeveloped, however: how do these frameworks function in practice? Are they enforceable? Do they provide accountability? Do they ensure equal access?

AI governance will not succeed on the strength of international declarations or regulatory design alone; rather, domestic justice systems will determine whether it works. At this intersection, the connection between AI governance and access to justice becomes real.

In early February, leaders across government, the legal sector, international organizations, industry, and civil society convened for an expert discussion. The following reflections attempt to build on that dialogue and its urgency.

From principles to enforcement

Over the past decade, AI governance has evolved from hypothetical ethical guidelines to voluntary commitments, binding regulatory frameworks, and risk-based approaches. Despite these advancements, however, many past attempts to provide structure and governance have been quickly outpaced by technology and remain insufficient without enforcement mechanisms. As Anoush Rima Tatevossian of The Future Society observed: “The judicial community should have a role to play not only in shaping policies, but in how they are implemented.”

Frameworks establish expectations, while courts and dispute resolution mechanisms interpret rules, test rights, evaluate harm, assign responsibility, and determine remedies. If individuals are not empowered to safeguard their rights and cannot access these mechanisms, governance frameworks remain theoretical or are casually ignored.

This challenge reflects a broader structural constraint. Even without AI, legal systems struggle to meet demand. In the United States alone, 92% of people do not receive the help they need in accessing their rights in the justice system. Introducing AI into this environment without strengthening access can risk widening, rather than narrowing, the justice gap.


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.


Justice systems serve as the operational core of AI governance. By extending the rule of law into unregulated areas, they provide the infrastructure that enables accountability: interpreting regulatory provisions in specific cases, assessing whether AI-related harms violate legal standards, allocating responsibility across public and private actors, and providing accessible pathways for redress.

Justice systems also generate critical feedback. Disputes involving AI systems expose gaps in transparency, fairness, and accountability, and legal professionals see where governance frameworks first break down in real-world conditions, often long before policymakers do. As a result, justice systems function as an early signal of policy effectiveness and rights protections.

Importantly, AI governance does not require entirely new legal foundations. Human rights frameworks already provide standards for legality, non-discrimination, due process, and access to remedy, and these standards apply directly to AI-enabled decision-making. “AI can assist judges but must never replace human judgment, accountability, or due process,” said Kate Fox Principi, Lead on the Administration of Justice at the United Nations (UN) Office of the High Commissioner for Human Rights (OHCHR), during the February panel.

Clearly, rights are only meaningful when individuals can exercise them — this constraint is not conceptual, it’s operational. Systems must be understandable, affordable, and responsive, and institutions should be capable of evaluating complex, technology-enabled disputes.

Trust, markets & accountability

Governance frameworks that do not account for these dynamics risk entrenching inequities rather than mitigating them. An individual’s ability to understand, challenge, and seek a remedy for automated decisions determines whether governance is credible. A people-centered justice approach asks whether individuals can meaningfully engage with the system, not just whether rules exist. For example, women face documented barriers to accessing justice across jurisdictions, and AI systems trained on biased data can replicate or amplify existing disparities in employment, financial services, healthcare, and criminal justice.

“Institutional agreement rings hollow when billions of people experience governance as remote, technocratic, and unresponsive to their actual lives,” said Alfredo Pizarro of the Permanent Mission of Costa Rica to the UN. “People-centered justice becomes essential.”

AI systems already shape outcomes across employment, financial services, housing, and justice. Entrepreneurs, law schools, courts, and legal services organizations are already building AI-enabled tools that help people navigate legal processes and assert their rights more effectively. Governance design will determine whether these tools expand access to justice or introduce new barriers.

Private companies play a central role in developing and deploying AI systems. Their products shape economic and social outcomes at scale. For them, trust is not abstract; it is a success metric. “Innovation depends on trust,” explained Iain Levine, formerly of Meta’s Human Rights Policy Team. “Without trust, products will not be adopted.” And trust, in turn, depends on enforceability and equal access to remedy.

AI governance will succeed or fail based on access

As Pizarro also noted, justice provides “normative continuity across technological rupture.” Indeed, these principles already exist within international human rights law and people-centered justice; although they precede the advent of autonomous systems, they provide standards for evaluating discrimination, surveillance, and procedural fairness, and remain durable as new challenges to upholding justice and the rule of law emerge.

People-centered justice was not designed for legal systems addressing AI-related harms, but its outcome-driven orientation remains durable as new justice problems emerge.

The current stage presents an opportunity to align AI governance with access to justice from the outset. Beyond well-drafted rules, we need systems that people can use. And that means that any effective governance requires coordination between policymakers, legal professionals, and the public.


You can find other installments of our Scaling Justice blog series here

Scaling Justice: Unlocking the $3.3 trillion ethical capital market
/en-us/posts/ai-in-courts/scaling-justice-ethical-capital/ (March 23, 2026)

Key takeaways:

      • An additional funding stream, not a replacement — Ethical capital has the potential to supplement existing access to justice infrastructure by introducing a justice finance mechanism that can fund cases with measurable social and environmental impact.

      • Technology as trust infrastructure — AI and smart technologies can provide the governance scaffolding required for ethical capital to flow at scale, including standardizing assessment, impact measurement, and oversight.

      • Capital is not scarce; allocation is — The true bottleneck is not the availability of funds; rather, it’s the disciplined, investment-grade legal judgment required to evaluate risk, ensure compliance, and measure impact in a way that makes justice outcomes investable.


Kayee Cheung & Melina Gisler, Co-Founders of justice finance platform Edenreach, are co-authors of this blog post

Access to justice is typically framed as a resource problem — the idea that there are too few legal aid lawyers, too little philanthropic funding, and too many people navigating civil disputes alone. As a result, most individuals who face civil legal challenges do so without representation, often because they cannot afford it.

Yet this crisis exists alongside a striking paradox. While 5.1 billion people worldwide face unmet justice needs, an estimated $3.3 trillion in mission-aligned capital — held in donor-advised funds, philanthropic portfolios, private foundations, and impact investment vehicles — remains largely disconnected from solutions.

Unlocking even a fraction of this capital could introduce a meaningful parallel funding stream — one capable of supporting cases with potential impacts that currently fall outside traditional funding models. Rather than depending on charity or contingency, what if justice also attracted disciplined, impact-aligned investment in the cases themselves, beyond the funding that already supports technology?

Recent efforts have expanded investor awareness of justice-related innovation. Programs like those run by Village Capital have helped demystify the sector and catalyze funding for the technology serving justice-impacted communities. Justice tech, or impact-driven direct-to-consumer legal tech, has grown exponentially in the last few years along with increased investor interest and user awareness.

Litigation finance has also grown, but its structure is narrowly optimized for high-value commercial claims with a strong financial upside. Traditional funders typically seek 5- to 10-times returns, prioritizing large corporate disputes and excluding cases with significant social value but lower monetary recovery, such as consumer protection claims, housing code enforcement, environmental accountability, or systemic health negligence.
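
To make that arithmetic concrete, here is a minimal sketch of the exclusion effect. The dollar figures and return multiple below are purely illustrative, not drawn from any actual funder’s model:

```python
# Illustrative only: why a returns-driven screen excludes meritorious,
# low-recovery claims. All amounts and multiples are hypothetical.

def passes_traditional_screen(funding_needed: float,
                              expected_recovery: float,
                              target_multiple: float = 5.0) -> bool:
    """A funder seeking a 5x return needs the expected recovery to
    cover its multiple before the client or counsel see anything."""
    return expected_recovery >= funding_needed * target_multiple

# A large commercial dispute clears the bar easily ...
print(passes_traditional_screen(2_000_000, 25_000_000))  # True

# ... while a meritorious housing code claim fails the screen
# regardless of its social value.
print(passes_traditional_screen(50_000, 30_000))         # False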

Justice finance offers a different approach. By channeling capital from the impact investment market toward the justice system and aligning legal case funding with established impact measurement frameworks, it reframes certain categories of legal action as dual-return opportunities that deliver both financial and social returns.

This is not philanthropy repackaged. It’s the idea that measurable justice outcomes can form the basis of an investable asset class, if they’re properly structured, governed, and evaluated.

Technology as trust infrastructure

While mission-aligned capital is widely available, the ability to evaluate legal matters with the necessary rigor remains limited. Responsibly allocating funds to legal matters requires complex expertise, including legal merit assessment, financial risk modeling, regulatory compliance, and impact evaluation. Cases must be considered not only for their likelihood of success and recovery potential, but also for measurable social or environmental outcomes.

Today, that assessment is largely manual and capacity-bound by small teams. The result is a structural bottleneck as capital waits on scalable, trusted evaluation and allocation.

Without a way to standardize and responsibly scale analysis of the double bottom line, however, justice funding remains bespoke, even when resources are available.

AI-enabled systems can play a transformative role by standardizing assessment frameworks and supporting disciplined capital allocation at scale. By encoding assessment criteria, decision pathways, and compliance safeguards and then mapping case characteristics to impact metrics, technology can enable consistency and allow legal and financial experts to evaluate exponentially more matters without lowering their standards.
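
As a rough illustration of what “encoding assessment criteria” might look like in practice, here is a minimal sketch of a dual-return screen. Every field, weight, and threshold below is a hypothetical placeholder rather than a description of any platform’s actual model, and the output is a recommendation for human review, never a decision:

```python
# A minimal sketch of encoding dual-return assessment criteria.
# Criteria, weights, and thresholds are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class CaseProfile:
    merit_score: float        # 0-1, legal merit per an expert rubric
    recovery_estimate: float  # expected monetary recovery, USD
    funding_needed: float     # capital required, USD
    impact_tags: set[str]     # e.g. {"housing", "environmental"}

# Impact categories mapped to illustrative weights.
IMPACT_WEIGHTS = {"housing": 0.9, "environmental": 0.8,
                  "consumer_protection": 0.7, "health": 0.8}

def screen_case(case: CaseProfile) -> dict:
    """Apply the encoded criteria consistently and flag promising
    matters for expert review; never issue a final determination."""
    financial = min(case.recovery_estimate / (case.funding_needed * 2), 1.0)
    impact = max((IMPACT_WEIGHTS.get(t, 0.0) for t in case.impact_tags),
                 default=0.0)
    composite = 0.4 * case.merit_score + 0.3 * financial + 0.3 * impact
    return {"composite": round(composite, 2),
            "recommend_expert_review": composite >= 0.6}  # arbitrary cutoff

print(screen_case(CaseProfile(0.8, 60_000, 40_000, {"housing"})))
```

The point of encoding the rubric is not the particular numbers but that every case is scored against the same criteria, which is what makes the results consistent and auditable.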

And by integrating legal assessment, financial modeling, and impact alignment within a governed tech framework, justice finance platforms like Edenreach can function as the connective tissue. Through the platform, impact metrics are applied consistently while human experts remain responsible for final determinations, thereby reducing friction, increasing transparency, and supporting auditability.

When incentives align

It’s no coincidence that many of the leaders exploring justice finance models are women. Globally, women experience legal problems at disproportionately higher rates than men, yet they are less likely to obtain formal assistance. Women also control significant pools of global wealth and are more likely to invest with impact in mind. Indeed, 75% of women believe investing responsibly is more important than returns alone, and female investors are almost twice as likely as their male counterparts to prioritize environmental, social, and corporate governance (ESG) factors when making investment decisions.

When those most affected by systemic barriers also shape capital allocation decisions, structural change becomes more feasible. Despite facing steep barriers in legal tech funding (just 2% goes to female founders), women are far better represented among founders in access-to-justice legal tech than the 13.8% they represent across legal tech overall.

This alignment between lived experience, innovation leadership, and capital stewardship creates an opportunity to reconfigure incentives in favor of meaningful change.

Expanding funding and impact

Justice financing will not resolve the justice gap on its own. Mission-focused tools for self-represented parties, legal aid, and court reform remain essential components of a functioning justice ecosystem. However, ethical capital represents an additional structural layer that can expand the range of cases and remedies that receive financial support.

Impact orientation can accommodate longer time horizons, alternative dispute resolution pathways, and remedies that extend beyond monetary damages. In certain matters, particularly those involving environmental harm, systemic consumer violations, or community-wide injustice, capital structured around impact metrics may identify and enable solutions that traditional litigation finance models do not prioritize.

For example, capital aligned with defined impact frameworks may support outcomes that include remediation programs, compliance reforms, or community investments alongside financial recovery. These approaches can create durable benefits that outlast a single judgment or settlement.

Of course, solving deep-rooted inequities and legal system complexity requires more than new tools and new investors. It requires designing capital pathways that are repeatable, accountable, and aligned with measurable public benefit.

Although justice finance may not be a fit for every case and has yet to see widespread uptake, it does have the potential to reach cases that currently fall through the cracks — cases that have merit, despite falling outside traditional litigation finance models and legal aid or impact litigation eligibility criteria.


You can find other installments of our Scaling Justice blog series here

When courts meet GenAI: Guiding self-represented litigants through the AI maze
/en-us/posts/ai-in-courts/guiding-self-represented-litigants/ (February 19, 2026)

Key insights:

      • Considering courts’ approach — Although many courts do not interact with litigants prior to filing, they can explore how to help court staff discuss AI use with litigants.

      • Risk of generic AI tools — AI use in legal settings can’t be simply categorized as safe or risky; jurisdiction, timing, and procedure are vital factors, making generic AI tools unreliable for court-specific needs.

      • Specialty AI tools require testing — Purpose-built court AI tools offer a safer alternative for litigants, yet these require development and extensive testing.


Self-represented litigants have always pieced together legal help from whatever sources they can access. Now that AI is part of that mix, courts are working to help people use this advanced technology responsibly without implying an endorsement of any particular tool or even the use of AI.

Many litigants cannot afford an attorney; others may distrust the representation they have or may not know where to begin. In any case, people need a meaningful way to interact with the legal system. Used carefully and responsibly, AI can support access to justice by helping self-represented litigants understand their options, organize information, and draft documents, while still requiring litigants to verify their information and consult official court rules and resources.

These issues were discussed in a recent webinar, during which the panel explored the potential benefits of AI for access to justice and the operational challenges of integrating AI into public-facing guidance for litigants.

The problem with “Just ask AI”

Angela Tripp of the Legal Services Corporation noted that people handling legal matters on their own have long relied on a mix of resources, “some of which were designed for that purpose, and some of which were not.” AI is simply a new tool in that environment, she added. The primary challenge is that court processes are rule-based and time-sensitive: a mistake can mean missing a deadline, submitting the wrong document, or misunderstanding a requirement that affects the case.

Access to justice also requires more than access to information in general. Court users need information that is relevant, complete, accurate, and up to date. Generic AI systems, such as most public-facing tools trained on broad internet text, may not reliably deliver that level of specificity for a particular court, case type, or stage of a proceeding. Jurisdiction, timing, and procedure all matter. Unfortunately, AI can omit key steps or emphasize the wrong issues, and self-represented litigants may not have the legal experience to recognize what is missing.

At the same time, AI offers several potential benefits to self-represented litigants. It can explain concepts in plain language, help users structure a narrative, and produce a first draft faster than many people can on their own. The challenge is aligning those strengths with the precision that court processes demand.

A strategic pivot: from teaching litigants to equipping staff

In the webinar, Stacey Marz, Administrative Director of the Alaska Court System, described her team’s early efforts to give self-represented litigants clear guidance about safer and riskier uses of AI, including examples of how to properly prompt generative AI queries.

The team tried to create traffic-light categories that would simplify decision-making; however, despite several drafts, they found the approach very challenging. AI use can shift from low-risk to high-risk depending on context, and it was hard to provide examples without sounding like the court was endorsing a tool or sending people down a path for which the court could not guarantee results.

The group ultimately shifted to a more practical approach — training the people who already help litigants. The new guidance targets public-facing staff such as clerks, librarians, and self-help center workers. Instead of teaching litigants how to prompt AI, it equips staff to have informed, consistent conversations when litigants bring AI-generated drafts or AI-based questions to the counter.

The framework emphasizes acknowledgment without endorsement. It suggests language such as:

“Many people are exploring AI tools right now. I’m happy to talk with you about how they may or may not fit with court requirements.”

From there, staff can explain why court filings require extra caution and direct users to court-specific resources.

This approach also assumes good faith. A flawed filing is often a sincere attempt to comply, and a litigant may not realize that an AI output is incomplete or incorrect.

Purpose-built tools take time

The webinar also discussed how courts are exploring purpose-built AI tools, including judicial chatbots designed around court procedures and grounded in verified information. Done well, these tools can reduce common problems associated with generic AI systems, such as jurisdiction mismatch, outdated requirements, and fabricated or hallucinated citations.
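
To illustrate the difference from a generic model, here is a minimal sketch of the grounding pattern these tools rely on: answer only from a curated, jurisdiction-specific knowledge base, and decline when nothing verified matches. The knowledge-base entries and lookup logic below are hypothetical stand-ins, not any court’s actual system:

```python
# A minimal sketch of grounding a court-facing assistant in verified
# content. Entries and matching are hypothetical placeholders.

VERIFIED_KB = {
    # (jurisdiction, topic) -> court-approved guidance text
    ("AK", "fee_waiver"): "To ask the court to waive filing fees, "
                          "submit the fee waiver request form to the "
                          "clerk at the courthouse handling your case.",
}

def grounded_answer(jurisdiction: str, topic: str) -> str:
    """Answer only from curated content; refuse rather than guess.
    Declining when nothing verified matches is what prevents the
    jurisdiction mismatch and fabricated citations that generic
    models are prone to."""
    entry = VERIFIED_KB.get((jurisdiction, topic))
    if entry is None:
        return ("No verified guidance is available for that question. "
                "Please contact the court's self-help center.")
    return entry + " [Source: verified court self-help content]"

print(grounded_answer("AK", "fee_waiver"))   # vetted answer
print(grounded_answer("CA", "fee_waiver"))   # safe fallback
```

In a production tool, the dictionary lookup would be replaced by more robust retrieval over vetted court content, but the core design choice is the same: the system refuses rather than guesses.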

However, building reliable court-facing AI demands significant time and testing. Marz shared Alaska’s experience, noting that what the team expected to take three months took more than a year because of extensive refinement and evaluation. The reason is straightforward: Court guidance must be highly accurate, and errors can materially harm someone’s legal interests. In fact, even after careful testing, Alaska still included cautionary language, recognizing that no system can guarantee perfect answers in every situation.

The path forward

The Legal Services Corporation’s Tripp highlighted a central risk: Modern AI tools can be clear, confident, and easy to trust, which can lead people to over-rely on them. Courts have to recognize this tension. They are not trying to prevent AI use; rather, many are working toward realistic norms that treat AI as a drafting and organizing aid while requiring litigants to verify claims against official court sources and seek human support when possible.

Marz also emphasized that courts should generally assume filings reflect a litigant’s best effort, including in those cases in which AI contributed to confusion. The goal is education and correction rather than punishment, especially for people navigating complex processes without representation.

Some observers describe this moment as an early AOL phase of AI, akin to the early days of the World Wide Web — widely used, evolving quickly, and uneven in its reliability. That reality makes clear guidance and consistent messaging more important, not less.

This shift from teaching litigants to use AI to equipping court staff and other helpers to talk with litigants about AI reflects a practical effort by courts to reduce the risk of harm while expanding access to understandable information.

As is becoming clearer every day, AI can make legal processes feel more navigable by helping self-represented litigants draft, summarize, and prepare. For courts, realizing that value requires clear guardrails, court-specific verification, and careful implementation, especially when a missed detail can change the outcome of a case.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here

Scaling Justice: How technology is reshaping support for self-represented litigants
/en-us/posts/ai-in-courts/scaling-justice-technology-self-represented-litigants/ (January 23, 2026)

Key takeaways:

      • From scarcity to abundance — Technology has shifted the challenge in access to justice from scarcity of legal help to issues of accuracy, governance, and effective support. AI and digital tools now provide abundant legal information to self-represented litigants, but they raise new questions about reliability, oversight, and alignment with human needs.

      • The necessity of human-in-the-loop — Human involvement remains essential for meaningful resolution. While AI can explain procedures and guide users, real support often requires relational and institutional human guidance, especially for vulnerable populations facing anxiety, low literacy, or systemic bias.

      • One part of a bigger question — Systemic reform and broader approaches are needed beyond technological fixes because technology alone cannot solve deep-rooted inequities or the complexity of the legal system. Efforts should include prevention, alternative dispute resolution, and redesigning systems to prioritize just outcomes and accessibility.


Access to justice has long been framed as a problem of scarcity, with too few legal aid lawyers and insufficient funding forcing systems to be built in triage mode. Underlying this framing was the unspoken assumption that most people navigating civil legal problems would do so without meaningful help, often because their issues were not compelling or lucrative enough to justify legal representation.

This framing no longer holds, however. Legal information, once tightly controlled by legal professionals, publishers, and institutions, is now abundantly available. Large language models, search-based AI systems, and consumer-facing legal tools can explain civil procedure, identify relevant statutes, translate dense legalese into plain language, and generate step-by-step guidance in seconds.

Increasingly, self-represented litigants are actively using these tools, whether courts or legal aid organizations endorse them or not. Katherine Alteneder, principal at Access to Justice Innovation and former Director of the Self-Represented Litigation Network, notes: “This reality cannot be fully controlled, regulated out of existence, or ignored.”

And as Demetrios Karis, HFID and UX instructor at Bentley University, argues: “Withholding today’s AI tools from self-represented litigants is like withholding life-saving medicine because it has potential side effects. These systems can already help people avoid eviction, protect themselves from abuse, keep custody of their children, and understand their rights. Doing nothing is not a neutral choice.”

Thus, the central question is no longer whether technology can help self-represented litigants, but rather how it should be deployed — and with what expectations, safeguards, and institutional responsibilities.

Accuracy, error & tradeoffs

The baseline capabilities of general-purpose AI systems have advanced dramatically in a matter of months. For common use cases that self-represented litigants most likely seek — such as understanding process, identifying next steps, preparing for hearings, and locating authoritative resources — today’s frontier models routinely outperform well-funded legal chatbots developed at significant cost just a year or two ago.



These performance gains raise important questions about the continued call for extensive customization to deliver basic legal information. However, performance improvements do not eliminate the need for careful design. Tom Martin, CEO and founder of LawDroid (and columnist for this blog), emphasizes that “minor tweaking” is subjective, and that grounding AI tools in high-quality sources, appropriate tone, and clear audience alignment remains essential, particularly when an organization takes responsibility and assumes liability for the tool’s voice and output.

Not surprisingly, few topics in the legal tech community generate more debate than AI accuracy, but it cannot be evaluated in isolation. Human lawyers make mistakes, static self-help materials become outdated, and informal advice from friends, family, or online forums is often wrong. Models should be evaluated against realistic alternatives, especially when the alternative is no help at all.
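
One way to put “evaluated against realistic alternatives” into practice is to score every help source, including no help at all, against the same expert-defined rubric. The sketch below uses a crude keyword proxy for coverage; the question, key points, and candidate answers are invented for illustration, and a real evaluation would rely on expert review or a structured rubric:

```python
# A minimal sketch of evaluating AI answers against realistic
# alternatives rather than against perfection. The grading
# function is a crude stand-in for expert review.

def grade(answer: str, required_points: list[str]) -> float:
    """Proxy score: fraction of expert-identified key points
    that the answer actually covers."""
    covered = sum(1 for point in required_points
                  if point.lower() in answer.lower())
    return covered / len(required_points)

# Expert-vetted key points for one hypothetical question:
# "How do I respond to an eviction notice?"
key_points = ["deadline to answer", "written answer", "court address"]

candidates = {
    "frontier_model": "You must file a written answer before the "
                      "deadline to answer, at the court address "
                      "shown on the notice.",
    "static_page_2019": "File a written answer with the court.",
    "no_help": "",
}

for source, answer in candidates.items():
    print(source, round(grade(answer, key_points), 2))
```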

Off-the-shelf tools now perform surprisingly well at generating plain-language explanations, often drawing on primary law, court websites, and legal aid resources. In limited testing, inaccuracies tend to reflect misunderstandings or overgeneralizations rather than pure fabrication. And while such errors are still serious, they may be easier to detect and correct with review.

Still, caution is key, not least because AI systems tend to tell people what they want to hear in order to keep them on the platform. Claudia Johnson of Western Washington University’s Law, Diversity, and Justice Center asks what an acceptable error rate is when tools are deployed to vulnerable populations and reminds organizations of their duty of care. Mistakes, especially those known and uncorrected, can carry legal, ethical, and liability consequences that cannot be ignored.

Knowledge bases are infrastructure, but more is needed

Vetted, purpose-built, and mission-focused solution ecosystems are emerging to fill the gap between infrastructure and problem-solving. The Justice Tech Directory from the Legal Services National Technology Assistance Project (LSNTAP) provides legal aid organizations, courts, and self-help centers with visibility into curated tools that incorporate guardrails, human review, and consumer protection in ways that general-purpose AI platforms do not.

Of course, this infrastructure does not exist in a vacuum. Indeed, these systems address the real needs of real people. While calls for human-in-the-loop systems are often framed as safeguards against technical failure, some of the most important reasons for human involvement are relational and institutional. Even accurate information frequently fails to resolve legal problems without human support, particularly for people experiencing anxiety, shame, low literacy, or systemic bias within courts.



A human in the loop can improve how self-represented litigants are treated by clerks, judges, and opposing parties. Institutional review models often provide this interaction at pre-filing document clinics, navigator-supported pipelines, and structured AI review workshops that integrate human judgment and augment human effort rather than replacing it.

Abundance and the limits of technology

Information does not automatically produce equity. Technology cannot make up for existing, persistent systemic issues, and several prominent voices caution against treating AI as a workaround for deeper system failures. Richard Schauffler of Principal Justice Solutions notes that the underlying problem with the use of AI in the legal world is that our legal process is overly complicated, mystified in jargon, inefficient, expensive, and deeply unsatisfying in terms of justice and fairness — and using AI to automate that process does not alter this fact.

Without changes at the courthouse level, upstream technological improvements may not translate into just outcomes. Bias, discrimination, and resource constraints cannot be solved by technology alone. Even perfect information from a lawyer does not equal power when structural inequities persist.

Further, abundance fundamentally changes the problem. As Alteneder notes, rather than access, the primary problem now is “governance, trust, filtering, and alignment with human values.” Similar patterns are seen in healthcare, journalism, and education. Without scaffolding, technology often widens gaps, benefiting those with greater capacity to interpret, prioritize, and act. For self-represented litigants, the most valuable support is often not answers but navigation: what matters most now, which paths are realistic, when to escalate, and when legal action may not serve broader life needs.

Focusing solely on court-based self-help misses an opportunity to intervene earlier, especially on behalf of self-represented litigants. AI-enabled tools have the potential to identify upstream legal risk and connect people to mediation, benefits, or social services before disputes harden.


You can find more insights about how courts are managing the impact of advanced technology from our Scaling Justice series here
