Justice tech Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/justice-tech/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Scaling Justice: AI is scaling faster than justice, revealing a dangerous governance gap /en-us/posts/ai-in-courts/scaling-justice-governance-gap/ Mon, 13 Apr 2026 16:57:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=70330

Key takeaways:

      • AI frameworks need to keep up with implementation — While AI governance frameworks are being developed and enacted globally, their effectiveness depends on enforceable mechanisms within domestic justice systems.

      • Access to justice is essential for trustworthy AI regulation — Rights and protections are only meaningful if individuals can understand, challenge, and seek remedies for AI-driven decisions. Without operational access, governance frameworks risk remaining theoretical.

      • People-centered justice and human rights must anchor AI governance — Embedding human rights standards and ensuring equal access to justice in AI regulation strengthens public trust, accountability, and the credibility of both public institutions and private companies.


AI governance is accelerating across global, national, and local levels. As public investment in AI infrastructure expands, new oversight bodies are emerging to assess safety, risk, and accountability. The global policy conversation has shifted from principles to the implementation of meaningful guardrails and AI governance frameworks, which legislators are now drafting and enacting.

These developments reflect growing recognition that AI systems demand structured oversight and a shift from voluntary safeguards and standards to institutionalized governance. One critical dimension remains underdeveloped, however: how do these frameworks function in practice? Are they enforceable? Do they provide accountability? Do they ensure equal access?

AI governance will not succeed on the strength of international declarations or regulatory design alone; rather, domestic justice systems will determine whether it works. At this intersection, the connection between AI governance and access to justice becomes real.

In early February, leaders across government, the legal sector, international organizations, industry, and civil society convened for an expert discussion. The following reflections attempt to build on that dialogue and its urgency.

From principles to enforcement

Over the past decade, AI governance has evolved from hypothetical ethical guidelines to voluntary commitments, binding regulatory frameworks, and risk-based approaches. Yet many past attempts to provide structure and governance have been quickly outpaced by the technology and remain insufficient without enforcement mechanisms. As Anoush Rima Tatevossian of The Future Society observed: “The judicial community should have a role to play not only in shaping policies, but in how they are implemented.”

Frameworks establish expectations, while courts and dispute resolution mechanisms interpret rules, test rights, evaluate harm, assign responsibility, and determine remedies. If individuals are not empowered to safeguard their rights and cannot access these mechanisms, governance frameworks remain theoretical or are casually ignored.

This challenge reflects a broader structural constraint. Even without AI, legal systems struggle to meet demand. In the United States alone, 92% of people do not receive the help they need in accessing their rights in the justice system. Introducing AI into this environment without strengthening access can risk widening, rather than narrowing, the justice gap.


Please add your voice to Thomson Reuters’ flagship , a global study exploring how the professional landscape continues to change.


Justice systems serve as the operational core of AI governance. By inserting the rule of law into unregulated areas, they provide the infrastructure that enables accountability by interpreting regulatory provisions in specific cases, assessing whether AI-related harms violate legal standards, allocating responsibility across public and private actors, and providing accessible pathways for redress.

These frameworks also generate critical feedback. Disputes involving AI systems expose gaps in transparency, fairness, and accountability. Legal professionals see where governance frameworks first break down in real-world conditions, often long before policymakers do. As a result, these frameworks function as an early signal of policy effectiveness and rights protections.

Importantly, AI governance does not require entirely new legal foundations. Human rights frameworks already provide standards for legality, non-discrimination, due process, and access to remedy, and these standards apply directly to AI-enabled decision-making. “AI can assist judges but must never replace human judgment, accountability, or due process,” said Kate Fox Principi, Lead on the Administration of Justice at the United Nations (UN) Office of the High Commissioner for Human Rights (OHCHR), during the February panel.

Clearly, rights are only meaningful when individuals can exercise them — this constraint is not conceptual, it’s operational. Systems must be understandable, affordable, and responsive, and institutions should be capable of evaluating complex, technology-enabled disputes.

Trust, markets & accountability

Governance frameworks that do not account for these dynamics risk entrenching inequities rather than mitigating them. An individual’s ability to understand, challenge, and seek a remedy for automated decisions determines whether governance is credible. A people-centered justice approach, as established in the , asks whether individuals can meaningfully engage with the system, not just whether rules exist. For example, women face documented barriers to accessing justice across jurisdictions. AI systems trained on biased data can replicate or amplify existing disparities in employment, financial services, healthcare, and criminal justice.

“Institutional agreement rings hollow when billions of people experience governance as remote, technocratic, and unresponsive to their actual lives,” said Alfredo Pizarro of the Permanent Mission of Costa Rica to the UN. “People-centered justice becomes essential.”

AI systems already shape outcomes across employment, financial services, housing, and justice. Entrepreneurs, law schools, courts, and legal services organizations are already building AI-enabled tools that help people navigate legal processes and assert their rights more effectively. Governance design will determine whether these tools help spread access to justice or introduce new barriers.

Private companies play a central role in developing and deploying AI systems. Their products shape economic and social outcomes at scale. For them, trust is not abstract; it is a success metric. “Innovation depends on trust,” explained Iain Levine, formerly of Meta’s Human Rights Policy Team. “Without trust, products will not be adopted.” And trust, in turn, depends on enforceability and equal access to remedy.

AI governance will succeed or fail based on access

As Pizarro also noted, justice provides “normative continuity across technological rupture.” Indeed, these principles already exist within international human rights law and people-centered justice; although they precede the advent of autonomous systems, they provide standards for evaluating discrimination, surveillance, and procedural fairness, and remain durable as new challenges to upholding justice and the rule of law emerge.

People-centered justice was not designed for legal systems addressing AI-related harms, but its outcome-driven orientation remains durable as new justice problems emerge.

The current stage presents an opportunity to align AI governance with access to justice from the outset. Beyond well-drafted rules, we need systems that people can use. And that means effective governance requires coordination between policymakers, legal professionals, and the public.


You can find other installments of our Scaling Justice blog series here

Scaling Justice: Unlocking the $3.3 trillion ethical capital market /en-us/posts/ai-in-courts/scaling-justice-ethical-capital/ Mon, 23 Mar 2026 17:12:28 +0000 https://blogs.thomsonreuters.com/en-us/?p=70042

Key takeaways:

      • An additional funding stream, not a replacement — Ethical capital has the potential to supplement existing access to justice infrastructure by introducing a justice finance mechanism that can fund cases with measurable social and environmental impact.

      • Technology as trust infrastructure — AI and smart technologies can provide the governance scaffolding required for ethical capital to flow at scale, including standardizing assessment, impact measurement, and oversight.

      • Capital is not scarce; allocation is — The true bottleneck is not the availability of funds; rather it’s the disciplined, investment-grade legal judgment required to evaluate risk, ensure compliance, and measure impact in a way that makes justice outcomes investable.


Kayee Cheung & Melina Gisler, Co-Founders of justice finance platform Edenreach, are co-authors of this blog post

Access to justice is typically framed as a resource problem — the idea that there are too few legal aid lawyers, too little philanthropic funding, and too many people navigating civil disputes alone. The result is that the majority of individuals who face civil legal challenges do so without representation, often because they cannot afford it.

Yet this crisis exists alongside a striking paradox. While 5.1 billion people worldwide face unmet justice needs, an estimated $3.3 trillion in mission-aligned capital — held in donor-advised funds, philanthropic portfolios, private foundations, and impact investment vehicles — remains largely disconnected from solutions.

Unlocking even a fraction of this capital could introduce a meaningful parallel funding stream — one that’s capable of supporting cases with potential impacts that currently fall outside traditional funding models. Rather than depending on charity or contingency, what if justice also attracted disciplined, impact-aligned investment in the cases themselves, in addition to the funding that already supports technology?

Recent efforts have expanded investor awareness of justice-related innovation. Programs like Village Capital’s have helped demystify the sector and catalyze funding for the technology serving justice-impacted communities. Justice tech, or impact-driven direct-to-consumer legal tech, has grown exponentially in the last few years along with increased investor interest and user awareness.

Litigation finance has also grown, but its structure is narrowly optimized for high-value commercial claims with a strong financial upside. Traditional funders typically seek 5- to 10-times returns, prioritizing large corporate disputes and excluding cases with significant social value but lower monetary recovery, such as consumer protection claims, housing code enforcement, environmental accountability, or systemic health negligence.

Justice finance offers a different approach. By channeling capital from the impact investment market toward the justice system and aligning legal case funding with established impact measurement frameworks like the , it reframes certain categories of legal action as dual-return opportunities, delivering both financial and social returns.

This is not philanthropy repackaged. It’s the idea that measurable justice outcomes can form the basis of an investable asset class, if they’re properly structured, governed, and evaluated.

Technology as trust infrastructure

While mission-aligned capital is widely available, the ability to evaluate legal matters with the necessary rigor remains limited. Responsibly allocating funds to legal matters requires complex expertise, including legal merit assessment, financial risk modeling, regulatory compliance, and impact evaluation. Cases must be considered not only for their likelihood of success and recovery potential, but also for measurable social or environmental outcomes.

Today, that assessment is largely manual and capacity-bound by small teams. The result is a structural bottleneck as capital waits on scalable, trusted evaluation and allocation.

Without a way to standardize and responsibly scale analysis of the double bottom line, however, justice funding remains bespoke, even when resources are available.

AI-enabled systems can play a transformative role by standardizing assessment frameworks and supporting disciplined capital allocation at scale. By encoding assessment criteria, decision pathways, and compliance safeguards and then mapping case characteristics to impact metrics, technology can enable consistency and allow legal and financial experts to evaluate exponentially more matters without lowering their standards.
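The encoding idea described above can be pictured as a small scoring rubric. The sketch below is purely illustrative: the field names, weights, and thresholds are invented for this example and are not any real platform's criteria, and a flagged case goes to human experts rather than being auto-approved.

```python
from dataclasses import dataclass, field

# Hypothetical rubric: the names, weights, and thresholds below are
# invented for illustration and are not any real platform's criteria.
@dataclass
class CaseProfile:
    merit_score: float          # legal merit from expert review, 0.0-1.0
    recovery_estimate: float    # expected monetary recovery, USD
    funding_need: float         # capital required to fund the case, USD
    impact_tags: set = field(default_factory=set)   # e.g. {"housing"}

IMPACT_WEIGHTS = {"housing": 0.9, "environment": 0.8, "consumer": 0.7}

def assess(case: CaseProfile) -> dict:
    """Map case characteristics to a dual-return score: financial and social."""
    financial = case.merit_score * (case.recovery_estimate / max(case.funding_need, 1.0))
    social = max((IMPACT_WEIGHTS.get(t, 0.0) for t in case.impact_tags), default=0.0)
    return {
        "financial": round(financial, 2),
        "social": round(social, 2),
        # Flag for human review rather than auto-approving: experts remain
        # responsible for final determinations.
        "recommend_review": financial >= 1.0 and social >= 0.5,
    }
```

A case with strong merit, a two-to-one recovery-to-funding ratio, and a housing impact tag would be flagged for expert review. The point is only that once criteria are encoded this way, evaluation becomes consistent and auditable at scale, which is what allows small expert teams to review far more matters.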

And by integrating legal assessment, financial modeling, and impact alignment within a governed tech framework, justice finance platforms like Edenreach can function as the connective tissue. Through the platform, impact metrics are applied consistently while human experts remain responsible for final determinations, thereby reducing friction, increasing transparency, and supporting auditability.

When incentives align

It’s no coincidence that many of the leaders exploring justice finance models are women. Globally, women experience legal problems at disproportionately higher rates than men yet are less likely to obtain formal assistance. Women also control significant pools of global wealth and are more likely to . Indeed, 75% of women believe investing responsibly is more important than returns alone, and female investors are almost twice as likely as male counterparts to prioritize environmental, social and corporate governance (ESG) factors when making investment decisions, .

When those most affected by systemic barriers also shape capital allocation decisions, structural change becomes more feasible. Despite facing steep barriers in legal tech funding (just 2% goes to female founders), women represent in access-to-justice legal tech, compared to just 13.8% across legal tech overall.

This alignment between lived experience, innovation leadership, and capital stewardship creates an opportunity to reconfigure incentives in favor of meaningful change.

Expanding funding and impact

Justice financing will not resolve the justice gap on its own. Mission-focused tools for self-represented parties, legal aid, and court reform remain essential components of a functioning justice ecosystem. However, ethical capital represents an additional structural layer that can expand the range of cases and remedies that receive financial support.

Impact orientation can accommodate longer time horizons, alternative dispute resolution pathways, and remedies that extend beyond monetary damages. In certain matters, particularly those involving environmental harm, systemic consumer violations, or community-wide injustice, capital structured around impact metrics may identify and enable solutions that traditional litigation finance models do not prioritize.

For example, capital aligned with defined impact frameworks may support outcomes that include remediation programs, compliance reforms, or community investments alongside financial recovery. These approaches can create durable benefits that outlast a single judgment or settlement.

Of course, solving deep-rooted inequities and legal system complexity requires more than new tools and new investors. It requires designing capital pathways that are repeatable, accountable, and aligned with measurable public benefit.

Although justice finance may not be a fit for every case and has yet to see widespread uptake, it does have the potential to reach cases that currently fall through the cracks — cases that have merit, despite falling outside traditional litigation finance models and legal aid or impact litigation eligibility criteria.


You can find other installments of our Scaling Justice blog series here

When courts meet GenAI: Guiding self-represented litigants through the AI maze /en-us/posts/ai-in-courts/guiding-self-represented-litigants/ Thu, 19 Feb 2026 18:20:08 +0000 https://blogs.thomsonreuters.com/en-us/?p=69532

Key insights:

      • Considering courts’ approach — Although many courts do not interact with litigants prior to filings, courts can explore how to help court staff discuss AI use with litigants.

      • Risk of generic AI tools — AI use in legal settings can’t be simply categorized as safe or risky; jurisdiction, timing, and procedure are vital factors, making generic AI tools unreliable for court-specific needs.

      • Specialty AI tools require testing — Purpose-built court AI tools offer a safer alternative for litigants, yet these require development and extensive testing.


Self-represented litigants have always pieced together legal help from whatever sources they can access. Now that AI is part of that mix, courts are working to help people use this advanced technology responsibly without implying an endorsement of any particular tool or even the use of AI.

Many litigants cannot afford an attorney; others may distrust the representation they have or may not know where to begin. In any case, people need a meaningful way to interact with the legal system. Used carefully and responsibly, AI can support access to justice by helping self-represented litigants understand their options, organize information, and draft documents, while still requiring litigants to verify their information and consult official court rules and resources.

These issues were discussed in a recent webinar, , hosted by . The panel explored the potential benefits of AI for access to justice and the operational challenges of integrating AI into public-facing guidance for litigants.

The problem with “Just ask AI”

Angela Tripp of the Legal Services Corporation noted that people handling legal matters on their own have long relied on a mix of resources, “some of which were designed for that purpose, and some of which were not.” AI is simply a new tool in that environment, she added. The primary challenge is that court processes are rule-based and time-sensitive, and a mistake can mean missing a deadline, submitting the wrong document, or misunderstanding a requirement that affects the case.

Access to justice also requires more than access to information in general. Court users need information that is relevant, complete, accurate, and up to date. Generic AI systems, such as most public-facing tools trained on broad internet text, may not reliably deliver that level of specificity for a particular court, case type, or stage of a proceeding. Jurisdiction, timing, and procedure all matter. Unfortunately, AI can omit key steps or emphasize the wrong issues, and self-represented litigants may not have the legal experience to recognize what is missing.

At the same time, AI offers several potential benefits to self-represented litigants. It can explain concepts in plain language, help users structure a narrative, and produce a first draft faster than many people can on their own. The challenge is aligning those strengths with the precision that court processes demand.

A strategic pivot: from teaching litigants to equipping staff

In the webinar, Stacey Marz, Administrative Director of the Alaska Court System, described her team’s early efforts to give self-represented litigants clear guidance about safer and riskier uses of AI, including examples of how to properly prompt generative AI queries.

The team tried to create traffic light categories that would simplify decision-making; however, they found this approach very challenging despite several drafts of the guidance. Indeed, AI use can shift from low-risk to high-risk depending on context, and it was hard to provide examples without sounding as though the court was endorsing a tool or sending people down a path for which the court could not guarantee results.

The group ultimately shifted to a more practical approach — training the people who already help litigants. The new guidance targets public-facing staff such as clerks, librarians, and self-help center workers. Instead of teaching litigants how to prompt AI, it equips staff to have informed, consistent conversations when litigants bring AI-generated drafts or AI-based questions to the counter.

The framework emphasizes acknowledgment without endorsement. It suggests language such as:

“Many people are exploring AI tools right now. I’m happy to talk with you about how they may or may not fit with court requirements.”

From there, staff can explain why court filings require extra caution and direct users to court-specific resources.

This approach also assumes good faith. A flawed filing is often a sincere attempt to comply, and a litigant may not realize that an AI output is incomplete or incorrect.

Purpose-built tools take time

The webinar also discussed how courts are exploring purpose-built AI tools, including judicial chatbots designed around court procedures and grounded in verified information. Done well, these tools can reduce common problems associated with generic AI systems, such as jurisdiction mismatch, outdated requirements, or fabricated or hallucinated citations.
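The grounding idea behind such tools can be sketched simply: answer only from a verified, jurisdiction-specific knowledge base, always cite the source, and refuse when nothing matches. The rules, IDs, and keyword matching below are hypothetical placeholders, not Alaska's or any court's real system, which would use far more robust retrieval.

```python
# Hypothetical sketch of a "grounded" court chatbot: respond only from a
# verified, jurisdiction-specific knowledge base and cite the source,
# refusing when nothing matches. All entries here are invented examples.
VERIFIED_KB = [
    {"id": "rule-12", "jurisdiction": "AK",
     "text": "Answers to a complaint are due within 20 days of service."},
    {"id": "form-100", "jurisdiction": "AK",
     "text": "Small claims actions begin with Form SC-100."},
]

STOPWORDS = {"a", "an", "the", "is", "are", "of", "to", "in", "when", "what", "how", "my"}

def _content_words(text):
    return {w.strip("?.,").lower() for w in text.split()} - STOPWORDS

def grounded_answer(question, jurisdiction):
    terms = _content_words(question)
    candidates = [d for d in VERIFIED_KB if d["jurisdiction"] == jurisdiction]
    best = max(candidates, key=lambda d: len(terms & _content_words(d["text"])), default=None)
    # Require real overlap; refusing beats guessing, which is exactly where
    # generic chatbots go wrong on court-specific questions.
    if best is None or len(terms & _content_words(best["text"])) < 2:
        return "No verified guidance found; please consult court staff."
    return f"{best['text']} [source: {best['id']}]"
```

Asked "When is my answer to the complaint due?" for jurisdiction "AK", this returns the verified rule text with its source ID; the same question for an uncovered jurisdiction triggers the refusal message instead of a guess.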

However, building reliable court-facing AI demands significant time and testing. Marz shared Alaska’s experience, noting that what the team expected to take three months took more than a year because of extensive refinement and evaluation. The reason is straightforward: Court guidance must be highly accurate, and errors can materially harm someone’s legal interests. In fact, even after careful testing, Alaska still included cautionary language, recognizing that no system can guarantee perfect answers in every situation.

The path forward

Legal Services’ Tripp highlighted a central risk: Modern AI tools can be clear, confident, and easy to trust, which can lead people to over-rely on them. Courts have to navigate this tension. They are not trying to prevent AI use; rather, many are working toward realistic norms that treat AI as a drafting and organizing aid but require litigants to verify claims against official court sources and seek human support when possible.

Marz also emphasized that courts should generally assume filings reflect a litigant’s best effort, including in those cases in which AI contributed to confusion. The goal is education and correction rather than punishment, especially for people navigating complex processes without representation.

Some observers describe this moment as an early AOL phase of AI, akin to the very early days of the World Wide Web — widely used, evolving quickly, and uneven in its reliability. That reality makes clear guidance and consistent messaging more important, not less.

This shift among courts from teaching litigants to use AI to teaching court staff and other helpers how to talk to litigants about AI reflects a practical effort on the part of courts to reduce the risk of harm while expanding access to understandable information.

As is becoming clearer every day, AI can make legal processes feel more navigable by helping self-represented litigants draft, summarize, and prepare. Realizing that value, however, requires clear guardrails, court-specific verification, and careful implementation, especially when a missed detail can change the outcome of a case.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here

How AI-powered access to justice is impacting unauthorized practice of law regulations /en-us/posts/government/ai-impacts-unauthorized-practice-of-law/ Mon, 02 Feb 2026 17:55:20 +0000 https://blogs.thomsonreuters.com/en-us/?p=69263

Key insights:

      • Courts and the legal profession need to show leadership — Given their specialized knowledge of the needs of litigants and of courts, courts need to take the lead in determining definitions of the unauthorized practice of law.

      • 3 paths forward to workable regulatory solutions — Recent discussions and research around this subject offered three paths toward modernizing UPL definitions.

      • Uncertainty harms users and innovation — Fear of UPL can drive self-censorship and market exits, even as litigants continue to use publicly available GenAI tools.


Today, many Americans experience legal issues but lack proper access to legal representation. At the same time, AI tools capable of providing legal information are rapidly evolving and already in widespread use. Between these two facts lies a critical definitional problem that courts and state bars must urgently address: How to define the unauthorized practice of law (UPL) in a way that doesn’t further curtail access to justice.

This discussion is not theoretical. It directly determines whether AI-based legal services can operate, how they should be regulated, and ultimately whether AI can help unrepresented or self-represented litigants gain meaningful access to justice. This issue was explored in more depth during a recent webinar from the , a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

The need for clear definitions

During the webinar, Alaska Supreme Court Administrative Director Stacey Marz noted that “there is no uniform definition of the practice of law” and that UPL regulations represent “a real varied continuum of scope and clarity.” This variation makes compliance challenging for technology providers, especially as they navigate 50 different state standards.

UPL generally occurs when someone “not licensed as an attorney attempts to represent or perform legal work on behalf of another person,” explained Cathy Cunningham, Senior Specialist Legal Editor at Thomson Reuters Practical Law.

Marz added that such legal advice typically involves “applying the law, rules, principles, and processes to specific facts and circumstances of that individual client — and then recommending a course of action.”

The challenge, however, is that AI can appear to do exactly this, yet the regulatory framework remains unclear about whether and how this should be permitted and how consumers can be protected.

3 paths forward

During the recent webinar, panelists discussed several different approaches to UPL regulations, noting that recent research outlined three approaches that state courts could take:

Path 1: Explicitly enabling tools with regulatory framework — UPL statutes can be revisited to explicitly allow purpose-built AI legal tools to operate without threat of UPL enforcement, provided they meet certain requirements. Prof. Dyane O’Leary, Director of Legal Innovation & Technology at Suffolk University, emphasized that consumer-facing AI legal tools are already being used for tailored legal advice, arguing that some oversight is better than “just letting these tools continue to operate and hoping consumers aren’t harmed by them.”

Path 2: Creating regulatory sandboxes — Courts could establish temporary experimental zones in which AI legal service providers can operate under controlled conditions while regulators gather data about efficacy and safety through feedback and research, with an eye toward informing future regulation reform.

Path 3: Narrowing UPL to human conduct — Clarifying that existing UPL rules apply only to humans who may hold themselves out as attorneys in tribunals or courtrooms or creating legal documents under the guise of being a human attorney, effectively would leave AI-powered legal tools clearly outside UPL restrictions and open up a “new pocket of the free market” for consumers.

Utah Courts Self-Help Center Director Nathanael Player referenced Utah Supreme Court Standing Order Number 15, which established its regulatory sandbox using a fundamentally different standard: not whether services match what lawyers provide, but rather “is this better than the absolute nothing that people currently have available to them?”

Prof. O’Leary reframed the comparison itself, suggesting that instead of comparing consumers who use AI tools to consumers with an attorney, the framework should be “consumers that use legal AI tools, and maybe consumers that otherwise have no support whatsoever.”

The personhood puzzle

“AI, at this time, does not have legal personhood status,” said Practical Law’s Cunningham. “So, AI can’t commit unauthorized practice of law because AI is not a person.”

However, Player pushed back on this reasoning, clarifying that “AI does have a corporate personhood. There is a corporation that made the AI, [and] the corporation providing that does have corporate personhood.” He added, however, that “it’s not clear, I don’t think we know whether or not there is… some sort of consequence for the provision of ChatGPT providing legal services.”


You can view here


This ambiguity creates what might be called the personhood gap, a zone of legal uncertainty with serious consequences for both innovation and access to justice.

Colin Rule, CEO at online dispute resolution platform ODR.com, explained that “one of the major impacts of UPL is, actually self-censorship.” After receiving a UPL letter from a state bar years ago, he immediately exited that market. This pattern repeats across the legal tech landscape, leaving companies hesitant to innovate.

Rule’s bottom line resonates with anyone trying to build solutions in this space. “As a solution provider, what I want is guidance,” Rule explained. “Clarity is what I need most… that’s my number one priority.”

Moving forward: Clarity over perfection

The legal profession needs to lead on this issue, and that means state bars and state supreme courts must take action now. The tools are already in use, and the question is not whether AI will play a role in legal services, but rather whether that role will be defined by thoughtful regulation or by default.

The solution is for the judiciary to provide clear guidance on what services can be offered, by whom, and under what conditions. To do that, courts must first acknowledge that for most people, the choice is not between an AI tool and a lawyer but between an AI tool and nothing. Given that, states must walk a path that will both encourage innovation and protect consumers.

To this end, legal professionals and courts should experiment with these tools, understand their trajectory as well as their current limitations, and work collaboratively with developers to create frameworks that prioritize consumer protection without stifling innovation that could genuinely expand access to justice.


You can find out more about how courts and legal professionals are dealing with the unauthorized practice of law here

Legal aid leads on AI: How Lone Star Legal Aid built Juris to deliver faster, fairer results /en-us/posts/ai-in-courts/legal-aid-ai-lone-star-juris/ Mon, 10 Nov 2025 15:57:22 +0000 https://blogs.thomsonreuters.com/en-us/?p=68394

Key takeaways:

      • Legal aid is leading on AI adoption — Legal aid organizations are leading the way in leveraging AI with 74% using AI in their work, driven by the need to serve millions of citizens who lack legal help.

      • Lone Star Legal Aid creates Juris — A new AI-powered tool Juris from Lone Star Legal Aid improves accuracy and trust through retrieval-augmented generation, source-cited answers, and a secure Azure-based architecture with an integrated citation viewer.

      • Keeping costs low — A phased, two-year build-and-test process kept costs low (at about $2,000 a year in infrastructure costs, plus about 300 staff hours) and produced dependable results.


A recent study finds that under-resourced legal aid nonprofits are adopting AI at nearly twice the rate of the broader legal field because of the urgency of the need to serve millions of Americans who may lack legal help. The study shows that almost three-quarters (74%) of legal aid organizations already use AI in their work, compared with a 37% adoption rate for generative AI (GenAI) across the wider legal profession. Lone Star Legal Aid (LSLA), a legal aid nonprofit serving eastern Texas, is one of the early adopters of AI.

According to LSLA, its attorneys were spending too much time and money hunting for answers across pricey platforms and scattered PDFs. Key materials lived in research databases, internal drives, and static repositories, while documents vetted by individual staff members were not centrally accessible. Without a single, trusted hub, staff faced slower research, duplicated effort, and delays that ultimately affected clients.

These strains are not unique to LSLA. In fact, court help centers and self‑help portals face the same fragmentation, licensing costs, and uneven access to authoritative guidance. What was needed was a verifiable, consolidated knowledge hub that could stabilize quality while reducing spending.

To solve this problem, LSLA turned to AI to create a legal tool called Juris built to return fast, source‑cited answers. Juris was designed to centralize high‑value legal materials, cut reliance on expensive third‑party platforms, and lay a flexible foundation that the organization could reuse beyond legal research for internal operations and future client tools.

Multifaceted approach to ensuring accuracy and reliability

There were several aspects of Juris that designers used to help its mission to increase access to justice, including:

Design methods fuel trustworthy output — Juris was built to ensure accuracy using a number of methods, such as a retrieval-augmented generation (RAG) pipeline to ensure the chatbot delivers fact-based, source-cited answers. It also uses semantic chunking, a process that breaks a document into natural, meaning‑based sections (for example, a heading plus the paragraphs that belong to it) so the original context stays together.

When a user asks a question, Juris retrieves only the most relevant of these sections. Limiting the AI to evidence from those passages improves accuracy and reduces hallucinations because the model is not guessing from memory. Instead, it is grounding answers in the text it just accessed.
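The two mechanisms described above — meaning-based chunking and answering only from retrieved passages — can be sketched in a few lines of Python. This is an illustrative toy, not LSLA's actual code: the function names and the sample document are invented, and production systems score relevance with vector embeddings rather than the simple word-overlap stand-in used here.

```python
# Toy sketch of semantic chunking + retrieval grounding (hypothetical names).
# Word-overlap scoring stands in for real embedding-based similarity.

def semantic_chunks(document: str) -> list[str]:
    """Split a document into heading-anchored sections so each chunk
    keeps a heading together with the paragraphs that belong to it."""
    chunks, current = [], []
    for line in document.splitlines():
        if line.startswith("#") and current:  # a new heading starts a new chunk
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return chunks

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most relevant to the question (toy scoring)."""
    q_words = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

doc = "# Evictions\nNotice rules apply.\n# Deposits\nDeposits must be returned in 14 days."
context = retrieve("When must deposits be returned?", semantic_chunks(doc))
# The model would then be instructed to answer ONLY from `context`,
# citing the chunk it used, rather than from its parametric memory.
```

The design point is the last step: because the model sees only the retrieved, source-attributed passages, its answer can cite them directly, which is what keeps it from "guessing from memory."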

Solid technical architecture helps reliability — Juris’s technical architecture also supports reliable results: it combines Azure OpenAI, for secure, stateless access to AI models, with services that handle document ingestion, processing, and vector storage. Users interact through a custom internal web interface that integrates a PDF viewer alongside the chat experience, enabling seamless citation and document navigation. The platform is securely hosted on Azure App Service with continuous deployment orchestrated through GitHub, which provides reliable operations and streamlined updates.

Phased approach to building and testing yielded dependability — Also to ensure trustworthy results, LSLA developed Juris by following a structured, phased approach over two years. It began with a concept phase that was focused on clearly identifying the problem, followed by a platform evaluation that compared open-source and commercial solutions. A prototype was then created and demonstrated as proof of concept.

In addition, internal testing included adversarial exercises, hallucination detection, and rigorous validation of citation reliability. Based on these findings, the team implemented enhancements, such as moving from size-based to semantic chunking, improving the interface, and expanding the set of source materials. Juris is now in pilot preparation and undergoing final refinements before its release to a select group of subject matter experts.

Efficient resourcing and sharing learnings

LSLA’s phased method to building and testing also made sure that sustainability was built in from the beginning. Indeed, ongoing maintenance is minimal, and Microsoft’s nonprofit Azure credits keep infrastructure costs around $2,000 per year.

The most significant cost was in staff time. Development so far totals roughly 300 staff hours (or about 0.5 full-time equivalent, plus 0.3 FTE over two years). Once Juris enters phase two, which has been funded by a Legal Services Corporation (LSC) technology initiative grant, expected benefits will include faster, more consistent research and reduced workload for frontline and administrative staff, plus a modular framework that others can adapt.

Other legal service organizations that face similar challenges can learn from the Juris development, testing, and implementation as well as other related case studies. These recurring lessons include:

      • beginning with a small, manageable scope
      • inviting end users in from the start, and
      • carving out protected time so staff can innovate alongside daily duties.

Looking ahead, the LSLA team will continue to roll Juris out in phases, while building sister tools. LSLA also plans to share lessons learned through LSC’s AI Peer Learning Labs to help other organizations replicate the model.

Real change at scale, such as this, will only come from collaborating across organizations to share playbooks, pool datasets, and co‑design tools that lift quality while lowering cost. It is only with such partnership and sharing lessons from early adopters of AI that peers can adapt the model and, together, scale solutions that narrow the justice gap.

Angela Tripp, Program Officer for Technology for the Legal Services Corporation contributed to this article.


You can learn more about the ways legal aid organizations are using advanced technology to better serve individuals as they access the justice system here

]]>
Where the algorithm meets the gavel: Appropriate uses of AI in courts /en-us/posts/ai-in-courts/appropriate-use-ai-courts/ Mon, 03 Nov 2025 18:07:51 +0000 https://blogs.thomsonreuters.com/en-us/?p=68289

Key insights:

      • AI use falls on a spectrum — Appropriate AI use hinges on which trial function it touches upon and how much it influences outcomes.

      • AI uses must align with duties — Administrative and preparatory uses should be aligned with lawyers’ duty of competence, with outputs being checked and used within existing ethical rules.

      • Context and timing control admissibility — Courts should assess tools on a case‑by‑case basis, weighing procedural stage, validation and error rates, expertise, and safeguards.


The integration of AI in the legal system is a complex and multifaceted issue, defying simplistic categorizations of right or wrong. Indeed, the application of AI in court is not a binary concept but rather one that exists on a spectrum. The appropriateness of AI use depends on two critical variables: i) which portion of the trial process is being impacted; and ii) the degree of impact that the AI usage has on the outcome.

What matters is not whether AI appears in a case, but which aspect of the trial proceeding the AI in question touches — research, drafting, evidence review, jury selection — and how deeply it may influence outcomes. A document-review algorithm that flags potentially relevant discovery operates at a vastly different point on this spectrum than an AI system that drafts legal arguments or predicts case outcomes.

Low‑impact assistance on routine tasks may be not only permissible but prudent, while high‑impact automation in fact‑finding or credibility assessments can quickly cross ethical or legal lines. Understanding this spectrum — and where a specific use case falls along it — is essential for maintaining ethical standards, preserving the integrity of our judicial system, and serving clients competently in an era in which technology is reshaping every corner of legal practice. For professionals navigating this terrain, it is important to consider where, how much, and with what guardrails AI is utilized.

Administrative applications and professional competence

Administrative applications of AI have gained widespread acceptance within the legal community. The Honorable Erica Yew of the Santa Clara County Superior Court observes that many preliminary research platforms now incorporate AI-enhanced features as standard functionality. These features have become so seamlessly integrated into legal practice that their use is not only appropriate but often expected, requiring little deliberation or justification from practitioners.

Dr. Maura R. Grossman, JD, PhD, a Research Professor in the School of Computer Science at the University of Waterloo, dives deeper into this conversation by discussing the use of AI to provide summaries and chronologies as part of case preparation. She contends that while such output still requires review by human lawyers, this is an appropriate use of AI.

Further, the deployment of AI tools in administrative contexts aligns directly with attorneys’ fundamental duty of competence. Judge Yew articulates this connection with clarity, noting that AI should be viewed through the same lens as previous technological innovations. “When looking at rules for appropriate AI, it is akin to the rules for social media or even stationery at their inception — they are all tools,” explains Judge Yew. “We need to make sure we know how to use them and use them within the rules already set for lawyers and judges.”

This perspective underscores a critical principle: AI represents an evolution in legal tool use rather than a departure from established professional standards. Just as attorneys were expected to master word processors and legal databases in previous decades, today’s competent practitioners must understand how to leverage AI effectively while adhering to existing ethical frameworks. The emphasis, naturally, remains on validity, reliability, efficiency, fairness, and compliance with professional responsibilities — all objectives that AI, when properly employed, can significantly advance. That is at the heart of the discussion around appropriate use of AI in legal settings.

Evaluating the impact: A spectrum of appropriateness

While AI has demonstrated clear value in streamlining administrative functions and preliminary case management — indeed, many practitioners increasingly expect its judicious application in these contexts — the deployment of AI avatars in judicial proceedings demands scrutiny. In fact, the appropriateness of such technology exists along a spectrum, contingent upon both the intended application and the procedural stage at which it is employed.

Two recent cases illuminate the boundaries of this spectrum. In one, a court authorized the use of an AI-generated avatar — in this case, an AI-generated video version of a deceased victim — during the victim-impact statement portion of sentencing proceedings. Conversely, an appellate court categorically rejected the use of an AI avatar for oral argument presentation, deeming it fundamentally inappropriate for that forum under the circumstances presented.

While multiple variables distinguish these cases, a critical differentiator emerges: the procedural juncture at which the avatar would function. This temporal dimension — when in the judicial process the AI intervention occurs — proves instrumental in determining whether such technology enhances or undermines the integrity of the legal proceedings.

The gray area in practice

A Florida criminal case saw a judge use AI-enabled virtual reality (VR) goggles to review evidence — an unprecedented move that highlights the challenges of integrating advanced technology into courtrooms. Supporters say immersive tools such as the use of VR can clarify crime scenes and improve fact-finding; critics counter that AI reconstruction may be inaccurate, biased, and unduly shape memory.

Again, the core issue is context. Admissibility and weight cannot be resolved by blanket rules. Courts must assess the specific technology, its validation and error rates, the expertise behind the reconstruction, and its safeguards against manipulation. Only rigorous, case-by-case scrutiny can balance innovation with the justice system’s bedrock commitment to fairness.

Indeed, this case-by-case framework becomes all the more essential when we consider how profoundly AI has transformed the nature of evidence itself. The Florida VR case exemplifies a broader epistemological challenge facing modern courts: technology no longer simply captures reality; rather, it reconstructs, interprets, and, in some instances, generates it. Where traditional evidentiary rules presumed a clear distinction between genuine documentation and fabrication, AI-enabled tools occupy an ambiguous middle ground that resists categorical treatment.

It is precisely this collapse of binary certainty that scholars like Dr. Grossman have identified as the defining evidentiary dilemma of our era, one that demands not merely procedural adjustments but a fundamental reconceptualization of how courts evaluate truth.

Dr. Grossman notes that this shows a critical shift in evidentiary standards for the digital age. Traditionally, photographic and video evidence was evaluated through a binary lens — either authentic or inauthentic. Today, however, AI-generated content has fundamentally altered this calculus because content can be altered in different ways, e.g., simple noise removal versus substantive changes.

Truth now exists on a spectrum, Dr. Grossman observes, now requiring courts to navigate unprecedented gradations of authenticity when determining admissibility.

Into the future of courts

As AI continues its inexorable integration into legal practice, the profession must resist the temptation of categorical acceptance or rejection, instead embracing a nuanced, context-sensitive approach that evaluates each application against the twin metrics of where in the procedural process the AI is used and what its impact is on the finder of fact’s decision.

The future of justice depends not on whether we permit AI in our courtrooms, but on our collective wisdom in distinguishing between AI-driven tools that enhance human judgment and those that threaten to supplant it. This critical distinction demands ongoing vigilance, rigorous validation, and an unwavering commitment to the foundational principles of fairness and accuracy that have long anchored our legal system.


You can find out more about the appropriate use of AI in legal proceedings in the Thomson Reuters Institute’s AI in Courts Resource Center

]]>
Unmasking human trafficking: A collective fight to end sex trafficking & exploitation /en-us/posts/human-rights-crimes/unmasking-human-trafficking/ Fri, 26 Sep 2025 14:27:39 +0000 https://blogs.thomsonreuters.com/en-us/?p=67612

Key highlights:

    • Human trafficking is a local problem — Contrary to the stranger danger myth, trafficking primarily involves emotional manipulation and targets vulnerable populations locally.

    • Prevention requires talking to men and boys — Discussions should address how consuming online pornography and visiting strip clubs further the sexual exploitation of women and indirectly fuel sex trafficking.

    • Human trafficking is a criminal enterprise — Operating in the shadow economy, trafficking generates profits that make it the world’s second-largest illicit financial enterprise.


An estimated 50 million people are currently living in modern slavery globally, a stark reality often hidden in plain sight. Indeed, this day of observance, established by the United Nations in 2013, serves as a crucial reminder that human trafficking remains one of the most pressing human rights challenges of our time.

A recent Thomson Reuters Institute webinar in observance of this day brought together experts from technology, survivor services, and law enforcement for a discussion to deepen the collective understanding of trafficking’s complexities, examine its devastating impacts on victims, and develop strategies to drive meaningful change.

Debunking myths around human trafficking

Human trafficking is often misunderstood, with common misconceptions including the stranger danger myth that most trafficking situations come from anonymous kidnappers. However, Boorse, CEO of Spotlight, a nonprofit group helping law enforcement on the front lines of domestic minor sex trafficking, notes that trafficking often involves emotional manipulation by traffickers who target individuals from vulnerable populations, including youth without housing and children in foster care and juvenile justice systems.

Davis, CEO of New Friends New Life, which provides comprehensive care to human trafficking victims, explains that between 95% and 97% of the survivors with whom she works are local. And, according to the U.S. Department of Homeland Security (DHS), fraud and coercion are more prevalent than brute force in trafficking cases. In fact, traffickers frequently use emotional manipulation to exploit existing vulnerabilities.

Evolving approaches to combating trafficking

The fight against human trafficking requires a multi-faceted approach that involves cooperation of technology companies, survivor support organizations, and law enforcement agencies. More specifically, this approach can include:

Use and scale of technology — Boorse states that “technology has changed everything” in the anti-trafficking space and has made it easier for offenders to exploit victims. At the same time, technology is also providing opportunities for identifying survivors and offering support. Spotlight has developed innovative tools that help law enforcement and service providers identify trafficking situations and connect survivors with vital resources.

“Trying to identify a child victim in this mound of data is like trying to identify a needle in a haystack, but we aren’t looking for needles, we’re looking for children,” Boorse explains, adding that Spotlight leverages data and AI to help with the identification of some of the most vulnerable children. “We aim to reduce the time it takes to identify a victim from months to minutes. The speed of identification has a direct relationship to recovery and reduces the amount of time a victim remains in trauma.” With the help of technology, Spotlight has helped investigators identify more than 26,000 children.

Holistic approaches to support survivors — Comprehensive survivor support and compassionate care are key parts of the equation when stopping human trafficking. A model to mirror is the one used by New Friends New Life, which provides a range of services, including housing partnerships, emotional trauma support, and economic empowerment through job training and education. Davis emphasizes that rebuilding trust is crucial, and “the whole bunch of love approach” is essential in supporting survivors.

Collaboration with law enforcement and survivor care providers — The protracted nature of prosecuting human trafficking cases makes cooperation between law enforcement and the nonprofits that support survivors critical. Indeed, there often is a 24- to 36-month span before a trial starts or the trafficker is convicted through a plea deal — a long time to keep a victim engaged, especially given the trauma and instability survivors often face.

Because of this challenge, law enforcement relies heavily on robust partnerships with service organizations. The hyper-focus on the welfare and the well-being of the victim is key so that law enforcement can then focus on working with the prosecutors on the case.

In addition, collaboration on expanding awareness of traffickers’ recruitment methods in digital spaces is essential. Another key challenge lies in public awareness of the shifting landscape of trafficking online. More awareness and education in our schools and within our own homes are needed, primarily about how offenders use social media and other online platforms to identify and gain a foothold with potential victims. Sadly, more than 95% of minor victims are recruited on Instagram, according to DHS.

Addressing root causes is key to prevention

Of course, the best way to prevent sex trafficking is to stop it before it starts, and effective prevention demands a multi-faceted approach starting with early intervention and addressing both vulnerabilities and demand. For example:

Give vulnerable minors love and acceptance — Boorse emphasized the importance of “looking back at the family unit,” noting that “one of the most critical pieces is… this feeling of being loved and accepted, obviously with appropriate boundaries.” She argued that fortifying family and community relationships from early childhood onward can help build emotional resilience and provide children with the strong foundation they need to resist exploitation.

Addressing demand is equally vital — Human trafficking is the second largest and fastest growing criminal enterprise in the world, which means it is a business. “If we are ever going to end this issue, we have to address the demand — and that means talking to our men,” Davis states. More specifically, there is a strong need to educate men and boys about the impact of seemingly “normalized” behaviors, such as consuming sexually exploitative online pornography or visiting strip clubs because “these normalized behaviors fuel a criminal industry,” she adds.

Everyone has a role to play

Combating human trafficking is a collective responsibility that requires education and action from many sectors of society. Everyone has the opportunity to educate themselves and support organizations like Spotlight and New Friends New Life. In addition, reporting suspicious activity to the DHS tip line at 1-866-DHS-2-ICE (1-866-347-2423) or the National Human Trafficking Hotline at 1-888-373-7888, as well as advocating for meaningful policy changes in local communities, are other ways to help.

Only through shared commitment and action can we build a world in which exploitation no longer has a foothold and all people are free from the devastation of human trafficking.


Learn more about human trafficking and human rights crimes through the Thomson Reuters Institute resource center.

]]>
New white paper: A blueprint for cultivating practice-ready lawyers /en-us/posts/ai-in-courts/white-paper-clear-research-practice-ready-lawyers/ Thu, 18 Sep 2025 13:33:16 +0000 https://blogs.thomsonreuters.com/en-us/?p=67568

Key takeaways:

      • New lawyers lack courtroom preparedness — Judges and experienced attorneys observe a significant decline in litigation, communication, and client advocacy skills among lawyers in their first five years of practice, with many new attorneys needing additional training before appearing in court.

      • Several factors are reducing learning opportunities — The readiness gap stems from law schools’ focus on theory over practical skills, reduced learning opportunities due to remote proceedings, and a traditional bar exam that doesn’t adequately assess real-world practice readiness.

      • A blueprint for improvements — The white paper advocates for solutions like mandatory supervised postgraduate practice, modernized bar admissions (including alternative pathways to legal careers), increased experiential learning in law school, and leveraging AI to enhance skills development and mentorship.


America’s courts are sounding an alarm: Too many new lawyers are entering the courtroom unprepared. To examine this critical situation, the Thomson Reuters Institute has published a new white paper, The unprepared lawyer: How America’s legal education is failing the courts and what must change, which is based on a research initiative from the Committee on Legal Education and Admissions Reform (CLEAR). The committee’s research draws on input from more than 4,000 judges and more than 4,000 practicing attorneys from all over the United States.

The findings are stark. Judges report declining litigation and communication skills among attorneys in their first five years as a lawyer. Nearly 60% of judges say client advocacy is being harmed, and more than half say they believe new lawyers should not appear in court without additional training.


What is driving the readiness gap? Law schools excel at teaching students research and analysis skills, but the schools’ practical preparation lags. Courtroom advocacy, procedural fluency, and professional communication clearly are not getting enough emphasis, the research shows.

Also, remote proceedings and fewer live trials have reduced opportunities for new lawyers to learn by observing, and many of those lawyers say the traditional bar exam does not measure real practice readiness — even as a newly updated bar exam that’s focused more on foundational skills will be in use by next summer.

A pathway to change

Distilling this research, the white paper offers a blueprint for change, much of it suggested by the judges involved. For example, many judges suggest that new lawyers have supervised postgraduate practice so no new lawyer practices alone on day one. Also, some urge modernizing bar admissions and offering alternative pathways that evaluate real competencies such as supervised practice, curated portfolios, and staged testing.

Other suggestions include making experiential learning mandatory in law school through clinics, externships, simulations, and practical drafting, as well as harnessing AI to improve skills development, feedback, and mentorship while keeping humans in the loop.

Despite the seriousness of the situation, there is some encouraging news. Evidence shows that initiatives such as supervised practice, mentorships, clinics, externships, and on-the-job training do improve the skills needed by new lawyers. Judges and lawyers overwhelmingly credit these experiences with building competence, judgment, and professionalism among newer lawyers.

The legal profession has a choice: Act now to reverse the erosion in courtroom performance and rebuild public trust or allow the problem to deepen. This white paper shows a path forward by aligning education, licensure, and technology to produce lawyers who are not just bar-ready but practice-ready. The nation’s courts, clients, and communities depend on it.


You can download the new white paper “The unprepared lawyer: How America’s legal education is failing the courts and what must change” by filling out the form below:

]]>
Access to housing justice: Leveraging AI to solve NYC’s security deposit crisis /en-us/posts/ai-in-courts/security-deposit-crisis/ Thu, 14 Aug 2025 16:29:27 +0000 https://blogs.thomsonreuters.com/en-us/?p=67208

Key insights:

      • Security deposit disputes are a significant issue in New York City — Half a billion dollars is locked in security deposits at any given time, and nearly 5,000 official complaints about illegally withheld deposits have been filed with the New York Attorney General since 2023. However, many cases go unreported due to the complexity and cost of pursuing justice.

      • AI-powered tool Depositron helps NYC tenants claim their security deposits — By guiding users through a simple process and generating customized, legally sound demand letters, Depositron can give tenants the easy access and means to assert their rights.

      • AI tools like Depositron can improve scale and access to justice — Depositron’s modular architecture makes it possible to expand to other jurisdictions with similar legal frameworks, providing a scalable blueprint for addressing other legal challenges.


Security deposit disputes are a significant and persistent housing justice issue in New York City. The NYC Comptroller estimates that half a billion dollars is locked in security deposits at any given time, and tenants are routinely required to provide large deposits to secure housing.

Since 2023, nearly 5,000 official complaints about illegally withheld deposits have been filed with the New York Attorney General, according to Gothamist, but this most likely only scratches the surface. Most cases go unreported because the time, cost, and complexity of pursuing justice are too high for most tenants.

Despite reforms like the Housing Stability and Tenant Protection Act of 2019, which mandates that landlords return deposits within 14 days and provide itemized deductions for any money withheld, enforcement remains weak and affordable pathways for recourse are slow or unavailable. And unfortunately for tenants, legal aid organizations prioritize eviction defense, not deposit recovery; and the NY AG’s complaint process is slow and opaque.

AI-powered tool delivers agency at scale

To address this challenge, Nori, a long-time tenant advocate with more than 20 years of experience litigating housing justice cases in NYC courts, and Martin, CEO and founder of LawDroid (and contributor to the Thomson Reuters Institute blog site), developed Depositron. This free, AI-powered, mobile-phone-accessible tool is available around the clock to help NYC tenants and those elsewhere in New York state recover their security deposits quickly, legally, and without the need for a lawyer. Depositron also has plans to launch in Florida and Chicago soon.

“Depositron delivers more than a legal document — it delivers agency,” says Nori. Indeed, this encapsulates the tool’s core value because it empowers tenants to take action and reclaim what is rightfully theirs.


“By making legal self-advocacy accessible, Depositron fills a gap left by traditional legal services, which often cannot take on these cases due to capacity or cost constraints.”


Depositron guides users through an intuitive, plain-language process to collect relevant details about their housing situation, including lease facts, deposit amount, move-out date, and landlord information. Users also can upload photos that document the apartment’s condition to strengthen their case. The tool then generates a customized, legally sound demand letter that cites New York laws and incorporates the user’s evidence. This process not only educates tenants about their rights but also gives them a practical, actionable way to assert those rights without facing the intimidation or expense of seeking traditional legal help.

Not surprisingly, AI is at the core of Depositron’s effectiveness. Unlike generic form generators, Depositron takes a hybrid approach, combining advanced large language models and structured prompts, together with conditional logic, to answer basic legal questions and capture the unique facts of each dispute and then translate them into a persuasive, legally grounded narrative.
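A hybrid design of this kind might look roughly like the sketch below. Everything here is hypothetical — the dataclass, field names, and prompt wording are invented for illustration, not drawn from Depositron's actual implementation — but it shows how conditional logic over captured facts can assemble a structured prompt before any language model is called. (The 14-day deadline is the one the 2019 Act imposes, per the article above.)

```python
# Hypothetical sketch: conditional logic builds a structured prompt from
# the tenant's facts; a (stubbed-out) LLM would then draft the letter.
from dataclasses import dataclass

@dataclass
class DepositDispute:
    deposit: float
    days_since_moveout: int
    itemization_received: bool

def build_prompt(d: DepositDispute) -> str:
    claims = []
    # NY's Housing Stability and Tenant Protection Act of 2019 requires
    # return of the deposit, with itemized deductions, within 14 days.
    if d.days_since_moveout > 14:
        claims.append("the 14-day return deadline under the 2019 Act has passed")
    if not d.itemization_received:
        claims.append("no itemized statement of deductions was provided")
    return (
        "Draft a firm but professional demand letter. "
        f"Deposit at stake: ${d.deposit:.2f}. "
        "Grounds: " + "; ".join(claims) + "."
    )

prompt = build_prompt(DepositDispute(deposit=3000,
                                     days_since_moveout=30,
                                     itemization_received=False))
# `prompt` would be sent to an LLM along with the user's uploaded evidence.
```

The design choice this illustrates is that the deterministic rules — which legal grounds apply — live in ordinary code, while the model is reserved for what it does well: turning the structured facts into a persuasive narrative.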

This customized approach simulates the client interview and legal writing process and makes it possible to help thousands of tenants efficiently. Early testing with law students, tenant advocates, and pro se renters demonstrated success. In fact, users reported that Depositron made them feel more confident, informed, and in control, and many recovered their deposits faster than they would have through conventional means.

“By making legal self-advocacy accessible, Depositron fills a gap left by traditional legal services, which often cannot take on these cases due to capacity or cost constraints,” Nori explains. This is a crucial differentiator for Depositron because legal aid organizations and private attorneys are rarely able to assist with disputes over relatively small sums of money.

Poised for expansion in policy and geography

By enabling tenants to send professional, well-researched demand letters at scale, the platform changes the risk calculation for landlords. As more tenants assert their rights with credible legal documents, landlords are incentivized to comply with the law rather than risk penalties or further legal action.

The tool also contributes to systemic change by collecting anonymized data on violation patterns, which can be shared with advocates and enforcement agencies. This data-driven approach enables targeted interventions and supports broader policy efforts to improve housing justice.


In addition, Depositron’s modular architecture is designed for expansion to other jurisdictions with similar legal frameworks, such as California, Illinois, and Massachusetts. The team also can partner with local legal aid organizations to accelerate adoption and impact in new markets. Nori and Martin say they have already heard from advocates in Michigan, Maryland, Washington, Tennessee, California, and Washington, D.C., about building platforms in those markets.
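The modular, multi-jurisdiction design Nori and Martin describe can be pictured as a table of per-state rule parameters consumed by one shared letter-generation engine. The sketch below is purely illustrative, not Depositron’s actual architecture; the citations, deadlines, and damages figures are assumptions that local counsel would need to verify before use:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JurisdictionRules:
    """Per-state rule module: the generation engine stays the same,
    and only these parameters change when entering a new market."""
    name: str
    statute_citation: str
    return_deadline_days: int   # days the landlord has to return the deposit
    damages_multiplier: int     # statutory damages for bad-faith retention

# Illustrative entries only -- citations and figures would need
# verification by local counsel before any real deployment.
RULES = {
    "NY": JurisdictionRules("New York", "N.Y. Gen. Oblig. Law § 7-108", 14, 2),
    "CA": JurisdictionRules("California", "Cal. Civ. Code § 1950.5", 21, 2),
}

def max_recovery(state: str, deposit: float) -> float:
    """Upper bound on statutory damages a demand letter could cite."""
    return deposit * RULES[state].damages_multiplier
```

Isolating everything jurisdiction-specific in one data structure is what would make expansion to a new state a legal-research task rather than a re-engineering effort.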

Further, the path taken in Depositron’s development can offer lessons for building future access-to-justice tools. For example, user empowerment and autonomy are essential, and intuitive design is as important as legal accuracy, Nori and Martin discovered. In addition, targeted solutions for discrete problems can drive meaningful change, and AI can serve as both the engine and the interface for delivering legal services at scale.

Depositron demonstrates that technology can bridge longstanding gaps in legal access by making protections both real and actionable for everyone, not just those who can afford traditional representation. By transforming a process that traditionally took longer than a year into one that can be resolved in a matter of weeks, Depositron restores agency and financial stability to tenants and provides a scalable blueprint for addressing other access-to-justice challenges in the digital age.


You can find out more about how justice tech solutions and tools are working to improve citizens’ access to justice here

Government Legal Department Report: Why are courts & government law agencies so slow to implement AI? /en-us/posts/government/courts-slow-implement-ai/ Wed, 13 Aug 2025 12:31:22 +0000 https://blogs.thomsonreuters.com/en-us/?p=67128

Key findings:

      • Skepticism and barriers to tech adoption plague courts — Despite the potential efficiencies offered by GenAI and other innovative technologies, many court professionals and agencies remain skeptical or pessimistic about their adoption.

      • Budget, bureaucracy & culture also impede progress — Courts face significant financial constraints, bureaucratic approval processes, and a cultural reluctance to change, which collectively slow the adoption of new technologies.

      • Workforce shortages undermine access to justice — Severe shortages of court reporters, judges, and prosecutors also are causing backlogs and undermining access to justice, particularly for low-income and self-represented litigants.


Courts and government legal agencies at the federal, state, and local levels continue to face budget constraints and challenges in attracting and retaining talent, according to the Thomson Reuters Institute’s recent 2025 Government Legal Department Report, which also found that court staff and legal professionals strongly desire to spend less time on administrative tasks and more time researching increasingly complex issues and practicing law.

The implementation of generative AI (GenAI) and other innovative technology tools could help these professionals find more efficiencies within their short-staffed teams — yet there are many hurdles to implementation, the report found. Despite the opportunity technology could hold, 36% of court and legal professionals surveyed describe their agency’s attitude toward GenAI as pessimistic or apocalyptic. More than half of these respondents say their organizations have no plans to use GenAI technology in the future; and more than 40% say their organizations lack a plan on how to manage, adopt, and implement innovative GenAI-driven technology.

Further, there are many factors that could be contributing to this skepticism and technological resistance, the report found.

Status of tech implementation in courts & agencies

The report showed that while about 40% of respondents say their agencies saw technology investment increase over the last two years, more than two-thirds describe their public sector technology as inferior to private sector technology and systems. And in agencies where tech systems are in place or planned for adoption, they are most often used for document management, matter management, and public records/Freedom of Information Act (FOIA) requests.

In fact, more than half of the respondents surveyed say their agencies have no plans to install third-party software tools that incorporate GenAI, professional packages that incorporate GenAI, or open-source AI tools.

A potential contributing factor to the hesitancy to digitize and implement cloud-based technology is the threat of cyber-attacks. For example, one county experienced a ransomware attack in May that affected its sheriff’s office, circuit clerk, and county courthouse. Fortunately, its planning and resilience efforts allowed for continuity even as the departments were forced offline.

Another example is the Kansas Judicial Branch and the ransomware attack it experienced in October 2023, which forced all but one county’s court systems offline and exposed the personal data of more than 150,000 individuals. The attack crippled e-court systems for more than three months, and working through the backlog of paper filings from that period took seven months following the initial attack.

Cultural resistance & financial barriers to tech change

In addition to this hesitancy, the legal sector as a whole has had to reckon with the impact of AI technology. A task that can be partially or fully automated through AI still holds value, even if it now takes only minutes to complete. And for lawyers who may represent litigants in court, articulating that value proposition can be daunting.

Legal professionals can automate drafting and document review, significantly reducing their administrative time, which benefits their departments. However, the upfront costs associated with the technology can be a big hurdle for those same departments to overcome.

Even if the desire for technological change exists within a court system, budget constraints and the bureaucratic approval process remain the biggest barriers to adoption. Public sector procurement is historically structured around long-term contracts, with preference given to the lowest-cost option, which can limit flexibility.

Gone are the days of a one-time lump-sum investment in innovative technology. Indeed, the entire Gov/Tech industry is rapidly shifting in the direction of cloud-based solutions and SaaS (software as a service) models — with cloud vendors seeing a 10% increase in SaaS subscription models. However, government agencies — which are traditionally high-value, slow-moving, but secure customers — are effectively discouraged from transitioning or experimenting with newer tools by their contract terms.



Yet, some are finding a way. Orange County Superior Court, California, is an outlier in its levels of data fluency, strategic innovation adoption, and measured success in driving operational efficiency.

Dang, former Chief Financial and Administrative Officer for the Superior Court, says the court’s strategy was first to invest in its people (via training programs and building data fluency through a data academy); then in process (building a data culture and identifying the court’s core problems); and in technology implementation last. By focusing on low-risk, high-impact problems first, buy-in was built over time, Dang explains.

This aligns with the National Association of Court Management’s framework — it begins with the capacity (and data analytics) to measure performance, moves from there to the ability to define problems through the data, and finally seeks solutions.

Effective organizations should ask themselves: “What problem am I solving?” rather than “How can we best use AI?” Indeed, AI should be used to solve real problems that are currently hindering court performance, rather than using it just because the technology is new and available.

As the Government Legal Department Report showed, the path to AI adoption in courts and government legal agencies is blocked by structural, cultural, and statutory resistance. However, by starting with a clear identification of the problems these organizations face, and creating room for ethical and strategic innovation, GenAI could become a more pragmatic solution for courts and government legal agencies rather than a hypothetical one.


You can download a copy of the 2025 Government Legal Department Report here
