AI for Justice Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/topic/ai-for-justice/ Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Pattern, proof & rights: How AI is reshaping criminal justice /en-us/posts/ai-in-courts/ai-reshapes-criminal-justice/ Fri, 10 Apr 2026

Key insights:

      • AI’s greatest strength in criminal justice is pattern recognition — AI can process vast amounts of data quickly, helping law enforcement and legal professionals detect connections, reduce oversight gaps, and improve consistency across investigations and casework.

      • AI should strengthen justice, not substitute for human judgment — Legal professionals are integral to evaluating AI-generated outputs, especially when decisions affect evidence, warrants, and individuals’ constitutional rights.

      • The most effective model is human/AI collaboration — AI handles scale and speed, while judges, attorneys, and investigators provide the context, accountability, and ethical reasoning needed to protect due process.


The law has always been about patterns — patterns of behavior, patterns of evidence, and patterns of justice. Now, courts and law enforcement can leverage a tool powerful enough to see those patterns at a scale and a speed no human mind could match: AI.

At its core, AI works by recognizing patterns. Rather than simply matching keywords, it learns from large amounts of existing text to understand meaning and context and uses that learning to make predictions about what comes next. In the context of law enforcement, that capability is nothing short of transformative.

These themes were front and center in a recent webinar from a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI). The webinar brought together voices from across the justice system, and what emerged was a clear and consistent message: AI is a powerful ally in the pursuit of justice, but only when paired with the judgment, accountability, and constitutional grounding that human professionals can provide.

AI’s pattern recognition is a gamechanger

“AI is excellent,” said Mark Cheatham, Chief of Police in Acworth, Georgia, during the webinar. “It is better than anyone else in your office at recognizing patterns. No doubt about it. It is the smartest, most capable employee that you have.”

That kind of capability, applied to the demands of modern policing, investigation, and prosecution, is a genuine gamechanger. However, the promise of AI extends far beyond the patrol car or the precinct. Indeed, it cascades through the entire arc of justice — from the moment a crime is detected all the way through prosecution and adjudication.

Each step in that chain represents not just an operational and efficiency upgrade, but an opportunity to make the system more fair, more consistent, and more protective of the rights of everyone involved.

Webinar participants considered the practical implications. For example, AI can identify and mitigate human error in decision-making, promoting greater consistency and fairness in outcomes across cases. And by automating labor-intensive tasks such as reviewing body camera footage, AI frees prosecutors and defense attorneys to focus on other aspects of their work that demand professional judgment and legal expertise.

In legal education, the potential of AI is similarly recognized. Hon. Eric DuBois of the 9th Judicial Circuit Court in Florida emphasizes its role as a tool rather than a substitute. “I encourage the law students to use AI as a starting point,” Judge DuBois explained. “But it’s not going to replace us. You’ve got to put the work in, you’ve got to put the effort in.”


AI can never replace the detective, the prosecutor, the judge, or the defense attorney; however, it can work alongside them, handling the volume and velocity of data that no human team could process alone.


Judge DuBois’ perspective aligns with broader judicial sentiment on the responsible integration of AI. In fact, one consistent theme across the webinar was the necessity of maintaining human oversight. The role of the legal professional remains central, participants stressed, because that ensures accuracy, accountability, and ethical judgment. The appropriate placement of human expertise within AI-assisted processes is essential to ensuring a fair and effective legal system.

That balance between leveraging AI and preserving human judgment is not just good practice; it’s a cornerstone of justice. While Chief Cheatham praises AI’s pattern recognition, he also cautions that it “will call in sick, frequently and unexpectedly.” In other words, AI is a powerful but imperfect tool, and the professionals who rely on it must always be prepared to intervene when it falls short. Moreover, the technology is improving extremely rapidly, and the models we are using today will likely be the worst models we ever use.

Naturally, that readiness is especially critical when individuals’ rights are on the line. “A human cannot just rely on that machine,” said Joyce King, Deputy State’s Attorney for Frederick County in Maryland. “You need a warrant to open that cyber tip separately, to get human eyes on that for confirmation, that we cannot rely on the machine.” Clearly, as the webinar explained, AI does not replace constitutional obligations; rather, it operates within them, and the professionals who use AI are still the guardians of due process.

The human/AI partnership is where justice is served

Bob Rhodes, Chief Technology Officer for Thomson Reuters Special Services (TRSS), echoed that sentiment with a principle that cuts across every application of AI in the justice system. “The number one thing… is a human should always be in the loop to verify what the systems are giving them,” Rhodes said.

This is not a limitation of AI; instead, it’s the design of a system that works. AI identifies the patterns, and trained, experienced professionals evaluate them, act on them, and are accountable for them.

That partnership is where the real opportunity lives. AI can never replace the detective, the prosecutor, the judge, or the defense attorney. However, it can work alongside them, handling the volume and velocity of data that no human team could process alone. That means the humans in the room can focus on what they do best: applying judgment, upholding the law, and protecting individuals’ rights.

For judicial and law enforcement professionals, this is the moment to lean in. The patterns are there, the technology to read them is here, and the opportunity to use both in service of rights — not against them — has never been greater.


Please add your voice to Thomson Reuters’ flagship global study exploring how the professional landscape continues to change.

Scaling Justice: Unlocking the $3.3 trillion ethical capital market /en-us/posts/ai-in-courts/scaling-justice-ethical-capital/ Mon, 23 Mar 2026

Key takeaways:

      • An additional funding stream, not a replacement — Ethical capital has the potential to supplement existing access to justice infrastructure by introducing a justice finance mechanism that can fund cases with measurable social and environmental impact.

      • Technology as trust infrastructure — AI and smart technologies can provide the governance scaffolding required for ethical capital to flow at scale, including standardizing assessment, impact measurement, and oversight.

      • Capital is not scarce; allocation is — The true bottleneck is not the availability of funds; rather, it’s the disciplined, investment-grade legal judgment required to evaluate risk, ensure compliance, and measure impact in a way that makes justice outcomes investable.


Kayee Cheung & Melina Gisler, Co-Founders of justice finance platform Edenreach, are co-authors of this blog post

Access to justice is typically framed as a resource problem — the idea that there are too few legal aid lawyers, too little philanthropic funding, and too many people navigating civil disputes alone. As a result, the majority of individuals who face civil legal challenges do so without representation, often because they cannot afford it.

Yet this crisis exists alongside a striking paradox. While 5.1 billion people worldwide face unmet justice needs, an estimated $3.3 trillion in mission-aligned capital — held in donor-advised funds, philanthropic portfolios, private foundations, and impact investment vehicles — remains largely disconnected from solutions.

Unlocking even a fraction of this capital could introduce a meaningful parallel funding stream — one capable of supporting high-impact cases that currently fall outside traditional funding models. Rather than depending on charity or contingency, what if justice also attracted disciplined, impact-aligned investment in the cases themselves, alongside funding for the technology that supports them?

Recent efforts have expanded investor awareness of justice-related innovation. Programs like those run by Village Capital have helped demystify the sector and catalyze funding for the technology serving justice-impacted communities. Justice tech, or impact-driven direct-to-consumer legal tech, has grown exponentially in the last few years, along with increased investor interest and user awareness.

Litigation finance has also grown, but its structure is narrowly optimized for high-value commercial claims with a strong financial upside. Traditional funders typically seek 5- to 10-times returns, prioritizing large corporate disputes and excluding cases with significant social value but lower monetary recovery, such as consumer protection claims, housing code enforcement, environmental accountability, or systemic health negligence.

Justice finance offers a different approach. By channeling capital from the impact investment market toward the justice system and aligning legal case funding with established impact measurement frameworks, it reframes certain categories of legal action as dual-return opportunities that deliver both financial and social returns.

This is not philanthropy repackaged. It’s the idea that measurable justice outcomes can form the basis of an investable asset class, if they’re properly structured, governed, and evaluated.

Technology as trust infrastructure

While mission-aligned capital is widely available, the ability to evaluate legal matters with the necessary rigor remains limited. Responsibly allocating funds to legal matters requires complex expertise, including legal merit assessment, financial risk modeling, regulatory compliance, and impact evaluation. Cases must be considered not only for their likelihood of success and recovery potential, but also for measurable social or environmental outcomes.

Today, that assessment is largely manual and capacity-bound by small teams. The result is a structural bottleneck as capital waits on scalable, trusted evaluation and allocation.

Without a way to standardize and responsibly scale analysis of the double bottom line, however, justice funding remains bespoke, even when resources are available.

AI-enabled systems can play a transformative role by standardizing assessment frameworks and supporting disciplined capital allocation at scale. By encoding assessment criteria, decision pathways, and compliance safeguards and then mapping case characteristics to impact metrics, technology can enable consistency and allow legal and financial experts to evaluate exponentially more matters without lowering their standards.
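
To make this concrete, here is a minimal, purely illustrative Python sketch of what encoding assessment criteria and mapping case characteristics to impact metrics could look like. The fields, weights, and scoring formula are hypothetical assumptions chosen for illustration; they are not Edenreach’s or any platform’s actual methodology.

```python
# Illustrative sketch only: hypothetical criteria and weights, not any
# platform's actual model. The point is that once criteria are encoded,
# every case is scored consistently, and human review stays mandatory.
from dataclasses import dataclass

@dataclass
class CaseProfile:
    merit_score: float        # 0-1, legal merit as assessed by counsel
    recovery_estimate: float  # expected monetary recovery
    funding_need: float       # capital required to pursue the case
    impact_tags: list[str]    # e.g., ["housing", "environmental"]

# Hypothetical mapping from case characteristics to impact metrics
IMPACT_WEIGHTS = {"housing": 0.9, "environmental": 0.8, "consumer": 0.7}

def assess(case: CaseProfile) -> dict:
    """Apply the same encoded criteria to every case."""
    financial = case.merit_score * case.recovery_estimate / case.funding_need
    social = sum(IMPACT_WEIGHTS.get(tag, 0.0) for tag in case.impact_tags)
    return {
        "financial_multiple": round(financial, 2),
        "social_score": round(social, 2),
        # Compliance safeguard: a human expert makes the final call
        "requires_human_review": True,
    }

print(assess(CaseProfile(0.7, 150_000, 40_000, ["housing"])))
```

The value of even a simple encoding like this is consistency: the same inputs always yield the same assessment, which is what makes evaluations auditable at scale.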

And by integrating legal assessment, financial modeling, and impact alignment within a governed tech framework, justice finance platforms like Edenreach can function as the connective tissue. Through the platform, impact metrics are applied consistently while human experts remain responsible for final determinations, thereby reducing friction, increasing transparency, and supporting auditability.

When incentives align

It’s no coincidence that many of the leaders exploring justice finance models are women. Globally, women experience legal problems at disproportionately higher rates than men yet are less likely to obtain formal assistance. Women also control significant pools of global wealth and are more likely to invest responsibly. Indeed, 75% of women believe investing responsibly is more important than returns alone, and female investors are almost twice as likely as their male counterparts to prioritize environmental, social, and corporate governance (ESG) factors when making investment decisions.

When those most affected by systemic barriers also shape capital allocation decisions, structural change becomes more feasible. Despite facing steep barriers in legal tech funding (just 2% goes to female founders), women are far better represented in access-to-justice legal tech than across legal tech overall, where they account for just 13.8%.

This alignment between lived experience, innovation leadership, and capital stewardship creates an opportunity to reconfigure incentives in favor of meaningful change.

Expanding funding and impact

Justice financing will not resolve the justice gap on its own. Mission-focused tools for self-represented parties, legal aid, and court reform remain essential components of a functioning justice ecosystem. However, ethical capital represents an additional structural layer that can expand the range of cases and remedies that receive financial support.

Impact orientation can accommodate longer time horizons, alternative dispute resolution pathways, and remedies that extend beyond monetary damages. In certain matters, particularly those involving environmental harm, systemic consumer violations, or community-wide injustice, capital structured around impact metrics may identify and enable solutions that traditional litigation finance models do not prioritize.

For example, capital aligned with defined impact frameworks may support outcomes that include remediation programs, compliance reforms, or community investments alongside financial recovery. These approaches can create durable benefits that outlast a single judgment or settlement.

Of course, solving deep-rooted inequities and legal system complexity requires more than new tools and new investors. It requires designing capital pathways that are repeatable, accountable, and aligned with measurable public benefit.

Although justice finance may not be a fit for every case and has yet to see widespread uptake, it does have the potential to reach cases that currently fall through the cracks — cases that have merit, despite falling outside traditional litigation finance models and legal aid or impact litigation eligibility criteria.


You can find other installments of our Scaling Justice blog series here

The efficiency imperative: AI as a tool for improving the way lawyers practice /en-us/posts/ai-in-courts/improving-lawyers-practice/ Wed, 18 Mar 2026

Key insights:

      • AI brings improved efficiency — AI accelerates tasks like document review and research, freeing lawyers to pursue more high-value work for clients.

      • AI does the work of a team of lawyers — AI levels the playing field for small law firms and solo practitioners by providing additional capacity without adding headcount, thereby allowing fewer lawyers to do the work of many.

      • Yet AI still needs guardrails — Lawyers must remain accountable, with human oversight and review to ensure that AI outputs are accurate, thereby preserving nuance and professional judgment.


Already, AI is no longer a theoretical concept for legal professionals, nor is it a nice-to-have for law firms seeking to impress their clients with improved efficiency and cost savings. That means the practical question now becomes how to adopt AI in ways that improve lawyers’ speed and capacity without compromising accuracy, confidentiality, or professional judgment.

The strongest near-term value shows up where modern practice is most strained: high-volume inputs and relentless timelines. In that environment, AI can be most helpful as an accelerant for the first pass through large bodies of material.

The possibilities, opportunities, and challenges of using AI in this way were discussed by a panel of experts in a recent webinar from a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

One panelist, Mark Francis, a partner at Holland & Knight, described one way that AI can be an enormous help. “Anything where we’re dealing with large volume of materials that need to be reviewed [such as] large sets of documents, large sets of legal research, large sets of discovery. Obviously, AI can be leveraged in all of those circumstances.” That framing is important because it anchors AI’s utility in a familiar workflow: review, triage, and synthesis at scale.

AI also has a role earlier in the workflow than many attorneys expect. In addition to sorting and summarizing, it can help generate starting structures. For lawyers drafting motions, client advisories, demand letters, contract markups, or internal investigations memos, the hardest step can be getting traction from a blank page. “It’s really good at content or idea generation,” Francis said, adding that lawyers can ask AI to “generate some ideas for me on this topic, or generate an outline of a document to cover a particular issue.”


“AI is definitely going to benefit some of the small law firms who cannot actually afford the workforce. AI can be an extension when it comes to the automation.”


Of course, that does not mean letting an AI model decide what the law is; rather, it means using AI to produce an initial outline, identify possible issues to consider, or propose alternate ways to organize an argument. Then, the attorney should apply their own judgment to accept, reject, refine, and verify the AI’s output.

For legal teams, the ideal mindset is that AI can compress the time between intake and a workable first draft, whether that draft is a research plan, a deposition outline, a set of contract fallback positions, or a motion framework. However, speed is only valuable if it facilitates careful lawyering, not just taking shortcuts.

Efficiency that scales down, not just up

AI’s impact is not limited to large law firms with dedicated tech & innovation budgets. In fact, the benefits may be most transformative for smaller legal organizations that feel every hour of administrative drag and every unstaffed matter. Panelist Ashwini Jarral, a Strategic Advisor at IGIS, underscores how broad the current level of AI adoption already is. “AI is already being used in a lot of legal research, contract analysis, and in office operations,” Jarral explained. “Whether that’s in a small law firm or a large law firm, everybody can benefit from that automation with this AI.”

For many practices, that list maps directly onto the work that consumes lawyers’ time without always adding commensurate value: repetitive research steps, first-pass contract review, intake and scheduling, matter administration, and other operational tasks.

Historically, scale favored organizations that could hire more associates, paralegals, and support staff to push volume through the pipeline. Now, AI offers a different form of leverage: additional capacity without adding headcount. “It is definitely going to also benefit some of the small law firms who cannot actually afford the workforce,” Jarral said, adding that “AI can be an extension when it comes to the automation.” For a solo or small firm, that extension can show up as faster first-pass review of contracts, quicker summarization of records, more consistent intake workflows, and reduced time spent on repetitive back-office tasks.

At the same time, it is crucial to be clear-eyed about what is being automated. While AI can help deliver efficiency, it does not offer legal judgment itself. The legal profession still must decide, matter by matter, what level of review is required and what risks are acceptable.


“Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”


And that’s where implementation discipline becomes a strategic differentiator. Law firms that treat AI as a general-purpose shortcut tend to create risk, while firms that treat AI as a workflow component, with guardrails, review steps, and clear accountability, are more likely to capture value without compromising quality.

The non-negotiable: lawyers remain accountable

Any serious conversation about AI in legal practice must address these limits, panelists agreed. The Hon. Linda Kevins, a Justice on the Supreme Court in the 10th Judicial District of New York (Suffolk County), offered the most direct articulation of the boundary line: “Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”

Indeed, legal work is saturated with nuance. The same set of facts can carry different weight depending on jurisdiction, judge, forum, procedural posture, and the client’s goals and risk tolerance. Even when the law is clear, the right action often is not. To strive for true justice requires judgment about timing, framing, business consequences, reputational risk, and settlement dynamics. Those are not merely inputs for an AI to process — they are human decisions that define legal representation.

As the webinar made clear, this is the point at which responsible use becomes practical, not abstract. If AI is used for research support, contract analysis, or document review, lawyers need an explicit approach for verification and oversight. The outputs may look polished and may sound confident; however, confidence is not accuracy, and professional responsibility does not shift to a vendor or an AI model. Human review is not a ceremonial or perfunctory step, nor is it a formality. Rather, it is the core control that protects clients and the court, and it is the inflection point that turns AI from a novelty into a defensible tool.

In practice, the human in the loop means deciding in which instances AI can assist and in what instances it cannot. It also means reserving an attorney’s time for the decisions that carry legal and ethical consequences and building repeatable habits that prevent teams from drifting into overreliance on AI, especially under deadline pressure.

The legal profession can capture real benefits from AI, including speed, scalability, and improved access, but only if it adopts the technology in a way that preserves what Justice Kevins highlighted: training, nuance, and human accountability.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here

Scaling Justice: Easing the UK’s employee rights crisis /en-us/posts/ai-in-courts/scaling-justice-uk-employee-rights-crisis/ Tue, 24 Feb 2026

Key takeaways:

      • An emerging employment tribunal crisis — The UK’s employment tribunal system is facing unprecedented backlogs, long wait times, and unaffordable legal representation, leaving many workers and small businesses unable to effectively resolve workplace disputes.

      • Process-oriented barriers to justice — Most claims are dismissed not because they lack merit, but because claimants disengage from a slow and complex process, with legal costs often exceeding the value of claims and legal aid unable to meet rising demand.

      • A potential role for legal technology — Mission-driven legal tech platforms are emerging to provide affordable, scalable support and help claimants stay engaged, offering a practical way to improve access to justice.


When a worker in the United Kingdom is unfairly dismissed or denied wages, their path to resolution runs through employment tribunals, a specialized court system separate from civil courts. As in the United States, many workers and small businesses cannot afford legal representation and must navigate the process on their own.

With backlogs at all-time highs and affordable legal services at all-time lows, this system is coming under increasing pressure. Fortunately, mission-driven technology and data analysis are emerging to level the playing field and increase access to justice.

Current state by the numbers

According to an analysis of UK Ministry of Justice tribunal statistics and other data sources,* in the second quarter of 2025, employment tribunals resolved just 45% of incoming claims, adding 18,000 cases to the backlog in that quarter alone. In the past year, the open caseload has surged by 244%. This pressure is set to intensify as the incoming Employment Rights Act 2025 — the UK’s most significant overhaul of workplace protections in decades — extends protection to six million more workers in 2027.

As the backlog increases, so do wait times. In 2025, the average wait for resolution reached 25 weeks, more than double that of 2024, with some claim types like equal pay and discrimination claims reaching up to 37 weeks. Some more complex cases are reported to have their final hearings scheduled as far out as 2029.

With only 8% of cases reaching a final hearing and the majority resolved through settlement or withdrawal, the growing backlog raises concerns about whether lengthy wait times influence how claimants choose to resolve their cases.

In the UK, a common threshold for legal affordability is a salary of £55,000, meaning around 65% of workers cannot afford legal representation. Legal aid and pro bono services exist to support those in need, but with growing funding constraints and rising demand, these services cannot reach nearly two-thirds of claimants.


You can find more insights about how courts are managing the impact of advanced technology from our Scaling Justice series here


Tribunal awards are largely calculated from salary. This can result in a claim’s value often being lower than the cost of legal representation to pursue it. In a typical hospitality case, for example, a worker owed £1,500 in unpaid wages (equivalent to 3½ weeks of pay) has a 92% chance of representing themselves and will wait on average six months for resolution — without the pay owed, legal support, or outcome certainty.

The cost, both in time and resources, also falls on employers. In lower-margin industries such as hospitality, default judgments, in which an employer does not engage with proceedings, can reach as high as 37%, compared with a national average of around 6%. For these employers and for smaller businesses more broadly, the cost of legal support may also exceed the value of defending a claim.

With rising costs and growing delays, the risk for both employers and employees is that the system becomes inaccessible, leading to outcomes shaped by who can afford to sustain the process rather than case-by-case strength.

Where justice tech fits

The conventional assumption is that self-represented claimants are at a significant disadvantage when they go to court; yet the data is more nuanced. Self-represented claimants who reach a hearing prevail 44% of the time, compared to 52% for those with legal representation — a gap of just eight percentage points.

The greater risk is not losing at hearing but never actually reaching one. Analysis of more than 2,700 struck-out, or dismissed, cases by employment rights platform Yerty found that the majority were dismissed not for lack of merit, but because claimants stopped engaging with the process. Only 6% were struck out for having no reasonable prospect of success. This suggests that the primary barrier may not be the absence of legal representation, but the ability to sustain engagement with a slow, complex, and often opaque process.

Increasing numbers of UK workers turning to AI tools like ChatGPT for legal support highlight not only the demand for affordable access but also the risks of general-purpose tools being used in legal contexts. Fabricated case law in tribunal submissions, for example, harms users and adds further pressure to an already overstretched system.


The conventional assumption is that self-represented claimants are at a significant disadvantage when they go to court; yet the data is more nuanced.


A new generation of legal technology platforms is emerging to fill this gap, with tools purpose-built for the specific circumstances of employment law. Yerty and Valla, among others, offer AI-powered guidance tailored to the UK tribunal process, providing affordable, scalable support previously out of reach for most workers. Government organizations are also moving in this direction; ACAS, for example, committed in its recent five-year strategy to exploring new digital services that offer faster, more accessible support.

Technology alone cannot address underfunding, judicial capacity, or fundamental power imbalances. However, if the majority of dismissed claims stem from disengagement rather than weak cases, and self-represented claimants prevail at comparable rates to those with lawyers, then the answer isn’t more lawyers — it’s better support upstream. Mission-driven legal technology can provide consistent, scalable guidance that helps both parties manage the process and avoid falling through the cracks.

The UK government’s own assessment of the Employment Rights Bill forecasts a 15% increase in claims by 2027 due to expanded eligibility. As noted above, the system is already under significant pressure before these reforms take effect, and traditional responses — more judges, more funding — too often take years to deliver.

While not a complete answer, justice tech can help address a real, measurable problem, that of keeping people engaged in a process that too often disengages them. For a hospitality worker owed back pay, a healthcare worker facing unfair dismissal, or a retail employee navigating a discrimination claim alone, that support could mean the difference between a case heard and one abandoned — and justice delayed or justice denied.


*Sources: Ministry of Justice Tribunal Statistics Quarterly (July-September 2025); Yerty analysis of 2,721 struck-out tribunal decisions and 8,761 case outcomes; ACAS Strategy 2025-2030; 2024 UK Judicial Attitude Survey, UCL Judicial Institute / UK Judiciary, February 2025.

When courts meet GenAI: Guiding self-represented litigants through the AI maze /en-us/posts/ai-in-courts/guiding-self-represented-litigants/ Thu, 19 Feb 2026

Key insights:

      • Considering courts’ approach — Although many courts do not interact with litigants prior to filings, they can explore how to equip court staff to discuss AI use with litigants.

      • Risk of generic AI tools — AI use in legal settings can’t be simply categorized as safe or risky; jurisdiction, timing, and procedure are vital factors, making generic AI tools unreliable for court-specific needs.

      • Specialty AI tools require testing — Purpose-built court AI tools offer a safer alternative for litigants, yet these require development and extensive testing.


Self-represented litigants have always pieced together legal help from whatever sources they can access. Now that AI is part of that mix, courts are working to help people use this advanced technology responsibly without implying an endorsement of any particular tool or even the use of AI.

Many litigants cannot afford an attorney; others may distrust the representation they have or may not know where to begin. In any case, people need a meaningful way to interact with the legal system. Used carefully and responsibly, AI can support access to justice by helping self-represented litigants understand their options, organize information, and draft documents, while still requiring litigants to verify their information and consult official court rules and resources.

These issues were discussed in a recent webinar in which a panel explored the potential benefits of AI for access to justice and the operational challenges of integrating AI into public-facing guidance for litigants.

The problem with “Just ask AI”

Angela Tripp of the Legal Services Corporation noted that people handling legal matters on their own have long relied on a mix of resources, “some of which were designed for that purpose, and some of which were not.” AI is simply a new tool in that environment, she added. The primary challenge is that court processes are rule-based and time-sensitive, and a mistake can mean missing a deadline, submitting the wrong document, or misunderstanding a requirement that affects the case.

Access to justice also requires more than just access to information in general. Court users need information that is relevant, complete, accurate, and up to date. Generic AI systems, such as most public-facing tools, are trained on broad internet text and may not reliably deliver that level of specificity for a particular court, case type, or stage of a proceeding. Jurisdiction, timing, and procedure all matter. Unfortunately, AI can omit key steps or emphasize the wrong issues, and self-represented litigants may not have the legal experience to recognize what is missing.

At the same time, AI offers several potential benefits to self-represented litigants. It can explain concepts in plain language, help users structure a narrative, and produce a first draft faster than many people can on their own. The challenge is aligning those strengths with the precision that court processes demand.

A strategic pivot: from teaching litigants to equipping staff

In the webinar, Stacey Marz, Administrative Director of the Alaska Court System, described her team’s early efforts to give self-represented litigants clear guidance about safer and riskier uses of AI, including examples of how to properly prompt generative AI queries.

The team tried to create traffic-light categories that would simplify decision-making; however, this approach proved challenging despite several drafts. AI use can shift from low-risk to high-risk depending on context, and it was hard to provide examples without sounding like the court was endorsing a tool or sending people down a path for which the court could not guarantee results.

The group ultimately shifted to a more practical approach — training the people who already help litigants. The new guidance targets public-facing staff such as clerks, librarians, and self-help center workers. Instead of teaching litigants how to prompt AI, it equips staff to have informed, consistent conversations when litigants bring AI-generated drafts or AI-based questions to the counter.

The framework emphasizes acknowledgment without endorsement. It suggests language such as:

“Many people are exploring AI tools right now. I’m happy to talk with you about how they may or may not fit with court requirements.”

From there, staff can explain why court filings require extra caution and direct users to court-specific resources.

This approach also assumes good faith. A flawed filing is often a sincere attempt to comply, and a litigant may not realize that an AI output is incomplete or incorrect.

Purpose-built tools take time

The webinar also discussed how courts are exploring purpose-built AI tools, including judicial chatbots designed around court procedures and grounded in verified information. Done well, these tools can reduce common problems associated with generic AI systems, such as jurisdiction mismatch, outdated requirements, and fabricated or hallucinated citations.

However, building reliable court-facing AI demands significant time and testing. Marz shared Alaska’s experience, noting that what the team expected to take three months took more than a year because of extensive refinement and evaluation. The reason is straightforward: Court guidance must be highly accurate, and errors can materially harm someone’s legal interests. In fact, even after careful testing, Alaska still included cautionary language, recognizing that no system can guarantee perfect answers in every situation.

The path forward

Legal Services’ Tripp highlighted a central risk: Modern AI tools can be clear, confident, and easy to trust, which can lead people to over-rely on them. And courts have to recognize this balance. Courts are not trying to prevent AI use; rather, many are working toward realistic norms that treat AI as a drafting and organizing aid but require litigants to verify claims against official court sources and seek human support when possible.

Marz also emphasized that courts should generally assume filings reflect a litigant’s best effort, including in those cases in which AI contributed to confusion. The goal is education and correction rather than punishment, especially for people navigating complex processes without representation.

Some observers describe this moment as an early AOL phase of AI, akin to the very early days of the world wide web — widely used, evolving quickly, and uneven in its reliability. That reality makes clear guidance and consistent messaging more important, not less.

This shift among courts from teaching litigants to use AI to teaching court staff and other helpers how to talk to litigants about AI reflects a practical effort on the part of courts to reduce the risk of harm while expanding access to understandable information.

As is becoming clearer every day, AI can make legal processes feel more navigable by helping self-represented litigants draft, summarize, and prepare. Realizing that value, however, requires clear guardrails, court-specific verification, and careful implementation, especially when a missed detail can change the outcome of a case.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here

Scaling Justice: How technology is reshaping support for self-represented litigants /en-us/posts/ai-in-courts/scaling-justice-technology-self-represented-litigants/ Fri, 23 Jan 2026

Key takeaways:

      • From scarcity to abundance — Technology has shifted the challenge in access to justice from scarcity of legal help to issues of accuracy, governance, and effective support. AI and digital tools now provide abundant legal information to self-represented litigants, but they raise new questions about reliability, oversight, and alignment with human needs.

      • The necessity of human-in-the-loop — Human involvement remains essential for meaningful resolution. While AI can explain procedures and guide users, real support often requires relational and institutional human guidance, especially for vulnerable populations facing anxiety, low literacy, or systemic bias.

      • One part of a bigger question — Systemic reform and broader approaches are needed beyond technological fixes because technology alone cannot solve deep-rooted inequities or the complexity of the legal system. Efforts should include prevention, alternative dispute resolution, and redesigning systems to prioritize just outcomes and accessibility.


Access to justice has long been framed as a problem of scarcity, with too few legal aid lawyers and insufficient funding forcing systems to be built in triage mode. Underlying this framing has been the unspoken assumption that most people navigating civil legal problems would do so without meaningful help, often because their issues were not compelling or lucrative enough to justify legal representation.

This framing no longer holds, however. Legal information, once tightly controlled by legal professionals, publishers, and institutions, is now abundantly available. Large language models, search-based AI systems, and consumer-facing legal tools can explain civil procedure, identify relevant statutes, translate dense legalese into plain language, and generate step-by-step guidance in seconds.

Increasingly, self-represented litigants are actively using these tools, whether courts or legal aid organizations endorse them or not. Katherine Alteneder, principal at Access to Justice Innovation and former Director of the Self-Represented Litigation Network, notes: “This reality cannot be fully controlled, regulated out of existence, or ignored.”

And as Demetrios Karis, HFID and UX instructor at Bentley University, argues: “Withholding today’s AI tools from self-represented litigants is like withholding life-saving medicine because it has potential side effects. These systems can already help people avoid eviction, protect themselves from abuse, keep custody of their children, and understand their rights. Doing nothing is not a neutral choice.”

Thus, the central question is no longer whether technology can help self-represented litigants, but rather how it should be deployed — and with what expectations, safeguards, and institutional responsibilities.

Accuracy, error & tradeoffs

The baseline capabilities of general-purpose AI systems have advanced dramatically in a matter of months. For common use cases that self-represented litigants most likely seek — such as understanding process, identifying next steps, preparing for hearings, and locating authoritative resources — today’s frontier models routinely outperform well-funded legal chatbots developed at significant cost just a year or two ago.


The central question is no longer whether technology can help self-represented litigants, but rather how it should be deployed — and with what expectations, safeguards, and institutional responsibilities.


These performance gains raise important questions about the continued call for extensive customization to deliver basic legal information. However, performance improvements do not eliminate the need for careful design. Tom Martin, CEO and founder of LawDroid (and columnist for this blog), emphasizes that “minor tweaking” is subjective, and that grounding AI tools in high-quality sources, appropriate tone, and clear audience alignment remains essential, particularly when an organization takes responsibility and assumes liability for the tool’s voice and output.

Not surprisingly, few topics in the legal tech community generate more debate than AI accuracy, but it cannot be evaluated in isolation. Human lawyers make mistakes, static self-help materials become outdated, and informal advice from friends, family, or online forums is often wrong. Models should be evaluated against realistic alternatives, especially when the alternative is no help at all.

Off-the-shelf tools now perform surprisingly well at generating plain-language explanations, often drawing on primary law, court websites, and legal aid resources. In limited testing, inaccuracies tend to reflect misunderstandings or overgeneralizations rather than pure fabrication. And while these errors are still serious, they may be easier to detect and correct with review.

Still, caution is key, not least because AI systems can tell people what they want to hear in order to keep them on the platform. Claudia Johnson of Western Washington University’s Law, Diversity, and Justice Center asks what an acceptable error rate is when tools are deployed to vulnerable populations and reminds organizations of their duty of care. Mistakes, especially those that are known and uncorrected, can carry legal, ethical, and liability consequences that cannot be ignored.

Knowledge bases are infrastructure, but more is needed

Vetted, purpose-built, and mission-focused solution ecosystems are emerging to fill the gap between infrastructure and problem-solving. The Justice Tech Directory from the Legal Services National Technology Assistance Project (LSNTAP) provides legal aid organizations, courts, and self-help centers with visibility into curated tools that incorporate guardrails, human review, and consumer protection in ways that general-purpose AI platforms do not.

Of course, this infrastructure does not exist in a vacuum. Indeed, these systems address the real needs of real people. While calls for human-in-the-loop systems are often framed as safeguards against technical failure, some of the most important reasons for human involvement are often relational and institutional. Even accurate information frequently fails to resolve legal problems without human support, particularly for people experiencing anxiety, shame, low literacy, or systemic bias within courts.


Not surprisingly, few topics in the legal tech community generate more debate than AI accuracy, but it cannot be evaluated in isolation.


A human in the loop can improve how self-represented litigants are treated by clerks, judges, and opposing parties. Institutional review models often provide this interaction at pre-filing document clinics, navigator-supported pipelines, and structured AI review workshops that integrate human judgment and augment human effort rather than replacing it.

Abundance and the limits of technology

Information does not automatically produce equity. Technology cannot make up for persistent systemic issues, and several prominent voices caution against treating AI as a workaround for deeper system failures. Richard Schauffler of Principal Justice Solutions notes that the underlying problem with the use of AI in the legal world is that our legal process is overly complicated, mystified in jargon, inefficient, expensive, and deeply unsatisfying in terms of justice and fairness — and using AI to automate that process does not alter this fact.

Without changes at the courthouse level, upstream technological improvements may not translate into just outcomes. Bias, discrimination, and resource constraints cannot be solved by technology alone. Even perfect information from a lawyer does not equal power when structural inequities persist.

Further, abundance fundamentally changes the problem. As Alteneder notes, rather than access, the primary problem now is “governance, trust, filtering, and alignment with human values.” Similar patterns are seen in healthcare, journalism, and education. Without scaffolding, technology often widens gaps, benefiting those with greater capacity to interpret, prioritize, and act. For self-represented litigants, the most valuable support is often not answers but navigation: what matters most now, which paths are realistic, when to escalate, and when legal action may not serve broader life needs.

Focusing solely on court-based self-help misses an opportunity to intervene earlier, especially on behalf of self-represented litigants. AI-enabled tools have the potential to identify upstream legal risk and connect people to mediation, benefits, or social services before disputes harden.


You can find more insights about how courts are managing the impact of advanced technology from our Scaling Justice series here

Scaling Justice: Unauthorized practice of law and the risk of AI over-regulation /en-us/posts/ai-in-courts/scaling-justice-unauthorized-practice-of-law/ Mon, 01 Dec 2025

Key insights:

      • Are regulations choking innovation? — Current regulatory efforts may be stifling innovation in AI-driven legal solutions, exacerbating the access to justice crisis and prioritizing lawyer business model protection over consumer needs.

      • Some safeguards already in place — Existing consumer protection laws and product liability laws already provide robust safeguards against potential AI-related harm, making it unnecessary to impose additional restrictive policies on AI-driven legal services.

      • A balanced regulatory approach is best — An approach that encourages responsible innovation, prioritizes consumer protection, and fosters a data-driven mindset can best unlock the transformative potential of AI in addressing critical gaps in access to justice.


As AI-driven legal solutions gain traction, calls for regulation have grown apace. Some are thoughtful; others are ill-informed or protectionist, and many focus on the issue of unauthorized practice of law (UPL). While protecting the public is crucial, shielding the legal profession from competition is not. A large majority (92%) of low-income people currently receive no or insufficient legal assistance, and the ongoing uncertainty in the legal AI and UPL regulatory landscape is chilling innovation that could support them.

The legal profession has always struggled to provide affordable, accessible services even as it simultaneously attempts to block those working ethically to bridge the gap with technology. When done right, legal industry regulation should balance protection with progress to avoid stifling innovation and exacerbating the access to justice crisis.

Consumer protection laws already provide robust safeguards against potential AI-related harms. Existing product liability laws and enforcement actions by state attorneys general ensure that consumers have recourse if AI legal tools cause harm. Despite these safeguards, concerns about unregulated AI filling the gaps in legal services persist.

It is time to upend the calculus of consumer harm and examine the motives behind regulation. Rather than forcing tech-based legal services to prove they cause no harm in order to avoid charges of UPL, regulators should be required to demonstrate, with data, that legal technology companies cause harm, and to weigh whether any ruling will constrain supply in the face of a catastrophic lack of access to justice.

Uneven regulatory efforts raise questions

Current regulatory efforts tend to focus on companies that directly serve legal consumers, while leaving broader AI models largely unchecked. This raises uncomfortable questions: Are we truly protecting the public, or merely constraining competition and thereby reinforcing barriers to innovation in the process?


You can find out more here


“If UPL’s purpose is protecting the public from nonlawyers giving legal advice — and if regulators define legal advice as applying law to facts — how many legal questions are asked of these Big Tech tools every day?” asks Damien Riehl, a veteran lawyer and innovator. “And if we won’t go after Big Tech, will regulators prosecute Small Legal Tech, which in turn utilizes Big Tech tools? If Big Tech isn’t violating UPL, then neither is Small Tech [by using Big Tech’s tools].”

Efforts to regulate the use of AI-based legal services are, de facto, another path to market constraint. Any attempt to regulate AI should be rooted in actual consumer experience. Justice tech companies, by definition, pursue mission-driven work to benefit consumers, but if an AI-driven tool causes harm, it should certainly be investigated and regulated. State bar associations are not waiting for harm to occur before considering regulating AI-driven legal help — and we must wonder why.

The risks of premature regulation

We must enable, not obstruct, AI-driven legal solutions and ensure that innovation remains a driving force in modernizing legal services. If restrictive policies make it difficult to develop cost-effective legal solutions, fewer consumers — particularly those with limited resources — will have access to legal assistance.

AI is developing far too quickly for a slower regulatory trajectory to keep up — any contemplated regulation would be evaluating last year’s technology, which is at best half as good as the latest iterations. Regulating AI-driven legal services now is akin to prior restraint, in which published or broadcast material is suppressed or prohibited before release because it is anticipated to cause future problems. That approach is ill-suited to new technology; we can already look to product liability for evidence of actual harm.

By prioritizing consumers rather than lawyer business model protection, AI-enabled legal support would be monitored for potential harm with data collected and analyzed to bring to light any issues. That way, regulations could be built around that defined, data-backed harm. For instance, we might require certification protocols for privacy or security if those issues prove problematic.

Forward-thinking states are going further

In July, the Institute for the Advancement of the American Legal System (IAALS) released a new report that advocated for a phased approach to regulation, beginning with experimentation, education, and consumer protection, while gathering and evaluating data. Later phases could involve potential regulation based on what is learned. In this way, innovation is encouraged while consumer needs and public trust remain paramount.

Also this year, Colorado cut the proverbial Gordian Knot by releasing a non-prosecution policy — consistent with existing analysis of UPL complaints in the state — for AI tools focused on improving access to justice. Guiding principles include ensuring consumers have clarity about the services they receive and their limits, educating consumers on the risks inherent in relying on advice from non-lawyer sources, and including a lawyer in the loop. Utah, Washington, and Minnesota all have considered similar policies. And IAALS now is collaborating with Duke University’s Center on Law & Tech to create a toolkit and templates to make it easier for other states to adopt UPL non-prosecution or similar policies.

Yet, some regulators seek the opposite, looking to define the exact types of business activity that will lead to UPL prosecution. While this framework is likely to become obsolete more quickly, it serves a similar purpose: providing clear guardrails that allow innovation to flourish while protecting consumers by clearly indicating the limitations of the software. One proposal would specifically exclude tech products from UPL enforcement, provided they are accompanied by adequate disclosures that they are not a substitute for the advice of a licensed lawyer. Such policies are essential, and they can encourage entrepreneurs aiming to ameliorate the justice gap.

What’s next?

The legal and justice tech industries should aim for a regulatory framework that encourages responsible, iterative innovation — and participants should take some proactive steps, including: i) justice tech companies should participate in the discussion and share their business- and mission-focused perspectives to help shape any new regulations; and ii) regulators with internal non-prosecution policies should consider making them public to encourage entrepreneurs in their state.

These approaches would enable positive change for state residents, support overburdened legal aid organizations and courts, and foster a flourishing tech ecosystem aimed at serving unrepresented and under-represented parties.

The legal profession has not been able to ensure justice for all, making it even harder for low-income and unrepresented parties to find the help they need. Now, AI-driven legal service providers are moving forward on addressing critical gaps in access to justice.

With a measured and equitable approach to regulation that neither ignores AI’s risks nor overlooks its transformative potential, the legal industry and regulators must keep pace with today’s technology — and such efforts should not obstruct those legal providers who can bring the law closer to that ideal and help close the justice gap.


You can learn more about the challenges faced by justice tech providers here

Legal aid leads on AI: How Lone Star Legal Aid built Juris to deliver faster, fairer results /en-us/posts/ai-in-courts/legal-aid-ai-lone-star-juris/ Mon, 10 Nov 2025

Key takeaways:

      • Legal aid is leading on AI adoption — Legal aid organizations are leading the way in leveraging AI, with 74% using it in their work, driven by the need to serve millions of citizens who lack legal help.

      • Lone Star Legal Aid creates Juris — Juris, a new AI-powered tool from Lone Star Legal Aid, improves accuracy and trust through retrieval-augmented generation, source-cited answers, and a secure Azure-based architecture with an integrated citation viewer.

      • Keeping costs low — A phased, two-year build-and-test process kept costs low (at about $2,000 a year in infrastructure costs, plus about 300 staff hours) and produced dependable results.


A recent study finds that under-resourced legal aid nonprofits are adopting AI at nearly twice the rate of the broader legal field, driven by the urgent need to serve millions of Americans who may lack legal help. The study shows that almost three-quarters (74%) of legal aid organizations already use AI in their work, compared with a 37% adoption rate for generative AI (GenAI) across the wider legal profession. Lone Star Legal Aid (LSLA), a legal aid nonprofit serving eastern Texas, is one of the early adopters of AI.

According to LSLA, its attorneys were spending too much time and money hunting for answers across pricey platforms and scattered PDFs. Key materials lived in research databases, internal drives, and static repositories, while documents vetted by individual staff members were not centrally accessible. Without a single, trusted hub, staff faced slower research, duplicated effort, and delays that ultimately affected clients.

These strains are not unique to LSLA. In fact, court help centers and self‑help portals face the same fragmentation, licensing costs, and uneven access to authoritative guidance. What was needed was a verifiable, consolidated knowledge hub that could stabilize quality while reducing spending.

To solve this problem, LSLA turned to AI to create a legal tool called Juris, built to return fast, source‑cited answers. Juris was designed to centralize high‑value legal materials, cut reliance on expensive third‑party platforms, and lay a flexible foundation that the organization could reuse beyond legal research for internal operations and future client tools.

Multifaceted approach to ensuring accuracy and reliability

Several design choices help Juris advance its mission of increasing access to justice, including:

Design methods fuel trustworthy output — Juris was built for accuracy using a number of methods, including a retrieval-augmented generation (RAG) pipeline that grounds the chatbot’s answers in cited sources. It also uses semantic chunking, a process that breaks a document into natural, meaning‑based sections (for example, a heading plus the paragraphs that belong to it) so the original context stays together.

When a user asks a question, Juris retrieves only the most relevant of these sections. Limiting the AI to evidence from those passages improves accuracy and reduces hallucinations because the model is not guessing from memory. Instead, it is grounding answers in the text it just accessed.
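Juris’s code is not public, but the mechanics described above can be illustrated in a few lines. The Python sketch below shows one way semantic chunking and top-k retrieval might fit together; the chunking heuristic, the embed() stub, and all names are illustrative assumptions, with the stub standing in for a real embedding model.

```python
# Minimal sketch of semantic chunking plus retrieval (the "R" in RAG).
# Hypothetical code: Juris's actual implementation is not public.
import re
import numpy as np

def semantic_chunks(text: str) -> list[str]:
    """Group each heading with the paragraphs beneath it into one chunk."""
    blocks = [b.strip() for b in re.split(r"\n\s*\n", text) if b.strip()]
    chunks: list[str] = []
    for block in blocks:
        # Crude heading test (short line, no terminal period); a production
        # pipeline would rely on document structure or a layout model.
        is_heading = len(block) < 80 and not block.endswith(".")
        if is_heading or not chunks:
            chunks.append(block)          # heading starts a new chunk
        else:
            chunks[-1] += "\n\n" + block  # paragraph stays with its heading
    return chunks

def embed(passage: str) -> np.ndarray:
    """Placeholder for a real embedding model (e.g., an embeddings API)."""
    rng = np.random.default_rng(abs(hash(passage)) % 2**32)
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks most similar to the question (cosine similarity)."""
    q = embed(question)
    return sorted(chunks, key=lambda c: float(q @ embed(c)), reverse=True)[:k]
```

Because the generation step sees only the retrieved passages, each answer can be traced back to specific sections of specific documents, which is what makes source-cited responses possible.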

Solid technical architecture helps reliability — Juris’s technical architecture also supports reliable results: it combines Azure OpenAI, for secure and stateless access to AI models, with services that handle document ingestion, processing, and vector storage. Users interact through a custom internal web interface that pairs a PDF viewer with the chat experience, enabling seamless citation and document navigation. The platform is securely hosted on Azure App Service with continuous deployment orchestrated through GitHub, which provides reliable operations and streamlined updates.
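LSLA has not published its service code, but a stateless, grounded call against an Azure OpenAI deployment might look roughly like the following Python sketch. The endpoint, deployment name, and environment variables are placeholder assumptions, not Juris’s actual configuration.

```python
# Hypothetical sketch of a stateless, grounded Azure OpenAI call.
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # placeholder config
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def answer(question: str, passages: list[str]) -> str:
    # Number the passages so the model can cite them as [1], [2], ...
    context = "\n\n".join(f"[{i}] {p}" for i, p in enumerate(passages, 1))
    response = client.chat.completions.create(
        model="juris-gpt",  # illustrative Azure deployment name
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the numbered passages and cite them "
                        "like [1]. If they do not contain the answer, say so."},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    # Stateless: each request carries its own grounding context, and
    # nothing is retained between calls.
    return response.choices[0].message.content
```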

Phased approach to building and testing yielded dependability — To further ensure trustworthy results, LSLA developed Juris by following a structured, phased approach over two years. It began with a concept phase focused on clearly identifying the problem, followed by a platform evaluation that compared open-source and commercial solutions. A prototype was then created and demonstrated as a proof of concept.

In addition, internal testing included adversarial exercises, hallucination detection, and rigorous validation of citation reliability. Based on these findings, the team implemented enhancements, such as moving from size-based to semantic chunking, improving the interface, and expanding the set of source materials. Juris is now in pilot preparation and undergoing final refinements before its release to a select group of subject matter experts.
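One of those validation steps, checking citation reliability, is easy to picture. The minimal check below assumes the [n] citation convention from the earlier sketches rather than anything Juris actually does; it simply verifies that every citation in an answer points at a passage that was really supplied.

```python
# Hypothetical citation check: flag citations that point at nothing.
import re

def validate_citations(answer: str, passages: list[str]) -> list[str]:
    """Return a list of problems; an empty list means the answer passed."""
    problems = []
    cited = {int(n) for n in re.findall(r"\[(\d+)\]", answer)}
    if not cited:
        problems.append("answer contains no citations")
    for n in sorted(cited):
        if not 1 <= n <= len(passages):
            problems.append(f"citation [{n}] matches no supplied passage")
    return problems

# A fabricated [4] is caught when only two passages were supplied.
assert validate_citations("See [1] and [4].", ["passage one", "passage two"]) \
    == ["citation [4] matches no supplied passage"]
```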

Efficient resourcing and sharing learnings

LSLA’s phased approach to building and testing also ensured that sustainability was built in from the beginning. Indeed, ongoing maintenance is minimal, and Microsoft’s nonprofit Azure credits keep infrastructure costs around $2,000 per year.

The most significant cost was in staff time. Development so far totals roughly 300 staff hours (or about 0.5 full-time equivalent, plus 0.3 FTE over two years). Once Juris enters phase two, which has been funded by a Legal Services Corporation (LSC) technology initiative grant, expected benefits will include faster, more consistent research and reduced workload for frontline and administrative staff, plus a modular framework that others can adapt.

Other legal service organizations facing similar challenges can learn from Juris’s development, testing, and implementation, as well as from other related case studies. Recurring lessons include:

      • beginning with a small, manageable scope
      • inviting end users in from the start, and
      • carving out protected time so staff can innovate alongside daily duties.

Looking ahead, the LSLA team will continue to roll Juris out in phases, while building sister tools. LSLA also plans to share lessons learned through LSC’s AI Peer Learning Labs to help other organizations replicate the model.

Real change at scale, such as this, will only come from collaborating across organizations to share playbooks, pool datasets, and co‑design tools that lift quality while lowering cost. It is only with such partnership and sharing lessons from early adopters of AI that peers can adapt the model and, together, scale solutions that narrow the justice gap.

Angela Tripp, Program Officer for Technology for the Legal Services Corporation, contributed to this article.


You can learn more about the ways legal aid organizations are using advanced technology to better serve individuals as they access the justice system here

Where the algorithm meets the gavel: Appropriate uses of AI in courts /en-us/posts/ai-in-courts/appropriate-use-ai-courts/ Mon, 03 Nov 2025 18:07:51 +0000 https://blogs.thomsonreuters.com/en-us/?p=68289

Key insights:

      • AI use falls on a spectrum — Appropriate AI use hinges on which trial function it touches upon and how much it influences outcomes.

      • AI uses must align with duties — Administrative and preparatory uses should be aligned with lawyers’ duty of competence, with outputs being checked and used within existing ethical rules.

      • Context and timing control admissibility — Courts should assess tools on a case‑by‑case basis, weighing procedural stage, validation and error rates, expertise, and safeguards.


The integration of AI in the legal system is a complex and multifaceted issue, defying simplistic categorizations of right or wrong. Indeed, the application of AI in court is not a binary concept but rather one that exists on a spectrum. The appropriateness of AI use depends on two critical variables: i) which portion of the trial process is being impacted; and ii) the degree of impact that the AI usage has on the outcome.

What matters is not whether AI appears in a case, but which aspect of the trial proceeding the AI in question touches — research, drafting, evidence review, jury selection — and how deeply it may influence outcomes. A document-review algorithm that flags potentially relevant discovery operates at a vastly different point on this spectrum than an AI system that drafts legal arguments or predicts case outcomes.

Low‑impact assistance on routine tasks may be not only permissible but prudent, while high‑impact automation in fact‑finding or credibility assessments can quickly cross ethical or legal lines. Understanding this spectrum — and where a specific use case falls along it — is essential for maintaining ethical standards, preserving the integrity of our judicial system, and serving clients competently in an era in which technology is reshaping every corner of legal practice. For professionals navigating this terrain, it is important to consider where, how much, and with what guardrails AI is utilized.

Administrative applications and professional competence

Administrative applications of AI have gained widespread acceptance within the legal community. The Honorable Erica Yew of the Santa Clara County Superior Court observes that many preliminary research platforms now incorporate AI-enhanced features as standard functionality. These features have become so seamlessly integrated into legal practice that their use is not only appropriate but often expected, requiring little deliberation or justification from practitioners.

Dr. Maura R. Grossman, JD, PhD, a Research Professor in the School of Computer Science at the University of Waterloo, dives deeper into this conversation by discussing the use of AI to provide summaries and chronologies as part of case preparation. She contends that, while such output still requires review by human lawyers, it is an appropriate use of AI.

Further, the deployment of AI tools in administrative contexts aligns directly with attorneys’ fundamental duty of competence. Judge Yew articulates this connection with clarity, noting that AI should be viewed through the same lens as previous technological innovations. “When looking at rules for appropriate AI, it is akin to the rules for social media or even stationery at their inception — they are all tools,” explains Judge Yew. “We need to make sure we know how to use them and use them within the rules already set for lawyers and judges.”

This perspective underscores a critical principle: AI represents an evolution in legal tool use rather than a departure from established professional standards. Just as attorneys were expected to master word processors and legal databases in previous decades, today’s competent practitioners must understand how to leverage AI effectively while adhering to existing ethical frameworks. The emphasis, naturally, remains on validity, reliability, efficiency, fairness, and compliance with professional responsibilities — all objectives that AI, when properly employed, can significantly advance. That is at the heart of the discussion around appropriate use of AI in legal settings.

Evaluating the impact: A spectrum of appropriateness

While AI has demonstrated clear value in streamlining administrative functions and preliminary case management — indeed, many practitioners increasingly expect its judicious application in these contexts — the deployment of AI avatars in judicial proceedings demands scrutiny. In fact, the appropriateness of such technology exists along a spectrum, contingent upon both the intended application and the procedural stage at which it is employed.

Two recent cases illuminate the boundaries of this spectrum. In an Arizona case, a court authorized the use of an AI-generated avatar — in this case, an AI-generated video version of a deceased victim — during the victim-impact statement portion of sentencing proceedings. Conversely, a New York appellate court categorically rejected the use of an AI avatar for oral argument presentation, deeming it fundamentally inappropriate for that forum under the circumstances presented.

While multiple variables distinguish these cases, a critical differentiator emerges: the procedural juncture at which the avatar would function. In these cases, this temporal dimension — when in the judicial process the AI intervention occurs — proves instrumental in determining whether such technology enhances or undermines the integrity of the legal proceedings.

The gray area in practice

A Florida criminal case saw a judge use AI-enabled virtual reality (VR) goggles to review evidence — an unprecedented move that highlights the challenges of integrating advanced technology into courtrooms. Supporters say immersive tools such as VR can clarify crime scenes and improve fact-finding; critics counter that AI reconstruction may be inaccurate or biased and may unduly shape memory.

Again, the core issue is context. Admissibility and weight cannot be resolved by blanket rules. Courts must assess the specific technology, its validation and error rates, the expertise behind the reconstruction, and its safeguards against manipulation. Only rigorous, case-by-case scrutiny can balance innovation with the justice system’s bedrock commitment to fairness.

Indeed, this case-by-case framework becomes all the more essential when we consider how profoundly AI has transformed the nature of evidence itself. The Florida VR case exemplifies a broader epistemological challenge facing modern courts: technology no longer simply captures reality; rather, it reconstructs, interprets, and, in some instances, generates it. Where traditional evidentiary rules presumed a clear distinction between genuine documentation and fabrication, AI-enabled tools occupy an ambiguous middle ground that resists categorical treatment.

It is precisely this collapse of binary certainty that scholars like Dr. Grossman have identified as the defining evidentiary dilemma of our era, one that demands not merely procedural adjustments but a fundamental reconceptualization of how courts evaluate truth.

Dr. Grossman notes that this marks a critical shift in evidentiary standards for the digital age. Traditionally, photographic and video evidence was evaluated through a binary lens — either authentic or inauthentic. Today, however, AI has fundamentally altered this calculus because content can be altered in ways that range from simple noise removal to substantive changes.

Truth now exists on a spectrum, Dr. Grossman observes, requiring courts to navigate unprecedented gradations of authenticity when determining admissibility.

Into the future of courts

As AI continues its inexorable integration into legal practice, the profession must resist the temptation of categorical acceptance or rejection, instead embracing a nuanced, context-sensitive approach that evaluates each application against the twin metrics of which procedural stage the AI is used in and what its impact is on the finder of fact’s decision.

The future of justice depends not on whether we permit AI in our courtrooms, but on our collective wisdom in distinguishing between AI-driven tools that enhance human judgment and those that threaten to supplant it. This critical distinction demands ongoing vigilance, rigorous validation, and an unwavering commitment to the foundational principles of fairness and accuracy that have long anchored our legal system.


You can find out more about the appropriate use of AI in legal proceedings in the Thomson Reuters Institute’s AI in Courts Resource Center

Mexico’s judicial elections 2025: A step toward a more accessible justice system? /en-us/posts/government/mexico-judicial-elections-2025/ Tue, 02 Sep 2025 16:09:41 +0000 https://blogs.thomsonreuters.com/en-us/?p=67421

Key takeaways:

      • Historic election with low turnout — Mexican citizens elected judicial authorities for the first time, but low voter participation shows limited civic engagement.

      • Controversial process — Candidate accusations and high campaign spending raised concerns about transparency and accountability.

      • Responsibility lies with elected ministers — Citizens pushed for change, and now it’s up to the new officials to build a fair and independent justice system.


On June 1, Mexico experienced an unprecedented event — the country’s first-ever elections for the Judicial Branch, in which 881 judicial positions were filled, including ministers, judges, and magistrates. This historic process is a direct result of the judicial reform enacted in September 2024, aimed at transforming the Mexican judicial system into one that is more efficient, humane, austere, and free from corruption.

Campaigns began on March 30, 2025 — no public or private funding was allowed, and promotion was limited to forums and organic social media. To encourage informed voting, the National Electoral Institute (INE) launched Conóceles (Get to Know Them), a digital platform that allowed citizens to review candidate profiles in a transparent and accessible way.

Rising controversies and low turnout

Throughout the electoral process, several controversies emerged that cast doubt on the legitimacy of certain candidates. Accusations were raised against some candidates for alleged ties to drug cartels and cases of sexual abuse. Nevertheless, their candidacies were approved by committees composed of members from all three branches of government, a decision that raises serious questions about the rigor and transparency of the selection process itself.

There were also reports of irregularities, particularly involving candidates for ministerial positions, which drew criticism from observers and citizens. Another controversial issue was the increase in the campaign spending cap for national positions, such as ministers on the Supreme Court of Justice of the Nation (SCJN).

Despite the budget cut approved by the Chamber of Deputies in December 2024, the INE carried out the elections normally. Voters received six ballots of different colors, each corresponding to a judicial category. However, voter turnout was low, ranging between just 12.57% and 13.32% of the eligible population.

Even so, Claudia Sheinbaum, President of Mexico, emphasized that “nearly 13 million Mexican men and women participated in the Judicial Branch election, more than double the turnout in the vote on the trial of former presidents.”

Who will shape the future of Mexican law?

The SCJN, the nation’s highest authority in the Judicial Branch, will be composed of the following ministers, who will take office on September 1:

      • Hugo Aguilar Ortiz — A Mixtec lawyer and indigenous rights advocate; elected president of the Supreme Court
      • Lenia Batres — Promotes social justice and austerity
      • Yasmín Esquivel — Supreme Court justice since 2019; faced plagiarism allegations
      • Loretta Ortiz — Supports decentralized justice and socially focused rulings
      • María Estela Ríos — Labor law expert
      • Giovanni Figueroa — International academic and human rights defender
      • Irving Espinosa — Magistrate with experience in Mexico City’s government
      • Arístides Guerrero — Proposes a Mobile Court and rulings in indigenous languages
      • Sara Herrerías — Human rights prosecutor

This group will be responsible for interpreting the Mexican Constitution and ensuring respect for human rights in the country, within a context of institutional transformation.

What’s next for Mexico’s legal landscape?

Despite this democratic milestone, access to justice in Mexico remains limited. In fact, the National Survey on Victimization and Perception of Public Safety showed that the vast majority of crimes in the country go unreported or uninvestigated, according to Mexico’s National Institute of Statistics and Geography (INEGI), the country’s main government institution in charge of statistics and census data.

Further, public confidence in judicial institutions remains low, reflecting a deep mistrust of the judicial system’s ability to address and resolve cases affecting citizens. In budgetary terms, Mexico is among the countries that spend the least on justice, according to the Organisation for Economic Co-operation and Development (OECD).

Overall, the judicial reform being undertaken includes a series of proposals aimed at strengthening the country’s justice system and ensuring its independence, efficiency, and proximity to citizens. Key measures include:

      • Budget allocation — At least 2% of federal and state budgets are earmarked for judicial branches
      • Collective justice access — Stronger mechanisms will be developed for group lawsuits and shared rights
      • Ruling enforcement — Clear frameworks for executing judicial decisions will be established
      • Feminicide classification — Standardized recognition and investigation protocols nationwide will be enacted

These proposals are outlined in a document prepared by the SCJN and aim to address the main challenges facing Mexico’s judicial system.

Although the electoral process was framed as a democratic step forward, critics warn it may politicize the Judicial Branch. Some argue the reform could enable one party to control all three branches of government, risking legal uncertainty and weakening transparency. Indeed, the election’s low turnout, candidate allegations, and rising campaign costs raised concerns about the legitimacy and effectiveness of this new judicial election model.

The 2025 judicial election marks a milestone in Mexico’s democratic history, but it also presents profound challenges. The key will be to monitor the implementation of the reform, strengthen judicial independence, and ensure that new ministers and judges act with ethics, professionalism, and social commitment.

Critically, Mexico needs a justice system that is not only accessible but also effective, empathetic, and trustworthy. Despite limited civic participation, the first step toward judicial transformation has been taken. Now, the true responsibility lies with those elected to lead and deliver meaningful change.


You can find out more about the challenges faced by courts here
