Best Practices in Courts & Administration Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/best-practices-in-courts-administration/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Looking beyond the bench at the importance of judicial well-being
/en-us/posts/government/beyond-the-bench/ (Wed, 15 Apr 2026)

Key insights:

      • Well-being is a professional necessity — Judges experience decision fatigue, emotional stress, and personal biases that can affect their rulings, making mental and physical well-being a judicial duty.

      • Community engagement builds better judgment — Staying connected to the communities they serve helps judges develop empathy, recognize bias, and deliver fairer decisions.

      • Diverse experience strengthens the judiciary — Varied backgrounds and ongoing education in areas like restorative justice make courts more responsive, inclusive, and publicly trusted.


Judges play a unique and essential role in society. They are tasked with interpreting the law, resolving disputes, and upholding justice — often under intense scrutiny and pressure. Their decisions shape lives, influence public policy, and reinforce the rule of law.

Indeed, judicial rulings may be the most visible part of the job, but they are not the only measure of a judge’s effectiveness — or of the judiciary’s overall health.

To truly understand and support a robust legal system, it is vital to look beyond the courtroom and examine the broader context in which judges operate. A judiciary that is fair, empathetic, and resilient depends not only on legal expertise, but also on balance, self-awareness, and active engagement with the communities it serves.

The weight of the robe & the value of connection

Despite the solemnity of the judicial office, judges carry personal experiences, cognitive biases, and emotional responses. The weight of responsibility in adjudicating complex, often emotionally charged cases can lead to stress, burnout, and decision fatigue. Research has shown that judicial decisions can be influenced by factors such as time of day, caseload volume, and even personal well-being.

When judges prioritize their own well-being through physical health, mental resilience, and time away from the bench, they are better equipped to render fair and consistent decisions. Judicial wellness is not a personal luxury; rather, it is a professional imperative.

Equally important is the role of community engagement. The law does not exist in a vacuum but is shaped by social norms, economic realities, and cultural shifts. Judges who remain isolated from the communities that are affected by their rulings risk losing touch with the lived experiences of the people before them.




Engagement with the public helps judges better understand how the law operates in, and impacts, people’s lives. It also builds the empathy and contextual awareness needed for interpreting statutes or imposing sentences.

For example, a judge who volunteers with youth programs or participates in community forums on public safety may develop a more nuanced understanding of cases involving juvenile offenders or policing practices. Similarly, a judge who attends local cultural events or listens to community leaders may be better positioned to recognize implicit biases or systemic inequities that may be inherent in the justice system.

Community involvement also strengthens public trust. When citizens see judges as accessible and engaged, rather than distant or aloof, confidence in the judiciary increases. And these ideas of transparency and connection are key to maintaining citizens’ trust in the courts.

These themes are explored in more depth in the Thomson Reuters Institute’s video series, Beyond the Bench. In one episode, Associate Justice Tanya R. Kennedy shares her experience educating youth, participating in civic organizations, and leading legal reform initiatives. The episode also highlights how service beyond judicial duties enhances judges’ decision-making and strengthens community ties.

Another episode of the series examines the personal and professional challenges faced by judges and attorneys alike. It features a candid interview with Judge Mark Pfiffer, who emphasizes the importance of mindfulness, peer support, and institutional policies that promote mental health and sustainable work practices.

A judiciary that reflects society

The same principle applies at the institutional level. A judiciary is strongest when it reflects the range of experiences and perspectives present in the society it serves.

Beyond individual judges, the judiciary can benefit from diversity and inclusion. A bench that reflects the full spectrum of society is more likely to deliver balanced and equitable justice. But diversity is not just about representation — it’s also about perspective.

Judges who have worked in public defense, civil rights advocacy, or rural legal services bring different insights to the bench than those who have spent their careers in corporate law or prosecution. These varied experiences enrich judicial deliberation and help ensure that decisions are informed by a broad understanding of justice.

Encouraging judges and court personnel to engage in lifelong learning, mentorship, and cross-sector collaboration further strengthens the judiciary. Programs that support judicial education on topics like implicit bias, trauma-informed practices, or restorative justice are essential to modern, responsive courts.

Improving judges’ well-being

The quality of justice depends not only on what happens in the courtroom, of course, but on what happens outside of it. Judges who maintain personal balance, engage with their communities, and remain open to diverse perspectives are better equipped to serve the public good.

Legal professionals, court administrators, and policymakers should support the kinds of initiatives that promote judicial wellness, community outreach, and professional development. By fostering a judiciary that looks beyond the bench, we ensure a justice system that is not only legally sound, but also humane, inclusive, and trusted.

In the end, judges and the justice they mete out are not defined by court rulings alone; they also depend on relationships, context, and public trust. Recognizing that reality is essential to preserving the well-being of the judiciary and the integrity of the law.


The “Beyond the Bench” video series is available online.

Scaling Justice: AI is scaling faster than justice, revealing a dangerous governance gap
/en-us/posts/ai-in-courts/scaling-justice-governance-gap/ (Mon, 13 Apr 2026)

Key takeaways:

      • AI frameworks need to keep up with implementation — While AI governance frameworks are being developed and enacted globally, their effectiveness depends on enforceable mechanisms within domestic justice systems.

      • Access to justice is essential for trustworthy AI regulation — Rights and protections are only meaningful if individuals can understand, challenge, and seek remedies for AI-driven decisions. Without operational access, governance frameworks risk remaining theoretical.

      • People-centered justice and human rights must anchor AI governance — Embedding human rights standards and ensuring equal access to justice in AI regulation strengthens public trust, accountability, and the credibility of both public institutions and private companies.


AI governance is accelerating across global, national, and local levels. As public investment in AI infrastructure expands, new oversight bodies are emerging to assess safety, risk, and accountability. The global policy conversation has shifted from principles to the implementation of meaningful guardrails and AI governance frameworks, which legislators are now drafting and enacting.

These developments reflect growing recognition that AI systems demand structured oversight and a shift from voluntary safeguards and standards to institutionalized governance. One critical dimension remains underdeveloped, however: how do these frameworks function in practice? Are they enforceable? Do they provide accountability? Do they ensure equal access?

AI governance will not succeed on the strength of international declarations or regulatory design alone; rather, domestic justice systems will determine whether it works. At this intersection, the connection between AI governance and access to justice becomes real.

In early February, leaders across government, the legal sector, international organizations, industry, and civil society convened for an expert discussion. The following reflections attempt to build on that dialogue and its urgency.

From principles to enforcement

Over the past decade, AI governance has evolved from hypothetical ethical guidelines to voluntary commitments, binding regulatory frameworks, and risk-based approaches. Despite these advances, however, many past attempts to provide structure and governance have been quickly outpaced by the technology and are insufficient without enforcement mechanisms. As Anoush Rima Tatevossian of The Future Society observed: “The judicial community should have a role to play not only in shaping policies, but in how they are implemented.”

Frameworks establish expectations, while courts and dispute resolution mechanisms interpret rules, test rights, evaluate harm, assign responsibility, and determine remedies. If individuals are not empowered to safeguard their rights and cannot access these mechanisms, governance frameworks remain theoretical or are casually ignored.

This challenge reflects a broader structural constraint. Even without AI, legal systems struggle to meet demand. In the United States alone, 92% of people do not receive the help they need in accessing their rights in the justice system. Introducing AI into this environment without strengthening access can risk widening, rather than narrowing, the justice gap.




Justice systems serve as the operational core of AI governance. By inserting the rule of law into unregulated areas, they provide the infrastructure that enables accountability by interpreting regulatory provisions in specific cases, assessing whether AI-related harms violate legal standards, allocating responsibility across public and private actors, and providing accessible pathways for redress.

These frameworks also generate critical feedback. Disputes involving AI systems expose gaps in transparency, fairness, and accountability. Legal professionals see where governance frameworks first break down in real-world conditions, often long before policymakers do. As a result, these disputes function as an early signal of policy effectiveness and rights protections.

Importantly, AI governance does not require entirely new legal foundations. Human rights frameworks already provide standards for legality, non-discrimination, due process, and access to remedy, and these standards apply directly to AI-enabled decision-making. “AI can assist judges but must never replace human judgment, accountability, or due process,” said Kate Fox Principi, Lead on the Administration of Justice at the United Nations (UN) Office of the High Commissioner for Human Rights (OHCHR), during the February panel.

Clearly, rights are only meaningful when individuals can exercise them — this constraint is not conceptual, it’s operational. Systems must be understandable, affordable, and responsive, and institutions should be capable of evaluating complex, technology-enabled disputes.

Trust, markets & accountability

Governance frameworks that do not account for these dynamics risk entrenching inequities rather than mitigating them. An individual’s ability to understand, challenge, and seek a remedy for automated decisions determines whether governance is credible. A people-centered justice approach asks whether individuals can meaningfully engage with the system, not just whether rules exist. For example, women face documented barriers to accessing justice in virtually every jurisdiction, and AI systems trained on biased data can replicate or amplify existing disparities in employment, financial services, healthcare, and criminal justice.

“Institutional agreement rings hollow when billions of people experience governance as remote, technocratic, and unresponsive to their actual lives,” said Alfredo Pizarro of the Permanent Mission of Costa Rica to the UN. “People-centered justice becomes essential.”

AI systems already shape outcomes across employment, financial services, housing, and justice. Entrepreneurs, law schools, courts, and legal services organizations are building AI-enabled tools that help people navigate legal processes and assert their rights more effectively. Governance design will determine whether these tools spread access to justice or introduce new barriers.

Private companies play a central role in developing and deploying AI systems. Their products shape economic and social outcomes at scale. For them, trust is not abstract; it is a success metric. “Innovation depends on trust,” explained Iain Levine, formerly of Meta’s Human Rights Policy Team. “Without trust, products will not be adopted.” And trust, in turn, depends on enforceability and equal access to remedy.

AI governance will succeed or fail based on access

As Pizarro also noted, justice provides “normative continuity across technological rupture.” Indeed, these principles already exist within international human rights law and people-centered justice; although they precede the advent of autonomous systems, they provide standards for evaluating discrimination, surveillance, and procedural fairness, and remain durable as new challenges to upholding justice and the rule of law emerge.

People-centered justice was not designed for legal systems addressing AI-related harms, but its outcome-driven orientation remains durable as new justice problems emerge.

The current stage presents an opportunity to align AI governance with access to justice from the outset. Beyond well-drafted rules, we need systems that people can use. And that means that any effective governance requires coordination between policymakers, legal professionals, and the public.


You can find other installments of our Scaling Justice blog series here

How AI-powered access to justice is impacting unauthorized practice of law regulations
/en-us/posts/government/ai-impacts-unauthorized-practice-of-law/ (Mon, 02 Feb 2026)

Key insights:

      • Courts and the legal profession need to show leadership — Given their specialized knowledge of the needs of litigants and of the court system, courts should take the lead in defining the unauthorized practice of law.

      • 3 paths forward to workable regulatory solutions — Recent discussions and research around this subject offered three paths toward modernizing UPL definitions.

      • Uncertainty harms users and innovation — Fear of UPL can drive self-censorship and market exits, even as litigants continue to use publicly available GenAI tools.


Today, many Americans experience legal issues but lack proper access to legal representation. At the same time, AI tools capable of providing legal information are rapidly evolving and already in widespread use. Between these two facts lies a critical definitional problem that courts and state bars must urgently address: how to define the unauthorized practice of law (UPL) in a way that doesn’t further curtail access to justice.

This discussion is not theoretical. It directly determines whether AI-based legal services can operate, how they should be regulated, and ultimately whether AI can help unrepresented or self-represented litigants gain meaningful access to justice. This issue was explored in more depth during a recent webinar hosted as part of a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

The need for clear definitions

During the webinar, Alaska Supreme Court Administrative Director Stacey Marz noted that “there is no uniform definition of the practice of law” and that UPL regulations represent “a real varied continuum of scope and clarity.” This variation makes compliance challenging for technology providers, especially as they navigate 50 different state standards.

UPL generally occurs when someone “not licensed as an attorney attempts to represent or perform legal work on behalf of another person,” explained Cathy Cunningham, Senior Specialist Legal Editor at ¶¶ŇőłÉÄę Practical Law.

Marz added that such legal advice typically involves “applying the law, rules, principles, and processes to specific facts and circumstances of that individual client — and then recommending a course of action.”

The challenge, however, is that AI can appear to do exactly this, yet the regulatory framework remains unclear about whether and how this should be permitted and how consumers can be protected.

3 paths forward

During the recent webinar, panelists discussed several different approaches to UPL regulation, noting that recent research has outlined three approaches that state courts could take:

Path 1: Explicitly enabling tools with regulatory framework — UPL statutes can be revisited to explicitly allow purpose-built AI legal tools to operate without threat of UPL enforcement, provided they meet certain requirements. Prof. Dyane O’Leary, Director of Legal Innovation & Technology at Suffolk University, emphasized that consumer-facing AI legal tools are already being used for tailored legal advice, arguing that some oversight is better than “just letting these tools continue to operate and hoping consumers aren’t harmed by them.”

Path 2: Creating regulatory sandboxes — Courts could establish temporary experimental zones in which AI legal service providers can operate under controlled conditions while regulators gather data about efficacy and safety through feedback and research, with an eye toward informing future regulation reform.

Path 3: Narrowing UPL to human conduct — Clarifying that existing UPL rules apply only to humans who hold themselves out as attorneys in tribunals or courtrooms, or who create legal documents under the guise of being a human attorney, would effectively leave AI-powered legal tools clearly outside UPL restrictions and open up a “new pocket of the free market” for consumers.

Utah Courts Self-Help Center Director Nathanael Player referenced Utah Supreme Court Standing Order Number 15, which established their regulatory sandbox using a fundamentally different standard: Not whether services match what lawyers provide, but rather “is this better than the absolute nothing that people currently have available to them?”

Prof. O’Leary reframed the comparison itself, suggesting that instead of comparing consumers who use AI tools to consumers with an attorney, the framework should be “consumers that use legal AI tools, and maybe consumers that otherwise have no support whatsoever.”

The personhood puzzle

“AI, at this time, does not have legal personhood status,” said Practical Law’s Cunningham. “So, AI can’t commit unauthorized practice of law because AI is not a person.”

However, Player pushed back on this reasoning, clarifying that “AI does have a corporate personhood. There is a corporation that made the AI, [and] the corporation providing that does have corporate personhood.” He added, however, that “it’s not clear, I don’t think we know whether or not there is… some sort of consequence for the provision of ChatGPT providing legal services.”


You can view the full webinar here


This ambiguity creates what might be called the personhood gap, a zone of legal uncertainty with serious consequences for both innovation and access to justice.

Colin Rule, CEO of the online dispute resolution platform ODR.com, explained that “one of the major impacts of UPL is, actually, self-censorship.” After receiving a UPL letter from a state bar years ago, he immediately exited that market. This pattern repeats across the legal tech landscape, leaving companies hesitant to innovate.

Rule’s bottom line resonates with anyone trying to build solutions in this space. “As a solution provider, what I want is guidance,” Rule explained. “Clarity is what I need most… that’s my number one priority.”

Moving forward: Clarity over perfection

The legal profession needs to lead on this issue, and that means state bars and state supreme courts must take action now. The tools are already in use, and the question is not whether AI will play a role in legal services, but rather whether that role will be defined by thoughtful regulation or by default.

The solution is for the judiciary to provide clear guidance on what services can be offered, by whom, and under what conditions. To do that, courts must first acknowledge that for most people, the choice is not between an AI tool and a lawyer but between an AI tool and nothing. Given that, states must walk a path that both encourages innovation and protects consumers.

To this end, legal professionals and courts should experiment with these tools, understand their trajectory as well as their current limitations, and work collaboratively with developers to create frameworks that prioritize consumer protection without stifling innovation that could genuinely expand access to justice.


You can find out more about how courts and legal professionals are dealing with the unauthorized practice of law here

Between hype and fear: Why I have not issued a standing order on AI
/en-us/posts/ai-in-courts/standing-order-on-ai/ (Thu, 15 Jan 2026)

Key insights:

      • The legal system should avoid both overhyping and over-fearing AI — Instead, it should adopt a balanced approach that emphasizes careful, deliberate engagement and responsible experimentation.

      • Mandatory AI disclosure or certification orders do not necessarily improve the reliability of legal filings — In addition, they run the risk of creating confusion, false assurance, and additional hurdles, especially for smaller law firms and self-represented litigants.

      • Rather than imposing a restrictive order, the author issued guidance — This guidance is designed to promote responsible AI use, focusing on verification and accountability while allowing space for lawyers to engage with AI as a tool for augmentation rather than automation.


The legal system is being pulled in two directions when it comes to AI: On one side is overconfidence, the idea that AI will quickly solve legal work by automating it; and on the other side, fear — the feeling that AI is so risky that the safest response is to restrict it, discourage its use, or fence it off with new rules.

Both reactions are understandable, but neither is getting us where we need to go.

In a recent interview, Erik Brynjolfsson, Director of the Stanford Digital Economy Lab and a lead voice at the Stanford Institute for Human-Centered AI, makes arguments that explain why both hype and too much skepticism miss the mark.

First, those caught up in the hype are moving too quickly toward automation. Tools work best when they support people, not when they try to stand in for them. Second, skeptics are overreacting to early stumbles. Early failures do not mean AI is a dead end. More often, they mean institutions are still learning how to use it well.

There is a middle ground. It’s not about rushing ahead, and it’s not about slamming the brakes. It’s about careful but deliberate use while testing tools, learning their limits, and moving forward with intention.

That perspective informs my approach.

Standing orders on AI

After well-publicized AI mistakes, it makes sense to look for something concrete that signals seriousness, and disclosure and certification orders do that. They tell the public and the bar that courts are paying attention. However, I don’t think disclosure does the work people hope it does, and I worry it pulls attention away from things that matter much more. I’ll explain.

Disclosure does not make filings more reliable — Knowing whether a lawyer used AI to help draft a filing does not tell me whether that filing is accurate, complete, or well supported. Long before modern AI entered the picture, courts had to guard against overstated arguments, bad citations, and unsupported claims. Knowing which tools were used to prepare a filing did not make those filings or the tools more reliable then, and it does not make them more reliable now.

Certifications and disclosures may offer false assurance — The spotlight is on hallucinations (AI-generated fake cases or citations), but courts already have ways to identify and address those problems. The more concerning risks are quieter: bias, AI over-reliance, or subtle framing that influences how an argument is presented. I’m also extremely concerned about deepfakes, which are much more difficult to detect. Disclosure about AI use in briefs does not address any of those risks, and it may distract us from the far bigger risks. It also creates a false sense that a filing is more careful or reliable than it actually may be.

Additional orders can add confusion — AI standing orders are growing in number, and they take very different approaches. Some require disclosure, some certifications, some limits, some are outright bans. Definitions vary or are missing altogether. Lawyers can comply, but it takes time and careful reading, and as noted already, it doesn’t necessarily improve the quality of what reaches the court.

Early in my time as a United States Magistrate Judge, I made it a point to seek feedback from the legal community about what made legal practice more difficult than it needed to be. One theme came up repeatedly — keeping track of multiple, overlapping judicial practice standards was tough. In response, I worked with my colleagues to consolidate standards into a single, uniform set. I see a similar risk emerging with AI standing orders. Well-intentioned but divergent approaches can splinter practices and create new hurdles, particularly for smaller law firms and self-represented litigants. I don’t want to issue a standing order that adds another layer of complexity without meaningfully improving the quality of what comes before me.

The rules already cover the landscape — I already have tools to deal with inaccurate or misleading filings. Lawyers are responsible for the work they submit, and Rule 11 doesn’t stop working because AI was involved. If something is wrong or misleading, I already have ways to address it.

Certification or disclosure could be misinterpreted as discouraging AI use, and I worry about who gets left out — When new tools are treated as suspect or off-limits, those with the most resources find ways to keep moving forward. However, smaller firms and individual litigants fall further behind. A system that chills responsible experimentation risks widening access gaps instead of narrowing them. In my view, everyone should be exploring ways to, as Brynjolfsson says, “augment” themselves. So long as we remain accountable for the result, augmentation is how lawyers, judges, and other professionals will retain their value in a legal system that is becoming more AI-integrated every day.

Rather than issue a standing order that limits AI use or requires certification or disclosure, I offer simple guidance: Check your work, protect confidential information, and take responsibility for what you submit. I published this guidance for those interested in my perspective, but it is deliberately not an order, so as to avoid the concerns described above.

We shouldn’t fear AI — we should shape it

Some warn that AI is coming for the legal profession; however, I’m more optimistic (and perhaps more idealistic).

In my view, the justice system depends on human judgment. Empathy, discretion, humility, moral reasoning, and uncertainty are not bugs in the system; rather, they are an essential part of the program. If we want to preserve human judgment in the age of AI, we must be involved in how AI is used. And we can’t do that from a distance. We have to engage with AI, understand its limits, and model responsible use.

Used carefully, AI can help judges:

      • organize large records,
      • identify gaps or inconsistencies,
      • spot issues that need a closer look,
      • identify and locate key information,
      • translate legal jargon to help self-represented litigants better understand what is being asked of them, and
      • reduce administrative drag so more of a judge’s time is spent on decision-making.

This kind of use does not replace us; rather, it supports us. It augments us so we do our work as well as we can, help as many people as possible, and still keep human judgment at the center of everything.

Why this moment matters

The AI conversation in law will remain noisy for a while. Some legal professionals will promise too much. Others will warn against everything. The better path is in the middle — engage, test, verify, and adjust.

As a recent Newsweek article suggests, this is a watershed moment, not because AI will decide the future of our institutions, but because we will. The choices we make now will shape what AI does in the justice system, and just as importantly, what it does not do.

We should not be afraid of AI. We should help shape how it is used so it strengthens, rather than replaces, the human judgment at the heart of the legal justice system.


You can find out more about how courts and the legal system are managing AI here

Generative AI in legal: A risk-based framework for courts
/en-us/posts/ai-in-courts/genai-risk-based-framework/ (Fri, 21 Nov 2025)

Key highlights:

      • Risk varies by workflow and context — Practitioners should apply risk ratings based on workflow and context, such as low for productivity, moderate for research, moderate to high for drafting and public‑facing tools, and high for decision-support.

      • Courts need their own developed benchmarks — Courts should develop and regularly review their own independent benchmarks and evaluation datasets instead of relying solely on vendor claims, because vendors may optimize systems for known tests.

      • Need for benchmarking to detect drift, degradation, and bias — Continuous, rigorous benchmarking of AI models is essential for courts and legal professionals to maintain confidence in these systems, since both the law and AI models change over time.


AI is not a monolithic technology, and a risk-based assessment process is needed when using it. Indeed, courts and legal professionals must scale their scrutiny to match risk levels.

This approach — which balances innovation with accountability, along with other essential best practices — is detailed in a recent publication, the Key Considerations document.

In a recent webinar, one of the co-authors explained the document’s purpose: “The central aim of what we were thinking about in these best practices is to give courts and legal professionals a principle-based architecture when you’re thinking about the adoption of GenAI tools.”

Risk and human judgment serve as central elements

What is unique about this framework is that it categorizes risk based on key workflow actions of lawyering, for example:

      • Productivity tools carry minimal to moderate risk
      • Research tools are assigned moderate risk
      • Drafting tools range from moderate to high risk
      • Public-facing tools carry moderate to high risk
      • Decision-support tools pose high risk

The framework holds that risk is dynamic rather than static, and there can be shifts in risk levels based on use cases. For example, a scheduling tool typically poses minimal risk; however, the same tool becomes high risk when used for urgent national security cases. And translation tools can shift from lower risk research support to high-risk decision-support depending on their use.
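To make the dynamic-risk idea concrete, here is a minimal sketch in Python of how a court technology team might encode workflow-based base ratings with context-driven escalation. The category names, escalation rules, and function are hypothetical illustrations, not part of the published framework.

```python
from enum import IntEnum

class Risk(IntEnum):
    MINIMAL = 1
    MODERATE = 2
    HIGH = 3
    UNACCEPTABLE = 4  # red lines, e.g., automated final decisions

# Base ratings mirror the framework's workflow categories.
BASE_RISK = {
    "productivity": Risk.MINIMAL,
    "research": Risk.MODERATE,
    "drafting": Risk.MODERATE,        # moderate to high in practice
    "public_facing": Risk.MODERATE,   # moderate to high in practice
    "decision_support": Risk.HIGH,
}

def assess_risk(workflow: str, context: dict) -> Risk:
    """Start from the workflow's base rating, then escalate for context."""
    risk = BASE_RISK[workflow]
    # Hypothetical escalation: a scheduling (productivity) tool becomes
    # high risk when used in an urgent national-security matter.
    if context.get("urgent_national_security"):
        risk = max(risk, Risk.HIGH)
    # Red line: tools that determine fundamental rights are never acceptable.
    if context.get("determines_fundamental_rights"):
        risk = Risk.UNACCEPTABLE
    return risk

print(assess_risk("productivity", {"urgent_national_security": True}).name)  # HIGH
```

The point of the sketch is that the category is only a starting value; the same tool can land at a very different rating once its context is considered.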

Similarly, when tools range from moderate to high risk, users need to be especially discerning in order to understand the underlying risks — and whether the task should be delegated to AI at all.

“You can’t just rely on categories,” explains Judge Kwon of the IP High Court of Korea. “You need to understand the underlying risks and ask yourself: Would I delegate this task to another person? Am I comfortable delegating it publicly? If the answer is no, then you probably shouldn’t be delegating it to an AI either.”

In addition, there are clear red lines for judicial use: situations in which AI should never be used and is classified as an unacceptable risk. “I believe the clear red line is automated final decisions or AI systems that assess a person’s credibility or determine fundamental rights involving incarceration, housing, family,” says Judge Kwon, adding that fundamental rights require human judgment.




The extent of human judgment also has layers. Greenberg, a Shareholder at Greenberg Traurig, believes that AI for any legal use currently requires human oversight. “The human supervision piece… is utterly critical in the real world of practicing lawyers and law firms,” Greenberg says. “You have to supervise the lawyers in the firm that are using the technology, including young lawyers.”

To help distinguish which type of human oversight is appropriate, the framework in the Key Considerations document defines two forms of such oversight: i) human in the loop, which means active human involvement in decisions; and ii) human on the loop, which means monitoring automated processes and intervening when needed.

In a court setting, a human in the loop might be a law clerk who uses AI to research relevant case law and checks that the references are legally sound, while a human on the loop might be a clerk who monitors an established robotic process that extracts data for the case management system and spot-checks it for accuracy.
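As a rough illustration of that distinction, the sketch below contrasts the two oversight patterns in code. The function and helper names are hypothetical, and the "human checks" are stand-ins for review that would happen outside any program.

```python
import random

def clerk_verifies(item: str) -> bool:
    """Stand-in for a human check; real review happens outside any program."""
    return "unverified" not in item

def auto_extract(record: str) -> str:
    """Stand-in for a robotic data-extraction step."""
    return record.strip().upper()

def human_in_the_loop(citations: list[str]) -> list[str]:
    # Human IN the loop: a person reviews every AI suggestion before use.
    return [c for c in citations if clerk_verifies(c)]

def human_on_the_loop(records: list[str], sample_rate: float = 0.2) -> list[str]:
    # Human ON the loop: the automated process runs unattended; a person
    # spot-checks a sample and intervenes only when a check fails.
    extracted = [auto_extract(r) for r in records]
    sample = random.sample(extracted, max(1, int(len(extracted) * sample_rate)))
    if not all(clerk_verifies(item) for item in sample):
        print("Spot-check failed; escalating to a supervisor")
    return extracted

print(human_in_the_loop(["Smith v. Jones (2019)", "unverified citation"]))
print(human_on_the_loop([" case 101 filing ", " case 102 filing "]))
```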

Practical guidance for courts

In addition to judges considering the risk level of AI tools, Judge Kwon, Greenberg, and Carpenter noted the importance of technical AI competence as part of lawyers’ and judges’ ethical duty, especially around verification, transparency, and independent benchmarks as part of accountability, as well as the need for understandable documentation to maintain public trust. Reinforcing the latter point, a Director in Government Practice for Thomson Reuters Practical Law states: “It’s very vital, especially as we usher in the age of AI, that the public be informed as much as they can be about how that decision-making process is taking place.”

The three panelists also highlighted additional guidance on the criticality of benchmarking, including:

      • Court-developed benchmarks prevent overreliance on vendor data — Courts should develop their own benchmarks and independent evaluation datasets rather than relying entirely on vendor claims, and they should review evaluation scenarios regularly. Vendors may optimize their systems for known tests, which leads to overfitting, in which a model learns patterns specific to its training data so well that it performs poorly on new, unseen data. This gives a misleading impression of reliability.
      • Ongoing rigorous benchmarking to detect model drift & degradation — To build confidence in AI models, courts and legal professionals must approach AI model evaluation with rigor and ongoing vigilance. Continuous benchmarking is essential, and it cannot be a one-time process because the law evolves constantly and precedents shift. In addition, AI models themselves update regularly, and courts need to monitor performance over time to detect AI degradation or bias drift.
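As a minimal sketch of what such monitoring could look like, assuming a court-curated evaluation set and a simple accuracy metric (both hypothetical), a benchmarking loop might compare each model version against a baseline and flag degradation:

```python
def benchmark(model, eval_set) -> float:
    """Score a model against a court-maintained evaluation dataset."""
    correct = sum(1 for question, expected in eval_set if model(question) == expected)
    return correct / len(eval_set)

def check_for_drift(model, eval_set, baseline: float, tolerance: float = 0.05) -> bool:
    """Re-run the benchmark and flag the model if accuracy falls below baseline."""
    score = benchmark(model, eval_set)
    if score < baseline - tolerance:
        print(f"Possible drift: {score:.0%} vs. baseline {baseline:.0%}; review the model")
        return True
    return False

# Hypothetical court-curated items, kept out of vendors' hands so systems
# cannot be optimized for the known test (the overfitting problem above).
EVAL_SET = [("Is cite A still good law?", "yes"), ("Does rule B apply here?", "no")]
deployed_model = lambda question: "yes"  # stand-in for the AI system under review

check_for_drift(deployed_model, EVAL_SET, baseline=1.0)  # prints a drift warning
```

Running the same court-owned evaluation set on every model update, and whenever the law materially changes, is what turns benchmarking from a one-time procurement check into the continuous process the panelists describe.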

Adopting a thoughtful, risk-informed approach to GenAI in legal practice and courts will help realize its benefits for efficiency and access to justice while protecting ethical obligations, due process, and public trust in the legal system.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here

Upskilling court staff: A training plan blueprint for an AI-powered future
/en-us/posts/ai-in-courts/upskilling-court-staff/ (Fri, 07 Nov 2025)

Key takeaways:

      • Establish core AI competencies for all court personnel — Every court staff member needs foundational skills in data literacy, critical evaluation of AI outputs, ethical and legal knowledge, human-AI teaming, and cybersecurity.

      • Create role-specific training pathways — Judges need skills in assessing AI evidence admissibility, while administrators require workflow integration expertise.

      • Implement comprehensive change management from the start — Success depends on strong leadership commitment, clear communication about AI’s purpose and boundaries, and stakeholder engagement through feedback loops and user councils.


With AI’s growing potential to improve the nation’s court operations, the success of this advanced technology depends upon the readiness of those who use it. Unfortunately, many courts lack comprehensive strategies to prepare their workforce for this digital transformation. Only by taking a systematic approach to building AI literacy across all court staff — ensuring that judges, clerks, court administrators, and support staff possess the knowledge, skills, and ethical framework necessary to take advantage of AI’s potential — can courts expect to see the fruits of these efforts.

Defining the core competencies for all court staff

The first step to embedding AI literacy into the courts is establishing clear competency frameworks and ensuring consistent training outcomes across all court personnel. Core competencies should anchor every role’s development path, highlighting major areas of knowledge, including:

      • Data literacy — This encompasses understanding data sources and applying quality assessment, bias detection, and lineage tracking to enable courts to use data-driven insights.
      • Critical evaluation of AI outputs — This involves applying verification protocols, recognizing system limitations and error patterns, and interpreting confidence measures to maintain judicial accuracy.
      • Ethical and legal knowledge — This covers due process requirements, transparency obligations, explainability standards, privacy protection, and bias mitigation strategies that preserve justice system integrity.
      • Human-AI teaming skills — This involves knowing when to trust AI recommendations versus defer to human oversight, plus the ability to conduct proper documentation and audit-trail maintenance for accountability.
      • Cybersecurity — This ensures secure handling of sensitive case data, AI prompts, and system outputs.

Establishing role-specific AI competencies

To successfully integrate AI into workflows, as a next step courts must establish clear, role-specific AI literacy requirements tailored to each position’s unique responsibilities. For example, judges need competencies in assessing AI-derived evidence admissibility, applying appropriate reliance thresholds for AI outputs, and supervising AI writing aids while maintaining personal accountability for judicial reasoning.

Likewise, court clerks require practical training in AI-powered scheduling, document management, and data analysis tools, while court administrators must develop skills in workflow integration, data quality stewardship, and vendor evaluation against security and bias criteria. Further, IT personnel need advanced capabilities in AI deployment, maintenance, and cybersecurity to support these technologies effectively.




Courts should also create structured AI literacy pathways that define required competency levels at specific career milestones, from initial hiring to promotion thresholds. This milestone-based approach enables employees to develop appropriate AI skills progressively as both their responsibilities grow and court technology adoption expands.

Implementation of these pathways requires updating both hiring practices and ongoing professional development programs to reflect new knowledge requirements for AI. Job descriptions and recruitment strategies must be revised to attract candidates with relevant AI skills while expanding outreach to build diverse talent pools. Courts should then design comprehensive training programs that deliver role-appropriate AI education through on-boarding processes, dedicated training events, continuing education opportunities, and on-the-job experience.

A new AI Readiness guide for courts, created by the National Center for State Courts, provides more details on how to establish role-based skills and how to embed them throughout employees’ career life-cycles, no matter the role.

Additional considerations

In addition, successful AI integration also requires comprehensive change management strategies implemented from the very beginning of any upskilling initiative. This process must begin with thorough stakeholder mapping to identify those key champions who will advocate for AI adoption, skeptics who may resist change, and the specific workflows that will be impacted by adoption of any new technologies.

A robust communication plan is equally essential. Clear and frequent messaging should explain the purpose of AI tools and their benefits to court operations, while establishing boundaries for their use and the safeguards put in place to protect judicial integrity.

Courts also need to establish effective feedback loops and metrics aligned with performance goals in order to maintain accountability. For example, courts should ensure that input from users is reported back to court leadership, and they should create user councils and anonymous reporting channels. Courts should also hold quarterly gatherings to address concerns and establish ownership of AI training initiatives at all levels of the organization.




Likewise, the success of any judicial AI upskilling program fundamentally depends on strong leadership commitment and visible championship of AI education initiatives. Leaders must first establish a clear vision and guardrails by articulating how AI tools align with the court’s mission and constitutional principles. And this commitment must be demonstrated through modeling behaviors, with judicial leaders completing training programs themselves, using AI tools responsibly in their own work, and openly sharing lessons learned with their teams.

Finally, effective leadership requires strategic resource allocation that ensures governance alignment by embedding AI education into strategic plans, continuing judicial education requirements, and performance evaluation frameworks.

To create AI skills based on roles, courts need a disciplined plan and the will to follow it. By defining shared competencies, tailoring role-based learning, and backing change with clear communication, metrics, and leadership, courts can introduce AI without compromising their independence, fairness, or accountability. Indeed, starting small, measuring rigorously, iterating transparently, and centering human judgment at every step are core elements of the blueprint — and now is the moment to put it to work.


To get started, check out these resources created by the National Center for State Courts

New guide: A three-level approach to AI readiness in state courts
/en-us/posts/ai-in-courts/ai-readiness-courts-guide/ (Thu, 30 Oct 2025)

3 key takeaways:
      • Establish strong governance and principles first — Before implementing AI, courts must create cross-functional oversight committees, define guiding principles that align stakeholders, and develop clear AI use policies with high-quality data governance.

      • Prioritize people-centered implementation — Successful AI adoption requires engaging stakeholders early as co-creators and conducting thorough resource assessments that account for total cost of ownership (including maintenance and compliance).

      • Commit to continuous monitoring and adaptation — AI implementation requires ongoing human oversight to monitor performance, prevent data and model drift, and systematically review governance structures and policies after each project to strengthen courts’ overall AI readiness for future initiatives.


AI has the clear potential to revolutionize courtroom workflows, but it also carries unforeseen risks. Indeed, AI solutions are complex and opaque, with inherent randomness and risk, says Appavoo, Senior AI Manager in the New Jersey courts.

To help courts leverage AI safely, the National Center for State Courts (NCSC), with support from the State Justice Institute, convened 16 experts to create an AI Readiness guide, which was featured in a recent webinar. The guide provides practical advice and offers a three-level approach for courts adopting AI: strategic planning (level 1); thoughtful project implementation (level 2); and continuous adaptation (level 3). These three levels take courts from establishing governance and principles to executing measurable, people-centered projects that enhance trust and further the course of justice.

Establishing governance, principles & policies

To unlock AI’s potential while mitigating hazards, courts must first establish a strong foundation through clear governance, guiding principles, and well-defined policies. More specifically, courts should:

Establish governance with a diverse group of voices — A cross-functional committee sets policy, oversight, and feedback loops. “AI governance is… really the leadership structure for all of the court’s uses of AI,” says one NCSC expert, adding that the AI Governance Tool in the AI Readiness guide should be used to run a structured 12-month plan that covers level 1 readiness steps end to end.

Define your operating philosophy before you start — Guiding principles are not bureaucratic exercises but essential blueprints for successful and ethical AI integration. Without them, courts risk misalignment among stakeholders, systems that do not serve their intended purposes, and costly failures. These principles provide a constant reference point, ensuring that as AI projects evolve, the court remains true to its core values and objectives.

Indeed, the overarching mindset that directs actions and choices as part of the governing principles should align stakeholders, manage expectations, and anchor future decisions. “The leading cause of software failures historically has been misalignment among stakeholders and changing or poorly documented requirements,” says one panelist, an Assistant Professor of Computer Science at George Mason University, adding that the same is true for AI projects. “Without these guiding principles [for AI use], there’s the same risk for misalignment among stakeholders.”

Another core tenet of any firm foundation is to set internal rules, as part of an AI use policy, that provide guardrails and clarity for staff during the transition. And because high-quality, well-governed data is fundamental, courts must pay close attention to data quality. “One of the dirty secrets of data science is the data cleansing process,” says Appavoo. “Garbage in, garbage out.”

Finally, courts should pick projects by using workflow analysis to identify pain points, then use a scoring matrix to evaluate potential projects on criteria such as impact and feasibility.
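For illustration only, here is a minimal sketch of such a scoring matrix, with hypothetical criteria and weights; the actual tool in the AI Readiness guide may define these differently.

```python
# Hypothetical criteria and weights: impact and feasibility score high-is-good,
# while risk and cost are inverted so that lower ratings score better.
WEIGHTS = {"impact": 0.4, "feasibility": 0.3, "risk": 0.2, "cost": 0.1}
INVERTED = {"risk", "cost"}

def score_project(ratings: dict) -> float:
    """Weighted score from 1-5 ratings on each criterion."""
    return sum(
        weight * ((6 - ratings[c]) if c in INVERTED else ratings[c])
        for c, weight in WEIGHTS.items()
    )

candidates = {
    "AI-assisted hearing scheduling": {"impact": 3, "feasibility": 5, "risk": 1, "cost": 2},
    "Sentencing decision support":    {"impact": 5, "feasibility": 2, "risk": 5, "cost": 4},
}
for name in sorted(candidates, key=lambda n: -score_project(candidates[n])):
    print(f"{name}: {score_project(candidates[name]):.2f}")
```

Under these invented weights, the low-risk scheduling project (4.10) outscores the high-risk decision-support tool (3.00), which matches the cautious risk posture the guide recommends.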

Implementing projects that focus on practicality

After foundational planning is complete, the next stage focuses on the practical implementation of AI projects through productive change management, resource assessment, and strategic procurement. Beyond initial deployment, substantial work occurs during this stage.

The most important element in this phase is that successful AI adoption hinges on a strategic, people-centric approach that carefully considers resources and risk. “When people are engaged early and meaningfully, they stop being subjects of change and start being co-creators and co-designers of it,” explains one panelist, an Assistant Professor of Art and Design at Northeastern University. “And that sense of ownership is one of the strongest predictors of adoption.”

Indeed, effective change management and prioritizing person-centered design are paramount. Often, this means actively engaging stakeholders, fostering open communication, and providing comprehensive training and support throughout the project lifecycle.




Perhaps the most challenging action in this phase is moving beyond immediate costs and benefits to understand the full financial and operational implications of AI projects. This requires an accurate assessment of both tangible and intangible costs, along with clearly defined success metrics.

“What’s really tricky about that is some of those costs are very obvious and simple,” says Dr. Miller. “Some of them are very squishy and hard to estimate, and the same goes for the benefits.”

In fact, at this stage there are common pitfalls around cost, according to Judy, Chief of Innovation and Emerging Technologies for Maricopa County, Arizona. “Courts sometimes focus only on the upfront purchase price, or the development budget, and they ignore the updates, the retraining, the legal compliance — and that can multiply the total cost of ownership.”
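To illustrate the point with invented numbers, a back-of-the-envelope total-cost-of-ownership calculation shows how recurring costs can dwarf the purchase price:

```python
def total_cost_of_ownership(purchase_price: int, annual_costs: dict, years: int = 5) -> int:
    """Upfront price plus recurring costs over the system's expected life."""
    return purchase_price + years * sum(annual_costs.values())

# All figures below are hypothetical, for illustration only.
annual = {
    "updates_and_maintenance": 20_000,
    "staff_retraining": 10_000,
    "legal_compliance_reviews": 15_000,
}
upfront = 50_000
print(f"Upfront purchase price: ${upfront:,}")
print(f"Five-year TCO: ${total_cost_of_ownership(upfront, annual):,}")  # $275,000
```

In this toy example, the $50,000 purchase price is less than a fifth of the five-year total, which is exactly the multiplication effect Judy warns about.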

Further, courts need to consider their own capabilities, the practicality of their AI solution, its long-term sustainability, and potential risks around transparency and vendor dependency. If the decision is to buy a product off the shelf, the procurement process and the vetting of vendors will be key. “If we don’t clarify who’s responsible when the system makes a mistake, we expose ourselves to reputational and legal risk,” Judy notes.

Continuous improvement and preparing for the next AI initiative

After implementing an AI project, the journey does not end. Indeed, it evolves, and the critical task becomes incorporating lessons learned back into court operations through post-project review.

“It is not about getting in the game when it comes to AI, it is about staying in the game,” says Appavoo. “The complexity is actually after you productionize a solution — that is what we see.” Courts must keep a human in the loop, stay on top of observability, constantly monitor performance, and continually check that the data and model are not drifting and that the business context is not changing, Appavoo explains.

To help put this into practice, the AI readiness guide has comprehensive feedback checklists courts can use to systematically review the foundational AI program elements for ongoing adaptation. More specifically, the post-project review process should examine whether governance structures remain effective, if guiding principles need refinement, and whether internal policies require updates. This continuous improvement approach transforms each AI implementation into a learning opportunity that strengthens the court’s overall AI readiness for its subsequent initiatives.


You can access the AI Readiness guide from the National Center for State Courts and the State Justice Institute here

AI evidence in jury trials: Navigating the new frontier of justice
/en-us/posts/ai-in-courts/ai-evidence-trials/ (Mon, 06 Oct 2025)

Key highlights:

      • AI evidence creates a credibility dilemma for juries — Jurors are prone to treating AI outputs as factual and authentic, which makes it difficult to distinguish between legitimate evidence and sophisticated deepfakes.

      • Current evidentiary rules are inadequate for AI — Traditional rules of evidence may not be equipped for AI’s ability to create hyper-realistic fabricated content, or for the challenge of authenticating such evidence.

      • Courts need proactive measures and adaptability — To navigate this new frontier, courts must implement comprehensive jury instructions, build boards of AI evidence experts, provide red flag training for all participants, and develop flexible legal guidelines that can be adapted to the rapid evolution of AI technology.


AI evidentiary issues are presenting multi-layered challenges in the United States court system. To deal with them, courts must perform a careful balancing act: Embracing helpful technology while guarding against potential deception from AI-generated evidence, especially in jury trials.

Several issues make this balancing act a challenge. First, individuals — including jurors — tend to treat AI outputs as authentic and factual. “We know that people often treat artificial intelligence, [and] the outputs that they receive from there, as factual, and that inflates credibility across the board,” says Johnson, Director of the Center for Jury Studies and Principal Court Management Consultant at the National Center for State Courts (NCSC). “We know that the artificial intelligence itself has become more believable.”

Second, audiovisual testimony can be more memorable in a juror’s mind than written testimony, according to Grossman, a Research Professor at the University of Waterloo (Ontario) and an eDiscovery lawyer and specialist. That can irreversibly influence juries, particularly if the audiovisual testimony is fabricated, because deepfakes are hard to unsee, Grossman explains.

Judge Yew, of the Santa Clara County Superior Court in California, agrees that the novelty of this development is a challenge. “In the past, video has been used to refresh someone’s recollection when they forgot something,” Judge Yew says. “And so now we are worried about videos being used to modify or change or corrupt someone’s memory.”

Finally, the liar’s dividend risks juries dismissing legitimate, properly authenticated evidence simply because AI manipulation is possible. “If we overdo it, we are going to make our jurors so skeptical of everything, and they will become cynical and question all evidence, even legitimate evidence,” Grossman says. “But if we give them no guidance, we certainly do not want them pulling out magnifying glasses” to ascertain authenticity on their own using ad hoc methods.

A recent webinar looked at how courts can navigate the complexity of these psychological and technical challenges. The webinar panel included Judge Yew, Grossman, and Johnson, with the Dean and Professor of Law at the University of New Hampshire Franklin Pierce School of Law serving as the moderator.

The panel discussed how AI can impact evidence that is both acknowledged and unacknowledged. For example, acknowledged AI-generated evidence can enhance expert testimony and improve juror comprehension, such as in accident reconstruction. This is only possible when AI methods are transparent and there is clear chain-of-custody of the evidence.

Unacknowledged AI-generated evidence, on the other hand, can include deepfakes and other falsified evidence that are intended to deceive and may be hard to detect. Courts and lawyers must balance skepticism, disclosure, and expert input in these cases to protect juries without paralyzing them.

Limitations of current rules of evidence

Panelists also laid out options for practical legal frameworks that govern how acknowledged and unacknowledged AI-generated evidence can be admitted and evaluated in courtrooms. Different rules apply, of course, depending on the type of evidence. More specifically, in matters of acknowledged AI-generated evidence, validity, reliability and bias are primary considerations; and in matters concerning potential unacknowledged AI-generated evidence, the primary issue for judges and juries is to determine authenticity.

The current rules of evidence may not always be compatible with AI-generated content in several critical ways. Traditional authentication rules under Federal Rule of Evidence 901 assume that evidence originates from reliable sources; however, given AI’s ability to create hyper-realistic deepfakes that are indistinguishable from authentic content, this assumption may not always hold. In addition, the self-authenticating document provisions in Rule 902 may inadvertently admit fabricated evidence that has been processed through official channels, such as AI-generated documents filed with government agencies that then become official records.

Further, the technical sophistication of generative AI compounds these challenges. Many generative AI systems are trained adversarially: two algorithms compete to get better, one producing synthetic content and the other trying to flag it, so every gain in detection drives a corresponding gain in generation, which makes the resulting fakes harder to detect, Grossman explains. Current automated-detection tools often fail in these cases, and even human experts can only provide probability assessments rather than definitive determinations of authenticity.
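To see why this competitive dynamic erodes detection, consider a deliberately simplified numeric sketch. It is a toy, not a real generative adversarial system: a stand-in “generator” drifts toward the statistics of authentic data while a naive “detector” classifies samples by a midpoint threshold, and all names and numbers are illustrative assumptions.

```python
# Toy illustration only: not a real GAN. A stand-in "generator" drifts
# toward the statistics of authentic data; a naive "detector" classifies
# samples by a midpoint threshold. All values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN = 5.0   # assumed statistic of "authentic" samples
gen_mean = 0.0    # the generator starts out producing obvious fakes

def detector_accuracy(real, fake):
    """Classify by distance: samples above the midpoint count as 'real'."""
    threshold = (REAL_MEAN + fake.mean()) / 2.0
    correct = np.sum(real > threshold) + np.sum(fake <= threshold)
    return correct / (len(real) + len(fake))

for rnd in range(8):
    real = rng.normal(REAL_MEAN, 1.0, 500)   # authentic samples
    fake = rng.normal(gen_mean, 1.0, 500)    # synthetic samples
    print(f"round {rnd}: detector accuracy = {detector_accuracy(real, fake):.2f}")
    # The adversarial step: the generator moves toward whatever
    # distinguishes real from fake, shrinking the detector's edge.
    gen_mean += 0.6 * (REAL_MEAN - gen_mean)
# Accuracy decays from ~1.0 toward ~0.5 (a coin flip), mirroring why
# automated deepfake detection degrades as generators improve.
```

In a real system the detector also learns, which is precisely why, as Grossman notes, automated tools struggle to keep pace.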

In her work with Judge Paul W. Grimm, Grossman has proposed approaches for handling both categories of AI-generated evidence. The first, related to acknowledged AI-generated evidence, sought to incorporate reliability standards for machine-generated evidence into the current evidentiary rules. Instead, the Federal Rules Advisory Committee decided to draft a new rule — proposed Federal Rule of Evidence 707 — which applies the expert reliability standards found in current rules to machine-generated evidence offered without expert testimony. This solution ensures AI-generated evidence meets the same reliability requirements as expert testimony while also addressing concerns about validity, bias, and methodological soundness.

In scenarios involving unacknowledged AI-generated evidence, such as cases in which the authenticity of the evidence is disputed, Grossman and Judge Grimm proposed that, if a jury could reasonably find the evidence either authentic or inauthentic, judges should apply a balancing test: weighing how much the evidence actually helps prove something important in the case (its probative value) against the risk that it will unfairly inflame, mislead, confuse, or waste time (its prejudicial effect).
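For illustration only, here is a minimal sketch of that balancing logic in Python, with hypothetical numeric scores standing in for the qualitative judgments a court would actually make; nothing here is a codified rule.

```python
# Illustrative sketch of the proposed balancing test; the numeric scores
# are hypothetical stand-ins for a judge's qualitative assessments.
from dataclasses import dataclass

@dataclass
class DisputedExhibit:
    probative_value: float        # how much the evidence helps prove a key fact
    prejudicial_risk: float       # risk it inflames, misleads, confuses, or wastes time
    authenticity_disputed: bool   # could a jury reasonably find it either way?

def should_admit(exhibit: DisputedExhibit) -> bool:
    """Admit unless prejudicial risk outweighs probative value."""
    if not exhibit.authenticity_disputed:
        return True  # ordinary authentication rules govern instead
    return exhibit.probative_value >= exhibit.prejudicial_risk

# A video with modest probative value but high deepfake risk is excluded;
# a highly probative exhibit with low risk reaches the jury.
print(should_admit(DisputedExhibit(0.4, 0.8, True)))  # False
print(should_admit(DisputedExhibit(0.9, 0.3, True)))  # True
```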

What can courts do now?

Courts must take critical steps to deal with AI evidence in jury trials as it becomes increasingly sophisticated. The NCSC’s Johnson cites the importance of ensuring court participants are prepared. “There’s a very technical aspect to this discussion,” Johnson says. “And there is certainly a space for education… not just for individuals serving on a jury, but for the people who help juries.”

Most importantly, courts should implement comprehensive jury instructions that help jurors understand their evolving responsibilities in evaluating digital evidence. Judges can also improve jury comprehension by allowing jurors to ask questions about how AI was used in evidence during the screening of questionable material.

Two additional actions for courts include:

Building a board of AI evidence experts — Courts may need to appoint AI-detection specialists to ensure fair evaluation when parties lack the resources for private experts. Judge Yew points out that there is precedent for keeping a group of experts on retainer whom the courts can call upon, such as the experts used in competency hearings to determine an individual’s capacity to stand trial in a criminal case.

Offering “red flag” training — Courts should also train attorneys, judges, and jurors to spot suspicious evidence. Indeed, court participants should learn to scrutinize “too good to be true” evidence, particularly when the original devices or documents are unavailable for examination, Grossman advises. Elaborate explanations for unavailability should trigger heightened scrutiny and potential expert analysis to verify authenticity before the evidence reaches a jury.

Finally, the panelists say they believe that, over the long term, any legal framework will require flexible rules that can adapt to rapidly evolving technology. Given the swift pace of AI development, rigid regulations would quickly become obsolete. Instead, adaptable guidelines that focus on principles like reliability, transparency, and fairness will better serve future legal proceedings involving AI evidence.



Augmenting justice: A practical framework for AI in judicial workflows /en-us/posts/ai-in-courts/augmenting-justice-framework-judge-schlegel/ Mon, 29 Sep 2025 13:28:47 +0000 https://blogs.thomsonreuters.com/en-us/?p=67689

Key insights:

      • Stewardship over speed — Courts shouldn’t rush to adopt AI; they should implement it deliberately with policies, training, and review protocols that align with judicial ethics.

      • Human judgment is non‑negotiable — AI can streamline research and drafting, but interpretation, credibility assessments, proportionality, and equitable discretion must remain human — and handled by the right person at the right decision points.

      • Phased, role‑aware integration — A practical, 10‑phase framework enables incremental adoption across varying readiness levels, emphasizing clear boundaries, verification of outputs, confidentiality controls, and accountability to preserve judicial integrity.


As AI moves from novelty to infrastructure in professional practice, courts face a pivotal question — not whether to use AI, but rather how to implement these tools responsibly.

Judge Scott Schlegel of Louisiana’s Fifth Circuit Court of Appeal has become a careful and credible voice in this conversation. Drawing on active judicial experience, Judge Schlegel has published practical guidance and suggested guardrails in a new paper.

In a recent discussion, he outlined how courts can harness AI’s strengths without compromising the integrity, independence, and wisdom that define sound adjudication.

Why is this framework needed now?

AI’s rapid evolution presents both opportunity and risk. According to Judge Schlegel, the technology has reached a stage at which judges must exercise independent judgment in deciding how and when to deploy it. The judiciary need not be first to adopt new tools; rather, it must be right in how it adopts them. That measured stance reframes innovation as a matter of judicial craft — the question is not speed, but stewardship.

Judge Schlegel’s 10‑phase implementation framework is built from lessons learned in chambers, not a laboratory. Its purpose is to help courts establish boundaries, define roles, and stage adoption in a way that is consistent with judicial ethics and institutional realities. The framework provides a clear on‑ramp for courts at different levels of readiness, emphasizing that successful integration is a process, not a single event.


The initial step, as Judge Schlegel describes it, is deceptively simple. “Step 1 is the most important, and that is to do your job,” he writes. AI can accelerate tasks such as drafting or research triage, but it cannot — and must not — replace the uniquely human functions of judging. Interpretation, deliberation, credibility assessments, proportionality, and the exercise of equitable discretion remain irreducibly human. Properly implemented, AI frees judges and chambers staff to focus more attention on those human functions, not less.

Having the right human in the loop

Much commentary urges keeping a human in the loop; however, Judge Schlegel suggests going further, emphasizing the need to place the right human at the right points in the workflow. Not every participant in chambers must or should use AI for every task. The key is calibrated involvement: Identify the decision nodes at which human judgment is critical, and ensure those decisions are made by the appropriate judicial officer or trained staff member. In other words, governance is not satisfied by mere human presence; rather, it requires intentional role design and accountability.
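As a purely illustrative sketch of that idea — with hypothetical workflow steps and roles, not Judge Schlegel’s actual framework — a chambers policy could record, for each step, whether AI may assist and who owns the decision:

```python
# Hypothetical role/AI policy sketch: each workflow step declares whether
# AI may assist and which role must make the call, so governance becomes
# role design rather than mere human presence. Steps and roles are invented.
WORKFLOW = [
    {"step": "summarize record",    "ai_allowed": True,  "decided_by": "law clerk"},
    {"step": "draft routine order", "ai_allowed": True,  "decided_by": "staff attorney"},
    {"step": "assess credibility",  "ai_allowed": False, "decided_by": "judge"},
    {"step": "sign final ruling",   "ai_allowed": False, "decided_by": "judge"},
]

def authorized(step_name: str, actor_role: str, using_ai: bool) -> bool:
    """Check an action against the chambers' role/AI policy."""
    for step in WORKFLOW:
        if step["step"] == step_name:
            return (not using_ai or step["ai_allowed"]) and actor_role == step["decided_by"]
    return False  # undeclared steps are denied by default

print(authorized("assess credibility", "judge", using_ai=True))           # False
print(authorized("draft routine order", "staff attorney", using_ai=True)) # True
```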

Judge Schlegel further cautions against universal, simultaneous adoption. Not every judge needs to begin using AI immediately. What every court does need, however, is a shared foundation — policies, training, and review protocols — that clarifies the tasks in which AI belongs, where it does not, and how outputs will be verified. His framework is designed to be accessible and scalable, able to support judges who are early in their learning curve as well as more advanced users who wish to experiment within defined guardrails.

Guardrails that preserve judicial integrity

Responsible implementation turns on a few themes that run through Judge Schlegel’s framework. Verification requires structured review of AI outputs, including fact-checking and citation validation, before those outputs can influence judicial reasoning or orders. Confidentiality and privilege demand clear limits on what materials may be processed by AI tools and under what data-handling terms, particularly in situations involving sensitive information or sealed records. Finally, training and change management matter because effective adoption depends on equipping judges and staff with the skills to use AI judiciously and to recognize where it can fail.
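To make the verification theme concrete, here is a minimal sketch — hypothetical names and a placeholder citation, not any court’s actual tooling — of a gate that keeps an AI-assisted draft from advancing until every citation carries a recorded human check:

```python
# Hypothetical verification-gate sketch: an AI-assisted draft may not
# influence judicial reasoning until each citation or factual claim has
# a recorded human check. The names and citation are placeholders.
from dataclasses import dataclass, field

@dataclass
class ReviewItem:
    text: str                  # the citation or factual claim to verify
    verified_by: str = ""      # staff member who checked the source
    source_confirmed: bool = False

@dataclass
class AIDraft:
    title: str
    items: list[ReviewItem] = field(default_factory=list)

    def ready_for_judge(self) -> bool:
        """Advance only when every item has a confirmed, attributed check."""
        return all(i.source_confirmed and i.verified_by for i in self.items)

draft = AIDraft("Order on Motion to Compel", [
    ReviewItem("Smith v. Jones, 123 F.3d 456 (placeholder citation)"),
])
print(draft.ready_for_judge())            # False: unverified citation
draft.items[0].verified_by = "law clerk"
draft.items[0].source_confirmed = True
print(draft.ready_for_judge())            # True: check recorded
```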

Overall, treating AI as a shiny new tool is less helpful than recognizing it as a set of capabilities that, when properly governed, can expand a court’s capacity to deliver timely, well-reasoned justice. The goal is not to automate judgment, but to support it. When AI accelerates routine drafting or organizes complex records more efficiently, chambers staff can devote more attention to the hard work that only people can do, such as weighing credibility, interpreting precedent, crafting remedies, and explaining decisions in ways that foster public trust.

Moving forward

Judicial adoption of AI will be judged not by novelty but by fidelity to first principles. Judge Schlegel’s message is clear: Courts do not need to be first, but they must get it right.

Taken together, a phased framework like the one he outlines, the placement of the right humans at the right points, and a disciplined focus on the core judicial function can provide a path for responsible integration. With those commitments in place, AI can help courts do more of what matters most — delivering justice that is timely, transparent, and trustworthy.


You can find out more about how courts are managing their transition to a more AI-driven environment here.

New white paper: A blueprint for cultivating practice-ready lawyers /en-us/posts/ai-in-courts/white-paper-clear-research-practice-ready-lawyers/ Thu, 18 Sep 2025 13:33:16 +0000 https://blogs.thomsonreuters.com/en-us/?p=67568

Key takeaways:

      • New lawyers lack courtroom preparedness — Judges and experienced attorneys observe a significant decline in litigation, communication, and client advocacy skills among lawyers in their first five years of practice, with many new attorneys needing additional training before appearing in court.

      • Several factors underlie reduced learning opportunities — The readiness gap stems from law schools’ focus on theory over practical skills, reduced learning opportunities due to remote proceedings, and a traditional bar exam that doesn’t adequately assess real-world practice readiness.

      • A blueprint for improvement — The white paper advocates for solutions like mandatory supervised postgraduate practice, modernized bar admissions (including alternative pathways to legal careers), increased experiential learning in law school, and leveraging AI to enhance skills development and mentorship.


America’s courts are sounding an alarm: Too many new lawyers are entering the courtroom unprepared. To examine this critical situation, the Thomson Reuters Institute has published a new white paper, “The unprepared lawyer: How America’s legal education is failing the courts and what must change,” which is based on a research initiative from the Committee on Legal Education and Admissions Reform (CLEAR). The committee’s research draws on input from more than 4,000 judges and more than 4,000 practicing attorneys from all over the United States.

The findings are stark. Judges report declining litigation and communication skills among attorneys in their first five years of practice. Nearly 60% of judges say client advocacy is being harmed, and more than half believe new lawyers should not appear in court without additional training.


What is driving the readiness gap? Law schools excel at teaching students research and analysis skills, but the schools’ practical preparation lags. Courtroom advocacy, procedural fluency, and professional communication clearly are not getting enough emphasis, the research shows.

Also, remote proceedings and fewer live trials have reduced opportunities for new lawyers to learn by observing, and many of those lawyers say the traditional bar exam does not measure real practice readiness — even as a newly updated bar exam that focuses more on foundational skills will be in use by next summer.

A pathway to change

Distilling this research, the white paper offers a blueprint for change, much of it suggested by the judges involved. For example, many judges suggest requiring supervised postgraduate practice so that no new lawyer practices alone on day one. Others urge modernizing bar admissions and offering alternative pathways that evaluate real competencies, such as supervised practice, curated portfolios, and staged testing.

Other suggestions include making experiential learning mandatory in law school through clinics, externships, simulations, and practical drafting, as well as harnessing AI to improve skills development, feedback, and mentorship while keeping humans in the loop.

Despite the seriousness of the situation, there is some encouraging news. Evidence shows that initiatives such as supervised practice, mentorships, clinics, externships, and on-the-job training do improve the skills new lawyers need. Judges and lawyers overwhelmingly credit these experiences with building competence, judgment, and professionalism among newer lawyers.

The legal profession has a choice: Act now to reverse the erosion in courtroom performance and rebuild public trust, or allow the problem to deepen. This white paper shows a path forward by aligning education, licensure, and technology to produce lawyers who are not just bar-ready but practice-ready. The nation’s courts, clients, and communities depend on it.


You can download the new white paper “The unprepared lawyer: How America’s legal education is failing the courts and what must change” by filling out the form below:
