Unauthorized Practice of Law Archives - Thomson Reuters Institute

Thomson Reuters Institute is a blog from Thomson Reuters, offering the intelligence, technology, and human expertise you need to find trusted answers.

How AI-powered access to justice is impacting unauthorized practice of law regulations

Key insights:

      • Courts and the legal profession need to show leadership — Given their specialized knowledge of the needs of litigants and of the court system, courts should take the lead in defining the unauthorized practice of law.

      • 3 paths forward to workable regulatory solutions — Recent discussions and research around this subject offered three paths toward modernizing UPL definitions.

      • Uncertainty harms users and innovation — Fear of UPL can drive self-censorship and market exits, even as litigants continue to use publicly available GenAI tools.


Today, many Americans experience legal issues but lack proper access to legal representation. At the same time, AI tools capable of providing legal information are rapidly evolving and already in widespread use. Between these two facts lies a critical definitional problem that courts and state bars must urgently address: How to define the unauthorized practice of law (UPL) in a way that doesn’t further curtail access to justice.

This discussion is not theoretical. It directly determines whether AI-based legal services can operate, how they should be regulated, and ultimately whether AI can help unrepresented or self-represented litigants gain meaningful access to justice. This issue was explored in more depth during a recent webinar hosted by a joint effort of the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

The need for clear definitions

During the webinar, Alaska Supreme Court Administrative Director Stacey Marz noted that “there is no uniform definition of the practice of law” and that UPL regulations represent “a real varied continuum of scope and clarity.” This variation makes compliance challenging for technology providers, especially as they navigate 50 different state standards.

UPL generally occurs when someone “not licensed as an attorney attempts to represent or perform legal work on behalf of another person,” explained Cathy Cunningham, Senior Specialist Legal Editor at Thomson Reuters Practical Law.

Marz added that such legal advice typically involves “applying the law, rules, principles, and processes to specific facts and circumstances of that individual client — and then recommending a course of action.”

The challenge, however, is that AI can appear to do exactly this, yet the regulatory framework remains unclear about whether and how this should be permitted and how consumers can be protected.

3 paths forward

During the recent webinar, panelists discussed several different approaches to UPL regulations and outlined three paths that state courts could take:

Path 1: Explicitly enabling tools within a regulatory framework — UPL statutes can be revisited to explicitly allow purpose-built AI legal tools to operate without threat of UPL enforcement, provided they meet certain requirements. Prof. Dyane O’Leary, Director of Legal Innovation & Technology at Suffolk University, emphasized that consumer-facing AI legal tools are already being used for tailored legal advice, arguing that some oversight is better than “just letting these tools continue to operate and hoping consumers aren’t harmed by them.”

Path 2: Creating regulatory sandboxes — Courts could establish temporary experimental zones in which AI legal service providers can operate under controlled conditions while regulators gather data about efficacy and safety through feedback and research, with an eye toward informing future regulation reform.

Path 3: Narrowing UPL to human conduct — Clarifying that existing UPL rules apply only to humans who hold themselves out as attorneys in tribunals or courtrooms, or who create legal documents under the guise of being a human attorney, would effectively leave AI-powered legal tools clearly outside UPL restrictions and open up a “new pocket of the free market” for consumers.

Utah Courts Self-Help Center Director Nathanael Player referenced Utah Supreme Court Standing Order Number 15, which established the state’s regulatory sandbox using a fundamentally different standard: not whether services match what lawyers provide, but rather “is this better than the absolute nothing that people currently have available to them?”

Prof. O’Leary reframed the comparison itself, suggesting that instead of comparing consumers who use AI tools to consumers with an attorney, the framework should be “consumers that use legal AI tools, and maybe consumers that otherwise have no support whatsoever.”

The personhood puzzle

“AI, at this time, does not have legal personhood status,” said Practical Law’s Cunningham. “So, AI can’t commit unauthorized practice of law because AI is not a person.”

However, Player pushed back on this reasoning, clarifying that “AI does have a corporate personhood. There is a corporation that made the AI, [and] the corporation providing that does have corporate personhood.” He added, however, that “it’s not clear, I don’t think we know whether or not there is… some sort of consequence for the provision of ChatGPT providing legal services.”




This ambiguity creates what might be called the personhood gap, a zone of legal uncertainty with serious consequences for both innovation and access to justice.

Colin Rule, CEO at online dispute resolution platform ODR.com, explained that “one of the major impacts of UPL is actually self-censorship.” After receiving a UPL letter from a state bar years ago, he immediately exited that market. This pattern repeats across the legal tech landscape, leaving companies hesitant to innovate.

Rule’s bottom line resonates with anyone trying to build solutions in this space. “As a solution provider, what I want is guidance,” Rule explained. “Clarity is what I need most… that’s my number one priority.”

Moving forward: Clarity over perfection

The legal profession needs to lead on this issue, and that means state bars and state supreme courts must take action now. The tools are already in use, and the question is not whether AI will play a role in legal services, but rather whether that role will be defined by thoughtful regulation or by default.

The solution is for the judiciary to provide clear guidance on what services can be offered, by whom, and under what conditions. To do that, courts must first acknowledge that for most people, the choice is not between an AI tool and a lawyer but between an AI tool and nothing. Given that, states must walk a path that will both encourage innovation and protect consumers.

To this end, legal professionals and courts should experiment with these tools, understand their trajectory as well as their current limitations, and work collaboratively with developers to create frameworks that prioritize consumer protection without stifling innovation that could genuinely expand access to justice.



Scaling Justice: Unauthorized practice of law and the risk of AI over-regulation

Key insights:

      • Are regulations choking innovation? — Current regulatory efforts may be stifling innovation in AI-driven legal solutions, exacerbating the access to justice crisis and prioritizing lawyer business model protection over consumer needs.

      • Some safeguards already in place — Existing consumer protection laws and product liability laws already provide robust safeguards against potential AI-related harm, making it unnecessary to impose additional restrictive policies on AI-driven legal services.

      • A balanced regulatory approach is best — An approach that encourages responsible innovation, prioritizes consumer protection, and fosters a data-driven mindset can best unlock the transformative potential of AI in addressing critical gaps in access to justice.


As AI-driven legal solutions gain traction, calls for regulation have grown apace. Some are thoughtful, others ill-informed or protectionist, and many focus on the issue of unauthorized practice of law (UPL). While protecting the public is crucial, shielding the legal profession from competition is not. A large majority (92%) of low-income people currently receive no or insufficient legal assistance, and the ongoing uncertainty in the legal AI and UPL regulatory landscape is chilling innovation that could support them.

The legal profession has always struggled to provide affordable, accessible services, even as it simultaneously attempts to block those working ethically to bridge the gap with technology. Done right, legal industry regulation balances protection with progress to avoid stifling innovation and exacerbating the access to justice crisis.

Consumer protection laws already provide robust safeguards against potential AI-related harms. Existing product liability laws and enforcement actions by state attorneys general ensure that consumers have recourse if AI legal tools cause harm. Despite these safeguards, concerns about unregulated AI filling the gaps in legal services persist.

It is time to upend the calculus of consumer harm and examine the motives behind regulation. Rather than forcing tech-based legal services to prove they cause no harm in order to avoid charges of UPL, regulators should be required to justify, with data, any claim that legal technology companies cause harm, and to weigh whether a ruling will constrain supply in the face of a catastrophic lack of access to justice.

Uneven regulatory efforts raise questions

Current regulatory efforts tend to focus on companies that directly serve legal consumers, while leaving broader AI models largely unchecked. This raises uncomfortable questions: Are we truly protecting the public, or merely constraining competition and thereby reinforcing barriers to innovation in the process?




“If UPL’s purpose is protecting the public from nonlawyers giving legal advice — and if regulators define legal advice as applying law to facts — how many legal questions are asked of these Big Tech tools every day?” asks Damien Riehl, a veteran lawyer and innovator. “And if we won’t go after Big Tech, will regulators prosecute Small Legal Tech, which in turn utilizes Big Tech tools? If Big Tech isn’t violating UPL, then neither is Small Tech [by using Big Tech’s tools].”

Efforts to regulate the use of AI-based legal services are, de facto, another path to market constraint. Any attempt to regulate AI should be rooted in actual consumer experience. Justice tech companies, by definition, pursue mission-driven work to benefit consumers, but if an AI-driven tool causes harm, it should certainly be investigated and regulated. State bar associations are not waiting for harm to occur before considering regulating AI-driven legal help — and we must wonder why.

The risks of premature regulation

We must enable, not obstruct, AI-driven legal solutions and ensure that innovation remains a driving force in modernizing legal services. If restrictive policies make it difficult to develop cost-effective legal solutions, fewer consumers — particularly those with limited resources — will have access to legal assistance.

AI is developing far too quickly for a slower regulatory trajectory to keep up — any contemplated regulation would be evaluating last year’s technology, which is at best half as good as the latest iterations. Regulating AI-driven legal services now is akin to prior restraint, as when published or broadcast material is suppressed or prohibited before release because it is anticipated to cause future harm. That approach should not be applied to new technology; we can already look to product liability for evidence of harm.

If we prioritize consumers rather than lawyer business model protection, AI-enabled legal support would be monitored for potential harm, with data collected and analyzed to bring any issues to light. That way, regulations could be built around that defined, data-backed harm. For instance, we might require certification protocols for privacy or security if those issues prove problematic.

Forward-thinking states are going further

In July, the Institute for the Advancement of the American Legal System (IAALS) released a new report advocating a phased approach to regulation, beginning with experimentation, education, and consumer protection, while gathering and evaluating data. Later phases could involve potential regulation based on what is learned. In this way, innovation is encouraged while consumer needs and public trust remain paramount.

Also this year, Colorado cut the proverbial Gordian Knot by releasing a non-prosecution policy — consistent with existing analysis of UPL complaints in the state — for AI tools focused on improving access to justice. Guiding principles include ensuring consumers have clarity about the services they receive and their limits, educating consumers on the risks inherent in relying on advice from non-lawyer sources, and including a lawyer in the loop. Utah, Washington, and Minnesota all have considered similar policies. And IAALS now is collaborating with Duke University’s Center on Law & Tech to create a toolkit and templates to make it easier for other states to adopt UPL non-prosecution or similar policies.

Yet, some regulators seek the opposite, looking to define the exact types of business activity that will lead to UPL prosecution. While this framework is likely to become obsolete more quickly, it serves a similar purpose: providing clear guardrails that allow innovation to flourish, while protecting consumers by clearly indicating the limitations of the software. Some states have moved to specifically exclude tech products from UPL enforcement, provided they are accompanied by adequate disclosures that they are not a substitute for the advice of a licensed lawyer. Such policies are essential, and they can encourage those entrepreneurs aiming to ameliorate the justice gap.

What’s next?

The legal and justice tech industries should aim for a regulatory framework that encourages responsible, iterative innovation — and participants should take some proactive steps, including: i) justice tech companies should participate in the discussion and share their business- and mission-focused perspectives to help shape any new regulations; and ii) regulators with internal non-prosecution policies should consider making them public to encourage entrepreneurs in their state.

These approaches would enable positive change for state residents, support overburdened legal aid organizations and courts, and foster a flourishing tech ecosystem aimed at serving unrepresented and under-represented parties.

The legal profession has not been able to ensure justice for all, making it even harder for low-income and unrepresented parties to find the help they need. Now, AI-driven legal service providers are moving forward on addressing critical gaps in access to justice.

With a measured and equitable approach to regulation that neither ignores AI’s risks nor overlooks its transformative potential, the legal industry and regulators must keep pace with today’s technology — and such efforts should not obstruct those legal providers who can bring the law closer to that ideal and help close the justice gap.



Scaling Justice: Breaking through global regulatory roadblocks for increased justice equity

This article is part of an ongoing series titled Scaling Justice, by Maya Markovich and others in consultation with the Thomson Reuters Institute. The series aims to explore not only how justice technology fits within the modern legal system, but also how technology companies themselves can scale as businesses while maintaining their access to justice mission.


Despite the historic domination of business law in the field of legal tech, the industry has begun to include mention of access to justice in discussions of tech application, entrepreneurship, and investment in solutions that aim to narrow the justice gap through technology. The opportunity is vast, with a total addressable market of 5 billion people worldwide.

As in any nascent sector, startups on the crest of this breaking wave of justice tech must work through a lack of market awareness and stakeholder inertia toward new models. Further, they also face a gauntlet of regulatory obstacles in an antiquated profession that is reluctant to release its monopoly on the market. These restrictions hamper progress toward universal access to justice. With the advent of widely available AI tools, however, technology can and should be part of the solution, and clearing roadblocks to scale innovation in access to justice is essential.

Legal advice and services in the US and the UK

Regulations differ across regions, of course, but often have the same effect of hampering progress and market growth. In the United States, for example, the American Bar Association’s Model Rules of Professional Conduct restrict the unauthorized practice of law, precluding those not licensed as attorneys from delivering legal services. Every state has adopted some form of these rules, but there is no bright line between providing legal information and providing legal advice, leaving those who help pro se litigants to make their best guess.

Given the scale of the access to justice gap, lawyers alone cannot narrow it. Not only will there never be enough lawyers in the US to represent everyone who needs help with a legal issue, it also does not make economic sense for them to take on clients in certain types of matters. Pro bono and legal aid organizations are doing incredible work, but they are under-resourced and overwhelmed. As tech tools reach true viability in the legal sector, the profession needs clarity on what constitutes unauthorized practice of law and where and how technology can fit in to address the access issue.

In the United Kingdom, legal advice and services are similarly regulated. Legal advice is defined as guidance on legal rights and obligations, while reserved legal services include activities such as litigation and conveyancing (property transfer), which can only be performed by authorized professionals. This regulatory framework, while designed to ensure quality of legal services, also limits their accessibility to those who can afford them.

Clients often face overwhelming legal jargon and documentation, with no support for implementation or follow-up once a case concludes. This lack of transparency and continuity further alienates individuals seeking resolution to their legal problems. Moreover, costs can spiral without a clear ceiling, often leaving clients worse off than they were before they engaged legal services.

The UK and US have thriving legal tech communities, yet their impact on access to justice has been limited. Current first touchpoints for legal advice, such as GOV.UK and Citizens Advice in the UK and local bar associations in the US, simply direct users to lawyers rather than offering self-serve solutions. Collaboration is essential: government bodies, courts, legal aid organizations, and lawyers should work with justice tech companies to harness the potential of generative AI (GenAI) to promote access to justice.

Exploratory frameworks like regulatory sandboxes go beyond just providing innovators with access to regulators for Q&As; they can provide a platform for joint learning, user testing, and feedback-gathering that will enable evidence-based policy changes. This approach not only fosters trust and understanding but also paves the way for AI-driven legal services to gain the recognition needed to transform the sector.

The knotty state of current regulations

Without regulatory change, direct-to-consumer justice tech companies cannot help the millions of Americans who need them, because these companies are stuck. Indeed, some of the challenges that justice tech companies face are well illustrated in New York, especially around outdated rules of professional responsibility, which include hurdles such as:

      • vagueness of unauthorized practice of law statutes and revenue-sharing restrictions;
      • the inability to hire lawyers directly to provide legal advice;
      • the fact that, with very limited exception, only law firms can collect fees directly from clients seeking legal advice, otherwise it is “fee splitting” or revenue sharing, which is prohibited;
      • also, companies cannot hire paralegals with decades of experience to give any sort of public-facing advice to consumers unless they are supervised by a lawyer in a firm. What is allowed, however, is a lawyer with zero years of experience in an area of law supervising a paralegal within a firm structure to deliver the same advice; and
      • justice tech companies live under the constant threat of looming lawsuits for the unauthorized practice of law, which are threatened monthly and can easily bankrupt a startup.

These outmoded regulations governing unauthorized practice of law and revenue-sharing are standing in the way of ameliorating the access to justice crisis. And while their principles are incredibly important, there are no clear standards. Often, unauthorized practice of law determinations are subjective and overseen by legal bar groups that are often set up to protect lawyers’ market share, not the consumer.

The potential of GenAI

There is hope, however, in advanced technology. GenAI, particularly large language models (LLMs), holds much promise for the legal sector. By 2026, it is anticipated that 80% of legal cases will involve GenAI, significantly reducing time and resources. Further, GenAI has the potential to democratize access to justice by making legal advice affordable, accessible, and equitable. This scalability could provide exceptional value, far surpassing any incremental increases in the legal aid budget.

The UK and US stand at a similar crossroads, with the opportunity to lead the integration of GenAI into legal services and create a more accessible, affordable, and equitable legal system. The advent of Generative AI presents a chance to build upon alternative business models, transforming legal services in a way that benefits billions of consumers rather than merely increasing margins for law firms.

To effectively promote adoption and impact, however, regulators should transition from merely facilitating advocacy to instead establishing regulatory sandboxes. These sandboxes would allow consumers to safely experiment with new technologies, while innovators would gain visibility and a platform to build trust. Meanwhile, regulators could collect feedback and evidence from real users to guide their policy changes.

With the demand for legal access outstripping the supply of professionals, GenAI has the potential to transform justice tech by ensuring its applications are affordable, available at all hours, and equitable for every user. It is also critical to the future of the legal profession, which is desperately in need of new and creative ways to adapt to and survive in our changing world.



New AI-powered chatbot revolutionizes housing repairs and access to justice

Housing justice remains a persistent gap in New York City, seen in the proliferation of substandard housing conditions and a chronic lack of repair and upkeep. While tenants in the city have many rights relating to the safety and quality of their housing, there is a stark difference between those rights and the reality that tenants endure. Indeed, tenants should expect to live in safe, well-maintained buildings that are free from vermin, leaks, and hazardous conditions, but health and safety are far from guaranteed in the city’s housing network.

And this can have disastrous consequences. In January 2022, for example, 17 people died in a preventable fire at the Twin Parks North West apartment complex in the Bronx. “Basic safety measures, like self-closing doors, failed. The small fire spread across a building as a result and suffocated people far away from the source of the fire,” says Nori, a Senior Legal Innovation Strategist at Just-Tech and a housing attorney in NYC.

Indeed, this terrible event shows the dangers of housing repairs not being done across the city. Nori explains how tenants living in city-owned housing suffer through serious conditions, even as the city implements new policies to fix public housing apartments. “The impact of housing conditions falls squarely on the most vulnerable people, like children and the elderly, who suffer serious health issues like asthma from preventable conditions like leaks and mold,” Nori adds.

To address part of the housing justice gap, Nori was instrumental in launching a new AI assistant, Roxanne, a groundbreaking tool developed through a collaboration between HCA, a nonprofit organization specializing in tenancy law, and the legal tech company Josef. Roxanne addresses a critical gap in the NYC housing landscape — the lack of accessible, actionable information for tenants dealing with substandard housing conditions.

Laurie, the Executive Director of HCA, says that with the new Roxanne app, NYC renters can now get instant answers to all their rental repair questions. “Renting law and regulations in New York are notoriously complicated and hard to digest, so with Roxanne, we’ve made rental repairs guidance both easy to access and understand,” Laurie says.

Roxanne’s journey from development to implementation

The idea for Roxanne was born from a disparity Nori observed during eviction-prevention work: tenants had far fewer resources for housing condition issues. Nori then initiated a collaboration between HCA and Josef to leverage AI in bridging this gap.

The team developed a prototype that combined the capabilities of Josef’s Q platform with HCA’s extensive knowledge resources about repair issues. A benefit of using Josef’s platform is that HCA staff did not need to learn anything new to create and work with the tool, which works like a simple chat interface.
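Josef has not published Roxanne’s internals, so the following is only a minimal sketch of the general retrieve-then-answer pattern that knowledge-base chatbots of this kind commonly follow: match a tenant’s question against curated guidance entries, return the best match, and append a non-lawyer disclaimer. Every entry, name, and rule below is a hypothetical stand-in, not HCA or Josef material.

```python
# Illustrative sketch only: Josef's Q platform is proprietary, and none of
# the names or entries below come from the article. This shows the general
# retrieve-then-answer pattern a knowledge-base chatbot might follow.

KNOWLEDGE_BASE = [  # hypothetical stand-ins for HCA's repair guidance
    {"topic": "heat and hot water",
     "guidance": "NYC landlords must provide heat during the cold months. "
                 "Report outages to 311 and keep a dated log of each one."},
    {"topic": "mold and leaks",
     "guidance": "Document the condition with dated photos and request "
                 "repairs from your landlord in writing."},
]

DISCLAIMER = ("This is general information, not legal advice. For advice "
              "about your specific situation, contact a lawyer or a legal "
              "aid organization.")

def _tokens(text: str) -> set:
    """Lowercase, punctuation-stripped word set for crude matching."""
    return {w.strip(".,?!").lower() for w in text.split()}

def answer(question: str) -> str:
    """Return the guidance whose topic and text best overlap the question."""
    q = _tokens(question)
    best = max(KNOWLEDGE_BASE,
               key=lambda e: len(q & _tokens(e["topic"] + " " + e["guidance"])))
    return f"{best['guidance']}\n\n{DISCLAIMER}"

print(answer("My apartment has no heat this winter. What can I do?"))
```

A production system would use far better retrieval than word overlap, but the shape is the same: curated, attorney-reviewed content does the answering, which is what lets nonprofit staff maintain the tool without learning to code.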

Initially, Roxanne was designed to assist HCA’s frontline staff in answering queries by providing valuable insights into common housing issues, offering effective answers, and directing users to areas for advocacy in housing law. Over time, however, Roxanne evolved to a point at which it could be directly accessed by tenants themselves. The tech team rigorously tested Roxanne for more than six months to ensure optimal accuracy and trust.

Challenges and lessons learned

The development of Roxanne was not without its hurdles, however. Three main challenges emerged during the development and implementation stages, according to Nori, including:

      • Trust — There was initial skepticism among HCA staff about the use of AI. Overcoming this required patience and a demonstration of Roxanne the Repair Bot’s effectiveness.
      • Accuracy — While Roxanne initially outperformed human workers in accuracy, the team aimed for an even higher standard of more than 95% accuracy to ensure widespread acceptance and adoption.
      • Safety & compliance — A third challenge was making sure that the tool protects users’ privacy and complies with regulations and laws on non-lawyer legal help. These rules around the unauthorized practice of law are being tested continuously as AI expands its capabilities to provide actionable legal help directly to the public.

There is no doubt that the development of Roxanne was an experiment and a process of learning. For others looking to use AI and technology to address access to justice issues, Nori explains that the primary lesson learned from Roxanne was how it highlights the importance of patience and trusting the process. While Nori was initially eager to launch quickly, the team’s decision to take more time ultimately resulted in a more robust and public-ready tool.

Roxanne the Repair Bot represents a significant step forward in using AI to promote housing justice. By providing tenants with easy access to crucial information about their rights and options, this innovative tool has the potential to improve living conditions and health outcomes for many New Yorkers. As we look to the future, Roxanne serves as an inspiring example of how technology, lawyers, and non-profit advocates can work hand in hand to create meaningful improvements in the lives of many.



Humanizing Justice: The transformational impact of AI in courts, from filing to sentencing

Artificial intelligence (AI) tools are being introduced at every step of client interactions with the criminal and civil justice systems — well beyond the courtroom. These tools have improved efficiency and equity for defendants and their legal representation, from court filings to parole boards. They’ve also enabled employees to focus more on human interactions in their work and provided learning opportunities to improve equity and reduce bias in judicial outcomes.

Clerks’ offices: Streamlining document processing

AI-driven tools have been deployed in courts and clerks’ offices over the past five years, allowing clerks to reduce inefficiencies and errors that may occur in a largely human-run filing process. The Palm Beach County Clerk’s office, for example, received a national digital innovation award in 2018 for its use of a Lights-Out Document Processing program, which allows users to seamlessly analyze document filings as well as tag and index them with appropriate case information. Palm Beach County employees tested the software on a limited number of document types as a low-risk pilot in 2018. And after training the software on hundreds of documents, the team audited all of the documents that were organized by the software.

In that audit, the machines far outperformed the accuracy of their human counterparts. This deployment of five robotic document management systems was equivalent to the workload capacity of 19 human employees and freed up Palm Beach County workers for more thoughtful jobs that enabled them to grow both their skillsets and their earning potential.
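The county has not published how the program works under the hood, but a “lights-out” intake step of this kind can be pictured as a classifier that tags each e-filed document, extracts the indexing fields a clerk would otherwise key in by hand, and routes anything ambiguous back to a human. The sketch below is purely illustrative; the document types, phrases, and field names are assumptions, not details of the award-winning system.

```python
# Hypothetical sketch, not Palm Beach County's actual system: classify an
# e-filed document from its extracted text and pull case-indexing fields.
import re
from dataclasses import dataclass
from typing import Optional

DOC_PATTERNS = {  # assumed document types and textual cues
    "summons":  re.compile(r"\byou are hereby summoned\b", re.I),
    "motion":   re.compile(r"\bmoves? this court\b", re.I),
    "judgment": re.compile(r"\bfinal judgment\b", re.I),
}

CASE_NO = re.compile(r"\bcase\s*(?:no\.?|number)[:\s]+([\w-]+)", re.I)

@dataclass
class IndexRecord:
    doc_type: str
    case_number: Optional[str]
    needs_human_review: bool  # ambiguous filings go back to a clerk

def index_filing(text: str) -> IndexRecord:
    """Tag one filing; flag it for human review when the match is unclear."""
    hits = [t for t, pat in DOC_PATTERNS.items() if pat.search(text)]
    case = CASE_NO.search(text)
    return IndexRecord(
        doc_type=hits[0] if len(hits) == 1 else "unknown",
        case_number=case.group(1) if case else None,
        # Mirror the article's low-risk, audited rollout: anything the
        # rules can't settle cleanly is routed to a person.
        needs_human_review=len(hits) != 1 or case is None,
    )

record = index_filing("Case No: 2018-CA-001234 ... Plaintiff moves this Court for relief.")
print(record)  # doc_type='motion', case_number='2018-CA-001234', review=False
```

The design choice worth noting is the `needs_human_review` flag: starting with a narrow set of document types and auditing everything, as Palm Beach County did, is what makes a fully automated ("lights-out") pipeline defensible.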

Public defender offices: Enhancing legal advocacy

Also in Florida, Miami-Dade County Public Defender Carlos Martinez (previously featured in our Revolutionizing Rights series) has advocated for using large language model AI tools to aid the public defenders’ office in initial drafting of legal documents and research. His office is one of the first in the United States to make use of AI tools that can aid attorneys and their teams in research, document preparation, and memo drafting.

Technology can also help to improve case outcomes by diverting people from prison. Like Miami-Dade, the office of the Los Angeles County Public Defender also has integrated AI tools into its operations. Chief Information Officer Mohammad Al Rawi helped migrate 24 legacy, in-house systems to cloud-based platforms — fortuitously, right before the global Covid-19 pandemic.

The efforts of the office centered on humanizing the indigent and treating the people intersecting with the criminal justice system as just that: people, rather than simply case numbers. With AI tools to manage the troves of data within the office’s systems and then organize them by person (rather than by case), data can be transformed into a human narrative. This narrative approach helps legal teams advocate for alternative treatments and is part of the office’s larger goal of diverting people from incarceration.

In the non-criminal landscape, generative AI (GenAI) tools can help both legal professionals and pro se litigants.

In fact, some organizations funded by the Legal Services Corporation have used AI tools to generate pleadings and other filings for high-frequency, low-complexity case types, such as workers’ compensation, landlord-tenant disputes, and consumer debt filings. Additionally, for those people whom these organizations are not able to help individually, AI tools can provide do-it-yourself resources for pro se litigants, including document assembly tools and consumer-focused AI chatbots.

While some are wary of the risk of these AI-driven tools veering into unauthorized practice of law, if the programs do not provide tailored services and notice is given that ghostwriting was used, their output generally would not be considered the unauthorized practice of law.

AI’s role in reducing incarceration and predicting recidivism

One study assessed the use of AI tools that helped inform judges’ decision-making in more than 50,000 convictions in the State of Virginia. AI software was used to score each offender’s risk of re-offending and then advise judges on sentencing options: incarceration or alternative punishments. The study found that the AI tool could help to correct gender and racial bias that may come into play in judges’ discretionary decisions.

The tools would generate recommendations, but final sentencing decisions were made by human judges. The study found that judges disproportionately declined to offer alternative punishments to defendants of color, even when the tools suggested the alternative punishments.

Still, there is ample opportunity for AI tools to be utilized in this way. For example, the New York State Parole Board is one of several state boards that use the COMPAS actuarial tool to support decisions on parole and assess the likelihood of recidivism. The actuarial score for each incarcerated person is based on factors such as education level, age at the time of conviction, and individual plans for re-entry into society.
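COMPAS’s scoring model is proprietary, so the following is only a generic illustration of how an actuarial instrument can turn factors like those above into a banded score. Every factor name, weight, and cutoff below is invented for the example and implies nothing about COMPAS’s real internals.

```python
# Generic actuarial-scoring sketch; all factors, weights, and cutoffs are
# invented and do not reflect COMPAS's proprietary model.

WEIGHTS = {
    "no_high_school_diploma": 2,   # education level
    "under_25_at_conviction": 3,   # age at time of conviction
    "no_reentry_plan": 2,          # individual re-entry planning
}

def risk_band(factors: dict) -> tuple:
    """Sum the weights of the factors present, then map the total to a band."""
    score = sum(w for name, w in WEIGHTS.items() if factors.get(name))
    if score <= 2:
        return score, "low"
    if score <= 4:
        return score, "medium"
    return score, "high"

print(risk_band({"no_high_school_diploma": True, "no_reentry_plan": True}))
# -> (4, 'medium'); under the studies discussed here, such output informs
# rather than replaces a board's or judge's human judgment.
```

The simplicity is the point: a weighted checklist like this is transparent about which factors drive the score, which is precisely what critics say opaque commercial instruments lack.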

Another study employed an AI tool to analyze data from more than 4,000 individuals released on parole between 2012 and 2015. This research evaluated the outcomes for these individuals, considering their COMPAS scores and subsequent parole board decisions. The findings indicated that parole was often denied to those with low-risk COMPAS scores on account of the severity of their initial offenses. Conversely, other analyses have critiqued COMPAS, suggesting that its algorithm might be no better than human judgment and may suffer from bias.

We understand that the use of AI tools comes with concerns. Responsible utilization of AI technology demands the formulation of ethical GenAI principles, the establishment of governance frameworks, and continuous engagement with interdisciplinary experts to tackle ethical issues pertaining to bias, fairness, accuracy, reliability, and data privacy. Moreover, human oversight remains essential, as AI serves as a tool rather than as a lawyer itself.

None of these studies suggest that AI tools should replace human judgment, of course, but these tools can be used to evaluate and identify unconscious bias, which often works directly against efforts to reduce rates of incarceration.



Increasing Access to Justice: Innovations now happening in the nation’s court system

As generative artificial intelligence (GenAI) becomes more common, judges are discussing its potential role in courts. While government and legal profession respondents recognize opportunities for cost savings and workload reduction with GenAI, concerns about reliability and accuracy hinder its application, according to findings in the 2024 Generative AI in Professional Services report from the Thomson Reuters Institute.

Despite these concerns, however, there are opportunities to use GenAI to enhance customer experience and increase access to justice in civil courts across the United States.

GenAI in docket management

Judge Scott Schlegel advocates for GenAI’s potential to streamline docket management and enhance customer experience. And there is clearly a need for this type of modernization — 17% of hearings in 2023 were delayed more than 15 minutes, with failure-to-appear, both in-person and virtual, as a major cause, according to the 2024 State of the Courts Report from the Thomson Reuters Institute.

Further, the Superior Court of California in Los Angeles County has introduced an online scheduling system that allows litigants to make and manage their own reservations on the docket, within parameters established by courtroom professionals. This allows courts to manage their docket days and workloads while offering litigants a level of flexibility in their scheduling. One additional feature is the ability for litigants to subscribe to email and text message reminders about their hearings.

Research shows that smarter court scheduling technology like these examples can increase equity and efficiency and reduce failure-to-appear rates.

While these innovations could make docket management easier for many citizens, half to three-quarters of middle- and low-income Americans still struggle with access to legal guidance and representation for civil matters such as eviction, bankruptcy, workplace injuries, debt litigation, and more.

A study found that Legal Services Corporation-funded organizations simply can’t keep pace with demand — providing inadequate or no help for 71% of civil cases.

Fortunately, GenAI can further aid pro se (self-represented) litigants in high-frequency, low-complexity civil matters. A stakeholder session facilitated by Stanford University highlighted GenAI applications in court matters that do not constitute unauthorized practice of law, which would prohibit their use in some cases. These applications include drafting legal help guides, preparing court forms, and checking documents for errors in the user’s preferred language.
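As a concrete, entirely hypothetical illustration of the last use case, a pre-filing document check can be as simple as validating that required fields are filled in before a pro se litigant submits a form; none of the field names below are drawn from any court’s actual forms.

```python
# Hypothetical sketch of a pre-filing "document check" for pro se litigants.
# The required fields are invented; a real tool would load them from the
# court's own form specifications.
REQUIRED_FIELDS = ["petitioner_name", "case_number", "county", "signature_date"]

def check_form(form: dict) -> list:
    """Return plain-language problems the filer should fix before submitting."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not form.get(field, "").strip():
            problems.append(f"The '{field.replace('_', ' ')}' box is blank.")
    # A GenAI layer could go further, e.g., rephrasing these messages in the
    # user's preferred language or flagging inconsistent dates.
    return problems

draft = {"petitioner_name": "Jane Doe", "case_number": "", "county": "Kings"}
for msg in check_form(draft):
    print(msg)  # flags the blank case number and missing signature date
```

Because a check like this gives the same generic feedback to everyone rather than tailored advice about a specific dispute, it sits on the "legal information" side of the line the article describes.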

Concerns and solutions

Critics have voiced concerns about GenAI tools related to a potential lack of empathy, the reinforcement of systemic historical biases, or the oversimplification of complex legal information. Also, the increasing presence of GenAI tools will require users to have regular access to high-speed internet service, which is not available to all citizens. This digital divide, or the gap between those with and without access to high-speed internet, disproportionately impacts lower-income and rural households.

One important consideration for organizations to remember is that many people rely on smartphones as their primary means of getting online, and the future deployment of mobile-friendly GenAI applications will ensure that these tools reach a larger audience, especially those individuals who face barriers to high-speed internet.

And more tools and customer service innovations are coming online every day, potentially alleviating access-to-justice concerns. For example, one county offered a strong example of GenAI’s potential by introducing Cleo, a virtual office assistant and interactive chatbot. The tool now handles many routine inquiries, which saved the county more than $28,000 in the second quarter of 2022 (the last quarter for which data was available). An average of nearly 10,500 users per month accessed Cleo through text-based interactions, earning the county a customer satisfaction score of 90.7 out of 100.

Similarly, another court system uses its own avatar, named SANDI — an acronym for Self-Help Assistant Navigator for Digital Interactions. SANDI, which draws on a knowledge base of more than 900 questions, offers individuals the ability to talk via microphone in a speech-to-text fashion rather than typing. An average of 950 pro se litigants use the tool each month, which has reduced the need for human-driven chats by 94%. The tool offers bilingual conversational assistance in English and Spanish, and there are plans in the works to add Creole as a third language.

Considerations in future GenAI applications

Courts across the US face redundancy in developing GenAI tools due to jurisdictional differences; however, there are excellent examples from public and private sector legal professionals that can be modeled.

LegalYou, a tool developed by Ice Legal, a consumer protection firm with offices in Florida and New Hampshire, offers resources (including infographics and quizzes) that break down legal complexities into “bite-sized, easy to understand snippets.”

The Northwest Justice Project (the largest publicly funded legal aid program in Washington State) maintains a website with an extensive library of legal resources, DIY court forms, and materials in more than a dozen languages, as well as videos for those who are hearing- or sight-impaired.

Early examples like Cleo and SANDI demonstrate the twin successes of improving user experiences while saving taxpayer money. While it may be a time-intensive undertaking for any individual court system to develop its own tools, data supports the fact that GenAI legal resources can expand access to justice and reduce employee workload in a court setting, providing benefits to all involved parties.


