Chatbots Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/chatbots/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology, and human expertise you need to find trusted answers.

Upskilling court staff & driving operational efficiency through the use of AI chatbots
/en-us/posts/ai-in-courts/court-staff-chatbots/ (Mon, 17 Mar 2025)

In recent years, the California Superior Court of Orange County experienced high vacancy rates and staff turnover, resulting in a loss of expertise and institutional knowledge, in part due to retirements. Knowing that the situation was unsustainable, the Court took an innovative approach to this challenge by pursuing automated solutions involving AI, according to , Chief Financial & Administrative Officer at the Orange County Superior Court.

One of the main achievements in the Superior Court’s pursuit of innovation was its creation of EVA (Employee Virtual Assistant), an AI-powered chatbot used internally by the Court. The tool helps staff, especially those with limited experience, quickly and accurately access information about the court procedures, policies, and forms essential to performing their duties. It also provides quick access to procedures across various case types and departments; civil case processing alone, for example, involves 150 procedures.

To use the chatbot, staff members simply type questions in natural language. EVA then searches a large database of documents, such as court procedures and policies, and returns a concise summary of the relevant information, including a link to the source document so that users can easily conduct further research. The link to the reference material not only lets users validate and verify the accuracy of the tool’s answers, but also builds trust and comfort with it, thereby increasing adoption and utilization. EVA is currently used by staff in several court divisions.

Although EVA is a work in progress, the Orange County team expects this tool to reduce training time, improve procedural accuracy, and serve as the centerpiece in creating a more supportive working environment for staff and enhancing timely service to the public.

The journey to developing and implementing EVA

The development of EVA began with the creation of a knowledge base, built by uploading more than 150 civil procedures and automatically indexing them. One of the initial challenges the EVA development team faced was managing the multiple contexts within these procedural documents, in which a term like “complaint” could have varied meanings depending on the situation.

To overcome this, the team shifted tactics by developing hyper-specific bots tailored to individual departments and case types. They also used advanced embedding techniques, which represent each word or passage as a list of numbers (a vector) that captures its meaning, so the system can match text by meaning rather than by exact wording. This approach breaks the knowledge base into contextually relevant pieces, and the combination of tailored bots and embeddings goes a long way toward improving accuracy.
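The idea behind embeddings can be illustrated with a toy sketch (the vectors and labels below are invented for illustration and are not from the Court’s system): text with similar meanings produces nearby vectors, which is what lets a bot tell a civil-filing “complaint” apart from other senses of the word.

```python
import math

def cosine_similarity(a, b):
    """Measure how close two embedding vectors point (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional embeddings; a real system would use vectors with
# hundreds of dimensions produced by an embedding model.
embeddings = {
    "complaint (civil filing)": [0.9, 0.1, 0.3, 0.0],
    "petition (civil filing)":  [0.8, 0.2, 0.4, 0.1],
    "complaint (HR grievance)": [0.1, 0.9, 0.0, 0.4],
}

civil = embeddings["complaint (civil filing)"]
for label, vec in embeddings.items():
    # The civil-filing senses score close to each other; the HR sense does not.
    print(f"{label}: {cosine_similarity(civil, vec):.2f}")
```

Tailoring a bot to one department effectively restricts the comparison to vectors from that department’s documents, which is why the combination improves accuracy.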

Another challenge during the development of EVA was outdated procedures within the knowledge base, leading to inaccurate or irrelevant responses. To remove this barrier, the team implemented procedural reviews.

In the testing phase, subject matter experts refined EVA’s responses and identified outdated or inconsistent procedures for revision. This collaborative effort with the operations training teams led to significant improvements in the chatbot’s accuracy.

After fine-tuning EVA’s capabilities, a soft launch was conducted with a select group of 15 users, who provided valuable feedback to identify and fix system bugs. This iterative feedback process ensured that EVA was robust and ready for full deployment. User engagement was crucial in improving EVA’s functionality: iterative testing and feedback sessions with end users, such as legal processing specialists, enhanced the chatbot’s effectiveness.

During the implementation phase, comprehensive training materials, including videos and reference guides, were developed. In-person demos also were conducted to facilitate a seamless transition for all users. These resources ensured staff were well-equipped to integrate EVA into their workflows effectively.

Lessons learned and guidance for courts

The development and implementation of EVA yielded valuable lessons that will guide future innovations and improvements in court operations. , Strategic Innovations Group Operations Analyst, and , Strategic Innovations Group Data Analyst, both at Orange County, California Superior Court, described some of these lessons:

      • First, direct engagement with end-users during the development process is essential. They provide critical insights into their needs and ensure the solution meets their requirements.
      • Second, consolidating and updating procedures was a time-consuming effort, but it had a significant impact on the accuracy and effectiveness of EVA.
      • Finally, tailoring AI solutions to specific departments and case types ensured the solution meets the unique needs and improves the workflows of each department.

To help other courts successfully navigate the implementation of this transformative technology, Hernandez and Gee offered ways to get started, based on their firsthand experience:

      • Engage stakeholders early — Court tech teams should involve key stakeholders from the beginning to ensure their buy-in and support throughout the implementation process. This also helps in aligning the tool with the needs and expectations of its users.
      • Start with low-risk proof of concepts — Teams should begin with small-scale, low-risk proof of concepts to test and familiarize themselves with the technology. Then, use these initial trials to gather feedback and make iterative improvements.
      • Future-proof solutions — Teams should design the tool with flexibility in mind to accommodate future advancements, including making the solution easily upgradable and adaptable to new developments, such as integration with the latest large language models.

By leveraging AI to streamline access to critical information, EVA aids in mitigating the effects of limited training due to high turnover and demonstrates the Court’s commitment to embracing technology as a means to enhance operational efficiency and staff support, said Nora Sanchez, Chief Operating Officer at the Orange County Superior Court. And as courts continue to face evolving challenges, solutions like EVA showcase the potential of AI to transform traditional systems, upskill staff, and improve the overall administration of justice.


You can find out more about the here

Chatbots for justice: Building AI-powered legal solutions step by step
/en-us/posts/ai-in-courts/chatbots-for-justice-building-ai-powered-legal-solutions/ (Wed, 12 Mar 2025)

Low-income people in the United States can’t afford adequate legal help in 92% of civil matters, and the promise of AI could potentially make legal services more affordable, according to the . In fact, several court systems and nonprofits are demonstrating this promise, a couple of which were recently highlighted in a webinar series hosted by the .

For example, the People’s Law School developed the chatbot Beagle+ to assist people with step-by-step guidance on everyday legal problems. , Digital & Content Lead at the People’s Law School, led the efforts to create Beagle+ with technical assistance from , Founder of Tangowork. And the Alaska Court System (ACS) and LawDroid used a grant from the NCSC to develop an AI-powered chatbot called the Alaska Virtual Assistant, or AVA. Jeannie Sato, Director of Access to Justice Services at ACS, worked with , CEO and Founder of LawDroid, to develop the tool.

How courts can successfully experiment with AI

Jackson, McGrath, Sato, and Martin all offered step-by-step guidance on how courts and nonprofits can experiment with and use AI successfully within court systems.

Step 1: Determine the problem

When starting a generative AI (GenAI) legal assistance project, it is crucial to first pinpoint the specific legal needs and challenges faced by your target audience. McGrath noted that he sees several common examples, including providing public access to legal information, creating internal resources like bench books for judges, and automating court document preparation.

To properly identify the problem, conduct thorough user research to understand pain points related to accessing and applying legal information. For instance, Martin suggests starting by speaking with court staff. “I think we sometimes get caught up in the excitement about wanting to throw AI at the problem and create a solution,” Martin explains. “And there are many use cases, but I think the part that’s really important is to meet with your staff, meet with everyone who’s being impacted by the burden of work, and then determine, based on that, what is the best choice.”

Taking the time upfront to clearly define the problem will help ensure that any AI solution being developed is truly meeting a demonstrated need.

Step 2: Craft a vision

Shifting from problem identification to crafting a vision for the GenAI-powered solution is crucial. The People’s Law School’s Beagle+ chatbot illustrated this well. “Begin with the end in mind,” says Jackson. “When you begin a project, keep in mind what you’re trying to achieve and what success looks like because that’s going to be different for each person.”

Jackson further described how in 2018, the initial vision was to create a chatbot capable of intelligently answering questions about consumer and debt law in British Columbia. Today, while that vision is realized, the ability of GenAI technology to adapt and improve over time necessitates a continuous and evolving vision.

Step 3: Allocate realistic resources

Assessing available resources is crucial before embarking on a GenAI project, with a realistic evaluation considering such factors as existing legal content, technological capabilities, staff expertise and capacity, and budget.

It’s important to examine the state of the organization’s existing legal information, including its documents and web pages, to determine the quality and consistency. Indeed, conflicting information across sources often can confuse GenAI models.

For staff capacity, Sato explains how the ACS started with a small team of people, which included the court administrator, the chief technology officer, a webmaster, and two to three staff attorneys, who were necessary for content review, testing, and feedback. It is not uncommon for an initial project to consume about 30% of each team member’s time.

Technological expertise is also a key consideration in resource assessment. In fact, Martin says this underscores the importance of working with a technology partner that can help navigate the different choices and options available, including the need to understand options for AI model selection, vector databases, and embedding strategies. While some may consider self-hosting large language models (LLMs) to reduce costs, the expenses for setup and maintenance often outweigh the benefits compared with using established services like OpenAI.

Financial resources are also a consideration, of course; however, it is worth noting that the cost of OpenAI tokens is often surprisingly low compared to other project expenses. For the creators of Beagle+, for example, using OpenAI’s tool has cost no more than $75 per month, according to Tangowork’s McGrath.


Courts can explore the possibilities of AI tools in tackling their specific legal challenges by experimenting within


Addressing common concerns

Our experts say that two common concerns often arise when considering the use of GenAI to solve justice gaps: one is the need for multilingual capabilities; and the second is how to handle AI-generated inaccurate information, or so-called hallucinations.

“Advanced LLMs like GPT-4 demonstrate impressive multilingual capabilities and are able to understand and respond in numerous languages on-the-fly without requiring additional training or configuration,” explains McGrath. “Multilingual support is a key advantage of modern LLMs, enabling chatbots to serve diverse populations with minimal additional development effort.”

However, hallucinations are a significant concern when using LLMs for legal applications. Fortunately, the combination of several advanced strategies can mitigate hallucinations:

      • First, grounding responses through techniques like retrieval-augmented generation can help tether outputs to verified source material.
      • Second, careful prompt engineering and relevancy scoring can further constrain responses.
      • And finally, automated checks that compare model outputs to source documents can flag potential hallucinations.

At the same time, manual expert review by humans — known colloquially as human in the loop — remains crucial, even with automated safeguards in place. Therefore it is key to periodically sample responses for human verification and focus more intensive review on higher-risk conversations.
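The automated-check strategy above, comparing model outputs to source documents, can be sketched as a crude token-overlap score (the threshold, stopword list, and example sentences are invented for illustration; production systems typically use more sophisticated entailment or citation checks):

```python
import re

def grounding_score(response: str, source: str) -> float:
    """Fraction of the response's content words that also appear in the source.
    A low score suggests the response may contain unsupported claims."""
    tokenize = lambda text: set(re.findall(r"[a-z']+", text.lower()))
    stopwords = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are", "be"}
    response_words = tokenize(response) - stopwords
    if not response_words:
        return 1.0
    source_words = tokenize(source)
    return len(response_words & source_words) / len(response_words)

source = "A small claims case must be filed within two years of the dispute."
grounded = "Small claims must be filed within two years."
ungrounded = "You can appeal directly to the federal circuit court."

THRESHOLD = 0.6  # tune against a sample of human-reviewed conversations
for answer in (grounded, ungrounded):
    flag = "OK" if grounding_score(answer, source) >= THRESHOLD else "FLAG FOR REVIEW"
    print(f"{flag}: {answer}")
```

Responses that fall below the threshold are exactly the “higher-risk conversations” that deserve the more intensive human review described above.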

Creating a successful AI-powered chatbot for legal information requires careful consideration of the several steps cited above. By following these actions and staying up to date with the latest developments in AI technology, courts and organizations working to close the justice gap can create effective and responsible chatbots that provide valuable legal information to those who need it most.


You can register here for the upcoming NCSC webinar on March 19, which will explore the

Chatbots for justice: The impact of AI-driven tech tools for pro se litigants
/en-us/posts/ai-in-courts/chatbots-pro-se-litigants/ (Wed, 12 Feb 2025)

Access to justice is a fundamental pillar of a fair and equitable society, yet only one in four respondents to the survey agreed that courts are doing enough to help individuals navigate the court system without an attorney. Many of these pro se litigants still face substantial barriers to accessing legal assistance.

However, AI-powered chatbots now offer a promising solution by providing timely, tailored legal information to those in need — and two early examples are the chatbots Beagle+ and AVA.

Beagle+ makes Canadian law accessible in plain language

Beagle+ is a chatbot powered by generative AI (GenAI) and developed by the People’s Law School. The chatbot assists people with step-by-step guidance on everyday legal problems by allowing users to input their legal concerns in their own words. The chatbot responds with appropriate information, links to relevant resources, and potential next steps. , Digital & Content Lead at People’s Law School, led the efforts to create Beagle+ with technical assistance from , Founder of Tangowork. Jackson and McGrath worked together to launch Beagle+ in early 2024.

Central to the success of Beagle+ is its thoughtful design and user-centric approach. The team prioritized creating a system that is both empathetic and informative with a primary focus on providing users with clear, actionable guidance. The chatbot’s ability to integrate seamlessly with existing web resources without requiring dual data maintenance is another significant achievement because it reduces operational overhead while maintaining up-to-date legal content.

Although the tool is successful, Jackson and McGrath faced challenges throughout the development journey. One key barrier to overcome was ensuring the chatbot did not generate incorrect legal advice drawn from its training data. Another challenge was improving the system’s ability to handle nuanced legal questions. To address these challenges, the team used iterative testing and refinement to achieve a 99% accuracy rate in legal conversations.

Alaska state court develops its first chatbot

The Alaska Court System (ACS) partnered with LawDroid, a legal technology company that has pioneered access-to-justice chatbots since 2016, and used a grant from the National Center for State Courts to develop an AI-powered chatbot called the Alaska Virtual Assistant, or AVA. The tool, which is in the final testing phase before launching, will help self-represented litigants navigate probate estate cases.

AVA uses enhanced retrieval-augmented generation, which combines information retrieval with GenAI for improved accuracy and context in responses based on the court’s existing self-help web content, according to , CEO and Founder of LawDroid. Notably, AVA provides citations to verifiable sources and suggests follow-up questions to aid self-represented litigants in finding information they didn’t even know they needed. ACS and LawDroid have been testing both OpenAI’s GPT-4 and Anthropic’s Claude 3.5 Sonnet, comparing accuracy and tone. A decision has not yet been made on which model will ultimately be used, according to Jeannie Sato, Director of Access to Justice Services at ACS.
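A minimal sketch of retrieval with citations follows (the URLs, passages, and keyword-overlap scoring are all invented placeholders; AVA’s actual pipeline uses enhanced RAG over the court’s real self-help content and a vector database):

```python
import re

# Hypothetical knowledge base of self-help pages; a production system would
# index many documents as embedding vectors in a vector database.
KNOWLEDGE_BASE = [
    {"url": "https://example.org/probate/small-estates",
     "text": "Estates under the small estate limit may use an affidavit instead of formal probate."},
    {"url": "https://example.org/probate/filing-fees",
     "text": "The filing fee for a probate petition is set by court rule and may be waived."},
]

def retrieve_with_citation(question):
    """Return the best-matching passage and its source URL. Keyword overlap
    stands in for the vector-similarity search a real pipeline would use."""
    words = lambda s: set(re.findall(r"[a-z]+", s.lower()))
    q = words(question)
    best = max(KNOWLEDGE_BASE, key=lambda doc: len(q & words(doc["text"])))
    return best["text"], best["url"]

passage, citation = retrieve_with_citation("Do I need formal probate for a small estate?")
# Constrain the model to the retrieved passage and require a citation,
# so the user can verify the answer against the source.
prompt = ("Answer using ONLY the passage below, and cite the source URL.\n"
          f"Passage: {passage}\nSource: {citation}\n"
          "Question: Do I need formal probate for a small estate?")
print(prompt)
```

Carrying the source URL through to the final answer is what lets the chatbot cite verifiable sources rather than asking users to take its word for it.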

Managing the complexities of legal language and ensuring the chatbot’s responses are consistent and reliable were two main challenges experienced during the development of AVA. These were addressed through a combination of meticulous content review, the use of advanced AI models, and continuous collaboration with legal and technical experts. Also, substantial effort was spent to create a comprehensive knowledge base from existing web content to ensure external sources did not leak in and result in erroneous responses to prompts. The production of AVA also required rigorous testing and refinement to address inaccurate inferences and inconsistent responses.

What the courts can learn from AVA and Beagle+

The development of Beagle+ and AVA yielded several key lessons that courts and legal services organizations can benefit from, including:

Focus on user needs during development — When creating public-facing legal tools, the most important requirement during the development and implementation journey is considering the needs of the average self-represented user, who may have limited or no knowledge of the legal system. Beagle+ and AVA balance empathy with clear information to ensure the delivery of user-centric guidance that is both compassionate and practical, containing actionable insights and support. Additionally, both tools prioritize clear and concise language to achieve a reading level understood by the general public.

Collaborate with an interdisciplinary team — Both projects stressed the importance of having a multidisciplinary team that possesses legal and technical expertise along with a commitment to use plain language. This helps ensure that the chatbot is legally accurate, technically sound, and easy to understand.

Use iterative testing and human review — The development teams of both projects used rigorous and recurring testing and regular human review of responses — they also focused on using information solely from trusted sources (the knowledge base) to guarantee that users receive correct legal guidance. Maintaining a system for documenting and preserving all prompts and responses helps track accuracy and allows the team to monitor progress over time. ACS found that instructing the model to include a citation to the source of the information can help confirm accuracy and improve user confidence.
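The practice of documenting and preserving prompts and responses can be sketched as a simple JSON-lines audit log (field names and verdict labels are illustrative assumptions, not from either project):

```python
import datetime
import json

def log_exchange(path, prompt, response, reviewer_verdict=None):
    """Append one chatbot exchange to a JSON-lines audit log. The verdict
    field stays empty until a human reviewer samples the exchange."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "reviewer_verdict": reviewer_verdict,  # e.g. "correct" / "incorrect"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def accuracy_so_far(path):
    """Accuracy over the exchanges a human has already reviewed."""
    with open(path, encoding="utf-8") as f:
        verdicts = [json.loads(line)["reviewer_verdict"] for line in f]
    reviewed = [v for v in verdicts if v is not None]
    return sum(v == "correct" for v in reviewed) / len(reviewed) if reviewed else None
```

Because every exchange is preserved, the team can re-score the same log after each refinement and watch accuracy trend over time.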

Continuously evaluate and improve the chatbot — Both teams underscored the importance of ongoing refinements to the knowledge base, stemming from iterative testing and user feedback analysis to maintain accuracy and improve the chatbot’s performance over time.

Dedicate resources well — Cost is often a factor for smaller court systems as well as for nonprofits and legal aid organizations. However, the most important factor in resource planning is dedicating the appropriate amount of internal staff time to the AI project. Project managers should plan to dedicate at least 30% of one staff person’s time to build and review the knowledge base, evaluate and refine output, and fulfill other responsibilities, and allocate 30% of another person’s time for technical development.

Conclusion

As AI-powered legal chatbots continue to evolve, they offer a promising path to bridge the justice gap and empower self-represented litigants. By learning from successful implementations like Beagle+ and AVA, courts and legal services organizations can develop more effective tools to increase access to justice for all.


Join us for on February 19 to delve deeper into the technical aspects of building and monitoring these AI tools

2025 Predictions: How will the interplay of AI and fraud play out?
/en-us/posts/corporates/2025-predictions-interplay-fraud-ai/ (Wed, 05 Feb 2025)

As we move into 2025, it is evident that fraud persists and that those intent on exploiting the nation’s corporate, government, and financial systems will continue their efforts. With every technological advancement, new scams and fraudulent enterprises emerge.

In 2023, reported fraud losses surpassed $10 billion, representing a 14% increase from 2022. And in the alone, consumers reported losing $20 million to government impersonation scams involving cash payments. This figure pertains to just one type of scam over a single quarter. As the year advances, the total number of reported scams is expected to continue rising annually.

Types of fraud on the rise

Due to the sophistication of artificial intelligence (AI), generative AI (GenAI), retrieval-augmented generation (RAG), and other large language model (LLM) tools, the complexity in fraud schemes is growing. There are several areas to keep an eye on, including:

Deep fakes of documents

GenAI now has the capability to create high-quality deepfakes of identification documents. These deepfakes are so convincing that they include shadows and other markers of authenticity. In November 2024, the U.S. Treasury Department’s Financial Crimes Enforcement Network (FinCEN) issued an alert specifically encouraging the review of identification documents. The agency reported “an increase in suspicious activity reporting by financial institutions, describing the suspected use of deepfake media, particularly the use of fraudulent identity documents to circumvent identity verification and authentication methods.”

, noting that fraudsters can use GenAI to create convincing fake ID documents — such as driver’s licenses or professional credentials — that might also incorporate AI-generated images. Illicit actors then can use these documents to verify identity to fraudulently open a new account or to take over an existing account. ’s 2024 survey, , found that 97% of organizations are having difficulty verifying identity.

Deep fakes of videos

AI algorithms have the capability to alter or substitute faces in video footage. Such manipulated videos can be misused as business records, disseminated as false news stories, or, in certain instances, presented as evidence in judicial proceedings. These videos also can be used to further convince victims of a scam’s legitimacy. This scam is already being carried out by foreign-based groups, such as the so-called in Africa.

GenAI-enhanced scams

GenAI also can be utilized in ways that enhance the credibility of fraudulent schemes. It can improve the grammar or address other issues in emails and websites, making them more convincing. Further, LLMs are capable of creating sophisticated chatbots, which further enhance the plausibility of these scams.

Using AI to prevent fraud

There is some good news, too. This year, individuals and organizations can combat fraud more effectively through increased vigilance and awareness of AI.

Thinking logically and acting deliberately

It’s critical that all parties look for signs of AI use, paying close attention to subtle cues and discrepancies that can tip off the user to possible AI trickery. It’s also important to think rationally rather than emotionally: if you receive a call requesting immediate action, respond logically and verify the information provided. Never simply give out your personal information over the phone or by email.

Indeed, always act deliberately. Before transferring any funds, ensure that the transaction is traceable and conducted through reputable sources, such as banks. And report all suspected and actual fraud incidents promptly to ensure thorough tracking and documentation.

Making AI part of the team

In Thomson Reuters’ recent Future of Professionals report, 78% of surveyed professional service workers said they believe AI is a force for good in their profession, indicating that people are thinking positively about AI. Indeed, it is important to see AI as part of your organization’s fraud-fighting team, rather than as a tool of the enemy.

Organizations also can combat fraud using AI-based tools, and LLMs and RAG assistants also could be valuable assets in fraud detection. In fact, LLMs excel at natural language processing and can analyze text within transactions, emails, or messages to identify suspicious language, unusual phrasing, or patterns that may indicate fraudulent intent. For instance, LLMs can track anomalies in speech or writing that would link the same party attempting to open multiple accounts. LLMs also can summarize complex fraud cases by extracting key information from various documents, which could save investigators time by providing a concise overview of the documents.

On the other hand, RAG assistants, which can access and process vast knowledge bases, can cross-reference transactions with external data sources to flag suspicious transactions and identify discrepancies or anomalies that might signal fraud.

Taken together, these tools can analyze historical fraud data and help build predictive models to identify potential future fraud attempts based on similar patterns. Essentially, these AI tools can act as intelligent assistants, augmenting the capabilities of human investigators and enhancing the speed and accuracy of fraud detection.
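The anomaly-linking idea above — spotting the same party behind multiple account applications — can be sketched with a toy example (the account IDs and statements are invented; a real system would compare LLM-derived text embeddings rather than exact normalized strings):

```python
from collections import defaultdict

# Hypothetical account applications; in practice, the narrative text would
# come from transaction notes, emails, or onboarding forms.
applications = [
    {"account": "A-100", "statement": "I urgently need acces to my funds today!!"},
    {"account": "A-101", "statement": "I urgently need acces to my funds today!!"},
    {"account": "A-102", "statement": "Opening a savings account for my daughter."},
]

def normalize(text):
    """Collapse case and whitespace so trivially disguised copies still match."""
    return " ".join(text.lower().split())

def flag_duplicate_narratives(apps):
    """Group applications that reuse identical wording — a crude stand-in for
    the language-pattern analysis that links one party to multiple accounts."""
    by_text = defaultdict(list)
    for app in apps:
        by_text[normalize(app["statement"])].append(app["account"])
    return [accounts for accounts in by_text.values() if len(accounts) > 1]

print(flag_duplicate_narratives(applications))  # → [['A-100', 'A-101']]
```

Note that even the shared misspelling (“acces”) survives normalization and becomes a linking signal, which is exactly the kind of phrasing anomaly an LLM-based analyzer would generalize beyond exact matches.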

Enhancing ID verification

As fraud cases continue to increase and gain in complexity due to the advancements in GenAI and other advanced tech, it is imperative for companies, financial institutions, and government agencies to enhance their identity verification processes. In the forthcoming year, these verification tools will progressively address some of the security gaps that have emerged because of the rapid evolution of GenAI.

As we look ahead to 2025, it is clear that the landscape of fraud is evolving rapidly, driven by advancements in AI. The rise of sophisticated AI tools, such as GenAI and LLMs, has introduced new challenges in the fight against fraud. However, these same technologies also offer powerful solutions for detecting and preventing fraudulent activities.

By leveraging AI to analyze text for anomalies, flag suspicious transactions, and automate case summarization, organizations can enhance their fraud detection capabilities and stay one step ahead of fraudsters.


You can find more about here.

New AI-powered chatbot revolutionizes housing repairs and access to justice
/en-us/posts/ai-in-courts/housing-repairs-chatbot/ (Thu, 30 Jan 2025)

Housing justice remains a persistent gap in New York City, seen in the proliferation of substandard housing conditions and a chronic lack of repair and upkeep. While tenants in the city have many rights relating to the safety and quality of their housing, there is a stark difference between these rights and the reality that tenants endure. Indeed, tenants should expect to live in safe, well-maintained buildings that are free from vermin, leaks, and hazardous conditions, but health and safety are far from guaranteed in the city’s housing network.

And this can have disastrous consequences. In January 2022, for example, 17 people died in a preventable fire at the Twin Parks North West apartment complex in the Bronx. “Basic safety measures, like self-closing doors, . The small fire spread across a building as a result and suffocated people far away from the source of the fire,” says , a Senior Legal Innovation Strategist at Just-Tech and a housing attorney in NYC.

Indeed, this terrible event shows the dangers of housing repairs not being done across the city. Nori explains how tenants living in city-owned housing through and other serious conditions, even as the city implements new policies to fix public housing apartments. “The impact of housing conditions falls squarely on the most vulnerable people, like children and the elderly, who suffer serious health issues like asthma from preventable conditions like leaks and mold,” Nori adds.

To address part of the housing justice gap, Nori was instrumental in launching a new AI assistant, , a groundbreaking tool developed through a collaboration between (HCA), a nonprofit organization specializing in tenancy law, and legal tech company . Roxanne addresses a critical gap in the NYC housing landscape — the lack of accessible, actionable information for tenants dealing with substandard housing conditions.

, Executive Director of HCA, says that with the new Roxanne app, NYC renters can now get instant answers to all their rental repair questions. “Renting law and regulations in New York are notoriously complicated and hard to digest, so with Roxanne, we’ve made rental repairs guidance both easy to access and understand,” Laurie says.

Roxanne’s journey from development to implementation

The idea for Roxanne was born from a disparity Nori observed during eviction-prevention work: tenants had far fewer resources for housing condition issues. Nori then initiated a collaboration between HCA and Josef to leverage AI in bridging this gap.

The team developed a prototype that combined the capability of Josef’s Q platform with HCA’s extensive knowledge resources about repair issues. The benefit in using Josef’s platform is that the HCA staff did not need to learn anything new to create and work with the tool. It works like a simple chat interface.

Initially, Roxanne was designed to assist HCA’s frontline staff in answering queries by providing valuable insights into common housing issues, offering effective answers, and directing users to areas for advocacy in housing law. Over time, however, Roxanne evolved to the point at which tenants themselves could access it directly. The tech team rigorously tested Roxanne for more than six months to ensure optimal accuracy and trust.

Challenges and lessons learned

The development of Roxanne was not without its hurdles, however. Three main challenges emerged during the development and implementation stages, according to Nori, including:

      • Trust — There was initial skepticism among HCA staff about the use of AI. Overcoming this required patience and a demonstration of Roxanne the Repair Bot’s effectiveness.
      • Accuracy — While Roxanne initially outperformed human workers in accuracy, the team aimed for an even higher standard of more than 95% accuracy to ensure widespread acceptance and adoption.
      • Safety & compliance — A third challenge is to make sure that the tool protects the privacy of the users and complies with regulations and laws on non-lawyer legal help. These rules around the unauthorized practice of law are being tested continuously as AI expands its capabilities to provide actionable legal help directly to the public.

There is no doubt that the development of Roxanne was an experiment and a process of learning. For others looking to use AI and technology to address access-to-justice issues, Nori explains that the primary lesson from Roxanne is the importance of patience and trusting the process. While Nori was initially eager to launch quickly, the team’s decision to take more time ultimately resulted in a more robust and public-ready tool.

Roxanne the Repair Bot represents a significant step forward in using AI to promote housing justice. By providing tenants with easy access to crucial information about their rights and options, this innovative tool has the potential to improve living conditions and health outcomes for many New Yorkers. As we look to the future, Roxanne serves as an inspiring example of how technology, lawyers, and non-profit advocates can work hand in hand to create meaningful improvements in the lives of many.


You can find out more about how technology is helping further the cause of justice here

From the printing press to the singularity: Contemplating the role of the judiciary in the age of AI

Artificial Intelligence (AI) moves forward in some incredible way every day. I’ve done my best to keep up, particularly as it concerns the legal profession, but I’m certain that as I write this, some new advancement will make old news of even the most recent one.


You can hear more insights from Judge Maritza Dominguez Braswell here


While much of the current discussion among lawyers and judges has focused on the ethical implications and practical applications of large language models, it’s important to recognize that AI and other technological advancements extend far beyond that. I’m convinced we’re in the midst of a revolution that will fundamentally reshape our world and our roles as judicial officers.

Ray Kurzweil, author and AI visionary at Google, recently published a second book about what he calls the singularity. In it, he describes the exponential growth of AI capabilities and other technological advancements, predicting that in the not-so-distant future, machine intelligence will surpass that of even the smartest humans. Kurzweil also believes that eventually human and machine intelligence will merge, and AI will be a seamless extension of our brains — hence the term, singularity.

In physics, the term singularity is used to describe the point at which space-time collapses on itself, and the laws of physics break down. Kurzweil borrows this term to describe the point in time when humans merge with AI via brain-to-computer interfaces, predicting this will occur around 2045.

AI in common use

Consider this: your AI-powered personal assistant — like Siri or Alexa — currently operates like an extension of you and your brain. You don’t have to write a to-do list; just tell Siri to remind you. You don’t need to do mental math to scale up a recipe: give it to Copilot, tell it how many people the recipe serves and how many you expect, and it will quickly adjust every measurement for you. Eventually, these AI assistants will become so attuned to our activities and preferences that they’ll anticipate our needs and requests.


I’m convinced we’re in the midst of a revolution that will fundamentally reshape our world and our roles as judicial officers.


This vision of the future may seem distant, but as Kurzweil explains, AI systems are already being used for incredible breakthroughs. For example, one biotech company uses machine learning to detect cancer, and a machine learning-based algorithm used as an early warning system in medicine is credited with lowering sepsis deaths by approximately 20%. Another system, which uses x-ray images to train an AI neural network for diagnoses, has outperformed human doctors.

Future AI applications are almost limitless. Imagine a smart watch that so precisely interprets biometric data that it detects an oncoming heart attack and saves your life. Or one that communicates seamlessly with the cloud so your information can be downloaded by your doctor and your treatment finely tailored to your body’s specific needs.

According to Kurzweil, these and other biomedical advancements will mean some of us who are alive today may live healthy lives past our current maximum life span of 120 years. In fact, Kurzweil predicts that as a result of dramatic discoveries largely driven by advanced AI, people will achieve “longevity escape velocity” by around 2030.

Taking in all of this, one thing becomes certain: autonomous systems and breakthrough technologies will completely change how we work, live, and play.

The role of the judiciary

This is precisely why the judiciary must be well-attuned. As AI becomes more integrated into our daily lives, new legal issues will emerge. Data privacy laws continue to evolve to address the increased use of AI to collect, analyze, and store data. Intellectual property and copyright disputes will look different as it becomes more difficult to determine ownership. Cybercrime will be increasingly complex. And the Federal Rules of Evidence are already being tested by so-called deepfakes. We once thought a video recording was unassailable — now, we must think about authenticity in entirely new ways.

In a world in which AI becomes so sophisticated that humans begin to delegate key decision-making to our robo-friends, the judiciary may even have to re-think some of the bedrocks of our legal system — like the concepts of duty and breach, which may have to stretch or altogether transform.

The need to adapt judicially in the wake of a revolution is nothing new. The invention of the printing press in the 15th century was revolutionary, making the mass production of books and wide dissemination of knowledge possible. With it came an entirely new set of disputes. For example, in England, a series of press licensing measures were aimed at controlling and censoring what could be printed, resulting in meaningful and unsurprising pushback. On one hand, there was real fear that unchecked printing could spread seditious or heretical ideas; and on the other, people like John Milton began laying the groundwork for our hallowed freedoms of speech and the press. Courts of the time had to grapple with opposing views and navigate the delicate balance between state control and individual rights. Their decisions shaped British law and meaningfully influenced the U.S. Constitution.


We must understand where AI is today, where it’s going tomorrow, and the many ways it can help us solve humanity’s greatest challenges.


The Industrial Revolution was another turning point. It transformed economies and brought about significant societal change that required judicial adaptation. For example, with the rise of factories and demand for labor, courts were frequently called upon to protect vulnerable populations. Labor laws evolved to meet the moment. And eventually — after thousands of factory fatalities and injuries highlighted an urgent need — a new framework was born. The Occupational Safety and Health Act of 1970 established the Occupational Safety and Health Review Commission, an independent agency whose primary function would be to resolve disputes arising from OSHA citations. This was not the first specialized and streamlined adjudicative process to be implemented, but it illustrates how a revolution calls upon the various branches of government to adapt.

We must meet the moment

The AI revolution will similarly call upon us to adjust and advance. This revolution will bring about a societal transformation like no other — indeed, some compare it to the advent of fire, tools, or agriculture. I believe it will be even more transformative, in part because of the breakneck speed of new developments.

In an open letter calling for a pause in advanced AI development, labs were accused of engaging in an “out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.” Yet, most of the developed world has now joined this out-of-control race, and very advanced AI is likely to touch every aspect of our lives sooner than we think.

Unfettered AI can have disastrous consequences; we know that already. Lawsuits have been filed over extremely dangerous AI behavior, and the fear of an AI-robot takeover is feeling less like science fiction these days. After all, if AI is poised to surpass human intelligence, what’s to say it can’t take control and become an existential threat to our species?

But we cannot allow these dangers to paralyze us.

To safeguard against AI’s peril, we must be willing to grapple with its promise. We must understand where AI is today, where it’s going tomorrow, and the many ways it can help us solve humanity’s greatest challenges. Only then — only when we truly engage and understand AI’s possibilities and promise — can we reasonably and appropriately safeguard against the dangers.

Granted, some of this is necessarily the work of lawmakers and regulators, but the judiciary is uniquely positioned to anticipate disputes, understand the legal implications of AI across different industries, and consider how unique adjudicatory frameworks might avoid some of the chaos that follows technological disruption.

History is being written before our very eyes. We are at the beginning of an epochal shift, and our work during this turning point will be age-defining. We must meet the moment to ensure justice for all.


You can find more information in the Thomson Reuters Institute’s new AI in Courts Resource Center here
