Jake Heller Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/innovation-topics/jake-heller/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology, and human expertise you need to find trusted answers.

Legal AI Benchmarking: CoCounsel
/en-us/posts/innovation/legal-ai-benchmarking-cocounsel/ | Wed, 23 Oct 2024 14:04:16 +0000

We’re excited to share a detailed look into our testing program for CoCounsel, including the specific methodologies we use to evaluate its skills. We aim not only to showcase the steps we take to ensure CoCounsel’s reliability, but also to contribute to broader benchmarking efforts in the legal AI industry. Though it’s challenging to establish universal benchmarks in such a diverse field, we’re engaging with industry stakeholders to work toward the shared goal of elevating the reliability and transparency of AI tools for all legal professionals.

Why evaluating legal skills is complicated

Traditional legal benchmarks usually rely on multiple-choice, true/false, or short-answer formats for easy evaluation. But these methods aren’t enough to assess the complex, open-ended tasks lawyers encounter daily and that large language model (LLM)-powered solutions like CoCounsel are built to perform.

CoCounsel’s skills produce nuanced outputs that must meet multiple criteria, including factual accuracy, adherence to source documents, and logical consistency – outputs that are difficult to evaluate with true/false tests. On top of that, assessing the “correctness” of legal outputs can be subjective: some users prefer detailed summaries, others concise ones. Neither is “wrong”; it comes down to preference, which makes it difficult to consistently automate evaluations.

To make it even more complicated, each CoCounsel skill often involves multiple components, with the LLM handling only the final stage of answer generation. For example, the Search a Database skill first uses various non-LLM-based search systems to retrieve relevant documents before the LLM synthesizes an answer. If the initial retrieval is substandard, the LLM’s performance will be compromised. So our evaluation must consider both the LLM-based and non-LLM-based components to make sure our assessment of the whole is accurate.
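To make that concrete, here is a minimal sketch (with entirely illustrative names and metrics – not our production code) of how a two-stage skill can be scored stage by stage, so a retrieval failure isn’t misattributed to the LLM:

```python
# Illustrative only: score the non-LLM retrieval stage and the LLM
# generation stage of a skill separately.

def recall_at_k(retrieved_ids, relevant_ids, k=5):
    """Fraction of known-relevant documents found in the top-k results."""
    if not relevant_ids:
        return 1.0
    top_k = set(retrieved_ids[:k])
    return len(top_k & set(relevant_ids)) / len(relevant_ids)

def evaluate_pipeline(test_case, retriever, generator):
    """Score retrieval and generation independently for one test case."""
    retrieved = retriever(test_case["query"])          # non-LLM stage
    retrieval_score = recall_at_k(retrieved, test_case["relevant_ids"])
    answer = generator(test_case["query"], retrieved)  # LLM stage
    return {"retrieval": retrieval_score, "answer": answer}
```

Tracking a retrieval metric alongside the final answer makes it clear which component needs fixing when a test fails.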

How we benchmark

Our benchmarking process begins long before putting CoCounsel through its paces. Whenever a significant new LLM is released, we test it across a wide suite of public and private legal tests, such as the dataset created by our Stanford collaborators, to assess its aptitude for legal review and analysis. We then integrate the LLMs that perform well in these initial tests with the CoCounsel platform, in a staging environment, to evaluate how they perform under real-world conditions.

Then we use an automated platform to run a battery of test cases created by our Trust Team (more on this below), to evaluate the output that comes from this experimental integration. If the results are promising, we conduct additional manual reviews using a skilled team of attorneys. When we see an improvement in performance compared to previous benchmarks, then we start talking as a team about how it might improve the CoCounsel experience for our users.

How we test

Our Trust Team has been around as long as CoCounsel has. This group of experienced attorneys from diverse backgrounds – in-house counsel, large and small law firms, government, public policy – is dedicated to continuous, rigorous testing of CoCounsel’s performance.

We continue to follow a process that’s been integral to all our performance evaluation since CoCounsel’s inception: Our Trust Team creates tests representative of the real work attorneys use CoCounsel for and runs these tests against CoCounsel skills. When creating a test, they first consider what the skill is for and how it might be used, based on their own insights, customer feedback, and secondary sources. Once the test is created, the attorney tester manually completes the test task, just as a lawyer would, to create an answer key – what we refer to as an “ideal response.” These tests and their corresponding ideal responses then undergo peer review. Being this meticulous is crucial, because the quality of our ideal responses determines the benchmark for a passing score.

Once the ideal response has been created, a member of the Trust Team runs the test, using the applicable CoCounsel skill to complete the task just as a user would. An attorney tester reviews the output, referred to as our “model response,” comparing CoCounsel’s response to the ideal response point by point, identifying differences and assessing whether those differences deviate from the ideal response in a way, or to a degree, that would make the skill’s output incomplete, incorrect, or misleading. It is important to note that a test can be failed for many reasons, not just because an answer is outright wrong. For instance, some skills (e.g., Summarize) are inherently more subjective than others and do not lend themselves to a right or wrong answer. Accordingly, even though attorneys’ opinions of the “correct” answer can reasonably vary, we try to maintain the highest grading standards and “fail” the test if it is missing a single detail the attorney tester considers to be Critical.
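The grading logic described above can be sketched as follows – a hypothetical simplification, in which any single discrepancy an attorney tester flags as Critical fails the whole test, regardless of how much of the ideal response was matched:

```python
# Hypothetical sketch of severity-based grading; names are illustrative.
from dataclasses import dataclass

@dataclass
class Discrepancy:
    description: str
    severity: str  # "critical", "major", or "minor"

def grade(discrepancies):
    """Pass only when no discrepancy was flagged as critical."""
    if any(d.severity == "critical" for d in discrepancies):
        return "fail"
    return "pass"
```

In other words, the pass/fail threshold is set by the ideal response, not by a partial-credit score.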

Based on this manual evaluation, we create a set of evaluation instructions tailored to each skill’s ideal output, capturing that skill’s particular needs so an LLM can match our human reviewers’ process. To illustrate, let’s look at a real example using CoCounsel’s Extract Contract Data skill. After running the test in our automated evaluation platform, here’s what an attorney reviewer would see when reconciling the model response with the LLM’s evaluation of it:

  • Question: If the customer is acquired through a merger, can the agreement be assigned to the acquiring entity without obtaining the provider’s prior written consent?
    • Trust Team ideal response:
      • Short answer: Yes, with 30 days’ prior written notice.
      • Detailed answer: Yes, if the customer is acquired through a merger, the agreement can be assigned to the acquiring entity without obtaining the provider’s prior written consent; the customer is only required to provide the provider with 30 days’ prior written notice. The clause specifies that either party may assign its rights or delegate its obligations without such consent, provided that 30 days’ prior written notice is given to the other party, to an entity that acquires all or substantially all of the business or assets of such party, including by merger.
    • Model response:
      • Short answer: Yes, with notice.
      • Detailed answer: The agreement can be assigned to the acquiring entity without obtaining the provider’s prior written consent if the customer is acquired through a merger. The clause provides an exception to the general rule requiring consent for assignment. Specifically, it states that either party may assign its rights or delegate its obligations without such consent to an entity that acquires all or substantially all of the business or assets of the party, including through a merger, provided that 30 days’ prior written notice is given to the other party.
    • Automated evaluation analysis: The model response captures the correct information that the agreement can be assigned to the acquiring entity without obtaining the provider’s prior written consent but requires 30 days’ prior written notice. However, it incorrectly states the short answer as “Yes, with notice” instead of “Yes, with 30 days’ prior written notice,” as per the ideal answer. This is a minor difference and does not change the meaning significantly.

In this instance, the model response included a minor discrepancy from the attorney-authored ideal response. But the LLM’s evaluation of the response accurately determined that the answer remained sufficient, because it captured the complete notice requirement elsewhere in the response.
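As a rough illustration of this automated step (the prompt wording and verdict parsing below are assumptions, not our actual evaluation instructions), an LLM judge can be handed the skill-specific instructions, the attorney-authored ideal response, and the model response, and asked for a pass/fail verdict:

```python
# Illustrative LLM-as-judge harness; the judge model is abstracted as a
# callable so the sketch stays self-contained.

JUDGE_TEMPLATE = """You are grading a legal AI skill's output.
Skill-specific instructions: {instructions}
Ideal response (attorney-authored): {ideal}
Model response: {model}
Compare point by point. Reply PASS if no critical detail from the ideal
response is missing or contradicted; otherwise reply FAIL. Then explain."""

def auto_evaluate(ideal, model, instructions, judge):
    """judge: callable mapping a prompt string to the judge LLM's reply."""
    prompt = JUDGE_TEMPLATE.format(
        instructions=instructions, ideal=ideal, model=model
    )
    verdict = judge(prompt)
    passed = verdict.strip().upper().startswith("PASS")
    return passed, verdict
```

The verdict text is kept alongside the boolean so a human reviewer can reconcile the judge’s reasoning, as in the example above.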

Our ideal-response approach provides two key advantages over assertion-based evaluations. It excels at identifying deviations from attorney expectations, including hallucinations. And it pinpoints extraneous or inconsistent information that, while not technically a hallucination, introduces logical inconsistencies that can render even an otherwise complete response incorrect – and result in a failing score.

We rely on our Trust Team to create well-defined ideal responses and auto-evaluation instructions and to determine if a test case passes or fails. A skill’s output definitively fails if it falls short of this ideal because of material omissions, factual incorrectness, or hallucinations. However, we recognize that many legal issues aren’t black-and-white, and the “correct” answer could be open to reasonable disagreement. To address this, we peer review ideal responses in cases when the answer might require a second opinion. And we might eliminate tests when we find insufficient agreement among the attorney testers. This is how we both ensure that our passing criteria remain rigorous and account for the nuanced nature of legal analysis. 

Maintenance and improvement

Creating a skill test set is only the beginning. Once we begin using it, the Trust Team continually monitors and refines it by manually reviewing failure cases from the automated tests and spot-checking passing samples to make sure the automated evaluation is in line with human judgments. We also regularly add tests to cover more use cases and capture user-reported issues, which could lead to further iterations of the tests submitted for automated evaluation and their success criteria.  
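A simplified sketch of that monitoring loop (the names and sampling rate are illustrative, not our actual process parameters): every automated failure is queued for attorney review, along with a random sample of passes to confirm the automated evaluation still agrees with human judgment:

```python
# Illustrative review-queue builder for nightly automated test results.
import random

def build_review_queue(results, pass_sample_rate=0.05, seed=0):
    """Queue all failures, plus a random spot-check sample of passes."""
    rng = random.Random(seed)  # seeded for reproducible sampling
    failures = [r for r in results if not r["passed"]]
    passes = [r for r in results if r["passed"]]
    sampled = [r for r in passes if rng.random() < pass_sample_rate]
    return failures + sampled
```

Spot-checking passes, not just failures, is what catches an automated judge that has drifted into grading too leniently.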

By following this process, we execute more than 1,500 tests every night on our automated platform, across all CoCounsel skills and under attorney oversight; combined with manual testing, this means we’ve run more than 1,000,000 tests since CoCounsel’s launch. And it empowers us to quickly identify areas for improvement, which is vital to ensuring CoCounsel remains the most trustworthy AI legal assistant available.

Conclusion

In a previous post, we explored what it means for an AI tool to be “professional-grade” and why that standard is crucial for professionals in high-stakes fields like law. This post takes that concept further by diving into how we benchmark CoCounsel to ensure it meets those rigorous standards. By understanding the extensive testing that goes into evaluating its performance, you can see how CoCounsel consistently delivers the reliability and accuracy expected of a true professional-grade GenAI solution.

To promote the transparency my team and I believe is necessary in the legal AI field, we’ve decided to release, for the first time, some of our performance statistics, along with a sample of the tests used to arrive at those figures, applying the criteria referenced in this article. Check out our results.

This is a guest post from Jake Heller, head of CoCounsel, Thomson Reuters.

Unlocking the full potential of professional-grade GenAI for your work
/en-us/posts/innovation/unlocking-the-full-potential-of-professional-grade-genai-for-your-work/ | Tue, 15 Oct 2024 12:06:01 +0000

Today, nearly two years since ChatGPT debuted, GenAI continues to dominate our cultural and professional conversations. Even as its adoption for work steadily increases, the biggest concern for most professionals – 70% of them – is the accuracy of its output.

However, not using GenAI for work at all is no longer an option. 77% of professionals believe AI will have a high or transformational impact on their work over the next five years, and 78% call AI a “force for good” in their profession. In fact, 50% of law firms named AI among their top five strategic priorities for the next 18 months. If there were still any doubt, there definitely isn’t anymore: GenAI is here to stay.

So how can conscientious – and forward-thinking – professionals make the most of this generational technology while guarding against its drawbacks? How do you know if the GenAI solution you’re considering will live up to your professional obligation to work ethically and ensure your clients’ data is securely handled? Is any GenAI product trustworthy? Is it even possible for tools built on large language models (LLMs) such as GPT-4o from OpenAI and Google’s Gemini, all of which are known to hallucinate, to be safe enough to use professionally?

Yes, it is possible. “Built on” is the key. When we launched our GenAI assistant, CoCounsel, our product and engineering teams delivered on the challenge of creating a product that could take advantage of LLMs’ tremendous raw power while eliminating as many of their serious limitations as possible – like hallucinations – that curb the professional utility of models when used on their own. What makes the current generation of LLMs truly extraordinary, then, is not what they alone can do, but what they enable.

Using a model directly calls for great caution and exposes users to risk if they rely on the output professionally. CoCounsel, on the other hand, harnesses that power and has engineered robust, well-tested accuracy, privacy, and security controls around it. In short: LLMs are the world’s most incredible engines. CoCounsel uses that engine to take you incredible places – places you couldn’t reach without these LLMs – safely.

Why can professionals trust CoCounsel?

We’ve applied our technical and domain expertise to leading LLMs in creating and continuing to optimize CoCounsel, a first-of-its-kind product that both does more than LLMs can and corrects the problems that make them unsuitable on their own.

In short, CoCounsel is a professional-grade GenAI assistant. And no professional should use a GenAI solution that isn’t.

What does it take for a GenAI assistant to be professional-grade? At a bare minimum – without which it should not be trusted for your work – it must be:

  1. Built for domain-specific use and grounded in reliable sources of data relevant to that use. A professional-grade solution, such as CoCounsel, harnesses the power of LLMs but limits the source of knowledge to known, reliable data sources – such as profession-specific domains or professionals’ or their clients’ databases – which rigorously limits the possibility of inaccuracies.
  2. Built to make verifying its output easy. CoCounsel was not designed to replace the role of the professional, but rather to help them accomplish more and higher-quality work in less time. So just as lawyers review all work delegated to a junior associate or paralegal, they must validate CoCounsel’s output. We’ve made it easy to do so: all answers link to their origin in the source documents, so it’s simple to “trust, but verify.”
  3. Developed by technical teams with deep GenAI expertise. Though GenAI has only been broadly talked about since 2022, it’s been around since 1961. Our AI engineers and research teams have worked with LLMs since their invention, were among the first to build with GPT-4, and have invented patented approaches to applying LLMs to professional use cases.
  4. Continually and consistently tested and authenticated by a dedicated team of domain experts.  AI engineers and Trust Team attorneys together filter, rank, and score CoCounsel’s responses to a daily battery of thousands of tests developed to simulate real-life legal use cases and ensure the assistant’s answers are consistent and accurate. To date we’ve run more than 1,000,000 such tests against CoCounsel. 
  5. Secure and private, because it interacts with third-party LLMs the right way. Our GenAI solutions access third-party LLMs through dedicated, private servers and an “eyes off” API. No LLM partner employees can see customer queries or documents, and our LLM access is contractually “zero retention”: our LLM partners cannot store customer data longer than it takes to process the request. Our product data is never used to train any third-party models. And all product data is encrypted in transit and at rest, and subject to rigorous security policies and practices.
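On the second point – making verification easy – linking every answer back to its origin can be as simple as the following hypothetical structure (entirely illustrative, not CoCounsel’s actual data model), in which each answer carries a pointer to the exact supporting span in the source document:

```python
# Hypothetical sketch: an answer that carries its own citation, so
# "trust, but verify" is a direct lookup rather than a manual search.
from dataclasses import dataclass

@dataclass
class CitedAnswer:
    text: str        # the generated answer
    source_doc: str  # which source document supports it
    start: int       # character offset of the supporting passage
    end: int

def verify(answer: CitedAnswer, documents: dict) -> str:
    """Return the exact source passage the answer claims to rest on."""
    return documents[answer.source_doc][answer.start:answer.end]
```

With a structure like this, the reviewing lawyer jumps straight to the clause that supports the answer instead of rereading the whole document.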

Why “makes minimal mistakes” isn’t enough

As important as the above five characteristics are, they’ve become price-of-entry criteria for professional-grade GenAI. And given how rapidly the technology is evolving, what you expect from a professional-grade solution should evolve as well. Remember: GenAI has the power to do much more than help you complete jobs. It can transform what it means to be a professional, freeing you for the more strategic, creative, valuable work a machine just cannot do – which can transform not only how you do business, but also how much business you do.

To take full advantage of this potential, you need a GenAI assistant that fulfills two more key requirements:

1. Professional-grade means intelligently and seamlessly handling workflows, not just completing a series of tasks. A true GenAI assistant goes beyond responding to your requests, instead guiding you through the steps required to finish long, complex, even open-ended projects. Only this kind of product, such as CoCounsel, can truly unlock the full potential of GenAI.

Through an expanding set of capabilities and deep connections with both your documents and the tools you use every day – e.g., Microsoft 365 – a professional-grade GenAI assistant can carry an entire deliverable from task to task, program to program, in a continuous stream, prompting you forward through the next steps while simultaneously handling multiple pieces of the work itself.

CoCounsel is built for workflows. It’s accessible across multiple products, bringing together both fundamental capabilities such as summarization and document review with specialized functions such as legal research, for a smooth transition from one type of work to the next. And it’s integrated with both Microsoft 365 and document management systems, available for you literally wherever you’re working, from client communication to research to document drafting and beyond.

2. Professional-grade means providing partnership, not just product. Without in-depth, sustained support, you’re unlikely to get the most possible value from your investment. A professional-grade vendor offers a success and support team that will be there for you long after you’ve signed a subscription agreement. They work with you to deeply understand your most prevalent use cases and how their solution will help you tackle them, and of course are available when something’s not working the way it should. They keep you informed about product changes and improvements and continue creating content you can use to increase your knowledge, such as videos, webinars, and written materials. A truly professional-grade team has a vision for their GenAI assistant, ensuring it will increase in power and capability as the technology advances. And most important, they are as invested in you as you are in them.

Upon adopting CoCounsel, legal professional users are trained by our Customer Success team, made up of licensed attorneys, many still practicing, with dozens of years of combined experience in both litigation and transactional law. Many of these trainers are prompt engineering specialists who, in addition to being licensed attorneys, have a background in computer science. After onboarding, we ensure everyone using CoCounsel has the opportunity to attend live trainings and watch recorded webinars, get individual help through live chat and email, and access dozens of video tutorials – a resource pool that will only keep growing.

This is a guest post from Jake Heller, head of CoCounsel, Thomson Reuters, and Erin Nelson, CoCounsel content strategist, Thomson Reuters.

AI Policy Consortium Kicks Off Educational Series With “Fundamentals of AI in the U.S. Court System”
/en-us/posts/innovation/ai-policy-consortium-kicks-off-educational-series-with-fundamentals-of-ai-in-the-u-s-court-system/ | Mon, 26 Aug 2024 18:01:24 +0000

The Thomson Reuters Institute-National Center for State Courts (NCSC) AI Policy Consortium for Law and Courts kicks off its educational offerings with “Fundamentals of AI in the U.S. Court System” on Aug. 28.

Jake Heller, head of Product for CoCounsel, Thomson Reuters, and Jake Porway, co-founder of DataKind and an NCSC AI consultant, will host the webinar as part of a new AI and the Courts series of monthly discussions. The first session will offer participants a foundational understanding of AI and its potential to enhance the efficiency and effectiveness of the judicial process, highlighting current applications of AI in the court system and the ethical implications of its use.

Launched in June, the consortium is a joint initiative designed to educate the judiciary about the opportunities and challenges of evolving AI and generative AI solutions, enabling judges and legal and court professionals to make informed decisions about adoption and use.

“I’m thrilled to help bring together the legal industry’s top AI experts from the courts, law firms, academia, and technology organizations,” said Heller. “The pace of innovation in the legal industry is fast and furious, and Thomson Reuters has a long tradition of customer collaboration and leadership in applying cutting-edge technologies to legal research and legal workflows. The AI Policy Consortium for Law and Courts will be a tremendous resource for the judiciary and legal professionals seeking to keep up with the quickly evolving AI and generative AI tools available to augment the practice of law.”

“Having more than 1,000 participants registered for the webinar speaks to the judiciary and legal profession’s eagerness to better understand AI solutions and their impact on how legal professionals work,” Porway said. “Our consortium is filling a gap in the industry as courts and legal professionals navigate how to best evaluate, adopt, and sanction the uses of generative AI. I’m excited to introduce the AI and the Courts series with an exploration of the current and future ways AI is used in the court system as well as its challenges and implications.”

The consortium’s four workstreams – AI governance and ethics, workforce readiness for AI adoption, rules and practices pertaining to AI, and AI’s impact on access to justice – examine the opportunities and risks of AI and generative AI. The second webinar, Ethics of Generative AI: A Guide for Judges and Legal Professionals, will be held on Sept. 18. Future topics will include AI’s impact on the judicial and legal workforce, how AI can enhance court efficiency, and more.

Register for the webinar. To learn more about the consortium, read the press release.

ILTACON Highlights: CIOs’ Perspectives on Document Management Systems, Generative AI, and More
/en-us/posts/innovation/iltacon-highlights-cios-perspectives-on-document-management-systems-generative-ai-and-more/ | Fri, 23 Aug 2024 13:56:54 +0000

Steve Assie, general manager, Global Large Law Firms, Thomson Reuters, attended ILTACON and moderated a panel discussion for CIOs from G100 and G200 firms. Below, he shares his takeaways from the panel and the conference.

What truly stood out to me at ILTACON was the energy and enthusiasm of the event. I was struck by how the collaborative atmosphere among attendees – ranging from seasoned legal professionals to tech enthusiasts – created an enriching environment for knowledge exchange and networking.

The conference center was brimming with excited representatives from law firms and vendors showcasing innovative solutions. Everyone was grappling with how – and when – generative AI could transform the practice of law.

The insightful sessions on cybersecurity, data privacy, and the future of legal tech trends provided valuable takeaways that are likely to shape the industry in the coming years. Overall, the conference underscored the rapid advancements and the pivotal role of technology in transforming the legal landscape.

CIO insights

I moderated a panel discussion among G100 and G200 CIOs. The CIOs discussed the evolving role of the document management system, the opportunity for generative AI to drive efficiency improvements now and in the future, the evolving expectations of corporate law departments, and data security practices.

Audience attendees seemed especially interested in the tenor of the conversation around transformative AI solutions. One of the CIOs said that his firm was using CoCounsel as its AI assistant and that lawyers at the firm were eager adopters. He mentioned wanting to buy more seats. That was a fun moment for me!

CoCounsel 2.0

We hosted a number of extremely well-attended events, including customer dinners, an appreciation event, and other celebrations. The opportunity to share perspectives and learn from law firms was invaluable.

I was thrilled to see that the announcement of CoCounsel 2.0 seemed to generate the most excitement among product news. Jake Heller, head of Product for CoCounsel, Thomson Reuters, showcased a side-by-side view of the speed of the application, highlighting how CoCounsel 2.0 moves more than three times faster than the current version, CoCounsel Core. After customers saw that, they were clamoring for access.

On a personal level, I enjoyed seeing so many former colleagues and peers. The legal community is a tight-knit group. It was amazing to walk the halls of the conference centers and catch up with so many great people.

Check out other ILTACON recaps, including the legal industry’s reaction to the launch of CoCounsel 2.0 and highlights from the “How Today’s Lawyers Are Enhancing Their Practice with AI” session.

CoCounsel 2.0 Launch: Legal Industry Reactions
/en-us/posts/innovation/cocounsel-2-0-launch-legal-industry-reactions/ | Mon, 19 Aug 2024 16:54:50 +0000

Notable moments included the launch of CoCounsel 2.0 and a celebration of the one-year anniversary of Casetext joining Thomson Reuters. Below are takeaways from legal industry journalists and influencers on the milestones.

“It seems like just last year we were talking about CoCounsel 1.0, the generative AI product launched by Casetext and then swiftly acquired by Thomson Reuters,” Joe Patrice wrote. “That’s because it was just last year. Since then, Thomson Reuters has worked to marry Casetext’s tool with TR’s treasure trove of data.”

Patrice noted CoCounsel 2.0 draws on “the experience gained over the last year and a mélange of multiple LLMs under the hood” and includes Claims Explorer – an AI skill in Westlaw Precision – plus CoCounsel Drafting.

Isha Marathe highlighted CoCounsel 2.0 as “a much faster, more polished chatbot” that offers more personalization.

“CoCounsel 2.0, which has a ChatGPT-like interface, now also connects with the user’s DMS to offer a more integrated, personal experience,” Marathe said. “Additionally, usernames, passwords and other log-in keys will also be standardized across various applications, explained Jake Heller, CEO and co-founder of Casetext, part of Thomson Reuters.”

Marathe added: “Essentially, this brings CoCounsel ‘on the same infrastructure’ as the user’s Westlaw and Thomson Reuters applications, Heller said, creating a user-centric experience as opposed to a fragmented one.”

Richard Tromans highlighted a point David Wong, chief product officer, Thomson Reuters, made during his ILTACON presentation on CoCounsel 2.0 about the strength of Thomson Reuters data.

“David Wong, CPO, said that ‘this is the most meaningful work I have done at Thomson Reuters,’” Tromans noted. “He then stressed the fact that they have all the three main food groups when it comes to AI tool development: ‘We have the data, the expertise, and the tech. Few have all three in such quantity and depth.’ And that’s a key point. If you look at some of the challengers out there, there are few that have all of that lovely, rich data.”

Tromans added: “… Wong is right, having genAI skills and having a smart team is great, but a ton of authoritative legal data is the cherry on the cake if you want to offer a really broad genAI platform.”

Watch the Innovation Blog for more on the one-year anniversary of Casetext joining Thomson Reuters, and read more Innovation Blog posts for product news, leader insights, and customer perspectives on how Thomson Reuters is paving the way for the future of professionals.

ILTACON Sneak Peek: CoCounsel 2.0 Combines the Power of Google Cloud AI, OpenAI, and Thomson Reuters
/en-us/posts/innovation/iltacon-sneak-peek-cocounsel-2-0-combines-the-power-of-google-cloud-ai-openai-and-thomson-reuters/ | Mon, 12 Aug 2024 09:48:50 +0000

Thomson Reuters today announced that CoCounsel 2.0, its professional-grade GenAI assistant, will optimize for and combine the strengths of leading LLMs, allowing customers to realize the greatest value from this rapidly evolving technology. The next-gen CoCounsel AI assistant marks a significant milestone in the Thomson Reuters vision for a single GenAI assistant, enabling professionals across industries to accelerate and streamline entire workflows.

CoCounsel 2.0 draws on its robust set of specialized skills to handle complex, multi-step work, helping professionals quickly pinpoint key knowledge in vast databases, thoroughly communicate sophisticated information, and complete essential work with unprecedented speed. CoCounsel 2.0 generates answers three times faster than the current version, operates more intuitively, and delivers more thorough, nuanced results.

It also combines the unique capabilities and strengths of OpenAI, Google, and Thomson Reuters, such as the company’s industry-leading content and legal technology. CoCounsel 2.0 will also bring additional and upgraded capabilities for legal professionals.

The just-launched Claims Explorer in Westlaw Precision with CoCounsel simplifies claims research by enabling legal professionals to enter facts and identify applicable claims or counterclaims. CoCounsel Drafting, the end-to-end GenAI-enabled drafting solution from Thomson Reuters, accelerates drafting by as much as 50%.

“Thomson Reuters is here for one reason: to ensure our customers reliably and safely realize the greatest possible value from this generational technology – as quickly as possible,” said David Wong, chief product officer, Thomson Reuters. “CoCounsel 2.0 is founded upon our ability to combine our data, expertise, and trusted content with cutting-edge technology. Partnering with leading LLM providers is a key part of our strategy and will help us deliver even more for our customers, enabling them to accomplish what they need to evolve their businesses more quickly and more effectively than ever.”

Wong will discuss CoCounsel 2.0 with Jake Heller, head of Product for CoCounsel, Thomson Reuters, and Kriti Sharma, chief product officer of Legaltech, Thomson Reuters, in their ILTACON session, “Maximizing Impact and ROI: Harnessing the Full Potential of GenAI in the Legal Profession.”

“CoCounsel 2.0 is more powerful than the first generation of CoCounsel and accessible from within Thomson Reuters products – beginning with Westlaw Precision and Practical Law – plus from within Microsoft 365, beginning with Word, Teams, and Outlook. It’s exactly what our legal customers are looking for,” Sharma said. “Debuting CoCounsel 2.0 at ILTACON is the perfect fit, as it’s all about realizing successful legal strategies for transforming the legal industry. I’m thrilled for the opportunity to share our powerful CoCounsel 2.0 vision and discuss how legal professionals can embrace professional-grade GenAI to streamline their workflows and boost productivity while maximizing ROI.”

“We’ve always been on the leading edge of emerging technology and it’s incredibly fulfilling to experience our vision – of providing every professional we serve with a GenAI assistant – becoming reality for our customers,” Heller said. “We’re seeing a growing maturity in the adoption of AI, and legal professionals are ready for the practical application and optimization of GenAI. CoCounsel 2.0 enables them to use GenAI to its fullest potential to drive efficiencies and productivity gains – working with the tools they already use every day.”

For more on CoCounsel 2.0, read the press release. Check out more Innovation Blog posts for product news, leader insights, and customer perspectives on how Thomson Reuters is paving the way for the future of professionals.

CoCounsel Core, Leading Legal GenAI Assistant, and AI-Assisted Research on Westlaw Edge Rollout to UK Legal Professionals /en-us/posts/innovation/thomson-reuters-cocounsel-core-leading-legal-genai-assistant-and-ai-assisted-research-on-westlaw-edge-rollout-to-uk-legal-professionals/ Tue, 12 Mar 2024 16:11:06 +0000 https://blogs.thomsonreuters.com/en-us/?post_type=innovation_post&p=61887 Thomson Reuters (TSX/NYSE:TRI), a global content and technology company, today announced the UK launch of its generative AI legal assistant, CoCounsel Core. Continuing the rapid execution of its AI technology strategy, UK availability of this leading generative AI legal assistant quickly follows launches in Canada, Australia, and the United States.

The company also announced that AI-Assisted Research on Westlaw Edge UK will be available in the coming weeks. The generative AI legal research solution will help legal professionals get better, faster answers to complex research questions grounded in trusted Westlaw content. Together, they will lead the transformation of the UK's legal profession.

CoCounsel Core equips legal professionals with eight generative AI-powered core legal skills: Prepare for a Deposition, Draft Correspondence, Search a Database, Review Documents, Summarize a Document, Extract Contract Data, Contract Policy Compliance, and Timeline.

“This latest international launch of CoCounsel Core is yet another significant milestone in our mission to empower legal professionals to do better work, more efficiently, for more clients,” said Jake Heller, head of Product, CoCounsel, Thomson Reuters. “In just over a year since CoCounsel debuted, our goal of transforming how people work is becoming a reality in more places across the world, more quickly than we could have imagined. It’s proof that the build, buy and partner strategy is accelerating how quickly we can deliver generative AI solutions to the professionals who rely on us.”

Designed with the technical controls and data governance to meet legal professionals’ ethical and confidentiality obligations, CoCounsel Core is the only professional-grade generative AI assistant built specifically for the practice of law. Together with AI-Assisted Research on Westlaw Edge UK, CoCounsel Core’s capabilities constitute the industry’s most comprehensive set of generative AI skills, designed to help lawyers quickly gather deeper insights and deliver a better work product. No other suite of generative AI legal products offers this breadth of use, depth of content, and reliability of results. CoCounsel Core can save legal professionals as much as 60% of the time they spend on commonly executed tasks, freeing them for more high-value, strategic, and creative work.

CoCounsel Core is already being used by multiple UK law firms, including Addleshaw Goddard LLP and Linklaters LLP. Employing more than 1,600 lawyers in 19 offices worldwide, Addleshaw Goddard’s origins reach back to the UK’s first public record of solicitors, the Law List, in 1775. Linklaters is currently exploring use-cases for CoCounsel within its business. Linklaters has operated in the legal market for over 185 years and is a leading global law firm, employing more than 3,100 lawyers in 31 offices across 21 countries.

“Generative AI has untold potential to support our lawyers and transform our client services now and in years to come,” said Kerry Westland, partner and head of the Innovation Group at Addleshaw Goddard. “While researching and exploring over 100 generative AI solutions, CoCounsel stood out as a solution that could be highly effective for a range of use cases, including bulk document analysis. We are already applying CoCounsel in the work we are delivering for our clients, and it is exciting to see the value that this technology can bring. We are looking forward to seeing CoCounsel and other AI solutions working together, delivering a powerful suite of tools to our lawyers to enhance our delivery of legal services.”

Foundational to these launches and product developments planned for 2024 is the Generative AI Platform, an overarching innovation resource that enables the company to quickly and easily launch new solutions by leveraging reusable components and bringing together content, AI, generative AI, and more, as the building blocks for future products.
