Mike Dahn Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/innovation-topics/mike-dahn/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

2024 Reflections: Top Innovation Highlights From Thomson Reuters
/en-us/posts/innovation/2024-reflections-top-innovation-highlights-from-thomson-reuters/ (Wed, 11 Dec 2024)

Thomson Reuters closed out 2024 with thousands of corporate, legal, tax, audit and accounting customers focusing on the year's theme: generative AI and innovation. They convened at SYNERGY 2024, the premier annual technology conference for professionals, for eight days of product and innovation announcements, thought-leadership insights and networking opportunities. Below are 2024 product and innovation highlights plus a sneak peek of what's to come in 2025.

Thomson Reuters President and CEO Steve Hasker shared a state-of-the-industry outlook, noting that generative AI is as disruptive and transformative as previous technology shifts, yet is happening even faster. He emphasized what differentiates Thomson Reuters, including the investments the company is making in generative AI to enable professionals to accelerate and streamline entire workflows and deliver more value for clients.

Hasker said Thomson Reuters has invested more than $200M in AI in the last year. He discussed the company's vision to provide each professional it serves with an AI assistant; the launch of CoCounsel 2.0, which generates answers three times faster than the previous version; and new work with Microsoft on autonomous agents to increase revenue, reduce costs, and scale impact for customers.

Tax, Audit & Accounting

"AI is not just changing the landscape of accounting, it's reshaping it." That was the message from Elizabeth Beastrom, president of Tax & Accounting at Thomson Reuters.

While tax and accounting professionals see AI as a game-changer that can help them work differently, they continue to wrestle with the perennial challenge of a talent shortage. This, combined with escalating complexity, a growing volume of tax regulations and changing client expectations, leaves tax professionals in urgent need of solutions.

Thomson Reuters sees the potential of AI to help alleviate these challenges by augmenting human capabilities. Automating mundane, time-consuming tasks will enhance efficiency for tax professionals, helping them reclaim time to channel into higher-value work. Thomson Reuters is working to bring the power of generative AI, machine learning and automation into its solutions in the following ways:

1. Saving time in tax preparation:

Coming in beta during the upcoming busy season, Thomson Reuters will launch an AI-assisted tax preparation experience to increase firm efficiency. The solution combines the power of CoCounsel, the Thomson Reuters professional-grade generative AI assistant, with workflow automation and software integrations. It supports the delegation of data gathering to simplify mundane tasks and automate tax preparation. Thomson Reuters research shows that customers using this solution will save at least two hours per 1040 tax return on average.

2. Supporting firms' growth with advisory:

As client expectations continue to evolve, clients are increasingly looking to their accountants as trusted advisors. Firms of all sizes are focusing on growing their advisory practices to bring their clients additional value and to support their own growth. In 2025, the Thomson Reuters Advisory Solution will combine the power of CoCounsel and Checkpoint content to identify advisory opportunities. Advisory services are integrated directly into a firm's practice, with technology empowering junior staff to take on higher-value advisory work and seasoned professionals to move beyond technical expertise to value-added synthesis.

"It helps firms build their advisory practice with confidence to deliver unprecedented value to meet clients' evolving needs," said Nancy Hawkins, vice president of Product Management, Research.

3. Transforming audit efficiency:

Halving sample sizes, boosting efficiency and sharpening the focus on high-risk areas are all at the heart of the Thomson Reuters Audit Intelligence Analyze solution, which launched in October. Further functionality will arrive in 2025 as the Audit Intelligence suite expands: 'Test' will help automate substantive testing with dynamic transaction tracing, while 'Plan' will harness full data populations with cutting-edge analytics for superior risk assessment. Both will launch with beta programs next year, along with the addition of CoCounsel to the Audit Intelligence suite.

All three solutions (Review Ready, the Thomson Reuters Advisory Solution and the Audit Intelligence suite) will be further enhanced with Materia's generative and agentic AI capabilities.

Corporates

Laura Clayton McDonnell, president of the Corporates segment, shared how enterprise technology, including AI and generative AI, is revolutionizing the profession with innovative and emerging solutions. She emphasized that companies that take a streamlined, proactive approach to addressing risk and compliance across the enterprise, while driving toward their business goals, will maintain their competitive advantage. Clayton McDonnell also shared how organizations are using solutions including ONESOURCE Pagero, CoCounsel Core, Legal Tracker, Checkpoint Edge with CoCounsel and CLEAR to solve challenges and realize value for their business.

In addition, Ray Grove, head of Corporate Tax and Trade, Thomson Reuters, highlighted the company's efforts to build a seamless, integrated compliance network, and Kevin Appold, vice president of US Public Records, Thomson Reuters, shared how the company's risk and fraud solutions play a critical role in the convergence of compliance and commerce. Also, Valerie McConnell, senior director of CoCounsel Customer Success, discussed how CoCounsel is transforming the general counsel's office.

Legal

A highlight from the Legal Professionals segment included an in-depth look at the Thomson Reuters 2025 AI product roadmap from David Wong, chief product officer; Mike Dahn, head of Westlaw Product; and Valerie McConnell, senior director of CoCounsel Customer Success. They outlined upcoming generative AI features and innovations to support legal professionals, including deeper integration of CoCounsel 2.0 in Westlaw and Practical Law plus generative AI research features including Claims Explorer, Mischaracterization Identification in Quick Check and AI Jurisdictional Surveys.

Legal SYNERGY attendees also participated in interactive sessions and CLE courses on advanced prompting techniques, the science behind large language models, and optimizing generative AI for tasks like drafting and legal research. Sessions offered attendees a comprehensive view of the future of AI in law.

SYNERGY 2024 also included several customer panels and executive briefing sessions. Watch the Innovation Blog for highlights from these sessions and for 2025 product and innovation highlights.

Quick Check Mischaracterization Identification: New Westlaw Enhancement Furthers the Thomson Reuters Generative AI Vision
/en-us/posts/innovation/quick-check-mischaracterization-identification-new-westlaw-enhancement-furthers-the-thomson-reuters-generative-ai-vision/ (Tue, 22 Oct 2024)

Thomson Reuters recently announced deeper integration of CoCounsel 2.0 in Westlaw and Practical Law, as well as new generative AI research features (Mischaracterization Identification in Quick Check and AI Jurisdictional Surveys) that are saving customers significant time and helping them ensure the accuracy of their research. The enhancements build on the Thomson Reuters vision to deliver a comprehensive GenAI assistant for every professional it serves.

Below, CJ Lechtenberg, senior director, Westlaw Product Management, Thomson Reuters, shares her insights on developing Mischaracterization Identification, a generative AI capability to help detect mischaracterizations and omissions in legal briefs.

In the five years since Quick Check was introduced, you've added many enhancements, including Quick Check Contrary Authority Identification, Quick Check Judicial and Quick Check Quotation Analysis. How did integrating generative AI make the Mischaracterization Identification enhancement different than previous ones?

Lechtenberg: This enhancement takes researchers beyond the step of knowing what might be a potential mischaracterization to an explanation of why something might be a potential mischaracterization, and that is radically different from any feature we've deployed in Quick Check before.

I'm sure it'll come as no surprise when I say that generative AI is just a completely different beast. Lay people may think about the law as being black and white: you can do this; you can't do that. But legal professionals know that the law is really a sea of varying shades of gray. With machine learning, we wrestled with how we could ever give the machine enough data to figure out all the different ways an attorney may mischaracterize the law.

In Quick Check Quotation Analysis prior to the Mischaracterization Identification enhancement, we highlighted the actual textual differences (additions, omissions, and changes) in the quotations and showed the context around the quotes. Doing so certainly saved researchers a significant amount of time and helped them spot issues they might not otherwise find, but the onus was still on researchers to review everything and determine what the precise differences were and how material they might be, if at all. Even with the additional context provided, it could still be difficult to determine whether the quotations were taken out of context, especially if the quotes themselves didn't appear to be different.

In developing Mischaracterization Identification, we recognized that the task of analyzing quotations and their context is so nuanced that attorneys will have different expectations for whether a mischaracterization occurred, so we needed to provide more than just categorizations. We found that large language models (LLMs) can generate nuanced descriptions of potential mischaracterizations, versus just explicit categorizations, and do it well, which is hugely beneficial for this type of task.
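The difference between asking a model for a bare category and asking it for an explanation can be sketched with a hypothetical prompt. The function name and wording below are illustrative only, not the actual Westlaw implementation:

```python
def mischaracterization_prompt(quotation: str, source_passage: str) -> str:
    """Hypothetical prompt sketch: ask the model to explain, not just label.

    A classifier-style prompt would return only a category; asking for the
    reasoning surfaces WHY a quote may be misleading (selective quoting,
    omitted context, a misread holding).
    """
    return (
        "Compare the quotation from a brief against the cited source passage.\n"
        "Do not merely classify. Explain why the quotation may mischaracterize\n"
        "the source (selective quoting, omitted context, misread holding),\n"
        "or state that it is faithful.\n\n"
        f"Quotation: {quotation}\n"
        f"Source passage: {source_passage}\n"
    )

# Hypothetical example inputs
prompt = mischaracterization_prompt(
    "the court held that damages are never available",
    "damages are never available absent a showing of bad faith",
)
```

The open-ended instruction is the key design choice: a nuanced description is useful even when reasonable attorneys would disagree about which category fits.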

How will using Mischaracterization Identification give legal professionals and law firms a competitive advantage? How will judges using it benefit?

Lechtenberg: The advantages of using the new Mischaracterization Identification are substantial for both legal professionals and the judiciary, both in terms of speed of review and quality of work product. When we launched Quick Check Quotation Analysis in 2020, customers, both legal professionals and the judiciary, lamented how time-consuming it is to review quotations and how challenging it is to spot differences. It is a mentally taxing task, and often our brains fill in the blanks, interpreting what we think a brief should say but actually doesn't. Attorneys never have a surplus of time, so the last thing they want to do is spend the little they have on the most tedious of tasks and still end up missing potential problems.

For attorneys, Mischaracterization Identification will help them efficiently and accurately make contextual misstatement and omission determinations for their opponents' and their own quotations and the context surrounding those quotations. The fear of missing their own mistakes is very real for attorneys, but the possibility of missing the opportunity to capitalize on their opponents' mistakes is an even larger concern. This new enhancement reduces both of those worries and will help attorneys be even better advocates for their clients.

Judges will also be able to effectively review the filings of parties in matters before them much faster. Attorneys owe a duty of candor to the judiciary and the Mischaracterization Identification feature will help flag any potential issues quickly. An added benefit, which members of the judiciary or their staff perhaps haven鈥檛 considered, is the ability to analyze their own orders and opinions to ensure that they haven鈥檛 made mistakes that could be appealed. This new enhancement will help alert judges and law clerks to potential issues before they finalize their opinions.

What early feedback are you hearing from customers?

Lechtenberg: In a recent survey, 93% of law firm professionals told us they've seen opposing counsel misuse a quotation, 66% said they've seen misrepresentations by an associate or colleague, and 65% of corporate respondents said they check the accuracy of outside counsel's quotations. The need to review opposing counsels' and colleagues' briefs for mischaracterizations of the law is still a very real issue for attorneys. Likewise, attorneys have said they're always concerned about the accuracy of their work and that maintaining their reputation as a credible litigator with courts and opposing counsel is incredibly important.

Customers are extremely excited about this new Quick Check enhancement to help address these concerns, and we've received positive feedback from them. One law firm managing partner stated that they would use this tool a lot. They cite-check their opponents' briefs, so any shortcuts are beneficial to them. They recognize that most of the time errors are harmless, but occasionally there are things they want to bring to the court's attention, and this feature will help them spot those issues more quickly and accurately.

Another law firm partner said this new feature is the "ultimate security blanket" because everything attorneys do is based on their credibility, and this feature alerting them to quotes taken out of context before filing with the court would calm some of those fears.

Any surprising or unexpected moments as the team worked on developing or launching Mischaracterization Identification?

Lechtenberg: The fact that we've accomplished this now with the use of LLMs is exciting, a little surprising and a long time coming. I'm an attorney who leads a team of attorneys; we're literally trained to question everything and keep a healthy dose of skepticism. But I have been dreaming about a mischaracterization identification feature in Quick Check ever since we developed Quotation Analysis more than five years ago. At my core, I believed someday this could be achieved, but for years traditional machine learning approaches were just not powerful or nuanced enough to do it well.

Leveraging LLMs for a use case like this is a new frontier like we've never seen before. The LLM's ability to analyze text from an uploaded document, compare that text to the text of the cited case used to support the argument, and then go beyond highlighting textual differences to provide an actual explanation of what may be problematic (whether that's a selective quote, omitted context or a misinterpreted holding) has been absolutely astounding.

What鈥檚 the one thing you want everyone to know about Mischaracterization Identification?

Lechtenberg: Mischaracterization Identification will not only help researchers spot contextual misstatements and omissions in their opponents' or their own quotations and contextual statements faster and with more accuracy; most importantly, it will help them understand why those misstatements or omissions may be problematic. And, spoiler alert: Mischaracterization Identification is just the beginning of how Thomson Reuters will harness the power of generative AI in Quick Check to solve important customer problems.

For more on Mischaracterization Identification, read the press release or check out the blog post by Mike Dahn, head of Westlaw Product Management, Thomson Reuters.

How Harmful Are Errors in AI Research Results?
/en-us/posts/innovation/how-harmful-are-errors-in-ai-research-results/ (Fri, 02 Aug 2024)

AI and large language models have proven to be powerful tools for legal professionals. Our customers are seeing the gains in efficiency and tell us the tools are greatly beneficial. However, while there has been a lot of discussion lately about errors and hallucinations, what hasn't been discussed is the extent of harm that comes from an error, or the benefit an answer can still offer despite one.

First, let's settle on terminology. We should use terms like "errors" or "inaccuracies" instead of "hallucinations." "Hallucination" sounds smart, like we're AI insiders who know the lingo, but the term is often defined narrowly as a fabrication, which is just one type of error. Customers will be as concerned, if not more concerned, about non-fabricated statements from non-fabricated cases that, despite being real, are still incorrect for the question. "Errors" or "inaccuracies" are much better, more encompassing ways to describe the full range of problems we care about.

Next, let's consider types of errors and the risk of harm from each. Error rates are often reported as a single percentage, which is a binary view: either an answer has an error or it does not. That's overly simplistic. It conflates the big differences in risk of harm from different types of errors and ignores the potential benefit of lengthy, nuanced answers that contain a minor error.

There are dozens of ways to categorize errors in LLM-generated answers, but we've found three to be most helpful:

  1. Incorrect references in otherwise correct answers
  2. Incorrect statements in otherwise correct answers
  3. Answers that are entirely incorrect

A fourth category of error that sometimes comes up in discussions with customers is about inconsistency, where the system provides a correct answer one time, then later, when the same exact question is submitted, the answer is different and sometimes less complete or incorrect. Minor differences in wording are very common when submitting the same question. Substantial differences are uncommon, but when they do result in an error, the error simply falls into one of the three categories above.
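To make the point about binary error rates concrete, the taxonomy above can be tallied per category instead of collapsed into one percentage. This is a hypothetical illustration; the labels and figures below are made up, not measured results:

```python
from collections import Counter
from enum import Enum

class ErrorType(Enum):
    NONE = "no error"
    BAD_REFERENCE = "incorrect reference in an otherwise correct answer"
    BAD_STATEMENT = "incorrect statement in an otherwise correct answer"
    WRONG_ANSWER = "answer entirely incorrect"

def error_report(labels):
    """Per-category error rates instead of a single binary error rate."""
    counts = Counter(labels)
    total = len(labels)
    return {t: counts.get(t, 0) / total for t in ErrorType}

# Hypothetical review labels for 10 answers: a flat "30% error rate" would
# hide that only 1 in 10 carries the high-risk, entirely-wrong error type.
labels = [ErrorType.NONE] * 7 + [
    ErrorType.BAD_REFERENCE,
    ErrorType.BAD_STATEMENT,
    ErrorType.WRONG_ANSWER,
]
report = error_report(labels)
```

Weighting each category by its risk of harm, rather than counting every error equally, is the idea the prose above is driving at.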

Incorrect references refer to situations where an answer is correct, but the footnote reference provided for a statement of law does not stand for the precise proposition of the statement. Fortunately, the risk of harm with these types of errors appears to be low, since they're easy to detect when researchers review the primary law cited. Answers with these types of errors still offer substantial benefit to researchers because they get them to the right answer quickly, often with a lot of nuance about the issues, but the researcher still has to use additional searches or other research techniques to find the best source material.

Incorrect statements in otherwise correct answers are often obvious in the answer. An answer might say the law is X in paragraphs 1 through 4, then inexplicably declare the law is Y in paragraph 5, then go back to stating the law is X in paragraph 6. The risk of harm with these errors also appears to be low, since the inconsistency is obvious and prompts the researcher to dig into the primary law to figure it out. Answers with these types of errors still offer some benefit, since they point the user to highly relevant primary law, explain the issues, and help the researcher know what to look for when reviewing primary law.

Answers that are entirely wrong are more problematic. These are quite rare in our testing, but they do occur. Often a simple check of the primary sources cited will resolve the error quickly, but sometimes additional research is needed beyond that. These answers still offer some benefit to researchers, since they often point to relevant primary law in a way that is more effective and useful than traditional searching, but they also come with greater risk of harm, since the incorrectness of the answer is not obvious, and simply reviewing cited sources does not always resolve the issue.

These sound scary, but researchers have been dealing with this type of issue for ages. For instance, secondary sources can be incredibly helpful for summarizing complex areas of law and offering insights, but they sometimes fail to discuss important nuance, and sometimes the law has changed since they were written. If researchers relied on them alone, without doing further research, they would be at risk of harm, even if they consulted cited primary sources.

Yet we would never tell researchers to avoid using secondary sources because they can sometimes be beautifully written, very convincing, and utterly wrong. What we tell researchers is that they can be enormously helpful for research but must be used as part of a sound research process in which primary law is reviewed and tools like KeyCite, Key Numbers, and statute annotations are used to make sure the researcher has a complete understanding of the law.

Individual research tools have rarely been perfect. Their value has been in improving sound research practices. Stephen Embry captured this idea well in a recent blog post:

"The point is not whether Gen AI can provide perfect answers. It's whether, given the speed and efficiency of using the tools and their error rates compared to those of humans, we can develop mitigation strategies that reduce errors. That's what we do with humans. (I.e., read the cases before you cite them, please.)"

But if you must check primary resources and engage in sound research practices when using a research tool, is there really any benefit to using it? If it improves overall research times or helps surface important nuance that might otherwise be missed, the answer is yes.

Prior to launching AI-Assisted Research, we knew large language models would not produce error-free answers 100% of the time, so we asked attorneys whether the tool would be valuable even with an occasional error, and whether we should release it now or wait until it was perfect.

Most of the attorneys said, "I want this now." They saw clear benefits and thought an occasional error was worth it for the extraordinary benefits of the new tool, since they would easily uncover an error when reading through primary law. They said that if they knew the answers were generated by AI, they would never trust them blindly and would verify by checking primary sources. If there was an error, those primary sources (and further standard research checks, like looking at KeyCite flags, statute annotations, etc.) would reveal it. That's why we put AI in the name of this CoCounsel skill, so researchers would be encouraged to check primary sources.

Our customers have submitted over 1.5 million questions to AI-Assisted Research in Westlaw Precision. Generally, three big research benefits come up in discussions:

  1. It gives them a helpful overview before diving into primary sources.
  2. It uncovers sub-issues, related issues, or other nuances they might not have found as quickly with traditional approaches.
  3. It points them to the best primary sources for the question more quickly and efficiently than traditional methods of research.

Customers have described these benefits with great enthusiasm, telling us AI-Assisted Research "saves hours" and is a "game changer."

Lawyers know they need to rely on the law when writing a brief or advising a client, and the law lies in primary law documents (cases, statutes, regulations, etc.). Researchers have always known that when they're looking at something that is not a primary law document, such as a treatise section, a bar journal article, or an answer from AI, they must check the primary law before relying on it to advise a client or write a brief. That's why we cite primary law in the answers and why we provide an even greater selection of relevant primary and secondary sources under the answers: to make this checking easy.

But what about the widely reported case of the lawyer who filed a brief citing cases fabricated by a generative AI tool? That lawyer submitted his brief without ever reading any of the cases he was citing.

That can't be the standard for considering the value of products like Westlaw, which provide a rich set of research tools that make it easy to check primary sources, understand their validity, and find related material. If the standard were that a user might not read any of the primary law, many high-value research capabilities today would be deemed useless.

The way to dramatically reduce the risk of harm from LLM-based results or any other individual research tool, like secondary sources, is what it has always been: sound research practices.

Jean O'Grady conveyed this beautifully in a recent post:

"Does generative AI pose truly unique risks for legal research? In my opinion, there is no risk that could not be completely mitigated by the use of traditional legal research skills. The only real risk is lawyers losing the ability to read, comprehend and synthesize information from primary sources."

At Thomson Reuters, we're continuing to work on ways to reduce all types of errors in generative AI results, and we expect rapid improvement in the coming months. Because of the way large language models work, even with retrieval augmented generation, eliminating errors is difficult, and it's going to be quite some time before answers are completely free of errors. That's the bad news.

The good news is that harm from these types of errors can be reduced dramatically with common research practices. That's why we're not only investing in generative AI projects; we're also continuing to build out a full suite of research tools that help with the entire research process, because that process will continue to be important.

Even when errors get reduced to just 1%, that will still mean that 100% of answers need to be checked, and thorough research practices employed.
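The arithmetic behind that point can be made concrete with purely illustrative numbers (none of these are measured Thomson Reuters figures). Since you cannot know in advance which answers contain the error, verification is paid on every answer, yet the workflow can still come out well ahead:

```python
# All numbers below are hypothetical, for illustration only.
error_rate = 0.01                 # even a 1% error rate...
answers_needing_check = 1.0 if error_rate > 0 else 0.0  # ...means checking 100% of answers

traditional_minutes = 60          # assumed time for traditional research on a question
ai_answer_minutes = 5             # assumed time to obtain and read an AI answer
verify_minutes = 20               # assumed time to verify against primary law

# Verification cost is incurred on every answer, not just the 1% with errors,
# because the erroneous answers are not labeled as such.
ai_workflow_minutes = ai_answer_minutes + verify_minutes
time_saved = traditional_minutes - ai_workflow_minutes
```

Under these assumed figures the AI-plus-verification workflow still saves time per question; the conclusion depends entirely on the assumptions, which is why sound research practices, not skipped verification, are where the savings come from.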

We're currently involved in two consortium efforts to provide benchmarking for generative AI products. When generative AI products for legal research are tested against these benchmarks, I expect we'll see the following:

  • None of the products will produce answers that are all entirely free of errors.
  • All the products will require sound research practices, including checking primary law documents, to reduce risk of harm.
  • When sound research practices are employed, the risk of harm from errors in the answers is small and no different in magnitude from the risks we see with traditional research tools like secondary sources or Boolean search.

Even in the age of generative AI, sound research practices remain important and are here to stay. As Aravind Srinivas, CEO and cofounder of Perplexity, said:

"The journey doesn't end once you get an answer; the journey begins after you get an answer."

I think Aravind's statement applies perfectly to legal research and to the art of crafting legal arguments. Even as our teams strive to reduce errors further, we should keep in mind the benefits of generative AI and weigh them against the new and traditional risks of harm in tools that are less than perfect. When used as part of a thorough research process, these new tools offer tremendous benefits with very little risk of harm.

This is a guest post from Mike Dahn, head of Westlaw Product Management, Thomson Reuters.

Thomson Reuters Introduces New Generative AI Skill in Westlaw Precision with CoCounsel
/en-us/posts/innovation/thomson-reuters-introduces-new-generative-ai-skill-in-westlaw-precision-with-cocounsel/ (Sun, 21 Jul 2024)

Today Thomson Reuters introduced Claims Explorer, a new generative AI skill available in Westlaw Precision with CoCounsel, that enables legal professionals to enter facts and identify applicable claims or counterclaims. Using generative AI to simplify claims research, users enter the facts of a matter and quickly receive a list of applicable claims.

For legal professionals filing a lawsuit, defending a lawsuit, or advising clients on potential liability, the first step is often to identify applicable claims or counterclaims. Yet not all claims are equal. Some causes of action have a lower threshold to meet, some provide for attorneys' fees or higher damages, and some fit better with the facts of a particular case.

In testing with attorneys, those who used the new skill found relevant causes of action three times faster than when using traditional research methods. In addition, in reviewing Am Law 50 litigation where claims were added after the initial pleadings, the new skill found 94% of the claims that were missed in the initial pleadings and later added by the firms.

"Finding claims with traditional research methods can be difficult and time consuming," said Mike Dahn, head of Westlaw Product Management, Thomson Reuters. "Even experienced lawyers can miss applicable claims. Customers have told us about the difficulty of claims research for years, and it's not just that it can take hours; it's error prone, which is easy to see in how often reputable firms attempt to add new claims or counterclaims later in litigation, after the initial pleadings. But courts won't always allow you to add a claim later, and missing the best claims can have significant consequences. It can mean the difference between winning or losing a motion, recovering more in damages or attorney's fees, or potentially losing a case."

Dahn added that this new skill was purpose-built using the latest generative AI plus new claims content created by Thomson Reuters attorney editors. "When we tried to solve claims research issues with AI alone, it didn't work very well, so we had our attorney editors create new content about causes of action that enabled the AI to work much better. We'll continue to do work like this for other workflows where AI alone struggles."

The new skill is the latest milestone in the Thomson Reuters expanded vision for CoCounsel, the professional-grade GenAI assistant, to enable professionals to seamlessly complete complicated work involving multiple products through a single generative AI assistant.

For more on the new skill, check out the press release.
