Morgan Lewis Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/innovation-topics/morgan-lewis/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology, and human expertise you need to find trusted answers.

CoCounsel Legal – Reimagined
/en-us/posts/innovation/cocounsel-legal-reimagined/ | Mon, 20 Apr 2026

When we first built CoCounsel, our north star was accuracy and reliability – delivering carefully controlled, structured workflows attorneys could trust. That foundation remains unchanged. But our long-term vision was always bigger. Recent advances in agentic AI now make it possible to combine flexibility and accuracy, fundamentally expanding what legal AI can do.

Today, we’re announcing the next generation of CoCounsel Legal, now available in Beta. Built from the ground up, it delivers on the vision we set out from the start: an AI companion that works alongside lawyers through every task and every stage of a matter, grounded in the trusted sources of knowledge they rely on.


Built on the most advanced AI, and engineered for how legal work actually gets done

Built on Anthropic’s Claude Agent SDK, the next generation of CoCounsel Legal is a unified agentic platform that plans, selects tools, retrieves authoritative content, and adapts mid-workflow just as a senior associate would, not a first-year waiting for the next instruction. Critically, the lawyer remains in control – able to see the agent’s reasoning as it unfolds, step in to redirect its approach, challenge its assumptions, and probe whether alternative angles have been considered.

CoCounsel Legal doesn’t reason from the web – it’s built with Westlaw and Practical Law content and tools natively embedded. Different by design, the technology and the sources are built as one system, making defensibility part of the architecture rather than a feature. As a result, when CoCounsel Legal produces a deal term sheet, contract, or litigation strategy memo, every step of its reasoning is grounded in authoritative legal sources, guided by 35 million West Key Number classifications and 3.9 million Precision Research attributes, and fully transparent through verifiable Practical Law resources and Westlaw citations. Developed and evaluated by practicing-attorney editors working alongside top AI data scientists, the breakthrough isn’t simply faster task completion – it’s the ability to produce complex work product across the many decision points of a legal matter, moving beyond task execution to true legal reasoning.

Our leading evaluation framework encodes quality at each step. This means before any capability ships, we measure it. Licensed attorneys, including our Practical Law editors, define what the correct output looks like for each task type. Every new capability must demonstrate measurable improvement against that benchmark before it reaches production. The framework evaluates not just final outputs, but the full chain of reasoning that produced them, because an agent that arrives at the right answer through flawed reasoning cannot be trusted to do so consistently.
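To make the shape of that gate concrete, here is a minimal sketch of a benchmark check of this kind. The names, task type, and scores are hypothetical illustrations, not the actual framework: the idea is simply that each task type carries an attorney-defined baseline, and a candidate capability ships only if it measurably improves on it.

```python
from dataclasses import dataclass

@dataclass
class TaskBenchmark:
    """Attorney-defined reference for one task type (hypothetical structure)."""
    task_type: str
    baseline_score: float  # score of the capability currently in production

def passes_gate(benchmark: TaskBenchmark, candidate_score: float) -> bool:
    """A capability ships only if it measurably improves on the baseline."""
    return candidate_score > benchmark.baseline_score

bench = TaskBenchmark(task_type="contract_drafting", baseline_score=0.82)
print(passes_gate(bench, 0.87))  # True: candidate improves on the baseline
print(passes_gate(bench, 0.80))  # False: a regression, so the capability is held back
```

In practice the "score" would itself aggregate many attorney evaluations of both final outputs and intermediate reasoning steps; the sketch only captures the shipping decision.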

And we’ve gone further to protect the integrity of that reasoning, with patent-pending tools for citation integrity and output verification:

  • Verification and grounding as system primitives. Authoritative retrieval, explicit source handling, and verifiable citation flows are product infrastructure – not post-processing or marketing language.
  • Patent-pending link integrity. Our patent-pending citation ledger architecture tracks every source the agent brings into context and the specific passages it reads.
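As a rough illustration of the ledger concept described above (not the patented implementation, and with hypothetical names and citations), one can think of it as a record of every source and passage the agent actually read, which can then be checked against the citations in the final output:

```python
from dataclasses import dataclass, field

@dataclass
class LedgerEntry:
    """One source the agent brought into context, plus the passages it read."""
    source_id: str
    passages: list = field(default_factory=list)

class CitationLedger:
    """Tracks sources and passages read, so output citations can be verified."""
    def __init__(self):
        self._entries = {}

    def record(self, source_id: str, passage: str) -> None:
        # Log that the agent read this passage from this source.
        entry = self._entries.setdefault(source_id, LedgerEntry(source_id))
        entry.passages.append(passage)

    def verify(self, cited_sources: list) -> list:
        """Return any cited sources the agent never actually read (flag for review)."""
        return [s for s in cited_sources if s not in self._entries]

ledger = CitationLedger()
ledger.record("521 U.S. 702", "passage on substantive due process")
print(ledger.verify(["521 U.S. 702", "410 U.S. 113"]))  # ['410 U.S. 113']
```

The design point is that verification becomes a lookup against a record built during retrieval, rather than an after-the-fact check bolted onto finished output.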

The result: outputs grounded in authoritative content and customer context – making verification part of the system’s architecture rather than an afterthought. In a profession where a single missed citation can cost a client their case, defensibility isn’t a nice-to-have. It’s the whole point.

What our customers are telling us

The feedback we’re hearing from customers reflects this.

Brooke Conkle, partner in Consumer Financial Services at Troutman Pepper Locke, asked CoCounsel Legal a broad question about recent TCPA developments across two circuits and the solution “immediately zeroed in on the precise ascertainability nuances” between them, the kind of careful parsing that typically requires significant time and research. Her conclusion: “The underlying legal analysis genuinely blew me away and made me rethink what is possible with AI in complex litigation work.”

That’s not the response of someone who found a faster tool. That’s the response of someone who found a different kind of tool.

Andrew Medeiros, managing director of Innovation at Troutman Pepper Locke, captures something I think is fundamental to why this matters: “Lawyers don’t want to just operate software, and that’s not what great AI should do.” What he’s seeing is that CoCounsel Legal keeps lawyers in the analytical mindset they were trained for, going back and forth, challenging answers and steering the work.

He added: “The next generation of CoCounsel Legal seems to be a total game changer as we’ve introduced it to litigation and transactional attorneys. It’s meeting them within their workflows, allowing them to ask plain language questions and then see the step-by-step approach that CoCounsel [Legal] takes to help them draft the document relying upon Westlaw Deep Research and the Practical Law guidance.”

The AI Knowledge Management Department at Morgan Lewis shared: “We were really impressed with the enhancements to the CoCounsel Legal platform. In our evaluation, it demonstrated strong capabilities in supporting efficient document drafting and in addressing gaps in information, such as filing party details, with both speed and accuracy when prompted. The outputs were well-structured and immediately usable, and the overall workflow was intuitive and easy to navigate. Performance was consistently fast. We are really looking forward to what’s next!”

Why we’re launching this as a beta, and building in public

Just as important as what we’re building is how we’re introducing it to customers.

We are deliberately launching the next generation of CoCounsel Legal as a beta, with a clear commitment to building in public and in partnership with our customers. This beta includes leading law firms such as Troutman Pepper Locke, Morgan Lewis, Carlton Fields, and Caplin & Drysdale, as well as four large enterprise customers. As we move through successive beta waves ahead of general availability later this year, we’re putting the solution in the hands of real lawyers working on real matters – listening closely to where it earns confidence, where it doesn’t, and incorporating that feedback directly into how the product evolves.

We’re inviting customers to help shape what CoCounsel Legal becomes – an AI that works at the level of a senior associate, built with Anthropic’s cutting-edge technology, engineered for legal work with authority and verification at its core.

This reflects a core belief I hold: the solution itself should be the argument. The strongest validation won’t come from launch announcements or benchmarks alone, but from sustained use – when lawyers choose to rely on the product because it holds up under real professional accountability.

Today’s beta is just the beginning. I’m excited to put the next generation of CoCounsel Legal in the hands of more customers as the year progresses.

I encourage you to explore how it works.

Industry Insights: Raghu Ramanathan and David Wong on Evaluating AI Vendors
/en-us/posts/innovation/industry-insights-raghu-ramanathan-and-david-wong-on-evaluating-ai-vendors/ | Thu, 27 Jun 2024

Raghu Ramanathan, president, Legal Professionals, Thomson Reuters, and David Wong, chief product officer, Thomson Reuters, shared their insights on evaluating AI vendors during a webinar with Morgan Lewis partner Rahul Kapoor and associate Shokoh Yaghoubi.

They offered advice on evaluating a vendor’s technology expertise, support services, and transparency. Also, they shared how firms and organizations can mitigate the risks posed by acquiring a vendor’s AI services while maximizing their investment in AI. Below are highlights from their conversation.

Keys to choosing AI vendors

To start the process of assessing potential AI vendors, Yaghoubi emphasized the importance of reviewing their experience and expertise “to allow your business to make informed decisions about whether to engage the vendor.”

Wong said that in addition to performance and cost, firms should consider safety and trust factors.

Ramanathan noted it’s important to consider whether you want a consumer-grade model – that’s cheaper – or a more reliable professional-grade model. He emphasized three criteria to focus on when choosing an AI vendor:

  1. “What’s your philosophy and principles around how AI should be used?” He said asking a vendor this question allows you to see if your firm’s vision and long-term strategy and roadmap are aligned with the vendor’s approach.
  2. Request a vendor’s references and testimonials. Ramanathan explained that firms and organizations should ask vendors how many customers are already using their solutions. “AI is still a game of scale,” he said. “You don’t want to be the first customer training a model.”
  3. Clarify the level of support and training a vendor provides. Ramanathan said this is key to ensuring that all levels of staff are trained and can use the AI solutions constructively.

Wong added that the questions he receives from potential clients focus on data, technology, and talent. He warned that some companies simply repackage existing large language models (LLMs) for legal use cases without adding much.

“Clients that are working with companies that are building AI have a say,” Wong said. “They can contribute, iterate, and build the products.”

Also, he stressed the importance of working with a vendor that knows how to customize solutions and integrate customer feedback into product development.

How vendors use data

Kapoor asked what customers should consider regarding how vendors use their data. Wong said that understanding the data flow and how the data is processed are key, as well as understanding licensing and data rights, including intellectual property usage rights, cyber risk, and data leakage.

Ramanathan noted encryption standards as well as access control are critical, as is demanding transparency from vendors: “You have the right to ask how the data you’re inputting is used.”

Ramanathan added, “Good vendors should have governance systems that answer” details such as where data is stored and who has access to it.

“Look for transparency” on data output

Wong advised firms to “look for transparency” from vendors, making sure they provide qualitative and quantitative information about the quality of their outputs. He said vendors should be guided by a set of AI principles and should follow a data governance and AI model governance process to mitigate hallucinations and potential risks.

Ramanathan noted that good vendors conduct model validation on a regular basis. He also flagged that professional-grade AI solutions – unlike consumer-grade AI solutions – give a sense for the reliability of the answer.

Data output considerations also include encryption standards as well as vendors’ privacy and security policies. Ramanathan said a baseline is compliance with standards such as GDPR and CCPA.

“The privacy and security measures a vendor takes are a result of their philosophy about AI and how to use AI,” Ramanathan said. “It gives you a clue as to what you can expect downstream in terms of execution.”

Ramanathan added that vendors should share their risk management framework and enterprise risk framework as well as disclose how frequently they conduct audits and what mitigating actions they put in place.

Wong added that most firms and organizations have “tried and tested approaches for technology procurement” that they should apply to assessing AI vendors too.

Lack of AI-Specific SLAs

When exploring initial and ongoing training and documentation, Yaghoubi asked if AI service-level agreements (SLAs) are similar to those offered for SaaS-type platforms.

Ramanathan said there are elements of SLAs similar to cloud software “that you can and should expect,” such as uptime and maintenance. He noted the hard part is the lack of industry standards for AI-specific SLAs to address issues like response time and accuracy.

In the absence of industry standards, Ramanathan recommended asking questions around issues like product reliability controls and internal testing programs.

Going above the legal requirements

Part of assessing an AI vendor involves anticipating whether it will adapt to new and changing AI regulations, given the lack of a comprehensive federal law in the United States and various states implementing their own guidance.

“There’s a wide range and little consistency across the market,” Wong said. “What Thomson Reuters has done is look at AI standards in all the markets that we operate in and identify the most restrictive standards. We use a combination of the NIST standards and the EU AI directive as the basis for much of our governance framework.”

Wong added that Thomson Reuters applies this viewpoint to its risk management framework and to its data and AI model governance framework.

“We projected what the regulation would be rather than look at where the regulation is today,” Ramanathan explained. “We proactively defined what we call our Data and AI Ethics principles, which are very hard-coded guidelines that go into engineering our products as well.”

A recording of the webinar is available.
