Professional-Grade AI Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/innovation-topics/professional-grade-ai/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology, and human expertise you need to find trusted answers.

CoCounsel Legal Is Redefining Professional AI in the UK Legal Market
Mon, 26 Jan 2026

The future of legal work isn’t about AI sitting beside professionals – it’s about AI being embedded in the work itself. Thomson Reuters marks a pivotal moment in that transformation as CoCounsel Legal launches in the UK, introducing a new standard for what professional-grade agentic AI can deliver.

Why Agentic AI Matters

The distinction between AI copilots and agentic AI solutions isn’t just semantic – it’s fundamental to how legal work gets done. While copilots offer suggestions and assistance, agentic AI like CoCounsel Legal takes on complex, multi-step professional work with advanced reasoning, authoritative content integration, and deep subject matter expertise.

[Image: CoCounsel Legal UK home]

The UK launch of CoCounsel Legal represents more than geographic expansion. It’s the convergence of critical innovations: Deep Research capabilities on both Practical Law and Westlaw Advantage, and seamless integration with existing legal technology ecosystems including Microsoft 365, document management systems, and Thomson Reuters HighQ.

Sam Dixon, chief innovation officer at Womble Bond Dickinson, said: “We knew we needed to bring in a GenAI legal assistant to help us deliver the best possible service we can for clients. For us, the fact that CoCounsel had the ability to lean on the Westlaw content and the Practical Law content was really beneficial. And it already integrates with a lot of the rest of our legal tech stack, such as HighQ.”

CoCounsel Legal’s native integration with existing workflows means legal professionals don’t have to choose between innovation and productivity – they get both.

“We have found the working relationship with Thomson Reuters very collaborative, transparent and supportive – real partners who go the extra mile to support us in getting value out of our relationship,” said Christina Demetriades, global operating officer, Accenture Legal. “CoCounsel is a massive opportunity for our function – we see it as a way of displacing outside counsel spend and augmenting our team in practice – I see it helping build the Future Ready Legal professional. I have already used it myself to prepare advice for our business on an upcoming opmodel transformation. It was a great value add.”

Deep Research: A Global First for UK Legal Professionals

The UK launch introduces several industry firsts. Deep Research on Practical Law debuts globally in the UK, with its U.S. release set for February. The new Deep Research on Westlaw Advantage UK brings professional-grade agentic AI research capabilities specifically tailored to UK legal content. Most significantly, both content sets are unified in a single platform, eliminating the need to navigate between systems while delivering comprehensive results across practice areas.

What makes Deep Research genuinely transformative is its ability to reason, plan, and execute comprehensive legal research autonomously. It doesn’t just retrieve information – it generates multi-step research plans, traces its logic with transparent reasoning, and delivers structured reports backed by Westlaw and Practical Law citations. Legal professionals can hand off complete research questions to an AI that understands the assignment, explains its process, sources its answers, and builds argument foundations, all with human oversight.
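Conceptually, that plan-then-execute loop with traceable reasoning and citations can be sketched in a few lines. This is an illustrative sketch only – every name here is hypothetical and nothing reflects CoCounsel’s actual internals:

```python
from dataclasses import dataclass

@dataclass
class ResearchStep:
    question: str         # sub-question the agent planned
    finding: str          # what the step concluded
    reasoning: str        # transparent explanation of the logic
    citations: list[str]  # e.g., Westlaw / Practical Law references

@dataclass
class ResearchReport:
    query: str
    steps: list[ResearchStep]

def deep_research(query, plan, execute) -> ResearchReport:
    """Plan sub-questions, execute each, and keep the reasoning and
    citations so a human reviewer can audit every step."""
    steps = [ResearchStep(sq, *execute(sq)) for sq in plan(query)]
    return ResearchReport(query, steps)
```

The point of the structure is that reasoning and citations travel with every step, which is what makes human oversight practical.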

[Image: Deep Research on Practical Law in CoCounsel Legal UK]

[Image: Deep Research on Westlaw Advantage UK]

The Foundation for Professional AI

David Wong, chief product officer, Thomson Reuters, said: “Professional-grade AI is fundamentally changing how legal work gets done, and with CoCounsel Legal, we’re delivering enterprise-ready agentic AI that helps UK law firms and legal departments future-proof their practices. This isn’t just about efficiency – it’s about empowering legal professionals with AI that reasons through complex problems, integrates seamlessly into existing workflows, and scales across entire organizations while maintaining the trusted, authoritative foundation our customers depend on. This isn’t just about convenience – it’s about delivering real value to our clients and their work.”

That foundation – combining advanced reasoning models, authoritative content, deep subject matter expertise, and native integration – represents the essential components needed to complete complex, multi-step professional work. It’s what separates professional-grade AI from consumer-grade tools, and what makes CoCounsel Legal uniquely positioned to transform how legal work happens in practice, not just in theory.

[Image: CoCounsel Legal UK Library]

Looking Ahead

The UK launch of CoCounsel Legal signals a broader shift in how professional services will be delivered. As agentic AI capabilities mature and expand, the question isn’t whether AI will transform legal work – it’s whether legal professionals and organizations will embrace tools purpose-built for their needs or settle for generic solutions that promise much but deliver little.

For UK legal professionals ready to explore what professional-grade agentic AI can do for their practice, the opportunity is here. The future of legal work isn’t waiting – it’s embedded in the work itself.

Learn more about CoCounsel Legal.

Tailoring Large Language Models for Professional-Grade Work
Thu, 14 Nov 2024

Data curation is crucial for training large language models (LLMs) to operate effectively, especially in professional settings. Generative AI tools like GPT-4 and other mass-market LLMs can get tripped up by nuanced or specialized tasks, such as navigating the intricacies of U.S. tax codes.

LLMs for professional-grade AI solutions must be tailored with the right mix of data sources and go through a rigorous data architecture process. For enterprise tasks, developers need specialized data plus domain expertise to organize it in such a way that the eventual outputs will be helpful for end-user professionals. Developing a tool for accountants or tax attorneys, for example, involves gathering a wide array of tax codes, regulatory filings, legal interpretations, and more as well as integrating and standardizing this data into a format that LLMs can digest.

As I recently shared, getting raw data to a place where it can be used to power a generative AI solution requires two things: grounding and the human factor. Grounding is like giving an LLM a specialized education – analogous to an individual going from an undergrad degree to law school – by augmenting it with use-case-specific information. Human experts, of course, are irreplaceable when it comes to domain expertise, which is essential for creating industry-specific LLMs.
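As a toy illustration of grounding, the sketch below restricts a model’s prompt to known source documents. The ranking is deliberately naive keyword overlap – production systems use proper retrieval – and every name is hypothetical:

```python
def ground_prompt(question: str, corpus: dict[str, str], top_k: int = 2) -> str:
    """Naive grounding sketch: rank documents by keyword overlap with the
    question, then prepend the best matches so the model answers from
    known, reliable sources rather than its general training data."""
    q_terms = set(question.lower().split())
    ranked = sorted(
        corpus.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    context = "\n\n".join(f"[{doc_id}] {text}" for doc_id, text in ranked[:top_k])
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {question}"
```

Swapping the keyword ranking for a real retrieval system changes the quality of the context, but not the shape of the technique: constrain the model to curated sources.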

Leading the Technology Services team of engineers at Thomson Reuters is an incredibly rewarding experience. We tackle the unique challenges of creating professional-grade AI solutions that meet the high standards of accuracy and reliability demanded by legal and tax professionals.

Our team is deeply committed to bridging the gap between cutting-edge technology and specialized domain knowledge. We understand that our work doesn’t just involve writing code or developing algorithms; it’s about empowering professionals with tools that enhance their expertise and efficiency. Guiding this team has shown me the impact that thoughtful, well-crafted AI can have in transforming the way professionals work, and it reinforces our dedication to continuous innovation in this space.

Check out my article to learn more about the need for data curation and data stewardship in developing professional-grade AI solutions.

This is a guest post from Noah Pruzek, head of Technology Systems, Thomson Reuters.

The Progressive Rise of Generative AI: A Conversation With David Wong and Joel Hron
Wed, 30 Oct 2024

In honor of the one-year anniversary of the first episode of TechConnect, this conversation highlights the progressive rise of generative AI in the past year.

“As fast as it started, it really feels like in the last year, there’s been an even more rapid acceleration, and many companies racing to become leaders in this field, including Thomson Reuters,” said Joel Hron, chief technology officer, Thomson Reuters.

Hron and David Wong, chief product officer, Thomson Reuters, shared their takes on the most significant advances in generative AI technology, including improved accessibility, with more developer tools, reduced costs, and more out-of-the-box capabilities.

Wong said he’s most excited about large language models’ ability to have longer context windows, enabling them to keep more information in their short-term memory and answer ever-more complex questions.

“That’s critical for the way Thomson Reuters uses a lot of these models,” Wong said.

“The agentic behaviors of the models have become more robust in their ability to plan and to reason over complex information,” Hron added.

They also discussed balancing the need to innovate and go fast with the need for ethical, responsible and high-quality AI development.

Wong noted how Thomson Reuters is best positioned to develop professional-grade AI, grounded in fact and data. He emphasized customers’ need for measurable solutions, so they can discern tools’ accuracy rates, as well as the need for security and privacy.

Wong said Thomson Reuters has the scale and infrastructure to understand customers’ needs and develop solutions to solve their biggest challenges, guided by a philosophy and process that ensures the right balance between moving fast and ensuring quality.

Hron said the company’s human-centric approach to AI development is key.

“Our human expertise at Thomson Reuters and the level of rigor and quality we put behind both our content and our products for many years has really been a cornerstone of our brand,” Hron said.

Hron said the iterations between technology and domain experts are crucial to how Thomson Reuters helps customers streamline their workflows with AI, such as with AI-Assisted Research on Westlaw Precision and CoCounsel Core.

They also highlighted the Thomson Reuters acquisition of Materia, an AI assistant and platform for accounting and auditing professionals.

“It’s a reinforcement of our belief in AI assistants being in the hands of every professional and a reinforcement of our commitment around AI across our entire product portfolio,” Hron said.

He added that Materia’s strengths have included leaning into the long context and multimodal capabilities of generative AI as well as enabling agentic behavior.

Hear more of Wong and Hron’s insights on Materia as well as the evolution of generative AI in this episode of the TechConnect series, which brings diverse and dynamic perspectives from all corners of the technology world with thought-provoking questions and conversation.

Legal AI Benchmarking: CoCounsel
Wed, 23 Oct 2024

We’re excited to share a detailed look into our testing program for CoCounsel, including specific methodologies for evaluating its skills. We aim not only to showcase the steps we take to ensure CoCounsel’s reliability, but also to contribute to broader benchmarking efforts in the legal AI industry. Though it’s challenging to establish universal benchmarks in such a diverse field, we’re engaging with industry stakeholders to work toward the shared goal of elevating the reliability and transparency of AI tools for all legal professionals.

Why evaluating legal skills is complicated

Traditional legal benchmarks usually rely on multiple-choice, true/false, or short-answer formats for easy evaluation. But these methods aren’t enough to assess the complex, open-ended tasks that lawyers encounter daily and that large language model (LLM)-powered solutions like CoCounsel are built to perform.

CoCounsel’s skills produce nuanced outputs that must meet multiple criteria, including factual accuracy, adherence to source documents, and logical consistency. These are difficult outputs to evaluate using true/false tests. On top of that, assessing the “correctness” of legal outputs can be subjective. For instance, some users prefer detailed summaries while others prefer concise ones. Neither is “wrong”; it simply comes down to preference, which makes it difficult to automate evaluations consistently.

To make it even more complicated, each CoCounsel skill often involves multiple components, with the LLM handling only the final stage of answer generation. For example, the Search a Database skill first uses various non-LLM-based search systems to retrieve relevant documents before the LLM synthesizes an answer. If the initial retrieval process is substandard, the LLM’s performance will be compromised. So our evaluation must consider both LLM-based and non-LLM-based aspects to make sure our assessment of the whole is accurate.
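That two-stage structure suggests scoring each stage separately, so a failing answer can be traced to bad retrieval or to bad synthesis. A minimal sketch, with all interfaces assumed for illustration rather than taken from the actual platform:

```python
def evaluate_pipeline(queries, retrieve, synthesize, relevant, grade):
    """Score the retrieval stage and the end-to-end answer separately,
    so a failure can be attributed to search or to the LLM stage.
    `relevant(q)` returns the set of documents a query should surface;
    `grade(q, answer)` stands in for ideal-response comparison."""
    recall_hits, answer_passes = 0, 0
    for q in queries:
        docs = retrieve(q)                 # non-LLM search stage
        if relevant(q) & set(docs):        # did we fetch any relevant doc?
            recall_hits += 1
        answer = synthesize(q, docs)       # LLM stage, conditioned on docs
        if grade(q, answer):
            answer_passes += 1
    n = len(queries)
    return {"retrieval_recall": recall_hits / n, "answer_pass_rate": answer_passes / n}
```

A high answer pass rate with low retrieval recall would indicate the grader is being fooled or the test set is too easy; low recall with low pass rate points the fix at search, not the model.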

How we benchmark

Our benchmarking process begins long before putting CoCounsel through its paces. Whenever a significant new LLM is released, we test it across a wide suite of public and private legal tests, such as the dataset created by our Stanford collaborators, to assess its aptitude for legal review and analysis. We then integrate the LLMs that perform well in these initial tests with the CoCounsel platform, in a staging environment, to evaluate how they perform under real-world conditions.

Then we use an automated platform to run a battery of test cases created by our Trust Team (more on this below) to evaluate the output of this experimental integration. If the results are promising, we conduct additional manual reviews with a skilled team of attorneys. When we see an improvement in performance over previous benchmarks, we start discussing as a team how it might improve the CoCounsel experience for our users.

How we test

Our Trust Team has been around as long as CoCounsel has. This group of experienced attorneys from diverse backgrounds – in-house counsel, large and small law firms, government, public policy – is dedicated to continuous, rigorous testing of CoCounsel’s performance.

We continue to follow a process that’s been integral to all our performance evaluation since CoCounsel’s inception: Our Trust Team creates tests representative of the real work attorneys use CoCounsel for and runs these tests against CoCounsel skills. When creating a test, they first consider what the skill is for and how it might be used, based on their own insights, customer feedback, and secondary sources. Once the test is created, the attorney tester manually completes the test task, just as a lawyer would, to create an answer key – what we refer to as an “ideal response.” These tests and their corresponding ideal responses then undergo peer review. Being this meticulous is crucial, because the quality of our ideal responses determines the benchmark for a passing score.

Once the ideal response has been created, a member of the Trust Team runs the test, using the applicable CoCounsel skill to complete the task just as a user would. An attorney tester reviews the output, referred to as our “model response,” comparing it to the ideal response point by point, identifying differences and assessing whether they deviate from the ideal in a way, or to a degree, that would make the skill’s output incomplete, incorrect, or misleading. It is important to note that a test can fail for many reasons, not just because an answer is outright wrong. For instance, some skills (e.g., Summarize) are more inherently subjective than others and do not lend themselves to a right or wrong answer. Accordingly, even though attorneys’ opinions of the “correct” answer can reasonably vary, we maintain the highest grading standards and fail a test if it is missing a single detail the attorney tester considers critical.
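One way to model that point-by-point comparison is as a checklist of required details, where any missing critical detail fails the test. This is a deliberately simplified sketch – the real grading is performed by attorneys and LLM evaluators following attorney-written instructions, not by substring matching, and all names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Detail:
    text: str
    critical: bool  # a single missing critical detail fails the test

def grade(model_response: str, ideal_details: list[Detail]) -> tuple[bool, list[str]]:
    """Check the model response against the details of the ideal response.
    Returns (passed, missing_critical_details)."""
    missing = [d.text for d in ideal_details
               if d.critical and d.text.lower() not in model_response.lower()]
    return (not missing, missing)
```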

Based on this manual evaluation, we create a set of evaluation instructions tailored to each skill’s ideal output, capturing the needs of each skill so an LLM can match our human reviewers’ process. To illustrate, let’s look at a real example using CoCounsel’s Extract Contract Data skill. After running the test in our automated evaluation platform, here’s what an attorney reviewer would see when reconciling the model response with the LLM’s evaluation of it:

  • Question: If the customer is acquired through a merger, can the agreement be assigned to the acquiring entity without obtaining the provider’s prior written consent?
  • Trust Team ideal response:
    • Short answer: Yes, with 30 days’ prior written notice.
    • Detailed answer: Yes, if the customer is acquired through a merger, the agreement can be assigned to the acquiring entity without obtaining the provider’s prior written consent; the customer is only required to provide the provider with 30 days’ prior written notice. The clause specifies that either party may assign its rights or delegate its obligations without such consent, provided that 30 days’ prior written notice is given to the other party, to an entity that acquires all or substantially all of the business or assets of such party, including by merger.
  • Model response:
    • Short answer: Yes, with notice.
    • Detailed answer: The agreement can be assigned to the acquiring entity without obtaining the provider’s prior written consent if the customer is acquired through a merger. The clause provides an exception to the general rule requiring consent for assignment. Specifically, it states that either party may assign its rights or delegate its obligations without such consent to an entity that acquires all or substantially all of the business or assets of the party, including through a merger, provided that 30 days’ prior written notice is given to the other party.
  • Automated evaluation analysis: The model response captures the correct information that the agreement can be assigned to the acquiring entity without obtaining the provider’s prior written consent but requires 30 days’ prior written notice. However, it incorrectly states the short answer as “Yes, with notice” instead of “Yes, with 30 days’ prior written notice,” as per the ideal answer. This is a minor difference and does not change the meaning significantly.

In this instance, the model response included a minor discrepancy from the attorney-authored ideal response. But the LLM’s evaluation of the response accurately determined that the answer remained sufficient, because it captured the complete notice requirement elsewhere in the response.

Our ideal-response approach provides two key advantages over assertion-based evaluations. It excels at identifying deviations from attorney expectations, including hallucinations. And it pinpoints extraneous or inconsistent information that, while not technically a hallucination, could render even a complete response incorrect by introducing logical inconsistencies – which also results in a failing score.

We rely on our Trust Team to create well-defined ideal responses and auto-evaluation instructions and to determine whether a test case passes or fails. A skill’s output definitively fails if it falls short of this ideal because of material omissions, factual incorrectness, or hallucinations. However, we recognize that many legal issues aren’t black-and-white, and the “correct” answer can be open to reasonable disagreement. To address this, we peer review ideal responses in cases where the answer might require a second opinion. And we may eliminate tests when we find insufficient agreement among the attorney testers. This is how we ensure that our passing criteria remain rigorous while accounting for the nuanced nature of legal analysis.

Maintenance and improvement

Creating a skill test set is only the beginning. Once we begin using it, the Trust Team continually monitors and refines it, manually reviewing failure cases from the automated tests and spot-checking passing samples to make sure the automated evaluation is in line with human judgment. We also regularly add tests to cover more use cases and capture user-reported issues, which can lead to further iterations of the tests submitted for automated evaluation and their success criteria.

By following this process, we execute more than 1,500 tests every night, across all CoCounsel skills, on our automated platform under attorney oversight – which, combined with manual testing, means we’ve run more than 1,000,000 tests since CoCounsel’s launch. And it empowers us to quickly identify areas for improvement, which is vital to ensuring CoCounsel remains the most trustworthy AI legal assistant available.
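Nightly pass rates are most useful when regressions surface automatically. Here is a sketch of one plausible approach – not a description of the actual platform – comparing per-skill pass rates between two runs:

```python
def find_regressions(previous: dict[str, float], current: dict[str, float],
                     tolerance: float = 0.02) -> list[str]:
    """Flag skills whose nightly pass rate dropped by more than `tolerance`
    versus the prior run, so reviewers can triage failures first."""
    return sorted(
        skill for skill, rate in current.items()
        if rate < previous.get(skill, 0.0) - tolerance
    )
```

The tolerance absorbs normal run-to-run noise so only meaningful drops trigger manual review.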

Conclusion

In a previous post, we explored what it means for an AI tool to be “professional-grade” and why that standard is crucial for professionals in high-stakes fields like law. This post takes that concept further by diving into how we benchmark CoCounsel to ensure it meets those rigorous standards. By understanding the extensive testing that goes into evaluating its performance, you can see how CoCounsel consistently delivers the reliability and accuracy expected of a true professional-grade GenAI solution.

To promote the transparency my team and I believe is necessary in the legal AI field, we’ve decided to release, for the first time, some of our performance statistics along with a sample of the tests used to arrive at those figures, applying the criteria referenced in this article. Check out our results.

This is a guest post from Jake Heller, head of CoCounsel, Thomson Reuters.

Unlocking the full potential of professional-grade GenAI for your work
Tue, 15 Oct 2024

Today, nearly two years since ChatGPT debuted, GenAI continues to dominate our cultural and professional conversations. Even as its adoption for work steadily increases, the biggest concern for most professionals – 70% of them – is accuracy of output.

However, not using GenAI for work at all is not an option. 77% of professionals believe AI will have a high or transformational impact on their work over the next five years, and 78% call AI a “force for good” in their profession. In fact, 50% of law firms named AI among their top five strategic priorities for the next 18 months. If there was still any doubt, there definitely isn’t anymore: GenAI is here to stay.

So how can conscientious – and forward-thinking – professionals make the most of this generational technology while guarding against its drawbacks? How do you know if the GenAI solution you’re considering will live up to your professional obligation to work ethically and ensure your clients’ data is securely handled? Is any GenAI product trustworthy? Is it even possible for tools built on large language models (LLMs) such as GPT-4o from OpenAI and Google’s Gemini, all of which are known to hallucinate, to be safe enough to use professionally?

Yes, it is possible. “Built on” is the key. When we launched our GenAI assistant, CoCounsel, our product and engineering teams delivered on the challenge of creating a product that could take advantage of LLMs’ tremendous raw power while eliminating as many as possible of the serious limitations – like hallucinations – that curb the models’ professional utility when used on their own. What makes the current generation of LLMs truly extraordinary, then, is not what they alone can do, but what they enable.

Using a model directly calls for great caution and exposes users to risk if they rely on the output professionally. CoCounsel, on the other hand, harnesses that power inside robust, well-tested accuracy, privacy, and security controls. In short: LLMs are the world’s most incredible engines. CoCounsel uses that engine to take you incredible places – places you couldn’t reach without these LLMs – safely.

Why can professionals trust CoCounsel?

We’ve applied our technical and domain expertise to leading LLMs in creating and continuing to optimize CoCounsel, a first-of-its-kind product that both does more than LLMs can and corrects the problems that make them unsuitable on their own.

In short, CoCounsel is a professional-grade GenAI assistant. And no professional should use a GenAI solution that isn’t.

What does it take for a GenAI assistant to be professional-grade? At a bare minimum – without which it should not be trusted for your work – it must be:

  1. Built for domain-specific use and grounded in reliable sources of data relevant to that use. A professional-grade solution, such as CoCounsel, harnesses the power of LLMs but limits the source of knowledge to known, reliable data sources – such as profession-specific domains or professionals’ or their clients’ databases – which rigorously limits the possibility of inaccuracies.
  2. Built to make verifying its output easy. CoCounsel was not designed to replace the role of the professional, but rather to help them accomplish more and higher-quality work in less time. So just as lawyers review all work delegated to a junior associate or paralegal, they must validate CoCounsel’s output. We’ve made it easy to do so: all answers link to their origin in the source documents, so it’s simple to “trust, but verify.”
  3. Developed by technical teams with deep GenAI expertise. Though GenAI has only been broadly talked about since 2022, it’s been around since 1961. Thomson Reuters AI engineers and research teams have worked with LLMs since their invention, were among the first to build with GPT-4, and have invented patented approaches to applying LLMs to professional use cases.
  4. Continually and consistently tested and authenticated by a dedicated team of domain experts. Thomson Reuters AI engineers and Trust Team attorneys together filter, rank, and score CoCounsel’s responses to a daily battery of thousands of tests developed to simulate real-life legal use cases and ensure the assistant’s answers are consistent and accurate. To date we’ve run more than 1,000,000 such tests against CoCounsel.
  5. Secure and private, because it interacts with third-party LLMs the right way. Thomson Reuters GenAI solutions access third-party LLMs through dedicated, private servers, and through an “eyes off” API. No LLM partner employees can see customer queries or documents, and our LLM access is contractually “zero retention.” Our LLM partners cannot store customer data longer than it takes to process the request. Our product data is never used to train any third-party models. And all product data is encrypted in transit and at rest, and subject to Thomson Reuters’ rigorous security policies and practices.
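To make criterion 2 concrete, answers that “link to their origin in the source documents” can be represented as answer text plus citation offsets into the sources. This is a hypothetical sketch of the idea, not Thomson Reuters’ actual schema:

```python
from dataclasses import dataclass

@dataclass
class Citation:
    doc_id: str
    start: int  # character offsets into the source document
    end: int

@dataclass
class Answer:
    text: str
    citations: list[Citation]

def quoted_spans(answer: Answer, documents: dict[str, str]) -> list[str]:
    """Resolve each citation to the exact source passage so a reviewer
    can 'trust, but verify' without re-reading whole documents."""
    return [documents[c.doc_id][c.start:c.end] for c in answer.citations]
```

Storing offsets rather than paraphrased quotes means the verification step always shows the source verbatim.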

Why “makes minimal mistakes” isn’t enough

As important as the above five characteristics are, they’ve become price-of-entry criteria for professional-grade GenAI. And given how rapidly the technology is evolving, what you expect from a professional-grade solution should evolve as well. Remember: GenAI has the power to do so much more than help you complete jobs. It can transform what it means to be a professional, freeing you for more strategic, creative, valuable work that a machine just cannot do – which can transform not only how you do business, but also how much business you do.

To take full advantage of this potential, you need a GenAI assistant that fulfills two more key requirements:

1. Professional-grade means intelligently and seamlessly handling workflows, not just completing a series of tasks. A true GenAI assistant goes beyond responding to your requests, instead guiding you through the steps required to finish long, complex, even open-ended projects. Only this kind of product, such as CoCounsel, can truly unlock the full potential of GenAI.

Through an expanding set of capabilities and deep connections with both your documents and the tools you use every day – e.g., Microsoft 365 – a professional-grade GenAI assistant can carry an entire deliverable from task to task, program to program, in a continuous stream, prompting you forward through the next steps while simultaneously handling multiple pieces of the work itself.

CoCounsel is built for workflows. It’s accessible across multiple Thomson Reuters products, bringing together fundamental capabilities such as summarization and document review with specialized functions such as legal research, for a smooth transition from one type of work to the next. And it’s integrated with both Microsoft 365 and document management systems, available literally wherever you’re working, from client communication to research to document drafting and beyond.

2. Professional-grade means providing partnership, not just product. Without in-depth, sustained support, you’re unlikely to get the most possible value from your investment. A professional-grade vendor offers a success and support team that will be there for you long after you’ve signed a subscription agreement. They work with you to deeply understand your most prevalent use cases and how their solution will help you tackle them, and of course are available when something’s not working the way it should. They keep you informed about product changes and improvements and continue creating content you can use to increase your knowledge, such as videos, webinars, and written materials. A truly professional-grade team has a vision for their GenAI assistant, ensuring it will increase in power and capability as the technology advances. And most important, they are as invested in you as you are in them.

Upon adopting CoCounsel, legal professional users are trained by the Thomson Reuters Customer Success team, made up of licensed attorneys, many still practicing, with dozens of years of combined experience in both litigation and transactional law. And many of these trainers are prompt engineering specialists who, in addition to being licensed attorneys, have a background in computer science. After onboarding, we ensure everyone using CoCounsel has the opportunity to attend live trainings and watch recorded webinars, get individual help through live chat and email, and access dozens of video tutorials – a resource pool that will only keep growing.

This is a guest post from Jake Heller, head of CoCounsel, Thomson Reuters, and Erin Nelson, CoCounsel content strategist, Thomson Reuters.
