Partnerships Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/innovation-topics/partnerships/ Thomson Reuters Institute is a blog from , the intelligence, technology and human expertise you need to find trusted answers. Thu, 16 Apr 2026 12:16:52 +0000 en-US hourly 1 https://wordpress.org/?v=6.8.3 AI Is Taking Action. No One Is Accountable. /en-us/posts/innovation/ai-is-taking-action-no-one-is-accountable/ Thu, 16 Apr 2026 12:16:52 +0000 https://blogs.thomsonreuters.com/en-us/?post_type=innovation_post&p=70448 The lawyer is still accountable. The AI system acting on her behalf is not. That gap is no longer theoretical.

After convening the first meeting of the Trust in AI Alliance, it is clear this mismatch is emerging as one of the biggest barriers to enterprise AI deployment.

As AI systems move from answering questions to taking action inside professional workflows, a fundamental mismatch is emerging. Execution shifts to the system. Responsibility still sits with the human.

In agentic systems, that model is being reconfigured, but there is still no clear answer to a critical question: how does a human maintain accountability as more of the work is executed by the system?

That question was at the center of the inaugural convening of the Trust in AI Alliance, a group bringing together leaders across model development, infrastructure, and enterprise AI deployment, where participants from OpenAI, Google, Anthropic, AWS, and discussed what trustworthy agentic systems require in practice.

A clear theme emerged: AI capability is accelerating faster than accountability.

Most systems today are not designed for that standard.

The Shift No One Is Talking About

In the first wave of AI, the defining question was whether a system could produce a correct answer. That is no longer enough.

As AI systems take on multi-step tasks across real workflows, the question is shifting from accuracy to accountability.

As Michael Gerstenhaber, Vice President of Product Management at Google, said during the discussion: “Delegating agency to a synthetic agent implies trust. The more you delegate, the more you need observability, tracing, and audit. It is not one feature. It is defense in depth.”

In traditional professional environments, accountability is clear. Humans determine relevance, review source material, verify outputs, and take responsibility for outcomes. In agentic systems, that model is evolving.

Retrieval is automated. Context is lost across steps. Outputs appear grounded in source material without preserving fidelity. Tools execute beyond the user’s visibility.

As Frank Schilder, Senior Principal Scientist at , noted: “When we move to an agentic workflow, we automate steps that professionals used to perform manually and that introduces new risks: Context can be silently dropped. Source fidelity can become fragile. Maintaining clear accountability becomes more complex.”

These are not edge cases. They are structural risks.We are automating the work, but not accountability.

If You Can’t Inspect It, You Can’t Trust It

In regulated industries, trust has never meant blind confidence. It has always meant the ability to verify. That standard is now colliding with how many AI systemsoperate.

Accuracy drives experimentation. Inspection determines adoption.

If a system cannot show its work, it cannot be trusted in high-stakes environments.

As Gayle McElvain, Head of TR Labs at , put it: “Errors create liability. For many professionals, trust means ‘trust but verify.’ That means building AI systems where verification is built in.”

Across the discussion, several consistent priorities emerged around what trustworthy systems must provide:

    • Step-by-step auditability
    • Traceable reasoning and inspectable tool use
    • Durable logs and process artifacts
    • Clear, persistent provenance

This is not a feature. It isinfrastructure.

Trust Breaks When Source Integrity Breaks

In knowledge-based professions, trust depends on the integrity of source material.

Agentic systems introduce new failure modes. They may paraphrase where precision is required. They may surface outdated information. They may blur the boundary between authoritative sources and generated reasoning.

These are not cosmetic issues. A single altered word in a statute can change its meaning. A misapplied version of a regulation can create real consequences.

As Zach Brock, Engineering Lead at OpenAI, described: “We are moving toward agents that share durable scratch spaces. Citations, version identifiers, and hashes of source material can travel through a workflow without being compressed away.”

That level of persistence isnot a technicaldetail. It is what makes accountability possible.
Without it, professionals cannot trace how an answer was constructed or verify whether it reflects the correct source at the correct point in time. Without it, accountability breaks.

Accountability does notemergeautomatically from more capable systems. It must be explicitly defined.

As ByronCook, Director of Automated Reasoning at AWS, said: “With AI, some of those socio-technical mechanisms go away. Wehave todefine the dividing line between behaviors weacceptand those we do not—and enforce that symbolically. Without that, accountability cannot bemaintainedas systems take on more of the work.”

This Is a Systems Problem

Much of today’s AI development isoptimizedfor performance benchmarks. But in real-world environments, performance is only part of the equation.

As ScottWhite, Head of Product, Enterprise at Anthropic, noted: “Benchmarks measure whether a model can do the task.Enterprises are asking a bigger question: will the system around it hold up in the environments where the workactually happens?A trustworthy agentrequiresthe model, the boundaries around it, and the record of what it did. Getting all three right is what turns AI from a powerful tool into asystem enterprisescan trust with important work.That’swhat will drive the next wave of adoption.”

Trustworthy systems must be designed to operate safely under pressure, with clear boundaries and strong safeguards.

That requires:

    • Clear separation between system instructions and external content
    • Built-in safeguards against prompt injection and data leakage
    • Continuous monitoring and testing
    • Audit trails aligned with regulatory expectations

Agentic AI is not just a model challenge. It is a governance challenge.

The Next Phase of AI

We are entering a new phase of AI adoption, one defined not by experimentation, but by deployment inside real workflows.

The industry is shifting from outputs to systems, from benchmarks to reliability, and from capability to accountability.

But this shift will not happen automatically. It requires new standards for auditability, clearer approaches to provenance, and systems designed to preserve truth and responsibility across every step of a workflow. These are solvable problems—but only if accountability is designed into the system from the start.

The organizations that solve this will define the next generation of AI.

In high-stakes domains, trust is not optional.

It is not a feature. It is the product.

The Trust in AI Alliance was announced in January to bring together leaders across the AI ecosystem to advance practical standards for accountability, transparency, and trust in AI systems. The group will continue to meet regularly, withselectinsights from those discussions shared publicly.

]]>
and Ecosystem Partners Bring PPC Methodology into AI‑Powered Audit Workflows /en-us/posts/innovation/thomson-reuters-and-ecosystem-partners-bring-ppc-methodology-into-ai%e2%80%91powered-audit-workflows/ Mon, 01 Dec 2025 14:05:22 +0000 https://blogs.thomsonreuters.com/en-us/?post_type=innovation_post&p=68604 When I talk with audit leaders today, I hear the same things: tight capacity, rising expectations, evolving standards, and a flood of AI tools that are hard to evaluate. Firms want to modernize their audit firms, but not at the expense of quality, documentation, or compliance.

At , our starting point is, and will remain, methodology. For decades, firms have relied on PPC methodology as the gold standard for audit quality, documentation, and compliance. Our vision for AI in auditing builds on that foundation. We’re not asking firms to change how they practice. We’re focused on making PPC the most AI‑automated audit methodology in the market—through our own products and through deep partnerships with innovators our customers already trust.

That’s the idea behind our recent partnerships with , , , , , and . Together, we’re embedding PPC into AI‑driven tools across We’re supporting AI-powered automation with , so firms can automate more work while staying grounded in the trusted methodology they already rely on.

Trullion: Methodologyaware automation for financial statement review

Financial statement review is one of the most judgment‑intensive parts of the audit—but many of the underlying procedures are repeatable. Our integration with Trullion brings AI‑native automation to financial statement review and testing, with full traceability back to PPC methodology and the relevant guidance at every step.

Artie Minson, CEO at Trullion, describes the shift: “This partnership signals a new era for audit automation and lays the foundation for trusted and truly agentic workflows. Our vertical AI solution is built for auditors by auditors, ensuring our outputs are within the framework of professional standards. This integration creates methodology-aware automation. Auditors can now focus their time on applying judgment to fully evidenced, agentic outputs, rather than searching for them, delivering audits with unmatched efficiency, accuracy, and quality.”

For us, “methodology‑aware” is key: automation is valuable only when it operates within the same professional framework firms already use to define, document, and support their work.

Audit Sight: Substantive analytics that reduce testing

Substantive testing is another area where firms feel the strain. Even when technology is available, many teams still default to large samples and manual procedures.

As T.C. Whittaker, Co‑Founder and CEO of Audit Sight, puts it: “Audit firms are seeking smarter ways to expand capacity and elevate quality without adding headcount. Bringing automated testing together with ’ PPC methodology — and enabling it through Guided Assurance — is the ultimate unlock for auditors. It transforms the audit plan itself, making it intelligent and dynamic by tailoring procedures, eliminating unnecessary tests, and reducing sample sizes based on automated evidence and client-specific risk. This partnership represents a shared vision to redefine how assurance is delivered in the modern era.”

Crunchafi: Automating lease procedures inside PPC

Lease accounting has become a complex, time‑consuming area for many firms. Too often, teams spend hours on calculations and reconciliations instead of higher‑value work.

By integrating Crunchafi into Guided Assurance, we bring seamless lease accounting automation directly into PPC‑based workflows, eliminating manual lease calculations and providing audit-ready journal entries, amortization schedules and footnote disclosures while preserving firms’ established methodology.

Mike Cooke, CRO atCrunchafi, explains: “Audit teams want efficiency without sacrificing quality. By aligningCrunchafi’s automation with the PPC Methodology, we’re giving firms a clearer, more reliable way to handle lease accounting from the start of the engagement to the final deliverable.”

This is the pattern we’re aiming for: automation that plugs into how firms already work, rather than asking them to start from scratch.

Fieldguide: Empowering Firms with Flexible Paths to Automate PPC Methodology

Many firms also want a more connected environment where methodology, evidence, and automation all live together. Our goal is to meet firms where they are – and give them options.

That is why we’ve partnered with Fieldguide to embed Guided Assurance—which delivers PPC methodology—directly into Fieldguide’s professional‑grade agentic AI platform. This creates a unified experience where trusted PPC content and intelligent automation collaborate to execute engagements efficiently and consistently.

Whether firms choose to automate audits with or Fieldguide, they can be confident they’re using the most trusted and automated methodology in the profession. This flexibility reflects our commitment to innovation and the unique needs of our customers.

Jin Chang, Co-Founder and CEO of Fieldguide says: “Firms are under pressure to do more with less. They need trusted methodology and AI agents that work the way they do. By embedding PPC methodology into our platform, we’re helping firms deliver higher quality work with more consistency and less effort. This partnership reflects a shared commitment to the future of the profession.”

Validis: Data as the foundation for AIdriven auditing

AI is only as good as the data behind it. For many firms, getting clean, audit‑ready data from clients is one of the toughest operational challenges.

Through , our work with Validis focuses on solving that. Validis powers secure, on‑demand ingestion of client trial balance, general ledger, and subledger data directly into Audit Intelligence. From there, we use AI and machine learning to focus testing on high‑risk areas, segment populations by risk, and reduce the number of items to be tested, with anomaly detection automatically surfacing unusual items and generating the required documentation.

As Jeff Gramlich, Managing Director at Validis, explains: “We’re excited to collaborate with , a true market leader and innovator, to deliver audit-ready data through our cutting-edge ingestion capabilities. This partnership provides auditors with the data breadth and granularity crucial for effective AI-driven auditing. By integrating our technology into the Audit Intelligence suite, we’re empowering auditors to conduct data-driven audits with enhanced efficiency and risk analysis, ultimately transforming the process to benefit both auditors and their clients.”

Valid8 Financial: Turning evidence gathering into an automated workflow

Finally, there’s the everyday work of matching samples to evidence and documenting that work in a way that stands up to inspection and peer review. This is some of the most manual and time‑consuming work in an audit.

, developed with Valid8 Financial, automates the matching and documentation of samples to supporting evidence, dynamically tracing accounting transactions to banking activity to confirm occurrence. It brings technology traditionally used in advisory, forensic, and financial crime work into an integrated audit workflow.

Brett Suchor, CEO of Valid8 Financial, says: “We built our technology to solve real problems auditors face every day – reducing the manual, time-consuming work of matching samples to evidence. Through our collaboration with , we’re delivering a faster, more reliable testing experience to audit professionals across the industry.”

The future of Audit

In the next 3-5 years, we’re going to see big changes to the audit profession. Audit is moving decisively toward an automated, data-driven future. Using the right tools to increase efficiency and quality so teams can focus on higher risk areas and deliver better outcomes for clients is paramount.

In today’s environment, firms are being asked to do more with less, navigating tighter deadlines, increasing complexity, and growing client expectations. At , we are bringing auditors advanced audit technologies, with ournewest audit solutionsincreasing efficiency and accuracy.

We’ll keep investing in our own AI capabilities and in this partner ecosystem so firms can modernize at their own pace, on their own terms—without walking away from the methodology that has served them well for decades.

This post was authored by Dave Wyle, General Manager of Audit at .

]]>