Industry Insights Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/innovation-topics/industry-insights/ Thomson Reuters Institute is a blog from , the intelligence, technology and human expertise you need to find trusted answers. Thu, 16 Apr 2026 12:16:52 +0000 en-US hourly 1 https://wordpress.org/?v=6.8.3 AI Is Taking Action. No One Is Accountable. /en-us/posts/innovation/ai-is-taking-action-no-one-is-accountable/ Thu, 16 Apr 2026 12:16:52 +0000 https://blogs.thomsonreuters.com/en-us/?post_type=innovation_post&p=70448 The lawyer is still accountable. The AI system acting on her behalf is not. That gap is no longer theoretical.

After convening the first meeting of the Trust in AI Alliance, it is clear this mismatch is emerging as one of the biggest barriers to enterprise AI deployment.

As AI systems move from answering questions to taking action inside professional workflows, a fundamental mismatch is emerging. Execution shifts to the system. Responsibility still sits with the human.

In agentic systems, that model is being reconfigured, but there is still no clear answer to a critical question: how does a human maintain accountability as more of the work is executed by the system?

That question was at the center of the inaugural convening of the Trust in AI Alliance, a group bringing together leaders across model development, infrastructure, and enterprise AI deployment, where participants from OpenAI, Google, Anthropic, AWS, and discussed what trustworthy agentic systems require in practice.

A clear theme emerged: AI capability is accelerating faster than accountability.

Most systems today are not designed for that standard.

The Shift No One Is Talking About

In the first wave of AI, the defining question was whether a system could produce a correct answer. That is no longer enough.

As AI systems take on multi-step tasks across real workflows, the question is shifting from accuracy to accountability.

As Michael Gerstenhaber, Vice President of Product Management at Google, said during the discussion: “Delegating agency to a synthetic agent implies trust. The more you delegate, the more you need observability, tracing, and audit. It is not one feature. It is defense in depth.”

In traditional professional environments, accountability is clear. Humans determine relevance, review source material, verify outputs, and take responsibility for outcomes. In agentic systems, that model is evolving.

Retrieval is automated. Context is lost across steps. Outputs appear grounded in source material without preserving fidelity. Tools execute beyond the user’s visibility.

As Frank Schilder, Senior Principal Scientist at , noted: “When we move to an agentic workflow, we automate steps that professionals used to perform manually and that introduces new risks: Context can be silently dropped. Source fidelity can become fragile. Maintaining clear accountability becomes more complex.”

These are not edge cases. They are structural risks.We are automating the work, but not accountability.

If You Can’t Inspect It, You Can’t Trust It

In regulated industries, trust has never meant blind confidence. It has always meant the ability to verify. That standard is now colliding with how many AI systemsoperate.

Accuracy drives experimentation. Inspection determines adoption.

If a system cannot show its work, it cannot be trusted in high-stakes environments.

As Gayle McElvain, Head of TR Labs at , put it: “Errors create liability. For many professionals, trust means ‘trust but verify.’ That means building AI systems where verification is built in.”

Across the discussion, several consistent priorities emerged around what trustworthy systems must provide:

    • Step-by-step auditability
    • Traceable reasoning and inspectable tool use
    • Durable logs and process artifacts
    • Clear, persistent provenance

This is not a feature. It isinfrastructure.

Trust Breaks When Source Integrity Breaks

In knowledge-based professions, trust depends on the integrity of source material.

Agentic systems introduce new failure modes. They may paraphrase where precision is required. They may surface outdated information. They may blur the boundary between authoritative sources and generated reasoning.

These are not cosmetic issues. A single altered word in a statute can change its meaning. A misapplied version of a regulation can create real consequences.

As Zach Brock, Engineering Lead at OpenAI, described: “We are moving toward agents that share durable scratch spaces. Citations, version identifiers, and hashes of source material can travel through a workflow without being compressed away.”

That level of persistence isnot a technicaldetail. It is what makes accountability possible.
Without it, professionals cannot trace how an answer was constructed or verify whether it reflects the correct source at the correct point in time. Without it, accountability breaks.

Accountability does notemergeautomatically from more capable systems. It must be explicitly defined.

As ByronCook, Director of Automated Reasoning at AWS, said: “With AI, some of those socio-technical mechanisms go away. Wehave todefine the dividing line between behaviors weacceptand those we do not—and enforce that symbolically. Without that, accountability cannot bemaintainedas systems take on more of the work.”

This Is a Systems Problem

Much of today’s AI development isoptimizedfor performance benchmarks. But in real-world environments, performance is only part of the equation.

As ScottWhite, Head of Product, Enterprise at Anthropic, noted: “Benchmarks measure whether a model can do the task.Enterprises are asking a bigger question: will the system around it hold up in the environments where the workactually happens?A trustworthy agentrequiresthe model, the boundaries around it, and the record of what it did. Getting all three right is what turns AI from a powerful tool into asystem enterprisescan trust with important work.That’swhat will drive the next wave of adoption.”

Trustworthy systems must be designed to operate safely under pressure, with clear boundaries and strong safeguards.

That requires:

    • Clear separation between system instructions and external content
    • Built-in safeguards against prompt injection and data leakage
    • Continuous monitoring and testing
    • Audit trails aligned with regulatory expectations

Agentic AI is not just a model challenge. It is a governance challenge.

The Next Phase of AI

We are entering a new phase of AI adoption, one defined not by experimentation, but by deployment inside real workflows.

The industry is shifting from outputs to systems, from benchmarks to reliability, and from capability to accountability.

But this shift will not happen automatically. It requires new standards for auditability, clearer approaches to provenance, and systems designed to preserve truth and responsibility across every step of a workflow. These are solvable problems—but only if accountability is designed into the system from the start.

The organizations that solve this will define the next generation of AI.

In high-stakes domains, trust is not optional.

It is not a feature. It is the product.

The Trust in AI Alliance was announced in January to bring together leaders across the AI ecosystem to advance practical standards for accountability, transparency, and trust in AI systems. The group will continue to meet regularly, withselectinsights from those discussions shared publicly.

]]>
The Manufacturing Compliance Problem Is Vertical by Nature /en-us/posts/innovation/the-manufacturing-compliance-problem-is-vertical-by-nature/ Wed, 15 Apr 2026 20:07:39 +0000 https://blogs.thomsonreuters.com/en-us/?post_type=innovation_post&p=70408 The compliance challenges confronting manufacturing are unlike those of any other industry. Multi-tier supplier networks with thousands of vendors require continuous monitoring for possible sanctions, violations, evidence of forced labor, and any hints of financial instability. A single tariff change can cascade across trade compliance, which affects transfer pricing calculations and procurement decisions, requiring contract updates spanning tax, trade, legal, and risk functions at once.

Managing that chain reaction requires shared data across functions and AI with true manufacturing domain expertise. Most companies have neither. As a result, compliance gaps damage customer experience for 54% of manufacturers, according to Forrester, directly impacting revenue and loyalty.

Two Problems, One Crisis

Manufacturers face two compounding challenges.

The first problem is structural. Many rely on multiple compliance vendors across tax, trade, legal, and risk functions, each operating in silos. IT manages ERP systems, operations oversees supply chain, finance owns tax compliance, and legal reviews contracts independently. The result is blind spots where no one sees how changes in one area cascade into others. Forty percent of manufacturers say this fragmentation slows decision-making and innovation, while 54% report it increases financial exposure.

The second problem is technological. While many manufacturers are turning to AI, most tools are too generic to be effective. A general-purpose LLM can summarize a regulation, but it can’t assess how a new forced labor rule in one jurisdiction affects your supplier contracts, duty exposure, and your transfer pricing position at the same time. That kind of reasoning requires AI that is trained on, and embedded within, manufacturing compliance. Generic tools can flag issues; they can’t resolve them with the precision the industry demands.

These two problems reinforce each other. Fragmented systems leave AI tools without full context, while generic AI means applies intelligence that isn’t precise enough – even when the data is shared. Together, they create the compliance burden weighing down the industry.

Embedded, Expert Intelligence Built for Manufacturing

Manufacturers successfully navigating this complexity share a common trait: they’ve moved beyond generic AI and fragmented point solutions to intelligence that is both connected across functions and built around their industry’s specific regulatory reality.

This is where ONESOURCE+, powered by CoCounsel, is fundamentally different, and the distinction is easiest to see in a concrete scenario, like, for example, when a new tariff is announced. With generic AI and disconnected systems, the impact unfolds sequentially – trade flags the change, tax recalculates pricing, and procurement adjusts weeks later, after margins are already at risk. That same tariff change triggers a need for updates across trade classification, indirect tax, and transfer pricing workflows. Our AI is trained specifically on manufacturing compliance rather than general knowledge, allowing your experts to make identification of changes fast, and craft actionable remedies based on defensible decisions.

That domain depth matters and products within ONESOURCE+ deliver. ONESOURCE Global Classification AI and Global Trade Management don’t just centralize workflows, they apply classification and FTA intelligence that generic tools can’t replicate. CLEAR delivers purpose-built supplier risk screening for sanctions and forced labor screening across multi-tier supply chains, cutting false positives without cutting accuracy. For legal teams, CoCounsel Legal puts manufacturing-specific regulatory intelligence directly into contract workflows, enabling real-time action instead of manual handoffs.

The results reflect what connected, vertical expertise makes possible: 50% reduction in product classification time, 2.5x faster free trade agreement processing, and 92% fewer false positives in supplier risk screening.

Compliance as a Competitive Weapon

Leading manufacturers are recognizing that compliance isn’t just a cost center, it’s a competitive advantage when powered by the right intelligence. When industry-specific AI connects a tariff change to its downstream impact on tax calculations, supplier contracts, and procurement, and when the systems that share that intelligence talk to each other, compliance data shifts from burden to growth driver.

With connected intelligence, trade insights inform procurement in real time, customs valuations align with transfer pricing, and legal teams move faster using current, manufacturing‑specific regulatory insights. The result: smarter decisions, faster execution, and advantages competitors can’t easily replicate.

As regulatory complexity accelerates, driven by forced labor regulations, evolving trade agreements, and ESG requirements, manufacturers can no longer rely on fragmented or generic tools. The companies that will succeed will replace disconnected point solutions with expert intelligence embedded in operations, trained on the depth manufacturing requires, and capable of connecting regulatory change to real business impact.

The question facing every manufacturing executive is simple: Are your compliance tools connected enough, and smart enough, for your business?

]]>