Data Analytics Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/topic/data-analytics/ Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology, and human expertise you need to find trusted answers. Country-by-country reporting is getting more complicated — and the window to get ahead is closing /en-us/posts/corporates/country-by-country-reporting/ Tue, 14 Apr 2026 12:22:22 +0000 https://blogs.thomsonreuters.com/en-us/?p=70335

Key takeaways:

      • Country-by-country reporting will only increase in complexity — Australia’s enhanced country-by-country reporting (CbCR) requirements, which reconcile taxes accrued against taxes credited, are a preview of where other high-scrutiny jurisdictions are heading. Companies need to build that explanatory analysis capability systematically now rather than scrambling later.

      • There has to be a shared narrative from corporate teams — The EU’s public CbCR is a reputational event, not just a filing. Tax, communications, and investor relations teams therefore need a shared narrative before the data goes public — inconsistencies create exposure you do not want to manage reactively.

      • Rethink your filing jurisdiction in light of changes — If your EU filing jurisdiction was chosen at initial implementation and never revisited, look again. Guidance has matured, and a more efficient or better-suited option may now be available.


WASHINGTON, DC — Among the many pressing topics discussed in detail at the recent TEI event, country-by-country reporting (CbCR) and its ability to reshape the corporate tax industry certainly had its place. Between escalating local jurisdiction requirements, the EU’s public disclosure rules, and calls for deeper explanatory disclosures, CbCR has quietly evolved from a transfer pricing filing obligation into something far more strategically consequential.

The floor is just the floor

The CbCR framework created by the Organisation for Economic Co-operation and Development (OECD) was intended as a minimum standard for countries. Now jurisdictions are increasingly layering additional requirements on top of the OECD’s basic template, resulting in a widening gap between the standard requirements and what tax authorities actually want.

Currently, Australia is the most pointed example. Australian tax authorities are now requiring multinational groups to go beyond the standard CbCR data fields and provide explanatory narratives that reconcile taxes accrued against taxes actually credited. This requires corporate tax departments to bridge the gap between financial statement accruals and their organizations’ cash tax positions in a way that is coherent, defensible, and consistent with positions taken elsewhere.
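
To make the reconciliation concrete, here is a toy sketch of the gap analysis described above. It is purely hypothetical (the field names "tax_accrued" and "tax_paid" and the 5% tolerance are illustrative assumptions, not an official CbCR schema), but it shows the basic idea of systematically flagging jurisdictions whose accrued-versus-cash gap is large enough to need an explanatory narrative.

```python
# Hypothetical sketch only: flag jurisdictions where accrued tax and cash
# tax diverge beyond a tolerance, as a starting point for explanatory
# narrative work. Field names and the tolerance are illustrative assumptions.

def reconcile(rows, tolerance=0.05):
    """Return the jurisdictions whose accrued-vs-cash gap exceeds tolerance."""
    flagged = []
    for row in rows:
        accrued, paid = row["tax_accrued"], row["tax_paid"]
        base = max(abs(accrued), 1e-9)        # guard against division by zero
        gap = (accrued - paid) / base         # relative gap, signed
        if abs(gap) > tolerance:
            flagged.append({**row, "gap_pct": round(gap * 100, 1)})
    return flagged

data = [
    {"jurisdiction": "AU", "tax_accrued": 120.0, "tax_paid": 90.0},   # 25% gap
    {"jurisdiction": "DE", "tax_accrued": 80.0, "tax_paid": 79.0},    # ~1% gap
]
flagged = reconcile(data)
print(flagged)  # only AU is flagged, with gap_pct 25.0
```

A real reconciliation would of course work from the group’s financial systems and follow each jurisdiction’s own definitions; the point is that the gap analysis can be systematic rather than assembled by hand each cycle.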

At the TEI event, panelists explained that this will be especially demanding for tax departments managing complex timing differences, deferred tax positions, or significant jurisdictional mismatches between booked and cash taxes. Indeed, this additional layer of scrutiny will need dedicated attention.


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.


The broader signal matters: Australia will not be the last jurisdiction to move in this direction. Tax departments should therefore treat Australia’s approach as a leading indicator of where other high-scrutiny jurisdictions could be heading. Building the capability to produce this kind of explanatory analysis systematically — rather than scrambling jurisdiction by jurisdiction — would be the smarter long-term investment for corporate tax teams.

Public CbCR in the EU: The transparency ratchet has turned

For US-based multinationals with significant European operations, the EU’s public CbCR directive has fundamentally changed the calculus. Unlike the confidential tax authority filings most corporate tax departments are accustomed to, the EU’s public CbCR rules put organizations’ jurisdictional profit and tax data into the public domain, making it visible to investors, journalists, civil society groups, and organizations’ employees and customers.

The EU framework specifies which entities trigger the reporting obligation and which entity within the group is responsible for making the public filing. That scoping analysis is not always straightforward for complex multinational structures, and getting it wrong could present both reputational and legal risk.




For US-headquartered groups, the implications extend well beyond Europe. Public CbCR data is now being read alongside US disclosures, ESG reporting, and public narratives about tax governance. Inconsistencies, even those technically explainable, could create unwanted noise about the company. This is clearly another reason why the tax function should partner across the business — in this case with the communications team — to make sure both are aligned on the CbCR story instead of being caught off guard by a journalist or an investor during an earnings call.

Questions that US multinationals should be asking

Fortunately, US multinationals with multiple EU subsidiaries are not required to file public CbCR reports in every EU member state in which they have a presence. Instead, under the EU framework, a qualifying ultimate parent or standalone undertaking can satisfy the public disclosure requirement through a single filing in one EU member state, provided the relevant conditions are met. Germany and the Netherlands have emerged as two of the more popular choices for this consolidated filing approach, given their well-developed regulatory frameworks and the depth of available guidance on what compliant disclosure looks like in practice.

The strategic implication is meaningful. Choosing a filing jurisdiction is not purely an administrative decision — it is a choice that affects the regulatory environment that governs the disclosure, the language requirements, the timing, and the interpretive framework that applies to data. Corporate tax departments that defaulted to a filing jurisdiction early in the EU implementation process should take a fresh look. Regulatory guidance has matured significantly, and there may be a more efficient or better-suited path available than the one originally chosen.

The uncomfortable divergence

There is a notable irony in the current environment. Domestically, the IRS and US Treasury’s 2025-2026 Priority Guidance Plan reflects an explicit focus on deregulation and burden reduction, detailing dozens of projects aimed at reducing compliance costs for US businesses. Meanwhile, the international compliance environment has moved in the opposite direction, adding disclosure layers, explanatory requirements, and public transparency obligations that many US businesses cannot avoid simply because they are headquartered in the United States.

This divergence has a direct implication for how tax departments allocate resources and make the internal case for investment in international compliance infrastructure. The burden internationally is not going down — indeed, it is intensifying — and that argument is now backed by concrete examples rather than projections.

3 things worth doing now

There are several actions that corporate tax teams should consider, including:

Audit CbCR data quality with Australia’s enhanced requirements in mind — If you cannot readily reconcile taxes accrued to taxes credited at the jurisdictional level, that gap needs to be closed before it becomes the subject of a tax authority inquiry.

Revisit EU filing jurisdiction strategy — If your jurisdictional decision was made at the time of initial implementation and has not been reviewed since, it is worth a fresh look before the next reporting cycle.

Develop an internal narrative around public CbCR data before it circulates externally — Your company’s tax story should not be a surprise to the corporate teams involved in communications, investor relations, or ESG — and in today’s world, it is no longer safe to assume such data stays quiet.

While CbCR started as a tool for tax authorities, it today has become something more visible, more public, and more consequential than that — and that trajectory is not reversing any time soon.


You can download a full copy of the Thomson Reuters Institute’s

From emerging player to contender: How Latin America can compete in the global AI race /en-us/posts/technology/latam-ai-investment/ Mon, 06 Apr 2026 11:57:46 +0000 https://blogs.thomsonreuters.com/en-us/?p=70259

Key takeaways:

      • Strategic collaboration is becoming a defining strength for the region — Latin American organizations are realizing that progress in AI accelerates when they combine forces by linking industry expertise, academic talent, and public‑sector support.

      • AI initiatives rooted in real local challenges are gaining global relevance — By developing solutions grounded in the region’s own structural needs, whether in infrastructure, finance, agriculture, education, or mobility, many LatAm firms are producing technologies that are both highly impactful and naturally scalable.

      • Demonstrating clear outcomes is becoming fundamental — Organizations that show concrete operational improvements, measurable efficiencies, or stronger customer outcomes are strengthening their position with investors and partners.


In recent years, Latin America has experienced significant growth in investments related to AI, yet the region’s share of global AI investment remains strikingly low given that it makes up around 6.6% of global GDP, highlighting the region’s opportunity to scale AI initiatives even further. Although there are notable differences among countries, Mexico and Brazil — the two largest LatAm economies — stand out for their volume of AI projects and funding, followed by other nations such as Chile, Colombia, and Argentina.

By recognizing these strengths — cost-effective operations, access to data, clean energy, and public support — the region’s businesses can better position themselves and design strategies to draw in international investors increasingly seeking promising locations for AI development.

Lessons from LatAm’s AI success stories

Latin America has produced remarkable AI success stories that can serve as models to build confidence among investors. These cases — involving companies that attracted substantial investment and achieved growth — demonstrate valuable best practices that range from technological innovation to working with governments and corporations. Some of these best practices include:

Building strategic alliances

The journey of innovation rarely unfolds in isolation. At times, the presence of large, established companies, whether local industry leaders or multinationals, has served as a catalyst for AI projects. The experience of Kilimo, a startup that specializes in AI-powered agricultural irrigation, proves it. Kilimo is now partnering with EdgeConneX, a data center company based in the United States, on a community project.

Academia, too, can be woven into this narrative. Collaborations with research centers or universities offer scientific credibility and connect ventures with emerging talent. In Mexico, AI startups often originate within university settings — such as computer vision projects from the National Autonomous University of Mexico (UNAM), for instance — and maintain agreements that sustain ongoing innovation and technical progress even with modest resources. And academic validations, whether in published papers or conference accolades, tend to resonate with foreign investors. Indeed, the emergence of this ecosystem that features early corporate clients and academic mentors frequently lends a distinctive appeal for those seeking investment.

Focusing on local problems with global impact

Within Latin America, certain issues prove especially relevant where AI solutions intersect with sectors renowned for regional strengths, such as fintech and financial inclusion, agrotech optimizing agriculture, and foodtech drawing on local ingredients. The experience of Chilean food startup NotCo — whose AI-designed plant-based products were developed locally and subsequently exported — suggests how innovations rooted in local context may generate broader attention.

By addressing needs in urban transport, education, mining, and related areas, local LatAm companies can provide access to homegrown data and users, which can further refine technology and open pathways for investors into similar emerging markets. When AI solutions respond to genuine pain points rather than mere novelty, momentum often builds more quickly, and the model finds validation among the funds that evaluate investments.

Showing results and AI ROI early on

Questions linger for many executives about the return on investment from AI. Evidence of clear metrics like cost savings, sales growth, or error reduction can prove persuasive, especially when complemented by success stories from local clients.

Recent studies show that companies adopting AI report measurable gains, and such figures tend to reassure those considering investment by illustrating tangible improvements. Testimonials or independent validations, such as a university study, can further illuminate achievements.

The act of quantifying impact — whether in efficiency, revenue, or other relevant KPIs — has a way of transforming perceptions from uncertainty toward clarity.

Leveraging government incentives and collaborations

Many Latin American nations have put forth support programs for AI and tech projects, such as non-repayable funds, soft loans, and tax benefits for innovation offered through national programs across the region.

Public financing, when present, often acts as a stamp of validation for private investors. For example, this trust has extended to Brazilian startups receiving Finep support for AI health projects, which in turn can shift perceptions among foreign venture capital firms. Engagement in government pilots, such as smart city initiatives or solutions for ministries, provides valuable exposure. In such contexts, public-private partnerships and incentives seem to act as quiet levers for growth and legitimacy.

Seeking smart and diversified financing

Financial strategies in Latin America have been shaped by the interplay of local and foreign capital. Local funds often bring insights and patience, while foreign funds may offer larger investments and global scaling experience. Ownership dilution sometimes accompanies the arrival of strategic investors, whose networks can prove invaluable. Programs like 500 Startups, Y Combinator, MassChallenge, and international competitions have ushered LatAm AI startups such as Heru, Rappi, Bitso, and Clip into new rounds of capital following increased exposure.

Efficiency in capital management, which can be demonstrated with lean burn rates and milestone achievement with limited resources, signals an ability to execute within the realities of LatAm, which may enhance the appeal for future investments. The cultivation of relationships and responsible stewardship of capital frequently matters as much as the funds themselves, suggesting that the value of mentorship, contacts, and reputation is often intertwined with deepening financial support.

Unlocking AI investment

By applying these principles, Latin American companies have achieved a better position to attract AI investments to their projects and help position the region as a viable destination for technology capital. These recent experiences show that when a LatAm company combines innovation, talent, and strategy — while communicating its story well — it can win over global and local investors alike. Each of the best practices noted above is based on real lessons: international alliances (NotCo with US funds), leveraging incentives (Brazilian companies funded by Finep), talent formation (Santander and Microsoft programs), focus on ROI (successful use cases that convince boards), and more.

Latin America has challenges but also unique advantages. Companies that manage to navigate this environment intelligently will increase their chances of securing the financing needed to innovate and grow. By doing so, they will contribute to a virtuous circle in which each new success attracts more investment to the region and opens doors for the next generation of LatAm AI ventures.


You can find out more about the challenges and opportunities in the Latin American region here

Reinventing the data core: The arrival of the adaptable AI data foundry /en-us/posts/technology/reinventing-data-core-adaptable-data-foundry/ Thu, 05 Mar 2026 16:08:59 +0000 https://blogs.thomsonreuters.com/en-us/?p=69795

Key takeaways:

      • There is a widening gap between AI ambition and readiness — As the gap between AI ambition and data readiness widens, adopting an adaptable data foundry becomes essential for scalable, explainable, and compliant AI outcomes.

      • A data foundry model directly addresses the root cause — A data foundry model enables organizations to industrialize data production, automate compliance, and ensure consistent data lineage, thereby overcoming the limitations of brittle, legacy data architectures.

      • Incorporate the data core into your AI planning — Reinventing the data core is now a strategic imperative for those enterprises that aim to thrive in 2026 and beyond, as agentic AI, regulatory demands, and integration complexity accelerate.


This article is the third and final installment in a 3-part blog series exploring how organizations can reset and empower their data core.

A defining theme of this year so far is the widening distance between organizational ambition and data readiness. Leaders want the capabilities they believe agentic AI delivers instantly — automated compliance, predictive integration for M&A, and decision-intelligence pipelines that reduce operational friction.

Without a data foundry, however, much of that will be impossible. Instead, workflows will remain brittle, AI agents will hallucinate under inconsistent semantics, and data lineage will break down across federated sources. Further, without a data foundry, regulatory mappings involved with the Financial Data Transparency Act (FDTA) and the Standard Business Reporting (SBR) framework cannot be automated, cross-functional insights will require manual reconciliation, and auditability will collapse under scrutiny.

This is not a failure of leadership. It is a failure of architectural design: a failure to recognize that consolidated data is a precondition for these technologies, and for the critical priorities of data security, auditability, and lineage.


For decades, organizations built monolithic systems that were optimized for stability and reporting. Today’s world demands modularity, continuous adaptation, and agent-driven interoperability. Architecture has shifted from build and operate to build and evolve. This is precisely what a data foundry enables.

Why reinvention can no longer wait

Throughout 2025 and now into the early months of 2026, data and AI have quietly shifted from innovation topics to enterprise constraints. Leaders across regulated markets are starting to recognize that the obstacles limiting their AI ambitions are neither mysterious nor technical — they are structural. These obstacles sit inside the data core, waiting inside the silent architecture that determines whether any form of automation, intelligence, or compliance can scale beyond a pilot.

The data bears this out. When you examine the work coming from Tier-1 research bodies, supervisory institutions, and global transformation benchmarks, a consistent narrative emerges beneath the headlines: AI is accelerating, regulation is hardening, and integration demands are expanding. Moreover, organizational data remains pinned to assumptions that were forged in static, pre-AI operating environments. This gap is not theoretical; rather, it is measurable, persistent, and directly correlated to business performance.


Let’s look at the AI results first. Across industries, organizations continue to experience a familiar pattern: early promise, limited adoption, and rapid degradation once the model encounters inconsistent semantics or fragmented lineage. Global studies show that the vast majority of enterprise AI initiatives still struggle to reach full production maturity, and among those that do, many encounter performance drift within the first year.

The driver is remarkably consistent. It is not the sophistication of the model nor the skill of the data science team — it is the quality, clarity, and traceability of the data that is feeding the system.

Taken together, these signals deliver a clear message. The gap between AI ambition and data readiness is widening, not narrowing. This is why the data foundry conversation matters now. It is not an abstract architectural concept. It is a response to the full stack of quantitative pressures the market has been telegraphing for years — costs rising, compliance hardening, AI accelerating, and integration straining under inconsistent semantics and fragile lineage.

A data foundry model directly addresses the root cause of this by industrializing the creation of consistent, reusable, explainable data products that can fuel agentic AI, support regulatory defensibility, and accelerate enterprise reinvention.

The numbers point to a simple conclusion. Reinvention is no longer optional, and the window to address the data core before agentic AI becomes standard practice is narrow and closing. The organizations that act now will be the ones that define what compliant, explainable, interoperable AI looks like in the next decade. Those that defer the work will find themselves restructuring under pressure rather than reinventing by choice.

This is the inflection point. In truth, the quantitative signals have made the case more clearly than a multitude of strategy narratives ever could.

The data foundry: A model for continuous alignment

Unsurprisingly, agentic AI introduces new, more demanding requirements, including:

      • machine-interpretable semantics;
      • context-preserving lineage across federated systems;
      • decomposition of enterprise knowledge into reusable data products;
      • dynamic trust-scoring tied to source reliability and timeliness;
      • automated compliance overlays and regulatory logic; and
      • cross-domain metadata orchestration.
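
As a rough illustration of how a couple of these requirements (context-preserving lineage and trust-scoring tied to reliability and timeliness) might look as a data structure, here is a minimal, hypothetical sketch. Every name in it is an assumption; real foundry platforms define far richer schemas.

```python
# Illustrative only: a minimal "data product" record combining two of the
# requirements above (lineage, trust-scoring). All names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataProduct:
    name: str
    semantic_type: str                    # machine-interpretable meaning
    source_system: str
    lineage: list = field(default_factory=list)   # upstream product names
    source_reliability: float = 1.0       # 0..1, set by data governance
    refreshed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def trust_score(self, max_age_days: float = 30.0) -> float:
        """Source reliability decayed linearly by staleness."""
        age_days = (datetime.now(timezone.utc) - self.refreshed_at).days
        freshness = max(0.0, 1.0 - age_days / max_age_days)
        return round(self.source_reliability * freshness, 3)

ledger = DataProduct("gl_balances", "monetary_amount", "ERP",
                     source_reliability=0.9)
report = DataProduct("regulatory_table", "regulatory_report", "foundry",
                     lineage=["gl_balances"], source_reliability=0.95)
print(report.trust_score(), report.lineage)
```

Note that the lineage here is generated data carried with the product itself, rather than documentation maintained separately — the shift the foundry model describes.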

These capabilities are non-negotiable. Indeed, they determine whether AI elevates risk or mitigates it, whether it accelerates productivity or introduces unrecoverable inconsistencies, and whether it augments decision quality or produces volatility.

A data foundry shifts organizations from artisanal, one-off data preparation toward industrialized data production, in which patterns replace pipelines and building blocks replace custom engineering. This shift means that lineage is generated, not documented; semantics are governed, not patched; and compliance is automated, not reconstructed. In this way, reuse becomes the default, not the exception.

In fact, this process is analogous to manufacturing. Instead of producing bespoke components for each need, the enterprise creates standardized, high-fidelity data assets that can be assembled into any workflow, any AI use case, and any reporting requirement.

A data foundry becomes the quiet architecture behind every future capability, making these capabilities systematic rather than ad hoc. The chart below showcases the progressive build-up using a data foundry, beginning with data intake and harmonization and ending with AI agent orchestration and reusable data products that learn from their deployment.

[Chart: data core]

Unfortunately, organizations are still building increasingly advanced AI decisioning and efficiency solutions on top of an aging and brittle data foundation. The results are predictable: stalled initiatives, compliance exposure, and stakeholder frustration. Additionally, instead of asking why, organizations keep adding more tools — more dashboards, more cloud services, more AI pilots, and more flavors of transformation.

Clearly, enterprises aren’t dealing with an AI problem. They’re dealing with a data alignment problem disguised as progress within fragmented AI enclosures.

Reinvention starts at the data core

For more than a decade, firms across regulated industries have repeated the same mantra: Data is our most critical asset. When you peel back the layers, however, whether in board review sessions, integration meetings, or regulatory remediation audits, the evidence does not match the rhetoric.

Reinvention is no longer optional. Instead, it is the starting point for meeting the demands of 2026 and beyond. The institutions that thrive will be those that understand that the data core is not a technical asset — it is the operational backbone of the enterprise. Indeed, the institutions that succeed will be those that recognize the truth early: AI is an output, and the data core is the strategy. And the organizations able to industrialize their data — through a foundry model, through AXTent, through repeatable semantic structures — will be the ones leading innovation, reducing compliance risk, accelerating M&A synergies, and achieving enterprise-wide reinvention.

In the end, the real question isn’t whether AI will transform business; the question is whether the data foundation will allow it. And the answer is rebuilding your data core so AI can actually deliver the outcomes your organization needs — and that work begins now.


You can find more blog posts by this author here

When courts meet GenAI: Guiding self-represented litigants through the AI maze /en-us/posts/ai-in-courts/guiding-self-represented-litigants/ Thu, 19 Feb 2026 18:20:08 +0000 https://blogs.thomsonreuters.com/en-us/?p=69532

Key insights:

      • Considering courts’ approach — Although many courts do not interact with litigants prior to filings, courts can explore how to help court staff discuss AI use with litigants.

      • Risk of generic AI tools — AI use in legal settings can’t be simply categorized as safe or risky; jurisdiction, timing, and procedure are vital factors, making generic AI tools unreliable for court-specific needs.

      • Specialty AI tools require testing — Purpose-built court AI tools offer a safer alternative for litigants, yet these require development and extensive testing.


Self-represented litigants have always pieced together legal help from whatever sources they can access. Now that AI is part of that mix, courts are working to help people use this advanced technology responsibly without implying an endorsement of any particular tool or even the use of AI.

Many litigants cannot afford an attorney; others may distrust the representation they have or may not know where to begin. In any case, people need a meaningful way to interact with the legal system. Used carefully and responsibly, AI can support access to justice by helping self-represented litigants understand their options, organize information, and draft documents, while still requiring litigants to verify their information and consult official court rules and resources.

These issues were discussed in a recent webinar, during which the panel explored the potential benefits of AI for access to justice and the operational challenges of integrating AI into public-facing guidance for litigants.

The problem with “Just ask AI”

Angela Tripp of the Legal Services Corporation noted that people handling legal matters on their own have long relied on a mix of resources, “some of which were designed for that purpose, and some of which were not.” AI is simply a new tool in that environment, she added. The primary challenge is that court processes are rule-based and time-sensitive; and a mistake can mean missing a deadline, submitting the wrong document, or misunderstanding a requirement that affects the case.

Access to justice also requires more than just access to information in general. Court users need information that is relevant, complete, accurate, and up to date. Generic AI systems, such as most public-facing tools, are trained on broad internet text and may not reliably deliver that level of specificity for a particular court, case type, or stage of a proceeding. Jurisdiction, timing, and procedure all matter. Unfortunately, AI can omit key steps or emphasize the wrong issues, and self-represented litigants may not have the legal experience to recognize what is missing.

At the same time, AI offers several potential benefits to self-represented litigants. It can explain concepts in plain language, help users structure a narrative, and produce a first draft faster than many people can on their own. The challenge is aligning those strengths with the precision that court processes demand.

A strategic pivot: from teaching litigants to equipping staff

In the webinar, Stacey Marz, Administrative Director of the Alaska Court System, described her team’s early efforts to give self-represented litigants clear guidance about safer and riskier uses of AI, including examples of how to properly prompt generative AI queries.

The team tried to create traffic light categories that would simplify decision-making; however, despite several draft efforts, they found this approach very challenging. AI use can shift from low-risk to high-risk depending on context, and it was hard to provide examples without sounding like the court was endorsing a tool or sending people down a path whose results the court could not guarantee.

The group ultimately shifted to a more practical approach — training the people who already help litigants. The new guidance targets public-facing staff such as clerks, librarians, and self-help center workers. Instead of teaching litigants how to prompt AI, it equips staff to have informed, consistent conversations when litigants bring AI-generated drafts or AI-based questions to the counter.

The framework emphasizes acknowledgment without endorsement. It suggests language such as:

“Many people are exploring AI tools right now. I’m happy to talk with you about how they may or may not fit with court requirements.”

From there, staff can explain why court filings require extra caution and direct users to court-specific resources.

This approach also assumes good faith. A flawed filing is often a sincere attempt to comply, and a litigant may not realize that an AI output is incomplete or incorrect.

Purpose-built tools take time

The webinar also discussed how courts are exploring purpose-built AI tools, including judicial chatbots designed around court procedures and grounded in verified information. Done well, these tools can reduce common problems associated with generic AI systems, such as jurisdiction mismatch, outdated requirements, or fabricated or hallucinated citations.

However, building reliable court-facing AI demands significant time and testing. Marz shared Alaska’s experience, noting that what the team expected to take three months took more than a year because of extensive refinement and evaluation. The reason is straightforward: Court guidance must be highly accurate, and errors can materially harm someone’s legal interests. In fact, even after careful testing, Alaska still included cautionary language, recognizing that no system can guarantee perfect answers in every situation.
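
One simplified way to picture the "grounded in verified information" requirement is a citation check that runs before an answer is shown: any authority the chatbot cites must appear in a court-maintained verified list. The sketch below is a hypothetical illustration (the `[cite: ...]` markup and the example source list are invented for this sketch), not how any real court system works.

```python
# Hypothetical illustration of a "grounding" check: every authority a court
# chatbot cites must appear in a verified source list maintained by the court.
# The [cite: ...] markup and the example sources are invented for this sketch.
import re

VERIFIED_SOURCES = {"Alaska R. Civ. P. 12", "AS 25.24.200"}  # illustrative

def unverified_citations(answer: str) -> list:
    """Return any citations in `answer` that are not on the verified list."""
    cited = re.findall(r"\[cite:(.*?)\]", answer)
    return [c.strip() for c in cited if c.strip() not in VERIFIED_SOURCES]

answer = ("You may respond with a motion under [cite: Alaska R. Civ. P. 12]; "
          "see also [cite: Smith v. Jones (1999)].")
print(unverified_citations(answer))  # flags the unrecognized case citation
```

A check like this cannot prove an answer is correct, which is one reason extensive testing and cautionary language remain necessary; it only narrows the space of obvious fabrications.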

The path forward

Legal Services’ Tripp highlighted a central risk: Modern AI tools can be clear, confident, and easy to trust, which can lead people to over-rely on them. And courts have to recognize this balance. Courts are not trying to prevent AI use; rather, many are working toward realistic norms that treat AI as a drafting and organizing aid but require litigants to verify claims against official court sources and seek human support when possible.

Marz also emphasized that courts should generally assume filings reflect a litigant’s best effort, including in those cases in which AI contributed to confusion. The goal is education and correction rather than punishment, especially for people navigating complex processes without representation.

Some observers describe this moment as an early AOL phase of AI, akin to the very early days of the World Wide Web — widely used, evolving quickly, and uneven in its reliability. That reality makes clear guidance and consistent messaging more important, not less.

This shift from teaching litigants to use AI to teaching court staff and other helpers how to talk to litigants about AI reflects a practical effort by courts to reduce the risk of harm while expanding access to understandable information.

As is becoming clearer every day, AI can make legal processes feel more navigable by helping self-represented litigants draft, summarize, and prepare. For courts to realize that value, however, requires clear guardrails, court-specific verification, and careful implementation, especially when a missed detail can change the outcome of a case.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here

ESG is evolving and becoming embedded in global trade operations /en-us/posts/international-trade-and-supply-chain/esg-embedded-in-global-trade/ Thu, 05 Feb 2026 12:09:16 +0000 https://blogs.thomsonreuters.com/en-us/?p=69328

Key insights:

      • ESG is becoming more operationalized — ESG is being conducted with a lower public profile while also playing an increasingly strategic role in supplier governance frameworks.

      • Data collection remains widespread and robust — Companies continue to collect comprehensive ESG data from their suppliers.

      • Technology usage in ESG is increasing — Greater investment in automation demonstrates continuing commitment to effectively managing ESG.


Environmental, social and governance (ESG) issues have played an increasing role in global trade operations in recent years. As the United States government sharply pulled back its role in encouraging ESG in global trade in 2025, concerns were raised over whether that would impact ESG efforts globally.

However, ESG-related efforts in global trade have not diminished, although they are evolving in form and positioning, according to the Thomson Reuters Institute’s recent 2026 Global Trade Report. In fact, the report’s survey respondents said that ESG data collection from suppliers is now largely structurally embedded in trade operations, although at the same time, it is being carried out with a lower public profile than in previous years.

ESG management remains a core trade function

Managing ESG remains one of the most widespread responsibilities among trade professionals. Almost two-thirds (62%) of those surveyed said their role includes ensuring ESG compliance throughout the supply chain. That represents a higher percentage than for other responsibilities, such as procurement and sourcing, supplier management, trade systems management, risk management, customs clearance, and regulatory compliance. The only responsibility more widespread among the global trade professionals surveyed is business strategy for global trade and supply chain.

More importantly, ESG remains integral and nearly universal when it comes to the supplier selection process. All respondents in the Asia-Pacific region (APAC), Latin America, and the European Union-United Kingdom, as well as 99% of US respondents, report that ESG considerations remain moderately important, important, or very important in influencing their decisions around using a supplier. An overwhelming 78% say it is an important or very important consideration.

Clearly, as the report demonstrates, ESG remains a core component of the trade function for most businesses.

ESG moves toward structural governance frameworks

Only a very small proportion of respondents — 3% in the US and 4% globally — said they stopped ESG-related data collection entirely in 2025. Meanwhile, ESG data collection has increased across several major metrics.

As companies move to embed ESG expectations directly into their supplier governance frameworks, they are shifting these efforts from being a publicly declarative initiative to becoming operationalized as a permanent compliance and sourcing discipline alongside other operational considerations.

Businesses are focusing on supplier information in areas that have direct operational relevance. For example, companies collecting data on Free Trade Agreement (FTA) eligibility status for ESG purposes can also leverage the data to reduce costs, ensure supply chain security through Customs Trade Partnership Against Terrorism (CTPAT) participation, and better maintain compliance with country-of-origin requirements. Similarly, Country of Origin (COO) and Authorized Economic Operator (AEO) status are both classified under ESG but are also highly specific to trade operations. These metrics blur the lines, representing areas in which ethical considerations intersect with practical trade strategy.

Supplier data collection is shifting to operational relevance as well. Indeed, the scope of supplier data being gathered remains broad and reflects a holistic view of the supply chain. The most common areas for ESG data collection in 2025 were: i) environmental metrics, such as water usage, waste management, energy management, and carbon emissions, including Scope 3 emissions; ii) social metrics, such as health and safety, labor standards, human rights including modern slavery or indentured service, and diversity in employees; and iii) governance and compliance, including data privacy, business ethics, and anti-corruption.

Data collection from suppliers


Meanwhile, ESG data collection has been scaled back in areas such as trade evaluation, AEO/CTPAT status in some jurisdictions, diversity in ownership, and anti-corruption assessments. The most cited reason for the pullbacks was insufficient cost-benefit return for collecting data in areas in which customer scrutiny was minimal. This trade-off reflects a rational reprioritization: Companies are focusing their ESG diligence in areas in which risk is regulatory and material rather than merely reputational.

Integrating ESG into broader trade workflows

The report also shows that businesses are leveraging ESG to become more operationally effective, drive greater efficiency, reduce costs, and add greater value for the organization. ESG is becoming less of a marketing and brand-building exercise and more of a compliance and sourcing discipline that factors into strategic decision-making — it is subject to the same analytical rigor as financial or operational risks.

To this end, organizations are less prone to make a string of bold public goals and commitments, or issue standalone ESG reports, updates, or scorecards that tout their progress. Instead, ESG data is being seamlessly embedded into supplier evaluation and selection alongside non-ESG business metrics and other considerations. As such, organizations are using ESG to quietly build the structural frameworks, data infrastructure, and management approaches they’ll need for more strategic planning.


ESG is shifting to strategically supporting business growth and away from reputational focus


Helping this shift along, the report shows, is that the use of technology to manage ESG has accelerated significantly in 2025. One-third of respondents said their organizations use automated ESG solutions, a major increase from only 20% in 2024. This provides a clear indication that more organizations are not only continuing but strengthening their commitment to effectively managing ESG.

And this provides a boost, because greater automation can improve the efficiency and ability of trade professionals to manage ESG efforts, further enhancing the integration of ESG data into other operational workflows as organizations incorporate ESG data to drive greater value.

What lies ahead for ESG

ESG practices and organizations’ embrace of them remain near-universal across trade operations. This continuation presents a clear indication that there is no widespread retreat from ESG management. For trade professionals, ESG is here to stay and is evolving into an operational discipline to help grow their business.

For organizations to have continued success in this evolving ESG environment, they should take several steps that require strategic thinking, including:

      • Identify which metrics truly matter — Focus on ESG metrics that affect trade operations, particularly those that impact supply chain cost, efficiency, and reliability.
      • Invest in the technology infrastructure — Improve efficiency in tracking and analyzing key ESG metrics.
      • Articulate ESG value — Develop the ability to demonstrate the value of ESG to the trade function and communicate it in business terms to senior management.

The shift of ESG towards operational trade management may represent a more sustainable long-term path forward than the earlier wave of ESG enthusiasm — embedding ethical considerations into core business processes rather than treating them as separate compliance exercises. By focusing on metrics that genuinely matter to business operations, companies are building practices that will persist regardless of any political winds or public relations trends.

Those corporate trade departments that can skillfully navigate this evolving environment will be positioned to more effectively leverage ESG considerations as a strategic asset and competitive differentiator. And in an increasingly complex and volatile global trading landscape, they will find themselves playing a more central role in their organizations’ success.


You can download a copy of the Thomson Reuters Institute’s 2026 Global Trade Report here

The child exploitation crisis online: Gaps in digital privacy protection /en-us/posts/human-rights-crimes/children-digital-privacy-gaps/ Wed, 04 Feb 2026 18:39:04 +0000 https://blogs.thomsonreuters.com/en-us/?p=69312

Key highlights:

      • Fragmented protection creates vulnerability — Current US privacy laws operate as a patchwork system without comprehensive national standards, leaving children and other users exposed to data exploitation across state lines and international borders.

      • Body data collection opens future manipulation potential — Virtual reality platforms collect granular biometric information through sensors that can reveal deeply sensitive information about users.

      • Use-based regulations outlast technology changes — Restricting harmful applications of data provides more durable protection than the current regulatory approach, which relies on categorizing rapidly evolving data types.


Virtual reality (VR), social media, and gaming companies have long avoided robust content moderation, largely out of concern over implementation costs and the risk of alienating users. This reluctance stems from platforms wanting the widest possible pool of users. Yet the shortsightedness of this decision has consequences, including insufficient protection of children and long-term costs to companies’ bottom lines.

The child exploitation crisis in digital spaces requires better laws and a reimagining of how VR, gaming, and social media companies balance privacy, safety, and accountability across diverse platform architectures, according to Mariana Olaizola Rosenblat, an expert in child exploitation methods in digital spaces and Policy Advisor at the NYU Stern Center for Business and Human Rights.

Limitations of existing regulatory frameworks

The current regulatory landscape is insufficient to protect children online. The lack of a comprehensive national privacy law in the United States, the use of consent mechanisms, and the haphazard rollout of age verification all expose protection gaps and come with economic and psychological costs, according to Olaizola Rosenblat. For example, some of the dangers include:

Gaps in the patchwork of regulations leave children vulnerable — Regulatory demands for child safety often collide with privacy protections, creating contradictory obligations that platforms cannot realistically satisfy. In the absence of unified standards, companies operate in a jurisdictional maze that leaves most users, including children, exposed to data exploitation across borders.

America’s regulatory landscape remains especially fragmented, with no comprehensive national privacy law to provide consistent protection. comes close to establishing meaningful safeguards, according to Olaizola Rosenblat, yet it still permits companies to collect data even after users opt out of the sale or sharing of their data.

Mariana Olaizola Rosenblat, of the NYU Stern Center for Business and Human Rights

Federal reform attempts have collapsed amid conflicts between states demanding stronger protections and tech lobbyists aligned with conservative representatives seeking weaker standards. In addition, child-specific privacy laws provide protection only for those under 13, which leaves older minors and adults vulnerable.

“Once users turn 13, they fall off a regulatory cliff,” says Olaizola Rosenblat. “There is no federal child-specific data protection regime, and existing state-level safeguards are patchy and largely ineffective for teens.”

Internationally, the European Union’s General Data Protection Regulation (GDPR), although considered the gold standard for regulation, suffers from a persistent gap between its ambitious text and its uneven enforcement.

Age verification tensions — These regulatory shortcomings also are evident in debates over age verification. Protecting children requires collecting data to determine user age, yet privacy advocates frequently oppose such measures. Without pragmatic guidance acknowledging these inherent trade-offs, platforms often face contradictory obligations they cannot simultaneously fulfill.

Current consent frameworks offer little protection — Current consent mechanisms offer users an illusory choice that fails to protect children from data exploitation. Even relatively robust frameworks like the GDPR rely on consent models in which refusal means exclusion from digital spaces essential to modern life. This approach proves particularly inadequate for younger users. Indeed, about one-third of Gen Z respondents have expressed indifference to online tracking.

VR data collections may allow future exploitation

VR platforms differ fundamentally from traditional gaming spaces and social media platforms. Users with VR headsets embody avatars that move through thousands of interconnected experiences. While no actual touching occurs, the experiences feel visceral. Indeed, the psychological and physiological responses can mirror aspects of real-world experiences, which include sexual exploitation, even though no physical contact occurs.

Olaizola Rosenblat explains that the data collected from the sensors can open up the potential for future exploitation. “The inferences that can be drawn from your body-based data collected by these sensors is granular and often intimate,” she explains. “The power that gives to companies is pretty remarkable in terms of knowing things about you that you might not even know yourself.”

Recommended actions to address challenges

Addressing the child exploitation crisis in digital spaces requires coordinated action, according to Olaizola Rosenblat, and that needs to include:

Universal protection standards — Corporate action in partnership with legislators is necessary for effective reform that protects all users rather than fragmenting safeguards by age or vulnerability status. Current approaches that shield only younger children create dangerous gaps, leaving adolescents and adults exposed once they age out of protected categories.

Enforce existing regulations — Even well-crafted legislation proves meaningless without robust enforcement mechanisms. Commitment by government agencies along with the appropriate levels of funding is the most meaningful approach to achieve desired outcomes.

Technology-agnostic use regulation — Rather than attempting to categorize rapidly evolving data types, companies in the VR, gaming, and social media sectors must work with legislators to restrict harmful uses of data such as manipulation, exploitation, and unauthorized surveillance, regardless of technical collection methods. Regulating data use — rather than the current method of regulation based on categories of data, which include personally identifiable information — is the right approach.

Public mobilization is essential — Citizens must understand that the stakes of data exploitation extend beyond corporate collection to include hacking vulnerabilities and manipulative deployment. Without consumer demand for better protection and the willingness of legislators to pass the laws, regulation will not happen.

The path forward

The digital exploitation of children demands immediate action that transcends partisan divides and corporate interests. Only through coordinated regulatory reform, meaningful enforcement, and sustained public pressure can we create digital spaces in which innovation thrives without sacrificing our privacy and safety. The cost of continued inaction grows steeper each day we delay.


You can find out more on how organizations and agencies are fighting child exploitation here

Understanding the data core: From legacy debt to enterprise acceleration /en-us/posts/technology/understanding-data-core-enterprise-acceleration/ Tue, 03 Feb 2026 14:47:41 +0000 https://blogs.thomsonreuters.com/en-us/?p=69255

Key takeaways:

      • The real bottleneck for AI is the data core — AI is advancing rapidly, but most organizations’ data architectures, governance, and legacy assumptions can’t keep up. Without a repeatable, business-aligned data foundation, AI initiatives will struggle to scale and deliver reliable results.

      • AI success relies on explainable, traceable, and reusable data — For AI to be reliable and compliant, organizations must design data environments that emphasize lineage, semantics, and trust; and that means that compliance and auditability need to be built into the data core, not added on later.

      • Business should shift from tool-centric upgrades to business-driven, data-centric reinvention — Efforts focused only on modernizing tools or platforms miss the root issue: legacy data structures. Leaders must prioritize building a cohesive, reusable data core that aligns with business strategy.


This article is the first in a 3-part blog series exploring how organizations can reset and empower their data core.

Across boardrooms, regulatory briefings, and strategic off-sites, leaders are asking with growing urgency some variation of the same question: How do we make AI reliable, scalable, auditable, and economically defensible? The surprising answer is not in the AI technology, nor in the cloud stack, nor in another round of system upgrades.

It is in the data. Not the data we store, not the data we report, and not the data we move across our pipelines. It is in the data that we must now explain, contextualize, trace, validate, and reuse continuously as agentic AI becomes embedded in every workflow, every decision system, and every regulatory outcome.

The stark reality across industries then becomes what to do as AI matures faster than our data cores can support it. For the first time, technology is not the bottleneck — architecture is, organizational assumptions are, and governance strategies are. More importantly, the lack of a repeatable, business-aligned data foundry has become the strategic inhibitor standing between today’s operations and tomorrow’s autonomy-ready enterprises.

The realities of 2026

As 2026 gets underway, the pressures of regulation, AI adoption, data lineage requirements, and cross-system consistency have converged into a single strategic reality: We can’t keep modernizing data at the edges. The data core itself must be reimagined and compartmentalized.

For leaders across highly regulated industries, the challenge is recognizing that our data architectures were never designed for the world we’re moving into. Historically, solutions were built for predictable siloed-data systems, linear programmatic processes, and dashboard reporting. Today’s demands are continuous, variable, cross-domain, and machine-interpreted and not bound by traditional methods and techniques of process efficiency and system adaptability. Tomorrow’s systems will be comprehensively trained by data. To properly frame these realities, leaders must understand:

      • Agentic AI exposes weak data architecture immediately — Models may scale, but data debt does not. This is a new, priority constraint.
      • Lineage, semantics, and trust scoring — not models — will determine enterprise readiness — AI will only be as reliable as the meaning and traceability of enterprise data.
      • Compliance cannot be retrofitted; rather, it must be designed into the data core — Compliance no longer ends in reporting; it must exist upstream and be addressed continuously.
      • Return on investment in AI is impossible without composable, modular, and reusable data products — Data that cannot be composed, traced, and made consistent cannot be automated.
      • The bottleneck is not talent or tools, it is the absence of a data foundry — Without robust, industrial-grade data production, AI will remain fragmented and experimental.

By delivering a practical, business-first path integrated with a data-centric design, organizations enable reuse, compliance, and measurable ROI. AI is accelerating, but data readiness is not. This mismatch is where many transformation efforts die.

Agentic AI demands a data environment that simply does not exist with most legacy solutions. It requires decision-aligned semantics, federated trust scoring, cross-domain lineage, dynamic compliance overlays, and consistent interpretability. No model, no matter how advanced, can compensate for data environments that have been engineered for static reporting and linear process logic. We are entering a cycle of reinvention in which data becomes the organizing principle.

The business need, not the engineering myth

Executives are rightfully fatigued by transformation programs. They have seen modernization initiatives expand scope, escalate cost, and ultimately underdeliver. They have heard the promises of clean data, enterprise data platforms, microservices, cloud migration, and AI-readiness. However, when agentic AI begins interacting with these ecosystems, the fragility of the entire operation becomes instantly visible.

Why? Because most data modernization initiatives have been driven by tool-centric solutions rather than architecture-centric capabilities. Prior data governance was about oversight, not the enablement and reuse demanded by emerging AI designs. Often, legacy methods kept audit and lineage contained within siloed processes, bridging them with replicated data warehouses, extract-transform-load (ETL) systems, and application programming interface (API) protocols.

However, this tool-centric, legacy-enabled approach is the problem. We keep optimizing the wrong layers, and we keep modernizing the components.

As a result, we too often see that AI pilots succeed, but enterprise scaling fails. Or, that regulatory reporting improves marginally, but compliance costs increase. Or M&A integrations appear straightforward, but post-close data convergence drags on for years.

The gap between ambition and reality

As a solution, a data foundry approach corrects that imbalance by formalizing the factory-grade patterns required to support agentic AI systems. It becomes the production line for reusable data products, compliant semantics, and decision-aligned datasets. It also eliminates reinvention by institutionalizing repeatable structures; and, most importantly, it restores business leadership over AI outcomes, rather than relegating decision logic to engineering workstreams and emerging technologies.

As illustrated below, AI requirements and realities need to be tempered with business demands, organizational risks, and data agility capabilities (including skill sets) to achieve realistic roadmaps of action — not strategic aspirations.

data core

Today, the question isn’t whether organizations understand the importance of data, it’s whether leaders know how to build environments in which data becomes reusable, trustworthy, and ready for agentic AI. The issue, however, continues to be that our data cores — the architectural, operational, and standards ecosystems beneath all this — were not designed for continuous change.

Before they mobilize and execute against AI plans, business leaders need to answer the question: What business decisions are we trying to improve — and what data do those decisions actually require, today and tomorrow?

The organizations that will lead in the coming decade will do so not because they found the perfect technology stack, but because they built a reusable, continuously improving data foundation that can support AI, regulation, risk, and innovation simultaneously.

The question for leaders then becomes: Are we prepared to reinvent?

The work begins now — quietly and deliberately, across the data core where tomorrow’s competitive advantages will be created. The chart below illustrates the business-driven AI elements that must be addressed, and how the old sequence of system provisioning must be replaced, beginning with outcomes and ending with engineered AI tools.

data core

AI is an output — a capability that’s unlocked after the underlying data foundation becomes coherent, traceable, explainable, and aligned with business decisions. For leaders, the data core is no longer a back-office concern or one-off IT initiative. It is a strategic asset that can shape speed, resilience, and trust across the organization.


In the next post in this series, the author will explain how to architect an integrated data core, particularly through the AXTent architectural framework for regulated organizations. You can find more blog posts by this author here

Improving corporate governance requires managing AI’s footprint /en-us/posts/sustainability/corporate-governance-ai-footprint/ Mon, 08 Dec 2025 18:33:23 +0000 https://blogs.thomsonreuters.com/en-us/?p=68692

Key insights:

      • Elevate AI governance to the board — Companies should tie their AI deployment to enterprise risk management with explicit KPIs for energy intensity, water withdrawals and consumption, and supply‑chain human rights.

      • Make transparency a competitive asset — Implement auditable disclosures on AI workload footprints, water stewardship, and supplier traceability, and then link executive compensation and vendor contracts to measurable efficiency and resiliency outcomes.

      • Demand transparency despite practical challenges — Although demanding transparency from suppliers may not be practical now, collectively asking for detailed information sends a notable demand signal to AI infrastructure providers that the company is seeking to drive change and preserve trust in an AI-driven economy.


AI now sits at the center of corporate sustainability governance as it supercharges data gathering, analytics, and reporting. Indeed, this is evident in areas such as energy optimization, emissions monitoring, land‑use assessment, and climate scenario analysis.

At the same time, AI’s rise is colliding with sharply growing electricity and water demands from data centers and concerns over geopolitically exposed supply chains. The governance challenge for companies therefore is to manage risk at this intersection. This means treating AI as a capital‑intensive, cross‑border infrastructure program whose environmental footprint and supply dependencies must be actively governed.

Why electricity and water are now board‑level AI risks

AI has turned electricity and water from background utilities into constraints that should be dealt with at the board level. Indeed, AI magnifies water risk across cooling, power generation, and chip manufacturing, making sourcing and efficiency choices strategic imperatives for many organizations.

Electricity demand — AI use and the data centers that power these tools already account for a significant and rising share of electricity use in the United States, a share poised to grow as AI workloads scale. Forward‑looking projections from the U.S. Department of Energy indicate that a substantial portion of data-center electricity demand by 2028 could be attributed to AI workloads.

Translating those projections into concrete terms gives an idea of the potential magnitude of the problem. Together, these sources suggest that the fastest‑growing part of AI’s energy appetite is not just training models, but the steady, pervasive inference capabilities required to power AI features in everyday products and operations.

Direct and indirect water use — Data centers powering AI also strain local water supplies. The footprint shows up in three places: i) data‑center cooling; ii) the electricity feeding those facilities, including thermoelectric and hydroelectric generation; and iii) AI’s own hardware supply chain. In regions already facing scarcity, these demands compound local stress. For example, the average per capita water withdrawal is 132 gallons per day, yet a large data center consumes water on a vastly larger scale.

This makes data centers among the more water-intensive facilities in the country. At the end of 2021, a meaningful share of data-center water was drawn from moderately to highly stressed watersheds in the western US, a situation that is common elsewhere as well.

Geopolitical exposure — The hardware that powers AI includes advanced logic and memory chips, which depend on concentrated manufacturing nodes and supply chains with access to critical minerals. Extraction and processing of inputs, such as lithium and cobalt, are often clustered in jurisdictions with elevated levels of human‑rights, environmental, or geopolitical risk. This concentration amplifies exposure to export controls, sanctions, or resource nationalism, both directly for companies’ supply chains and indirectly for companies using AI.

Companies need to ensure their communications on legal and policy issues point in the same direction with regard to these concerns. Indeed, companies need to deepen value‑chain due diligence while navigating evolving supply‑chain and AI‑specific regulatory regimes.

Recommended actions for companies

These intersections have clear implications for corporate governance. AI’s promise to accelerate decarbonization, improve transparency, and strengthen decision‑making will be realized only if leaders can properly manage the physical, political, and social realities underpinning the technology. Recommended actions to manage risk in areas in which AI and geopolitics converge include:

Demand transparency in electricity and water consumption of AI infrastructure — Companies building AI infrastructure need to conduct AI workload planning. Companies using AI can demand transparency from their suppliers in the form of a 24- to 36-month forecast of training and inference by region, with overlays of grid carbon intensity and local water stress, to better understand their indirect environmental impacts.

De‑risk impact by incentivizing clarity in supply chains — Companies using AI can begin asking AI infrastructure companies to conduct due diligence on tier 2, 3, and 4 suppliers, all the way down to smelters, refiners, and miners, to ensure that companies are not indirectly contributing to environmental and social harms.

The bottom line

While these recommendations generally align with evolving corporate practices in sustainability and risk management, the challenge of implementation will vary based on the company’s size, influence over suppliers, and existing governance structures. The most challenging aspect will likely be achieving transparency and clarity in supply chains, which requires cooperation from suppliers and the investment of potentially significant resources.

At the same time, however, if more companies collectively ask for this level of detailed information from their AI infrastructure providers, it will send a notable demand signal. Indeed, AI is both a sustainability tool and a sustainability liability, but its benefits will be realized only if leaders confront the physical and geopolitical constraints that make AI possible.

Those companies that begin asking for this level of transparency can preserve the trust that underwrites their license to navigate successfully in an AI‑driven economy.


You can find out more on the sustainability issues companies are facing around the environment here

Beyond cost reduction: How corporate legal departments can align strategic value /en-us/posts/corporates/value-alignment/ Tue, 02 Dec 2025 15:10:37 +0000 https://blogs.thomsonreuters.com/en-us/?p=68491

Key insights:

      • Value perception gap persists — Most corporate legal departments still measure and report primarily on cost, obscuring their broader strategic contributions.

      • Value alignment toolkit — A new value framework exists for legal departments to close the gap in the perception of their value to the organization.

      • AI accelerates urgency — The rise of AI makes comprehensive value measurement essential in order to safeguard legal department budgets and resources.


As many General Counsel continue to elevate their position as strategic leaders in their business, they are often constrained by cost-focused narratives. Despite their success in delivering high-quality legal advice, managing complex risks, and enabling business growth, many corporate legal departments remain trapped in a narrow perception defined almost entirely by spend metrics.

The disconnect is clear. While legal departments support strategic goals across multiple dimensions — delivering effective advice, operating efficiently, protecting the organization, and enabling business strategy — most measure and report only on cost and time. And when leadership sees only budget and time metrics, this unfortunately reinforces the cost center narrative and hides the real value of the legal department.

The perception gap: What gets measured gets seen and valued

Research from the Thomson Reuters Institute (TRI) reveals a troubling pattern: While 90% of legal departments now use formal metrics — up from 75% eight years ago — very few align those metrics to the full range of their strategic goals. Indeed, nearly half of all metrics currently in use relate to spend factors, while only about one in four measure quality, and even fewer capture how legal departments protect enterprise value or enable business strategy.

This creates what TRI calls a perception gap. When C-Suite executives describe the areas in which they expect their legal departments to focus, they consistently over-emphasize efficiency while under-recognizing contributions such as business protection and strategic enablement. As a result, many legal departments struggle to secure resources for risk management initiatives, their strategic contributions go unnoticed and unrecognized, and their efficiency efforts are viewed as mere cost-cutting rather than value optimization.

The root cause of this misalignment lies in measurement itself. A legal department cannot manage what doesn’t get measured, and more importantly, it cannot demonstrate value for what remains invisible.

The 4 spinning plates: A complete picture of legal value

Through extensive analysis of strategic priorities across hundreds of legal departments, TRI identified four core areas of responsibility that remain evergreen regardless of changing business environments, regulatory shifts, or technological disruption.


The four spinning plates model captures these perpetual responsibilities — effective, efficient, enable, and protect — in a deliberate metaphor. Like a performer keeping multiple plates spinning simultaneously, GCs must maintain constant attention across all four areas. They are fundamentally interconnected — efficiency gains can enable strategic work, while strong risk management builds the trust necessary for bolder business strategies.

Yet when metrics are focused primarily on cost and time, they tell only a fraction of this story. Many legal departments have built their measurement framework around the Efficiency plate alone, leaving the other three plates far less visible to enterprise leadership and limiting their understanding of legal’s comprehensive roles and strategic influence.

Closing the gap: The value alignment toolkit

TRI has spent years conducting research, developing frameworks, and facilitating strategic planning sessions with legal department leaders on this challenge. Now, it is making this expertise broadly accessible through a comprehensive new resource: the Value Alignment Strategic Toolkit.

This free online resource center provides practical, immediately actionable guidance to better define, measure, and communicate a corporate legal department’s full value to the organization. The toolkit is built on benchmark data from hundreds of legal departments, along with proven strategic frameworks and expert insights, all organized into six interconnected sections that guide users from foundational clarity to strategic execution. These six sections include:

      1. Define your department’s strategic goals — Establish business-connected objectives with clear ambitions
      2. Design metrics that matter — Select measurements that demonstrate value creation, not just cost
      3. Strengthen your data — Build robust collection and analysis methods, including feedback involving the voice of the stakeholder
      4. Tell your value story — Develop compelling narratives that resonate with enterprise leadership
      5. Review, refine & advance — Implement continuous improvement processes
      6. Maximize your impact — Scale success across all four spinning plates of value

Each section includes practical resources, including assessment tools, templates, checklists, framework guides, and real-world examples. The metrics masterclass features more than 50 legal department metrics aligned to the four-plate framework, including 12 recommended core metrics that span all four strategic areas.


For example, a GC preparing for a quarterly check-in with the CFO could use the appropriate templates, guides, best practices, and the recommended metrics to create a one-page dashboard. The dashboard would provide customized metrics to align with their CFO’s priorities, such as deals accelerated, risks avoided, or initiatives supported.
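A one-page dashboard of this kind amounts to grouping a handful of metrics under each of the four plates. The sketch below is purely illustrative; every metric name and value is hypothetical, not drawn from the toolkit itself:

```python
# Illustrative sketch (hypothetical metric names and values): grouping
# legal department metrics by the four "spinning plates" for a one-page view.

dashboard = {
    "Effective": {"stakeholder_satisfaction_pct": 90},
    "Efficient": {"outside_counsel_spend_change_pct": -15},
    "Protect":   {"regulatory_matters_closed": 14},
    "Enable":    {"deals_accelerated": 25},
}

def render(dashboard):
    """Format the grouped metrics as a plain-text one-pager."""
    lines = []
    for plate, metrics in dashboard.items():
        lines.append(plate)
        for name, value in metrics.items():
            lines.append(f"  {name}: {value}")
    return "\n".join(lines)

print(render(dashboard))
```

The point of the structure is that each plate surfaces at least one metric, so a CFO sees the full picture rather than spend figures alone.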

The AI imperative: Why better metrics matter more than ever

Not surprisingly, the emergence of generative AI (GenAI) adds new urgency to this work, presenting both opportunity and vulnerability. On one hand, AI holds significant potential to enhance legal department capabilities by automating routine tasks, accelerating research, improving contract analysis, and freeing lawyers to focus on higher-value strategic work. At the same time, however, if legal departments continue to be viewed primarily through an efficiency lens, advances in AI that reduce time and cost could conceivably threaten department resources and headcount.

Comprehensive value measurement can help legal departments demonstrate enterprise value that cannot be replaced by AI. When legal departments can clearly articulate how they protect enterprise value, enable faster time-to-market for new products, strengthen board confidence through proactive governance, and maintain high stakeholder satisfaction scores, they establish their strategic necessity regardless of technological advancement.

The Value Alignment Toolkit provides frameworks and tools to build this comprehensive measurement approach, ensuring legal departments are positioned to leverage AI’s benefits, while at the same time demonstrating the irreplaceable value that the legal department provides, including:

      • Quantifying strategic legal department contributions that AI cannot replicate, such as judgment, relationship-building, business counsel, risk navigation, and more
      • Demonstrating value beyond efficiency to justify budgets and resources
      • Identifying high-impact opportunities in which legal department expertise can best leverage AI to address the most pressing business needs
      • Assessing ROI of specific AI use cases to prioritize where to adopt and scale, and conversely, areas that are not ready yet

Moving from cost center to strategic partner

For a corporate legal department, the transformation from cost center to strategic partner requires more than aspiration; it requires data-driven evidence. It demands a systematic approach to measurement that captures the complete picture of the department’s contributions and then communicates that value in clear business language.

The Value Alignment Strategic Toolkit enables legal departments to shift from reporting simple cost metrics, such as:

We reduced outside counsel spend by 15%

to telling a more complete story:

We delivered value by maintaining 90% stakeholder satisfaction while handling 25% more strategic matters, reducing costs through technology and process improvements, preventing potential regulatory exposure through proactive compliance programs, and accelerating product launch timelines through innovative legal structures.

This is not merely reframing — it’s revealing what was always present but had remained largely invisible. This enables strategic conversations about the department’s complete contribution rather than defaulting to discussions solely around cost.

The path forward

Many corporate legal departments today create enterprise value every day across multiple dimensions by providing sound advice, managing risk exposure, and enabling growth. Yet too often, that value remains unrecognized simply because it isn’t being measured or communicated effectively.

At a moment when business transformation is accelerating, regulatory complexity is increasing, and technology is reshaping legal service delivery, continuing to rely on cost and time metrics alone isn’t just insufficient; it actively undermines a legal department’s strategic position.

The complete value story of legal departments deserves to be told. It’s time to move from defending budgets to demonstrating impact, from reporting costs to revealing value, and from being seen as a necessary expense to being recognized as an essential strategic partner. Better frameworks and tools can shift the conversation from cost center scrutiny to strategic leadership discussions about how GCs and their teams enable business growth.


Transform how your legal department demonstrates value by accessing the free frameworks, metrics, and strategic guidance in the Value Alignment Strategic Toolkit

The false comfort of AI engineering: Building the reusable enterprise /en-us/posts/technology/ai-engineering-building-reusable-enterprise/ Thu, 20 Nov 2025 13:49:21 +0000 https://blogs.thomsonreuters.com/en-us/?p=68471

Key takeaways:

      • Shifting from engineering to architecture — Focusing solely on building better AI models and engineering solutions leads to isolated, non-reusable outputs. Instead, organizations should build AI into the broader enterprise, emphasizing reusable, machine-readable intelligence that integrates with business operations and data structures.

      • Regulation as opportunity for reusability and efficiency — Regulatory frameworks are not just compliance burdens; they also are catalysts for sustainable AI. By mandating standardized, machine-readable data, these regulations force organizations to design systems for reuse, enabling operational efficiency and scalable innovation.

      • Reusable enterprise is the path to sustainable reinvention — The future of AI leadership lies in building adaptable, reusable data and AI infrastructures. When standardized data, AI models, and regulatory compliance reinforce each other, organizations can continuously reinvent themselves, support multiple business outcomes from the same information assets, and achieve compound returns on their investments.


Across industries, executives are confronting an uncomfortable truth: AI projects are delivering outputs, not outcomes.

For years, organizations have poured time and capital into the mechanics of AI — the algorithms, the computation power, the data pipelines, and the engineering teams to support them. Yet results remain uneven. Models keep getting larger, but lasting, reusable business value hasn’t followed.

The problem isn’t the math, it’s the mindset.

Too many enterprises have tried to engineer AI into existence instead of architecting it into the enterprise. The focus has been on perfecting models, not integrating them into the broader data and operational fabric of the business. The assumption has been that a technically superior model naturally creates a competitive edge. It doesn’t.

Without consistent governance, shared definitions, and reusable data structures, every AI initiative becomes its own isolated experiment. One line of business builds a credit-risk model. Another develops an environmental, social, and governance (ESG) classifier. A third deploys a generative assistant for customer support. Each team moves fast, but none build on each other’s work. The result is a proliferation of proofs of concept — impressive on paper but disconnected in practice.


For years, organizations have poured time and capital into the mechanics of AI — the algorithms, the computation power, the data pipelines, and the engineering teams to support them. Yet results remain uneven.


And this fragmentation carries a financial cost. Every new model adds complexity — new pipelines, new monitoring requirements, and additional governance checkpoints. These systems rarely scale together, and as integration demands grow, executives find themselves in a paradox: Make massive investments in AI infrastructure yet see declining agility and uncertain ROI.

The AI engineering mindset has optimized the parts of a production solution set, not the whole. It has produced models that predict but not organizations that learn.

In short, the AI engineering mindset has reached its limit — a sign that AI is entering sustainable growth cycles. Many leaders are beginning to realize that they don’t need more AI engineers; rather, they need system designers who can embed intelligence into reusable business frameworks — all while navigating a regulatory environment increasingly defined by machine-readable data standards such as the Financial Data Transparency Act (FDTA) and Standard Business Reporting (SBR).

Regulation as catalyst, not constraint

At first glance, FDTA and SBR may appear to be just another layer of regulatory complexity. They are not. In fact, they represent one of the most powerful architectural opportunities available to organizations today.

By mandating machine-readable data standards, these frameworks force companies to design for reuse. They turn what once felt like a compliance exercise into an infrastructure strategy — one that connects regulatory requirements directly to operational efficiency. Build once. Reuse often.

For decades, compliance has been treated as a cost of doing business. Under FDTA and SBR, it can become the scaffolding of reinvention. Machine-readable, standardized data provides the foundation for models that are verifiable, shareable, and reusable across domains. Reporting ceases to be an afterthought and becomes a living data layer that fuels forecasting, stress testing, and product innovation.

When viewed through this lens, regulation isn’t an obstacle; it’s the blueprint for sustainable AI. It forces clarity, consistency, and interoperability — qualities every enterprise says it wants, but few achieve voluntarily. Regulation may finally deliver what AI engineering alone could not: The discipline of reusability.

From proofs of concept to proofs of architecture

For most organizations, AI success has been measured by the number of proofs of concept completed, or how fast a model moves into production. However, the real test of maturity isn’t how many experiments you run; it’s how easily those experiments can be scaled, reused, or extended.

That’s where the next evolution lies. We are now shifting from proofs of concept to proofs of architecture. And that means the question leaders should be asking isn’t, “Did it work once?” but “Can it work again, and with half the effort?” Only when a single domain’s data can serve multiple regulatory, compliance, and analytical purposes can the enterprise start to gain compound returns on its information assets.


When viewed through this lens, regulation isn’t an obstacle; it’s the blueprint for sustainable AI. It forces clarity, consistency, and interoperability — qualities every enterprise says it wants, but few achieve voluntarily.


This approach turns data from a static resource into a dynamic capability. AI is no longer something you deploy; rather, it’s something you design for reuse.

Engineering adaptability

Organizations that embrace this shift are learning to engineer adaptability rather than one-off innovation. Their data and AI systems act like interchangeable components, each capable of supporting new regulations, mergers, or market disruptions without starting from scratch.

Some industry examples of this development include:

      • Financial services — Stress-testing data used for regulatory compliance can also inform pricing analytics and liquidity simulations, reducing cycle time between audit and strategy.
      • Healthcare — Patient outcome models built for quality reporting can be reused to predict staffing needs or optimize clinical supply chains, extending beyond compliance and into operations.
      • Legal and compliance sectors — AI used for document classification under discovery protocols can be repurposed for internal policy audits or ESG disclosure mapping, turning regulatory data into a strategic asset.
      • Manufacturing and supply chain — Sensor and maintenance data initially used for safety reporting can drive predictive production planning and carbon-emission forecasting under emerging sustainability standards.
      • Public sector and critical infrastructure — Data collected for transparency and open-data mandates can be reused to model risk exposure across utilities, cybersecurity, and climate resilience programs.

In each of these cases, the same information infrastructure supports different outcomes. That’s the hallmark of a reusable enterprise.
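The pattern in all five examples is the same: one standardized, machine-readable dataset feeding more than one outcome. A minimal sketch of that idea, with entirely hypothetical field names and figures, might look like this:

```python
# Illustrative sketch: one standardized, machine-readable dataset
# (hypothetical fields and values) reused for two distinct outcomes,
# a compliance disclosure and an operational forecast.

from dataclasses import dataclass

@dataclass
class EmissionRecord:
    site: str
    period: str        # e.g., "2025-Q3"
    co2_tonnes: float  # one standardized unit across all reporting

records = [
    EmissionRecord("plant-a", "2025-Q2", 120.0),
    EmissionRecord("plant-a", "2025-Q3", 110.0),
    EmissionRecord("plant-b", "2025-Q2", 80.0),
    EmissionRecord("plant-b", "2025-Q3", 95.0),
]

def compliance_total(records, period):
    """Outcome 1: aggregate disclosure figure for a reporting period."""
    return sum(r.co2_tonnes for r in records if r.period == period)

def naive_forecast(records, site):
    """Outcome 2: project next-period emissions from the latest trend."""
    series = sorted((r.period, r.co2_tonnes) for r in records if r.site == site)
    last, prev = series[-1][1], series[-2][1]
    return last + (last - prev)  # simple linear extrapolation

print(compliance_total(records, "2025-Q3"))  # 205.0
print(naive_forecast(records, "plant-b"))    # 110.0
```

Because both functions consume the same schema, adding a third outcome (say, pricing analytics) requires no new pipeline — only a new consumer of the existing records. That is the "build once, reuse often" economics in miniature.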

[Chart: AI engineering]

The above chart’s interconnected components illustrate how standardized data, reusable AI, and regulatory compliance can reinforce one another to create a continuous cycle of enterprise reinvention — standardized data supports reusable AI, which in turn enhances reporting and regulatory alignment. The result is a virtuous loop that replaces isolated projects with scalable, data-driven reinvention.

A call to reusable leadership

The next phase of digital leadership won’t be defined by how sophisticated a company’s models are, but instead by how seamlessly those models integrate into decision-making.

The leaders who succeed will be those who align AI investments with evolving regulatory and data standards. Their organizations will speak a common data language in which AI, compliance, and analytics operate within a shared architectural framework.

As FDTA and SBR converge globally, the line between compliance and competitiveness will blur. What once felt like regulatory overhead will become the foundation of reusable intelligence. Reinvention, in this sense, isn’t a campaign or initiative — it’s a discipline. This is not AI as a project; it’s AI as infrastructure and the architecture of continuous reinvention.

For executives navigating 2026’s convergence of regulation, consolidation, and automation, the difference between thriving and merely surviving will depend on whether they can build organizations that learn, adapt, and continuously reinvent themselves through data.


You can find more blog posts by this author here
