Data Governance Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/topic/data-governance/ Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers. Thu, 16 Apr 2026 06:20:41 +0000

Country-by-country reporting is getting more complicated — and the window to get ahead is closing /en-us/posts/corporates/country-by-country-reporting/ Tue, 14 Apr 2026 12:22:22 +0000 https://blogs.thomsonreuters.com/en-us/?p=70335

Key takeaways:

      • Country-by-country reporting will only increase in complexity — Australia’s enhanced country-by-country reporting (CbCR) requirements — reconciling taxes accrued against taxes credited — are a preview of where other high-scrutiny jurisdictions are heading. Companies need to build that explanatory analysis capability now, systematically, rather than scrambling later.

      • Corporate teams need a shared narrative — The EU’s public CbCR is a reputational event, not just a filing. Tax, communications, and investor relations teams therefore need a shared narrative before the data goes public; inconsistencies create exposure you do not want to manage reactively.

      • Rethink your filing jurisdiction in light of changes — If EU filing jurisdiction was chosen at initial implementation and never revisited, look again. Guidance has matured, and a more efficient or better-suited option may now be available.


WASHINGTON, DC — Among the many pressing topics discussed in detail at the recent Tax Executives Institute (TEI) event, country-by-country reporting (CbCR) and its ability to reshape the corporate tax industry certainly had its place. Between escalating local jurisdiction requirements, the EU’s public CbCR directive, and growing pressure for deeper explanatory disclosures, CbCR has quietly evolved from a transfer pricing filing obligation into something far more strategically consequential.

The floor is just the floor

CbCR was created by the Organisation for Economic Co-operation and Development (OECD) as a minimum standard for countries. Jurisdictions are now increasingly layering additional requirements on top of the OECD’s basic template, resulting in a widening gap between the standard requirements and what tax authorities actually want.

Currently, Australia is the most pointed example. Australian tax authorities are now requiring multinational groups to go beyond the standard CbCR data fields and provide explanatory narratives that reconcile taxes accrued against taxes actually credited. This requires corporate tax departments to bridge the gap between financial statement accruals and their organizations’ cash tax positions in a way that is coherent, defensible, and consistent with positions taken elsewhere.
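The jurisdiction-level reconciliation Australia now expects can be sketched in code. The following is a minimal illustration only, not a compliance tool; all field names, figures, and the tolerance threshold are hypothetical.

```python
# Hypothetical jurisdiction-level CbCR data: taxes accrued per the
# financial statements vs. taxes actually paid (cash taxes credited).
# All names and figures are illustrative only.
records = [
    {"jurisdiction": "AU", "tax_accrued": 120.0, "tax_paid": 95.0},
    {"jurisdiction": "DE", "tax_accrued": 80.0,  "tax_paid": 80.0},
    {"jurisdiction": "NL", "tax_accrued": 60.0,  "tax_paid": 72.0},
]

def reconcile(records, tolerance=0.01):
    """Flag jurisdictions where accrued and cash taxes diverge,
    i.e., where an explanatory narrative would be needed."""
    gaps = []
    for r in records:
        diff = r["tax_accrued"] - r["tax_paid"]
        if abs(diff) > tolerance:
            gaps.append({"jurisdiction": r["jurisdiction"], "gap": diff})
    return gaps

for g in reconcile(records):
    print(f"{g['jurisdiction']}: accrued-vs-paid gap of {g['gap']:+.1f}")
```

The point of the sketch is the workflow, not the arithmetic: each flagged gap is a jurisdiction for which the tax team would need a coherent, defensible explanation ready before an authority asks for one.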

At the TEI event, panelists explained that this obligation will be especially demanding for tax departments managing complex timing differences, deferred tax positions, or significant jurisdictional mismatches between booked and cash taxes. Indeed, this additional layer of scrutiny will need dedicated attention.


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.


The broader signal matters: Australia will not be the last jurisdiction to move in this direction. Tax departments should therefore treat Australia’s approach as a leading indicator of where other high-scrutiny jurisdictions could be heading. Building the capability to produce this kind of explanatory analysis systematically — rather than scrambling jurisdiction by jurisdiction — would be the smarter long-term investment for corporate tax teams.

Public CbCR in the EU: The transparency ratchet has turned

For US-based multinationals with significant European operations, the EU’s public CbCR directive has fundamentally changed the calculus. Unlike the confidential tax authority filings most corporate tax departments are accustomed to, the EU’s public CbCR rules put organizations’ jurisdictional profit and tax data into the public domain, making it visible to investors, journalists, civil society groups, and organizations’ employees and customers.

The EU framework specifies which entities trigger the reporting obligation and which entity within the group is responsible for making the public filing. That scoping analysis is not always straightforward for complex multinational structures, and getting it wrong could present both reputational and legal risk.




For US-headquartered groups, the implications extend well beyond Europe. Public CbCR data is now being read alongside US disclosures, ESG reporting, and public narratives about tax governance. Inconsistencies, even those technically explainable, could create unwanted noise about the company. This is another reason why the tax function should partner across the business — in this case with the communications team — to make sure both are aligned to tell the CbCR story rather than being caught off guard by a journalist or an investor during an earnings call.

Questions that US multinationals should be asking

Fortunately, US multinationals with multiple EU subsidiaries are not required to file public CbCR reports in every EU member state in which they have a presence. Instead, under the EU framework, a qualifying ultimate parent or standalone undertaking can satisfy the public disclosure requirement through a single filing in one EU member state, provided the relevant conditions are met. Germany and the Netherlands have emerged as two of the more popular choices for this consolidated filing approach, given their well-developed regulatory frameworks and the depth of available guidance on what compliant disclosure looks like in practice.

The strategic implication is meaningful. Choosing a filing jurisdiction is not purely an administrative decision — it is a choice that affects the regulatory environment that governs the disclosure, the language requirements, the timing, and the interpretive framework that applies to data. Corporate tax departments that defaulted to a filing jurisdiction early in the EU implementation process should take a fresh look. Regulatory guidance has matured significantly, and there may be a more efficient or better-suited path available than the one originally chosen.

The uncomfortable divergence

There is a notable irony in the current environment. Domestically, the IRS and U.S. Treasury’s 2025-2026 Priority Guidance Plan reflects an explicit focus on deregulation and burden reduction, detailing dozens of projects aimed at reducing compliance costs for US businesses. Meanwhile, the international compliance environment has moved in the opposite direction, adding disclosure layers, explanatory requirements, and public transparency obligations that many US businesses cannot avoid simply because they are headquartered in the United States.

This divergence has a direct implication for how tax departments allocate resources and make the internal case for investment in international compliance infrastructure. The burden internationally is not going down — indeed, it is intensifying — and that argument is now backed by concrete examples rather than projections.

3 things worth doing now

There are several actions that corporate tax teams should consider, including:

      • Audit CbCR data quality with Australia’s enhanced requirements in mind — If you cannot readily reconcile taxes accrued to taxes credited at the jurisdictional level, that gap needs to be closed before it becomes an authority inquiry.

      • Revisit EU filing jurisdiction strategy — If your jurisdictional decision was made at the time of initial implementation and has not been reviewed since, it is worth a fresh look before the next reporting cycle.

      • Develop an internal narrative around public CbCR data before it circulates externally — Your company’s tax story should not be a surprise to the corporate teams involved in communications, investor relations, or ESG — and in today’s world, expecting such news to stay quiet is no longer safe.

While CbCR started as a tool for tax authorities, it has become something more visible, more public, and more consequential than that — and that trajectory is not reversing any time soon.


You can download a full copy of the Thomson Reuters Institute’s

Scaling Justice: AI is scaling faster than justice, revealing a dangerous governance gap /en-us/posts/ai-in-courts/scaling-justice-governance-gap/ Mon, 13 Apr 2026 16:57:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=70330

Key takeaways:

      • AI frameworks need to keep up with implementation — While AI governance frameworks are being developed and enacted globally, their effectiveness depends on enforceable mechanisms within domestic justice systems.

      • Access to justice is essential for trustworthy AI regulation — Rights and protections are only meaningful if individuals can understand, challenge, and seek remedies for AI-driven decisions. Without operational access, governance frameworks risk remaining theoretical.

      • People-centered justice and human rights must anchor AI governance — Embedding human rights standards and ensuring equal access to justice in AI regulation strengthens public trust, accountability, and the credibility of both public institutions and private companies.


AI governance is accelerating across global, national, and local levels. As public investment in AI infrastructure expands, new oversight bodies are emerging to assess safety, risk, and accountability. The global policy conversation has shifted from principles to the implementation of meaningful guardrails and AI governance frameworks, which legislators are now drafting and enacting.

These developments reflect growing recognition that AI systems demand structured oversight and a shift from voluntary safeguards and standards to institutionalized governance. One critical dimension remains underdeveloped, however: how do these frameworks function in practice? Are they enforceable? Do they provide accountability? Do they ensure equal access?

AI governance will not succeed on the strength of international declarations or regulatory design alone; rather, domestic justice systems will determine whether it works. At this intersection, the connection between AI governance and access to justice becomes real.

In early February, leaders across government, the legal sector, international organizations, industry, and civil society convened for an expert discussion. The following reflections attempt to build on that dialogue and its urgency.

From principles to enforcement

Over the past decade, AI governance has evolved from hypothetical ethical guidelines to voluntary commitments, binding regulatory frameworks, and risk-based approaches. Despite these advancements, however, many past attempts to provide structure and governance have been quickly outpaced by technology and remain insufficient without enforcement mechanisms. As Anoush Rima Tatevossian of The Future Society observed: “The judicial community should have a role to play not only in shaping policies, but in how they are implemented.”

Frameworks establish expectations, while courts and dispute resolution mechanisms interpret rules, test rights, evaluate harm, assign responsibility, and determine remedies. If individuals are not empowered to safeguard their rights and cannot access these mechanisms, governance frameworks remain theoretical or are casually ignored.

This challenge reflects a broader structural constraint. Even without AI, legal systems struggle to meet demand. In the United States alone, 92% of people do not receive the help they need in accessing their rights in the justice system. Introducing AI into this environment without strengthening access can risk widening, rather than narrowing, the justice gap.




Justice systems serve as the operational core of AI governance. By inserting the rule of law into unregulated areas, they provide the infrastructure that enables accountability by interpreting regulatory provisions in specific cases, assessing whether AI-related harms violate legal standards, allocating responsibility across public and private actors, and providing accessible pathways for redress.

These frameworks also generate critical feedback. Disputes involving AI systems expose gaps in transparency, fairness, and accountability. Legal professionals see where governance frameworks first break down in real-world conditions, often long before policymakers do. As a result, these frameworks function as an early signal of policy effectiveness and rights protections.

Importantly, AI governance does not require entirely new legal foundations. Human rights frameworks already provide standards for legality, non-discrimination, due process, and access to remedy, and these standards apply directly to AI-enabled decision-making. “AI can assist judges but must never replace human judgment, accountability, or due process,” said Kate Fox Principi, Lead on the Administration of Justice at the United Nations (UN) Office of the High Commissioner for Human Rights (OHCHR), during the February panel.

Clearly, rights are only meaningful when individuals can exercise them — this constraint is not conceptual, it’s operational. Systems must be understandable, affordable, and responsive, and institutions should be capable of evaluating complex, technology-enabled disputes.

Trust, markets & accountability

Governance frameworks that do not account for these dynamics risk entrenching inequities rather than mitigating them. An individual’s ability to understand, challenge, and seek a remedy for automated decisions determines whether governance is credible. A people-centered justice approach asks whether individuals can meaningfully engage with the system, not just whether rules exist. For example, women face documented barriers to accessing justice across jurisdictions, and AI systems trained on biased data can replicate or amplify existing disparities in employment, financial services, healthcare, and criminal justice.

“Institutional agreement rings hollow when billions of people experience governance as remote, technocratic, and unresponsive to their actual lives,” said Alfredo Pizarro of the Permanent Mission of Costa Rica to the UN. “People-centered justice becomes essential.”

AI systems already shape outcomes across employment, financial services, housing, and justice. Entrepreneurs, law schools, courts, and legal services organizations are already building AI-enabled tools that help people navigate legal processes and assert their rights more effectively. Governance design will determine whether these tools expand access to justice or introduce new barriers.

Private companies play a central role in developing and deploying AI systems. Their products shape economic and social outcomes at scale. For them, trust is not abstract; it is a success metric. “Innovation depends on trust,” explained Iain Levine, formerly of Meta’s Human Rights Policy Team. “Without trust, products will not be adopted.” And trust, in turn, depends on enforceability and equal access to remedy.

AI governance will succeed or fail based on access

As Pizarro also noted, justice provides “normative continuity across technological rupture.” Indeed, these principles already exist within international human rights law and people-centered justice; although they precede the advent of autonomous systems, they provide standards for evaluating discrimination, surveillance, and procedural fairness, and remain durable as new challenges to upholding justice and the rule of law emerge.

People-centered justice was not designed for legal systems addressing AI-related harms, but its outcome-driven orientation remains durable as new justice problems emerge.

The current stage presents an opportunity to align AI governance with access to justice from the outset. Beyond well-drafted rules, we need systems that people can use, which means that effective governance requires coordination among policymakers, legal professionals, and the public.


You can find other installments of our Scaling Justice blog series here

From emerging player to contender: How Latin America can compete in the global AI race /en-us/posts/technology/latam-ai-investment/ Mon, 06 Apr 2026 11:57:46 +0000 https://blogs.thomsonreuters.com/en-us/?p=70259

Key takeaways:

      • Strategic collaboration is becoming a defining strength for the region — Latin American organizations are realizing that progress in AI accelerates when they combine forces by linking industry expertise, academic talent, and public‑sector support.

      • AI initiatives rooted in real local challenges are gaining global relevance — By developing solutions grounded in the region’s own structural needs, whether in infrastructure, finance, agriculture, education, or mobility, many LatAm firms are producing technologies that are both highly impactful and naturally scalable.

      • Demonstrating clear outcomes is becoming fundamental — Organizations that show concrete operational improvements, measurable efficiencies, or stronger customer outcomes are strengthening their position with investors and partners.


In recent years, Latin America has experienced significant growth in investments related to AI, yet its share of global AI investment remains strikingly low given that the region makes up around 6.6% of global GDP — highlighting the opportunity to scale AI initiatives even further. Although there are notable differences among countries, Mexico and Brazil — the two largest LatAm economies — stand out for their volume of AI projects and funding, followed by other nations such as Chile, Colombia, and Argentina.

By recognizing the region’s strengths — which include cost-effective operations, access to data, clean energy, and public support — the region’s businesses can better position themselves and design strategies to draw in international investors that may be increasingly seeking promising locations for AI development.

Lessons from LatAm’s AI success stories

Latin America has produced remarkable AI success stories that can serve as models to build confidence among investors. These cases — involving companies that attracted substantial investment and achieved growth — demonstrate valuable best practices that range from technological innovation to working with governments and corporations. Some of these best practices include:

Building strategic alliances

The journey of innovation rarely unfolds in isolation. At times, the presence of large, established companies, whether local industry leaders or multinationals, has served as a catalyst for AI projects. The experience of Kilimo, which specializes in AI-powered agricultural irrigation, proves it: Kilimo is now partnering with EdgeConneX, a data center company based in the United States, on a community water initiative.

Academia, too, can be woven into this narrative. Collaborations with research centers or universities offer scientific credibility and connect ventures with emerging talent. In Mexico, AI startups often originate within university settings — such as computer vision projects from the National Autonomous University of Mexico (UNAM), for instance — and maintain agreements that sustain ongoing innovation and technical progress even with modest resources. And academic validations, whether in published papers or conference accolades, tend to resonate with foreign investors. Indeed, the emergence of this ecosystem that features early corporate clients and academic mentors frequently lends a distinctive appeal for those seeking investment.

Focusing on local problems with global impact

Within Latin America, certain issues prove especially relevant where AI solutions intersect with sectors renowned for regional strengths, such as fintech and financial inclusion, agrotech optimizing agriculture, and foodtech drawing on local ingredients. The experience of Chilean food startup NotCo — whose AI-developed products were created locally and subsequently exported — suggests how innovations rooted in local context may generate broader attention.

By addressing needs in urban transport, education, mining, and related areas, local LatAm companies can provide access to homegrown data and users, which can further refine technology and open pathways for investors into similar emerging markets. When AI solutions respond to genuine pain points rather than mere novelty, momentum often builds more quickly, and the model finds validation among the investors that evaluate such ventures.

Showing results and AI ROI early on

Questions about AI’s return on investment linger for many executives. Evidence of clear metrics like cost savings, sales growth, or error reduction can prove persuasive, especially when complemented by success stories from local clients.

Recent studies show that companies adopting AI report measurable operational gains, and such figures tend to reassure those considering investment by illustrating tangible improvements. Testimonials or independent validations, such as a university study, can further illuminate achievements.

The act of quantifying impact — whether in efficiency, revenue, or other relevant KPIs — has a way of transforming perceptions from uncertainty toward clarity.

Leveraging government incentives and collaborations

Many Latin American nations have put forth support programs for AI and tech projects, such as non-repayable funds, soft loans, and tax benefits for innovation.

Public financing, when present, often acts as a stamp of validation for private investors. For example, this trust extended to Brazilian startups receiving Finep support for AI health projects, which in turn can shift perceptions among foreign venture capital firms. Engagement in government pilots, such as smart city initiatives or solutions for ministries, provides valuable exposure. In such contexts, public-private partnerships and incentives seem to act as quiet levers for growth and legitimacy.

Seeking smart and diversified financing

Financial strategies in Latin America have been shaped by the interplay of local and foreign capital. Local funds often bring insights and patience, while foreign funds may offer larger investments and global scaling experience. Ownership dilution sometimes accompanies the arrival of strategic investors, whose networks can prove invaluable. Programs like 500 Startups, Y Combinator, MassChallenge, and international competitions have ushered LatAm AI startups such as Heru, Rappi, Bitso, and Clip into new rounds of capital following increased exposure.

Efficiency in capital management, which can be demonstrated with lean burn rates and milestone achievement with limited resources, signals an ability to execute within the realities of LatAm, which may enhance the appeal for future investments. The cultivation of relationships and responsible stewardship of capital frequently matters as much as the funds themselves, suggesting that the value of mentorship, contacts, and reputation is often intertwined with deepening financial support.

Unlocking AI investment

By applying these principles, Latin American companies have achieved a better position to attract AI investments to their projects and help position the region as a viable destination for technology capital. These recent experiences show that when a LatAm company combines innovation, talent, and strategy — while communicating its story well — it can win over global and local investors alike. Each of the best practices noted above is based on real lessons: international alliances (NotCo with US funds), leveraging incentives (Brazilian companies funded by Finep), talent formation (Santander and Microsoft programs), focus on ROI (successful use cases that convince boards), and more.

Latin America has challenges but also unique advantages. Companies that manage to navigate this environment intelligently will increase their chances of securing the financing needed to innovate and grow. By doing so, they will contribute to a virtuous circle in which each new success attracts more investment to the region and opens doors for the next generation of LatAm AI ventures.


You can find more about the challenges and opportunities in the Latin American region here

Reinventing the data core: The arrival of the adaptable AI data foundry /en-us/posts/technology/reinventing-data-core-adaptable-data-foundry/ Thu, 05 Mar 2026 16:08:59 +0000 https://blogs.thomsonreuters.com/en-us/?p=69795

Key takeaways:

      • There is a widening gap between AI ambition and readiness — The gap between AI ambition and data readiness is widening, making the adoption of an adaptable data foundry essential for scalable, explainable, and compliant AI outcomes.

      • A data foundry model directly addresses the root cause — A data foundry model enables organizations to industrialize data production, automate compliance, and ensure consistent data lineage, thereby overcoming the limitations of brittle, legacy data architectures.

      • Incorporate the data core into your AI planning — Reinventing the data core is now a strategic imperative for those enterprises that aim to thrive in 2026 and beyond, as agentic AI, regulatory demands, and integration complexity accelerate.


This article is the third and final installment in a 3-part blog series exploring how organizations can reset and empower their data core.

A defining theme of this year so far is the widening distance between organizational ambition and data readiness. Leaders want the capabilities they believe agentic AI instantly delivers — automated compliance, predictive integration for M&A, and decision-intelligence pipelines that reduce operational friction.

Without a data foundry, however, much of that will be impossible. Instead, workflows will remain brittle, AI agents will hallucinate under inconsistent semantics, and data lineage will break down across federated sources. Further, without a data foundry, regulatory mappings involved with the Financial Data Transparency Act (FDTA) and the Standard Business Reporting (SBR) framework cannot be automated, cross-functional insights will require manual reconciliation, and auditability will collapse under scrutiny.

This is not a failure of leadership. It is a failure of architectural design to recognize that a coherent data foundation must precede these technologies, along with the critical priorities of data security, auditability, and lineage.


For decades, organizations built monolithic systems that were optimized for stability and reporting. Today’s world demands modularity, continuous adaptation, and agent-driven interoperability. Architecture has shifted from build and operate to build and evolve. This is precisely what a data foundry enables.

Why reinvention can no longer wait

Throughout 2025 and now into the early months of 2026, data and AI have quietly shifted from innovation topics to enterprise constraints. Leaders across regulated markets are starting to recognize that the obstacles limiting their AI ambitions are neither mysterious nor technical — they are structural. These obstacles sit inside the data core, the silent architecture that determines whether any form of automation, intelligence, or compliance can scale beyond a pilot.

The data bears this out. When you examine the work coming from Tier-1 research bodies, supervisory institutions, and global transformation benchmarks, a consistent narrative emerges beneath the headlines: AI is accelerating, regulation is hardening, and integration demands are expanding. Yet organizational data remains pinned to assumptions that were forged in static, pre-AI operating environments. This gap is not theoretical; rather, it is measurable, persistent, and directly correlated to business performance.


Let’s look at the AI results first. Across industries, organizations continue to experience a familiar pattern: early promise, limited adoption, and rapid degradation once the model encounters inconsistent semantics or fragmented lineage. Global studies show that the vast majority of enterprise AI initiatives still struggle to reach full production maturity, and among those that do, many encounter performance drift within the first year.

The driver is remarkably consistent. It is not the sophistication of the model nor the skill of the data science team — it is the quality, clarity, and traceability of the data that is feeding the system.

Taken together, these signals deliver a clear message. The gap between AI ambition and data readiness is widening, not narrowing. This is why the data foundry conversation matters now. It is not an abstract architectural concept. It is a response to the full stack of quantitative pressures the market has been telegraphing for years — costs rising, compliance hardening, AI accelerating, and integration straining under inconsistent semantics and fragile lineage.

A data foundry model directly addresses the root cause of this by industrializing the creation of consistent, reusable, explainable data products that can fuel agentic AI, support regulatory defensibility, and accelerate enterprise reinvention.

The numbers point to a simple conclusion. Reinvention is no longer optional, and the window to address the data core before agentic AI becomes standard practice is narrow and closing. The organizations that act now will be the ones that define what compliant, explainable, interoperable AI looks like in the next decade. Those that defer the work will find themselves restructuring under pressure rather than reinventing by choice.

This is the inflection point. In truth, the quantitative signals have made the case more clearly than a multitude of strategy narratives ever could.

The data foundry: A model for continuous alignment

Unsurprisingly, agentic AI introduces new, more demanding requirements, including:

      • machine-interpretable semantics;
      • context-preserving lineage across federated systems;
      • decomposition of enterprise knowledge into reusable data products;
      • dynamic trust-scoring tied to source reliability and timeliness;
      • automated compliance overlays and regulatory logic; and
      • cross-domain metadata orchestration.

These capabilities are not optional; they are non-negotiable. Indeed, they determine whether AI elevates risk or mitigates it, whether it accelerates productivity or introduces unrecoverable inconsistencies, and whether it augments decision quality or produces volatility.

A data foundry shifts organizations from artisanal, one-off data preparation toward industrialized data production, in which patterns replace pipelines and building blocks replace custom engineering. This shift means that lineage is generated, not documented; semantics are governed, not patched; and compliance is automated, not reconstructed. In this way, reuse becomes the default, not the exception.

In fact, this process is analogous to manufacturing. Instead of producing bespoke components for each need, the enterprise creates standardized, high-fidelity data assets that can be assembled into any workflow, any AI use case, and any reporting requirement.
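As a loose illustration of “lineage generated, not documented,” consider a data product that records its own provenance whenever a transformation is applied. This is a conceptual sketch under stated assumptions — the class, field names, and example data are hypothetical, not a reference to any specific platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataProduct:
    """A reusable data asset that carries its own lineage record."""
    name: str
    rows: list
    lineage: list = field(default_factory=list)

    def transform(self, step_name, fn):
        """Apply a transformation and append a lineage entry automatically,
        so provenance is a by-product of the work rather than an afterthought."""
        new_rows = [fn(r) for r in self.rows]
        entry = {
            "step": step_name,
            "source": self.name,
            "at": datetime.now(timezone.utc).isoformat(),
        }
        return DataProduct(
            name=f"{self.name}/{step_name}",
            rows=new_rows,
            lineage=self.lineage + [entry],
        )

# Usage: each derived product knows every step that produced it.
raw = DataProduct("gl_entries", rows=[{"amt": "100"}, {"amt": "250"}])
clean = raw.transform("cast_amounts", lambda r: {"amt": float(r["amt"])})
print([e["step"] for e in clean.lineage])  # -> ['cast_amounts']
```

The design choice mirrors the foundry argument: because `transform` is the only way to derive a new product, lineage cannot drift out of date the way hand-maintained documentation does.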

A data foundry becomes the quiet architecture behind every future capability, making these capabilities systematic rather than ad hoc. The chart below showcases the progressive build-up using a data foundry, beginning with data intake and harmonization and ending with AI agent orchestration and reusable data products that learn from their deployment.


Unfortunately, organizations are still building increasingly advanced AI decisioning and efficiency solutions on top of an aging and brittle data foundation. The results are predictable: stalled initiatives, compliance exposure, and stakeholder frustration. Additionally, instead of asking why, organizations keep adding more tools — more dashboards, more cloud services, more AI pilots, and more flavors of transformation.

Clearly, enterprises aren’t dealing with an AI problem. They’re dealing with a data alignment problem disguised as progress within fragmented AI enclosures.

Reinvention starts at the data core

For more than a decade, firms across regulated industries have repeated the same mantra: Data is our most critical asset. Sit in board review sessions, integration meetings, or regulatory remediation audits, however, and the evidence does not match the rhetoric.

Reinvention is no longer optional. Instead, it is the starting point for meeting the demands of 2026 and beyond. The institutions that thrive will be those that understand that the data core is not a technical asset — it is the operational backbone of the enterprise. Indeed, the institutions that succeed will be those that recognize the truth early: AI is an output, and the data core is the strategy. And the organizations able to industrialize their data — through a foundry model, through AXTent, through repeatable semantic structures — will be the ones leading innovation, reducing compliance risk, accelerating M&A synergies, and achieving enterprise-wide reinvention.

In the end, the real question isn’t whether AI will transform business; the question is whether the data foundation will allow it. And the answer is rebuilding your data core so AI can actually deliver the outcomes your organization needs — and that work begins now.


You can find more blog posts by this author here

When courts meet GenAI: Guiding self-represented litigants through the AI maze /en-us/posts/ai-in-courts/guiding-self-represented-litigants/ Thu, 19 Feb 2026 18:20:08 +0000 https://blogs.thomsonreuters.com/en-us/?p=69532

Key insights:

      • Considering courts’ approach — Although many courts do not interact with litigants before filings, they can explore how to help court staff discuss AI use with litigants.

      • Risk of generic AI tools — AI use in legal settings can’t be simply categorized as safe or risky; jurisdiction, timing, and procedure are vital factors, making generic AI tools unreliable for court-specific needs.

      • Specialty AI tools require testing — Purpose-built court AI tools offer a safer alternative for litigants, yet these require development and extensive testing.


Self-represented litigants have always pieced together legal help from whatever sources they can access. Now that AI is part of that mix, courts are working to help people use this advanced technology responsibly without implying an endorsement of any particular tool or even the use of AI.

Many litigants cannot afford an attorney; others may distrust the representation they have or may not know where to begin. In any case, people need a meaningful way to interact with the legal system. Used carefully and responsibly, AI can support access to justice by helping self-represented litigants understand their options, organize information, and draft documents, while still requiring litigants to verify their information and consult official court rules and resources.

These issues were discussed in a recent webinar, in which the panel explored the potential benefits of AI for access to justice and the operational challenges of integrating AI into public-facing guidance for litigants.

The problem with “Just ask AI”

Angela Tripp of the Legal Services Corporation noted that people handling legal matters on their own have long relied on a mix of resources, “some of which were designed for that purpose, and some of which were not.” AI is simply a new tool in that environment, she added. The primary challenge is that court processes are rule-based and time-sensitive, and a mistake can mean missing a deadline, submitting the wrong document, or misunderstanding a requirement that affects the case.

Access to justice also requires more than just access to information in general. Court users need information that is relevant, complete, accurate, and up to date. Generic AI systems, such as most public-facing tools, are trained on broad internet text and may not reliably deliver that level of specificity for a particular court, case type, or stage of a proceeding. Jurisdiction, timing, and procedure all matter. Unfortunately, AI can omit key steps or emphasize the wrong issues, and self-represented litigants may not have the legal experience to recognize what is missing.

At the same time, AI offers several potential benefits to self-represented litigants. It can explain concepts in plain language, help users structure a narrative, and produce a first draft faster than many people can on their own. The challenge is aligning those strengths with the precision that court processes demand.

A strategic pivot: from teaching litigants to equipping staff

In the webinar, Stacey Marz, Administrative Director of the Alaska Court System, described her team’s early efforts to give self-represented litigants clear guidance about safer and riskier uses of AI, including examples of how to properly prompt generative AI queries.

The team tried to create traffic-light categories that would simplify decision-making; however, they found this approach very challenging despite several draft efforts to create useful guidance. Indeed, AI use can shift from low-risk to high-risk depending on context, and it was hard to provide examples without sounding like the court was endorsing a tool or sending people down a path for which the court could not guarantee results.

The group ultimately shifted to a more practical approach — training the people who already help litigants. The new guidance targets public-facing staff such as clerks, librarians, and self-help center workers. Instead of teaching litigants how to prompt AI, it equips staff to have informed, consistent conversations when litigants bring AI-generated drafts or AI-based questions to the counter.

The framework emphasizes acknowledgment without endorsement. It suggests language such as:

“Many people are exploring AI tools right now. I’m happy to talk with you about how they may or may not fit with court requirements.”

From there, staff can explain why court filings require extra caution and direct users to court-specific resources.

This approach also assumes good faith. A flawed filing is often a sincere attempt to comply, and a litigant may not realize that an AI output is incomplete or incorrect.

Purpose-built tools take time

The webinar also discussed how courts are exploring purpose-built AI tools, including judicial chatbots designed around court procedures and grounded in verified information. Done well, these tools can reduce common problems associated with generic AI systems, such as jurisdiction mismatches, outdated requirements, or fabricated or hallucinated citations.

However, building reliable court-facing AI demands significant time and testing. Marz shared Alaska’s experience, noting that what the team expected to take three months took more than a year because of extensive refinement and evaluation. The reason is straightforward: Court guidance must be highly accurate, and errors can materially harm someone’s legal interests. In fact, even after careful testing, Alaska still included cautionary language, recognizing that no system can guarantee perfect answers in every situation.

The path forward

Legal Services’ Tripp highlighted a central risk: Modern AI tools can be clear, confident, and easy to trust, which can lead people to over-rely on them. Courts have to recognize this tension. They are not trying to prevent AI use; rather, many are working toward realistic norms that treat AI as a drafting and organizing aid but require litigants to verify claims against official court sources and seek human support when possible.

Marz also emphasized that courts should generally assume filings reflect a litigant’s best effort, including in those cases in which AI contributed to confusion. The goal is education and correction rather than punishment, especially for people navigating complex processes without representation.

Some observers describe this moment as an early AOL phase of AI, akin to the very early days of the World Wide Web — widely used, evolving quickly, and uneven in its reliability. That reality makes clear guidance and consistent messaging more important, not less.

This shift among courts from teaching litigants to use AI to teaching court staff and other helpers how to talk to litigants about AI reflects a practical effort on the part of courts to reduce the risk of harm while expanding access to understandable information.

As is becoming clearer every day, AI can make legal processes feel more navigable by helping self-represented litigants draft, summarize, and prepare. For courts to realize that value, however, requires clear guardrails, court-specific verification, and careful implementation, especially when a missed detail can change the outcome of a case.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here

ESG is evolving and becoming embedded in global trade operations /en-us/posts/international-trade-and-supply-chain/esg-embedded-in-global-trade/ Thu, 05 Feb 2026 12:09:16 +0000 https://blogs.thomsonreuters.com/en-us/?p=69328

Key insights:

      • ESG is becoming more operationalized — ESG is being conducted with a lower public profile while also playing an increasingly strategic role in supplier governance frameworks.

      • Data collection remains widespread and robust — Companies continue to collect comprehensive ESG data from their suppliers.

      • Technology usage in ESG is increasing — Greater investment in automation demonstrates continuing commitment to effectively managing ESG.


Environmental, social and governance (ESG) issues have played an increasing role in global trade operations in recent years. As the United States government sharply pulled back its role in encouraging ESG in global trade in 2025, concerns were raised over whether that would impact ESG efforts globally.

However, ESG-related efforts in global trade have not diminished, although they are evolving in form and positioning, according to the Thomson Reuters Institute’s recent 2026 Global Trade Report. In fact, the report’s survey respondents said that ESG data collection from suppliers is now largely structurally embedded in trade operations, although at the same time, it is being carried out with a lower public profile than in previous years.

ESG management remains a core trade function

Managing ESG remains one of the most widespread responsibilities among trade professionals. Almost two-thirds (62%) of those surveyed said their role includes ensuring ESG compliance throughout the supply chain. That represents a higher percentage than for other responsibilities, such as procurement and sourcing, supplier management, trade systems management, risk management, customs clearance, and regulatory compliance. The only responsibility more widespread among the global trade professionals surveyed is business strategy for global trade and supply chain.

More importantly, ESG remains integral and nearly universal when it comes to the supplier selection process. All respondents in the Asia-Pacific region (APAC), Latin America, and the European Union-United Kingdom, as well as 99% of US respondents, report that ESG considerations remain moderately important, important, or very important in influencing their decisions around using a supplier. An overwhelming 78% say it is an important or very important consideration.

Clearly, as the report demonstrates, ESG remains a core component of the trade function for most businesses.

ESG moves toward structural governance frameworks

Only a very small proportion of respondents — 3% in the US and 4% globally — said they stopped ESG-related data collection entirely in 2025. Meanwhile, ESG data collection has increased across several major metrics.

As companies move to embed ESG expectations directly into their supplier governance frameworks, they are shifting these efforts from being a publicly declarative initiative to becoming operationalized as a permanent compliance and sourcing discipline alongside other operational considerations.

Businesses are focusing on supplier information in areas that have direct operational relevance. For example, companies collecting data on Free Trade Agreement (FTA) eligibility status for ESG purposes can also leverage the data to reduce costs, ensure supply chain security through Customs Trade Partnership Against Terrorism (CTPAT) participation, and better maintain compliance with country-of-origin requirements. Similarly, Country of Origin (COO) and Authorized Economic Operator (AEO) status are both classified under ESG but are also highly specific to trade operations. These metrics blur the lines, representing areas in which ethical considerations intersect with practical trade strategy.

Supplier data collection is shifting to operational relevance as well. Indeed, the scope of supplier data being gathered remains broad and reflects a holistic view of the supply chain. The most common areas for ESG data collection in 2025 were: i) environmental metrics, such as water usage, waste management, energy management, and carbon emissions, including Scope 3 emissions; ii) social metrics, such as health and safety, labor standards, human rights including modern slavery or indentured service, and diversity in employees; and iii) governance and compliance, including data privacy, business ethics, and anti-corruption.

Data collection from suppliers


Meanwhile, ESG data collection has been scaled back in areas such as trade evaluation, AEO/CTPAT status in some jurisdictions, diversity in ownership, and anti-corruption assessments. The most cited reason for the pullbacks was insufficient cost-benefit return for collecting data in areas in which customer scrutiny was minimal. This trade-off reflects a rational reprioritization: companies are focusing their ESG diligence in areas in which regulatory risk is material, rather than merely reputational.

Integrating ESG into broader trade workflows

The report also shows that businesses are leveraging ESG to drive greater efficiency, reduce costs, and add greater value for the organization. ESG is becoming less of a marketing and brand-building exercise and more of a compliance and sourcing discipline that factors into strategic decision-making — it is subject to the same analytical rigor as financial or operational risks.

To this end, organizations are less prone to making a string of bold public goals and commitments, or issuing standalone ESG reports, updates, or scorecards that tout their progress. Instead, ESG data is being seamlessly embedded into supplier evaluation and selection alongside non-ESG business metrics and other considerations. As such, organizations are quietly building the structural frameworks, data infrastructure, and management approaches they’ll need for more strategic planning.


ESG is shifting to strategically supporting business growth and away from reputational focus


Helping this shift along, the report shows, is that the use of technology to manage ESG has accelerated significantly in 2025. One-third of respondents said their organizations use automated ESG solutions, a major increase from only 20% in 2024. This provides a clear indication that more organizations are not only continuing but strengthening their commitment to effectively managing ESG.

And this provides a boost, because greater automation can improve the efficiency and ability of trade professionals to manage ESG efforts, further enhancing the integration of ESG data into other operational workflows as organizations incorporate ESG data to drive greater value.

What lies ahead for ESG

ESG practices, and organizations’ embrace of them, remain near-universal across trade operations, a clear indication that there is no widespread retreat from ESG management. For trade professionals, ESG is here to stay and is evolving into an operational discipline that helps grow their business.

For organizations to have continued success in this evolving ESG environment, they should take several steps that require strategic thinking, including:

      • Identify which metrics truly matter — Focus on the ESG metrics that affect trade operations, particularly those that impact supply chain cost, efficiency, and reliability.
      • Invest in the technology infrastructure — Improve efficiency in tracking and analyzing key ESG metrics.
      • Articulate ESG value — Develop the ability to demonstrate the value of ESG to the trade function and communicate it in business terms to senior management.

The shift of ESG towards operational trade management may represent a more sustainable long-term path forward than the earlier wave of ESG enthusiasm — embedding ethical considerations into core business processes rather than treating them as separate compliance exercises. By focusing on metrics that genuinely matter to business operations, companies are building practices that will persist regardless of any political winds or public relations trends.

Those corporate trade departments that can skillfully navigate this evolving environment will be positioned to more effectively leverage ESG considerations as a strategic asset and competitive differentiator. And in an increasingly complex and volatile global trading landscape, they will find themselves playing a more central role in their organizations’ success.


You can download a copy of the Thomson Reuters Institute’s 2026 Global Trade Report here

The child exploitation crisis online: Gaps in digital privacy protection /en-us/posts/human-rights-crimes/children-digital-privacy-gaps/ Wed, 04 Feb 2026 18:39:04 +0000 https://blogs.thomsonreuters.com/en-us/?p=69312

Key highlights:

      • Fragmented protection creates vulnerability — Current US privacy laws operate as a patchwork system without comprehensive national standards, leaving children and other users exposed to data exploitation across state lines and international borders.

      • Body data collection opens future manipulation potential — Virtual reality platforms collect granular biometric information through sensors that can reveal deeply sensitive information about users.

      • Use-based regulations outlast technology changes — Restricting harmful applications of data provides more durable protection than the current regulatory approach, which relies on categorizing rapidly evolving data types.


Virtual reality (VR), social media, and gaming companies have long avoided robust content moderation, largely out of concern over implementation costs and the risk of alienating users. This reluctance stems from platforms wanting the widest possible pool of users. Yet the shortsightedness of this decision has consequences, including insufficient protection of children and long-term costs to companies’ bottom lines.

The child exploitation crisis in digital spaces requires better laws and a reimagining of how VR, gaming, and social media companies balance privacy, safety, and accountability across diverse platform architectures, according to Mariana Olaizola Rosenblat, an expert in child exploitation methods in digital spaces and Policy Advisor at the NYU Stern Center for Business and Human Rights.

Limitations of existing regulatory frameworks

The current regulatory landscape is insufficient to protect children online. The lack of a comprehensive national privacy law in the United States, the use of consent mechanisms, and the haphazard rollout of age verification all expose protection gaps and come with economic and psychological costs, according to Olaizola Rosenblat. For example, some of the dangers include:

Gaps in the patchwork of regulations leave children vulnerable — Regulatory demands for child safety often collide with privacy protections, creating contradictory obligations that platforms cannot realistically satisfy. In the absence of unified standards, companies operate in a jurisdictional maze that leaves most users, including children, exposed to data exploitation across borders.

America’s regulatory landscape remains especially fragmented, with no comprehensive national privacy law to provide consistent protection. comes close to establishing meaningful safeguards, according to Olaizola Rosenblat, yet it still permits companies to collect data even after users opt out of the sale or sharing of their data.

Mariana Olaizola Rosenblat, of the NYU Stern Center for Business and Human Rights

Federal reform attempts collapsed amid conflicts between states demanding stronger protections and tech lobbyists aligned with conservative representatives seeking weaker standards. In addition, child-specific laws, such as the Children’s Online Privacy Protection Act (COPPA), provide protection only for those under 13, which leaves older minors and adults vulnerable.

“Once users turn 13, they fall off a regulatory cliff,” says Olaizola Rosenblat. “There is no federal child-specific data protection regime, and existing state-level safeguards are patchy and largely ineffective for teens.”

Internationally, the European Union’s General Data Protection Regulation (GDPR), although considered the gold standard for regulation, suffers from a persistent gap between its ambitious text and its uneven enforcement.

Age verification tensions — These regulatory shortcomings also are evident in debates over age verification. Protecting children requires collecting data to determine user age, yet privacy advocates frequently oppose such measures. Without pragmatic guidance acknowledging these inherent trade-offs, platforms often face contradictory obligations they cannot simultaneously fulfill.

Current consent frameworks offer little protection — Current consent mechanisms offer users an illusory choice that fails to protect children from data exploitation. Even relatively robust frameworks like the GDPR rely on consent models in which refusal means exclusion from digital spaces essential to modern life. This approach proves particularly inadequate for younger users. Indeed, research has found that about one-third of Gen Z respondents expressed indifference to online tracking.

VR data collections may allow future exploitation

VR platforms differ fundamentally from traditional gaming spaces and social media platforms. Users with VR headsets embody avatars that move through thousands of interconnected experiences. While no actual touching occurs, the experiences feel visceral; the psychological and physiological responses can mirror those of real-world experiences, including sexual exploitation.

Olaizola Rosenblat explains that the data collected from the sensors can open up the potential for future exploitation. “The inferences that can be drawn from your body-based data collected by these sensors is granular and often intimate,” she explains. “The power that gives to companies is pretty remarkable in terms of knowing things about you that you might not even know yourself.”

Recommended actions to address challenges

Addressing the child exploitation crisis in digital spaces requires coordinated action, according to Olaizola Rosenblat, and that needs to include:

Universal protection standards — Corporate action in partnership with legislators is necessary for effective reform that protects all users rather than fragmenting safeguards by age or vulnerability status. Current approaches that shield only younger children create dangerous gaps and leave adolescents and adults exposed once they age out of protected categories.

Enforce existing regulations — Even well-crafted legislation proves meaningless without robust enforcement mechanisms. Commitment by government agencies, along with appropriate levels of funding, is the most meaningful way to achieve desired outcomes.

Technology-agnostic use regulation — Rather than attempting to categorize rapidly evolving data types, companies in the VR, gaming, and social media sectors must work with legislators to restrict harmful uses of data such as manipulation, exploitation, and unauthorized surveillance, regardless of technical collection methods. Regulating data use — rather than the current method of regulation based on categories of data, which include personally identifiable information — is the right approach.

Public mobilization is essential — Citizens must understand that the stakes of data exploitation extend beyond corporate collection to hacking vulnerabilities and manipulative deployment. Without consumer demand for better protection and the willingness of legislators to pass the laws, regulation will not happen.

The path forward

The digital exploitation of children demands immediate action that transcends partisan divides and corporate interests. Only through coordinated regulatory reform, meaningful enforcement, and sustained public pressure can we create digital spaces in which innovation thrives without sacrificing our privacy and safety. The cost of continued inaction grows steeper each day we delay.


You can find out more on how organizations and agencies are fighting child exploitation here

Chief Marketing & Business Development Officer Forum 2026: Mapping the tides of change in the legal market /en-us/posts/legal/cmbdo-forum-2026-tides-of-change/ Thu, 29 Jan 2026 13:21:50 +0000 https://blogs.thomsonreuters.com/en-us/?p=69200

Key insights:

      • Despite a strong 2025, law firms face growing challenges — Client expectations continue to evolve, as clients grow more sophisticated around AI and pricing, pushing law firms to provide greater transparency and communication.

      • Client relationships are becoming shallower — As clients increasingly demand transparency and collaboration, particularly regarding AI adoption and pricing models, law firms must adapt quickly to meet these new expectations.

      • Differentiation is more vital than ever — Responsiveness, speed, and clear communication about value and technology have emerged as key factors for law firms to stand out and deepen client relationships.


AMELIA ISLAND, Fla. — It may have already become cliché to say that the legal industry is at a significant crossroads: Firms are coming off what appears by all measures to be a very successful 2025, yet the industry also is facing fundamental structural change, driven mainly by AI and subsequent changing client expectations.

Consequently, that sentiment permeated the opening sessions of the Thomson Reuters Institute’s 33rd Annual Chief Marketing & Business Development Officer Forum (formerly the Marketing Partner Forum) held this week.

“No matter how well we’re all doing, the angst level has never been higher,” said one law firm leader at the Forum.

Jen Dezso, Director of Client Relations at the Thomson Reuters Institute, opened the event by giving a data-rich thumbnail of the legal market, based mostly on the recently released 2026 Report on the State of the US Legal Market, published jointly by the Thomson Reuters Institute and the Center on Ethics and the Legal Profession at Georgetown Law. Dezso demonstrated that almost all key indicators for law firm performance are up — demand, fees worked, lawyer growth — and that firms seem to be “monetizing the work they capture.” The main drivers of firm growth, she explained, are strategic wins of high-value business rather than a higher volume of ordinary work.


“No matter how well we’re all doing, the angst level has never been higher.”


Yet there are some darker clouds on the horizon, she added, noting that client relationships may be a bit shallow. For example, while just over one-third of large clients (36%) said they plan to increase their legal spend in the coming year, less than one-quarter of that spend (23%) goes to the firm that the client uses most — a figure that has been dropping over the last several years. Indeed, that most-used firm now gets engaged for fewer than three work types, and only 15% of clients say they will use their most-used firm more in the coming year.

Not surprisingly then, these figures weighed heavily as panels of top lawyers and law firm marketing and business development specialists discussed these matters during the Forum.

“Clearly, the softening of client relationships is a key piece of this,” said one business development officer. “And you can see that in RFPs and the level of transparency that clients are asking for. I think a lot of work needs to be done by law firms to ensure these deeper trusting relationships with clients.”

Others on the panel agreed. “Financially we’re doing very well, but we should be looking at what has changed with the clients,” one said, adding that many outside law firms may not have fully processed the impact the global pandemic has had on client relationships over the ensuing five years.

What’s changed in clients’ minds?

Understanding and adapting to this change in clients’ mindsets should be mission critical for law firms today. Indeed, all other initiatives — collaboration, pricing, business development, and more — will founder if law firms don’t engage with their clients directly. And the primary result of that engagement should be law firms coming away with an understanding of what clients want and need and, even more importantly, where clients see their outside firms failing to meet those needs.

Though obviously a difficult conversation, this level of client engagement is the only way firms are going to be able to deliver for clients while remaining sustainable, innovative, and profitable themselves.


You can read the full report here


Perhaps the most dramatic shift these panelists perceive is the change in client expectations around AI. Several noted that there is a growing disconnect between what clients believe AI should enable law firms to do and what firms are actually delivering — and many said this was the fault of poor communication. For example, RFPs now routinely include references to AI, with clients moving from a stance of caution — You can use AI, but not with my data — to one of collaboration — Where can we work together within the AI space? This rapid evolution requires firms to be able to communicate their clear roadmap for AI adoption and pricing innovation that is understood by partners and can be conveyed easily to clients.

“Transparency and communication are paramount,” offered one law firm executive. “Firms must be able to explain their approach to AI and demonstrate its value to clients.” In fact, several panelists suggested that the best opportunities to deepen client relationships often arise in these conversations around technology and innovation.

In many cases it is the role of the Chief Marketing and Business Development Officers to lead these conversations, especially as these talks can help differentiate the firm. “The leaders in these roles may have the most important job within their firm,” noted one panelist. “The capability of these roles to see outside the walls of the firm is incredibly important.”

Jen Dezso, of the Thomson Reuters Institute, discusses the state of the legal market at the Chief Marketing & Business Development Officer Forum in Amelia Island, Fla.

Several panelists pointed out that in today’s increasingly crowded marketplace, differentiation is more vital than ever, yet seemingly more difficult to achieve. “Sometimes it does come down to responsiveness and speed — these age-old client service tenets that we’ve all pursued forever,” said another firm marketing professional.

In fact, according to Thomson Reuters Institute data, clients look at several areas of differentiation when considering outside legal services, including a firm’s AI implementation, which 40% of clients cited. And while clients ranked both cost efficiency and the use of value-based pricing lower, at 29% and 16% respectively, many law firm leaders said they consider pricing a critical challenge for the industry, especially given mounting pressure on the traditional billable hour model.

“We need to get clients to look at value, and we need to get our own partners to look at our own value proposition,” explained one firm leader. “If we can’t segment the work and see what it takes to deliver this, we are in trouble.”

As the Forum discussions illustrated, as clients become much more sophisticated around pricing, law firms have to make sure their lawyers and partners can communicate the firm’s value to clients. “We, as law firm leaders, need to have confidence in what our partners are saying — I mean, that’s true marketing — and we need to talk through these issues with partners, so everyone is more comfortable addressing this with clients.”


You can find out more about next year’s Chief Marketing & Business Development Officer Forum 2027 here

]]>
Becoming a strategic partner: Elevating the tax function’s brand /en-us/posts/corporates/tax-function-strategic-partner/ Tue, 09 Dec 2025 15:30:45 +0000 https://blogs.thomsonreuters.com/en-us/?p=68644

Key takeaways:

      • Reframe your value proposition — Translate tax achievements into business language the C-suite understands, such as protecting shareholder value, enabling growth, and mitigating risk rather than simply reporting compliance metrics.

      • Invest strategically in technology and talent — Prioritize automation and AI tools while outsourcing strategically to free internal resources for high-value strategic work that demonstrates the department’s business impact.

      • Build cross-functional partnerships — Proactively collaborate with IT, legal, operations, and HR on enterprise-wide initiatives that will position the tax function as an essential strategic partner rather than an isolated compliance department.


SAN FRANCISCO — In a recently released report, published by the Thomson Reuters Institute and Tax Executives Institute, a large portion of the tax department professionals surveyed expressed a desire to do more strategic work rather than simply tactical work. This is a theme we’ve seen repeatedly across our research: Tax professionals are shedding their traditional compliance-focused image and moving toward becoming strategic business partners to their organizations.

By articulating their value proposition, investing strategically in technology and talent, and aligning with broader business objectives, tax department leaders can secure the resources and influence needed to drive meaningful organizational impact.

Yet, the tax function has long been viewed as a necessary cost center — a department that ensures compliance, files returns, and manages audits — despite the essential work that in-house tax professionals do. Rarely have these professionals felt they are treated as strategic business partners. However, that perception is rapidly changing, according to the insights shared at a recent conference.

Today’s tax leaders are positioning their teams as strategic partners who provide critical insights that influence business resilience, growth strategies, and organizational risk management, conference panelists explained.

The evolving role of the tax function

Amid ongoing tax and trade policy shifts and increased business uncertainty, opportunities abound for tax professionals in corporate tax departments. Indeed, several panelists noted that the State of the Corporate Tax Department report showed that tax leaders are increasingly becoming deeply involved in strategic decisions ranging from business resilience strategy (with 63% of survey respondents saying their tax department is involved in this area) to M&A transactions (60%), organizational risk management (58%), and supply chain management (55%).

Further, CFOs are increasingly looking to their in-house tax leaders for support across multiple strategic areas, including digital transformation and AI, ESG strategy, workforce strategy, and economic resilience planning. This expanded role creates both opportunities and challenges for tax teams seeking to demonstrate their strategic value.




In fact, one of the most pressing questions tax leaders face is how to secure adequate budget funding in an environment of competing corporate priorities. The answer lies in thinking strategically about resource allocation and being intentional about having a seat at the table to better advocate for necessary investments. Tax department leaders must educate executive leadership on the risks of inadequate budget resources — from trying to do more with less to increased exposure for the company, including more audits and fines.

As session panelists explained, the key is to frame discussions in terms that C-suite leaders understand. Rather than simply requesting more resources, tax leaders should articulate how investments in the tax function allow it to better protect revenue, enable growth opportunities, and mitigate organizational risk.

Creating a value-focused identity

That articulation to management is a big step in the tax function’s move from feeling and acting like a cost center to being a strategic partner to the business. Indeed, corporate tax department leaders must first change how the department is perceived — in essence, rebranding themselves and reimagining their identity. This starts with creating a compelling value story that resonates with the C-suite.

Start with creating (or recreating) a department mission statement that emphasizes value creation rather than mere compliance, aligning with broader priorities of the organization, such as business partnership and growth. Then, work to provide insights to drive decisions, and support regulatory demands while maintaining transparency.


Check out for more insight on how corporate tax professionals shift from compliance to strategic work


One practical approach is to speak the language of the C-suite by translating tax achievements into business metrics that executives care about, panelists added. For example, rather than reporting that the department completed the tax provision on time, frame it instead as the department protected $X million in shareholder value through accurate financial reporting or enabled the acquisition to close on schedule by providing timely tax due diligence.

It is also important for tax departments to track and communicate their wins consistently, panelists said, creating regular touchpoints with executive leadership to share accomplishments that position the tax function as a proactive business partner.

Navigating technology, talent, and collaboration

Technology investment represents both an opportunity and a challenge for tax departments, as the State of the Corporate Tax Department report makes clear. More than half of respondents said they expect some increase in their budgets to invest in new tech tools over the next few years, and many indicated they plan to invest in tools and solutions to automate their workflows, especially those that support machine learning and generative AI (GenAI).

While an anticipated budget increase is welcome news, panelists explained that tax department leaders must educate management on the practical challenges of AI adoption, including the need for clean, well-structured data as a foundation.




On another point, staffing remains one of the most critical challenges facing tax departments, and many survey respondents cited hiring as a key strategic priority, according to the report. Many departments also will look to technology to augment missing talent and will strategically use outsourcing and co-sourcing to alleviate talent pressure. By partnering with external advisors for specialized compliance work or surge capacity during peak periods, tax departments can further free internal resources to focus on higher-value strategic activities.

In fact, a central theme the session panelists leaned into was how the most effective tax departments build strong collaborative relationships across the organization. According to the report, 94% of CFOs and tax leaders report that the CFO helps facilitate cross-collaboration between tax and other functions such as legal, IT, operations, and finance.

Tax department leaders should proactively seek these opportunities to partner with other departments on strategic initiatives; for example, collaborating with IT on digital transformation, working with operations on supply chain optimization, partnering with legal on M&A transactions, and supporting HR on workforce strategy.

Today, the transformation of the corporate tax function from cost center to strategic partner is not merely aspirational — it is already underway in many forward-thinking organizations. As tax, audit, and trade policy become more complex and business uncertainty continues to mount, the opportunity for tax leaders to demonstrate their strategic value to the organization has never been greater.


You can download a full copy of the report, from the Thomson Reuters Institute and Tax Executives Institute, here

]]>
Improving corporate governance requires managing AI’s footprint /en-us/posts/sustainability/corporate-governance-ai-footprint/ Mon, 08 Dec 2025 18:33:23 +0000 https://blogs.thomsonreuters.com/en-us/?p=68692

Key insights:

      • Elevate AI governance to the board — Companies should tie their AI deployment to enterprise risk management with explicit KPIs for energy intensity, water withdrawals and consumption, and supply‑chain human rights.

      • Make transparency a competitive asset — Implement auditable disclosures on AI workload footprints, water stewardship, and supplier traceability, and then link executive compensation and vendor contracts to measurable efficiency and resiliency outcomes.

      • Demand transparency despite practical challenges — Although demanding transparency from suppliers may not yet be practical due to current challenges, collectively asking for detailed information sends a notable demand signal to AI infrastructure providers that the company is seeking to drive change and preserve trust in an AI-driven economy.


AI now sits at the center of corporate sustainability governance as it supercharges data gathering, analytics, and reporting. Indeed, AI is increasingly being applied in the areas of energy optimization, emissions monitoring, land‑use assessment, and climate scenario analysis.

At the same time, AI’s rise is colliding with sharply growing electricity and water demands from data centers and concerns over geopolitically exposed supply chains. The governance challenge for companies therefore is to manage risk at this intersection. This means treating AI as a capital‑intensive, cross‑border infrastructure program whose environmental footprint and supply dependencies must be actively governed.

Why electricity and water are now board‑level AI risks

AI has turned electricity and water from background utilities into constraints that should be managed at the board level. Indeed, AI magnifies water risk across cooling, power generation, and chip manufacturing, which makes sourcing and efficiency choices strategic imperatives for many organizations.

Electricity demand — AI use and the data centers that power the tools already account for a significant and rising share of electricity use in the United States. The finds , a figure poised to grow as AI workloads scale. Forward‑looking projections from the U.S. Department of Energy indicate that by 2028 could be attributed to AI workloads.

If you translate those projections into , you can get an idea of the potential magnitude of the problem. Together, these sources suggest that the fastest‑growing part of AI’s energy appetite is not just training models, but the steady, pervasive inference capabilities required to power AI features in everyday products and operations.

Direct and indirect water use — Data centers powering AI also negatively impact local water footprints. The impact shows up in three places: i) data‑center cooling; ii) the electricity feeding those facilities, including thermoelectric and hydroelectric generation; and iii) AI’s own hardware supply chain. In regions already facing scarcity, these demands compound local stress. For example, the average per capita water withdrawal is 132 gallons per day; yet a large data center consumes water .
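To get a feel for that scale, a quick back-of-the-envelope comparison helps. The 132 gallons/day per-capita figure comes from the text above; the data-center consumption figure below is a purely hypothetical assumption chosen for illustration only:

```python
# Back-of-the-envelope scale check: how many residents' daily water
# withdrawal a single large data center might equal.
# 132 gallons/day per capita is from the article; the data-center figure
# is a HYPOTHETICAL assumption used only to illustrate the arithmetic.

PER_CAPITA_WITHDRAWAL_GPD = 132           # gallons per person per day (from the article)
DATA_CENTER_CONSUMPTION_GPD = 1_000_000   # gallons per day -- assumed, illustrative only

equivalent_residents = DATA_CENTER_CONSUMPTION_GPD / PER_CAPITA_WITHDRAWAL_GPD
print(f"~{equivalent_residents:,.0f} residents' worth of daily withdrawals")
```

Under that assumption, one facility’s daily consumption equals the withdrawals of roughly 7,600 residents, which is why siting decisions in water-stressed regions draw board-level scrutiny.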

This makes data centers one of the  in the country, which incidentally is home to . At the end of 2021, around  from moderately to highly stressed watersheds in the western US. This is a common situation as well.

Geopolitical exposure — The hardware that powers AI includes advanced logic and memory chips, which depend on concentrated manufacturing nodes and supply chains with access to critical minerals. Extraction and processing of inputs such as lithium and cobalt are often clustered in jurisdictions with elevated human‑rights, environmental, or geopolitical risk. This concentration amplifies exposure to export controls, sanctions, and resource nationalism, directly for companies’ supply chains and indirectly for companies that use AI.

Companies need to ensure that their communications on legal and policy issues point in the same direction with regard to these concerns. Indeed, companies need to deepen value‑chain due diligence while navigating evolving supply‑chain and AI‑specific regulatory regimes.

Recommended actions for companies

These intersections have clear implications for corporate governance. AI’s promise to accelerate decarbonization, improve transparency, and strengthen decision‑making will be realized only if leaders can properly manage the physical, political, and social realities underpinning the technology. Recommended actions to manage risk in areas in which AI and geopolitics converge include:

Demand transparency in electricity and water consumption of AI infrastructure — Companies building AI infrastructure need to conduct AI workload planning. Companies using AI can ask suppliers for a 24- to 36-month forecast of training and inference workloads by region, with overlays of grid carbon intensity and local water stress, to better understand their indirect environmental impacts.
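As a sketch of what such a request could look like in practice, the structure below models one row of a hypothetical regional forecast. The field names and example values are illustrative assumptions, not any standard reporting schema:

```python
from dataclasses import dataclass

@dataclass
class RegionalAIForecast:
    """One month of a supplier's regional AI workload forecast (hypothetical schema)."""
    region: str                    # supplier-defined region, e.g., a grid/balancing area
    month: str                     # "YYYY-MM" within the 24- to 36-month window
    training_mwh: float            # projected electricity for model training
    inference_mwh: float           # projected electricity for inference workloads
    grid_carbon_g_per_kwh: float   # overlay: regional grid carbon intensity
    water_stress_score: float      # overlay: local water stress (higher = more stressed)

    def total_mwh(self) -> float:
        # Combined electricity demand for the month
        return self.training_mwh + self.inference_mwh

# Illustrative row a supplier might report (all values assumed)
row = RegionalAIForecast("US-Southwest", "2026-06", 1200.0, 3400.0, 380.0, 4.2)
print(row.total_mwh())
```

Even a minimal structure like this forces the two overlays the article calls for (grid carbon and water stress) to travel with the workload numbers, so a buyer can weight regional demand by environmental exposure.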

De‑risk impact by incentivizing clarity in supply chains — Companies using AI can begin asking AI infrastructure companies to conduct due diligence on tier 2, 3, and 4 suppliers, all the way down to smelters, refiners, and miners, to ensure that they are not indirectly contributing to environmental and social harms.

The bottom line

While these recommendations generally align with evolving corporate practices in sustainability and risk management, the challenge of implementation will vary based on the company’s size, influence over suppliers, and existing governance structures. The most challenging aspect will likely be achieving transparency and clarity in supply chains, which requires cooperation from suppliers and the investment of potentially significant resources.

At the same time, however, if more companies collectively ask for this level of detailed information from their AI infrastructure providers, it will send a notable demand signal. Indeed, AI is both a sustainability tool and a sustainability liability, but its benefits will be realized only if leaders confront the physical and geopolitical constraints that make AI possible.

Those companies that begin asking for this level of transparency can preserve the trust that underwrites their license to navigate successfully in an AI‑driven economy.


You can find out more on the sustainability issues companies are facing around the environment here

]]>