Tax Tech & Innovation Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/tax-tech-and-innovation/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology, and human expertise you need to find trusted answers.

Country-by-country reporting is getting more complicated — and the window to get ahead is closing /en-us/posts/corporates/country-by-country-reporting/ Tue, 14 Apr 2026 12:22:22 +0000

Key takeaways:

      • Country-by-country reporting will only increase in complexity — Australia’s enhanced country-by-country reporting (CbCR) requirements, which reconcile taxes accrued against taxes credited, are a preview of where other high-scrutiny jurisdictions are heading. Companies need to build that explanatory analysis capability now, systematically, rather than scrambling later.

      • Corporate teams need a shared narrative — The EU’s public CbCR is a reputational event, not just a filing. That means tax, communications, and investor relations teams need a shared narrative before the data goes public; inconsistencies create exposure you do not want to manage reactively.

      • Rethink your filing jurisdiction in light of changes — If EU filing jurisdiction was chosen at initial implementation and never revisited, look again. Guidance has matured, and a more efficient or better-suited option may now be available.


WASHINGTON, DC — Among the many pressing topics discussed in detail at the recent Tax Executives Institute (TEI) event, country-by-country reporting (CbCR) and its ability to reshape the corporate tax industry certainly had its place. Between escalating local jurisdiction requirements, the EU’s public CbCR directive, and demands for deeper explanatory disclosures, CbCR has quietly evolved from a transfer pricing filing obligation into something far more strategically consequential.

The floor is just the floor

The CbCR framework created by the Organisation for Economic Co-operation and Development (OECD) was intended as a minimum standard for countries. Now, jurisdictions are increasingly layering additional requirements on top of the OECD’s basic template, resulting in a widening gap between the standard requirements and what tax authorities actually want.

Currently, Australia is the most pointed example. Australian tax authorities are now requiring multinational groups to go beyond the standard CbCR data fields and provide explanatory narratives that reconcile taxes accrued against taxes actually credited. This requires corporate tax departments to bridge the gap between financial statement accruals and their organizations’ cash tax positions in a way that is coherent, defensible, and consistent with positions taken elsewhere.
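To make the Australian requirement concrete, the sketch below shows the kind of first-pass, per-jurisdiction screen a tax team might run to find where accrued and cash taxes diverge enough to need an explanatory narrative. It is a minimal illustration only: the field names and the 5% tolerance are assumptions, not Australia’s official schema or threshold.

```python
from dataclasses import dataclass

@dataclass
class JurisdictionTaxData:
    """One jurisdiction's row from a CbCR dataset (illustrative fields)."""
    jurisdiction: str
    tax_accrued: float  # current-year income tax accrued per the financial statements
    tax_paid: float     # income tax actually paid or credited (cash basis)

def reconciliation_gaps(rows, tolerance=0.05):
    """Flag jurisdictions where accrued and cash taxes diverge by more than
    `tolerance` (as a share of the larger amount), i.e., the rows that will
    need a written accrued-versus-credited explanation."""
    flagged = []
    for row in rows:
        base = max(abs(row.tax_accrued), abs(row.tax_paid), 1.0)
        gap = row.tax_accrued - row.tax_paid
        if abs(gap) / base > tolerance:
            flagged.append((row.jurisdiction, gap))
    return flagged

data = [
    JurisdictionTaxData("AU", tax_accrued=12_400_000, tax_paid=9_100_000),
    JurisdictionTaxData("DE", tax_accrued=8_050_000, tax_paid=7_900_000),
]
for jurisdiction, gap in reconciliation_gaps(data):
    print(f"{jurisdiction}: accrued-vs-cash gap of {gap:,.0f}; draft narrative needed")
```

A screen like this does not write the narrative, of course; it simply tells the team where timing differences, deferred positions, or jurisdictional mismatches will have to be explained.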

At the TEI event, panelists explained that this requirement will be most demanding for tax departments with complex timing differences, deferred tax positions, or significant jurisdictional mismatches between booked and cash taxes. Indeed, this additional layer of scrutiny will need dedicated attention.


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.


The broader signal matters: Australia will not be the last jurisdiction to move in this direction. That means tax departments should treat Australia’s approach as a leading indicator of where other high-scrutiny jurisdictions could be heading. Building the capability to produce this kind of explanatory analysis systematically — rather than scrambling jurisdiction by jurisdiction — would be the smarter long-term investment for corporate tax teams.

Public CbCR in the EU: The transparency ratchet has turned

For US-based multinationals with significant European operations, the EU’s public CbCR directive has fundamentally changed the calculus. Unlike the confidential tax authority filings most corporate tax departments are accustomed to, the EU’s public CbCR rules put organizations’ jurisdictional profit and tax data into the public domain, making it visible to investors, journalists, civil society groups, and organizations’ employees and customers.

The EU framework specifies which entities trigger the reporting obligation and which entity within the group is responsible for making the public filing. That scoping analysis is not always straightforward for complex multinational structures and getting it wrong could present both reputational and legal risk.


Choosing a filing jurisdiction is not purely an administrative decision — it is a choice that affects the regulatory environment that governs the disclosure, the language requirements, the timing, and the interpretive framework that applies to data.


For US-headquartered groups, the implications extend well beyond Europe. Public CbCR data is now being read alongside US disclosures, reporting on ESG activities, and public narratives about tax governance. Inconsistencies, even those that are technically explainable, could create unwanted noise about the company. This is clearly another reason why the tax function should partner across the business — in this case with the communications team — to make sure both are aligned on telling the CbCR story rather than being caught off guard by a journalist or an investor during an earnings call.

Questions that US multinationals should be asking

Fortunately, US multinationals with multiple EU subsidiaries are not required to file public CbCR reports in every EU member state in which they have a presence. Instead, under the EU framework, a qualifying ultimate parent or standalone undertaking can satisfy the public disclosure requirement through a single filing in one EU member state, provided the relevant conditions are met. Germany and the Netherlands have emerged as two of the more popular choices for this consolidated filing approach, given their well-developed regulatory frameworks and the depth of available guidance on what compliant disclosure looks like in practice.

The strategic implication is meaningful. Choosing a filing jurisdiction is not purely an administrative decision — it is a choice that affects the regulatory environment that governs the disclosure, the language requirements, the timing, and the interpretive framework that applies to data. Corporate tax departments that defaulted to a filing jurisdiction early in the EU implementation process should take a fresh look. Regulatory guidance has matured significantly, and there may be a more efficient or better-suited path available than the one originally chosen.

The uncomfortable divergence

There is a notable irony in the current environment. Domestically, the IRS and U.S. Treasury’s 2025-2026 Priority Guidance Plan reflects an explicit focus on deregulation and burden reduction, detailing dozens of projects aimed at reducing compliance costs for US businesses. Meanwhile, the international compliance environment has moved in the opposite direction, adding disclosure layers, explanatory requirements, and public transparency obligations that many US businesses cannot avoid simply because they are headquartered in the United States.

This divergence has a direct implication for how tax departments allocate resources and make the internal case for investment in international compliance infrastructure. The burden internationally is not going down — indeed, it is intensifying — and that argument is now backed by concrete examples rather than projections.

3 things worth doing now

There are several actions that corporate tax teams should consider, including:

Audit CbCR data quality with Australia’s enhanced requirements in mind — If you cannot readily reconcile taxes accrued to taxes credited at the jurisdictional level, that gap needs to be closed before it becomes an authority inquiry.

Revisit EU filing jurisdiction strategy — If your jurisdictional decision was made at the time of initial implementation and has not been reviewed since, it is worth a fresh look before the next reporting cycle.

Develop an internal narrative around public CbCR data before it circulates externally — Your company’s tax story should not be a surprise to the corporate teams involved in communications, investor relations, or ESG — and in today’s world, assuming such news stays quiet is no longer a safe assumption.

While CbCR started as a tool for tax authorities, it today has become something more visible, more public, and more consequential than that — and that trajectory is not reversing any time soon.


You can download a full copy of the Thomson Reuters Institute’s report here

Agentic AI following GenAI’s growth trajectory in legal, but with unique oversight challenges, new report shows /en-us/posts/technology/agentic-ai-oversight-challenges/ Thu, 09 Apr 2026 08:45:55 +0000

Key takeaways:

      • Agentic AI poised for adoption uptick — Agentic AI is following GenAI’s rapid adoption in the legal industry, with less than 20% of firms currently implementing agentic systems but half planning or considering adoption in the near future, according to a new report.

      • Adoption depends on human oversight answers — Legal professionals are generally optimistic about agentic AI’s potential, but successful adoption depends on explicit guidance about human oversight and the lawyer’s role in maintaining ethical standards.

      • Time to retool AI education? — Agentic AI’s increased autonomy introduces new oversight and ethical challenges for law firms, making targeted education and clear guidance essential to understanding the differences from GenAI.


Over the past several years, law firms and corporate legal departments have turned towards generative AI en masse. At the beginning of 2024, just 14% of all law firms and legal departments featured an enterprise-wide GenAI tool. Just two years later, that number had already risen to 43% of all firms and departments, according to the 2026 AI in Professional Services Report, from the Thomson Reuters Institute (TRI). For large law firms or legal departments, those percentages — not surprisingly — are beginning to approach 100%.

With GenAI adoption now this widespread, legal industry leaders are turning their attention to two primary initiatives. One, of course, is how to get the most out of the AI tools they already have — a task that is proving a bit elusive. Currently, less than 20% of lawyers say their organizations measure AI’s return on investment, and most corporate lawyers say they have no idea how their outside law firms are approaching AI. Thus, instituting not just AI tools but also an AI strategy is the second top priority for law firms and corporate legal departments in 2026 and beyond.

However, even as the legal industry reaches a tipping point in adopting GenAI tools, technology innovation continues unabated. Agentic AI has emerged as the next wave of innovation that could change how lawyers work on a daily basis, offering a way to autonomously complete multi-step tasks. For example, agentic AI systems already being built for the legal industry can independently research a regulation or law, draft a document based on the findings, identify pitfalls, and revise the document, with stops for human guidance instituted only as desired.

According to the AI in Professional Services Report, the legal industry is already making headway towards implementing agentic AI systems. For agentic AI to truly take hold in legal, however, lawyers still require more education around not only how it differs from the GenAI systems they already have in place, but also when and where human intervention needs to occur within an agentic system.

The early stages of agentic AI

Examining current agentic AI adoption for the legal industry almost takes one back in time — two years, to be exact. Following the public release of GenAI in late-2022, many legal industry organizations spent 2023 evaluating and experimenting with AI systems, usually with a small working group of interested guinea pigs. As a result, only 14% of survey respondents said their law firms or corporate legal departments were engaged in organization-wide GenAI rollouts at the start of 2024. However, more than half of respondents said their organizations expected to be rolling out large-scale GenAI systems over the next 1 to 3 years. The intervening two years since then have proved that prediction to be largely true.

Agentic AI usage in the first half of 2026 looks largely similar to GenAI in 2024. The legal industry started to experiment with agentic AI at the beginning of 2025, with an eye towards actual implementation in 2026 and beyond (particularly as legal software providers began to integrate agentic systems into their own products). As such, less than 20% of recent survey respondents say their organization is engaged in widespread agentic AI adoption, while about half of respondents said their organization is either planning to use or considering whether to use agentic AI in the near future.


By and large, lawyers feel positive about the agentic AI movement. When asked about their sentiment towards agentic AI, 51% of legal industry respondents said they felt excited or hopeful, while just 19% said they felt concerned or fearful. Further, about half (47%) said they actively believe agentic AI should be used for legal work, while 22% felt it should not, with the remainder saying they were unsure. These figures largely track with the sentiments expressed about GenAI in 2024, which have only grown over time from about 50% positive two years ago to two-thirds of all legal professionals feeling positive currently.

This all lends further credence to a rise in agentic AI usage similar to what law firms and corporate legal departments experienced with GenAI over the course of 2024 and 2025. Indeed, when asked when they expect agentic AI to be a central part of their workflow, few have baked agentic systems into their daily work currently, but a majority of legal industry respondents expect it to be central within the next 3 to 5 years.


The unique barriers of agentic AI adoption

Agentic AI does differ from GenAI in one crucial area that may limit its growth potential within the legal industry, however — autonomy. By and large, GenAI systems operate on a back-and-forth basis: Users provide the tool a prompt, receive its output, and then iterate back-and-forth from there. Agentic AI is intended to be more automated by design, only requiring human input at pre-determined points in the process. And that makes some lawyers understandably nervous.

When asked why they might feel hesitant about using agentic AI for legal tasks, the most common answer was a general fear of the unknown, but the second most common answer dealt with the need for careful monitoring and oversight. In fact, some respondents said they were excited about GenAI, but more cautious about agentic AI’s potential.

“Agentic AI, while exciting, to me removes oversight a step too far,” said one such lawyer from a US law firm. “I like the idea of prompting and reviewing a result. It is something else to have a machine have so much autonomy in the actual doing of a thing and potentially acting on my behalf without that very concrete review.”


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.


An assistant GC at a US company also pointed to potential privacy and security concerns, adding: “The fact that agentic AI operates in a much more autonomous way, with a lack of control from the user, means there are many unknowns that are hidden beneath the process.”

For law firm and corporate legal department leaders looking to potentially implement agentic AI systems into their practice, this means re-thinking what AI education and training will mean moving forward. Beyond that, however, legal AI educators also will need to make sure to pinpoint and perhaps over-explain those specific instances in which human oversight needs to occur in agentic systems. More autonomous does not mean fully autonomous, and particularly for lawyers with ethical duties to their work product, lawyer oversight will in fact be a necessary part of any agentic system.

For law firm or legal department leaders, that means that finding the right balance between efficient workflows and human intervention will be key to agentic AI adoption. And those organizations that can best communicate human-in-the-loop expectations to their professionals up front will be rewarded with greater and more reliable adoption.

Clearly, lawyers feel positively about the agentic AI future, after all. They just need it spelled out explicitly as to what the lawyer’s role will be in this new paradigm.

“Agentic AI is powerful, but its moral compass must come from humans,” one UK law firm barrister noted aptly. “Lawyers are trained to safeguard fairness, rights, and the rule of law — principles that should guide how AI is designed, governed, and deployed. Hope lies in our ability to shape AI through these values for fairer values for society as a whole.”


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here

IEEPA tariff refunds: What corporate tax teams need to do now /en-us/posts/international-trade-and-supply-chain/ieepa-tariff-refunds/ Tue, 31 Mar 2026 13:30:41 +0000

Key takeaways:

      • Only IEEPA‑based tariffs are up for refund — Refunds will flow electronically to importers of record through ACE, the government’s digital import/export system, but only once CBP’s process is finalized.

      • Liquidation and protest timelines are now critical — These customs concepts directly influence which entries are eligible for refunds and how long companies have to protect their claims.

      • Tax functions must quickly coordinate with other corporate functions — In-house tax teams need to coordinate with their organization’s trade, procurement, and accounting functions to gather data, assert entitlement, and get the financial reporting right on any tariff refunds.


WASHINGTON, DC — When the United States Supreme Court issued its much-anticipated ruling on President Donald J. Trump’s authority to impose mass tariffs under the International Emergency Economic Powers Act (IEEPA) in February, it set the stage for what is to come.

The Court ruled the president did not have authority under IEEPA to impose the tariffs, which generated an estimated $163 billion of revenue in 2025. In response, the Court of International Trade (CIT) issued a ruling in Atmus requiring U.S. Customs and Border Protection (CBP) to issue refunds on IEEPA duties for entries that have not gone final. That order, however, is currently suspended while CBP designs the refund process and the government considers an appeal.

At a recent industry event, tax experts discussed what this ruling means for corporate tax departments, outlining what is and isn’t eligible for refund and the steps necessary to apply for refunds.

As panelists explained, the key issue for tax departments is that only IEEPA tariffs are in scope for refund — many other tariffs remain firmly in place. For example, Section 232 tariffs on steel, aluminum, and copper; Section 301 tariffs on certain Chinese-origin goods; and new tariffs of 10% to 15% on most imports still apply and will continue to shape effective duty rates and supply chain costs.

So, which entities can actually get their money back?

Legally, CBP will send refunds only to the importer of record, and only electronically through the government’s digital import/export system, known as the Automated Commercial Environment (ACE) system. That means every potential claimant needs an ACE account with current bank information on file. And creating an account or updating it can be a lengthy process, especially inside a large organization.

If a business was not the importer of record but had tariffs contractually passed through to it — for example, by explicit tariff clauses, amended purchase orders, or separate line items on invoices — it may still have a commercial basis to recover its share from the importer. In practice, that means corporate tax teams should sit down with both the organization’s procurement experts and its largest suppliers to identify tariff‑sharing arrangements and understand what actions those importers are planning to take.

Why liquidation suddenly matters to tax leaders

As panelists noted, the Atmus ruling is limited to entries that are not final, which hinges on the customs concept of liquidation. CBP typically has one year to review an entry and liquidate it (often around 314 days for formal entries), with some informal entries liquidating much sooner.

Once an entry liquidates, the 180‑day protest clock starts. Within that window, the importer of record can challenge CBP’s decision, and those protested entries may remain in play for IEEPA refunds. There is also a 90‑day window in which CBP can reliquidate on its own initiative, raising questions about whether “final” should be read as 90 days or 180 days — clearly, an issue that will matter a lot if your company is near those deadlines.
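As a rough illustration of how those windows interact, the sketch below computes the key dates for a single entry using plain calendar-day arithmetic. It is a simplification for planning purposes, not legal guidance on how “final” will ultimately be read.

```python
from datetime import date, timedelta

PROTEST_WINDOW = timedelta(days=180)       # importer's window to protest after liquidation
RELIQUIDATION_WINDOW = timedelta(days=90)  # CBP's window to reliquidate on its own initiative

def key_dates(liquidation_date: date) -> dict:
    """Given an entry's liquidation date, return the two deadlines that
    determine whether it can stay in play for an IEEPA refund."""
    return {
        "liquidation": liquidation_date,
        "reliquidation_deadline": liquidation_date + RELIQUIDATION_WINDOW,
        "protest_deadline": liquidation_date + PROTEST_WINDOW,
    }

dates = key_dates(date(2026, 1, 15))
today = date(2026, 4, 1)
if today <= dates["protest_deadline"]:
    print(f"Protest still possible until {dates['protest_deadline']}")
```

Running this kind of calculation across an entire entry file makes it easy to sort entries into still-protestable, soon-to-lapse, and already-final buckets.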

Data, controversy risk & financial reporting

For in-house tax departments, the refund process starts with access to entry‑level data showing which imports bore IEEPA tariffs between February 1, 2025, and February 28, 2026. If a business does not already have robust trade reporting, the first step is to confirm whether the business has made payments to CBP and, if so, to work with the company’s supply chain or trade compliance teams to access ACE and run detailed entry reports for that period.
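For teams pulling ACE entry reports as CSV extracts, a first-pass filter might look like the sketch below. The column names (entry_number, entry_date, ieepa_duty) are hypothetical placeholders; a real ACE report layout will differ and should be mapped accordingly.

```python
import csv
from datetime import date

IEEPA_START, IEEPA_END = date(2025, 2, 1), date(2026, 2, 28)

def ieepa_entries(path):
    """Yield (entry_number, ieepa_duty) for entries filed in the IEEPA
    window that actually carried IEEPA duties."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            entry_date = date.fromisoformat(row["entry_date"])
            duty = float(row["ieepa_duty"] or 0)
            if IEEPA_START <= entry_date <= IEEPA_END and duty > 0:
                yield row["entry_number"], duty

# Example usage against a hypothetical extract:
# total = sum(duty for _, duty in ieepa_entries("ace_entry_report.csv"))
```

An entry-by-entry output like this also gives the team its first estimate of the refund amount at stake.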

Summary entries and heavily aggregated data will be a challenge because CBP has indicated that refund claims will require a declaration in the ACE system that lists specific entries and associated IEEPA duties. Expect controversy pressure: As claims scale up, CBP resources and the courts could see backlogs. If that becomes the case, tax teams should be prepared for protests, documentation requests, and potential litigation over entitlement and timing.

On the financial reporting side, whether and when to recognize a refund depends on the strength of the legal claim and the status of the proceedings. If tariffs were listed as expenses as they were incurred, successful refunds may give rise to income recognition. In cases in which tariffs were capitalized into fixed assets, however, the accounting analysis becomes more nuanced and may implicate asset basis, depreciation, and potentially transfer pricing positions.

Coordination between an organization’s financial reporting, tax accounting, and transfer pricing specialists is critical to ensure that customs values, income tax treatment, and any refund‑related credits remain consistent.

Action items for corporate tax departments

Corporate tax teams do not need to become customs experts overnight, but they do need to lead a coordinated response. Practically, that means they should:

      • confirm whether their company was an importer of record and, if so, ensure ACE access and banking information are in place now, not after CBP turns the refund system on.
      • map which entries included IEEPA tariffs, identify which are non‑liquidated or still within the 180‑day protest window, and file protests where appropriate to protect the company’s rights.
      • inventory all tariff‑sharing arrangements with suppliers, assess contractual entitlement to pass‑through refunds, and align with procurement and legal teams on a consistent recovery approach.
      • work with accounting to determine the financial statement treatment of potential refunds, including whether and when to recognize contingent assets or income and any knock‑on effects for transfer pricing and valuation.

If tax departments wait for complete certainty from the courts before acting, many entries may go final and fall out of scope. The opportunity for tariff refunds will favor companies that are data‑ready, cross‑functionally aligned, and willing to move under time pressure.


You can find out more about the changing tariff situation here

SALT changes in 2026 and beyond: What indirect tax teams need to know /en-us/posts/corporates/salt-changes-indirect-tax-teams/ Fri, 20 Mar 2026 13:27:08 +0000

Key takeaways:

      • Changing the balance of taxes — Budget‑driven tax swaps and incentive reforms are changing the balance between income, property, and sales taxes, forcing large companies to revisit their multistate footprint.

      • How revenue is sourced is changing, too — Rapidly evolving digital and AI‑related taxes are creating new nexus, sourcing, and base‑definition issues for businesses that rely on revenue from digital advertising, social platforms, data monetization, and automated tools.

      • Planning amid continued uncertainty — New federal tax regulations, tariff‑related uncertainty, and even the elimination of the penny are all amplifying state‑by‑state complexity for in‑house tax departments.


WASHINGTON, DC — Tax industry experts who gathered at a recent conference to provide updates on the current landscape of state and local tax (SALT) policy and offer insights that corporate tax departments should consider found, not surprisingly, that they had a lot to talk about in the current economic environment.

Mapping the new SALT frontier

For starters, this year’s SALT agenda is not just an abstract policy story for large, multistate businesses; rather, it’s a direct driver of cash taxes, effective tax rate (ETR) volatility, and audit exposure. Indeed, several state legislatures are advancing new taxes on digital advertising and data, revisiting incentives and data center exemptions, and using conformity to federal law — especially the tax provisions in the One Big Beautiful Bill Act (OBBBA) — as a policy lever, all against the backdrop of slowing revenues and contentious elections.

“Tax swaps” and incentives — States that are facing budget pressure are, unsurprisingly, looking at tax swaps to reduce income or property taxes while broadening the sales & use tax base and trimming exemptions. For example, on March 3, the state of Florida — which already doesn’t have a state income tax — passed legislation along these lines.

Moreover, with the rapid expansion of AI comes an extensive need for data centers. Several states are reassessing data center exemptions and credits, either tightening qualification standards, requiring centers to supply more of their own power, or repealing incentives outright. A recent decision in Virginia, for example, is viewed as a potential template for other states, particularly in areas where energy and environmental concerns are priorities. At the same time, other proposals include expanded corporate tax disclosures, CEO compensation surcharges, and enhanced reporting on apportionment and group filing methods.

What companies should consider — Large companies operating across multiple states should consider making an inventory of their credits and incentives by jurisdiction, including sunset dates and political risk indicators.

Companies should also build forward‑looking models that show how any sales tax base expansion would interact with their supply chain and their procurement of digital and professional services.

New exposure for tech, marketing & data

Legislators in several states are continuing, on a bipartisan basis, to treat the digital economy as a revenue and policy target. For example, Maryland continues to lead with its digital advertising tax, while Washington state’s expansion of its sales tax to include certain digital and IT services and Chicago’s social media taxes illustrate the variety of approaches that state and local jurisdictions are exploring to expand their tax base and raise revenue.

Data and “digital resource” taxes — Proposals in states such as New York would tax companies that derive income from resident data, treating data as a natural resource. While no state has fully implemented a comprehensive data tax, large platforms and data‑driven enterprises are monitoring these bills closely.

AI‑related SALT rules — Many states still classify AI solutions under existing Software as a Service (SaaS) or data‑processing categories, but some — including New York — are exploring surcharges tied to AI‑driven workforce reductions. And at least two states are explicitly taxing AI, similar to the way software is taxed.

For corporate tax leaders, practical next steps include mapping those areas in which your group has digital ad spending, user bases, data monetization, or AI deployments, and then overlaying that map with current and pending digital tax proposals. In parallel, it is increasingly critical for the tax team to partner with IT and marketing teams to understand how contracts, invoicing structures, and platform design will affect nexus, tax base definition, and sourcing.

Federal shifts magnify multistate complexity

The OBBBA made permanent several provisions of the 2017 Tax Cuts and Jobs Act, while expanding SALT relief on the individual side and creating new interactions for multinational groups. Because most states start from federal taxable income — on a rolling, static, or selective conformity basis — OBBBA changes reverberate across state corporate income tax bases, especially in those states that have decoupled from interest limits, R&D expensing, or new production‑related incentives.

Corporate tax departments must now juggle different conformity dates and selective decoupling rules across rolling and static states, including jurisdictions that automatically decouple when a federal change exceeds a revenue impact threshold. This requires more granular state‑by‑state modeling of OBBBA impacts on apportionable income, deferred tax balances, and cash tax forecasts. It also heightens the risk that political disputes produce mid‑cycle changes that complicate provision and compliance processes.

Penny elimination — With the federal government ending penny production, states are now moving toward symmetrical rounding for cash transactions, rounding the final tax‑inclusive total to the nearest five cents while attempting not to alter the underlying tax computation. For retailers and consumer‑facing enterprises, this shifts the focus to point-of-sale (POS) configuration, consumer‑protection exposure, and class‑action risk if rounding is implemented incorrectly.
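For illustration, symmetrical rounding of a final cash total to the nearest five cents can be expressed in a few lines of integer arithmetic. This is a sketch of the general approach only, not any particular state’s rule, and it deliberately touches only the tax-inclusive total, never the tax computation itself.

```python
def round_to_nickel(total_cents: int) -> int:
    """Round a final, tax-inclusive cash total (in whole cents) to the
    nearest 5 cents: remainders of 1-2 round down, 3-4 round up."""
    remainder = total_cents % 5
    if remainder < 3:
        return total_cents - remainder
    return total_cents + (5 - remainder)

print(round_to_nickel(1743))  # 1745, i.e., $17.43 becomes $17.45
print(round_to_nickel(1741))  # 1740, i.e., $17.41 becomes $17.40
```

Doing the rounding in integer cents as the final step at the POS layer avoids floating-point drift and leaves the receipt’s tax line unchanged.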

Tariffs and refunds — The U.S. Supreme Court’s February decision in Learning Resources, Inc. v. Trump under the International Emergency Economic Powers Act leaves open how more than $100 billion in tariff collections might be refunded and what that means for prior sales & use tax treatment. Streamlined guidance generally treats tariffs embedded in product prices as part of the taxable sales price but excludes tariffs paid directly by a consumer‑importer from the tax base, raising complex questions if tariff refunds reduce costs or sales prices retroactively.

For indirect tax teams, given the confluence of the 2026 SALT changes — spanning everything from data center credits to the recent Supreme Court tariff decision — the need to rely on internal partners across the business has never been stronger. Combining that with a greater reliance on technology, including dedicated research tools to stay abreast of state-by-state tax changes, may be the best way for corporate tax teams to keep up with compliance requirements and avoid penalties.


You can download a full copy of the report here

Corporate tax teams eager for AI, but frustrated by pace of change, new report shows /en-us/posts/corporates/corporate-tax-department-technology-report-2026/ Mon, 16 Mar 2026 13:06:11 +0000

Key insights:

      • Possibilities vs. practicality — There is a growing frustration gap between what corporate tax professionals want to achieve and what their current technological tools will allow.

      • Expectations about AI — Tax professionals have significantly accelerated the timeframe in which they expect AI to become a central part of their workflow.

      • Proactive progress — Automation is enabling a gradual shift toward more strategic, proactive tax work, although not as quickly as many tax professionals would like.


The recently released 2026 Corporate Tax Department Technology Report, from the Thomson Reuters Institute and Tax Executives Institute, reveals that while automation of routine tax functions is indeed enabling a long-desired shift toward more strategic, proactive tax work in some corporate tax departments, a majority of tax leaders surveyed say upgrading their department’s tax technology is still a relatively low priority at their company.


The report surveyed 170 tax leaders from companies of all sizes to find out how corporate tax professionals are using technology, overcoming obstacles, and planning for the future.

A growing “frustration gap”

In general, the report found that while many companies (especially larger ones) are actively upgrading their tax department’s technological capabilities, there is a growing frustration gap between what tax professionals know they can accomplish with more robust technologies and what their current tools allow them to do.

Adding to this frustration is a growing discrepancy between the additional budget and resources tax departments hope to get each year and the harsher reality they often face. Indeed, even though tax leaders remain optimistic that their budgets and capabilities will expand and improve in the coming years, fewer than half of the respondents surveyed said their departments received a budget increase last year, and many saw budget cuts.



Further, the report shows that the prospect of incorporating ever more sophisticated forms of AI and AI-driven tools into tax workflows is also very much on the minds of tax professionals. Even though the actual usage of AI in corporate tax departments is still relatively low, the report reveals that tax professionals now expect AI to become a central part of their workflow within one to two years, much faster than they did in last year’s report.

Indeed, as the report explains, this expectation of more imminent AI adoption represents a significant shift in attitude, because most corporate tax departments are rather circumspect about how, when, and why they incorporate new tech tools into their established routines.

If today’s technological capabilities continue to accelerate, companies that have been slow to invest in the infrastructure necessary to keep pace may soon find themselves struggling to catch up with their more tech-savvy counterparts, the report warns.

Moving toward more proactive work, albeit slowly

For companies that have invested in the technological infrastructure necessary to support advanced tax technologies, the payoff is becoming increasingly evident.

According to the report, about two-thirds (67%) of tax professionals surveyed said their company’s investment in technology had enabled a shift toward more proactive tax work within their departments. This shift is particularly noticeable at large corporations, at which, unsurprisingly, investment in tax technology has been more generous.

The 2026 Corporate Tax Department Technology Report also explores other aspects of corporate tax departments, including their hiring practices, tech training, purchasing strategies, what they see as the most popular tech tools for tax, and numerous other factors that affect how tax departments operate.


You can download a full copy of the Thomson Reuters Institute’s 2026 Corporate Tax Department Technology Report here

Reinventing the data core: The arrival of the adaptable AI data foundry /en-us/posts/technology/reinventing-data-core-adaptable-data-foundry/ Thu, 05 Mar 2026 16:08:59 +0000

Key takeaways:

      • There is a widening gap between AI ambition and readiness — The gap between AI ambition and data readiness is widening, making the adoption of an adaptable data foundry essential for scalable, explainable, and compliant AI outcomes.

      • A data foundry model directly addresses the root cause — A data foundry model enables organizations to industrialize data production, automate compliance, and ensure consistent data lineage, thereby overcoming the limitations of brittle, legacy data architectures.

      • Incorporate the data core into your AI planning — Reinventing the data core is now a strategic imperative for those enterprises that aim to thrive in 2026 and beyond, as agentic AI, regulatory demands, and integration complexity accelerate.


This article is the third and final installment in a 3-part blog series exploring how organizations can reset and empower their data core.

A defining theme of this year so far is the widening distance between organizational ambition and data readiness. Leaders want what the hype promises — the capabilities they believe come built into agentic AI: automated compliance, predictive integration for M&A, and decision-intelligence pipelines that reduce operational friction.

Without a data foundry, however, much of that will be impossible. Instead, workflows will remain brittle, AI agents will hallucinate under inconsistent semantics, and data lineage will break down across federated sources. Further, without a data foundry, regulatory mappings involved with the Financial Data Transparency Act (FDTA) and the Standard Business Reporting (SBR) framework cannot be automated, cross-functional insights will require manual reconciliation, and auditability will collapse under scrutiny.

This is not a failure of leadership. It is a failure of architectural design to recognize that consolidated, well-governed data is a precondition for these technologies, alongside the critical priorities of data security, auditability, and lineage.


For decades, organizations built monolithic systems that were optimized for stability and reporting. Today’s world demands modularity, continuous adaptation, and agent-driven interoperability. Architecture has shifted from build and operate to build and evolve. This is precisely what a data foundry enables.

Why reinvention can no longer wait

Throughout 2025 and now into the early months of 2026, data and AI have quietly shifted from innovation topics to enterprise constraints. Leaders across regulated markets are starting to recognize that the obstacles limiting their AI ambitions are neither mysterious nor technical — they are structural. These obstacles sit inside the data core, waiting inside the silent architecture that determines whether any form of automation, intelligence, or compliance can scale beyond a pilot.

The data bears this out. When you examine the work coming from Tier-1 research bodies, supervisory institutions, and global transformation benchmarks, a consistent narrative emerges beneath the headlines: AI is accelerating, regulation is hardening, and integration demands are expanding. Moreover, organizational data remains pinned to assumptions that were forged in static, pre-AI operating environments. This gap is not theoretical; rather, it is measurable, persistent, and directly correlated to business performance.


Let’s look at the AI results first. Across industries, organizations continue to experience a familiar pattern: early promise, limited adoption, and rapid degradation once the model encounters inconsistent semantics or fragmented lineage. Global studies show that the vast majority of enterprise AI initiatives still struggle to reach full production maturity, and among those that do, many encounter performance drift within the first year.

The driver is remarkably consistent. It is not the sophistication of the model nor the skill of the data science team — it is the quality, clarity, and traceability of the data that is feeding the system.

Taken together, these signals deliver a clear message. The gap between AI ambition and data readiness is widening, not narrowing. This is why the data foundry conversation matters now. It is not an abstract architectural concept. It is a response to the full stack of quantitative pressures the market has been telegraphing for years — costs rising, compliance hardening, AI accelerating, and integration straining under inconsistent semantics and fragile lineage.

A data foundry model directly addresses the root cause of this by industrializing the creation of consistent, reusable, explainable data products that can fuel agentic AI, support regulatory defensibility, and accelerate enterprise reinvention.

The numbers point to a simple conclusion. Reinvention is no longer optional, and the window to address the data core before agentic AI becomes standard practice is narrow and closing. The organizations that act now will be the ones that define what compliant, explainable, interoperable AI looks like in the next decade. Those that defer the work will find themselves restructuring under pressure rather than reinventing by choice.

This is the inflection point. In truth, the quantitative signals have made the case more clearly than a multitude of strategy narratives ever could.

The data foundry: A model for continuous alignment

Unsurprisingly, agentic AI introduces new, more demanding requirements — sketched briefly in code after this list — including:

      • machine-interpretable semantics;
      • context-preserving lineage across federated systems;
      • decomposition of enterprise knowledge into reusable data products;
      • dynamic trust-scoring tied to source reliability and timeliness;
      • automated compliance overlays and regulatory logic; and
      • cross-domain metadata orchestration.
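A minimal sketch of how a few of these requirements, such as domain ownership, machine-readable semantics, lineage, and trust scoring, might travel with a data product is shown below. The class and field names are illustrative assumptions, not an actual AXTent or vendor API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageStep:
    source_system: str
    transformation: str
    timestamp: datetime

@dataclass
class DataProduct:
    """Illustrative data-product wrapper: semantics, lineage, and trust
    metadata travel with the data rather than living in a wiki."""
    name: str
    owner_domain: str                # accountable business domain (mesh principle)
    schema_version: str              # versioned contract for consumers
    semantics: dict                  # machine-readable field definitions
    lineage: list = field(default_factory=list)
    source_reliability: float = 1.0  # 0..1, set by governance policy
    freshness_hours: float = 0.0

    def trust_score(self, max_age_hours: float = 24.0) -> float:
        """Combine source reliability with a simple timeliness decay."""
        timeliness = max(0.0, 1.0 - self.freshness_hours / max_age_hours)
        return self.source_reliability * timeliness

product = DataProduct(
    name="counterparty_exposure",
    owner_domain="credit-risk",
    schema_version="2.1.0",
    semantics={"exposure_usd": "net exposure to counterparty, USD, end of day"},
    lineage=[LineageStep("core-banking", "aggregate by counterparty",
                         datetime.now(timezone.utc))],
    source_reliability=0.95,
    freshness_hours=6.0,
)
print(f"trust score: {product.trust_score():.2f}")  # 0.71
```

The point of the sketch is that an AI agent consuming this product can check provenance, semantics, and trust programmatically before acting, instead of assuming them.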

These capabilities are non-negotiable. Indeed, they determine whether AI elevates risk or mitigates it, whether it accelerates productivity or introduces unrecoverable inconsistencies, and whether it augments decision quality or produces volatility.

A data foundry shifts organizations from artisanal, one-off data preparation toward industrialized data production, in which patterns replace pipelines and building blocks replace custom engineering. This shift means that lineage is generated, not documented; semantics are governed, not patched; and compliance is automated, not reconstructed. In this way, reuse becomes the default, not the exception.

In fact, this process is analogous to manufacturing. Instead of producing bespoke components for each need, the enterprise creates standardized, high-fidelity data assets that can be assembled into any workflow, any AI use case, and any reporting requirement.

A data foundry becomes the quiet architecture behind every future capability, making these capabilities systematic rather than ad hoc. The chart below showcases the progressive build-up of a data foundry, beginning with data intake and harmonization and ending with AI agent orchestration and reusable data products that learn from their deployment.

[Chart: progressive build-up of a data foundry, from data intake and harmonization to AI agent orchestration and reusable data products]

Unfortunately, organizations are still building increasingly advanced AI decisioning and efficiency solutions on top of an aging and brittle data foundation. The results are predictable: stalled initiatives, compliance exposure, and stakeholder frustration. Additionally, instead of asking why, organizations keep adding more tools — more dashboards, more cloud services, more AI pilots, and more flavors of transformation.

Clearly, enterprises aren’t dealing with an AI problem. They’re dealing with a data alignment problem disguised as progress within fragmented AI enclosures.

Reinvention starts at the data core

For more than a decade, firms across regulated industries have repeated the same mantra: Data is our most critical asset. When you peel back the layers, however — in board review sessions, integration meetings, or regulatory remediation audits — the evidence does not match the rhetoric.

Reinvention is no longer optional. Instead, it is the starting point for meeting the demands of 2026 and beyond. The institutions that thrive will be those that understand that the data core is not a technical asset — it is the operational backbone of the enterprise. Indeed, the institutions that succeed will be those that recognize the truth early: AI is an output, and the data core is the strategy. And the organizations able to industrialize their data — through a foundry model, through AXTent, through repeatable semantic structures — will be the ones leading innovation, reducing compliance risk, accelerating M&A synergies, and achieving enterprise-wide reinvention.

In the end, the real question isn’t whether AI will transform business; the question is whether the data foundation will allow it. And the answer is rebuilding your data core so AI can actually deliver the outcomes your organization needs — and that work begins now.


You can find more blog posts by this author here

Inside the Shift: Why your agentic AI pilot probably will fail (and what that says about you) /en-us/posts/technology/inside-the-shift-agentic-ai-pilot-failure/ Fri, 20 Feb 2026 16:03:35 +0000

You can read TRI’s latest “Inside the Shift” feature, Premortem: Your 2028 agentic AI pilot program failed, here


Picture this: It’s 2028, your law firm spent real money on an agentic AI pilot, and now it’s quietly been shut down. No press release, no victory lap — just a post‑mortem that nobody wants to read. In our latest Inside the Shift feature article, we see that such a future is very likely unless firms start preparing for agentic AI in a way that’s very different from how they think they should.

The big idea is simple but uncomfortable: Success with generative AI (GenAI) does not mean your organization is ready for agentic AI. GenAI works because it’s forgiving. You can paste text into a tool, get a decent answer, and move on — even if your data is messy and your workflows live in people’s heads. Agentic AI doesn’t work that way. It expects clean data, documented processes, and clear rules. If your firm runs on institutional memory, workarounds, and a kind of “just ask Linda” problem-solving process, then the system will eventually break down.


To examine this and many more situations, the Thomson Reuters Institute (TRI) has launched a new feature segment, Inside the Shift, that leverages our expert analysis and supporting data to tell some of the most compelling stories in professional services today.


Our latest Inside the Shift feature, Premortem: Your 2028 agentic AI pilot program failed, by Bryce Engelland, Enterprise Content Lead for Innovation & Technology for the Thomson Reuters Institute, walks us through two fictional but painfully familiar stories of how separate firms handled their agentic AI pilot programs.

The author explains how the first firm moves fast after crushing its GenAI rollout, assuming agentic AI is just the next logical step. Everything looks great in a sandbox, but then the system hits real‑world chaos: undocumented exceptions, fragmented document storage, and conflict checks that only work because humans intuitively know when something feels off. One bad intake decision later, client trust is damaged and the pilot is frozen. In this example, the tech didn’t fail — the organization did.

The second firm goes the opposite direction. They’re cautious, thoughtful, and obsessed with governance. They build guardrails, limit risk, and launch a perfectly reasonable pilot. And then… nothing happens. Attorneys ignore the system — not because they hate AI, but because using it only adds risk with no reward. If it works as it’s supposed to, nothing changes; but if something goes wrong, they’ll be blamed. So, unsurprisingly, the rational choice is to nod in meetings and quietly keep doing things the old way until the project dies of inertia.


The challenge is that “preparing” doesn’t mean what most people think. It doesn’t mean buying early, and it doesn’t mean waiting for maturity. Rather, preparing means understanding now why these systems fail, and building the institutional capacity to avoid those failures when the technology arrives in full.


The feature article points out the common thread here: These failures have very little to do with AI capability; rather, they’re about incentives, documentation, and institutional honesty. Firms that succeed with agentic AI won’t be the ones that buy in early or wait patiently. The winners, the piece explains, will be the ones doing the boring, unsexy work now: writing things down, fixing information architecture, identifying hidden dependencies, and aligning rewards so adoption isn’t all risk and no upside.

In short, this article isn’t a warning about technology. It’s a warning about pretending your organization is ready when it’s not — and mistaking optimism or caution for preparation.

So, dive a little deeper behind the headlines about AI adoption and how to make agentic AI work for your organization. Click through and read today’s Inside the Shift feature. It might help you see more clearly than before whether the path your organization is pursuing with agentic AI will carry it over the goal line and into the next decade… or leave your team watching from the sidelines.


You can find more Inside the Shift feature articles from the Thomson Reuters Institute here

Architecting the data core: How to align governance, analytics & AI without slowing the business /en-us/posts/technology/architecting-data-core-aligning-ai-governance-analytics/ Thu, 12 Feb 2026 19:02:55 +0000

Key takeaways:

      • Legacy data architectures can’t keep up with modern demands — Traditional, centralized data cores were designed for stable, predictable environments and are now bottlenecks under continuous regulatory change, rapid M&A, and AI-driven business needs.

      • AXTent aims to unify modern data principles for regulated enterprises — The modern AXTent framework integrates data mesh, data fabric, and composable architecture to create a data core built for distributed ownership, embedded governance, and adaptability.

      • A mindset shift is required for lasting success — Organizations must move from project-based data initiatives to perpetual data development, focusing on reusable data products and decision-aligned outcomes rather than one-off integrations or platform refreshes.


This article is the second in a 3-part blog series exploring how organizations can reset and empower their data core.

For more than a decade, enterprises have invested heavily in data modernization — new platforms, cloud migrations, analytics tools, and now AI. Yet, for many organizations, especially in regulated industries, the results remain underwhelming. Data integration is still slow, regulatory reporting still requires manual remediation, M&A still exposes hidden data liabilities, and AI initiatives struggle to move beyond pilots because trust in the underlying data remains fragile.

The problem is not effort; it is architecture. Since 2022, the buildup around AI has been something out of science fiction — self-learning, easy to install, worker-displacing, autonomous, even Terminator-like. And while AI may indeed revolutionize research, processes, and profits, the fundamental challenge is not the advancing technology; rather, it is the data used to train and cross-connect these exploding capabilities.

Most data cores in use today were designed for an earlier operating reality — one in which data was centralized, reporting cycles were predictable, and governance could be applied after the fact. That model breaks down under the modern pressures of continuous regulation, compressed deal timelines, ecosystem-based business models, and AI systems that consume data directly rather than waiting for curated outputs.

So, why is the AI hype not living up to the anticipated benefits? Why is the data that underpinned process systems for decades failing to scale across interconnected AI solutions? The solution requires not another platform refresh but rather a structural reset of the data core itself.

That reset treats data meshes, data fabrics, and modern composable architecture as a single, integrated system, and aligns them to the AXTent architectural framework, which is designed explicitly for regulated, data-intensive enterprises.

Why the traditional data core no longer holds

Legacy data cores were built to optimize control and consistency. Data flowed inward from operational systems into centralized repositories, where meaning, quality, and governance were imposed downstream. That approach assumed there were stable data producers, limited use cases, human-paced analytics, and periodic regulatory reporting.

Unfortunately, none of those assumptions hold today. Regulatory expectations now demand traceability, lineage, and auditability at all times (not just at quarter-end). M&A activity requires rapid integration without disrupting ongoing operations. And AI introduces probabilistic decision-making into environments built for deterministic reporting, with business leaders expecting insights in days, not months.

The result is a growing mismatch between how data is structured and how it is used. Centralized teams become bottlenecks, pipelines become brittle, and semantics drift. Compliance then becomes reactive, and the cost of change increases with every new initiative.

The AXTent framework starts from a different premise: The data core must be designed for continuous change, distributed ownership, and machine consumption from the outset. Indeed, AXTent is best understood not as a product or a platform, but as an architectural framework for reinventing the data core. It combines three design principles into a coherent operating model:

      1. Data mesh — Domain-owned data products
      2. Data fabric — Policy- and metadata-driven connectivity
      3. Data foundry — Composable, evolvable data architecture

Individually, none of these ideas are new. What is different — and necessary — is treating them as a single system, rather than independent initiatives as conceptually illustrated below:


Fig. 1: The AXTent model of operation

The 3 operating principles of AXTent

Let’s look at each of these three design principles individually and how they interact with each other.

Data mesh: Reassigning accountability where it belongs

In regulated enterprises, data problems are rarely technical failures. Instead, they are accountability failures. When ownership of data meaning, quality, and timeliness sits far from the domain that produces it, errors propagate silently until they surface in regulatory filings, audit findings, or failed integrations.

A structured framework applies data mesh principles to address this directly. Data is treated as a product, owned by business-aligned domains that are then accountable for semantic clarity, quality thresholds, regulatory relevance, and consumer usability.

This is not decentralization without guardrails, however. AXTent enforces shared standards for interoperability, security, and governance, ensuring that domain autonomy does not fragment the enterprise. For executives, the benefit is practical: faster integration, fewer semantic disputes, and clearer accountability when things go wrong.

Data fabric: Embedding control without re-centralization

However, distributed ownership alone does not solve enterprise-scale problems. Without a unifying layer, decentralization simply recreates silos in new places.

A proper framework addresses this through a data fabric that operates as a control plane across the data estate. Rather than moving data into a single repository, the fabric connects data products through shared metadata, lineage, and policy enforcement.

This allows the organization to answer critical questions continuously (see the code sketch after this list), such as:

      • Where did this data come from?
      • Who owns it?
      • How has it changed?
      • Who is allowed to use it — and for what purpose?
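As a sketch of what answering those questions programmatically might look like, the snippet below models one fabric catalog entry. Names and structure are illustrative assumptions for this post, not a product API.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """Illustrative fabric metadata for one data product: enough to answer
    where it came from, who owns it, how it changed, and who may use it."""
    product: str
    source_systems: list
    owner_domain: str
    change_log: list = field(default_factory=list)     # (version, description) pairs
    allowed_purposes: set = field(default_factory=set)

    def may_use(self, purpose: str) -> bool:
        """Policy check: is this purpose permitted for this product?"""
        return purpose in self.allowed_purposes

catalog = {
    "counterparty_exposure": CatalogEntry(
        product="counterparty_exposure",
        source_systems=["core-banking", "trade-store"],
        owner_domain="credit-risk",
        change_log=[("2.1.0", "added intraday snapshots")],
        allowed_purposes={"regulatory-reporting", "risk-analytics"},
    ),
}

entry = catalog["counterparty_exposure"]
print(entry.owner_domain)          # credit-risk
print(entry.may_use("marketing"))  # False: purpose not permitted
```

Note that the catalog holds metadata and policy only; the data itself stays wherever it is produced, which is the fabric’s central design choice.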

In this way, governance is no longer a downstream reporting activity; rather, it is embedded into how data is produced, shared, and consumed. Compliance becomes a property of the architecture, not a periodic remediation effort.

And in M&A scenarios, the fabric enables incremental integration, which allows acquired data domains to remain operational, while being progressively aligned rather than forcing immediate and costly consolidation.

Composable architecture: Designing for evolution, not stability

The third pillar of the AXTent model is a modern data architecture that’s designed to absorb change rather than resist it. Traditional architectures rely heavily on rigid pipelines and tightly coupled schemas. These work when requirements are stable, but they may collapse under regulatory change, new analytics demands, or AI-driven consumption.

AXTent replaces pipeline-centric thinking with composable services, including event-driven ingestion and processing, API-first access patterns, versioned data contracts, and separation of storage, computation, and governance.

This approach supports both human analytics and machine users, including AI agents that require direct, trusted access to data. The result is a data core that evolves without constant re-engineering, which is critical for organizations operating under continuous regulatory scrutiny or frequent structural change. AXTent allows acquired entities to plug into the enterprise architecture as domains while preserving context and enabling progressive harmonization.
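
For illustration, here is a minimal sketch of two of those building blocks, a versioned data contract and event-driven publication. The class names and validation logic are invented for this example and are not taken from AXTent.

```python
# A minimal, hypothetical sketch of a versioned data contract combined with
# event-driven publication; the names and logic are invented for illustration.
import json
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class DataContract:
    name: str
    version: int   # consumers pin a version; producers evolve additively
    required_fields: frozenset[str]

    def validate(self, record: dict) -> bool:
        """A record conforms if it carries every field the contract requires."""
        return self.required_fields <= record.keys()

class EventBus:
    """Decouples producers from consumers so each side can evolve independently."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = {}

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic: str, contract: DataContract, payload: str) -> None:
        """Validate against the contract at the boundary, then fan out."""
        record = json.loads(payload)
        if not contract.validate(record):
            raise ValueError(f"record violates {contract.name} v{contract.version}")
        for handler in self._subscribers.get(topic, []):
            handler(record)
```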

The architectural compass

This framework exists for one purpose: to provide a practical, business-oriented methodology for building a reusable, decision-aligned, compliance-ready data core. It is neither a product nor a platform. It is a vocabulary backed by building blocks, patterns, and repeatable workflows — one that executives can use to organize data around outcomes instead of systems.

Overall, the AXTent model prioritizes data clarity over system modernization, decision alignment over model sophistication, continuous compliance over intermittent remediation, reusable data products over disconnected pipelines, and enterprise knowledge codification over one-off integration work.

In essence, organizations should move away from project thinking and toward perpetual data development, in which every output contributes to a compound knowledge base. This is the mindset shift the industry has been missing as it prioritizes AI engineering over business purpose.


In the final post in this series, the author will explain how to shift from “build and operate” to “build and evolve” via a data foundry. You can find more blog posts by this author here.

2026 AI in Professional Services Report: AI adoption has hit critical mass, but now comes the tough business questions /en-us/posts/technology/ai-in-professional-services-report-2026/ Mon, 09 Feb 2026 13:05:35 +0000 https://blogs.thomsonreuters.com/en-us/?p=69356

Key findings:

      • AI adoption accelerates across professional services — Organization-wide use of AI in professional services almost doubled to 40% in 2026, with most individual professionals now using GenAI tools, and many preparing for the next wave of tools such as agentic AI.

      • Strategic integration and measurement lag behind usage — While AI use is widespread, only 18% of respondents say their organization tracks ROI of AI tools, and even fewer measure AI’s impact on broader business goals such as client satisfaction or revenue generation.

      • Communication around AI use remains inconsistent — While most corporate departments want their outside firms to use AI on client matters, less than one-third are aware whether their firms are doing so. Meanwhile, firms report receiving conflicting instructions from clients about AI use, highlighting a need for clearer dialogue and shared strategy around AI adoption.


Over the past several years, AI usage within professional services industries has come into focus. As we enter 2026 in earnest, the early adoption phase of generative AI (GenAI) has come and gone. Today, most professionals have experimented with some form of GenAI, many organizations have integrated GenAI into their workflows, and a number are now preparing for the next wave of technological innovation, such as agentic AI.

Given this, the question for professionals and organizational leaders has now become: What will be AI’s long-term impact on my business?

To delve into this question further, the Thomson Reuters Institute has released its 2026 AI in Professional Services Report, which takes a broad view of current AI usage and planning, sentiment toward the technology, and AI’s business impact across legal, tax & accounting, corporate functions, and government agencies. Taken from a survey of more than 1,500 respondents across 27 countries, the report finds a professional services world that has embraced AI’s use but is continuing to evolve business strategy around its implementation.

For instance, the report shows that organization-wide AI use nearly doubled to 40% in 2026, compared to 22% in 2025 — and for the first time, a majority of individual professionals reported using publicly available tools such as ChatGPT. Additionally, a majority of respondents said they feel either excited or hopeful about GenAI’s prospects in their respective industries, and about two-thirds said they felt GenAI should be applied to their work in some manner.

At the same time, however, many are exploring GenAI tools without much guidance as to how that use will be quantified or measured. Only 18% of respondents said they knew their organization was tracking return on investment (ROI) for AI tools in some manner, roughly the same proportion as last year. And even among those tracking AI metrics, most track mainly internally focused, operational metrics; only a small proportion analyzed AI’s impact on their organization’s larger business goals, such as client satisfaction, external revenue generation, and new business won.

This slow move to strategic thinking also impacts client-firm relationships. Although more than half of both corporate legal departments and corporate tax departments want their outside firms to use AI on client matters, fewer than one-third said they knew whether their firms were doing so. From the firm standpoint, meanwhile, confusion reigns: 40% of firm respondents said they have received instructions from different clients both to use AI on matters and not to use it.

Indeed, about three-quarters of corporate respondents and firm respondents agreed that firms should be taking the lead in starting these conversations around proper AI use. Yet these discussions have not yet happened en masse. “Firms are reluctant — they claim it would compromise quality and fidelity,” said one U.S.-based corporate chief legal officer. “I think they are threatened by it.”

All the while, technological innovation progresses ever quicker. This year’s version of the report measures agentic AI use for the first time, finding that already 15% of organizations have adopted some type of agentic AI tool. Perhaps more interesting, however, is that an additional 53% report their organizations are either actively planning for agentic AI tools or are considering whether to use them, indicating perhaps an even more rapid pace of adoption than we’ve already seen with the speedy rise of GenAI.

Overall, the report makes it clear that most professionals do understand that change, driven by AI in the workplace, is undoubtedly here. Even compared with 2025, a higher proportion of professionals said they believe that AI will have a major impact on jobs, billing and revenue, and even the need for legal or tax & accounting professionals as a whole. The percentage of lawyers calling AI-enabled unauthorized practice of law a major threat rose to 50% in 2026 from 36% in 2025.

Further, this report paints the picture of a professional services world that has embraced AI, begun to see its impact, and realized that it will have broader business and industry implications than previously imagined. As a result, the time for professionals and organizations to begin planning in earnest for an AI future has already arrived.

As a corporate general counsel from Sweden noted: “We cannot keep up with the modern-day corporations’ demands unless we also develop and adapt our way of working.”

You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here.


Understanding the data core: From legacy debt to enterprise acceleration /en-us/posts/technology/understanding-data-core-enterprise-acceleration/ Tue, 03 Feb 2026 14:47:41 +0000 https://blogs.thomsonreuters.com/en-us/?p=69255

Key takeaways:

      • The real bottleneck for AI is the data core — AI is advancing rapidly, but most organizations’ data architectures, governance, and legacy assumptions can’t keep up. Without a repeatable, business-aligned data foundation, AI initiatives will struggle to scale and deliver reliable results.

      • AI success relies on explainable, traceable, and reusable data — For AI to be reliable and compliant, organizations must design data environments that emphasize lineage, semantics, and trust; and that means that compliance and auditability need to be built into the data core, not added on later.

      • Businesses should shift from tool-centric upgrades to business-driven, data-centric reinvention — Efforts focused only on modernizing tools or platforms miss the root issue: legacy data structures. Leaders must prioritize building a cohesive, reusable data core that aligns with business strategy.


This article is the first in a 3-part blog series exploring how organizations can reset and empower their data core.

Across boardrooms, regulatory briefings, and strategic off-sites, leaders are asking with growing urgency some variation of the same question: How do we make AI reliable, scalable, auditable, and economically defensible? The surprising answer is not in the AI technology, nor in the cloud stack, nor in another round of system upgrades.

It is in the data. Not the data we store, not the data we report, and not the data we move across our pipelines. It is in the data that we must now explain, contextualize, trace, validate, and reuse continuously as agentic AI becomes embedded in every workflow, every decision system, and every regulatory outcome.

The stark reality across industries, then, is deciding what to do as AI matures faster than our data cores can support it. For the first time, technology is not the bottleneck — architecture is, organizational assumptions are, and governance strategies are. More importantly, the lack of a repeatable, business-aligned data foundry has become the strategic inhibitor standing between today’s operations and tomorrow’s autonomy-ready enterprises.

The realities of 2026

As 2026 gets underway, the pressures of regulation, AI adoption, data lineage requirements, and cross-system consistency have converged into a single strategic reality: We can’t keep modernizing data at the edges. The data core itself must be reimagined and compartmentalized.

For leaders across highly regulated industries, the challenge is recognizing that our data architectures were never designed for the world we’re moving into. Historically, solutions were built for predictable siloed-data systems, linear programmatic processes, and dashboard reporting. Today’s demands are continuous, variable, cross-domain, and machine-interpreted, no longer bound by traditional methods and techniques of process efficiency and system adaptability. Tomorrow’s systems will be comprehensively trained by data. To properly frame these realities, leaders must understand:

      • Agentic AI exposes weak data architecture immediately — Models may scale, but data debt does not. This is a new, priority constraint.
      • Lineage, semantics, and trust scoring — not models — will determine enterprise readiness — AI will only be as reliable as the meaning and traceability of enterprise data.
      • Compliance cannot be retrofitted; rather, it must be designed into the data core — Compliance no longer ends in reporting; it must exist upstream and be addressed continuously.
      • Return on investment in AI is impossible without composable, modular, and reusable data products — Data that cannot be composed, traced, and made consistent cannot be automated.
      • The bottleneck is not talent or tools, it is the absence of a data foundry — Without robust, industrial-grade data production, AI will remain fragmented and experimental.

By delivering a practical, business-first path integrated with a data-centric design, organizations enable reuse, compliance, and measurable ROI. AI is accelerating, but data readiness is not. This mismatch is where many transformation efforts die.

Agentic AI demands a data environment that simply does not exist with most legacy solutions. It requires decision-aligned semantics, federated trust scoring, cross-domain lineage, dynamic compliance overlays, and consistent interpretability. No model, no matter how advanced, can compensate for data environments that have been engineered for static reporting and linear process logic. We are entering a cycle of reinvention in which data becomes the organizing principle.
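
The article names federated trust scoring as a requirement without defining one, so the following is a purely hypothetical sketch of how such a score might gate agentic consumption; the signals and weights are invented for the sake of the example.

```python
# Purely hypothetical illustration: the article names "federated trust scoring"
# as a requirement but does not define one, so these signals and weights are
# invented for the sake of the example.
def trust_score(lineage_complete: bool, semantics_documented: bool,
                quality_pass_rate: float, days_since_validation: int) -> float:
    """Combine trust signals into a 0-to-1 score an AI agent could gate on."""
    freshness = max(0.0, 1.0 - days_since_validation / 30)  # linear decay, 30 days
    score = (0.3 * lineage_complete        # is full lineage recorded?
             + 0.2 * semantics_documented  # are business semantics documented?
             + 0.3 * quality_pass_rate     # share of quality checks passing
             + 0.2 * freshness)            # how recently was it validated?
    return round(score, 2)

# An agent might refuse to consume data products scoring below a threshold:
assert trust_score(True, True, 0.99, 2) > 0.9
```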

The business need, not the engineering myth

Executives are rightfully fatigued by transformation programs. They have seen modernization initiatives expand scope, escalate cost, and ultimately underdeliver. They have heard the promises of clean data, enterprise data platforms, microservices, cloud migration, and AI-readiness. However, when agentic AI begins interacting with these ecosystems, the fragility of the entire operation becomes instantly visible.

Why? Because most data modernization initiatives have been driven by tool-centric solutions rather than architecture-centric capabilities. Prior data governance focused on oversight, not on the enablement and reuse that emerging AI designs now demand. Often, legacy methods kept audit and lineage contained within siloed processes, bridging them with replicated data warehouses, extract-transform-load (ETL) systems, and application programming interface (API) protocols.

However, this tool-centric, legacy-enabled approach is the problem. We keep optimizing the wrong layers and modernizing individual components rather than rethinking the core.

As a result, we too often see that AI pilots succeed, but enterprise scaling fails. Or, that regulatory reporting improves marginally, but compliance costs increase. Or M&A integrations appear straightforward, but post-close data convergence drags on for years.

The gap between ambition and reality

As a solution, a data foundry approach corrects that imbalance by formalizing the factory-grade patterns required to support agentic AI systems. It becomes the production line for reusable data products, compliant semantics, and decision-aligned datasets. It also eliminates reinvention by institutionalizing repeatable structures; and, most importantly, it restores business leadership over AI outcomes, rather than relegating decision logic to engineering workstreams and emerging technologies.

As illustrated below, AI requirements and realities need to be tempered with business demands, organizational risks, and data agility capabilities (including skill sets) to achieve realistic roadmaps of action — not strategic aspirations.

Today, the question isn’t whether organizations understand the importance of data; it’s whether leaders know how to build environments in which data becomes reusable, trustworthy, and ready for agentic AI. The issue, however, continues to be that our data cores — the architectural, operational, and standards ecosystems beneath all this — were not designed for continuous change.

Before they mobilize and execute against AI plans, business leaders need to answer the question: What business decisions are we trying to improve — and what data do these decisions actually require, today and tomorrow?

The organizations that will lead in the coming decade will do so not because they found the perfect technology stack, but because they built a reusable, continuously improving data foundation that can support AI, regulation, risk, and innovation simultaneously.

The question for leaders then becomes: Are we prepared to reinvent?

The work begins now — quietly and deliberately, across the data core where tomorrow’s competitive advantages will be created. The chart below illustrates the business-driven AI elements that must be addressed, and how the old sequence of system provisioning must be replaced, beginning with outcomes and ending with engineered AI tools.

AI is an output — a capability that’s unlocked after the underlying data foundation becomes coherent, traceable, explainable, and aligned with business decisions. For leaders, the data core is no longer a back-office concern or one-off IT initiative. It is a strategic asset that can shape speed, resilience, and trust across the organization.


In the next post in this series, the author will explain how to architect an integrated data core, particularly through the AXTent architectural framework for regulated organizations. You can find more blog posts by this author here.
