From emerging player to contender: How Latin America can compete in the global AI race

Key takeaways:

      • Strategic collaboration is becoming a defining strength for the region — Latin American organizations are realizing that progress in AI accelerates when they combine forces by linking industry expertise, academic talent, and public‑sector support.

      • AI initiatives rooted in real local challenges are gaining global relevance — By developing solutions grounded in the region’s own structural needs, whether in infrastructure, finance, agriculture, education, or mobility, many LatAm firms are producing technologies that are both highly impactful and naturally scalable.

      • Demonstrating clear outcomes is becoming fundamental — Organizations that show concrete operational improvements, measurable efficiencies, or stronger customer outcomes are strengthening their position with investors and partners.


In recent years, Latin America has experienced significant growth in AI-related investment, yet the region still accounts for only a small share of global AI investment flows. This is strikingly low given that the region makes up around 6.6% of global GDP, highlighting the opportunity to scale AI initiatives even further. Although there are notable differences among countries, Mexico and Brazil — the two largest LatAm economies — stand out for their volume of AI projects and funding, followed by other nations such as Chile, Colombia, and Argentina.

By recognizing the region’s strengths — cost-effective operations, access to data, clean energy, and public support — LatAm businesses can better position themselves and design strategies to draw in international investors increasingly seeking promising locations for AI development.

Lessons from LatAm’s AI success stories

Latin America has produced remarkable AI success stories that can serve as models to build confidence among investors. These cases — involving companies that attracted substantial investment and achieved growth — demonstrate valuable best practices that range from technological innovation to working with governments and corporations. Some of these best practices include:

Building strategic alliances

The journey of innovation rarely unfolds in isolation. At times, the presence of large, established companies, whether local industry leaders or multinationals, has served as a catalyst for AI projects. The experience of Kilimo, a startup that specializes in AI-powered agricultural irrigation, proves it. Now, Kilimo is partnering with EdgeConneX, a data center company based in the United States, on a community water initiative.

Academia, too, can be woven into this narrative. Collaborations with research centers or universities offer scientific credibility and connect ventures with emerging talent. In Mexico, AI startups often originate within university settings — such as computer vision projects from the National Autonomous University of Mexico (UNAM) — and maintain agreements that sustain ongoing innovation and technical progress even with modest resources. And academic validations, whether in published papers or conference accolades, tend to resonate with foreign investors. Indeed, an ecosystem that pairs early corporate clients with academic mentors frequently lends ventures a distinctive appeal to those seeking to invest.

Focusing on local problems with global impact

Within Latin America, certain issues prove especially relevant where AI solutions intersect with sectors of renowned regional strength, such as fintech and financial inclusion, agrotech optimizing agriculture, and foodtech drawing on local ingredients. The experience of Chilean food startup NotCo — which attracted international investment and subsequently exported its AI-driven products — suggests how innovations rooted in local context can generate broader attention.

By addressing needs in urban transport, education, mining, and related areas, local LatAm companies can provide access to homegrown data and users, which can further refine technology and open pathways for investors into similar emerging markets. When AI solutions respond to genuine pain points rather than mere novelty, momentum often builds more quickly, and the model finds validation among those who evaluate investments.

Showing results and AI ROI early on

Questions about AI’s return on investment linger for many executives. Evidence of clear metrics like cost savings, sales growth, or error reduction can prove persuasive, especially when complemented by success stories from local clients.

Recent studies show that companies deploying AI report measurable gains, and such figures tend to reassure those considering investment by illustrating tangible improvements. Testimonials or independent validations, such as a university study, can further illuminate achievements.

The act of quantifying impact — whether in efficiency, revenue, or other relevant KPIs — has a way of transforming perceptions from uncertainty toward clarity.

Leveraging government incentives and collaborations

Many Latin American nations have put forth support programs for AI and tech projects, such as non-repayable funds, soft loans, and tax benefits for innovation offered through various national programs across the region.

Public financing, when present, often acts as a stamp of validation for private investors. For example, this trust has extended to Brazilian startups receiving Finep support for AI health projects, which in turn can shift perceptions among foreign venture capital firms. Engagement in government pilots, such as smart city initiatives or solutions for ministries, provides valuable exposure. In such contexts, public-private partnerships and incentives seem to act as quiet levers for growth and legitimacy.

Seeking smart and diversified financing

Financial strategies in Latin America have been shaped by the interplay of local and foreign capital. Local funds often bring insights and patience, while foreign funds may offer larger investments and global scaling experience. Ownership dilution sometimes accompanies the arrival of strategic investors, whose networks can prove invaluable. Programs like 500 Startups, Y Combinator, MassChallenge, and international competitions have ushered LatAm AI startups such as Heru, Rappi, Bitso, and Clip into new rounds of capital following increased exposure.

Efficiency in capital management, which can be demonstrated with lean burn rates and milestone achievement with limited resources, signals an ability to execute within the realities of LatAm, which may enhance the appeal for future investments. The cultivation of relationships and responsible stewardship of capital frequently matters as much as the funds themselves, suggesting that the value of mentorship, contacts, and reputation is often intertwined with deepening financial support.

Unlocking AI Investment

By applying these principles, Latin American companies have achieved a better position to attract AI investments to their projects and help position the region as a viable destination for technology capital. These recent experiences show that when a LatAm company combines innovation, talent, and strategy — while communicating its story well — it can win over global and local investors alike. Each of the best practices noted above is based on real lessons: international alliances (NotCo with US funds), leveraging incentives (Brazilian companies funded by Finep), talent formation (Santander and Microsoft programs), focus on ROI (successful use cases that convince boards), and more.

Latin America has challenges but also unique advantages. Companies that manage to navigate this environment intelligently will increase their chances of securing the financing needed to innovate and grow. By doing so, they will contribute to a virtuous circle in which each new success attracts more investment to the region and opens doors for the next generation of LatAm AI ventures.


You can find more about the challenges and opportunities in the Latin American region here

Reinventing the data core: The arrival of the adaptable AI data foundry

Key takeaways:

      • There is a widening gap between AI ambition and readiness — The gap between AI ambition and data readiness is widening, making the adoption of an adaptable data foundry essential for scalable, explainable, and compliant AI outcomes.

      • A data foundry model directly addresses the root cause — A data foundry model enables organizations to industrialize data production, automate compliance, and ensure consistent data lineage, thereby overcoming the limitations of brittle, legacy data architectures.

      • Incorporate the data core into your AI planning — Reinventing the data core is now a strategic imperative for those enterprises that aim to thrive in 2026 and beyond, as agentic AI, regulatory demands, and integration complexity accelerate.


This article is the third and final installment in a 3-part blog series exploring how organizations can reset and empower their data core.

A defining theme of this year so far is the widening distance between organizational ambition and data readiness. Leaders want the capabilities they believe come built into agentic AI — automated compliance, predictive integration for M&A, and decision-intelligence pipelines that reduce operational friction.

Without a data foundry, however, much of that will be impossible. Instead, workflows will remain brittle, AI agents will hallucinate under inconsistent semantics, and data lineage will break down across federated sources. Further, without a data foundry, regulatory mappings involved with the Financial Data Transparency Act (FDTA) and the Standard Business Reporting (SBR) framework cannot be automated, cross-functional insights will require manual reconciliation, and auditability will collapse under scrutiny.

This is not a failure of leadership. It is a failure of architectural design — a failure to recognize that a consolidated data foundation must precede these technologies, together with the critical priorities of data security, auditability, and lineage.


For decades, organizations built monolithic systems that were optimized for stability and reporting. Today’s world demands modularity, continuous adaptation, and agent-driven interoperability. Architecture has shifted from “build and operate” to “build and evolve.” This is precisely what a data foundry enables.

Why reinvention can no longer wait

Throughout 2025 and now into the early months of 2026, data and AI have quietly shifted from innovation topics to enterprise constraints. Leaders across regulated markets are starting to recognize that the obstacles limiting their AI ambitions are neither mysterious nor technical — they are structural. These obstacles sit inside the data core, waiting inside the silent architecture that determines whether any form of automation, intelligence, or compliance can scale beyond a pilot.

The data bears this out. When you examine the work coming from Tier-1 research bodies, supervisory institutions, and global transformation benchmarks, a consistent narrative emerges beneath the headlines: AI is accelerating, regulation is hardening, and integration demands are expanding. Moreover, organizational data remains pinned to assumptions that were forged in static, pre-AI operating environments. This gap is not theoretical; rather, it is measurable, persistent, and directly correlated to business performance.


Let’s look at the AI results first. Across industries, organizations continue to experience a familiar pattern: early promise, limited adoption, and rapid degradation once the model encounters inconsistent semantics or fragmented lineage. Global studies show that the vast majority of enterprise AI initiatives still struggle to reach full production maturity, and among those that do, many encounter performance drift within the first year.

The driver is remarkably consistent. It is not the sophistication of the model nor the skill of the data science team — it is the quality, clarity, and traceability of the data that is feeding the system.

Taken together, these signals deliver a clear message. The gap between AI ambition and data readiness is widening, not narrowing. This is why the data foundry conversation matters now. It is not an abstract architectural concept. It is a response to the full stack of quantitative pressures the market has been telegraphing for years — costs rising, compliance hardening, AI accelerating, and integration straining under inconsistent semantics and fragile lineage.

A data foundry model directly addresses the root cause of this by industrializing the creation of consistent, reusable, explainable data products that can fuel agentic AI, support regulatory defensibility, and accelerate enterprise reinvention.

The numbers point to a simple conclusion. Reinvention is no longer optional, and the window to address the data core before agentic AI becomes standard practice is narrow and closing. The organizations that act now will be the ones that define what compliant, explainable, interoperable AI looks like in the next decade. Those that defer the work will find themselves restructuring under pressure rather than reinventing by choice.

This is the inflection point. In truth, the quantitative signals have made the case more clearly than a multitude of strategy narratives ever could.

The data foundry: A model for continuous alignment

Unsurprisingly, agentic AI introduces new, more demanding requirements, including:

      • machine-interpretable semantics;
      • context-preserving lineage across federated systems;
      • decomposition of enterprise knowledge into reusable data products;
      • dynamic trust-scoring tied to source reliability and timeliness;
      • automated compliance overlays and regulatory logic; and
      • cross-domain metadata orchestration.
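To make the first two requirements concrete, here is a minimal Python sketch. The `DataProduct` class and its `trust_score` method are hypothetical names invented for this illustration, not drawn from any product; the point is that semantics and a dynamic trust signal travel with the data itself:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DataProduct:
    """A data asset packaged with machine-interpretable metadata."""
    name: str
    owner_domain: str                    # accountable business domain
    semantics: dict = field(default_factory=dict)  # field name -> business meaning
    source_reliability: float = 1.0      # 0..1, assigned by a governance process
    last_refreshed: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def trust_score(self, max_age_hours: float = 24.0) -> float:
        """Dynamic trust: source reliability decayed by staleness."""
        age_h = (datetime.now(timezone.utc) - self.last_refreshed).total_seconds() / 3600
        freshness = max(0.0, 1.0 - age_h / max_age_hours)
        return self.source_reliability * freshness


product = DataProduct(
    name="counterparty_exposures",
    owner_domain="credit-risk",
    semantics={"exposure_usd": "net exposure in USD at close of business"},
    source_reliability=0.95,
)
print(f"{product.name}: trust={product.trust_score():.2f}")  # ~0.95 when fresh
```

An AI agent consuming this product can check `trust_score()` programmatically and decline to act on stale or unreliable data, which is what “dynamic trust-scoring tied to source reliability and timeliness” means in practice.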

These capabilities are not optional — they are non-negotiable. Indeed, they determine whether AI elevates risk or mitigates it, whether it accelerates productivity or introduces unrecoverable inconsistencies. And they determine whether AI augments decision quality or produces volatility.

A data foundry shifts organizations from artisanal, one-off data preparation toward industrialized data production, in which patterns replace pipelines and building blocks replace custom engineering. This shift means that lineage is generated, not documented; semantics are governed, not patched; and compliance is automated, not reconstructed. In this way, reuse becomes the default, not the exception.
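As a hedged illustration of “lineage is generated, not documented,” the sketch below (all names hypothetical) wraps each transformation step in a decorator that records inputs and outputs as a side effect of execution, so the lineage graph is a by-product of running the pipeline rather than a separately maintained document:

```python
import functools

LINEAGE: list[dict] = []  # in practice, a metadata store, not a module-level list


def traced(step_name: str):
    """Decorator that records each step's input/output dataset names."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(inputs: dict) -> dict:
            outputs = fn(inputs)
            LINEAGE.append({
                "step": step_name,
                "inputs": sorted(inputs),    # dataset names consumed
                "outputs": sorted(outputs),  # dataset names produced
            })
            return outputs
        return inner
    return wrap


@traced("harmonize_exposures")
def harmonize(inputs: dict) -> dict:
    # Toy transformation: merge two source feeds into one curated dataset.
    merged = inputs["ledger_feed"] + inputs["trading_feed"]
    return {"curated_exposures": merged}


harmonize({"ledger_feed": [1, 2], "trading_feed": [3]})
print(LINEAGE)  # the record exists because the step ran, not because someone wrote it down
```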

In fact, this process is analogous to manufacturing. Instead of producing bespoke components for each need, the enterprise creates standardized, high-fidelity data assets that can be assembled into any workflow, any AI use case, and any reporting requirement.

A data foundry becomes the quiet architecture behind every future capability, making these capabilities systematic rather than ad hoc. The chart below showcases the progressive build-up of a data foundry, beginning with data intake and harmonization and ending with AI agent orchestration and reusable data products that learn from their deployment.


Unfortunately, organizations are still building increasingly advanced AI decisioning and efficiency solutions on top of an aging and brittle data foundation. The results are predictable: stalled initiatives, compliance exposure, and stakeholder frustration. Additionally, instead of asking why, organizations keep adding more tools — more dashboards, more cloud services, more AI pilots, and more flavors of transformation.

Clearly, enterprises aren’t dealing with an AI problem. They’re dealing with a data alignment problem disguised as progress within fragmented AI enclosures.

Reinvention starts at the data core

For more than a decade, firms across regulated industries have repeated the same mantra: Data is our most critical asset. When you peel back the layers or when you sit in board review sessions or integration meetings or regulatory remediation audits, however, the evidence does not match the rhetoric.

Reinvention is no longer optional. Instead, it is the starting point for meeting the demands of 2026 and beyond. The institutions that thrive will be those that understand that the data core is not a technical asset — it is the operational backbone of the enterprise. Indeed, the institutions that succeed will be those that recognize the truth early: AI is an output, and the data core is the strategy. And the organizations able to industrialize their data — through a foundry model, through AXTent, through repeatable semantic structures — will be the ones leading innovation, reducing compliance risk, accelerating M&A synergies, and achieving enterprise-wide reinvention.

In the end, the real question isn’t whether AI will transform business; the question is whether the data foundation will allow it. And the answer is rebuilding your data core so AI can actually deliver the outcomes your organization needs — and that work begins now.


You can find more blog posts by this author here

The professional judgment gap: Tracing AI’s impact from lecture hall to professional services

Key highlights:

      • Universities face pressure over pedagogy — Academic institutions are adopting AI as a reputational marker that’s driven by market pressure rather than educational need, creating a risk for students who can work with AI but not independently of it.

      • Entry-level roles under threat — AI is being deployed most heavily to automate the grunt work of entry-level positions in which foundational professional skills are traditionally built through struggle and feedback.

      • K-shaped cognitive economy emerging — Experienced professionals with existing expertise are gaining efficiency from AI, while entry-level workers are losing access to skill-building experiences.


According to Harvard University’s Professional & Executive Development division, innovation is defined as a “process that guides businesses through developing products or services that deliver value to customers in new and novel ways.” Along this journey, professional judgment is exercised repeatedly to determine next steps at key stages.

Notably, the word technology is nowhere to be found in this definition — an absence that Dr. Heinsfeld, Assistant Professor of Learning Technologies at the University of Minnesota, has long found revealing. Instead, innovation is framed as creative problem-solving, contextual intelligence, and the ability to work across perspectives. Interestingly, Dr. Heinsfeld adds, none of these require constant automation. In fact, many of them are undermined by it.

However, AI adoption has the real potential to automate away the very experiences that build these capabilities, from university lecture halls to corporate offices. With notable data already suggesting this is underway, the risk that current approaches to AI use in universities and companies are engineering away innovation and professional judgment skills is real, notes Dr. Tanaka, Group Leader in AI Research at Harvard and NTT Research.

Indeed, some observers view AI as the largest unregulated cognitive engineering experiment in human history. Yet, unlike medical drugs that require years of approval and testing, AI systems are reshaping how millions of students think, learn, and make decisions without a comparable approval process or a shared framework for discussing any potential “side effects,” as Dr. Heinsfeld pointed out.


Most worrisome is that AI is being deployed most heavily to automate precisely the entry-level roles where foundational professional skills are built.


So, what happens when an entire generation of future employees learns to delegate judgment before developing it? And what actions do universities and companies need to take now to avoid this reality?

Risks of universities adopting AI under pressure

For universities, AI “has become a reputational marker, and not adopting AI is framed as institutional risk, regardless of whether an educational case has been made or not,” says Dr. Heinsfeld, adding that this is being driven, in part, by market pressure rather than pedagogical need.

Already, companies can greatly influence universities as employers of new graduates; and as such, AI systems are currently being optimized for speed, agreeability, and accessibility to stimulate ongoing use. However, as Dr. Heinsfeld contends, as universities race to earn the label “AI ready” without a careful, cautious, and detailed understanding of how AI may impact students’ cognitive processes, they risk damaging their reputations for pedagogical integrity.

In addition, the “data as truth” paradigm is a complicating factor, she says. Drawing on her research, Dr. Heinsfeld explains how data “is often framed as the idea of being a single source of truth based on the assumption that when collected and analyzed, it can reveal objective, indisputable facts about the world.” Indeed, this ubiquitous mindset across universities and corporations treats data — such as that used to train large and small language models — as objective and indisputable.

Yet this obscures critical decisions about what gets measured, whose perspectives are included, and what forms of knowledge are systematically excluded from AI systems. As Dr. Heinsfeld warns, when data becomes synonymous with truth, “knowledge is what is measurable and optimizable.” This narrows professional judgment to efficiency metrics rather than the interpretive depth, ethical reasoning, and cultural context that are essential for sound decision-making.

Judgment gap widens in workforce downstream

Under the current AI adoption approach, students could leave universities able to work with AI but not independently of it, a distinction emphasized by Dr. Heinsfeld. Like calculators, AI works as a tool only when foundational skills for its use exist first. Without this, graduates enter the workforce with a critical judgment gap that compounds as they move from college campuses into corporate roles.


AI adoption has the real potential to automate away the very experiences that build these capabilities from university lecture halls to corporate offices.


Most worrisome is that AI is being deployed most heavily to automate precisely the entry-level roles where foundational professional skills are built, warns Dr. Tanaka. Indeed, this is exactly the type of grunt work that teaches judgment through struggle and feedback. Over time, overuse of AI will result in quality being sacrificed because critical evaluation skills have atrophied.

Looking into the future, Dr. Tanaka foresees a K-shaped economy of cognitive capacity. Experienced professionals with existing expertise and contextual judgment built through years of experience will gain increasing efficiency from AI. Entry-level workers, however, will lose access to the valuable experiences that build professional judgement. This gap widens between professionals who can independently accelerate their workflows using AI and those whose traditional tasks are merely displaced by it.

Intervention may be able to break the cycle

The pattern is not inevitable, as both Dr. Tanaka and Dr. Heinsfeld explain. Drawing on Dr. Heinsfeld’s emphasis on institutional agency, meaningful intervention will depend on conscious, intentional choices made at every level. Both experts share their guidance for how different organizations can manage this:

Academic institutions — Universities must first recognize that AI adoption is a decision rather than an inevitability and make educational need the North Star for decision-making around AI. In her analysis, Dr. Heinsfeld emphasizes that when vendors set defaults, they quietly redefine academic practice. Defaults shape what is made visible or invisible and what becomes normalized. In AI-driven environments, universities often lose control over how models are trained and updated, what data shapes outputs, how knowledge is filtered and ranked, and how student and faculty data circulate beyond institutional boundaries — especially if decision-making is left to vendors. As a result, the intellectual byproducts of teaching and learning increasingly become inputs into external systems that universities do not govern.

Private entities — For organizations, Dr. Tanaka calls for feedback loops and other mechanisms that will promote more open discussion about AI use without stigma. In addition, companies need to proactively redesign entry-level roles to ensure these positions continue to cultivate judgment and foundational skills in an AI-driven environment. Likewise, Dr. Tanaka suggests that companies explicitly provide feedback about cognitive trade-offs to employees, fostering an understanding of possible skill atrophy.

Employees — Similarly, individuals working for organizations bear much of the responsibility for making sure critical thinking is enhanced by AI. Indeed, strategic decisions about when to use AI while seeking to preserve cognitive capacity and professional judgement are key.

Looking ahead

In today’s increasingly AI-driven environment, a new paradigm is needed to combat the current operating assumption that optimization from AI is the sole path to progress. And because the current trajectory sacrifices human development for efficiency, the need for universities and companies to choose a different path is urgent — while they still have the judgment capacity to do so.


You can find out more about how organizations are managing their talent and training issues here

Inside the Shift: Why your agentic AI pilot probably will fail (and what that says about you)

You can read TRI’s latest “Inside the Shift” feature, Premortem: Your 2028 agentic AI pilot program failed, here


Picture this: It’s 2028, your law firm spent real money on an agentic AI pilot, and now it’s quietly been shut down. No press release, no victory lap — just a post-mortem that nobody wants to read. In our latest Inside the Shift feature article, we see that such a future is very likely unless firms start preparing for agentic AI in ways very different from what they assume is needed.

The big idea is simple but uncomfortable: Success with generative AI (GenAI) does not mean your organization is ready for agentic AI. GenAI works because it’s forgiving. You can paste text into a tool, get a decent answer, and move on — even if your data is messy and your workflows live in people’s heads. Agentic AI doesn’t work that way. It expects clean data, documented processes, and clear rules. If your firm runs on institutional memory, workarounds, and a kind of “just ask Linda” problem-solving process, then the system will eventually break down.


To examine this and many more situations, the Thomson Reuters Institute (TRI) has launched a new feature segment, Inside the Shift, that leverages our expert analysis and supporting data to tell some of the most compelling stories in professional services today.


Our latest Inside the Shift feature, Premortem: Your 2028 agentic AI pilot program failed, by Bryce Engelland, Enterprise Content Lead for Innovation & Technology for the Thomson Reuters Institute, walks us through two fictional but painfully familiar stories of how two separate firms’ agentic AI pilot programs failed.

The author explains how the first firm moves fast after crushing their GenAI rollout and assuming agentic AI is just the next logical step. Everything looks great in a sandbox; but then the system hits real‑world chaos: Undocumented exceptions, fragmented document storage, and conflict checks that only work because humans intuitively know when something feels off. One bad intake decision later, client trust is damaged and the pilot is frozen. In this example, the tech didn’t fail — the organization did.

The second firm goes the opposite direction. They’re cautious, thoughtful, and obsessed with governance. They build guardrails, limit risk, and launch a perfectly reasonable pilot. And then… nothing happens. Attorneys ignore the system — not because they hate AI, but because using it only adds risk with no reward. If it works as it’s supposed to, nothing changes; but if something goes wrong, they’ll be blamed. So, unsurprisingly, the rational choice is to nod in meetings and quietly keep doing things the old way until the project dies of inertia.


The challenge is that “preparing” doesn’t mean what most people think. It doesn’t mean buying early, and it doesn’t mean waiting for maturity. Rather, preparing means understanding now why these systems fail, and building the institutional capacity to avoid those failures when the technology arrives in full.


The feature article points out the common thread here: These failures have very little to do with AI capability; rather, they’re about incentives, documentation, and institutional honesty. Firms that succeed with agentic AI won’t be the ones that buy in early or wait patiently. The winners, the piece explains, will be the ones doing the boring, unsexy work now: Writing things down, fixing information architecture, identifying hidden dependencies, and aligning rewards so adoption isn’t all risk and no upside.

In short, this article isn’t a warning about technology. It’s a warning about pretending your organization is ready when it’s not — and mistaking optimism or caution for preparation.

So, dive a little deeper behind the headlines about AI adoption and how to make agentic AI work for your organization. Click through and read today’s Inside the Shift feature. It might help you see more clearly than before whether the path your organization is pursuing with agentic AI will carry it over the goal line and into the next decade… or leave your team watching from the sidelines.


You can find more Inside the Shift feature articles from the Thomson Reuters Institute here

Architecting the data core: How to align governance, analytics & AI without slowing the business

Key takeaways:

      • Legacy data architectures can’t keep up with modern demands — Traditional, centralized data cores were designed for stable, predictable environments and are now bottlenecks under continuous regulatory change, rapid M&A, and AI-driven business needs.

      • AXTent aims to unify modern data principles for regulated enterprises — The modern AXTent framework integrates data mesh, data fabric, and composable architecture to create a data core built for distributed ownership, embedded governance, and adaptability.

      • A mindset shift is required for lasting success — Organizations must move from project-based data initiatives to perpetual data development, focusing on reusable data products and decision-aligned outcomes rather than one-off integrations or platform refreshes.


This article is the second in a 3-part blog series exploring how organizations can reset and empower their data core.

For more than a decade, enterprises have invested heavily in data modernization — new platforms, cloud migrations, analytics tools, and now AI. Yet, for many organizations, especially in regulated industries, the results remain underwhelming. Data integration is still slow, regulatory reporting still requires manual remediation, M&A still exposes hidden data liabilities, and AI initiatives struggle to move beyond pilots because trust in — and reuse of — the underlying data remains fragile.

The problem is not effort; it is architecture. Since 2022, the buildup around AI has been something out of science fiction — self-learning, easy to install, worker-displacing, autonomous, even Terminator-like. Moreover, while AI may indeed revolutionize research, processes, and profits, the fundamental challenge is not the advancing technology; rather, it is the data used to train and cross-connect these exploding capabilities.

Most data cores in use today were designed for an earlier operating reality — one in which data was centralized, reporting cycles were predictable, and governance could be applied after the fact. That model breaks down under the modern pressures of continuous regulation, compressed deal timelines, ecosystem-based business models, and AI systems that consume data directly rather than waiting for curated outputs.

So, why is the AI hype not living up to the anticipated benefits? Why is the data that underpinned process systems for decades failing to scale across interconnected AI solutions? The solution requires not another platform refresh, but rather, a structural reset of the data core itself.

That reset uses data meshes, data fabrics, and modern composable architecture as a single, integrated system, and aligns it to the AXTent architectural framework, which is designed explicitly for regulated, data-intensive enterprises.

Why the traditional data core no longer holds

Legacy data cores were built to optimize control and consistency. Data flowed inward from operational systems into centralized repositories, where meaning, quality, and governance were imposed downstream. That approach assumed there were stable data producers, limited use cases, human-paced analytics, and periodic regulatory reporting.

Unfortunately, none of those assumptions hold today. Regulatory expectations now demand traceability, lineage, and auditability at all times (not just at quarter-end). M&A activity requires rapid integration without disrupting ongoing operations. And AI introduces probabilistic decision-making into environments built for deterministic reporting, with business leaders expecting insights in days, not months.

The result is a growing mismatch between how data is structured and how it is used. Centralized teams become bottlenecks, pipelines become brittle, and semantics drift. Compliance then becomes reactive, and the cost of change increases with every new initiative.

The AXTent framework starts from a different premise: The data core must be designed for continuous change, distributed ownership, and machine consumption from the outset. Indeed, AXTent is best understood not as a product or a platform, but as an architectural framework for reinventing the data core. It combines three design principles into a coherent operating model:

      1. Data mesh — Domain-owned data products
      2. Data fabric — Policy- and metadata-driven connectivity
      3. Data foundry — Composable, evolvable data architecture

Individually, none of these ideas are new. What is different — and necessary — is treating them as a single system, rather than independent initiatives as conceptually illustrated below:


Fig. 1: The AXTent model of operation

The 3 operating principles of AXTent

Let’s look at each of these three design principles individually and how they interact with each other.

Data mesh: Reassigning accountability where it belongs

In regulated enterprises, data problems are rarely technical failures. Instead, they are accountability failures. When ownership of data meaning, quality, and timeliness sits far from the domain that produces it, errors propagate silently until they surface in regulatory filings, audit findings, or failed integrations.

A structured framework applies data mesh principles to address this directly. Data is treated as a product, owned by business-aligned domains that are then accountable for semantic clarity, quality thresholds, regulatory relevance, and consumer usability.

This is not decentralization without guardrails, however. AXTent enforces shared standards for interoperability, security, and governance, ensuring that domain autonomy does not fragment the enterprise. For executives, the benefit is practical: faster integration, fewer semantic disputes, and clearer accountability when things go wrong.
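One way to picture that accountability in code is a publish-time quality gate controlled by the owning domain. This is a simplified sketch with invented names and thresholds, not a prescribed implementation:

```python
def publish(dataset: list[dict], required_fields: set[str],
            max_null_rate: float = 0.01) -> bool:
    """Domain-side quality gate: refuse to publish data that misses the
    thresholds the owning domain is accountable for."""
    if not dataset:
        return False
    nulls = sum(1 for row in dataset for f in required_fields if row.get(f) is None)
    null_rate = nulls / (len(dataset) * len(required_fields))
    return null_rate <= max_null_rate


rows = [{"account_id": "A1", "limit": 100_000},
        {"account_id": "A2", "limit": None}]
ok = publish(rows, required_fields={"account_id", "limit"})
print("published" if ok else "rejected: quality threshold not met")  # rejected
```

Because the gate runs on the producer’s side, substandard data never leaves the domain, and accountability for a rejected publish is unambiguous.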

Data fabric: Embedding control without re-centralization

However, distributed ownership alone does not solve enterprise-scale problems. Without a unifying layer, decentralization simply recreates silos in new places.

A proper framework addresses this through a data fabric that operates as a control plane across the data estate. Rather than moving data into a single repository, the fabric connects data products through shared metadata, lineage, and policy enforcement.

This allows the organization to answer critical questions continuously, such as:

      • Where did this data come from?
      • Who owns it?
      • How has it changed?
      • Who is allowed to use it — and for what purpose?

In this way, governance is no longer a downstream reporting activity; rather, it is embedded into how data is produced, shared, and consumed. Compliance becomes a property of the architecture, not a periodic remediation effort.

And in M&A scenarios, the fabric enables incremental integration, which allows acquired data domains to remain operational, while being progressively aligned rather than forcing immediate and costly consolidation.
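A toy sketch suggests what such a control plane might look like; the catalog structure and `may_use` helper are hypothetical. Metadata answers the four questions above, and policy is enforced at access time rather than reconstructed during a later audit:

```python
# A fabric-style catalog entry: metadata and policy, not the data itself.
CATALOG = {
    "curated_exposures": {
        "source": "ledger_feed + trading_feed",    # where it came from
        "owner": "credit-risk",                    # who owns it
        "versions": ["2026-01-31", "2026-02-28"],  # how it has changed
        "allowed_uses": {"regulatory_reporting", "stress_testing"},
    }
}


def may_use(dataset: str, purpose: str) -> bool:
    """Policy enforcement at access time: the purpose must be whitelisted."""
    entry = CATALOG.get(dataset)
    return entry is not None and purpose in entry["allowed_uses"]


print(may_use("curated_exposures", "stress_testing"))  # True
print(may_use("curated_exposures", "marketing"))       # False
```

Because the catalog holds metadata and policy rather than the data itself, enforcement travels with every access path instead of living in a quarterly report.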

Composable architecture: Designing for evolution, not stability

The third pillar of the AXTent model is a modern data architecture that’s designed to absorb change rather than resist it. Traditional architectures usually rely heavily on rigid pipelines and tightly coupled schemas. These work when requirements are stable, but they may collapse under regulatory change, new analytics demands, or AI-driven consumption.

AXTent replaces pipeline-centric thinking with composable services, including event-driven ingestion and processing, API-first access patterns, versioned data contracts, and separation of storage, computation, and governance.
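A versioned data contract, for example, might look like the following simplified sketch (schema contents and names are invented): consumers pin a version, producers evolve schemas additively, and violations fail fast at the boundary rather than corrupting downstream consumers:

```python
# Contracts are keyed by (name, version); v2 adds a field without breaking v1.
CONTRACTS = {
    ("exposures", 1): {"account_id": str, "exposure_usd": float},
    ("exposures", 2): {"account_id": str, "exposure_usd": float,
                       "as_of_date": str},
}


def validate(record: dict, name: str, version: int) -> None:
    """Reject records that do not satisfy the pinned contract version."""
    schema = CONTRACTS[(name, version)]
    for field_name, field_type in schema.items():
        if not isinstance(record.get(field_name), field_type):
            raise ValueError(f"{name} v{version}: bad or missing '{field_name}'")


validate({"account_id": "A1", "exposure_usd": 250_000.0}, "exposures", 1)  # passes
# The same record validated against version 2 would fail fast,
# because 'as_of_date' is required there.
```

Versioning at the contract boundary is what lets the architecture evolve without re-engineering every consumer.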

This approach supports both human analytics and machine users, including AI agents that require direct, trusted access to data. The result is a data core that evolves without constant re-engineering, which is critical for organizations operating under continuous regulatory scrutiny or frequent structural change. AXTent allows acquired entities to plug into the enterprise architecture as domains while preserving context and enabling progressive harmonization.

The architectural compass

This framework exists for one purpose: to provide a practical, business-oriented methodology for building a reusable, decision-aligned, compliance-ready data core. It is not a product nor a platform. It is a vocabulary that’s backed by building blocks, patterns, and repeatable workflows — and it’s one that executives can use to organize data around outcomes instead of systems.


Overall, the AXTent model prioritizes data clarity over system modernization, decision alignment over model sophistication, continuous compliance over intermittent remediation, reusable data products over disconnected pipelines, and enterprise knowledge codification over one-off integration work.

In essence, organizations should move away from project thinking and toward perpetual data development, in which every output contributes to a compound knowledge base. This is the mindset shift the industry has been missing as it prioritizes AI engineering over business purpose.


In the final post in this series, the author will explain how to shift from “build and operate” to “build and evolve” via a data foundry. You can find more blog posts by this author here

2026 AI in Professional Services Report: AI adoption has hit critical mass, but now come the tough business questions

Key findings:

      • AI adoption accelerates across professional services — Organization-wide use of AI in professional services almost doubled to 40% in 2026, with most individual professionals now using GenAI tools, and many preparing for the next wave of tools such as agentic AI.

      • Strategic integration and measurement lag behind usage — While AI use is widespread, only 18% of respondents say their organization tracks ROI of AI tools, and even fewer measure AI’s impact on broader business goals such as client satisfaction or revenue generation.

      • Communication around AI use remains inconsistent — While most corporate departments want their outside firms to use AI on client matters, less than one-third are aware whether their firms are doing so. Meanwhile, firms report receiving conflicting instructions from clients about AI use, highlighting a need for clearer dialogue and shared strategy around AI adoption.


Over the past several years, AI usage within professional services industries has come into focus. As we enter 2026 in earnest, the early adoption phase of generative AI (GenAI) has come and gone. Today, most professionals have experimented with some form of GenAI, and many organizations have integrated GenAI into their workflows — and now, a number are preparing for the next wave of technological innovation such as agentic AI.

Given this, the question for professionals and organizational leaders has now become: What will be AI’s long-term impact on my business?


To delve into this question further, the Thomson Reuters Institute has released its 2026 AI in Professional Services Report, which takes a broad view into the current usage and planning, sentiment towards, and business impact of AI for legal, tax & accounting, corporate functions, and government agencies. Taken from a survey of more than 1,500 respondents across 27 different countries, the report finds a professional services world that has embraced AI’s use but is continuing to evolve business strategy around its implementation.

For instance, the report shows that organization-wide use of AI nearly doubled to 40% in 2026, compared with 22% in 2025 — and for the first time, a majority of individual professionals reported using publicly-available tools such as ChatGPT. Additionally, a majority of respondents said they feel either excited or hopeful about GenAI’s prospects in their respective industries, and about two-thirds said they felt GenAI should be applied to their work in some manner.

At the same time, however, many are exploring GenAI tools without much guidance as to how that use will be quantified or measured. Only 18% of respondents said they knew their organization was tracking return-on-investment (ROI) of AI tools in some manner, roughly the same proportion as last year. And even among those tracking AI metrics, most are tracking mainly internally-focused, operational metrics; and only a small proportion analyzed AI’s impact on their organization’s larger business goals — such as client satisfaction, external revenue generation, and new business won.


This slow move to strategic thinking also impacts client-firm relationships. Although more than half of both corporate legal departments and corporate tax departments want their outside firms to use AI on client matters, less than one-third said they were aware whether their firms were doing so or not. From the firm standpoint, meanwhile, confusion reigns: 40% of firm respondents said they have received conflicting instructions from various clients — some directing them to use AI on matters, others directing them not to.

Indeed, about three-quarters of corporate respondents and firm respondents agreed that firms should be taking the lead in starting these conversations around proper AI use. Yet these discussions have not yet happened en masse. “Firms are reluctant — they claim it would compromise quality and fidelity,” said one U.S.-based corporate chief legal officer. “I think they are threatened by it.”

All the while, technological innovation progresses ever quicker. This year’s version of the report measures agentic AI use for the first time, finding that already 15% of organizations have adopted some type of agentic AI tool. Perhaps more interesting, however, is that an additional 53% report their organizations are either actively planning for agentic AI tools or are considering whether to use them, indicating perhaps an even more rapid pace of adoption than we’ve already seen with the speedy rise of GenAI.


Overall, the report makes it clear that most professionals do understand that change, driven by AI in the workplace, is undoubtedly here. Even compared with 2025, a higher proportion of professionals said they believe that AI will have a major impact on jobs, billing and revenue, and even the need for legal or tax & accounting professionals as a whole. The percentage of lawyers calling the unauthorized practice of law via AI a major threat rose to 50% in 2026 from 36% in 2025.

Further, this report paints the picture of a professional services world that has embraced AI, begun to see its impact, and realized that it will have broader business and industry implications than previously imagined. As a result, the time for professionals and organizations to begin planning in earnest for an AI future has already arrived.

As a corporate general counsel from Sweden noted: “We cannot keep up with the modern-day corporations’ demands unless we also develop and adapt our way of working.”

You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here


Understanding the data core: From legacy debt to enterprise acceleration

Key takeaways:

      • The real bottleneck for AI is the data core — AI is advancing rapidly, but most organizations’ data architectures, governance, and legacy assumptions can’t keep up. Without a repeatable, business-aligned data foundation, AI initiatives will struggle to scale and deliver reliable results.

      • AI success relies on explainable, traceable, and reusable data — For AI to be reliable and compliant, organizations must design data environments that emphasize lineage, semantics, and trust; and that means that compliance and auditability need to be built into the data core, not added on later.

      • Business should shift from tool-centric upgrades to business-driven, data-centric reinvention — Efforts focused only on modernizing tools or platforms miss the root issue: legacy data structures. Leaders must prioritize building a cohesive, reusable data core that aligns with business strategy.


This article is the first in a 3-part blog series exploring how organizations can reset and empower their data core.

Across boardrooms, regulatory briefings, and strategic off-sites, leaders are asking with growing urgency some variation of the same question: How do we make AI reliable, scalable, auditable, and economically defensible? The surprising answer is not in the AI technology, nor in the cloud stack, nor in another round of system upgrades.

It is in the data. Not the data we store, not the data we report, and not the data we move across our pipelines. It is in the data that we must now explain, contextualize, trace, validate, and reuse continuously as agentic AI becomes embedded in every workflow, every decision system, and every regulatory outcome.

The stark reality across industries, then, is that AI is maturing faster than our data cores can support it. For the first time, technology is not the bottleneck — architecture is, organizational assumptions are, and governance strategies are. More importantly, the lack of a repeatable, business-aligned data foundry has become the strategic inhibitor standing between today’s operations and tomorrow’s autonomy-ready enterprises.

The realities of 2026

As 2026 gets underway, the pressures of regulation, AI adoption, data lineage requirements, and cross-system consistency have converged into a single strategic reality: We can’t keep modernizing data at the edges. The data core itself must be reimagined and compartmentalized.

For leaders across highly regulated industries, the challenge is recognizing that our data architectures were never designed for the world we’re moving into. Historically, solutions were built for predictable siloed-data systems, linear programmatic processes, and dashboard reporting. Today’s demands are continuous, variable, cross-domain, and machine-interpreted — not bound by traditional methods and techniques of process efficiency and system adaptability. Tomorrow’s systems will be comprehensively trained by data. To properly frame these realities, leaders must understand:

      • Agentic AI exposes weak data architecture immediately — Models may scale, but data debt does not. This is a new, priority constraint.
      • Lineage, semantics, and trust scoring — not models — will determine enterprise readiness — AI will only be as reliable as the meaning and traceability of enterprise data.
      • Compliance cannot be retrofitted; rather, it must be designed into the data core — Compliance no longer ends in reporting; it must exist upstream and be addressed continuously (see the sketch after this list).
      • Return on investment in AI is impossible without composable, modular, and reusable data products — Data that cannot be composed, traced, and made consistent cannot be automated.
      • The bottleneck is not talent or tools, it is the absence of a data foundry — Without robust, industrial-grade data production, AI will remain fragmented and experimental.
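To illustrate what “designed in, not retrofitted” can mean at the smallest possible scale, here is a hedged sketch with invented rules and field names: the write path itself refuses non-compliant records, instead of letting reporting catch them months later:

```python
from datetime import date


def compliant_write(store: list, record: dict) -> None:
    """Apply compliance rules before data enters the core, not after."""
    # Invented rule 1: every record must carry its reporting jurisdiction.
    if "jurisdiction" not in record:
        raise ValueError("rejected upstream: missing jurisdiction tag")
    # Invented rule 2: effective dates may not be in the future.
    if record["effective"] > date.today():
        raise ValueError("rejected upstream: future-dated record")
    store.append(record)


core: list[dict] = []
compliant_write(core, {"jurisdiction": "EU", "effective": date(2024, 12, 31)})
print(len(core), "record(s) accepted")  # 1 record(s) accepted
```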

By delivering a practical, business-first path integrated with a data-centric design, organizations enable reuse, compliance, and measurable ROI. AI is accelerating, but data readiness is not. This mismatch is where many transformation efforts die.

Agentic AI demands a data environment that simply does not exist with most legacy solutions. It requires decision-aligned semantics, federated trust scoring, cross-domain lineage, dynamic compliance overlays, and consistent interpretability. No model, no matter how advanced, can compensate for data environments that have been engineered for static reporting and linear process logic. We are entering a cycle of reinvention in which data becomes the organizing principle.

The business need, not the engineering myth

Executives are rightfully fatigued by transformation programs. They have seen modernization initiatives expand scope, escalate cost, and ultimately underdeliver. They have heard the promises of clean data, enterprise data platforms, microservices, cloud migration, and AI-readiness. However, when agentic AI begins interacting with these ecosystems, the fragility of the entire operation becomes instantly visible.

Why? Because most data modernization initiatives have been driven by tool-centric solutions rather than architecture-centric capabilities. Prior data governance focused on oversight, not the enablement and reuse now demanded by emerging AI designs. Often, legacy methods kept audit and lineage contained within siloed processes, bridging them with replicated data warehouses; extract, transform, load (ETL) systems; and application programming interface (API) protocols.

However, this tool-centric, legacy-enabled approach is the problem. We keep optimizing the wrong layers, and we keep modernizing the components.

As a result, we too often see that AI pilots succeed, but enterprise scaling fails. Or, that regulatory reporting improves marginally, but compliance costs increase. Or M&A integrations appear straightforward, but post-close data convergence drags on for years.

The gap between ambition and reality

As a solution, a data foundry approach corrects that imbalance by formalizing the factory-grade patterns required to support agentic AI systems. It becomes the production line for reusable data products, compliant semantics, and decision-aligned datasets. It also eliminates reinvention by institutionalizing repeatable structures; and, most importantly, it restores business leadership over AI outcomes, rather than relegating decision logic to engineering workstreams and emerging technologies.

As illustrated below, AI requirements and realities need to be tempered with business demands, organizational risks, and data agility capabilities (including skill sets) to achieve realistic roadmaps of action — not strategic aspirations.


Today, the question isn’t whether organizations understand the importance of data, it’s whether leaders know how to build environments in which data becomes reusable, trustworthy, and ready for agentic AI. The issue, however, continues to be that our data cores — the architectural, operational, and standards ecosystems beneath all this — were not designed for continuous change.

Before they mobilize and execute against AI plans, business leaders need to answer the question: What business decisions are we trying to improve — and what data do those decisions actually require, today and tomorrow?

The organizations that will lead in the coming decade will do so not because they found the perfect technology stack, but because they built a reusable, continuously improving data foundation that can support AI, regulation, risk, and innovation simultaneously.

The question for leaders then becomes: Are we prepared to reinvent?

The work begins now — quietly, deliberately across the data core where tomorrow’s competitive advantages will be created. The chart below illustrates the business-driven AI elements that must be addressed, and how the old sequence of system provisioning must be replaced, beginning with outcomes and ending with engineered AI tools.


AI is an output — a capability that’s unlocked after the underlying data foundation becomes coherent, traceable, explainable, and aligned with business decisions. For leaders, the data core is no longer a back-office concern or one-off IT initiative. It is a strategic asset that can shape speed, resilience, and trust across the organization.


In the next post in this series, the author will explain how to architect an integrated data core, particularly through the AXTent architectural framework for regulated organizations. You can find more blog posts by this author here

Managing AI models’ opacity and risk management challenges

Key insights:

      • Opacity challenge — AI models operate fundamentally differently than traditional models. Unlike linear, traceable calculations, AI develops its own inferential logic that model owners often cannot fully explain or predict.

      • Third-party dependency risk — Most traditional financial institutions use foundational models from external providers rather than building proprietary ones in-house. This adds another opacity layer that makes traditional validation and monitoring nearly impossible.

      • Regulatory and trust implications — Regulators worldwide are demanding transparency and control despite these limitations. The inability to explain AI decisions undermines customer trust, complicates compliance, and creates governance gaps.


The challenge for financial institutions around developing customer-facing or internal models in the AI age may be simple to understand, but it’s not easy to solve. Financial institutions develop models to enhance their decision-making, improve financial reporting, and ensure regulatory compliance; and these models often are used across various banking and financial services operations, including credit scoring, loan approval, asset-liability management, and stress testing.

Traditional models — for which existing model risk management was written — often operated in a predictable, linear fashion. A model user could enter inputs, trace calculations, validate assumptions, and forecast outputs with relative confidence. These stand in stark contrast to some applications of AI models, particularly those using deep learning, whose users often cannot predict the model’s outputs or precisely explain its inferences.
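The difference is easy to make concrete in code. Below is a toy linear scorecard, with weights and features invented for illustration, in which every point of the output is attributable to a specific input term; this is exactly the term-by-term traceability that deep models do not offer:

```python
# Invented weights for a toy credit scorecard; each term is fully attributable.
WEIGHTS = {"income_ratio": 120.0, "utilization": -80.0, "delinquencies": -45.0}
INTERCEPT = 600.0


def score_with_trace(applicant: dict) -> tuple[float, dict]:
    """Return the score plus each input's exact contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return INTERCEPT + sum(contributions.values()), contributions


score, trace = score_with_trace(
    {"income_ratio": 0.8, "utilization": 0.6, "delinquencies": 1.0})
print(score)  # 603.0
print(trace)  # {'income_ratio': 96.0, 'utilization': -48.0, 'delinquencies': -45.0}
```

A deep learning model provides no such decomposition: its parameters number in the millions and interact nonlinearly, which is the opacity the rest of this article grapples with.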

The third-party complication

Here’s where things get even more complex. Most financial institutions don’t build their AI models from scratch; instead, they’re leveraging foundational models from companies like OpenAI, Anthropic, and Google. These large language models (LLMs) serve as the backbone that can be configured for everything from customer service chatbots to risk assessments.

This creates a new dimension of opacity. Banks aren’t just dealing with models they can’t fully explain; they’re utilizing models they didn’t originally build and don’t wholly control. The original training data, architecture, and parameters all remain proprietary to the model providers.

The model risk management implications are numerous. How do you validate a foundational model when you don't have access to its training data? How do you ensure it won't produce biased outputs when you can't examine how it draws inferences from its data? How do you monitor for model drift when the provider might update the model without notice? Traditional vendor risk frameworks weren't designed for this level of dependency on opaque, constantly evolving systems.

When traditional risk management fails

Traditional model risk management relies on three components: initial validation, ongoing monitoring, and the ability to challenge model assumptions. Third-party foundational AI models may disrupt all three.

Initial validation becomes problematic when you’re validating a system you can only observe from the outside. Unlike traditional statistical models built on explicit assumptions, AI models develop their own inferential logic through training, which isn’t always visible.


Banks aren’t just dealing with models they can’t fully explain; they’re utilizing models they didn’t originally build and don’t wholly control.


Ongoing monitoring faces similar challenges. If an institution relies on a foundational model like OpenAI's GPT or Anthropic's Claude as the basis for its own AI application, the institution is subject to the foundational model's updates. A model that performed reliably last month might behave differently today due to changes the institution didn't execute; the assumptions present in each version may not be readily measurable.
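
One practical response, sketched below purely as an illustration, is a canary suite: a fixed set of probe prompts replayed on a schedule, with fresh responses fingerprinted and compared against a stored baseline. The `call_model` function here is a hypothetical stand-in for whatever vendor SDK an institution actually uses.

```python
# Sketch: detect silent behavior changes in a third-party model
# using a fixed "canary" prompt set. `call_model` is a hypothetical
# stand-in for the institution's actual vendor SDK call.
import hashlib
import json

CANARY_PROMPTS = [
    "Summarize the dispute-resolution steps for a flagged wire transfer.",
    "List the data fields required to open a retail checking account.",
]

def call_model(prompt: str) -> str:
    # Placeholder: replace with the real vendor API call.
    return "stub response for: " + prompt

def response_fingerprint(prompt: str) -> str:
    """Hash the model's response so runs can be compared cheaply."""
    return hashlib.sha256(call_model(prompt).encode()).hexdigest()

def drift_report(baseline: dict) -> dict:
    """Return which canary prompts changed, and the share that did."""
    changed = [p for p in CANARY_PROMPTS
               if response_fingerprint(p) != baseline.get(p)]
    return {"changed": changed,
            "drift_rate": len(changed) / len(CANARY_PROMPTS)}

if __name__ == "__main__":
    baseline = {p: response_fingerprint(p) for p in CANARY_PROMPTS}
    print(json.dumps(drift_report(baseline), indent=2))
```

Exact-match hashing is deliberately crude; because LLM outputs are rarely deterministic, a production version would compare semantic similarity or structured output fields rather than raw hashes.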

Further, government regulators are beginning to implement more detailed guidelines specifically targeting AI models. Financial institutions must demonstrate transparency and control over complex systems, including those they source from third parties. In mid-2024, for example, the Monetary Authority of Singapore issued guidance on AI model risk management; and now similar initiatives are emerging globally, from the United States' Federal Reserve and Canada's Office of the Superintendent of Financial Institutions, to the European Union's AI Act. However, regulatory oversight and momentum can pivot nearly as quickly as the models themselves update.

Real-world consequences and the search for solutions

The stakes extend beyond regulatory compliance. When a model generates outputs that are understood only by a team at an external company, operational risks can cascade. For example, customer service representatives often need to explain why a fraud system flagged a transaction, and loan officers must be able to provide specific reasons why a credit model rejected an application — and black-box AI makes these basic requirements nearly impossible to meet.

The trust deficit affects everyone. Customers denied services without clear explanations lose faith, and regulators struggle to verify compliance. Internal audit teams may be unable to provide assurance when models are proprietary third-party systems, and board members face governance questions they can't adequately answer.

The industry is responding with various approaches. Some institutions are demanding greater transparency from AI providers, negotiating for access to model documentation and performance metrics. Others are building testing frameworks to validate third-party models through extensive input-output analysis.
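
What such a framework can look like in miniature, assuming the institution maintains a labeled "golden" dataset of cases it owns, is a replay harness that scores the vendor model against an accuracy floor set by model risk policy. The `classify_transaction` function is again a hypothetical wrapper around a provider's API.

```python
# Sketch: golden-dataset regression test for a third-party model.
# `classify_transaction` is a hypothetical wrapper around a vendor
# API; the labeled cases come from data the institution owns.
GOLDEN_SET = [
    {"input": "wire of $9,900 split across three accounts", "label": "flag"},
    {"input": "recurring $120 utility payment", "label": "clear"},
]
ACCURACY_FLOOR = 0.95  # illustrative threshold, set by model risk policy

def classify_transaction(description: str) -> str:
    return "flag"  # placeholder for the real vendor call

def run_validation() -> bool:
    """Replay known cases and check accuracy against the policy floor."""
    hits = sum(classify_transaction(c["input"]) == c["label"]
               for c in GOLDEN_SET)
    accuracy = hits / len(GOLDEN_SET)
    print(f"golden-set accuracy: {accuracy:.2%}")
    return accuracy >= ACCURACY_FLOOR

if __name__ == "__main__":
    run_validation()
```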

Post-hoc explanation techniques attempt to illuminate black-box decisions by approximating how models weight different factors. Some institutions are adopting hybrid approaches, combining simpler, interpretable models with complex foundational models to balance performance with transparency.
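
A common example of this approach is the global surrogate: fit a simple, interpretable model to the opaque model's own predictions, then read approximate factor weights off the surrogate. The sketch below uses scikit-learn, with a random forest standing in for the black box; in practice, the third-party model would be queried instead.

```python
# Sketch: approximate a black-box model with an interpretable
# surrogate. The random forest stands in for an opaque model; the
# shallow decision tree exposes approximate factor weights.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)
opaque_predictions = black_box.predict(X)

# Train the surrogate on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, opaque_predictions)

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(opaque_predictions, surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2%}")
print("approximate factor weights:", surrogate.feature_importances_)
```

The fidelity score matters as much as the weights: a surrogate that agrees with the black box only some of the time is explaining a model that does not exist.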


Financial institutions must demonstrate transparency and control over complex systems, including those they source from third parties.


These solutions involve trade-offs, however, chief among them that more interpretable models may sacrifice predictive power. Post-hoc explanation techniques provide approximations, not perfect transparency. The tools for managing third-party AI model risk are still maturing, even as deployment accelerates.

What needs to happen now

Financial institutions must build explainability and control mechanisms into their AI journeys from the start. This may require cross-functional teams of data scientists, risk managers, compliance officers, and vendor management specialists who can negotiate appropriate terms with foundational AI providers.

Institutions also need comprehensive governance frameworks that address the unique challenges of third-party foundational models. This could include enhanced vendor due diligence, continuous monitoring, contractual provisions for model transparency and update notifications, and a willingness to forgo some AI capabilities when risks can’t be adequately managed.

Still, the fundamental tension remains: AI's power comes partly from its ability to identify trends at scale while operating, for now, in ways we don't fully understand. When third-party providers are thrown into the mix, predictability and control become even more tenuous. Institutions must leverage the benefits of foundational models while acknowledging what remains unknown and outside their direct control.

If attained, this comprehension can be a strategic driver. Institutions that can harness third-party AI’s power while maintaining genuine oversight will gain a competitive advantage. Those that don’t may face serious consequences if black boxes from third parties produce outcomes they can neither explain, predict, nor defend. In an industry where trust and compliance are paramount, it is crucial for financial institutions to truly comprehend AI-associated risks.


You can find out more about how financial institutions and other organizations manage their risk here.

The Human Layer of AI: How to build human rights into the AI lifecycle /en-us/posts/sustainability/ai-human-layer-building-rights/ Mon, 24 Nov 2025 16:33:36 +0000 https://blogs.thomsonreuters.com/en-us/?p=68546

Key takeaways:

      • Build due diligence into the process — Make human-rights due diligence routine from the decision to build or buy through deployment by mapping uses to standards, assessing severity and likelihood, and closing control gaps to prevent costly pullbacks and reputational damage.

      • Identify risks early on — Use practical methods to identify risks early by engaging end users and running responsible foresight workshops and bad headlines exercises.

      • Use due diligence to build trust — Treat due diligence as an asset and not a compliance box to tick by using it to de‑risk launches, uncover user needs, and build durable trust that accelerates growth and differentiates the product with safety-by-design features that matter to buyers, regulators, and end users.


AI is reshaping how we work, govern, and care for one another. Indeed, individuals are turning to cutting-edge large language models (LLMs) for emotional help and support in grieving and coping during difficult times. “Users are turning to chatbots for therapy, crisis support, and reassurance, and this exposes design choices that now touch the right to information, privacy, and life itself,” says Poynton, co-founder and principal at Article One, a management consulting firm that specializes in human rights and responsible technology use.

These unexpected uses of AI are reframing risk because in these instances, safeguards cannot be an afterthought. Analyzing who might misuse AI alongside determining who will benefit from its use must be built into the design process.

To put this requirement into practice, a human rights lens must be applied across the entire AI lifecycle, from the decision to build or buy through deployment and use, to help companies anticipate harms, prioritize safeguards, and earn durable trust without hampering innovation.

Understanding human rights risks in the AI lifecycle

Human rights risks can surface at every phase of the AI lifecycle. In fact, they have emerged in efforts to train frontier LLMs for content moderation and are now showing up elsewhere. For example, data enrichment workers, who refine training data, and data center staff, who power these systems, are among the most likely to face labor risks. Often located in lower‑income markets with weaker protections, they face low wages, unsafe conditions, and limits on other freedoms.

During the development phase, biased training sets and the probabilistic nature of models can generate misinformation or hallucinations, and these can further undermine rights to health and political participation. Likewise, design choices often can translate into discriminatory outcomes.

Unfortunately, the use of AI-enabled tools also can compound these harms. Powerful models can be misused for fraud or human trafficking. In addition, deeper integration with sensitive data can heighten privacy and security risks.

A surprising pattern in the field exacerbates the risk: people increasingly use AI for therapy‑like support, disclosing emotional crises and self‑harm. This intimacy widens product and policy obligations, which include age‑aware safeguards and clear limits on overriding protections.

Why human rights due diligence is urgent

That’s why human rights due diligence must start with people, not the enterprise. By embedding human rights due diligence into the lifecycle of AI, development teams can begin to understand the technology and its intended uses, then map those uses to international standards. Next, a cross-functional team gathers to weigh benefits alongside harms and to consider unintended uses. Above all, they need to answer the question, “What happens if this technology gets in the hands of a bad actor?”

From there, the process demands an analysis of severity, which assesses the scale, scope, and remediability of each use, along with its likelihood. The final step involves evaluating current controls across supply chains, model design, deployment, and use phases to identify gaps.
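
One way to make that analysis repeatable, sketched here with illustrative one-to-five scales rather than any framework's official rubric, is to score each identified use on scale, scope, remediability, and likelihood, then rank the results so mitigation effort flows to the highest risks.

```python
# Sketch: rank human-rights risks by severity and likelihood.
# The 1-5 scales are illustrative, not an official rubric; higher
# remediability scores here mean the harm is HARDER to put right.
from dataclasses import dataclass

@dataclass
class RiskScenario:
    name: str
    scale: int          # how grave the harm is
    scope: int          # how many people are affected
    remediability: int  # how hard the harm is to remedy
    likelihood: int     # how probable the misuse or failure is

    @property
    def severity(self) -> float:
        return (self.scale + self.scope + self.remediability) / 3

    @property
    def risk(self) -> float:
        return self.severity * self.likelihood

scenarios = [
    RiskScenario("chatbot mishandles self-harm disclosure", 5, 2, 5, 3),
    RiskScenario("biased training data skews loan outcomes", 4, 4, 3, 4),
]

for s in sorted(scenarios, key=lambda s: s.risk, reverse=True):
    print(f"{s.name}: severity {s.severity:.1f}, risk {s.risk:.1f}")
```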

The biggest barrier to layering a human rights lens into AI is the need for speed to market. The race to ship minimally viable products, compounded by competitive pressure, can eclipse robust governance; yet early due diligence may prevent costly pullbacks and bad headlines. Article One’s Poynton notes that no one wants to see their product on the front page for enabling stalking or spreading disinformation. Building safeguards early “ensures that when it does launch, it has the trust of its users,” she adds.

How to embed safeguards without slowing teams

The most efficient path to translating human rights into the AI product lifecycle is to turn policy principles, goals, and ambitions into actionable steps for the engineers and the product teams. This requires the “engineers to analyze how they do their work differently to ensure these principles live and breathe in AI-enabled products,” Poynton explains. More specifically, this includes:

Identifying unexpected harms — One of the most critical, yet difficult components of the human rights impact assessment is brainstorming potential harms. Poynton recommends two ways to make this happen: First, engage with end users to help identify potential harms by asking, “What are some issues that we may not be considering from the perspectives of accessibility, trust, safety and privacy?” Second, run responsible foresight workshops at which individuals play the parts of bad actors to better identify harms and uncover mitigation strategies quickly. Pair that with a bad headlines exercise that can be used to anticipate front‑page failures. Then, ship with these protections in place, pre‑launch.

Implementing concrete controls — Embedding safety-by-design should cover both content and contact, a lesson from gaming in which grooming risks require more than just filters. Build age‑aware and self‑harm protocols, including parental controls and principled policies on overrides (see the sketch after this list). Govern sales and access with customer vetting, usage restrictions, and clear abuse‑response pathways. In the supply chain, set supplier standards for enrichment and data center work that include fair wages, safe conditions, freedom of association, and grievance channels.

Treating due diligence as value-creating, not box-checking — Crucially, frame due diligence as an asset rather than a liability. “Make your product better and ensure that when it does launch, it has the trust of its users,” Poynton adds.
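
To make the age-aware controls above concrete, here is a minimal sketch of a policy gate run before any model response is returned. The keyword list and age threshold are illustrative assumptions; real systems rely on trained classifiers rather than string matching, and on jurisdiction-specific age rules.

```python
# Sketch: a policy gate applied before any model response is returned.
# Keyword matching and the age-18 cutoff are deliberate simplifications;
# production systems use trained classifiers and local age rules.
from typing import Optional

CRISIS_TERMS = ("self-harm", "suicide", "hurt myself")

def policy_gate(user_age: Optional[int], message: str,
                parental_override: bool = False) -> str:
    """Return a routing decision: 'crisis', 'minor_safe', or 'allow'."""
    if any(term in message.lower() for term in CRISIS_TERMS):
        # Crisis routing cannot be overridden, by policy design.
        return "crisis"
    if user_age is not None and user_age < 18 and not parental_override:
        return "minor_safe"
    return "allow"

print(policy_gate(16, "I want to hurt myself"))   # -> crisis
print(policy_gate(16, "Help me with homework"))   # -> minor_safe
print(policy_gate(34, "Draft a cover letter"))    # -> allow
```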

Additional considerations

Addressing equity must be front and center. Responsible strategies include diversifying training sets without exploiting communities and giving buyers clear provenance statements on data scope and limits.

Bridging the digital divide is equally urgent. Bandwidth and device gaps risk amplifying inequality if design and deployment assume privileged contexts. In the workplace, Poynton stresses that these impacts will be compounded, from entry-level to expert roles.

Finally, remember that AI’s environmental footprint is a human rights issue. “There is a human right to a clean and healthy environment,” Poynton notes, adding that energy and water demands must be measured, reduced, and sited with respect for local communities, even as AI helps accelerate the clean energy transition. This is a proactive mandate.


You can find out more about the ethical issues facing AI use and adoption here.

The false comfort of AI engineering: Building the reusable enterprise /en-us/posts/technology/ai-engineering-building-reusable-enterprise/ Thu, 20 Nov 2025 13:49:21 +0000 https://blogs.thomsonreuters.com/en-us/?p=68471

Key takeaways:

      • Shifting from engineering to architecture — Focusing solely on building better AI models and engineering solutions leads to isolated, non-reusable outputs. Instead, organizations should build AI into the broader enterprise, emphasizing reusable, machine-readable intelligence that integrates with business operations and data structures.

      • Regulation as opportunity for reusability and efficiency — Regulatory frameworks are not just compliance burdens; they also are catalysts for sustainable AI. By mandating standardized, machine-readable data, these regulations force organizations to design systems for reuse, enabling operational efficiency and scalable innovation.

      • Reusable enterprise is the path to sustainable reinvention — The future of AI leadership lies in building adaptable, reusable data and AI infrastructures. When standardized data, AI models, and regulatory compliance reinforce each other, organizations can continuously reinvent themselves, support multiple business outcomes from the same information assets, and achieve compound returns on their investments.


Across industries, executives are confronting an uncomfortable truth: AI projects are delivering outputs, not outcomes.

For years, organizations have poured time and capital into the mechanics of AI — the algorithms, the computation power, the data pipelines, and the engineering teams to support them. Yet results remain uneven. Models keep getting larger, but lasting, reusable business value hasn’t followed.

The problem isn’t the math, it’s the mindset.

Too many enterprises have tried to engineer AI into existence instead of architecting it into the enterprise. The focus has been on perfecting models, not integrating them into the broader data and operational fabric of the business. The assumption has been that a technically superior model naturally creates a competitive edge. It doesn’t.

Without consistent governance, shared definitions, and reusable data structures, every AI initiative becomes its own isolated experiment. One line of business builds a credit-risk model. Another develops an environmental, social, and governance (ESG) classifier. A third deploys a generative assistant for customer support. Each team moves fast, but none build on each other’s work. The result is a proliferation of proofs of concept — impressive on paper but disconnected in practice.


For years, organizations have poured time and capital into the mechanics of AI — the algorithms, the computation power, the data pipelines, and the engineering teams to support them. Yet results remain uneven.


And this fragmentation carries a financial cost. Every new model adds complexity — new pipelines, new monitoring requirements, and additional governance checkpoints. These systems rarely scale together, and as integration demands grow, executives find themselves in a paradox: massive investment in AI infrastructure yet declining agility and uncertain ROI.

When it comes to a production solution set, the AI engineering mindset has optimized the parts, not the whole. In general, it has produced models that predict, but not organizations that learn.

In short, the AI engineering mindset has reached its limit — a sign that AI is entering sustainable growth cycles. Many leaders are beginning to realize that they don’t need more AI engineers; rather, they need system designers who can embed intelligence into reusable business frameworks — all while navigating a regulatory environment increasingly defined by machine-readable data standards such as the Financial Data Transparency Act (FDTA) and Standard Business Reporting (SBR).

Regulation as catalyst, not constraint

At first glance, FDTA and SBR may appear to be just another layer of regulatory complexity. They are not. In fact, they represent one of the most powerful architectural opportunities available to organizations today.

By mandating machine-readable data standards, these frameworks force companies to design for reuse. They turn what once felt like a compliance exercise into an infrastructure strategy — one that connects regulatory requirements directly to operational efficiency. Build once. Reuse often.

For decades, compliance has been treated as a cost of doing business. Under FDTA and SBR, it can become the scaffolding of reinvention. Machine-readable, standardized data provides the foundation for models that are verifiable, shareable, and reusable across domains. Reporting ceases to be an afterthought and becomes a living data layer that fuels forecasting, stress testing, and product innovation.
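
A small sketch makes "build once, reuse often" concrete: one standardized, machine-tagged dataset feeds both a compliance extract and a forecasting input with no re-mapping. The field names below are illustrative, not official FDTA or SBR tags.

```python
# Sketch: one standardized, machine-readable dataset serving two
# consumers. Field names are illustrative, not official FDTA/SBR tags.
QUARTERLY_FACTS = [
    {"entity": "SubsidiaryA", "period": "2025-Q4",
     "concept": "NetLoans", "value": 1_250_000_000},
    {"entity": "SubsidiaryA", "period": "2025-Q4",
     "concept": "Deposits", "value": 1_900_000_000},
]

def compliance_extract(facts: list[dict]) -> list[dict]:
    """Regulatory filing view: every fact, tagged and traceable."""
    return [{"tag": f["concept"], "value": f["value"],
             "period": f["period"]} for f in facts]

def forecast_inputs(facts: list[dict]) -> dict:
    """Analytics view of the same facts: ratios for liquidity models."""
    by_concept = {f["concept"]: f["value"] for f in facts}
    return {"loan_to_deposit": by_concept["NetLoans"] / by_concept["Deposits"]}

print(compliance_extract(QUARTERLY_FACTS))
print(forecast_inputs(QUARTERLY_FACTS))
```

The same tagged facts produce a filing artifact and a modeling input; neither consumer needs its own extraction pipeline.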

When viewed through this lens, regulation isn’t an obstacle; it’s the blueprint for sustainable AI. It forces clarity, consistency, and interoperability — qualities every enterprise says it wants, but few achieve voluntarily. Regulation may finally deliver what AI engineering alone could not: The discipline of reusability.

From proofs of concept to proofs of architecture

For most organizations, AI success has been measured by the number of proofs of concept completed, or how fast a model moves into production. However, the real test of maturity isn’t how many experiments you run, it’s how easily those experiments can be scaled, reused, or extended.

That’s where the next evolution lies. We are now shifting from proofs of concept to proofs of architecture. And that means the question leaders should be asking isn’t, “Did it work once?” but “Can it work again, and with half the effort?” Only when a single domain’s data can serve multiple regulatory, compliance, and analytical purposes can the enterprise start to gain compound returns on its information assets.


When viewed through this lens, regulation isn’t an obstacle; it’s the blueprint for sustainable AI. It forces clarity, consistency, and interoperability — qualities every enterprise says it wants, but few achieve voluntarily.


This approach turns data from a static resource into a dynamic capability. AI is no longer something you deploy; rather, it’s something you design for reuse.

Engineering adaptability

Organizations that embrace this shift are learning to engineer adaptability rather than one-off innovation. Their data and AI systems act like interchangeable components, each capable of supporting new regulations, mergers, or market disruptions without starting from scratch.

Some industry examples of this development include:

      • Financial services — Stress-testing data used for regulatory compliance can also inform pricing analytics and liquidity simulations, reducing cycle time between audit and strategy.
      • Healthcare — Patient outcome models built for quality reporting can be reused to predict staffing needs or optimize clinical supply chains, extending beyond compliance and into operations.
      • Legal and compliance sectors — AI used for document classification under discovery protocols can be repurposed for internal policy audits or ESG disclosure mapping, turning regulatory data into a strategic asset.
      • Manufacturing and supply chain — Sensor and maintenance data initially used for safety reporting can drive predictive production planning and carbon-emission forecasting under emerging sustainability standards.
      • Public sector and critical infrastructure — Data collected for transparency and open-data mandates can be reused to model risk exposure across utilities, cybersecurity, and climate resilience programs.

In each of these cases, the same information infrastructure supports different outcomes. That’s the hallmark of a reusable enterprise.

[Chart: standardized data, reusable AI, and regulatory compliance reinforcing one another in a continuous cycle of enterprise reinvention]

The above chart’s interconnected components illustrate how standardized data, reusable AI, and regulatory compliance can reinforce one another to create a continuous cycle of enterprise reinvention — standardized data supports reusable AI, which in turn enhances reporting and regulatory alignment. The result is a virtuous loop that replaces isolated projects with scalable, data-driven reinvention.

A call to reusable leadership

The next phase of digital leadership won’t be defined by how sophisticated a company’s models are, but instead by how seamlessly those models integrate into decision-making.

The leaders who succeed will be those who align AI investments with evolving regulatory and data standards. Their organizations will speak a common data language in which AI, compliance, and analytics operate within a shared architectural framework.

As FDTA and SBR converge globally, the line between compliance and competitiveness will blur. What once felt like regulatory overhead will become the foundation of reusable intelligence. Reinvention, in this sense, isn’t a campaign or initiative — it’s a discipline. This is not AI as a project; it’s AI as infrastructure and the architecture of continuous reinvention.

For executives navigating 2026’s convergence of regulation, consolidation, and automation, the difference between thriving and merely surviving will depend on whether they can build organizations that learn, adapt, and continuously reinvent themselves through data.


You can find more blog posts by this author here.
