Digital Transformation & Operations Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/digital-transformation-operations/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology, and human expertise you need to find trusted answers.

The professional judgment gap: Tracing AI’s impact from lecture hall to professional services /en-us/posts/corporates/ai-professional-judgment-gap/ Thu, 05 Mar 2026 12:59:12 +0000 https://blogs.thomsonreuters.com/en-us/?p=69771

Key highlights:

      • Universities face pressure over pedagogy — Academic institutions are adopting AI as a reputational marker driven by market pressure rather than educational need, creating a risk for students who can work with AI but not independently of it.

      • Entry-level roles under threat — AI is being deployed most heavily to automate the grunt work of entry-level positions, where foundational professional skills are traditionally built through struggle and feedback.

      • K-shaped cognitive economy emerging — Experienced professionals with existing expertise are gaining efficiency from AI, while entry-level workers are losing access to skill-building experiences.


According to Harvard University’s Professional & Executive Development division, innovation is defined as a “process that guides businesses through developing products or services that deliver value to customers in new and novel ways.” Along this journey, professional judgment is exercised repeatedly to determine next steps at key stages.

Notably, the word technology is nowhere to be found in this definition — an absence that Dr. Heinsfeld, Assistant Professor of Learning Technologies at the University of Minnesota, has long found revealing. Instead, innovation is framed as creative problem-solving, contextual intelligence, and the ability to work across perspectives. Interestingly, Dr. Heinsfeld adds, none of these require constant automation. In fact, many of them are undermined by it.

However, AI adoption has the real potential to automate away the very experiences that build these capabilities, from university lecture halls to corporate offices. With notable data already pointing in this direction, the risk that current approaches to AI use in universities and companies are engineering away innovation and professional judgment skills is real, notes Dr. Tanaka, Group Leader in AI Research at Harvard and NTT Research.

Indeed, some observers view AI as the largest unregulated cognitive engineering experiment in human history. Yet, unlike medical drugs that require years of approval and testing, AI systems are reshaping how millions of students think, learn, and make decisions without a comparable approval process or a shared framework for discussing any potential “side effects,” as Dr. Heinsfeld pointed out.




So, what happens when an entire generation of future employees learns to delegate judgment before developing it? And what actions do universities and companies need to take now to avoid this reality?

Risks of universities adopting AI under pressure

For universities, AI “has become a reputational marker, and not adopting AI is framed as institutional risk, regardless of whether an educational case has been made or not,” says Dr. Heinsfeld, adding that this is being driven, in part, by market pressure rather than pedagogical need.

Already, companies, as employers of new graduates, can greatly influence universities; and AI systems are currently being optimized for speed, agreeability, and accessibility to stimulate ongoing use. However, as Dr. Heinsfeld contends, as universities race to earn the label of “AI ready” without a careful, cautious, and detailed understanding of how AI may impact students’ cognitive processes, they risk damaging their reputations for pedagogical integrity.

In addition, the “data as truth” paradigm is a complicating factor, she says. Drawing on her research, Dr. Heinsfeld explains how data “is often framed as the idea of being a single source of truth based on the assumption that when collected and analyzed, it can reveal objective, indisputable facts about the world.” Indeed, this ubiquitous mindset across universities and corporations treats data — such as that used to train large and small language models — as objective and indisputable.

Yet this obscures critical decisions about what gets measured, whose perspectives are included, and what forms of knowledge are systematically excluded from AI systems. As Dr. Heinsfeld warns, when data becomes synonymous with truth, “knowledge is what is measurable and optimizable.” This narrows professional judgment to efficiency metrics rather than the interpretive depth, ethical reasoning, and cultural context that are essential for sound decision-making.

Judgment gap widens in workforce downstream

Under the current AI adoption approach, students could leave universities able to work with AI but not independently of it, a distinction emphasized by Dr. Heinsfeld. Like calculators, AI works as a tool only when the foundational skills for its use exist first. Without them, graduates enter the workforce with a critical judgment gap that compounds as they move from college campuses into corporate roles.




Most worrisome is that AI is being deployed most heavily to automate precisely the entry-level roles where foundational professional skills are built, warns Dr. Tanaka. Indeed, this is exactly the type of grunt work that teaches judgment through struggle and feedback. Over time, overuse of AI will result in quality being sacrificed because critical evaluation skills have atrophied.

Looking into the future, Dr. Tanaka foresees a K-shaped economy of cognitive capacity. Experienced professionals, with expertise and contextual judgment built over years, will gain increasing efficiency from AI. Entry-level workers, however, will lose access to the valuable experiences that build professional judgment. This gap widens between professionals who can independently accelerate their workflows using AI and those whose traditional tasks are merely displaced by it.

Intervention may be able to break the cycle

The pattern is not inevitable, as both Dr. Tanaka and Dr. Heinsfeld explain. Drawing on Dr. Heinsfeld’s emphasis on institutional agency, meaningful intervention will depend on conscious, intentional choices made at every level. Both experts share their guidance for how different organizations can manage this:

Academic institutions — Universities must first recognize that AI adoption is a decision rather than an inevitability and make educational need the North Star for decision-making around AI. In her analysis, Dr. Heinsfeld emphasizes that when vendors set defaults, they quietly redefine academic practice. Defaults shape what is made visible or invisible and what becomes normalized. In AI-driven environments, universities often lose control over how models are trained and updated, what data shapes outputs, how knowledge is filtered and ranked, and how student and faculty data circulate beyond institutional boundaries — especially if decision-making is left to vendors. As a result, the intellectual byproducts of teaching and learning increasingly become inputs into external systems that universities do not govern.

Private entities — For organizations, Dr. Tanaka calls for feedback loops and other mechanisms that promote more open discussion about AI use without stigma. In addition, companies need to proactively redesign entry-level roles to ensure these positions continue to cultivate judgment and foundational skills in an AI-driven environment. Likewise, Dr. Tanaka suggests that companies explicitly communicate cognitive trade-offs to employees, fostering an understanding of possible skill atrophy.

Employees — Similarly, individuals working for organizations bear much of the responsibility for making sure AI enhances, rather than replaces, their critical thinking. Strategic decisions about when to use AI, made with an eye to preserving cognitive capacity and professional judgment, are key.

Looking ahead

In today’s increasingly AI-driven environment, a new paradigm is needed to combat the current operating assumption that optimization from AI is the sole path to progress. And because the current trajectory sacrifices human development for efficiency, the need for universities and companies to choose a different path is urgent — while they still have the judgment capacity to do so.


You can find out more about how organizations are managing their talent and training issues here

Corporate tax departments’ Groundhog Day problem — and the hybrid model that could fix it /en-us/posts/corporates/tax-departments-hybrid-model/ Thu, 26 Feb 2026 15:20:56 +0000 https://blogs.thomsonreuters.com/en-us/?p=69625

Key takeaways:

      • Tax departments lack resources and confidence — More than half (58%) of tax departments are under-resourced, and 59% are not confident that they can upgrade their tax technology over the next two years.

      • Under-resourced departments incur more penalties — At least half of respondents from under-resourced tax departments say their departments incurred penalties over the past year, compared to only about one-third of those from properly resourced departments.

      • Making the shift to proactive planning and value creation — For many tax departments, the winning model blends in-house expertise, targeted external support, and a coherent tech/AI stack that allows teams to shift from tactical compliance to proactive planning and strategic value creation.


Under-resourced corporate tax departments spend more of their budget on external support than well-resourced teams — yet they’re more likely to incur penalties and less confident in forecasting, according to a recent Thomson Reuters Institute report.

Given this, the problem isn’t a lack of spending — it’s the operating model. With 58% of respondents saying their tax departments are under-resourced, 59% saying they are not confident they can upgrade their existing tax technology over the next two years, and most spending more than half their time on reactive compliance work when they’d prefer to focus on strategic planning, the gap between ambition and reality has never been wider.

The answer isn’t working harder or throwing more money at consultants, however. It’s building a hybrid ecosystem of people, platforms, and partners designed to shift capacity from firefighting to foresight.

The Groundhog Day problem

Every year feels the same: New tax legislation (such as the One Big Beautiful Bill Act or Pillar 2), new compliance burdens, new geopolitical uncertainty — coupled with the same old constraints. Too much work, not enough time, and technology that lags.

When deadlines hit, under-resourced teams rely on two blunt levers: overtime and reactive outsourcing. Internal staff end up working longer hours, and external providers plug the gaps at short notice. This model is wearing down departments, and it is itself breaking down.

Under-resourced departments are significantly more likely to incur penalties: 50% of respondents from under-resourced departments say they had been penalized in the past year, compared to just 34% of respondents from well-resourced departments, according to the report.

Further, respondents from under-resourced departments were less confident in their ability to forecast accurately, with just 26% rating accurate forecasting as “very likely,” compared to 43% of well-resourced department respondents. Ironically, under-resourced departments also spend more on external support as a percentage of budget (44%, versus 37% for well-resourced departments). Clearly, spending more doesn’t solve structural problems — it often masks them.

Meanwhile, tax professionals report spending more than half their time on tactical or reactive work, even though they would prefer to spend up to two-thirds of their time on strategic analysis. Not surprisingly, when the team is locked into manual reconciliations and last-minute fixes, it’s nearly impossible to influence business decisions or shape strategy.

Why “all in-house” or “all outsourced” no longer works

When more work is moved onto the plates of the internal tax team, all in-house can often come to mean all heroics — talented people drowning in compliance volume with no time to use the analytical tools already on their desks. Conversely, all outsourced risks hollowing out the department’s institutional knowledge and weakening its seat at the table.

A hybrid model asks better questions: What kind of work is this, and where does it create the most leverage? These questions can be used to determine where and to whom work should go. For example, high-volume, rule-based, recurring tasks are prime candidates for automation, shared services, or managed services under strong tax oversight; while complex, judgment-heavy, strategically sensitive work should remain anchored in-house, with external advisors extending capacity and offering specialized insight.
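As a sketch of that triage, the routing rule below encodes the two questions in code. The categories, labels, and return strings are illustrative assumptions, not taken from the report or any real department’s rubric.

```python
def route_work(volume: str, judgment: str, strategic: bool) -> str:
    """Toy triage for the hybrid model: decide where a task should go
    based on its volume, the judgment it requires, and its strategic weight."""
    if volume == "high" and judgment == "low":
        # High-volume, rule-based, recurring: prime for automation
        return "automate or managed service (with tax oversight)"
    if judgment == "high" or strategic:
        # Complex, judgment-heavy, strategically sensitive: stays in-house
        return "keep in-house (external advisors extend capacity)"
    # Everything else: shared services under standard controls
    return "shared services"

print(route_work("high", "low", False))
print(route_work("low", "high", True))
```

In practice, the value is less in the code than in forcing each task through the same explicit questions before it lands on anyone’s plate.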

Thus, the best model for a modern corporate tax department is a hybrid ecosystem — not a fixed organizational chart, but a deliberate blend of internal expertise, enabling technology, and external capability partners.

Four layers of the hybrid ecosystem

This hybrid ecosystem can be delineated into four layers, each bringing its own insight and value:

      1. People and roles redesigned — High-performing tax functions invest in analyst and tax-tech roles that connect tax to enterprise resource planning (ERP) systems, data hubs, and analytics, thus freeing technical experts from manual data work. Senior professionals then become embedded advisors to finance, treasury, and the business, not just compliance reviewers.
      2. Processes segmented into “run” and “change” — The biggest barriers to strategic work are excessive volume, heavy compliance burdens, limited resources, and time pressure. Modern tax departments respond by explicitly segmenting work: “run the business” processes are documented, standardized, and increasingly automated or pushed into shared or managed service models, while “change the business” work remains tightly linked to senior tax staff.
      3. Technology becomes the data spine — More than half of respondents say they expect above-normal increases in their tax technology budgets, and more than half say their main resourcing strategy is introducing more automation. The goal isn’t collecting point solutions; rather, it’s building a coherent data spine that includes ERP integration, tax-specific data models, consistent workflow tooling, and strategic platforms that flex as regulations shift.
      4. AI acts as an accelerator — Two-thirds of tax departments aren’t yet using generative AI (GenAI), according to the report. Among the one-third that are, usage clusters around research, document summarization, drafting, and some analytical support. The next step up the AI chain is for departments to move from individual experiments to standardized, governed workflows that scan legislation, prepare first drafts of memos, or interrogate large data sets for anomalies.
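As one small illustration of the last use case (interrogating large data sets for anomalies), the sketch below applies a median-based outlier screen to a list of ledger amounts. The figures and the 3.5 threshold are hypothetical; a governed workflow would wrap a check like this in review, lineage, and audit steps.

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Flag entries far from the median, using the robust
    median-absolute-deviation (MAD) z-score."""
    m = median(amounts)
    devs = [abs(a - m) for a in amounts]
    mad = median(devs)
    if mad == 0:
        return []  # values are (nearly) identical; nothing to flag
    # 0.6745 rescales MAD to match the standard deviation for normal data
    return [i for i, d in enumerate(devs) if 0.6745 * d / mad > threshold]

# Hypothetical monthly ledger amounts with one suspect entry
ledger = [1050, 980, 1010, 995, 1020, 25000, 1005]
print(flag_anomalies(ledger))  # → [5]
```

A median-based screen is used here rather than a simple mean/standard-deviation rule because a single large outlier inflates the standard deviation enough to hide itself.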

What high-performing hybrid tax departments do next

Departments that feel well-resourced, allocate more time for their professionals to conduct proactive work, and invest deliberately in technology and skills are significantly more confident in their ability to forecast accurately, avoid penalties, and minimize tax liabilities, the report shows.

Indeed, these high-performing hybrid tax departments:

      • invest ahead of crises in people, tech, and processes
      • treat external providers as capability partners, not emergency relief
      • actively protect time for strategic work by automating or outsourcing routine tasks
      • insist on a durable seat at the strategy table, not just one for compliance reporting
      • experiment with automation and AI in focused, repeatable use cases

It is worth noting that smaller companies (those under $50 million in annual revenue) and the largest ones (those with more than $5 billion in revenue) are leading the way by securing leadership buy-in early and leveraging specialized external expertise rather than trying to build everything in-house. Midsize companies, by contrast, are more likely to rely on in-house teams to lead automation efforts and less likely to use third-party vendors — a cautious approach that risks letting them fall too far behind to catch up.

The message: Design the ecosystem, don’t just work harder

For corporate tax professionals, the message may be harsh but hopeful: You cannot work your way out of structural constraints by effort alone. Rather, a well-designed hybrid ecosystem can turn those constraints into a catalyst that will allow the department to deliver more value to the business. In fact, the modern corporate tax department is hybrid by necessity; but the question is whether it’s hybrid by design — or just by accident.


You can learn more about the challenges facing modern corporate tax departments here

Scaling Justice: Easing the UK’s employee rights crisis /en-us/posts/ai-in-courts/scaling-justice-uk-employee-rights-crisis/ Tue, 24 Feb 2026 18:37:39 +0000 https://blogs.thomsonreuters.com/en-us/?p=69605

Key takeaways:

      • An emerging employment tribunal crisis — The UK’s employment tribunal system is facing unprecedented backlogs, long wait times, and unaffordable legal representation, leaving many workers and small businesses unable to effectively resolve workplace disputes.

      • Process-oriented barriers to justice — Most claims are dismissed not because they lack merit, but because claimants disengage from a slow and complex process, with legal costs often exceeding the value of claims and legal aid unable to meet rising demand.

      • A potential role for legal technology — Mission-driven legal tech platforms are emerging to provide affordable, scalable support and help claimants stay engaged, offering a practical way to improve access to justice.


When a worker in the United Kingdom is unfairly dismissed or denied wages, their path to resolution runs through employment tribunals, a specialized court system separate from civil courts. As in the United States, many workers and small businesses cannot afford legal representation and must navigate the process on their own.

With backlogs at all-time highs and affordable legal services at all-time lows, this system is coming under increasing pressure. Fortunately, mission-driven technology and data analysis are emerging to level the playing field and increase access to justice.

Current state by the numbers

According to an analysis of Ministry of Justice tribunal statistics and other data sources,* in the second quarter of 2025, employment tribunals resolved just 45% of incoming claims, adding 18,000 cases to the backlog in that quarter alone. In the past year, the open caseload has surged by 244%. This pressure is set to intensify as the incoming Employment Rights Act 2025 — the UK’s most significant overhaul of workplace protections in decades — extends protection to six million more workers in 2027.

As the backlog increases, so do wait times. In 2025, the average wait for resolution reached 25 weeks, more than double that of 2024, with some claim types like equal pay and discrimination claims reaching up to 37 weeks. Some more complex cases are reported to have their final hearings scheduled as far out as 2029.

With only 8% of cases reaching a final hearing and the majority resolved through settlement or withdrawal, the growing backlog raises concerns about whether lengthy wait times influence how claimants choose to resolve their cases.

In the UK, a common threshold for legal affordability is a salary of £55,000, meaning around 65% of workers cannot afford legal representation. Legal aid and pro bono services exist to support those in need, but with growing funding constraints and rising demand, these services cannot reach nearly two-thirds of claimants.


You can find more insights about how courts are managing the impact of advanced technology from our Scaling Justice series here


Tribunal awards are largely calculated from salary. This can result in a claim’s value often being lower than the cost of legal representation to pursue it. In a typical hospitality case, for example, a worker owed ÂŁ1,500 in unpaid wages (equivalent to 3½ weeks of pay) has a 92% chance of representing themselves and will wait on average six months for resolution — without pay owed, legal support, or outcome certainty.

The cost, both in time and resources, also falls on employers. In lower-margin industries such as hospitality, default judgments, in which an employer does not engage with proceedings, can reach as high as 37%, compared with a national average of around 6%. For these employers and for smaller businesses more broadly, the cost of legal support may also exceed the value of defending a claim.

With rising costs and growing delays, the risk for both employers and employees is that the system becomes inaccessible, leading to outcomes shaped by who can afford to sustain the process rather than case-by-case strength.

Where justice tech fits

The conventional assumption is that self-represented claimants are at a significant disadvantage when they go to court; yet the data is more nuanced. Self-represented claimants who reach a hearing prevail 44% of the time, compared to 52% for those with legal representation — a gap of eight percentage points.

The greater risk is not losing at hearing but never actually reaching one. Analysis of more than 2,700 struck-out, or dismissed, cases by employment rights platform Yerty found that the majority were dismissed not for lack of merit, but because claimants stopped engaging with the process. Only 6% were struck out for having no reasonable prospect of success. This suggests that the primary barrier may not be the absence of legal representation, but the ability to sustain engagement with a slow, complex, and often opaque process.

Increasing numbers of UK workers turning to AI tools like ChatGPT for legal support highlight not only the demand for affordable access but also the risks of general-purpose tools being used in legal contexts. Fabricated case law in tribunal submissions, for example, harms users and adds further pressure to an already overstretched system.




A new generation of legal technology platforms is emerging to fill this gap, with tools purpose-built for the specific circumstances of employment law. Yerty and Valla, among others, offer AI-powered guidance tailored to the UK tribunal process, providing affordable, scalable support previously out of reach for most workers. Government bodies are also moving in this direction: ACAS, for example, in its recent five-year strategy committed to exploring new digital services that offer faster, more accessible support.

Technology alone cannot address underfunding, judicial capacity, or fundamental power imbalances. However, if the majority of dismissed claims stem from disengagement rather than weak cases, and self-represented claimants prevail at comparable rates to those with lawyers, then the answer isn’t more lawyers — it’s better support upstream. Mission-driven legal technology can provide consistent, scalable guidance that helps both parties manage the process and avoid falling through the cracks.

The UK government’s own assessment of the Employment Rights Bill forecasts a 15% increase in claims by 2027 due to expanded eligibility. As noted above, the system is already under significant pressure before these reforms take effect, and traditional responses — more judges, more funding — too often take years to deliver.

While not a complete answer, justice tech can help address a real, measurable problem, that of keeping people engaged in a process that too often disengages them. For a hospitality worker owed back pay, a healthcare worker facing unfair dismissal, or a retail employee navigating a discrimination claim alone, that support could mean the difference between a case heard and one abandoned — and justice delayed or justice denied.


*Sources: Ministry of Justice Tribunal Statistics Quarterly (July-September 2025); Yerty analysis of 2,721 struck-out tribunal decisions and 8,761 case outcomes; ACAS Strategy 2025-2030; 2024 UK Judicial Attitude Survey, UCL Judicial Institute / UK Judiciary, February 2025.

Understanding the data core: From legacy debt to enterprise acceleration /en-us/posts/technology/understanding-data-core-enterprise-acceleration/ Tue, 03 Feb 2026 14:47:41 +0000 https://blogs.thomsonreuters.com/en-us/?p=69255

Key takeaways:

      • The real bottleneck for AI is the data core — AI is advancing rapidly, but most organizations’ data architectures, governance, and legacy assumptions can’t keep up. Without a repeatable, business-aligned data foundation, AI initiatives will struggle to scale and deliver reliable results.

      • AI success relies on explainable, traceable, and reusable data — For AI to be reliable and compliant, organizations must design data environments that emphasize lineage, semantics, and trust; and that means that compliance and auditability need to be built into the data core, not added on later.

      • Businesses should shift from tool-centric upgrades to business-driven, data-centric reinvention — Efforts focused only on modernizing tools or platforms miss the root issue: legacy data structures. Leaders must prioritize building a cohesive, reusable data core that aligns with business strategy.


This article is the first in a 3-part blog series exploring how organizations can reset and empower their data core.

Across boardrooms, regulatory briefings, and strategic off-sites, leaders are asking with growing urgency some variation of the same question: How do we make AI reliable, scalable, auditable, and economically defensible? The surprising answer is not in the AI technology, nor in the cloud stack, nor in another round of system upgrades.

It is in the data. Not the data we store, not the data we report, and not the data we move across our pipelines. It is in the data that we must now explain, contextualize, trace, validate, and reuse continuously as agentic AI becomes embedded in every workflow, every decision system, and every regulatory outcome.

The stark reality across industries then becomes what to do as AI matures faster than our data cores can support. For the first time, technology is not the bottleneck — architecture is, organizational assumptions are, and governance strategies are. More importantly, the lack of a repeatable, business-aligned data foundry has become the strategic inhibitor standing between today’s operations and tomorrow’s autonomy-ready enterprises.

The realities of 2026

As 2026 gets underway, the pressures of regulation, AI adoption, data lineage requirements, and cross-system consistency have converged into a single strategic reality: We can’t keep modernizing data at the edges. The data core itself must be reimagined and compartmentalized.

For leaders across highly regulated industries, the challenge is recognizing that our data architectures were never designed for the world we’re moving into. Historically, solutions were built for predictable siloed-data systems, linear programmatic processes, and dashboard reporting. Today’s demands are continuous, variable, cross-domain, and machine-interpreted, no longer bound by traditional methods and techniques of process efficiency and system adaptability. Tomorrow’s systems will be comprehensively trained by data. To properly frame these realities, leaders must understand:

      • Agentic AI exposes weak data architecture immediately — Models may scale, but data debt does not. This is a new, priority constraint.
      • Lineage, semantics, and trust scoring — not models — will determine enterprise readiness — AI will only be as reliable as the meaning and traceability of enterprise data.
      • Compliance cannot be retrofitted; rather, it must be designed into the data core — Compliance no longer ends in reporting; it must exist upstream and be addressed continuously.
      • Return on investment in AI is impossible without composable, modular, and reusable data products — Data that cannot be composed, traced, and made consistent cannot be automated.
      • The bottleneck is not talent or tools, it is the absence of a data foundry — Without robust, industrial-grade data production, AI will remain fragmented and experimental.

By delivering a practical, business-first path integrated with a data-centric design, organizations enable reuse, compliance, and measurable ROI. AI is accelerating, but data readiness is not. This mismatch is where many transformation efforts die.

Agentic AI demands a data environment that simply does not exist with most legacy solutions. It requires decision-aligned semantics, federated trust scoring, cross-domain lineage, dynamic compliance overlays, and consistent interpretability. No model, no matter how advanced, can compensate for data environments that have been engineered for static reporting and linear process logic. We are entering a cycle of reinvention in which data becomes the organizing principle.
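To make those requirements concrete, here is a minimal sketch, in Python, of a data product that carries its own lineage and trust score, so a downstream agent can check provenance before acting. The class name, fields, and fixed-penalty mechanism are illustrative assumptions, not an established framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataProduct:
    """A reusable dataset plus the metadata agentic systems need:
    where it came from (lineage) and how much to trust it."""
    name: str
    records: list
    lineage: list = field(default_factory=list)   # upstream steps, in order
    trust_score: float = 1.0                      # 0.0 (untrusted) .. 1.0

    def derive(self, step: str, transform, penalty: float = 0.0):
        """Apply a transform, appending to lineage and optionally
        discounting trust (e.g., for lossy or unvalidated steps)."""
        return DataProduct(
            name=f"{self.name}/{step}",
            records=transform(self.records),
            lineage=self.lineage + [(step, datetime.now(timezone.utc).isoformat())],
            trust_score=max(0.0, self.trust_score - penalty),
        )

raw = DataProduct("erp_invoices", [120, -5, 310])
clean = raw.derive("drop_negatives", lambda rs: [r for r in rs if r >= 0],
                   penalty=0.1)
print(clean.name, len(clean.lineage), clean.trust_score)
```

Because every derived product records the step that produced it, traceability travels with the data rather than living in a separate catalog; a real implementation would persist this metadata and score trust from validation results rather than fixed penalties.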

The business need, not the engineering myth

Executives are rightfully fatigued by transformation programs. They have seen modernization initiatives expand scope, escalate cost, and ultimately underdeliver. They have heard the promises of clean data, enterprise data platforms, microservices, cloud migration, and AI-readiness. However, when agentic AI begins interacting with these ecosystems, the fragility of the entire operation becomes instantly visible.

Why? Because most data modernization initiatives have been driven by tool-centric solutions rather than architecture-centric capabilities. Traditional data governance is about oversight, not the enablement and reuse now demanded by emerging AI designs. Often, legacy methods kept audit and lineage contained within siloed processes and bridged them with replicated data warehouses; extract, transform, load (ETL) systems; and application programming interface (API) protocols.

However, this tool-centric, legacy-enabled approach is the problem. We keep optimizing the wrong layers, and we keep modernizing the components.

As a result, we too often see that AI pilots succeed, but enterprise scaling fails. Or, that regulatory reporting improves marginally, but compliance costs increase. Or M&A integrations appear straightforward, but post-close data convergence drags on for years.

The gap between ambition and reality

As a solution, a data foundry approach corrects that imbalance by formalizing the factory-grade patterns required to support agentic AI systems. It becomes the production line for reusable data products, compliant semantics, and decision-aligned datasets. It also eliminates reinvention by institutionalizing repeatable structures; and, most importantly, it restores business leadership over AI outcomes, rather than relegating decision logic to engineering workstreams and emerging technologies.

As illustrated below, AI requirements and realities need to be tempered with business demands, organizational risks, and data agility capabilities (including skill sets) to achieve realistic roadmaps of action — not strategic aspirations.

[Chart: data core]

Today, the question isn’t whether organizations understand the importance of data, it’s whether leaders know how to build environments in which data becomes reusable, trustworthy, and ready for agentic AI. The issue, however, continues to be that our data cores — the architectural, operational, and standards ecosystems beneath all this — were not designed for continuous change.

Before they mobilize and execute against AI plans, business leaders need to answer the question: What business decisions are we trying to improve — and what data do these decisions actually require, today and tomorrow?

The organizations that will lead in the coming decade will do so not because they found the perfect technology stack, but because they built a reusable, continuously improving data foundation that can support AI, regulation, risk, and innovation simultaneously.

The question for leaders then becomes: Are we prepared to reinvent?

The work begins now — quietly and deliberately, across the data core where tomorrow’s competitive advantages will be created. The chart below illustrates the business-driven AI elements that must be addressed, and how the old sequence of system provisioning must be replaced, beginning with outcomes and ending with engineered AI tools.

[Chart: data core]

AI is an output — a capability that’s unlocked after the underlying data foundation becomes coherent, traceable, explainable, and aligned with business decisions. For leaders, the data core is no longer a back-office concern or one-off IT initiative. It is a strategic asset that can shape speed, resilience, and trust across the organization.


In the next post in this series, the author will explain how to architect an integrated data core, particularly through the AXTent architectural framework for regulated organizations. You can find more blog posts by this author here

]]>
Scaling Justice: How technology is reshaping support for self-represented litigants /en-us/posts/ai-in-courts/scaling-justice-technology-self-represented-litigants/ Fri, 23 Jan 2026 15:31:24 +0000 https://blogs.thomsonreuters.com/en-us/?p=69124

Key takeaways:

      • From scarcity to abundance — Technology has shifted the challenge in access to justice from scarcity of legal help to issues of accuracy, governance, and effective support. AI and digital tools now provide abundant legal information to self-represented litigants, but they raise new questions about reliability, oversight, and alignment with human needs.

      • The necessity of human-in-the-loop — Human involvement remains essential for meaningful resolution. While AI can explain procedures and guide users, real support often requires relational and institutional human guidance, especially for vulnerable populations facing anxiety, low literacy, or systemic bias.

      • One part of a bigger question — Systemic reform and broader approaches are needed beyond technological fixes because technology alone cannot solve deep-rooted inequities or the complexity of the legal system. Efforts should include prevention, alternative dispute resolution, and redesigning systems to prioritize just outcomes and accessibility.


Access to justice has long been framed as a problem of scarcity, with too few legal aid lawyers and insufficient funding forcing systems to be built in triage mode. This has been underscored by the unspoken assumption that most people navigating civil legal problems would do so without meaningful help, often because their issues were not compelling or lucrative enough to justify legal representation.

This framing no longer holds, however. Legal information, once tightly controlled by legal professionals, publishers, and institutions, is now abundantly available. Large language models, search-based AI systems, and consumer-facing legal tools can explain civil procedure, identify relevant statutes, translate dense legalese into plain language, and generate step-by-step guidance in seconds.

Increasingly, self-represented litigants are actively using these tools, whether courts or legal aid organizations endorse them or not. Katherine Alteneder, principal at Access to Justice Innovation and former Director of the Self-Represented Litigation Network, notes: “This reality cannot be fully controlled, regulated out of existence, or ignored.”

And as Demetrios Karis, HFID and UX instructor at Bentley University, argues: “Withholding today’s AI tools from self-represented litigants is like withholding life-saving medicine because it has potential side effects. These systems can already help people avoid eviction, protect themselves from abuse, keep custody of their children, and understand their rights. Doing nothing is not a neutral choice.”

Thus, the central question is no longer whether technology can help self-represented litigants, but rather how it should be deployed — and with what expectations, safeguards, and institutional responsibilities.

Accuracy, error & tradeoffs

The baseline capabilities of general-purpose AI systems have advanced dramatically in a matter of months. For common use cases that self-represented litigants most likely seek — such as understanding process, identifying next steps, preparing for hearings, and locating authoritative resources — today’s frontier models routinely outperform well-funded legal chatbots developed at significant cost just a year or two ago.


The central question is no longer whether technology can help self-represented litigants, but rather how it should be deployed — and with what expectations, safeguards, and institutional responsibilities.


These performance gains raise important questions about the continued call for extensive customization to deliver basic legal information. However, performance improvements do not eliminate the need for careful design. Tom Martin, CEO and founder of LawDroid (and columnist for this blog), emphasizes that “minor tweaking” is subjective, and that grounding AI tools in high-quality sources, appropriate tone, and clear audience alignment remains essential, particularly when an organization takes responsibility and assumes liability for the tool’s voice and output.

Not surprisingly, few topics in the legal tech community generate more debate than AI accuracy, but it cannot be evaluated in isolation. Human lawyers make mistakes, static self-help materials become outdated, and informal advice from friends, family, or online forums is often wrong. Models should be evaluated against realistic alternatives, especially when the alternative is no help at all.

Off-the-shelf tools now perform surprisingly well at generating plain-language explanations, often drawing on primary law, court websites, and legal aid resources. In limited testing, inaccuracies tend to reflect misunderstandings or overgeneralizations rather than pure fabrication. And while these are errors that are still serious, they may be easier to detect and correct with review.

Still, caution is key, in part because AI tends to tell people what they want to hear in order to keep them on the platform. Claudia Johnson of Western Washington University’s Law, Diversity, and Justice Center asks what an acceptable error rate is when tools are deployed to vulnerable populations and reminds organizations of their duty of care. Mistakes, especially those known and uncorrected, can carry legal, ethical, and liability consequences that cannot be ignored.

Knowledge bases are infrastructure, but more is needed

Vetted, purpose-built, and mission-focused solution ecosystems are emerging to fill the gap between infrastructure and problem-solving. The Justice Tech Directory from the Legal Services National Technology Assistance Project (LSNTAP) provides legal aid organizations, courts, and self-help centers with visibility into curated tools that incorporate guardrails, human review, and consumer protection in ways that general-purpose AI platforms do not.

Of course, this infrastructure does not exist in a vacuum. Indeed, these systems address the real needs of real people. While calls for human-in-the-loop systems are often framed as safeguards against technical failure, some of the most important reasons for human involvement are often relational and institutional. Even accurate information frequently fails to resolve legal problems without human support, particularly for people experiencing anxiety, shame, low literacy, or systemic bias within courts.


Not surprisingly, few topics in the legal tech community generate more debate than AI accuracy, but it cannot be evaluated in isolation.


A human in the loop can improve how self-represented litigants are treated by clerks, judges, and opposing parties. Institutional review models often provide this interaction at pre-filing document clinics, navigator-supported pipelines, and structured AI review workshops that integrate human judgment and augment human effort rather than replacing it.

Abundance and the limits of technology

Information does not automatically produce equity. Technology cannot make up for existing, persistent systemic issues, and several prominent voices caution against treating AI as a workaround for deeper system failures. Richard Schauffler of Principal Justice Solutions notes that the underlying problem with the use of AI in the legal world is that our legal process is overly complicated, mystified in jargon, inefficient, expensive, and deeply unsatisfying in terms of justice and fairness — and using AI to automate that process does not alter this fact.

Without changes at the courthouse level, upstream technological improvements may not translate into just outcomes. Bias, discrimination, and resource constraints cannot be solved by technology alone. Even perfect information from a lawyer does not equal power when structural inequities persist.

Further, abundance fundamentally changes the problem. As Alteneder notes, rather than access, the primary problem now is “governance, trust, filtering, and alignment with human values.” Similar patterns are seen in healthcare, journalism, and education. Without scaffolding, technology often widens gaps, benefiting those with greater capacity to interpret, prioritize, and act. For self-represented litigants, the most valuable support is often not answers but navigation: what matters most now, which paths are realistic, when to escalate, and when legal action may not serve broader life needs.

Focusing solely on court-based self-help misses an opportunity to intervene earlier, especially on behalf of self-represented litigants. AI-enabled tools have the potential to identify upstream legal risk and connect people to mediation, benefits, or social services before disputes harden.


You can find more insights about how courts are managing the impact of advanced technology from our Scaling Justice series here

]]>
Tech use rising in global trade operations, but key gaps remain /en-us/posts/corporates/tech-rising-in-global-trade/ Thu, 22 Jan 2026 11:50:07 +0000 https://blogs.thomsonreuters.com/en-us/?p=69117

Key insights:

      • Tech adoption is rising, but critical gaps remain — Adoption of technology has surged over the last year but several important trade functions are still not widely automated.

      • Satisfaction levels with technology remain low in some areas — Trade leaders are generally satisfied with the gains they see from their use of technology, but lack of integration is hindering supply chain visibility efforts.

      • Organizations are increasing their technology investments — Technology budgets are expected to continue to grow this year.


The recently published 2026 Global Trade Report, from the Thomson Reuters Institute, discussed how the use of technology is rising across corporate trade departments, with trade professionals much more likely this year to report that their departments have deployed automated tools. In addition, many were even exploring the use of advanced technologies such as AI and blockchain.

In the report, the percentage of trade professionals that characterized their departments as being early adopters or behind the curve (meaning they’re still using manual systems) dropped significantly.

However, amid the rapidly growing adoption of technology, key challenges and gaps remain, the report showed.

Urgency to improve efficiency

Increasing efficiency in trade operations is a high priority, as workloads continue to increase. More than half (56%) of respondents said that workloads and overtime requirements have grown over the last year as a result of increased tariff activity and trade complexity. Respondents also cite more complex reporting and documentation requirements. And an even higher percentage said they expect those pressures to increase over the next year. As a result, nearly half (49%) of trade professionals surveyed report increased stress on their teams.

In addition, trade departments are taking on a more strategic role in their organizations. The report noted that amid heightened trade and tariff volatility, trade professionals are more involved in executive decision-making and are expanding their scope of responsibilities, including greater influence over procurement decisions. However, these added roles and responsibilities also mean that trade departments must take on additional workflows.

Fortunately, technology can be a critical force multiplier to help manage these changes. While about half of respondents said they expect increased budget allocations to hire additional staff over the next year, trade departments are increasingly looking to technology to help automate workflows and increase efficiency. In addition, automating routine tasks for compliance and reporting can free up staff time to focus on more complex tasks such as using advanced analytics and engaging in strategic planning.

It’s not surprising, then, that while most respondents (52%) anticipate more budget for additional headcount this year, an even higher percentage (65%) said they expect more resources to be budgeted for technology solutions. This positions trade departments to reap the best of both worlds — more staff and greater use of technology to improve efficiency across the department.

Continuing technology gaps

Most trade departments, according to respondents, have now adopted trade and supply chain data analytics, automation for enterprise resource planning, supply chain management, and supply chain visibility. However, significant technology gaps remain, with relatively few respondents saying their departments have deployed tools and platforms to allow for global trade management (32%), managing tariff changes (7%), and managing classification changes (4%).

As a result, satisfaction with tech capabilities often remains modest at best. Fewer than one-in-five respondents report being very satisfied with the impact of technology on workflow efficiency for trade and supply chain management, keeping up with regulatory changes, or improving their ability to glean insights from trade data in order to drive business decisions.

One major contributing factor is that four-in-ten respondents said they are not yet satisfied with their organization’s level of technology integration. This lack of integration hinders trade teams’ ability to maximize their use of existing systems to track and analyze data across various functions and geographies. This is increasingly important as businesses seek to improve visibility across their entire supply chain.

Thus, it’s not surprising that system integration is the top technology investment priority for the next year. An overwhelming 83% of respondents said this is a high- or medium-priority to help support informed decision-making.

Only about a quarter of trade departments have visibility across regions

[Chart: global trade]

— Thomson Reuters Institute, 2026 Global Trade Report

Modernizing trade technology

With 40% of organizations exploring emerging technologies such as AI and blockchain, and satisfaction levels remaining modest across currently deployed capabilities, a significant technology transformation opportunity exists. However, successful technology deployment requires strategic focus rather than adoption of the latest technologies simply for their own sake.

Trade leaders should focus their technology investments in several key areas:

Supply chain visibility platforms — Real-time tracking enables proactive rather than reactive management. Automated exception alerting, comprehensive visibility across multi-tier supply chains, and integration with other systems can create a solid foundation for data-driven decision-making.

Data analytics and predictive capabilities — The jump from 8% to 58% in the last year in respondents saying their organizations adopted and used trade and supply chain data analytics indicates widespread recognition of data’s strategic value. Organizations should invest in platforms that not only collect data but generate actionable insights through advanced analytics and machine learning. Predictive capabilities can anticipate disruptions before they occur, enabling preventive action rather than damage control.
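As a minimal illustration of what anticipating disruptions before they occur can mean in practice, the sketch below flags statistical outliers in historical shipment lead times. It is a deliberately simple stand-in (a z-score rule over hypothetical data), not the machine-learning approach a production platform would use:

```python
import statistics

def flag_disruptions(lead_times_days, threshold=2.0):
    """Flag shipments whose lead time deviates from the historical mean by
    more than `threshold` standard deviations -- a minimal stand-in for the
    automated exception alerting described above."""
    mean = statistics.mean(lead_times_days)
    sd = statistics.stdev(lead_times_days)
    return [i for i, t in enumerate(lead_times_days)
            if sd and abs(t - mean) / sd > threshold]

# Hypothetical lane history in days; the 30-day shipment is the anomaly
history = [12, 11, 13, 12, 14, 12, 30, 13, 12, 11]
print(flag_disruptions(history))  # [6] -- only the outlier shipment is flagged
```

A real deployment would layer richer signals (carrier, port congestion, weather) and a trained model on top, but the workflow shape is the same: score, threshold, alert, act before the disruption compounds.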

AI-assisted product classification — Product classification is time-consuming, error-prone when done manually, and yet, critical for compliance. AI systems have the potential to dramatically improve both efficiency and accuracy while freeing trade professionals to focus on more strategic work rather than routine tasks.
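One way an AI-assisted classification workflow can keep a human in the loop is sketched below: a classifier proposes a harmonized system (HS) code with a confidence score, and anything below a threshold is queued for a compliance specialist rather than auto-accepted. The keyword rules, codes, and threshold here are purely illustrative placeholders for a real model:

```python
def classify_product(description: str) -> tuple[str, float]:
    """Stand-in for an AI classifier returning an HS code and a confidence
    score. A production system would call a trained model; these keyword
    rules exist only to make the routing logic runnable."""
    rules = {"laptop": ("8471.30", 0.95), "t-shirt": ("6109.10", 0.90)}
    for keyword, result in rules.items():
        if keyword in description.lower():
            return result
    return ("unknown", 0.0)

def route(description: str, review_threshold: float = 0.85) -> dict:
    """Keep a human in the loop: auto-accept only high-confidence codes,
    queue everything else for a compliance specialist."""
    code, confidence = classify_product(description)
    if confidence >= review_threshold:
        return {"code": code, "status": "auto-accepted"}
    return {"code": code, "status": "needs-human-review"}

print(route("14-inch laptop, 16GB RAM"))     # high confidence: auto-accepted
print(route("hand-carved wooden figurine"))  # unmatched: routed to a specialist
```

The efficiency gain comes from the routing, not the model alone: specialists see only the ambiguous cases, which is where their judgment matters.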

More technology investments ahead

The recent surge in technology adoption is positioning corporate trade departments to increase efficiency and expand their capabilities. Despite numerous gaps depending on specific technology use, about half of trade leaders indicate they are already at least somewhat satisfied with the overall gains they are seeing because of their use of technology.

Despite the recent gains, however, significant gaps in technology adoption still remain. Fortunately, organizations are recognizing the importance and urgency of increasing their investments in technology, coupled with adding to trade department headcount.

While workloads and pressures continue to grow, the elevation of the trade department as a strategic partner to the business — along with growing involvement in decision-making at the executive level and increasing recognition of the trade function’s value to the business — suggests that organizations are likely to continue accelerating their investments in technology as an integral part of their growing commitment to supporting their in-house trade professionals.


You can download a full copy of the Thomson Reuters Institute’s 2026 Global Trade Report here

]]>
Hybrid intelligence: Ramping up human-focused power skills in an AI-enabled workplace /en-us/posts/sustainability/hybrid-intelligence/ Wed, 21 Jan 2026 19:03:17 +0000 https://blogs.thomsonreuters.com/en-us/?p=69097

Key highlights:

      • Human connection is now a competitive capability — Treat relationships as core infrastructure instead of cultural fluff by designing work to keep real collaboration, accountability, and regular face-to-face interaction at the center with AI in a supporting role.

      • Protect your judgment and meaning as “human-owned” — Start with independent frameworks and reasoning, then use AI to refine and stress-test; and schedule recurring “no-AI” blocks to keep analytical muscle and professional agency strong.

      • The winning model is hybrid intelligence — The standout professionals in 2026 will be those who are fluent in both human dynamics and AI-assisted workflows.


Professional services work fundamentally relies on judgment, trust, and relationships. Clients engage firms for confidence and strategic guidance, while a good reputation in this sector develops through the consistent delivery of high-quality counsel. While AI can enhance these capabilities, these technologies may also erode professional value if permitted to displace the distinctly human elements that differentiate exceptional service.

The imperative for 2026 is to maintain full professional capability by embracing human strengths while leveraging technological tools. Consistent application of the following practices will protect and develop the competencies that AI cannot replicate.

Build your human connections muscle

In the near future, professionals may spend more time interacting with AI systems than they do with colleagues. Over time, this creates opportunities to disengage from human interaction: AI systems remain consistently agreeable, perpetually available, and never introduce tension into professional discourse.

For time-constrained professionals, this predictability may appear advantageous; however, this convenience carries a substantial cost. In professional services, relationships constitute essential infrastructure rather than supplementary benefits. When professional interaction shifts from human to machine interface, social acuity diminishes as professionals lose exposure to subtle human dynamics. Critical developmental experiences — including the ability to manage discomfort, resolve misunderstandings, and navigate the productive friction that builds capacity for maintaining and repairing strained relationships — become scarcer.

To preserve human connection capacity with intention, implement these measures:

      • Prioritize work that requires genuine collaboration and shared accountability and keep AI as a supporting resource.
      • Establish regular face-to-face interaction, both virtual and in-person, with colleagues to invest in relationship-building conversations that extend beyond project deliverables and timeline discussions.
      • Actively engage in professionally challenging interactions, including those involving constructive feedback delivery and negotiation. These experiences maintain trust and prevent the gradual atrophy of human collaboration skills.

Protect your brain and your meaning at work

AI technologies offer substantial efficiency gains through automated drafting, summarization, and information analysis. However, excessive reliance on these capabilities may diminish the cognitive repetitions that maintain professional acuity. In professional services, intellectual capacity, which includes attention to detail and analytical reasoning, constitutes the primary asset. This capacity requires the ability to discern significance, interrogate underlying assumptions, and articulate complex tradeoffs with precision.

Delegating these cognitive tasks to AI systems daily may yield short-term efficiency gains and lower costs, but it also strips work of the nuanced judgment it once required. As a result, professional instincts may atrophy.

An additional consequence of AI overreliance involves the erosion of professional meaning and engagement. When AI systems generate the majority of intellectual output, professionals may risk becoming approvers rather than creators. Work devolves into review and authorization — a repetitive pattern that can lessen one’s connection to making a substantive professional contribution. Indeed, the role begins to resemble a production line of incremental validations rather than meaningful professional practice.

To avoid this, you should implement the following practices to preserve both intellectual rigor and a meaningful sense of agency over critical professional activities:

      • Integrate deliberate cognitive exercises into weekly routines — Initiate substantive work with independent analysis — by establishing frameworks, identifying priorities, and constructing logic — before employing AI to refine structure, enhance clarity, and stress-test reasoning. Subsequently, critically evaluate AI-generated output by identifying omissions, examining underlying assumptions, and assessing potential errors.
      • Establish dedicated periods for unassisted professional work — Schedule regular intervals for research, conceptual development, and drafting without AI support to ensure sustained development of analytical capacity and professional judgment.
      • Anchor work to meaning and outcomes — Identify work of particular professional significance and maintain direct engagement with these tasks, again without AI assistance. Regularly reflect on the tangible impact of contributions, including the delivery of client value and the support of colleagues, in order to better sustain meaningful connection to professional purpose.

Hybrid intelligence is the future

The most effective professionals in 2026 will be those who focus on their capacity to integrate human literacy with algorithmic literacy, a competency framework known as hybrid intelligence.

Human literacy remains the fundamental differentiator in professional services, encompassing the ability to interpret interpersonal dynamics, establish trust amid complexity, deliver constructive feedback with appropriate sensitivity, and maintain both self-awareness and relational intelligence.

Algorithmic literacy involves understanding the specific capabilities and limitations of AI tools, including honing a proficiency for output verification, tool evaluation, and sustained awareness of bias and risk considerations.

The combination of these two factors within hybrid intelligence can give professionals a potent way of fighting the accelerating cognitive deterioration and agency decay that some may experience with AI overuse.

Today, organizational mandates for AI adoption are becoming increasingly prevalent and will approach universality over the next few years. While firms compete through technological capability, competitive differentiation will ultimately derive from the human excellence of their professionals — a dynamic that will similarly shape individual career trajectories.


You can find out more about how a focus on power skills can help professionals in the workplace here

]]>
How private equity can accelerate technology & enable growth in accounting firms /en-us/posts/tax-and-accounting/pe-enable-tech-growth/ Mon, 05 Jan 2026 15:00:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=68912

Key takeaways:

      • Technology investment drives PE interest — Private equity firms provide patient capital for multimillion-dollar technology transformations that traditional partnerships struggle to fund.

      • Strategic focus over expansion — PE-backed firms are shifting from growth through breadth to growth through depth, eliminating underperforming service lines to concentrate resources on areas where they can win.

      • Competitive pressure is mounting — While most firms remain uninterested in PE transactions, well-capitalized competitors are pulling ahead in technology capabilities, talent attraction, and market positioning.


A competing accounting firm down the street just acquired its fifth firm this year. Another launched an AI-powered tax platform that can deliver work in hours instead of weeks. And a third is recruiting top talent with equity packages your partnership structure can’t match.

What do they have in common? Private equity backing.

Four years ago, when EisnerAmper announced its deal with TowerBrook Capital Partners — one of the earliest and largest forays of PE money into the tax, audit & accounting industry — most practitioners dismissed it as an anomaly. Today, roughly half of the top 25 accounting firms have completed or are pursuing PE transactions. This isn’t a trend — it’s a fundamental restructuring of the profession.

Why traditional partnerships are losing ground

Consider Citrin Cooperman after New Mountain Capital made its investment in 2021. In four years, Citrin Cooperman has acquired more than 20 accounting firms, expanding to 2,800 professionals across 27 offices. That’s strategic acceleration, not organic growth.

Traditional accounting firm partnerships face a structural problem — they can’t easily fund multimillion-dollar infrastructure buildouts. When firms need enterprise relationship intelligence systems, unified data architectures, or AI-enabled delivery models, where does the capital for these initiatives come from? Partner contributions? Bank loans? Retained earnings that take years to build up?

PE-backed competitors can deploy patient capital — money designed for long-term technology transformation without immediate return pressure. And the gap between what PE-backed firms and traditional partnerships can do is widening.

For example, here’s the efficiency paradox: Partners billing at $500 per hour spend significant time on work that should be automated at a $50-per-hour equivalent cost.

That’s not a cost problem — it’s a revenue capacity problem.
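The arithmetic behind that paradox is simple enough to work through. The $500 and $50 hourly figures come from the example above; the ten routine hours per partner per week are a hypothetical assumption added purely for illustration:

```python
# Figures from the example above; hours per week are a hypothetical assumption
PARTNER_RATE = 500      # partner billing rate, $/hour
AUTOMATED_COST = 50     # equivalent cost of automating routine work, $/hour
ROUTINE_HOURS = 10      # assumed routine hours per partner per week

weekly_automation_cost = ROUTINE_HOURS * AUTOMATED_COST     # $500 to automate
freed_billable_capacity = ROUTINE_HOURS * PARTNER_RATE      # $5,000 of partner time freed
net_weekly_capacity_gain = freed_billable_capacity - weekly_automation_cost

print(net_weekly_capacity_gain)  # 4500 -- per partner, per week
```

Framed this way, the question is not what automation costs but how much billable capacity is recovered, which is why the text calls it a revenue capacity problem rather than a cost problem.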

PE-backed firms liberate high-value talent, so they are then free to pursue high-value work. When automation and AI-driven tools handle the more routine tasks, partners can focus on complex client challenges, strategic advisory, and relationship building.

The strategy shift: Depth over breadth

The most counterintuitive transformation PE brings is the shift in strategic focus. Traditional firms pursue growth through breadth by launching practice areas because clients asked for them or competitors offer them. The result? A dozen service lines, with about half of them underperforming.

Instead, PE firms ask one simple question: Where does your firm have a right to win?

This PE-backed strategy eliminates hobby businesses — those practice areas that exist because they always have, not because they generate competitive returns. Instead, PE-backed firms concentrate their resources on fewer service lines, focusing on those at which firms genuinely excel. Thus, PE-backed firms are reducing service-line breadth while increasing depth, profitability, and market share.

Private equity firms’ interest and investment in the tax, audit & accounting industry isn’t by happenstance. PE firms have done their due diligence to understand the industry — and not just from firms’ perspective, but from that of their clients too.


PE-backed firms liberate high-value tax talent, so they are then free to pursue high-value work, leaving automation and AI-driven tools to handle the more routine tasks.


Private equity firms have spent millions of dollars studying the accounting industry, including not just the firms themselves but their clients as well, and analyzing competitors. The information they’ve gathered reflects a cultural shift that has been taking place — something that many firms themselves hadn’t noticed. This shift, from relationship-driven but assumption-based service models to data-informed decision-making, has helped PE-backed firms know which services clients value, which delivery models they prefer, and for which services they’ll pay premium rates. That intelligence has become a competitive advantage.

Further, PE-backed firms can offer equity incentives to next-generation leaders, which is something traditional partnerships struggle to match. PE-backed firms can provide clear career paths, sophisticated training, and professional development resources. As traditional firms ask young partners to buy in at barely affordable valuations, with unclear leadership paths and outdated technology, PE-backed firms are building employer brands that appeal to professionals who want cutting-edge technology and transparent advancement.

It’s not surprising which firm attracts the best talent.

The skepticism is real — and justified

Despite these benefits, the accounting profession remains skeptical. More than half of industry practitioners say PE isn’t on their radar, and another third aren’t interested, according to the recent Tax Firm Growth Report 2025 from the Thomson Reuters Institute.

Their concerns are legitimate. Two-thirds say they believe PE investment will negatively impact firm integrity and independence, according to the report. And these skeptical practitioners say they worry about culture, client relationships, and an emphasis on earnings over service quality.

Clearly, PE ownership does add complexity to auditor independence, regulatory compliance, and risk management. But PE firms are exceptionally risk-averse when investing in professional services, and the last thing they want is bad press or audit scandals. In fact, their risk management frameworks are often more sophisticated than those traditional partnerships maintain.

Yet, for accounting firms seeking growth but determined to stay independent, PE partnership isn’t the only path. Employee Stock Ownership Plans (ESOPs) offer tax advantages and an employee ownership structure while maintaining independence. Firms like BDO and Grassi have successfully implemented ESOPs to provide liquidity while keeping control in-house. Other alternatives include traditional financing, mergers between equals, minority capital deals, and targeted asset sales. Each has advantages and limitations.

The key insight: All of these alternatives require deliberate strategic action, and none involves standing still or maintaining the status quo.

The coming crossroads

The accounting profession is at a crossroads, and every firm will have to decide on its next move. PE-backed competitors are pulling ahead in technology utilization, market positioning, and talent acquisition. The window for firms to respond isn’t infinite.

Firms that delay action risk entering merger agreements or partnerships from weakened positions. Worse, they risk becoming acquisition targets, joining PE-backed platforms on terms dictated by necessity rather than choice.

Winners won’t be determined by capital structure alone, of course — they’ll be determined by execution speed and strategic clarity. However, PE investment can be a critical enabler in an industry facing unprecedented technological disruption and competitive pressure.

The fundamental question isn’t whether to embrace private equity; rather, it’s whether your firm can achieve necessary transformation speed and scale without it. Every firm leader must answer honestly, urgently, and with clear-eyed assessment of their competitive position and their competitors’ accelerating capabilities.

The profession has changed, and accounting firms have to decide whether they’ll be changing with it, or whether they’ll be changed by it.


For more on the impact of private equity in the tax, audit & accounting industry, you can access the recent Tax Firm Growth Report 2025 from the Thomson Reuters Institute here

Digital transformation’s impact on real-time tax oversight in Mexico /en-us/posts/government/real-time-tax-oversight-mexico/ Tue, 30 Dec 2025 14:16:12 +0000 https://blogs.thomsonreuters.com/en-us/?p=68899

Key takeaways:

      • Real-time oversight and strict compliance — Mexico’s SAT now requires digital platforms to provide real-time access to transaction data and withhold taxes at the source, with severe penalties, including service blocking, for non-compliance.

      • Major technological and operational demands — Platforms must invest in secure, scalable systems for data sharing, billing, and cybersecurity, and small businesses likely will face extra challenges adapting to these requirements.

      • New roles for legal and tax professionals — Lawyers and accountants will be essential in guiding businesses through compliance, privacy, and operational risks, as well as supporting technology integration and adapting to the demands of Mexico’s digital tax environment.


Mexico’s digital tax overhaul is more than a regulatory update — it’s a fundamental shift that will reshape how businesses in that country operate online. By granting the Tax Administration Service (SAT) real-time access to platform data, the government aims to curb tax evasion and strengthen collection in the digital economy. This means platforms like Amazon, Uber, Netflix, TikTok, DiDi, and Mercado Libre must now share transaction details as they happen, creating unprecedented compliance, technology, and operational challenges for companies and professionals alike.

Platforms must also withhold taxes at the source — 2.5% for income tax (ISR) and 8% for value-added tax (IVA). If a seller does not provide a tax ID number (RFC), the platform will withhold up to 20% of the payment; and, if the platform does not comply, SAT can block the service in Mexico until the problem is fixed. That means users will not be able to access the platform until it follows the law.
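The withholding mechanics described above can be sketched in a few lines. This is an illustrative sketch only — actual SAT rules are more nuanced and vary by activity, so the flat rates and the no-RFC behavior here are simplifying assumptions:

```python
# Illustrative sketch of the withholding rules described in the article.
# Assumptions (not official SAT logic): a flat 2.5% ISR + 8% IVA when the
# seller has an RFC on file, and a flat 20% withholding when they do not.

ISR_RATE = 0.025    # income tax withheld at the source
IVA_RATE = 0.08     # value-added tax withheld at the source
NO_RFC_RATE = 0.20  # withheld when the seller provides no tax ID (RFC)

def withholding(amount: float, has_rfc: bool) -> float:
    """Amount the platform must withhold from one payment to a seller."""
    rate = (ISR_RATE + IVA_RATE) if has_rfc else NO_RFC_RATE
    return round(amount * rate, 2)

print(withholding(1000.0, has_rfc=True))   # 105.0
print(withholding(1000.0, has_rfc=False))  # 200.0
```

Even this toy version shows why the RFC matters so much to sellers: on the same 1,000-peso sale, the missing tax ID nearly doubles the amount withheld.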

The goal of all this is to make tax collection fair and stop fake invoices and false transactions. The law also adds criminal penalties; now, selling fake tax documents online can lead to two to nine years in prison.

These new tax measures also raise questions about compatibility with the United States-Mexico-Canada Agreement (USMCA or T-MEC), because some proposals — such as increased data access and stricter penalties for digital platforms — could conflict with the treaty’s provisions on cross-border data flows and platform liability.

Indeed, this shift is part of a wider digital transformation in Mexico, as seen not only with the new biometric CURP for identity verification, but also with SAT’s adoption of AI-driven smart auditing — both of which bring new opportunities and challenges for compliance, security, and public trust.

Technological impact on companies

These latest rules mean big changes for tech systems. Platforms must create secure connections that give SAT access to their data, such as APIs that send transaction details in real time.

Companies will need stronger cybersecurity policies because opening a permanent link to SAT creates risks, especially considering the high value of the data that will flow through the system en masse. At a minimum, businesses will need to invest in stronger encryption to protect data, authentication systems to control access, and monitoring tools to detect unusual activity.

Platforms also need to update their billing systems. Every sale must include the correct tax withholding and generate a digital invoice (CFDI). For larger platforms that process millions of transactions daily, this means building high-capacity systems to avoid delays or errors. These platforms will also need data pipelines that can handle the huge volumes of information and, in turn, send it to SAT without slowing down services on either side.
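As a rough illustration of one step in such a pipeline, the sketch below serializes a single transaction into a report payload. The field names and the JSON shape are assumptions for demonstration only; SAT’s actual CFDI invoices are structured XML documents with far more required fields:

```python
# Hypothetical sketch of a real-time reporting step. Field names and the
# JSON payload shape are illustrative assumptions, not SAT's real schema.
import json
from datetime import datetime, timezone

def build_transaction_report(seller_rfc: str, amount: float, withheld: float) -> str:
    """Serialize one platform transaction into a payload for reporting."""
    payload = {
        "seller_rfc": seller_rfc,
        "amount": amount,
        "withheld": withheld,
        "reported_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

# A real pipeline would queue thousands of payloads like this per second
# for delivery to the tax authority without blocking the sale itself.
print(build_transaction_report("XAXX010101000", 1000.0, 105.0))
```

The design point the article raises is visible even here: because every sale produces a report, the serialization and delivery path must be asynchronous and horizontally scalable, or the compliance layer itself becomes the bottleneck.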

Small companies and startups may face extra challenges. They might not have the money or staff to make these changes quickly, and they will likely require the assistance of technology providers or consultants to implement new solutions such as compliance-as-a-service and automated tax reporting software.

Challenges and opportunities for tax and legal professionals

For lawyers, these rule changes will create new work areas. Companies will need legal advice to comply with the new rules and protect user privacy. Lawyers, for example, can help draft policies, negotiate limits on data sharing, and design compliance programs.

There will also be litigation opportunities. Many argue that real-time access could violate privacy rights and even the Mexican Constitution, making legal challenges by companies likely. However, due to the recent amendments to the Amparo Law, many of these lawsuits could be frustrated at the outset: the new Amparo requirements demand a claim of direct and personal harm and impose stricter limits on judicial suspensions, making it harder for platforms to obtain effective protection against real-time monitoring.

For accountants and tax advisors, the challenge is operational. They must help businesses manage the new tax withholdings and keep accurate records. Many smaller businesses, especially in retail, will need help registering with SAT, issuing invoices, and recovering taxes withheld. Accountants will also need to plan for their clients’ cash flow, because the withholdings could reduce liquidity.

Both professions are likely to see more demand for their respective services. Lawyers will focus on compliance and defense matters, while accountants will handle routine tax activities; however, both will be involved in technology integration. Professionals who combine legal or tax knowledge with these needed tech skills will have a big advantage.

Adapting to Mexico’s real-time tax landscape

Real-time tax monitoring is a major shift for Mexico’s digital economy. It aims to increase tax collection and reduce fraud, but it also brings big risks and costs. The success of this fiscal change depends on balance: authorities must ensure strong security and clear limits on data access, and they should also offer education and training to help smaller enterprises with fewer resources navigate this turbulence.

If implemented well, however, this system could make Mexico’s tax collection more efficient and fairer. If not, these changes could lead to privacy violations, higher costs, and even less participation in the digital economy by smaller entities.

Indeed, Mexico is entering new territory with these rule changes, and the world will be watching carefully as this could become a model for other countries’ digital tax compliance — or it could become a cautionary tale of what happens when technology and regulation collide without enough safeguards.


You can find out more about the regulatory and legal issues impacting Mexico here

Impact of AI on critical thinking: Challenges and opportunities for lawyers /en-us/posts/sustainability/ai-impact-critical-thinking/ Mon, 29 Dec 2025 14:04:00 +0000 https://blogs.thomsonreuters.com/en-us/?p=68783

Key insights:

      • Cognitive offloading is a significant risk — The correlation between increased AI usage and decreased critical thinking, known as cognitive offloading, poses a threat to effective legal practice, especially with the rise of autonomous agentic AI.

      • Agentic AI risks and opportunities — The next generation of agentic AI poses significant challenges to lawyers’ critical thinking skills, but it also offers opportunities for lawyers to enhance their analytical rigor and human insight.

      • Agentic AI can enhance critical thinking when properly leveraged — When designed by lawyers, for lawyers and used to augment human judgment in legal workflow tasks — such as discovery, contract analysis, and drafting — agentic AI can improve efficiency, deepen analysis, and allow legal professionals to focus on higher-value critical thinking tasks.


The legal profession is at a critical juncture as AI becomes increasingly sophisticated. Recent research has uncovered a troubling correlation between the use of AI and the decline in critical thinking abilities among legal professionals. This phenomenon, known as cognitive offloading, threatens the very foundation of effective legal practice.

Studies have shown a clear pattern linking AI use, cognitive offloading, and critical thinking. According to recent research, there is a notable correlation between increased AI usage and diminished critical thinking performance. Moreover, as people offload more mental work to AI tools, their critical thinking scores tend to be lower. While correlation does not necessarily imply causation, the pattern is strong enough to warrant proactive measures to safeguard critical thinking skills.

The findings from the study have implications for lawyers. First, it is essential to design workflows that ensure attorneys retain ownership of problem framing, authority weighting, and strategic judgment. Human checkpoints should be inserted at key decisions, and transparent evidence trails should be maintained. For junior lawyers, it is crucial to preserve desirable difficulty reps — basically, the baseline skill-building experience — before they consult AI. By pairing these guardrails with outcome tracking, law firms can harness AI’s speed and scale while minimizing the risks associated with cognitive offloading.
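The guardrails described above — human checkpoints at key decisions plus a transparent evidence trail — can be loosely sketched in code. Everything here (names, structure, the `Checkpoint` class) is a hypothetical illustration of the pattern, not any vendor’s actual implementation:

```python
# Hypothetical human-in-the-loop gate: AI output cannot advance to the
# next workflow stage until a named attorney reviews it, and every
# decision is appended to an evidence trail for later audit.
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    evidence_trail: list = field(default_factory=list)

    def review(self, ai_output: str, reviewer: str, approved: bool) -> bool:
        """Record the human decision and report whether work may proceed."""
        self.evidence_trail.append(
            {"output": ai_output, "reviewer": reviewer, "approved": approved}
        )
        return approved

gate = Checkpoint()
ok = gate.review("Draft motion summary...", reviewer="A. Attorney", approved=True)
print(ok, len(gate.evidence_trail))  # True 1
```

The point of the pattern is the audit trail as much as the gate: because every approval or rejection is recorded with the reviewer’s name, firms can pair these checkpoints with outcome tracking, as the study’s implications suggest.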

Risks increase with agentic AI

The next wave of AI-powered legal tech involves agentic AI, which operates as autonomous agents. These agents can plan and execute complex workflows independently, make real-time decisions, and adapt strategies without constant human input. This autonomy intensifies cognitive offloading risks by enabling workflow automation beyond human oversight, strategic cognitive offloading, and a magnified black-box problem. (Basically, a black box is a system whose internal workings are hidden: users may know what goes in and what comes out, but not how the system arrives at its decisions.)




The autonomous nature of agentic AI creates unprecedented professional responsibility challenges, including supervision standards, competence requirements, and explaining AI-developed strategies to clients. The legal profession faces significant challenges that could accelerate skills atrophy, such as new attorneys missing opportunities to develop foundational analytical skills, lawyers becoming dependent on AI, and AI handling strategic planning.

To mitigate the risks associated with cognitive offloading, legal professionals can leverage agentic AI tools designed to enhance critical thinking. For instance, AI-driven legal research and analysis platforms can make every step of the legal workflow more transparent, testable, and adversarially robust. These tools use custom-trained, agentic AI to produce transparent, step-by-step research notes and comprehensive reports that present arguments on both sides.

Illuminating examples of critical thinking skills

Agentic AI is transforming legal practice by enhancing critical thinking skills through various applications, and these innovative uses of AI not only improve efficiency but also augment human judgment. This in turn enables lawyers to focus on higher-value tasks that require critical thinking, creativity, and nuanced understanding. Several examples illustrate how agentic AI can enhance critical thinking in legal practice, such as:

      • Discovery — Autonomous analysis engines have uncovered patterns that traditional keyword searches missed. In one commercial litigation case, an agent found subtle shifts in executive language precisely around the period of alleged misconduct. The agent was able to explain why those patterns mattered and then tied each inference to source documents.
      • Contract analysis — In M&A diligence, agentic AI examined hundreds of legacy agreements and flagged indemnification variants that created potential exposure issues. The agent’s roughly 94% accuracy and transparent reasoning supported a targeted remediation strategy that averted post-closing liability.
      • Drafting workflows — Expert-designed, multi-step workflows assemble relevant know-how, generate first drafts to specification, and require counterarguments and verification before stylistic polish. This approach has been shown to reduce review time by roughly 63% and time spent on legal know-how tasks by about 10%.

As we are learning, agentic AI strengthens core litigation work by preserving human judgment while expanding pattern detection, accelerating theory testing, and deepening client advocacy. By handling comprehensive case law analysis and factual pattern identification, agentic AI frees litigators to develop creative legal theories, anticipate opposing strategies, and craft nuanced arguments.

Thus, to better elevate critical thinking in legal work, it is essential to use AI that is designed by lawyers, for lawyers. Domain-specific AI legal assistants provide nuanced insights that inform sharper, more strategic decisions. And expert-guided analytical workflows support comprehensive analysis without encroaching on professional judgment, ensuring that attorneys can interrogate sources confidently and build arguments on solid ground.

By embracing agentic AI as a collaborative counterpart, legal professionals can heighten analytical rigor and human insight — the very qualities that make legal practice both powerful and purposeful. As opportunities expand, so does the potential for creating more positive impact for clients, engaging in complex problem-solving, and advancing access to justice for more people.


You can find out more about the impact of AI and other advanced technologies on the legal profession here
