Corporate Talent Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/topic/corporate-talent/ Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers. Agentic AI following GenAI’s growth trajectory in legal, but with unique oversight challenges, new report shows /en-us/posts/technology/agentic-ai-oversight-challenges/ Thu, 09 Apr 2026 08:45:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=70278

Key takeaways:

      • Agentic AI poised for adoption uptick — Agentic AI is following GenAI’s rapid adoption in the legal industry, with less than 20% of firms currently implementing agentic systems but half planning or considering adoption in the near future, according to a new report.

      • Adoption depends on human oversight answers — Legal professionals are generally optimistic about agentic AI’s potential, but successful adoption depends on explicit guidance about human oversight and the lawyer’s role in maintaining ethical standards.

      • Time to retool AI education? — Agentic AI’s increased autonomy introduces new oversight and ethical challenges for law firms, making targeted education and clear guidance essential to understanding the differences from GenAI.


Over the past several years, law firms and corporate legal departments have turned towards generative AI en masse. At the beginning of 2024, just 14% of all law firms and legal departments featured an enterprise-wide GenAI tool. Just two years later, that number had already risen to 43% of all firms and departments, according to the 2026 AI in Professional Services Report, from the Thomson Reuters Institute (TRI). For large law firms or legal departments, those percentages — not surprisingly — are beginning to approach 100%.

With GenAI adoption now this widespread, legal industry leaders are turning their attention to two primary initiatives. One, of course, is how to get the most out of the AI tools they already have — a task that is proving a bit elusive. Currently, fewer than 20% of lawyers say their organizations measure AI’s return on investment, and most corporate lawyers say they have no idea how their outside law firms are approaching AI. Thus, instituting not just AI tools but also an AI strategy is the second top priority for law firms and corporate legal departments in 2026 and beyond.

However, even as the legal industry reaches a tipping point in adopting GenAI tools, technology innovation continues unabated. Agentic AI has emerged as the next wave of innovation that could change how lawyers work on a daily basis, offering a way to autonomously complete multi-step tasks. For example, agentic AI systems are already being built for the legal industry that independently research a regulation or law, draft a document based on the findings, identify pitfalls, and revise the document, with stops for human guidance instituted only as desired.
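The multi-step workflow described above — research, draft, flag pitfalls, revise, with human checkpoints only where desired — can be sketched as a simple pipeline. This is a hypothetical illustration, not any vendor’s actual implementation; all step names and logic are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentStep:
    """One step in a hypothetical agentic legal workflow."""
    name: str
    run: Callable[[str], str]
    needs_human_review: bool = False  # pause for lawyer sign-off here

def run_pipeline(steps, document=""):
    """Execute steps in order, flagging configured human checkpoints."""
    for step in steps:
        document = step.run(document)
        if step.needs_human_review:
            # A real system would block here until a lawyer approves.
            print(f"[checkpoint] human review requested after: {step.name}")
    return document

# Illustrative steps mirroring the workflow in the text (all hypothetical)
steps = [
    AgentStep("research regulation", lambda d: d + "findings; "),
    AgentStep("draft document", lambda d: d + "draft; ", needs_human_review=True),
    AgentStep("identify pitfalls", lambda d: d + "pitfalls flagged; "),
    AgentStep("revise document", lambda d: d + "revised", needs_human_review=True),
]

print(run_pipeline(steps))  # → findings; draft; pitfalls flagged; revised
```

The key design lever is `needs_human_review`: dialing it down increases autonomy, which is precisely the oversight trade-off the report highlights.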

According to the AI in Professional Services Report, the legal industry is already making headway towards implementing agentic AI systems. For agentic AI to truly take hold in legal, however, lawyers still require more education around not only how it differs from the GenAI systems they already have in place, but also when and where human intervention needs to occur within an agentic system.

The early stages of agentic AI

Examining current agentic AI adoption for the legal industry almost takes one back in time — two years, to be exact. Following the public release of GenAI in late-2022, many legal industry organizations spent 2023 evaluating and experimenting with AI systems, usually with a small working group of interested guinea pigs. As a result, only 14% of survey respondents said their law firms or corporate legal departments were engaged in organization-wide GenAI rollouts at the start of 2024. However, more than half of respondents said their organizations expected to be rolling out large-scale GenAI systems over the next 1 to 3 years. The intervening two years since then have proved that prediction to be largely true.

Agentic AI usage in the first half of 2026 looks largely similar to GenAI in 2024. The legal industry started to experiment with agentic AI at the beginning of 2025, with an eye towards actual implementation in 2026 and beyond (particularly as legal software providers began to integrate agentic systems into their own products). As such, fewer than 20% of recent survey respondents say their organization is engaged in widespread agentic AI adoption, while about half say their organization is either planning to use or considering whether to use agentic AI in the near future.


By and large, lawyers feel positive about the agentic AI movement. When asked about their sentiment towards agentic AI, 51% of legal industry respondents said they felt excited or hopeful, while just 19% said they felt concerned or fearful. Further, about half (47%) said they actively believe agentic AI should be used for legal work, while 22% felt it should not, with the remainder saying they were unsure. These figures largely track with the sentiments expressed about GenAI in 2024, which have only grown over time from about 50% positive two years ago to two-thirds of all legal professionals feeling positive currently.

This all lends further credence to a rise in agentic AI usage similar to what law firms and corporate legal departments experienced with GenAI over the course of 2024 and 2025. Indeed, when asked when they expect agentic AI to be a central part of their workflow, few have baked agentic systems into their daily work currently, but a majority of legal industry respondents expect it to be central within the next 3 to 5 years.


The unique barriers of agentic AI adoption

Agentic AI does differ from GenAI in one crucial area that may limit its growth potential within the legal industry, however — autonomy. By and large, GenAI systems operate on a back-and-forth basis: Users provide the tool a prompt, receive its output, and then iterate back-and-forth from there. Agentic AI is intended to be more automated by design, only requiring human input at pre-determined points in the process. And that makes some lawyers understandably nervous.

When asked why they might feel hesitant about using agentic AI for legal tasks, the most common answer was a general fear of the unknown, but the second most common answer dealt with the need for careful monitoring and oversight. In fact, some respondents said they were excited about GenAI, but more cautious about agentic AI’s potential.

“Agentic AI, while exciting, to me removes oversight a step too far,” said one such lawyer from a US law firm. “I like the idea of prompting and reviewing a result. It is something else to have a machine have so much autonomy in the actual doing of a thing and potentially acting on my behalf without that very concrete review.”


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.


An assistant GC at a US company also pointed to potential privacy and security concerns, adding: “The fact that agentic AI operates in a much more autonomous way, with a lack of control from the user, means there are many unknowns that are hidden beneath the process.”

For law firm and corporate legal department leaders looking to implement agentic AI systems in their practice, this means re-thinking what AI education and training will mean moving forward. Beyond that, legal AI educators will also need to pinpoint — and perhaps over-explain — those specific instances in which human oversight needs to occur in agentic systems. More autonomous does not mean fully autonomous, and particularly for lawyers with ethical duties attached to their work product, lawyer oversight will in fact be a necessary part of any agentic system.

For law firm or legal department leaders, that means finding the right balance between efficient workflows and human intervention will be key to agentic AI adoption. And those organizations that best communicate human-in-the-loop expectations to their professionals up-front will be rewarded with broader and more reliable adoption.

Clearly, lawyers feel positive about the agentic AI future, after all. They just need the lawyer’s role in this new paradigm spelled out explicitly.

“Agentic AI is powerful, but its moral compass must come from humans,” one UK law firm barrister noted aptly. “Lawyers are trained to safeguard fairness, rights, and the rule of law — principles that should guide how AI is designed, governed, and deployed. Hope lies in our ability to shape AI through these values for fairer values for society as a whole.”


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here

From emerging player to contender: How Latin America can compete in the global AI race /en-us/posts/technology/latam-ai-investment/ Mon, 06 Apr 2026 11:57:46 +0000 https://blogs.thomsonreuters.com/en-us/?p=70259

Key takeaways:

      • Strategic collaboration is becoming a defining strength for the region — Latin American organizations are realizing that progress in AI accelerates when they combine forces by linking industry expertise, academic talent, and public‑sector support.

      • AI initiatives rooted in real local challenges are gaining global relevance — By developing solutions grounded in the region’s own structural needs, whether in infrastructure, finance, agriculture, education, or mobility, many LatAm firms are producing technologies that are both highly impactful and naturally scalable.

      • Demonstrating clear outcomes is becoming fundamental — Organizations that show concrete operational improvements, measurable efficiencies, or stronger customer outcomes are strengthening their position with investors and partners.


In recent years, Latin America has experienced significant growth in AI-related investment, yet the region still accounts for only a small share of global AI funding. This is strikingly low given that the region makes up around 6.6% of global GDP, highlighting the region’s opportunity to scale AI initiatives even further. Although there are notable differences among countries, Mexico and Brazil — the two largest LatAm economies — stand out for their volume of AI projects and funding, followed by other nations such as Chile, Colombia, and Argentina.

By recognizing the region’s strengths — which include cost-effective operations, access to data, clean energy, and public support — the region’s businesses can better position themselves and design strategies to draw in international investors that may be increasingly seeking promising locations for AI development.

Lessons from LatAm’s AI success stories

Latin America has produced remarkable AI success stories that can serve as models to build confidence among investors. These cases — involving companies that attracted substantial investment and achieved growth — demonstrate valuable best practices that range from technological innovation to working with governments and corporations. Some of these best practices include:

Building strategic alliances

The journey of innovation rarely unfolds in isolation. At times, the presence of large, established companies, whether local industry leaders or multinationals, has served as a catalyst for AI projects. The experience of Kilimo, which specializes in AI-powered agricultural irrigation, proves it. Now, Kilimo is partnering with EdgeConneX, a data center company based in the United States, on a community project.

Academia, too, can be woven into this narrative. Collaborations with research centers or universities offer scientific credibility and connect ventures with emerging talent. In Mexico, AI startups often originate within university settings — such as computer vision projects from the National Autonomous University of Mexico (UNAM), for instance — and maintain agreements that sustain ongoing innovation and technical progress even with modest resources. And academic validations, whether in published papers or conference accolades, tend to resonate with foreign investors. Indeed, the emergence of this ecosystem that features early corporate clients and academic mentors frequently lends a distinctive appeal for those seeking investment.

Focusing on local problems with global impact

Within Latin America, certain issues prove especially relevant in situations in which AI solutions intersect with sectors renowned for regional strengths, such as fintech and financial inclusion, agrotech optimizing agriculture, and foodtech drawing on local ingredients. The experience of Chilean food startup NotCo — whose AI-designed plant-based products were developed locally and subsequently exported — suggests how innovations rooted in local context may generate broader attention.

By addressing needs in urban transport, education, mining, and related areas, local LatAm companies can provide access to homegrown data and users, which can further refine technology and open pathways for investors into similar emerging markets. When AI solutions respond to genuine pain points rather than mere novelty, momentum often builds more quickly, and the model finds validation among the investors that evaluate such opportunities.

Showing results and AI ROI early on

Questions linger for many executives about whether AI investments actually pay off. Evidence of clear metrics like cost savings, sales growth, or error reduction can prove persuasive, especially when complemented by success stories from local clients.

Recent studies show that companies are reporting measurable gains from AI, and such figures tend to reassure those considering investment by illustrating tangible improvements. Testimonials or independent validations, such as a university study, can further illuminate achievements.

The act of quantifying impact — whether in efficiency, revenue, or other relevant KPIs — has a way of transforming perceptions from uncertainty toward clarity.

Leveraging government incentives and collaborations

Many Latin American nations have put forth support programs for AI and tech projects, such as non-repayable funds, soft loans, and tax benefits for innovation, offered through national innovation programs across the region.

Public financing, when present, often acts as a stamp of validation for private investors. For example, this trust extended to Brazilian startups receiving Finep support for AI health projects, which in turn can shift perceptions among foreign venture capital firms. Engagement in government pilots, such as smart city initiatives or solutions for ministries, provides valuable exposure. In such contexts, public-private partnerships and incentives seem to act as quiet levers for growth and legitimacy.

Seeking smart and diversified financing

Financial strategies in Latin America have been shaped by the interplay of local and foreign capital. Local funds often bring insights and patience, while foreign funds may offer larger investments and global scaling experience. Ownership dilution sometimes accompanies the arrival of strategic investors, whose networks can prove invaluable. Programs like 500 Startups, Y Combinator, MassChallenge, and international competitions have ushered LatAm AI startups such as Heru, Rappi, Bitso, and Clip into new rounds of capital following increased exposure.

Efficiency in capital management, which can be demonstrated with lean burn rates and milestone achievement with limited resources, signals an ability to execute within the realities of LatAm, which may enhance the appeal for future investments. The cultivation of relationships and responsible stewardship of capital frequently matters as much as the funds themselves, suggesting that the value of mentorship, contacts, and reputation is often intertwined with deepening financial support.

Unlocking AI Investment

By applying these principles, Latin American companies have achieved a better position to attract AI investments to their projects and help position the region as a viable destination for technology capital. These recent experiences show that when a LatAm company combines innovation, talent, and strategy — while communicating its story well — it can win over global and local investors alike. Each of the best practices noted above is based on real lessons: international alliances (NotCo with US funds), leveraging incentives (Brazilian companies funded by Finep), talent formation (Santander and Microsoft programs), focus on ROI (successful use cases that convince boards), and more.

Latin America has challenges but also unique advantages. Companies that manage to navigate this environment intelligently will increase their chances of securing the financing needed to innovate and grow. By doing so, they will contribute to a virtuous circle in which each new success attracts more investment to the region and opens doors for the next generation of LatAm AI ventures.


You can find more about the challenges and opportunities in the Latin American region here

AI use and employee experience: New research reveals guidance gap in professional services /en-us/posts/technology/ai-guidance-gap/ Mon, 30 Mar 2026 11:23:47 +0000 https://blogs.thomsonreuters.com/en-us/?p=70090

Key takeaways:

      • Employees face contradictory messages or none at all — Nearly 40% of professionals surveyed report receiving conflicting directives about AI usage from clients and leadership, while half report no client conversations about AI have occurred at all.

      • Workers lack feedback on whether their AI efforts matter — Professionals who are experimenting with AI tools without knowing if their efforts are valued are left uncertain about whether investing time in developing AI skills is worth it.

      • Job displacement fears are rising — While employees remain cautiously optimistic about AI usage in their workplace, concerns about job displacement have doubled over the past year.


As generative AI (GenAI) tools flood into legal and accounting workplaces, organizations are deploying powerful technology without giving their employees clear directions on how to use it. Worse, some have received no guidance.

New research that underpinned the recent 2026 AI in Professional Services Report from the Thomson Reuters Institute (TRI) reveals a disconnect between AI availability and organizational guidance, which is creating confusion that may undermine both employee experience and the technology’s potential value. (The report’s data was gathered from surveys of more than 1,500 legal, tax, accounting, and compliance professionals across 26 countries.)

Employees navigate inconsistent AI policies or none at all

Approximately 40% of the professionals surveyed said they received contradictory guidance from clients and leadership about AI tool usage, with directives both encouraging and discouraging their use on projects and in RFPs. This ambivalence is slowing down decision-making at the front lines — a place in which AI could deliver the most value.

Equally concerning is the fact that half of professionals indicated that no conversations with clients about AI tool usage have taken place yet. And when discussions do occur, concerns about data protection and accuracy are the main topics.


This confusion extends to external relationships as well. More than two-thirds of corporate and government clients remain unaware of whether their outside professional service providers are even utilizing GenAI. And the majority of clients have provided no direction whatsoever to their outside law firms concerning AI use, respondents said.


Organizations often ignore what employees need to know

Perhaps most revealing is how organizations are measuring — or failing to measure — whether their AI investments are paying off. Almost half of respondents said their organizations are not measuring return on investment (ROI) at all. Among the minority (18%) of respondents who said their organizations do track ROI, the metrics they use tell a story about organizational priorities. The fact that internal cost savings and employee usage rates lead the list suggests a focus on efficiency over innovation or quality improvements.


This measurement vacuum has consequences for employee experience. Without clear success metrics, employees lack feedback on whether their AI experimentation is valued, discouraged, or even noticed. The absence of ROI frameworks also makes it hard to justify training investments or dedicated time that allows employees to develop AI fluency.

AI usage doubles while support systems fall behind

AI usage among professional service organizations has nearly doubled over the past year, and professionals are increasingly integrating these tools into their workflows, the report shows. Yet organizational infrastructure that could support this adoption surge lags badly. Most professionals said they expect GenAI to become central to their work within the next two years — but that may be happening without roadmaps from their employers.

In addition, notable barriers in employees’ usage of AI remain. When asked what barriers could prevent their organization from more widely adopting GenAI and agentic AI, almost 80% of professionals cited concerns over inaccurate responses. Other concerns included worries over data security, privacy, and ethical use. Most of these suggest an ongoing lack of trust in GenAI.


The tool landscape adds another layer of complexity. Publicly available tools dominate current usage, with more than half of respondents (57%) citing their use, while proprietary or industry-specific solutions remain largely in the consideration phase. This suggests employees are often self-provisioning AI tools rather than working within enterprise-supported ecosystems. This potentially opens organizations to increased risk exposure because of security gaps, compliance risks, and inconsistent quality.

Employees’ job displacement fears increasing

Despite these challenges, employee sentiment toward AI remains cautiously optimistic. More than half (57%) of respondents said they are either hopeful or excited about the future of GenAI in their industry. Clearly, employees see AI’s potential to enhance their efficiency, automate routine tasks, and free up their time for higher-value work.

At the same time, hesitation and concern among employees are rising, particularly around accuracy, job displacement fears, and the unknown implications of autonomous AI systems. Notably, concerns about job displacement have doubled over the past year, and this trend demands organizational attention and transparent communication about a workforce strategy to combat this concern.

What organizations need to do now

Organizational leaders who are serious about positive employee AI experiences need to step up their efforts to provide guidance to employees and gain the ROI that AI promises. Specific steps they can take include:

      • Draft clear and consistent guidance — Create explicit policies for employees about when AI use is encouraged, required, or prohibited. This includes client communication protocols, data-handling requirements, and escalation procedures for situations in which AI outputs seem questionable.
      • Develop and implement meaningful ROI metrics — Organizations must move beyond usage rates and cost savings as key success measurements. Tracking data points that capture quality improvements, time redeployed to strategic work, and client feedback on AI-enhanced deliverables presents a more comprehensive picture. Also, leaders need to share these metrics transparently in order to give employees an understanding of organizational priorities.
      • Invest in structured learning — The survey shows professionals are experimenting with dozens of different tools, from ChatGPT to specialized legal tech platforms. Organizations should curate recommended toolsets, provide hands-on training, and create communities of practice in which employees can share effective prompts and use cases.
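As a rough illustration of the second step — moving beyond cost savings alone — an ROI snapshot might combine financial return with the value of redeployed time and a quality signal. All metric names and figures below are hypothetical and are not drawn from the report.

```python
def ai_roi_snapshot(cost_savings, investment, hours_redeployed,
                    billable_rate, quality_delta):
    """Combine several hypothetical AI ROI metrics into one view."""
    financial_roi = (cost_savings - investment) / investment
    # Value of professional hours freed for higher-value, strategic work
    redeployed_value = hours_redeployed * billable_rate
    return {
        "financial_roi": round(financial_roi, 2),
        "redeployed_value": redeployed_value,
        "quality_delta": quality_delta,  # e.g., change in review scores
    }

snapshot = ai_roi_snapshot(cost_savings=120_000, investment=100_000,
                           hours_redeployed=400, billable_rate=300,
                           quality_delta=0.15)
print(snapshot["financial_roi"], snapshot["redeployed_value"])  # → 0.2 120000
```

The point of the sketch is that a single cost-savings number understates AI’s value: here the redeployed-time figure exceeds the direct financial return, which is exactly the kind of picture usage-rate metrics miss.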

Our data shows that the employee experience around AI adoption reveals a workforce that is hopeful but hungry for direction and concerned about job impacts. Leaders who implement these actions effectively are more likely to unlock the strategic value that AI promises while building the trust and competence needed for their organizations and their employees to thrive in an automated future.


You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here

The professional judgment gap: Tracing AI’s impact from lecture hall to professional services /en-us/posts/corporates/ai-professional-judgment-gap/ Thu, 05 Mar 2026 12:59:12 +0000 https://blogs.thomsonreuters.com/en-us/?p=69771

Key highlights:

      • Universities face pressure over pedagogy — Academic institutions are adopting AI as a reputational marker driven by market pressure rather than educational need, creating a risk for students who can work with AI but not independently of it.

      • Entry-level roles under threat — AI is being deployed most heavily to automate the grunt work of entry-level positions in which foundational professional skills are traditionally built through struggle and feedback.

      • K-shaped cognitive economy emerging — Experienced professionals with existing expertise are gaining efficiency from AI, while entry-level workers are losing access to skill-building experiences.


According to Harvard University’s Professional & Executive Development division, innovation is defined as a “process that guides businesses through developing products or services that deliver value to customers in new and novel ways.” Along this journey, professional judgment is exercised repeatedly to determine next steps at key stages.

Notably, the word technology is nowhere to be found in this definition — an absence that Dr. Heinsfeld, Assistant Professor of Learning Technologies at the University of Minnesota, has long found revealing. Instead, innovation is framed as creative problem-solving, contextual intelligence, and the ability to work across perspectives. Interestingly, Dr. Heinsfeld adds, none of these require constant automation. In fact, many of them are undermined by it.

However, AI adoption has the real potential to automate away the very experiences that build these capabilities, from university lecture halls to corporate offices. With early data already pointing in this direction, the risk that current approaches to AI use in universities and companies are engineering away innovation and professional judgment skills is real, notes Dr. Tanaka, Group Leader in AI Research at Harvard and NTT Research.

Indeed, some observers view AI as the largest unregulated cognitive engineering experiment in human history. Yet, unlike medical drugs that require years of approval and testing, AI systems are reshaping how millions of students think, learn, and make decisions without a comparable approval process or a shared framework for discussing any potential “side effects,” as Dr. Heinsfeld pointed out.



So, what happens when an entire generation of future employees learn to delegate judgment before they develop it? And what actions do universities and companies need to take now to avoid this reality?

Risks of universities adopting AI under pressure

For universities, AI “has become a reputational marker, and not adopting AI is framed as institutional risk, regardless of whether an educational case has been made or not,” says Dr. Heinsfeld, adding that this is being driven, in part, by market pressure rather than pedagogical need.

Already, companies can greatly influence universities as employers of new graduates, and as such, AI systems are currently being optimized for speed, agreeability, and accessibility to stimulate ongoing use. However, as Dr. Heinsfeld contends, as universities race to earn the label of being AI-ready without a careful, cautious, and detailed understanding of how AI may impact students’ cognitive processes, they risk damaging their reputations for pedagogical integrity.

In addition, the “data as truth” paradigm is a complicating factor, she says. Drawing on her research, Dr. Heinsfeld explains how data “is often framed as the idea of being a single source of truth based on the assumption that when collected and analyzed, it can reveal objective, indisputable facts about the world.” Indeed, this ubiquitous mindset across universities and corporations treats data — such as that used to train large and small language models — as objective and indisputable.

Yet this obscures critical decisions about what gets measured, whose perspectives are included, and what forms of knowledge are systematically excluded from AI systems. As Dr. Heinsfeld warns, when data becomes synonymous with truth, “knowledge is what is measurable and optimizable.” This narrows professional judgment to efficiency metrics rather than the interpretive depth, ethical reasoning, and cultural context that are essential for sound decision-making.

Judgment gap widens in workforce downstream

Under the current AI adoption approach, students could leave universities able to work with AI but not independently of it, a distinction emphasized by Dr. Heinsfeld. Like calculators, AI works as a tool only when foundational skills for its use exist first. Without this, graduates enter the workforce with a critical judgment gap that compounds as they move from college campuses into corporate roles.



Most worrisome is that AI is being deployed most heavily to automate precisely the entry-level roles where foundational professional skills are built, warns Dr. Tanaka. Indeed, this is exactly the type of grunt work that teaches judgment through struggle and feedback. Over time, overuse of AI will result in quality being sacrificed because critical evaluation skills have atrophied.

Looking into the future, Dr. Tanaka foresees a K-shaped economy of cognitive capacity. Experienced professionals with existing expertise and contextual judgment built through years of experience will gain increasing efficiency from AI. Entry-level workers, however, will lose access to the valuable experiences that build professional judgement. This gap widens between professionals who can independently accelerate their workflows using AI and those whose traditional tasks are merely displaced by it.

Intervention may be able to break the cycle

The pattern is not inevitable, as both Dr. Tanaka and Dr. Heinsfeld explain. Drawing on Dr. Heinsfeld’s emphasis on institutional agency, meaningful intervention will depend on conscious, intentional choices made at every level. Both experts share their guidance for how different organizations can manage this:

Academic institutions — Universities must first recognize that AI adoption is a decision rather than an inevitability and make educational need the North Star for decision-making around AI. In her analysis, Dr. Heinsfeld emphasizes that when vendors set defaults, they quietly redefine academic practice. Defaults shape what is made visible or invisible and what becomes normalized. In AI-driven environments, universities often lose control over how models are trained and updated, what data shapes outputs, how knowledge is filtered and ranked, and how student and faculty data circulate beyond institutional boundaries — especially if decision-making is left to vendors. As a result, the intellectual byproducts of teaching and learning increasingly become inputs into external systems that universities do not govern.

Private entities — For organizations, Dr. Tanaka calls for feedback loops and other mechanisms that will promote more open discussion about AI use without stigma. In addition, companies need to proactively redesign entry-level roles to ensure these positions continue to cultivate judgment and foundational skills in an AI-driven environment. Likewise, Dr. Tanaka suggests that companies explicitly discuss cognitive trade-offs with employees, fostering an understanding of possible skill atrophy.

Employees — Similarly, individuals working for organizations bear much of the responsibility for making sure critical thinking is enhanced by AI. Indeed, strategic decisions about when to use AI, made while seeking to preserve cognitive capacity and professional judgment, are key.

Looking ahead

In today’s increasingly AI-driven environment, a new paradigm is needed to combat the current operating assumption that optimization from AI is the sole path to progress. And because the current trajectory sacrifices human development for efficiency, the need for universities and companies to choose a different path is urgent — while they still have the judgment capacity to do so.


You can find out more about how organizations are managing their talent and training issues here

Inside the Shift: What happens in the professional workplace when AI does too much? /en-us/posts/sustainability/inside-the-shift-ai-overuse/ Wed, 25 Feb 2026 16:21:23 +0000 https://blogs.thomsonreuters.com/en-us/?p=69610

You can read TRI’s latest “Inside the Shift” feature, The human side of AI: The growing risks of ubiquitous use of AI on talent, here


It’s no exaggeration to say that AI is everywhere in our workplaces right now. It writes our emails, summarizes our meetings, generates slides, and even helps us think through problems. On the surface, this may sound like progress — and in many ways, it is.

However, our latest Inside the Shift feature, The human side of AI: The growing risks of ubiquitous use of AI on talent, by Natalie Runyon, Content Strategist for Sustainability and Human Rights Crimes for the Thomson Reuters Institute, makes a clear and timely point: When AI use becomes excessive and unchecked, it can quietly undermine the very people it’s meant to help.


One major consequence of cognitive decay is the weakening of the brain’s capacity to engage deeply, question systematically, and — somewhat ironically — resist the potential manipulation of AI.


As the article goes into in much greater detail, these harms caused by AI overuse can include a slow erosion of human connections, a loss of a professional’s sense of purpose, and a general sense of feeling overwhelmed in the workplace.

Of course, the solution isn’t to reject AI; it’s to use it better. To this end, the article makes a strong case for organizations to foster hybrid intelligence, an approach in which human judgment and creativity work alongside AI capabilities.

In today’s workplace, AI can be a powerful advantage; however, that is only true if organizational leaders remember that technology should enhance the human experience, not replace the parts of professional life that workers value.


To examine this and many more situations, the Thomson Reuters Institute (TRI) has launched a new feature segment, Inside the Shift, that leverages our expert analysis and supporting data to tell some of the most compelling stories in professional services today.

2026 AI in Professional Services Report: AI adoption has hit critical mass, but now comes the tough business questions /en-us/posts/technology/ai-in-professional-services-report-2026/ Mon, 09 Feb 2026 13:05:35 +0000 https://blogs.thomsonreuters.com/en-us/?p=69356

Key findings:

      • AI adoption accelerates across professional services — Organization-wide use of AI in professional services almost doubled to 40% in 2026, with most individual professionals now using GenAI tools, and many preparing for the next wave of tools such as agentic AI.

      • Strategic integration and measurement lag behind usage — While AI use is widespread, only 18% of respondents say their organization tracks ROI of AI tools, and even fewer measure AI’s impact on broader business goals such as client satisfaction or revenue generation.

      • Communication around AI use remains inconsistent — While most corporate departments want their outside firms to use AI on client matters, less than one-third are aware whether their firms are doing so. Meanwhile, firms report receiving conflicting instructions from clients about AI use, highlighting a need for clearer dialogue and shared strategy around AI adoption.


Over the past several years, AI usage within professional services industries has come into focus. As we enter 2026 in earnest, the early adoption phase of generative AI (GenAI) has come and gone. Today, most professionals have experimented with some form of GenAI, and many organizations have integrated GenAI into their workflows — and now, a number are preparing for the next wave of technological innovation such as agentic AI.

Given this, the question for professionals and organizational leaders has now become: What will be AI’s long-term impact on my business?


To delve into this question further, the Thomson Reuters Institute has released its 2026 AI in Professional Services Report, which takes a broad view of current AI usage and planning, sentiment toward AI, and AI’s business impact for legal, tax & accounting, corporate functions, and government agencies. Drawing on a survey of more than 1,500 respondents across 27 different countries, the report finds a professional services world that has embraced AI’s use but is continuing to evolve its business strategy around implementation.

For instance, the report shows that organization-wide AI use rose to 40% in 2026, compared to 22% in 2025 — and for the first time, a majority of individual professionals reported using publicly-available tools such as ChatGPT. Additionally, a majority of respondents said they feel either excited or hopeful about GenAI’s prospects in their respective industries, and about two-thirds said they felt GenAI should be applied to their work in some manner.

At the same time, however, many are exploring GenAI tools without much guidance as to how that use will be quantified or measured. Only 18% of respondents said they knew their organization was tracking return-on-investment (ROI) of AI tools in some manner, roughly the same proportion as last year. And even among those tracking AI metrics, most are tracking mainly internally-focused, operational metrics; and only a small proportion analyzed AI’s impact on their organization’s larger business goals — such as client satisfaction, external revenue generation, and new business won.


This slow move to strategic thinking also impacts client-firm relationships. Although more than half of both corporate legal departments and corporate tax departments want their outside firms to use AI on client matters, less than one-third said they were aware whether their firms were doing so or not. From the firm standpoint, meanwhile, confusion reigns: 40% of firm respondents said they have received orders both to use AI on matters and not to use AI on matters from various clients.

Indeed, about three-quarters of corporate respondents and firm respondents agreed that firms should be taking the lead in starting these conversations around proper AI use. Yet these discussions have not yet happened en masse. “Firms are reluctant — they claim it would compromise quality and fidelity,” said one U.S.-based corporate chief legal officer. “I think they are threatened by it.”

All the while, technological innovation progresses ever quicker. This year’s version of the report measures agentic AI use for the first time, finding that already 15% of organizations have adopted some type of agentic AI tool. Perhaps more interesting, however, is that an additional 53% report their organizations are either actively planning for agentic AI tools or are considering whether to use them, indicating perhaps an even more rapid pace of adoption than we’ve already seen with the speedy rise of GenAI.


Overall, the report makes it clear that most professionals do understand that change, driven by AI in the workplace, is undoubtedly here. Even compared with 2025, a higher proportion of professionals said they believe that AI will have a major impact on jobs, billing and revenue, and even the need for legal or tax & accounting professionals as a whole. The percentage of lawyers calling AI a major threat to the unauthorized practice of law rose to 50% in 2026 from 36% in 2025.

Further, this report paints the picture of a professional services world that has embraced AI, begun to see its impact, and realized that it will have broader business and industry implications than previously imagined. As a result, the time for professionals and organizations to begin planning in earnest for an AI future has already arrived.

As a corporate general counsel from Sweden noted: “We cannot keep up with the modern-day corporations’ demands unless we also develop and adapt our way of working.”

You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here


Hybrid intelligence: Ramping up human-focused power skills in an AI-enabled workplace /en-us/posts/sustainability/hybrid-intelligence/ Wed, 21 Jan 2026 19:03:17 +0000 https://blogs.thomsonreuters.com/en-us/?p=69097

Key highlights:

      • Human connection is now a competitive capability — Treat relationships as core infrastructure instead of cultural fluff by designing work to keep real collaboration, accountability, and regular face-to-face interaction at the center with AI in a supporting role.

      • Protect your judgment and meaning as “human-owned” — Start with independent frameworks and reasoning, then use AI to refine and stress-test; and schedule recurring “no-AI” blocks to keep analytical muscle and professional agency strong.

      • The winning model is hybrid intelligence — The standout professionals in 2026 will be those who are fluent in both human dynamics and AI-assisted workflows.


Professional services work fundamentally relies on judgment, trust, and relationships. Clients engage firms for confidence and strategic guidance, while a good reputation in this sector develops through the consistent delivery of high-quality counsel. While AI can enhance these capabilities, these technologies may also erode professional value if permitted to displace the distinctly human elements that differentiate exceptional service.

The imperative for 2026 is to maintain full professional capability by embracing human strengths while leveraging technological tools. Consistent application of the following practices will protect and develop the competencies that AI cannot replicate.

Build your human connections muscle

In the near future, professionals may spend more time interacting with AI systems than they do with colleagues. Over time, AI creates opportunities to disengage from human interaction: AI systems remain consistently agreeable, perpetually available, and never introduce tension into professional discourse.

For time-constrained professionals, this predictability may appear advantageous; however, this convenience carries a substantial cost. In professional services, relationships constitute essential infrastructure rather than supplementary benefits. When professional interaction shifts from human to machine interface, social acuity diminishes as professionals lose exposure to subtle human dynamics. Critical developmental experiences — including the ability to manage discomfort, resolve misunderstandings, and navigate the productive friction that builds capacity for maintaining and repairing strained relationships — become scarcer.

To preserve human connection capacity with intention, implement these measures:

      • Prioritize work that requires genuine collaboration and shared accountability and keep AI as a supporting resource.
      • Establish regular face-to-face interaction, both virtual and in-person, with colleagues to invest in relationship-building conversations that extend beyond project deliverables and timeline discussions.
      • Actively engage in professionally challenging interactions, including those involving constructive feedback delivery and negotiation. These experiences maintain trust and prevent the gradual atrophy of human collaboration skills.

Protect your brain and your meaning at work

AI technologies offer substantial efficiency gains through automated drafting, summarization, and information analysis. However, excessive reliance on these capabilities may diminish the cognitive repetitions that maintain professional acuity. In professional services, intellectual capacity, which includes attention to detail and analytical reasoning, constitutes the primary asset. This capacity requires the ability to discern significance, interrogate underlying assumptions, and articulate complex tradeoffs with precision.

Delegating these cognitive tasks to AI systems daily may yield short-term efficiency and lower costs, but it may also lead to work that requires less nuanced judgment. As a result, professional instincts may atrophy.

An additional consequence of AI overreliance involves the erosion of professional meaning and engagement. When AI systems generate the majority of intellectual output, professionals may risk becoming approvers rather than creators. Work devolves into review and authorization — a repetitive pattern that can lessen one’s connection to making a substantive professional contribution. Indeed, the role begins to resemble a production line of incremental validations rather than meaningful professional practice.

To avoid this, you should implement the following practices to preserve both intellectual rigor and a meaningful sense of agency over critical professional activities:

      • Integrate deliberate cognitive exercises into weekly routines — Initiate substantive work with independent analysis (establishing frameworks, identifying priorities, and constructing logic) before employing AI to refine structure, enhance clarity, and stress-test reasoning. Subsequently, critically evaluate AI-generated output by identifying omissions, examining underlying assumptions, and assessing potential errors.
      • Establish dedicated periods for unassisted professional work — Schedule regular intervals for research, conceptual development, and drafting without AI support to ensure sustained development of analytical capacity and professional judgment.
      • Anchor work to meaning and outcomes — Identify work of particular professional significance and maintain direct engagement with these tasks, again without AI assistance. Regularly reflect on the tangible impact of contributions, including the delivery of client value and the support of colleagues, in order to better sustain meaningful connection to professional purpose.

Hybrid intelligence is the future

The most effective professionals in 2026 will be those who focus on their capacity to integrate human literacy with algorithmic literacy, a competency framework known as hybrid intelligence.

Human literacy remains the fundamental differentiator in professional services, encompassing the ability to interpret interpersonal dynamics, establish trust amid complexity, deliver constructive feedback with appropriate sensitivity, and maintain both self-awareness and relational intelligence.

Algorithmic literacy involves understanding the specific capabilities and limitations of AI tools, including honing a proficiency for output verification, tool evaluation, and sustained awareness of bias and risk considerations.

The combination of these two factors within hybrid intelligence can give professionals a potent way of fighting the accelerating cognitive deterioration and agency decay that some may experience with AI overuse.

Today, organizational mandates for AI adoption are becoming increasingly prevalent and will approach universality over the next few years. While firms compete through technological capability, competitive differentiation will ultimately derive from the human excellence of their professionals — a dynamic that will similarly shape individual career trajectories.


You can find out more about how a focus on power skills can help professionals in the workplace here

Tax changes: A strategic look ahead to 2026 for corporate tax departments /en-us/posts/corporates/tax-changes-2026/ Tue, 16 Dec 2025 14:53:44 +0000 https://blogs.thomsonreuters.com/en-us/?p=68774

Key takeaways:

      • Advocate for investment in technology and talent — As compliance and strategic demands grow, tax departments should use benchmark data from industry reports to build compelling business cases for automation, generative AI, and additional headcount.

      • Explore transferable tax credit opportunities — The transferable tax credit market has matured significantly; more departments should pursue these credits to offset tax liability, reduce estimated quarterly payments, and free up cash flow.

      • Proactively manage the OB3 transition — The One Big Beautiful Bill Act introduces substantial federal tax changes requiring strategic planning for 2026. Document analysis carefully as state conformity issues create future audit exposure.


The corporate tax landscape in 2025 is defined by resource constraints, regulatory complexity, and rapid technological change. And many corporate tax department leaders face mounting pressures from compliance demands, talent shortages, and evolving legislation — all while being asked to deliver more strategic value to their organizations, according to a recent report published by the Thomson Reuters Institute and Tax Executives Institute.

Under-resourcing and strategic gaps persist

Perhaps the most striking finding from the 2025 report is that 58% of corporate tax department professionals said their departments are under-resourced — an increase from the 51% who said so the previous year. This apparent deterioration in resourcing creates cascading risks for businesses. Departments facing resource constraints report higher rates of penalties and audits, with 44% of survey respondents saying their under-resourced department experienced penalties in the past year and 12% saying it had faced penalties exceeding $1 million.

The good news is that more departments are planning to hire rather than rely on overtime from existing staff, the report shows. However, the talent pool remains tight, making recruitment challenging. For tax department leaders, advocating for investment in both talent and technology is essential for risk management and maintaining compliance.


For tax department leaders, advocating for investment in both talent and technology is essential for risk management and maintaining compliance.


The report also showed that corporate tax departments continue to struggle with an imbalance between strategic and tactical work, with in-house tax professionals noting that they spend the majority of their time on reactive, tactical tasks while ideally wanting to reduce this to approximately 30% to 38% of their time.

What’s holding teams back? Excessive workload volume tops the list, they said, followed by complex compliance requirements, limited resources, and outdated technology. While two-thirds of respondents said their departments are still in the chaotic reactive stage of technology maturity, more than half said they expect higher-than-normal budget increases for investment in tax technology in the coming year, with many beginning to incorporate generative AI (GenAI) into their workflows.

Opportunities to create value exist

While these challenges exist, there are ways that corporate tax departments can identify and pursue value in the coming year. For example, the passage of the One Big Beautiful Bill Act (OB3) in mid-2025 introduced substantial changes to federal tax provisions including the ability to immediately expense research and experimentation costs under Section 174, reintroduction of full bonus depreciation, and liberalized interest deduction limitations.

The new Section 904(b) rules significantly improve the foreign tax credit mechanism by eliminating the allocation of interest expense and research and experimental (R&E) expenses to foreign source income, potentially lowering effective tax rates from 18.9% to approximately 14% at the aggregate level.


Departments that invest in technology, build strong business partnerships, and track their value contributions are demonstrating that having a strategic impact is possible even in resource-constrained environments.


However, OB3’s retroactive application to tax year 2025 creates immediate compliance complexity. State conformity issues compound the challenge, as many states have not yet updated their codes, creating potential mismatches between federal and state taxable income calculations.

Further, the transferable tax credit market has matured significantly, with nearly 25% of Fortune 1000 companies now participating, which is a 60% increase over 2024. Current market conditions favor buyers, with investment tax credits and production tax credits trading at discounts of 89 to 91 cents on the dollar.

These credits can offset tax liability, reduce estimated quarterly payments, and free up corporate cash flow. Tax departments should explore this opportunity as another tool for creating measurable value for the business.
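For a rough sense of the economics described above, a back-of-the-envelope sketch may help. The function name and dollar figures below are hypothetical illustrations for a buyer purchasing credits at a discount, not figures taken from the report:

```python
# Hypothetical illustration: a buyer pays a discounted price for each $1.00
# of transferable tax credit, then applies the full face value against its
# tax liability. Figures are examples only, not data from the report.
def credit_purchase_savings(tax_liability: float, price_per_dollar: float) -> dict:
    """Estimate cash savings from buying transferable tax credits at a discount."""
    cost = tax_liability * price_per_dollar  # cash paid for the credits
    savings = tax_liability - cost           # liability offset minus purchase cost
    return {"cost": round(cost, 2), "savings": round(savings, 2)}

# Example: fully offsetting a $1,000,000 liability with credits bought at 90 cents.
result = credit_purchase_savings(1_000_000, 0.90)
print(result)  # {'cost': 900000.0, 'savings': 100000.0}
```

At the 89-to-91-cent range the report describes, a buyer keeps roughly 9 to 11 cents of every dollar of liability offset, which is the cash-flow benefit the departments above are pursuing.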

Planning for 2026 and beyond

Despite the challenges facing corporate tax departments in 2025, success stories abound. Departments that invest in technology, build strong business partnerships, and track their value contributions are demonstrating that having a strategic impact is possible even in resource-constrained environments. The key is making the case for investment, staying ahead of regulatory changes, and continuously communicating your added value back to the business.


You can download a full copy of the report, from the Thomson Reuters Institute and Tax Executives Institute, here

The Human Layer of AI: How to build human rights into the AI lifecycle /en-us/posts/sustainability/ai-human-layer-building-rights/ Mon, 24 Nov 2025 16:33:36 +0000 https://blogs.thomsonreuters.com/en-us/?p=68546

Key takeaways:

      • Build due diligence into the process — Make human-rights due diligence routine, from the decision to build or buy through deployment, by mapping uses to standards, assessing severity and likelihood, and closing control gaps to prevent costly pullbacks and reputational damage.

      • Identify risks early on — Use practical methods to identify risks early by engaging end users and running responsible foresight workshops and bad-headlines exercises.

      • Use due diligence to build trust — Treat due diligence as an asset and not a compliance box to tick by using it to de‑risk launches, uncover user needs, and build durable trust that accelerates growth and differentiates the product with safety-by-design features that matter to buyers, regulators, and end users.


AI is reshaping how we work, govern, and care for one another. Indeed, individuals are turning to cutting-edge large language models (LLMs) to ask for emotional help and support in grieving and coping during difficult times. “Users are turning to chatbots for therapy, crisis support, and reassurance, and this exposes design choices that now touch the right to information, privacy, and life itself,” says Poynton, co-Founder & Principal at Article One, a management consulting firm that specializes in human rights and responsible technology use.

These unexpected uses of AI are reframing risk because in these instances, safeguards cannot be an afterthought. Analyzing who might misuse AI alongside determining who will benefit from its use must be built into the design process.

To put this requirement into practice, a human rights lens must be applied across the entire AI lifecycle, from the decision to build or buy through deployment and use, to help companies anticipate harms, prioritize safeguards, and earn durable trust without hampering innovation.

Understanding human rights risks in the AI lifecycle

Human rights risks can surface at every phase of the AI lifecycle. In fact, they first emerged in efforts to train frontier LLMs for content moderation functions and are now showing up elsewhere. For example, the data enrichment workers who refine training data and the data center staff who power these systems are the most likely to face labor risks. Often located in lower-income markets with weaker protections, they face low wages, unsafe conditions, and limits on other freedoms.

During the development phase, biased training sets and the probabilistic nature of models can generate misinformation or hallucinations, and these can further undermine rights to health and political participation. Likewise, design choices often can translate into discriminatory outcomes.

Unfortunately, the use of AI-enabled tools also can compound these harms. Powerful models can be misused for fraud or human trafficking. In addition, deeper integration with sensitive data can heighten privacy and security risks.

A surprising field pattern exacerbates the risk: people increasingly use AI for therapy-like support and disclose issues related to emotional crises and self-harm. This intimacy widens product and policy obligations, which include age-aware safeguards and clear limits on overriding protections.

Why human rights due diligence is urgent

That’s why human rights due diligence must start with people, not the enterprise. By embedding human rights due diligence into the lifecycle of AI, development teams can begin to understand the technology and its intended uses, then map those uses to international standards. Next, a cross-functional team gathers to weigh benefits alongside harms and to consider unintended uses. Primarily, they need to answer the question: “What happens if this technology gets into the hands of a bad actor?”

From there, the process demands an analysis of severity — assessing scale, scope, and remediation — and of the likelihood of each use. The final step involves evaluating current controls across supply chains, model design, deployment, and use phases to identify gaps.

The biggest barrier to layering a human rights lens into AI is the need for speed to market. The race to put out minimally viable products, accompanied by competitive pressure, can eclipse robust governance; yet early due diligence may prevent costly pullbacks and bad headlines. Article One’s Poynton notes that no one wants to see their product on the front page for enabling stalking or spreading disinformation. Building safeguards early “ensures that when it does launch, it has the trust of its users,” she adds.

How to embed safeguards without slowing teams

The most efficient path in translating human rights into the AI product lifecycle is to turn policy principles, goals, and ambitions into actionable steps for the engineers and the product teams. This requires the “engineers to analyze how they do their work differently to ensure these principles live and breathe in AI-enabled products,” Poynton explains. More specifically, this includes:

Identifying unexpected harms — One of the most critical, yet difficult components of the human rights impact assessment is brainstorming potential harms. Poynton recommends two ways to make this happen: First, engage with end users to help identify potential harms by asking, “What are some issues that we may not be considering from the perspectives of accessibility, trust, safety and privacy?” Second, run responsible foresight workshops at which individuals play the parts of bad actors to better identify harms and uncover mitigation strategies quickly. Pair that with a bad headlines exercise that can be used to anticipate front‑page failures. Then, ship with these protections in place, pre‑launch.

Implementing concrete controls — Embedding safety-by-design should cover both content and contact, a lesson from gaming in which grooming risks require more than just filters. Build age‑aware and self‑harm protocols, including parental controls and principled policies on overrides. Govern sales and access with customer vetting, usage restrictions, and clear abuse‑response pathways. In the supply chain, set supplier standards for enrichment and data center work that include fair wages, safe conditions, freedom of association, and grievance channels.

Treating due diligence as value-creating, not box-checking — Crucially, frame due diligence as an asset rather than a liability. “Make your product better and ensure that when it does launch, it has the trust of its users,” Poynton adds.

Additional considerations

Addressing equity must be front and center. Responsible strategies include diversifying training sets without exploiting communities and giving buyers clear provenance statements on data scope and limits.

Bridging the digital divide is equally urgent. Bandwidth and device gaps risk amplifying inequality if design and deployment assume privileged contexts. In the workplace, Poynton stresses that these impacts will be compounded, from entry-level to expert roles.

Finally, remember that AI’s environmental footprint is a human rights issue. “There is a human right to a clean and healthy environment,” Poynton notes, adding that energy and water demands must be measured, reduced, and sited with respect for local communities, even as AI helps accelerate the clean energy transition. This is a proactive mandate.


You can find out more about the ethical issues facing AI use and adoption here

Future of Professionals: How to maximize the value of AI investments through talent /en-us/posts/technology/future-of-professionals-maximizing-ai-investment-through-talent/ Mon, 17 Nov 2025 13:06:51 +0000 https://blogs.thomsonreuters.com/en-us/?p=68459

Key highlights:

      • AI strategy should drive individual accountability — Alignment between organizational AI strategy and individual accountability is essential. Most professionals lack clarity on their organization’s AI goals, which hinders meaningful progress and innovation.

      • Strategic AI plan also should drive business revenue growth — Organizations with well-communicated and strategic AI plans are significantly more likely to realize critical business benefits and revenue growth from their AI investments.

      • Personal AI goals boost usage and accountability — Setting and linking personal AI goals for all professionals drives regular use and accountability, which is crucial for turning technology investments into tangible organizational success.


Organizations are discovering that true AI transformation in this digital age extends beyond technology alone. The key to maximizing AI’s value lies in connecting organizational strategy with individual employee accountability and responsible use, according to the ¶¶ŇőłÉÄę 2025 Future of Professionals report. Indeed, the report reveals that without clear communication of AI strategy and the setting of personal AI goals, even the best technology investments can fall short. Only by focusing on their professionals can organizations find their way forward to maximizing the value of their AI investments.

Clearly communicate organization’s AI strategy and goals

A critical yet often overlooked factor in successful AI adoption is the alignment between individual actions and the broader organizational AI strategy. In fact, almost two-thirds (65%) of professionals surveyed who said they have personal goals for AI adoption also said they are not aware of their organization’s overall AI strategy, according to the Future of Professionals report. Further, only 39% of all professionals say they have personal goals linked to AI adoption, which leaves a majority (61%) without clear direction or accountability in their own use of AI.

When professionals operate without clarity on the organization’s strategic direction, their efforts may not contribute meaningfully to broader business objectives. This leads to wasted investment, fragmented progress, and missed opportunities for cross-functional innovation.

The consequences of this misalignment are significant, especially as AI becomes increasingly central to operational efficiency and competitive advantage. The report finds that organizations that craft a strategic plan for AI adoption and implementation are 3.5-times as likely to see critical AI benefits compared to those without any significant plans. Further, organizations with a strategic AI plan are almost twice (1.9-times) as likely to already be experiencing revenue growth from their AI investment, compared to organizations that are adopting AI informally.

These findings underscore that the mere presence of AI technology is not enough. Successful deployment depends on coordinated, intentional actions at every level. For organizations seeking to maximize the value of their AI investments, ensuring that every employee understands how their own learning, experimentation, and adoption of AI tools fits into that vision is just as important as articulating the organization’s overall vision.

Leverage professionals’ personal AI use to drive accountability

Unfortunately, there is a strong disconnect between organizational ambition and professionals’ own AI use in the workplace: the Future of Professionals report reveals that 70% of professionals say they are not yet using AI tools on a regular basis. This gap between ambition and day-to-day practice means that substantial investments in technology yield only limited returns.

Our research makes it clear that regular engagement with AI tools has a significant impact. Professionals who use AI routinely are 2.4-times as likely to report organizational benefits from AI adoption compared to those who use it sporadically or not at all. Yet setting personal AI goals for every professional remains a rare practice: only 21% of professionals with AI adoption goals report using AI at least once a week, underscoring how much room remains for personal accountability to drive meaningful adoption. Additionally, professionals who say they have clearly defined AI goals are 1.8-times as likely to see tangible organizational benefits, highlighting the powerful link between individual commitment and collective success.


To bridge this gap, organizations must move beyond simply providing access to AI tools and instead require all professionals to set personal AI learning and usage goals that are explicitly tied to broader business objectives.

Mandate human oversight

As organizations accelerate their adoption of AI, they need to require human oversight and responsible use of these technologies. The report notes that concerns about accuracy, security, and the potential for overreliance on AI remain significant barriers to robust adoption. Notably, an overwhelming 91% of professionals say they believe that computers should be held to higher standards of accuracy than humans, with 41% insisting that AI outputs must be 100% accurate before they can be used without human review. This high threshold underscores the persistent trust gap and the need for rigorous validation processes.

Beyond accuracy, professionals are also wary of the impact that excessive reliance on technology could have on their own or their colleagues’ development. Nearly a quarter (24%) of respondents say they fear that overreliance on AI may stunt the growth of essential professional skills. Without ongoing human involvement, there is a real risk that core competencies could erode over time, potentially leaving professionals less capable and more dependent on technology.

The solution lies in fostering a culture of responsible AI use, one in which human expertise remains central. Organizations must therefore set clear standards for AI oversight, provide training on ethical and critical evaluation of AI outputs, and encourage continuous skill development alongside technological advancement.

As organizations chart their course through the rapidly evolving landscape of AI, the most successful will be those that put their people at the heart of their strategy. By acting intentionally and fostering a culture in which human insight and innovation drive the use of AI, both organizations and individuals can secure lasting success and lead the way into an AI-enabled future.


You can download a full copy of the 2025 Future of Professionals report here.
