In our new feature segment, "Inside the Shift", we leverage our expert analysis and supporting data to go in-depth and tell the insider stories about some of the biggest challenges facing the legal, tax, accounting, corporate, and government sectors.
You can read TRI's latest "Inside the Shift" feature, Premortem: Your 2028 agentic AI pilot program failed, here.
Picture this: It's 2028, your law firm spent real money on an agentic AI pilot, and now it's quietly been shut down. No press release, no victory lap, just a post-mortem that nobody wants to read. In our latest Inside the Shift feature article, we see that such a future is very likely unless firms start preparing for agentic AI in a way that's very different from what they expect.
The big idea is simple but uncomfortable: Success with generative AI (GenAI) does not mean your organization is ready for agentic AI. GenAI works because it's forgiving. You can paste text into a tool, get a decent answer, and move on, even if your data is messy and your workflows live in people's heads. Agentic AI doesn't work that way. It expects clean data, documented processes, and clear rules. If your firm runs on institutional memory, workarounds, and a "just ask Linda" approach to problem-solving, then the system will eventually break down.
To examine this and many more situations, the Thomson Reuters Institute (TRI) has launched a new feature segment, Inside the Shift, that leverages our expert analysis and supporting data to tell some of the most compelling stories in professional services today.
Our latest Inside the Shift feature, Premortem: Your 2028 agentic AI pilot program failed, by Bryce Engelland, Enterprise Content Lead for Innovation & Technology for the Thomson Reuters Institute, walks us through two fictional but painfully familiar stories of how two separate firms handled their agentic AI pilot programs.
The author explains how the first firm moves fast after crushing its GenAI rollout, assuming agentic AI is just the next logical step. Everything looks great in a sandbox, but then the system hits real-world chaos: undocumented exceptions, fragmented document storage, and conflict checks that only work because humans intuitively know when something feels off. One bad intake decision later, client trust is damaged and the pilot is frozen. In this example, the tech didn't fail; the organization did.
The second firm goes the opposite direction. They're cautious, thoughtful, and obsessed with governance. They build guardrails, limit risk, and launch a perfectly reasonable pilot. And then... nothing happens. Attorneys ignore the system, not because they hate AI, but because using it only adds risk with no reward. If it works as it's supposed to, nothing changes; but if something goes wrong, they'll be blamed. So, unsurprisingly, the rational choice is to nod in meetings and quietly keep doing things the old way until the project dies of inertia.
The challenge is that “preparing” doesn’t mean what most people think. It doesn’t mean buying early, and it doesn’t mean waiting for maturity. Rather, preparing means understanding now why these systems fail, and building the institutional capacity to avoid those failures when the technology arrives in full.
The feature article points out the common thread here: These failures have very little to do with AI capability; rather, they're about incentives, documentation, and institutional honesty. Firms that succeed with agentic AI won't be the ones that buy in early or wait patiently. The winners, the piece explains, will be the ones doing the boring, unsexy work now: Writing things down, fixing information architecture, identifying hidden dependencies, and aligning rewards so adoption isn't all risk and no upside.
In short, this article isn't a warning about technology. It's a warning about pretending your organization is ready when it's not, and mistaking optimism or caution for preparation.
So, dive a little deeper behind the headlines about AI adoption and how to make agentic AI work for your organization. Click through and read today's Inside the Shift feature. It might help you see more clearly than before whether the path your organization is pursuing with agentic AI will carry it over the goal line and into the next decade... or leave your team watching from the sidelines.
You can find more Inside the Shift feature articles from the Thomson Reuters Institute here.