As many as 95% of enterprise AI pilots deliver zero measurable return on investment, according to an MIT study released last August. The same study found that while roughly 88% of organizations now use AI in some capacity, only about 7% have managed to scale their AI initiatives across the company. For technology leaders, this isn't just a technical glitch; it's a massive accumulation of "AI shelfware."
Many of these projects are now stuck in what we call AI Purgatory: a frustrating limbo where tools show promise in an initial pilot but fail to move the needle for the broader team.
How did we get here, and how do we break out of this limbo?
Solve the Problem, Not the Tool: AI success comes from identifying the soul-crushing, repetitive tasks that drain your team's hours, then deciding which of those you actually want to automate. Then, and only then, should you build or buy the tool that solves the problem. If you do it right, the team won't care whether it's the latest tool with the sexy algorithm, as long as it kills the monotony and frees them to spend their day on more meaningful work. Start with something measurable like "we want AI to handle the 30% of support tickets where a user just resets a password" instead of the vague "we want to use AI for customer service."
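To make "start with something measurable" concrete, here is a minimal sketch of verifying that password-reset tickets really account for roughly 30% of volume before committing to automate them. The ticket data and category labels are hypothetical, not from any real helpdesk export:

```python
from collections import Counter

# Hypothetical ticket categories, as exported from a helpdesk system.
tickets = (
    ["password_reset"] * 30
    + ["billing"] * 45
    + ["bug_report"] * 25
)

counts = Counter(tickets)
share = counts["password_reset"] / len(tickets)

# Automate only once the target is measurable and worth the effort.
print(f"password_reset share of tickets: {share:.0%}")  # → 30%
```

A fifteen-minute measurement like this is what separates a scoped pilot from "we want to use AI for customer service."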
The 10-20-70 Rule: Where should you focus your time? AI success is 10% about the algorithm, 20% about the data, and 70% about your people and processes. Most firms stuck in purgatory invert this ratio and then wonder why the tool isn't working: "We have a brilliant algorithm and our data is solid, so why isn't this moving the needle?" You need to put in the time to understand what people actually do all day before you can have AI improve on it.
Master the Foundations First: You can't build an autonomous strategy on a "mystery foundation" of technical debt and dirty data. Before you invest in the next big platform, ask: Are you using the tools already built into your Salesforce instance? Is your data clean and usable? Are your business processes solid and well documented? If the answer to any of those is "no," you are setting yourself up for an expensive project that sounds great in theory but delivers little value.
Assist the Humans, Don't Replace Them: Many AI systems work best as copilots that help users complete tasks more easily. The big benefit is that humans stay in the mix: they will (or at least should!) know right away whether the AI assistant is doing what it is supposed to do, and they can report when it is not. That constant feedback loop is what makes these initial tools better over time. Get the crawl phase right first; then you can think about walking and running. Projects fail when leadership expects "full automation" out of the gate.
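The feedback loop described above can be sketched in a few lines: the copilot drafts a suggestion, a human approves or rejects it, and every verdict is logged so you can track whether the tool is earning trust. All names here are hypothetical illustrations, not a real product API:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    """Records human verdicts on copilot suggestions so the team can
    measure, not guess, how often the assistant gets it right."""
    records: list = field(default_factory=list)

    def record(self, suggestion: str, accepted: bool) -> None:
        self.records.append({"suggestion": suggestion, "accepted": accepted})

    def acceptance_rate(self) -> float:
        if not self.records:
            return 0.0
        return sum(r["accepted"] for r in self.records) / len(self.records)

log = FeedbackLog()
log.record("Draft reply: password reset link sent", accepted=True)
log.record("Draft reply: close ticket without response", accepted=False)
log.record("Draft reply: escalate to billing team", accepted=True)
print(f"copilot acceptance rate: {log.acceptance_rate():.0%}")  # → 67%
```

An acceptance rate trending upward is evidence you are ready to move from crawling to walking; one trending downward is the early warning that "full automation" would have silently failed.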
Don't Overhype the Model: Be very careful when asking questions like "can't AI just do that?" Often it can, but only if the ask is well defined and specific. On the surface, AI tools seem to be "smart" in a human way, but in reality they are not smart at all. They are just very good at predicting patterns and producing output that seems intelligent; they have no understanding of what you are asking. You need to be realistic about what you can ask AI to do in your Salesforce org. If I could take my Salesforce data and spend two days mucking around in a spreadsheet to answer a question, that would be a perfect task to sic AI on. If I would never get to an answer because the data just isn't there (or clean, or correct), then AI isn't going to reach a useful conclusion either. The difference is that I will give up and explain why. AI will just confidently hand you an answer anyway.
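One practical guard against that failure mode is a simple data-readiness gate: before handing a question to an AI tool, check that the fields it would need are actually populated. This is a minimal sketch under assumed field names (`Amount`, `CloseDate`, `StageName` stand in for whatever your question requires), not a real Salesforce API call:

```python
# Assumed fields the question depends on; swap in your own.
REQUIRED_FIELDS = ["Amount", "CloseDate", "StageName"]

def fields_usable(records: list[dict], threshold: float = 0.9) -> bool:
    """Return True only if every required field is populated in at
    least `threshold` of the records. If not, no amount of AI will
    turn missing data into a trustworthy answer."""
    if not records:
        return False
    for f in REQUIRED_FIELDS:
        filled = sum(1 for r in records if r.get(f) not in (None, ""))
        if filled / len(records) < threshold:
            return False
    return True

records = [
    {"Amount": 1000, "CloseDate": "2025-01-10", "StageName": "Closed Won"},
    {"Amount": None, "CloseDate": "2025-02-01", "StageName": "Prospecting"},
]
print(fields_usable(records))  # → False: Amount is only 50% filled
```

A gate like this encodes the "I will give up and explain why" behavior that the model itself lacks: the pipeline refuses to answer rather than confidently answering from bad data.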
At DCS, we help teams move past the "Shiny Object Phase" and into the "Accountable Phase." We would be happy to set up a call to discuss what we're seeing in the ecosystem and how we're helping other companies build strategies that get the most out of their AI tools.