According to TechRadar, a new Dynatrace report reveals a major deployment logjam, with around half of all agentic AI initiatives still stuck in proof-of-concept or pilot stages. This stall is preventing organizations from hitting their target ROI, though 74% still plan to increase their agentic AI budgets next year. The biggest current deployment areas are IT operations and DevOps at 72%, software engineering at 56%, and customer support at 51%. However, the top barriers to progress are security, privacy, and compliance concerns for 52% of respondents, difficulty managing agents at scale for 51%, and a shortage of skilled staff for 44%. Furthermore, a full 87% of organizations are building AI agents that require human supervision, with 69% of all agentic AI decisions still being verified by a person.
The real problem isn’t the tech
Here’s the thing: the technology itself isn’t really the issue anymore. We’re past the “can we build it?” phase. The report makes it clear that the real hurdles are the messy, human, and organizational problems that always come with big change. Security, privacy, and compliance? That’s a legal and trust minefield. Managing agents at scale? That’s an operational headache. The skills shortage? That’s a talent and training crisis. It’s easier to buy a fancy new AI model than it is to rebuild your company’s processes and retrain your team. So projects get stuck in a safe, experimental “pilot purgatory” where they can’t do much harm, but they also can’t deliver real value.
The ROI mismatch is telling
Look at where companies are deploying versus where they expect the best returns. They’re heavily investing in DevOps and software engineering now, but they think the big money will come from IT operations monitoring, cybersecurity, and data processing. That’s a classic case of chasing the shiny object versus solving the foundational, boring problems. It suggests a lack of strategic alignment. Are they building AI agents for developers because it’s cool, or are they targeting the core business functions that actually move the needle on efficiency and risk? This disconnect probably feeds that “lack of a clear business case” that one in three leaders cited.
The human is still firmly in the loop
The most fascinating data point for me is the overwhelming reliance on human supervision. 87% building supervised agents? 69% of decisions verified by a person? That screams a massive lack of trust in fully autonomous systems. And you know what? That’s smart. We’re rushing toward an “agentic” future, but everyone is quietly keeping one hand firmly on the wheel. The predicted 50:50 split for IT and support tasks shows we’re thinking about collaboration, not replacement. This isn’t about firing people and letting bots run wild; it’s about creating a new kind of workforce partnership. The challenge is making that partnership efficient and not just creating a more complicated, slower process.
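One way to make that partnership efficient, rather than having a person rubber-stamp 69% of decisions, is to escalate only the decisions the agent is unsure about. Here's a minimal sketch of that idea; the names (`AgentDecision`, `CONFIDENCE_THRESHOLD`) and the threshold value are illustrative assumptions, not anything from the report:

```python
# Hypothetical sketch: route an agent's decision to a human reviewer only
# when its self-reported confidence falls below a threshold, instead of
# verifying every single decision by hand.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # illustrative; tune per task and risk level


@dataclass
class AgentDecision:
    action: str
    confidence: float  # agent's self-assessed certainty, 0.0 to 1.0


def route(decision: AgentDecision) -> str:
    """Decide who handles it: the agent alone, or a human reviewer."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return "auto-approve"   # agent acts autonomously, logged for audit
    return "human-review"       # escalate: a person verifies before acting


for d in [AgentDecision("restart-service", 0.97),
          AgentDecision("issue-refund", 0.60)]:
    print(d.action, "->", route(d))
```

The design choice here is that human attention becomes a scarce resource you spend on ambiguous cases, not a blanket checkpoint that slows everything down.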
So what’s the way forward?
Dynatrace’s advice to redefine ROI and scale slowly with intent is spot on. Throwing more money at the problem, which 74% plan to do, won’t fix it if the strategy is flawed. ROI for AI can’t just be a cost-saving number; it has to include risk reduction, faster response times, and enabling human workers to do higher-level tasks. And establishing clear guardrails for human-machine collaboration is non-negotiable. You need to know exactly when, where, and how a human needs to step in. Basically, you need the operational playbook before you scale. Otherwise, you’re just building a bigger, more expensive pilot project that will eventually stall out like all the others. The path out of purgatory is less about tech and more about old-fashioned planning and change management.
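What might that operational playbook look like in practice? One hedged sketch, assuming nothing about Dynatrace's own tooling: an explicit policy table that states, per category of action, whether an agent may act alone, must get human sign-off first, or is blocked outright. All category names and rules below are invented for illustration:

```python
# Hypothetical guardrail playbook: a lookup table defining exactly when a
# human needs to step in. Unknown action categories fail closed (blocked).
GUARDRAILS = {
    "read-only-query": "autonomous",     # agent acts, logged for audit
    "config-change":   "approve-first",  # human signs off before execution
    "customer-refund": "approve-first",
    "data-deletion":   "blocked",        # never delegated to an agent
}


def check(action_category: str) -> str:
    """Return the guardrail for an action; fail closed on unknown categories."""
    return GUARDRAILS.get(action_category, "blocked")


print(check("read-only-query"))  # autonomous
print(check("drop-database"))    # blocked: unknown category, fail closed
```

The point isn't this particular table; it's that the rules are written down, reviewable, and enforced in code before the agents scale, rather than negotiated incident by incident afterward.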
