In these early days of agentic AI adoption, failing fast can cause disproportionate damage, given how much there is still to learn and the growing budgets in play. Maturity can't be rushed, but perspective can be sharpened.

According to a recent Gartner study, over 40% of enterprise AI agent projects will be canceled by the end of 2027 due to excessive costs, unclear business value, and increased risk. This reinforces that companies must adopt agentic AI with a clear vision of application areas and objectives, or they'll end up wasting resources.
Anushree Verma, Gartner senior director analyst, says most agentic AI projects today are early-stage experiments or proofs of concept, fueled primarily by hype and often misapplied.
Enthusiasm lacking a vision, says Verma, can cause organizations to lose visibility into the true costs and complexity of implementing AI agents at scale, preventing projects from reaching production. “Organizations need to go beyond abstract expectations to make strategic, thoughtful decisions about where and how to apply this emerging technology,” she says.
Fausto Casati, ICT director of medical laser tech company Quanta System, concurs, saying that while AI is nothing new, gen AI accelerates the perception that speed to implement, above everything else, takes priority. “But AI is useless if the right application cases aren’t developed,” he says. “This is why we’re working on specific models.”
So rather than licensing gen AI to all staff, Quanta implements specific agents in systems like CRM for targeted functions, accompanied with specific training for management system users.
“There are ethical, safety, and sustainability aspects of AI that must always be taken into account,” Casati says.
CIOs in the exploration phase
Getting agentic AI off the ground isn’t easy. Another Gartner survey shows that overall, most CIOs are exercising caution when it comes to investment. Risks like bias, for instance, are seen as unavoidable. The potential security and privacy implications are cause for concern as well considering the sensitive data AI agents have access to. Of course, cost is an overriding concern as well — not just licensing or development of agents, but the investment required to integrate with existing systems, the underlying infrastructure, and for staff training to collaborate with the agents.
So effective implementation requires a thorough planning phase to ensure the investment generates positive ROI, with agents well trained on company data. The CIO's mantra, therefore, is to develop use cases capable of creating value: start with the business case, then test the ability to achieve results.
The most convincing applications
On this basis, several CIOs are making big strides, using agents to automate repetitive and time-consuming tasks such as document management, front-line customer support, and other internal processes.
“We’re watching the world of AI agents with great interest, and I’m convinced they should find the right place within companies as a support system for people,” says Lorenzo Cibrario, CIO of Italy’s Vita-Salute San Raffaele University. “For example, an AI agent that generates an analysis of internal documents is a great time-saving support for people, but using an AI agent to reconcile invoices or read a doctor’s prescription is a different matter since these types of documents are poorly coded, and the error rate is high.”
Cibrario adds that businesses must always ask themselves what level of risk they're willing to accept, since risk can never be fully eliminated.
“We need to first outline potential scenarios in order to define a strategy that addresses the risks,” he says. “And the CIO must demonstrate the success rate of AI technologies with facts. This is why we do so many PoCs because they help us assess how much AI fails, test use cases, and specifically train the models on our company. We don’t need training on generic data.”
AI also requires well-defined application cases and solid security, which is why Casati started with agents in the CRM rather than rolling out gen AI products to the entire company workforce.
“We need targeted and controlled AI,” he says. “In the CRM project we’re launching, we’ll use agents to manage our product support. The AI agents will guide distributors and customers to access the knowledge base, and resolve the issue independently. Then, if necessary, the user can open a ticket and connect with our customer service representatives who can leverage other AI agents to speed up a technical resolution.”
Application to databases and management systems, accompanied by solid staff training, is the most frequent use case. CIOs of manufacturing companies, for example, are considering agents to facilitate product research that their customers perform through various channels. One solution is to channel requests to AI agents that query management systems.
Over at Banca Generali, though, implementations started from operations, which represent a privileged area for experimenting with agentic AI, both in terms of efficiency gains and supporting the Network, says Paolo Avallone, head of IT and operations at the private banking company.
Plus, they’ve identified other areas in which further experimentation will soon begin, from hyper-personalized customer management and risk management for simulating stress scenarios, to IT security for real-time fraud detection and HR for intelligent onboarding management.
Between the killer app and explainability
What’s also missing in agentic AI for business implementation purposes is what Fabrizio Silvestri, computer science professor at La Sapienza University, describes as the killer application.
“It’s important for companies and their IT teams to focus on valuable use cases for their needs, rather than the indiscriminate use of agents,” he says. “There are already successful applications, such as evaluation systems that pair evaluator agents with the agents being evaluated, or software code writing.”
Then there’s the issue of explainability, which makes oversight and error correction complex. And full AI explainability will never be achievable, says Silvestri.
“This is inherent in the way this technology works,” he says. “When generating a text, the LLM generates one term at a time, using autoregressive generation, and each term is chosen randomly from a pool of highly probable ones. If we repeat the same question, a slightly different text will be generated each time, which is where the possibility of error lies. If the system gets a term wrong, it can still carry the error with it throughout the rest of the text because that term has already been chosen and the system moves on. Hallucinations arise from this type of behavior.”
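The generation loop Silvestri describes can be sketched in a few lines. Below is a minimal toy illustration, not a real LLM: the bigram "model" and its probabilities are invented for demonstration. It shows the two properties he points to: each token is sampled from a pool of highly probable candidates (so repeated runs diverge), and once a token is chosen it conditions everything that follows (so an early error propagates).

```python
import random

# Hypothetical next-token probabilities, purely illustrative.
MODEL = {
    "the": [("cat", 0.5), ("dog", 0.3), ("car", 0.2)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "dog": [("barked", 0.7), ("slept", 0.3)],
    "car": [("stopped", 1.0)],
    "sat": [], "ran": [], "barked": [], "slept": [], "stopped": [],
}

def sample_next(token, k=2):
    """Sample the next token at random from the k most probable candidates."""
    candidates = sorted(MODEL[token], key=lambda pair: -pair[1])[:k]
    if not candidates:
        return None  # no continuation: generation stops
    words, weights = zip(*candidates)
    return random.choices(words, weights=weights)[0]

def generate(start, max_len=5):
    """Autoregressive generation: each chosen token is appended and never
    revisited, so an early wrong pick conditions all later choices."""
    out = [start]
    while len(out) < max_len:
        nxt = sample_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return out
```

Calling `generate("the")` repeatedly yields slightly different sequences, mirroring why the same question can produce different answers, and why an error in one sampled term carries through the rest of the text.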
In other cases, there are what data scientists call emergent abilities. LLMs can perform operations for which they haven’t been trained. This happens because the large amount of data and the model’s architecture allow LLMs to also learn relationships between words. So if the model has identified the association between a plus sign and a sum in the data, it’ll be able to perform a calculation even if it hasn’t been trained for that.
“It’s impossible to fully explain how it did it, because the number of calculations the LLM performs is so large and the model is so complicated that we can’t see exactly where it learned the association,” says Silvestri. “Likewise, if an error occurs, it won’t be possible to understand where it came from. We’re content to say if the output is 99.9% correct, it may be worth accepting the risk of error for the advantages the model provides compared to operations a human could perform. This is how we accept, for example, the use of LLMs in research or medical diagnostics.”
At the same time, he adds, the advantage of agents over traditional ML systems is that they don't need huge amounts of data, just a few specific examples.
“Then we need to build interactions between various agent systems and humans, and this is a truly innovative feature of these systems,” Silvestri says. “Of course, we’re also talking about fully autonomous agents, but it’s clear that leaving control to humans makes things much more explainable. These systems aren’t built, at least not yet, to operate alone.”
How to avoid project failure
The question remains how to adopt agents and seize first-mover advantage without becoming one of the companies forced to cancel projects over poor ROI. The first step is a strategic rethink with a technological, organizational, and cultural vision.
“AI, and gen AI in particular, requires companies to rethink their operating models, strategically and purposefully integrating tech into their operations,” says Banca Generali’s Avallone. “Adopting it from an agentic perspective means rethinking work processes and redefining the concept of adding value in everyone’s role by openly addressing emotional and cultural resistance that sees it as a tool for thinking less, rather than thinking better.”
At Banca Generali, for example, one of the biggest IT challenges is managing agents distributed throughout the organization, to ensure effective collaboration with the people who’ll use them in their daily work, says Avallone.
Challenges aside, the potential is enormous since agentic AI represents a leap forward in AI capabilities and market opportunities due to new tools that improve resource efficiency, automate complex tasks, and introduce unprecedented business innovations that go beyond the capabilities of bots and virtual assistants.
Gartner predicts that by 2028, at least 15% of daily work decisions will be made autonomously by agentic AI, up from 0% in 2024.
The focus, therefore, is on the medium term. According to IDC forecasts, 26% of global IT spending will be devoted to AI by 2029, with a decisive push from agentic AI. But, analysts add, it’s ultimately a test of leadership and vision to see if companies can gain a clear competitive advantage by committing to change that’s more cultural and procedural than technological.