The really bad news: agentic AI isn't ready for you, either.

Interest is growing in a new class of AI: one that doesn't just retrieve or generate information, but reasons, strategizes, and acts. Known as agentic AI, this technology can plan, execute, and iterate with minimal human input. Think of it less as a tool and more as an autonomous teammate.
But here’s the problem: most organizations are nowhere near ready for it. The really bad news is that neither are the models themselves.
Gartner predicts that over 40% of ongoing agentic AI projects will be canceled by 2027, citing poor fit between the technology and its intended use cases, as well as a general lack of maturity and autonomy. Gartner also noted that many ostensibly "agentic" products are mere rebrands of existing workflow automation tools, a phenomenon dubbed "agentwashing."
In short, the hype risks outrunning the technology's practical capabilities. As companies race to adopt agentic tools, many will end up investing prematurely or inefficiently.
3 reasons agentic AI is hard to get right
The key challenge facing agentic AI is that talking and doing are fundamentally different. While agentic AI may seem just around the corner, its current capabilities are limited in 3 fundamental ways.
Learning and adaptation
Present-day LLMs are trained to generate text and media, not to take action. They are trained on abundant, publicly accessible knowledge, but not on know-how, and good training sets for agentic behavior remain scarce. LLMs excel at producing language that matches patterns in their training data, yet they lack real-world experience. Human know-how, by contrast, comes from doing, not just knowing.
Even worse, these models don’t learn from experience. They don’t get better with practice the way a new hire might. That means every workflow has to be painstakingly taught, supervised, and debugged. Even then, the model might veer off course and not know it. Dynamic, unpredictable environments further compound these challenges.
Accurately modeling the world
Unlike humans, AI lacks an intuitive grasp of how the world works. It can simulate text that sounds right but has no real understanding of cause and effect. This makes long-term planning and multi-step workflows especially error-prone.
Agentic AI also struggles with social and ethical understanding, failing to grasp norms, intentions, or context in nuanced human environments.
Alignment, control, and interface challenges
Prompting an LLM and ensuring that it doesn’t hallucinate is hard. Prompting an autonomous agent to complete complex tasks without misfiring is even harder, especially given the messiness of real-world contexts and outcomes. When an AI agent has access to your software or hardware systems, the risk of unintended actions increases dramatically.
This is compounded by the fact that today's models are black boxes: with billions of parameters and limited explainability, it can be difficult to understand why they took the actions they did. Even small errors can snowball, and debugging them is far from straightforward.
Currently, it is very difficult to get reliable, human-like labor out of AI. Further progress will likely require continued architectural and algorithmic innovation, work that is beyond the means of most organizations and will fall to the providers of foundation models.
Agentic AI is real, but not as real as the hype
It's tempting to imagine agentic AI as a plug-and-play teammate, but it isn't one yet. Still, you can make the most of what's available today and ensure your organization is well-positioned to take advantage of further developments.
How to use agentic AI here and now
Thanks to tools like Manus AI and ChatGPT Agent, early versions of agentic AI are now within reach. These tools advance the frontiers of what is possible with AI, blending LLMs’ ability to synthesize information with actions like interacting with web applications or executing code.
Currently, the best use cases for agentic AI involve relatively structured, bounded computer use tasks that extend LLMs’ capabilities with basic code-writing and browser interaction. You probably can’t tell Manus AI or ChatGPT Agent to carry out a complex end-to-end project like “do all my taxes” and have it crawl all your invoices, fill out the relevant forms, and file with the IRS (yet). Instead, the ideal candidate use cases might involve automating simpler web-based workflows and rapidly generating analytics notebooks or web applications.
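To make "bounded" concrete, here is a minimal, illustrative sketch of the pattern these tools embody: an LLM chooses actions from a small whitelist inside a hard step limit. The call_llm stub, the fetch_url tool, and the message format are hypothetical placeholders for this example, not any particular product's API.

```python
import json
import urllib.request

def fetch_url(url: str) -> str:
    """Bounded, browser-style action: fetch a page and return part of its text."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read(10_000).decode("utf-8", errors="replace")

TOOLS = {"fetch_url": fetch_url}  # explicit whitelist; nothing else is callable

def call_llm(messages):
    """Stand-in for a real model call (hypothetical; swap in your provider's API).
    Here it issues one canned fetch, then returns a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "fetch_url", "args": {"url": "https://example.com"}}
    return {"answer": "Fetched the page; a real model would summarize it here."}

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):            # hard step limit keeps the loop bounded
        decision = call_llm(messages)
        if "answer" in decision:          # the model says it is done
            return decision["answer"]
        tool = TOOLS[decision["tool"]]    # raises KeyError outside the whitelist
        result = tool(**decision["args"])
        messages.append({"role": "tool", "content": json.dumps({"result": result})})
    return "Stopped: step limit reached without a final answer."

print(run_agent("Check what example.com says and summarize it."))
```

The whitelist and the step cap are the point: the narrower the action space, the easier the agent is to supervise and debug, which is exactly where today's tools perform best.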
Lay the groundwork for future developments
No matter how fast agentic AI evolves, one thing is clear: your organization’s ability to capitalize on it will depend on the quality of your data foundation. That means automated data integration, clear governance, and universal accessibility.
These aren’t just technical requirements. They’re the operational groundwork for everything from basic reporting to cutting-edge AI. We don’t know how fast agentic AI will evolve or how far it’ll go, but with the right foundation, you won’t just be ready for agentic AI — you’ll be ready to lead.
To learn more about Fivetran, visit here.