Salesforce’s EVP and chief digital officer Joe Inzerillo reflects on lessons and missteps from the company’s first year as customer zero of the agentic AI platform.

The Salesforce Agentforce agentic AI platform just passed its first anniversary, and it has already reached its fourth release with Agentforce 3.
Salesforce considers itself customer zero for the platform, and Joe Inzerillo, EVP and chief digital officer of the software titan, says the past 12 months have been a wild journey of discovery and learning.
As a customer, Inzerillo says, Agentforce and the autonomous agents it enables have been revolutionary for Salesforce. The sales development rep agent has worked on more than 43,000 leads and generated $1.7 million in new pipeline from dormant ones. And the service agent has handled more than 1.4 million conversations on the Salesforce help site, resolving most cases without involving human reps and delivering strong customer satisfaction (CSAT) scores.
“We’ve been able to take about $100 million in cost out of our support function and deliver as good or better CSAT for our customers,” Inzerillo says.
But these results have not come without challenges and failures.
Agents are not chatbots
Inzerillo says one mistake made early in the journey was over-constraining the model. Early on, a customer asked an agent about a competitor, and the agent answered.
“Our marketing team wasn’t thrilled about that, so we added a grounding rule to never talk about our competitors,” he says, and that rule included an extensive list of competitors. But this backfired when customers asked legitimate questions, like how to integrate Microsoft Teams with Salesforce. The agent refused to help because Microsoft was a competitor on the list.
“That’s not what we want either since there has to be a little tolerance,” he says. “You need to treat it as a college intern who’s going to make some mistakes when they get started, but every piece of feedback you get is a gift.”
That early misstep led to an important insight: Agents aren’t chatbots. Chatbots need prescriptive directions. Agents need a goal but should be free to determine how to achieve that goal.
“If you restrict it too much, the best case is you’ll get a bot,” Inzerillo says. “The worst case is you’ll get worse than a bot because you’ve constrained it to the point where it has trouble answering even basic questions.”
In this instance, the solution was simple. The team got rid of the rigid rules and instructed the agent to act in the best interests of Salesforce and its customers.
“Less is more to start,” Inzerillo says. “Be a little minimalist. Get the tuning and data retrieval right. But don’t overconstrain it because part of the magic of agents is their ability to talk like a human, and if you give it guidelines that are too strict, it’s going to lose that.”
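The backfire Inzerillo describes can be sketched with a toy guardrail. The blocklist, function names, and responses below are purely illustrative, not Salesforce's actual implementation; the point is that a hard keyword filter cannot distinguish a competitive comparison from a legitimate integration question.

```python
# Toy illustration of an over-constrained guardrail: a hard blocklist
# refuses any question mentioning a listed competitor, including
# legitimate integration questions.

BLOCKED_COMPETITORS = {"microsoft", "oracle", "sap"}  # hypothetical list

def rigid_guardrail(question: str) -> str:
    """Refuse any question that mentions a blocked name."""
    if any(name in question.lower() for name in BLOCKED_COMPETITORS):
        return "Sorry, I can't discuss that topic."
    return "ANSWER"  # placeholder for the agent's real response

# The backfire: a legitimate integration question gets refused.
print(rigid_guardrail("How do I integrate Microsoft Teams with Salesforce?"))
```

The fix described in the article replaces the rigid rule with a goal-level instruction ("act in the best interests of Salesforce and its customers") passed to the model as guidance, letting it answer integration questions while still avoiding competitor promotion.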
Data hygiene is essential
The need for impeccable data hygiene has been another lesson learned.
As with other forms of gen AI, a key concern is autonomous agents hallucinating, generating false output based on a pattern the model perceives. But Inzerillo says that when his team investigated apparent hallucinations, the cause was frequently conflicting data across documents in an agent's knowledge base. When a model has two or more seemingly valid data sources that give contradictory answers, it's more likely to make up an answer to reconcile them.
“People used to think the hard part was actually getting the data amalgamated into a single location,” he says. “And it used to be hard. Now, with RAG and agents, accessing the data has never been easier. But the duplication of data, especially conflicting data, is something people haven’t paid a lot of attention to because humans are really good at figuring it out.”
In the age of autonomous agents, he says, data hygiene and data retirement need to be priorities. Salesforce discovered this when an agent for its customer support site pulled an outdated set of facts from an old marketing page. The page, which the site no longer linked to, contradicted regularly maintained help articles.
Salesforce, Inzerillo says, has treated this as a useful diagnostic tool to help it identify old data sources that need to be cleaned up.
“When we look back 10 years from now at the dawn of agentic AI, the footnote will be that data hygiene got four times better throughout that process,” he says.
Salesforce has since adopted a multilayered approach focused on data governance, source cleanup, and consolidation, and is investing in knowledge article curation and validation processes to ensure its agents are accessing a consistent library of resources.
Inzerillo says the company has also applied Salesforce Data Cloud as its activation layer, connecting, harmonizing, and unifying data.
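The conflicting-source problem can be illustrated with a toy retrieval-layer check. The snippet structure and field names below are invented for illustration; the idea is that flagging contradictory sources for cleanup beats letting the model try to reconcile them.

```python
# Toy conflict check over retrieved knowledge snippets. Each snippet
# asserts a (topic, value) fact; if two sources disagree on the same
# topic, the retrieval layer surfaces the conflict for data cleanup
# instead of handing both versions to the model.
from collections import defaultdict

def find_conflicts(snippets):
    """Return topics where retrieved sources give contradictory values."""
    by_topic = defaultdict(set)
    for snip in snippets:
        by_topic[snip["topic"]].add(snip["value"])
    return {topic: values for topic, values in by_topic.items()
            if len(values) > 1}

retrieved = [
    {"source": "help_article_2025", "topic": "storage_limit", "value": "10 GB"},
    {"source": "old_marketing_page", "topic": "storage_limit", "value": "5 GB"},
    {"source": "help_article_2025", "topic": "support_hours", "value": "24/7"},
]

# Flags storage_limit as conflicting; the stale source can then be retired.
print(find_conflicts(retrieved))
```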
The human experience still rules
The past year also taught Salesforce the importance of the human experience when leveraging agents. Inzerillo notes that Agentforce has allowed the company to build a multitude of agents, each fit for a specific purpose: a well-being agent to answer benefits questions, a meeting agent to schedule support, and a career agent to answer questions about professional development.
All these agents were meant to help Salesforce employees offload repetitive tasks, but they effectively fragmented the employee experience across dozens of agents. Employees had to keep track of which agent could perform which task.
So this summer, Inzerillo says, his team introduced an employee agent that could make calls to all those various agents while serving as a single point of contact for employees themselves. The team also created a manager agent that could bring together employee survey results and provide quarterly review resources.
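The single-point-of-contact pattern can be sketched as a thin router that dispatches to specialist agents. The agent names and keyword routing below are illustrative stand-ins; a production orchestrator would use real intent classification rather than keyword matching.

```python
# Toy orchestrator: one employee-facing agent routes requests to
# specialist sub-agents, so employees don't have to know which
# agent handles which task.

def wellbeing_agent(q): return f"[well-being] answering: {q}"
def meeting_agent(q): return f"[meetings] answering: {q}"
def career_agent(q): return f"[career] answering: {q}"

ROUTES = {  # keyword routing stands in for real intent classification
    "benefits": wellbeing_agent,
    "schedule": meeting_agent,
    "career": career_agent,
}

def employee_agent(question: str) -> str:
    """Single entry point: dispatch to the first matching specialist."""
    for keyword, agent in ROUTES.items():
        if keyword in question.lower():
            return agent(question)
    return "[employee agent] handling directly: " + question

print(employee_agent("How do I enroll in benefits?"))
```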
The insight, Inzerillo says, is that you need to do more than simply create agents.
“For agents to be truly effective, they must be built right into the flow of work,” he says. “Our agents aren’t siloed in a separate application. They’re seamlessly integrated into the tools our employees and customers use every day.”