Trust at scale: The key to business-ready agentic AI

BrandPost By PwC and Microsoft
Sep 30, 2025 · 4 mins

As enterprises look to transform their operations through agentic AI, putting in place strong guardrails and cybersecurity frameworks becomes critical.


Over the past three years, Generative AI (GenAI) has dominated conversations around enterprise technology. Today, however, the spotlight has shifted to agentic AI, an innovation that promises to deliver even greater efficiency and automation.

The potential of agentic AI

Unlike traditional software that executes predefined instructions, agentic systems make adaptive, autonomous decisions grounded in reasoning. As a result, agentic AI can automate complex enterprise systems, interact directly with customers, and even learn from data and adapt to changing conditions.

“With agentic AI, you are creating more autonomy. You can build systems and assign tasks where these agentic systems can take action, they can plan, reason, and can execute complex workflows,” explains Leigh Bates, Global Risk AI Leader, Partner PwC UK.

The challenges of scaling agentic AI

However, agentic AI’s promise of greater automation and efficiency can only be realised if deployed responsibly, with careful attention to regulatory and ethical obligations, as well as the need for robust cybersecurity.

In its new whitepaper, “Secure, governed, and business-ready: scaling agentic AI with trust and confidence,” PwC argues that agentic AI must function within a clearly defined ethical and governance framework. It notes several key considerations:

  1. Data governance.  Autonomy without strong data governance creates new risks. High-quality data, combined with embedded guardrails and safeguards, is essential to ensure the safe and resilient adoption of AI agents.
  2. Skills. Enterprises need skilled employees who understand where agentic systems are best deployed, when to hand workloads back to humans, and what ethical requirements responsible deployment entails.
  3. Compliance. AI tools can make errors, and there is also a risk of leaking sensitive data, especially if businesses connect agentic systems to data sources without adequate controls. Increasingly, these controls will be mandated through regulation such as the EU's AI Act and DORA.
  4. Cybersecurity. Cyber threat actors are already using agentic AI to scale and sharpen their operations. For instance, in 2025, social engineering, especially for initial access, saw widespread use of AI-generated emails, voice, and video by ransomware and BEC groups.[1]
  5. Resilience. When agents aren’t tightly constrained, they may access tools beyond their remit or initiate unintended actions. Even small misconfigurations can cascade into significant downstream consequences that threaten operational resilience.
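The resilience concern above — agents accessing tools beyond their remit — can be illustrated with a minimal sketch. The tool names and dispatch logic here are hypothetical, not a PwC or Microsoft implementation; the point is simply that an explicit allow-list means a misconfiguration fails loudly rather than cascading downstream:

```python
# Illustrative sketch: constrain an agent to an explicit allow-list of tools,
# so a tool call outside its remit is rejected rather than silently executed.
# Tool names below are hypothetical examples.

ALLOWED_TOOLS = {"search_knowledge_base", "draft_email"}

def invoke_tool(tool_name: str, payload: dict) -> str:
    """Reject any tool call outside the agent's declared remit."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is outside this agent's remit")
    # In a real system, dispatch to the actual tool implementation here.
    return f"{tool_name} executed"
```

A deny-by-default posture like this keeps even small misconfigurations from escalating: an agent that was never granted a tool simply cannot call it.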

Building trust at scale

To overcome these challenges, CISOs should first examine their governance frameworks, including AI policies, to establish whether they are ready for agentic AI.

“Security and governance for AI agents are not optional. They must be included from the start”, recommends Narayan Kumar Gupta, Global Microsoft Security Association Lead, Senior Manager, PwC Ireland.

This includes introducing agentic technology through low-risk proofs of concept, using guardrails to control how AI agents operate (especially when handling sensitive data), and maintaining ongoing monitoring with a human in the loop.
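The guardrail-plus-human-in-the-loop pattern can be sketched as follows. The tags and routing logic are hypothetical assumptions for illustration: actions touching sensitive data are queued for human approval instead of executing autonomously:

```python
# Illustrative sketch (tag names are hypothetical): route agent actions that
# touch sensitive data to a human reviewer instead of executing autonomously.

SENSITIVE_MARKERS = ("customer_pii", "payment", "credentials")

def requires_human_review(data_tags: set[str]) -> bool:
    """Flag any action whose data carries a sensitive marker."""
    return any(tag in SENSITIVE_MARKERS for tag in data_tags)

def execute(action: str, data_tags: set[str]) -> str:
    """Run low-risk actions autonomously; escalate sensitive ones."""
    if requires_human_review(data_tags):
        return "queued_for_human_review"
    return "executed_autonomously"
```

In practice the classification would come from data-governance tooling rather than a hard-coded tuple, but the control flow — classify, then either execute or escalate — is the essence of keeping a human in the loop.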

CISOs can explore the potential of Microsoft’s AI and security ecosystem to support secure agentic deployments. This includes AutoGen for orchestrating agent behaviour and Azure AI Agent Service in Azure AI Foundry for controlled experimentation, among other tools.

Harnessing agentic AI for growth

Agentic AI is accelerating digital transformation, empowering trailblazers to reshape their industries and scale customer impact. But success won’t come from innovation alone. It will also depend on preparedness. Some groundwork is already in place, through toolsets and frameworks from partners like PwC and Microsoft, but businesses should move swiftly to ensure their teams are fully equipped to unlock agentic AI’s potential while consistently adhering to best practice in security and governance.

Download PwC’s new whitepaper to learn more about scaling agentic AI with trust and confidence.

PwC: This content is for general information purposes only, and should not be used as a substitute for consultation with professional advisors.

© 2025 PwC. All rights reserved.


[1] PwC, “Cyber Threats 2024: A Year in Retrospect,” 2024