Building AI on Trust: Why Security, Authenticity, and Governance Define Success

Anant Deshpande, Regional VP at DigiCert, India & ASEAN, explains why digital trust, built on verifiable identity, content authenticity, and secure governance, must be embedded into every AI deployment. For him, success isn’t just scale; it’s trusted AI at scale.

CIO&Leader: How do you define success when transitioning an AI initiative from pilot to production?

Anant Deshpande: Success in transitioning an AI initiative from pilot to production means ensuring the solution not only delivers measurable value but also operates securely, responsibly, and with trust at its core. AI is a powerful enabler because it streamlines operations, enhances decision-making, and unlocks innovation. But it also introduces new risks around data integrity, identity verification, and unauthorized actions.

For DigiCert, success hinges on embedding digital trust into the AI lifecycle. That includes verifying the identity of AI agents, ensuring the authenticity of the data they consume and generate, and maintaining strong governance over their actions. A successful transition isn’t just about functionality or scale, but about deploying AI tools that are secure, accountable, and aligned with enterprise risk and compliance frameworks. In short, trusted AI is production-ready AI.

CIO&Leader: What are the core pillars of your current enterprise AI strategy?

Anant Deshpande: DigiCert’s enterprise AI strategy is rooted in a strong commitment to security, trust, scalability, and future-readiness. Every AI initiative is built with a security-first approach, adhering to zero trust principles and maintaining the integrity of our software supply chain. In today’s environment of growing misinformation, we’re also leading efforts to ensure content authenticity. By embedding digital identities into AI-generated outputs and supporting open standards like C2PA (Coalition for Content Provenance and Authenticity) and content credentials, we’re helping organizations verify the origin and integrity of digital content, strengthening trust in what people see, share, and rely on.
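To make the idea concrete, here is a minimal sketch of what a content credential does: bind signed provenance metadata to an AI-generated asset so its origin can later be verified. It uses a raw Ed25519 keypair from the Python cryptography library for brevity; a real deployment would follow the C2PA manifest format and sign with a certificate issued by a trusted CA, and the field names below are illustrative assumptions, not the C2PA schema.

```python
# A minimal sketch, not the C2PA SDK: bind signed provenance metadata
# to an asset so its origin can later be verified.
import hashlib
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_content_credential(asset_bytes: bytes, creator: str, tool: str,
                            signing_key: Ed25519PrivateKey) -> dict:
    """Build a provenance manifest for an asset and sign it."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "creator": creator,                # who created or edited the content
        "tool": tool,                      # how, e.g. the generative model used
        "created_at": datetime.now(timezone.utc).isoformat(),  # when
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest,
            "signature": signing_key.sign(payload).hex()}


# Illustrative only: a raw keypair stands in for a CA-issued signing identity.
key = Ed25519PrivateKey.generate()
credential = make_content_credential(b"<image bytes>", "news-desk@example.com",
                                     "image-model-v2", key)
print(json.dumps(credential["manifest"], indent=2))
```

The hash binds the manifest to the exact bytes of the asset, so any later edit invalidates the credential unless a new, signed manifest is attached.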

CIO&Leader: What key AI use cases have successfully moved into production, and what measurable impact have they delivered?

Anant Deshpande: One of the most impactful AI use cases that has moved into production is content authenticity and provenance, which ensures people can verify whether content was generated or altered by AI. As generative AI becomes more powerful and accessible, so does the risk of misinformation, deepfakes, and manipulated media.
Through our involvement in the Content Authenticity Initiative and C2PA, DigiCert plays a key role in building the trust layer for digital content. By attaching verifiable metadata, such as who created or edited a piece of content, when, and how, we help establish a chain of custody for digital media. This enables platforms, consumers, and enterprises to determine whether content is authentic, tampered with, or AI-generated.
The measurable impact is significant: reduced misinformation, greater consumer trust, and enhanced transparency in news, advertising, and enterprise communications. In an AI-driven world, content trust is now essential, and DigiCert is helping make that trust verifiable. We will continue to see more and more content credentials embedded in authentic images, designs, video, and writing.
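As a companion to the signing sketch above, a platform receiving such a credential could check it in two steps: verify the signature over the manifest, then confirm the asset hash still matches. Again, this mirrors the logic of content-credential verification rather than the actual C2PA format.

```python
# Companion to the sketch above: a platform-side check that a credential
# is authentic and that the asset it describes has not been altered.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_content_credential(asset_bytes: bytes, credential: dict,
                              public_key: Ed25519PublicKey) -> bool:
    """Return True only if the manifest is authentic and matches the asset."""
    manifest = credential["manifest"]
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
    except InvalidSignature:
        return False                       # manifest was forged or altered
    # Signature is valid; confirm the asset itself was not swapped out.
    return manifest["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()


# Usage, with `key` and `credential` from the previous sketch:
# verify_content_credential(b"<image bytes>", credential, key.public_key())
```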

CIO&Leader: What are the biggest challenges you’ve faced in operationalizing AI, and how have you addressed them?

Anant Deshpande: One of the biggest challenges in AI today is establishing digital trust, especially as enterprises scale AI across systems and teams. With the rapid rise of generative AI and deepfakes, ensuring the authenticity of AI-generated content and decisions is no longer optional; it’s critical. But for large organizations, this goes beyond technology: it’s about embedding trust into every layer of operations, across every touchpoint where AI is deployed.

At DigiCert, we’re helping enterprises operationalize this trust at scale. That means not only securing every stage of AI development, but also giving organizations the tools to trace content origins, detect tampering, and maintain integrity across complex environments.

CIO&Leader: How are you preparing your workforce for scaled AI adoption, and what organizational shifts have been required?

Anant Deshpande: Preparing our workforce for AI adoption is crucial because the success of any AI initiative depends not just on the technology, but on the people behind it. Our teams understand that DigiCert’s role as a global leader in digital trust means staying ahead of both the opportunities and the risks that come with AI.

To support this, we’ve invested in regular training programs focused on AI ethics, cybersecurity, and emerging technologies, tailored especially for our engineering, DevOps, and operations teams. At the organizational level, we’ve reinforced leadership by empowering our Chief Trust Officer and establishing cross-functional AI governance boards that include experts in legal, security, and data science. These groups carefully evaluate each AI use case to ensure it aligns with our principles of responsible innovation. Just as importantly, we’ve deepened collaboration between research, product, and security teams, ensuring that AI is not only advanced thoughtfully but also managed with integrity at every stage.

CIO&Leader: Looking ahead, what does your AI roadmap over the next 12–18 months look like, especially in terms of GenAI or foundation model deployments?

Anant Deshpande: The next 12–18 months will be defined by the rapid rise of agentic AI: autonomous systems capable of taking initiative, making decisions, and collaborating with other agents or humans. As organizations move from experimentation to deployment, the focus will shift from just building models to securing their behavior and interactions.

The roadmap includes deploying GenAI and foundation models that power intelligent agents across use cases, from workflow automation to customer service to software engineering. But as these agents take on more responsibility, trust becomes non-negotiable. Just as HTTPS and digital certificates protect websites, AI agents will need similar safeguards: verifiable identities, secure communications, and signed actions.

That’s where Public Key Infrastructure (PKI) comes in, enabling authentication, integrity, and accountability for AI agents at the protocol level. Expect to see a surge in demand for agent-level identity, cryptographic signing of decisions or outputs, and frameworks that ensure secure, transparent collaboration between AI systems. Ultimately, the roadmap isn’t just about scaling AI; it’s about scaling trusted AI.
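As a rough illustration of agent-level identity with PKI, the sketch below has an internal CA issue a short-lived X.509 certificate naming an agent, which then signs an action that a relying party verifies against the certificate. It uses the Python cryptography package; the agent name, certificate lifetime, and action payload are illustrative assumptions, not a DigiCert product API.

```python
# A sketch of agent-level identity with PKI: an internal CA issues a
# short-lived certificate naming the agent, and the agent signs each action.
# Names, lifetimes, and the action payload are illustrative assumptions.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID


def issue_agent_cert(ca_key, ca_name, agent_key, agent_id: str) -> x509.Certificate:
    """Have the CA bind agent_id to the agent's public key for 24 hours."""
    now = datetime.datetime.now(datetime.timezone.utc)
    subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, agent_id)])
    return (x509.CertificateBuilder()
            .subject_name(subject)
            .issuer_name(ca_name)
            .public_key(agent_key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(hours=24))
            .sign(ca_key, hashes.SHA256()))


ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Internal CA")])
agent_key = ec.generate_private_key(ec.SECP256R1())
cert = issue_agent_cert(ca_key, ca_name, agent_key, "billing-agent-01")

# The agent signs an action; a relying party verifies it with the certificate.
action = b'{"action": "approve_invoice", "invoice_id": 4217}'
signature = agent_key.sign(action, ec.ECDSA(hashes.SHA256()))
cert.public_key().verify(signature, action, ec.ECDSA(hashes.SHA256()))
print("action verified for", cert.subject.rfc4514_string())
```

Short certificate lifetimes limit how long a compromised agent key remains useful, which matters when agents act autonomously and at machine speed.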
