In an age where cyber threats are evolving faster than ever, eScan has been ahead of the curve—shifting from traditional signature-based detection to nearly 100% behaviour-based defense well before it became an industry imperative. At the helm is Govind Rammurthy, CEO and Managing Director of eScan, who has been steering the company’s innovation across more than 90 countries. In this exclusive conversation, Rammurthy discusses how AI is reshaping cybersecurity—from advanced persistent threat detection to ethical decision-making in autonomous defense systems—and shares his perspective on building resilience in critical sectors, the global impact of Indian innovations, and the evolving role of CISOs in AI-driven environments.

CIO&Leader: As cyber threats evolve beyond the reach of signature-based detection methods, how is eScan leveraging AI to develop adaptive cybersecurity solutions capable of independent, real-time detection, decision-making, and response, much like agentic AI systems?
Govind Rammurthy: eScan made this transition from signature-based to behaviour-based detection years ago. For quite some time now, we’ve been operating on nearly 100% behaviour-based detection. While we still maintain signatures in our databases, our dependency on them is essentially zero. Every malware detection test we run in our labs is conducted without signatures—and we’re consistently achieving close to 100% success rates.
With the evolution of AI/ML, new possibilities have emerged where AI can truly make a difference. We’re no longer just detecting known malware—we’re enabling attribution analysis, identifying targeted attack patterns, blocking SQL injection attempts in real time, and delving deep into behavioural analytics.
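To make the real-time blocking idea concrete, here is a deliberately minimal Python sketch of heuristic SQL injection screening. It is not eScan's engine; production systems layer many such signals with learned behavioural models, and every pattern and name below is illustrative only.

```python
# Toy SQL injection screen: a handful of heuristic patterns checked
# against incoming request parameters before they reach the database.
import re

SQLI_PATTERNS = [
    re.compile(r"\bunion\b.+\bselect\b", re.I),   # UNION-based probing
    re.compile(r"\bor\b\s+1\s*=\s*1", re.I),      # classic tautology
    re.compile(r"--|;\s*drop\s+table", re.I),     # comments / stacked queries
]

def looks_like_sqli(param_value: str) -> bool:
    """Return True if any injection heuristic matches the input."""
    return any(p.search(param_value) for p in SQLI_PATTERNS)

# Example: screen request parameters at the application boundary.
assert looks_like_sqli("' OR 1=1 --")
assert not looks_like_sqli("jane.doe@example.com")
```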
AI’s real strength shows in areas like insider threat detection and advanced persistent threats (APTs). These are situations where malicious behaviour may look completely legitimate in the short term. An APT, for example, might use normal business applications and standard protocols but, over weeks or months, quietly exfiltrate sensitive data.
Traditional correlation engines, with their short-term memory, miss such patterns because they focus on immediate red flags. AI systems, however, can maintain long-term context—tracking behaviour patterns over extended periods—and detect subtle, gradual data leaks that would otherwise go unnoticed. This is where the real value lies for us today.
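As a rough illustration of that long-memory idea, the Python sketch below contrasts a one-day spike check with a 90-day rolling baseline. The inputs (daily outbound byte counts per host) and all thresholds are assumptions for illustration, not a description of eScan's actual model.

```python
from collections import defaultdict, deque
from statistics import mean, pstdev

WINDOW_DAYS = 90     # long-term context an AI system might retain
SPIKE_FACTOR = 10    # the kind of jump a short-memory engine flags

class ExfilBaseline:
    """Toy per-host model of daily outbound traffic (names hypothetical)."""

    def __init__(self):
        self.history = defaultdict(lambda: deque(maxlen=WINDOW_DAYS))

    def observe(self, host: str, outbound_bytes: int) -> dict:
        hist = self.history[host]
        verdict = {"short_term_alert": False, "long_term_alert": False}

        # Short-term check: only catches a sudden spike versus yesterday,
        # which a patient attacker can trivially stay under.
        if hist and outbound_bytes > SPIKE_FACTOR * hist[-1]:
            verdict["short_term_alert"] = True

        # Long-term check: flags a sustained upward drift even when every
        # individual day looks unremarkable on its own.
        if len(hist) >= 30:
            baseline, spread = mean(hist), pstdev(hist) or 1.0
            recent = mean(list(hist)[-7:])
            if recent > baseline + 2 * spread:
                verdict["long_term_alert"] = True

        hist.append(outbound_bytes)
        return verdict
```

A patient attacker who merely doubles a host's daily outbound volume never registers as a spike, but sustained over a week the recent average pulls past the two-sigma band and raises the long-term alert.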
CIO&Leader: With critical sectors such as financial services, healthcare, and public institutions undergoing rapid digital transformation, what do you consider the key challenges in achieving robust cyber resilience across essential infrastructures?
Govind Rammurthy: The key challenges in achieving robust cyber resilience across these rapidly digitizing sectors go far beyond technology: they’re fundamentally about balancing accessibility with security.
Financial institutions must serve customers 24/7 while ensuring every transaction is secure. Hospitals cannot afford downtime when lives are on the line, yet they manage highly sensitive data.
What I see repeatedly is organizations digitizing without a security-first mindset. They are adding connected devices, cloud services, accessibility features, and digital workflows far faster than they are updating their security frameworks. It’s like renovating a house while people are still living in it — you have to be extremely careful to maintain structural integrity.
A bigger concern is that these sectors are interconnected. A breach in one financial institution can ripple across the entire banking ecosystem. A compromised hospital system can impact patient care across multiple facilities. Resilience must therefore be considered not just within individual organizations, but across whole ecosystems.
Legacy systems are another major hurdle. Many critical infrastructures still run on technology never intended to be connected to the internet — yet here we are, trying to secure systems built in a different era.
CIO&Leader: How close do you believe we are to achieving truly autonomous security platforms that can replicate human-like learning, contextual understanding, and ethical decision-making—particularly when addressing zero-day or advanced persistent threats?
Govind Rammurthy: Honestly, I believe we’re still further from truly autonomous security platforms than many assume — and in my view, that’s a good thing. The technology for pattern recognition and rapid response is advancing quickly, but the ethical decision-making component is far more complex.
Consider a scenario where an autonomous system detects suspicious activity that could be either a sophisticated attack or a legitimate emergency procedure by medical staff. The technical detection might be perfect, but the contextual (and ethical) understanding – knowing when it’s appropriate to block access versus when blocking could cost lives – requires judgment that goes beyond pattern matching.
We’re making significant progress on zero-day detection through behavioural analysis and machine learning. Still, the decision-making component needs to remain collaborative between AI and human expertise for the foreseeable future. The goal shouldn’t be to replace human judgment entirely, but to augment it with AI/ML capabilities that process information faster and more comprehensively than any human could.
We’re probably 70% of the way there on the technical side, but only about 30% on the ethical decision-making framework. And frankly, that gap is intentional – we need robust governance structures before we hand over complete control.
No matter how sophisticated AI becomes, it fundamentally processes information differently from humans. For critical infrastructure and sensitive environments, fully autonomous decision-making, without human oversight, is way too risky. The potential consequences of getting it wrong in these contexts are too severe to accept that risk.
CIO&Leader: In your opinion, what steps can organizations and individuals take to improve their cyber hygiene practices, and how vital is ongoing cybersecurity awareness in combating increasingly sophisticated attack vectors?
Govind Rammurthy: The fundamentals haven’t changed as much as we might think. Most successful attacks still exploit basic human errors: weak passwords, clicking on suspicious links, or downloading files from untrusted sources. However, the sophistication of social engineering has increased dramatically.
Organizations need to move beyond annual training sessions that everyone clicks through without thinking. Security awareness needs to be continuous and contextual. When someone receives an email that looks suspicious, they should have an easy way to verify it immediately, rather than waiting for the next quarterly training.
For employees, I always recommend starting with the basics: use unique passwords with a password manager, enable two-factor authentication wherever possible, and be cautious when sharing personal information online. Equally important is developing a healthy skepticism toward unexpected communications, whether they arrive via email, text, or phone. This skepticism should also apply to unusual requests from people you know. For instance, if you receive an email from your CEO claiming they’re in trouble and asking you to transfer a sum of money, double-check or even triple-check. Similarly, if you get an email from a vendor about changing their bank account details, don’t accept it at face value. The simple human cautiousness our parents taught us as children, not to talk to strangers or take things from them, applies just as much to corporate cybersecurity behaviour.
Coming back to the question of combating sophisticated attack vectors: the challenge is that attackers are using the same AI tools to make their attacks more convincing. Phishing emails are getting harder to spot, deepfake audio (and now video) is becoming more realistic, and social engineering attacks are becoming more personalized. This means our awareness programs need to evolve just as quickly.
CIO&Leader: Given eScan’s presence in over 90 countries, what impact do you see Indian cybersecurity innovations—especially those powered by AI—having on global norms, policy formation, and digital defense strategies?
Govind Rammurthy: India’s cybersecurity sector has reached an interesting inflection point. We’re not just adapting solutions developed elsewhere anymore – we’re creating innovations that are being adopted globally. The combination of technical talent, understanding of diverse threat landscapes, and cost-effective development is creating solutions that work well in both emerging and developed markets.
What’s particularly interesting is how Indian companies are approaching cybersecurity for resource-constrained (and end-of-life) environments. We’re building effective solutions without requiring massive infrastructure investments, which is valuable for organizations worldwide, not just in developing economies.
The regulatory environment in India is also pushing innovation. The Digital Personal Data Protection (DPDP) Act and other frameworks are creating demand for privacy-preserving security solutions that comply with multiple international standards simultaneously. This is forcing us to think more holistically about security architecture.
I see Indian innovations influencing global standards, particularly in areas like smartphone security, given our massive smartphone user base, and in developing security solutions that work across diverse technological ecosystems. As we scale our presence internationally, we’re finding that solutions designed for the Indian market often translate well to other complex, multi-layered technological environments.
CIO&Leader: What guidance would you offer to CISOs tasked with protecting complex, AI-driven environments, especially when facing evolving cyber risk landscapes?
Govind Rammurthy: The biggest mistake I see CISOs making is treating AI systems like traditional IT infrastructure. AI introduces new attack vectors and requires different security approaches. We cannot secure machine learning models the same way we secure databases.
First, understand the AI supply chain. Many organizations are implementing AI solutions, or piloting them for future use cases, without fully understanding the training data, model dependencies, or potential biases. We need visibility into how these systems make decisions and what data they’re processing.
Second, implement AI-specific monitoring. Traditional security tools are not equipped to handle AI-related threats such as model poisoning, adversarial inputs, or data-extraction attacks. We need monitoring systems that can distinguish between normal AI behaviour and behaviour indicative of an attack. Making this possible will require a new wave of specialized professionals: AI red-teaming experts. While education and certification for these roles are still evolving, I predict they will become well established within the next couple of years.
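As a hedged illustration of what AI-specific monitoring can mean in practice, the sketch below watches an inference endpoint for two attack-shaped signals: the query bursts typical of model-extraction attempts, and a sag in the output-confidence distribution that can accompany adversarial probing or a poisoned model update. The service shape, names, and thresholds are all assumptions, not any real product's API.

```python
import time
from collections import defaultdict, deque
from statistics import mean

QUERY_BURST_LIMIT = 1000        # calls per client per hour (illustrative)
CONFIDENCE_DRIFT_DELTA = 0.15   # tolerated drop vs. rolling baseline

call_log = defaultdict(deque)           # client_id -> recent call timestamps
confidences = deque(maxlen=10_000)      # rolling history of model confidence

def record_inference(client_id: str, confidence: float) -> list:
    """Log one inference call and return any alerts it raises."""
    alerts = []
    now = time.time()

    # Extraction signal: a single client hammering the endpoint looks
    # nothing like malware, but a lot like someone cloning the model.
    q = call_log[client_id]
    q.append(now)
    while q and now - q[0] > 3600:
        q.popleft()
    if len(q) > QUERY_BURST_LIMIT:
        alerts.append(f"extraction-like query burst from {client_id}")

    # Poisoning/adversarial signal: recent confidences sagging below
    # the model's own long-run baseline.
    confidences.append(confidence)
    if len(confidences) == confidences.maxlen:
        baseline = mean(confidences)
        recent = mean(list(confidences)[-200:])
        if baseline - recent > CONFIDENCE_DRIFT_DELTA:
            alerts.append("confidence drift below rolling baseline")

    return alerts
```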
Third, develop incident response (IR) procedures specifically for AI systems. When an AI model starts behaving unexpectedly, the response protocol is different from a typical malware incident: we may need to isolate the model, revert to a previous version, or add human oversight while we investigate.
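A minimal sketch of those steps, assuming a hypothetical in-house model registry (no real MLOps framework's API is implied): quarantine the suspect version, fall back to the last clean one, and force a human review before anything else changes.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    """Hypothetical versioned model store; oldest version first."""
    versions: list = field(default_factory=list)
    active: str = ""
    quarantined: set = field(default_factory=set)

    def contain_incident(self, suspect: str) -> str:
        """Isolate a misbehaving model and revert to a known-good version,
        leaving the final call with a human reviewer."""
        self.quarantined.add(suspect)
        clean = [v for v in self.versions if v not in self.quarantined]
        if not clean:
            raise RuntimeError("no clean version left; manual intervention required")
        self.active = clean[-1]
        print(f"REVIEW REQUIRED: {suspect} quarantined; serving {self.active} "
              "under human oversight pending investigation")
        return self.active

registry = ModelRegistry(versions=["v1.2", "v1.3", "v1.4"], active="v1.4")
registry.contain_incident("v1.4")   # reverts to v1.3 and flags for review
```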
Perhaps most importantly, we should not let the complexity paralyze us. Start by securing the basics and documenting the workflow: data inputs, model access controls, and communication between AI systems and the rest of the infrastructure. Then build understanding gradually, adapting the security posture as we learn how these systems operate in specific environments.
The key is maintaining a balance between innovation and security. We should adopt and test AI solutions while simultaneously implementing proper security considerations.