From pilots to production: How Snowflake sees enterprise AI scaling in India

Vijayant Rai, Managing Director at Snowflake, outlines how enterprises in India are moving from AI pilots to production, highlighting data readiness, unified platforms, and agentic AI as key enablers, while addressing ROI concerns, governance, and evolving workforce dynamics.

Vijayant Rai, Managing Director, Snowflake

Enterprise AI adoption is entering a decisive phase, where the conversation is shifting from experimentation to measurable business outcomes. While proof-of-concept projects have dominated the past few years, organisations are now under pressure to scale AI into production, demonstrate ROI, and integrate emerging paradigms such as agentic AI into core operations. This transition is particularly critical in markets like India, where digital scale, regulatory evolution, and infrastructure readiness are shaping how enterprises operationalise AI.

In this conversation, Vijayant Rai, Managing Director for India at Snowflake, outlines how enterprises are progressing from pilots to production, the challenges of fragmented data and unclear business context, and the growing importance of unified data platforms. He discusses the role of partnerships, the balance between sovereign AI and global innovation, and how India’s scale, talent pool, and digital public infrastructure can accelerate AI adoption.

CIO&Leader: AI dominates every conversation today, yet many CIOs say their pilots are stuck and that moving from pilot to production is a big challenge. Now we are also talking about agentic AI. How do you see this overall evolution, and how is Snowflake addressing these challenges?

Vijayant Rai: If you look at it, we released our year-end results about 10 days ago. Our fiscal year ends in January, so these were our FY26 financial results. We reported roughly US $4.4 billion in product revenue, which reflects actual consumption on the platform. Another important point is that we now have 13,300 customers. What stood out was that 9,100 of those customers have adopted AI on the Snowflake platform.

We are a fully managed AI Data Cloud. As customers start using AI features on top of their data estates on Snowflake, we are able to see that usage. When 9,100 out of 13,300 customers are using AI, it clearly indicates a strong movement in adoption.

There are different phases, as you mentioned—experimentation, proof of concepts, and then production. What we are seeing over the last year or year and a half is that we have moved from POCs to production. We are now seeing real-life production use cases globally and in India. Many organisations are able to see production impact and benefits. Internally at Snowflake as well, this is happening at scale.

CIO&Leader: Is adoption homogeneous across organisations? Is everyone able to scale at the same level?

Vijayant Rai: Not really. Some challenges exist, and they vary across organisations. There are cases where organisations try to fit generative AI into existing processes and systems. That often becomes challenging and creates more complexity. There are also cases where AI projects fail because the business context is not defined clearly. AI is fundamentally about context.

“AI is fundamentally about context. Ensuring business context is embedded everywhere, along with unified data, is critical to creating a cohesive AI strategy.”

These challenges exist, but at the same time, organisations are clearly moving toward production. Within any organisation, you will still see all three phases—experimentation, proof of concepts, and production—happening simultaneously.

We recently released a study on ROI and impact, conducted with more than 2,050 respondents globally, including a large number from India. The India statistics show that 33% of respondents already have AI use cases in production. Another 33% have initiatives in motion and expect to see impact within the next 12 months. Around 24% are still in early stages, and a small percentage is lagging. The overall trend clearly shows movement toward measurable AI impact.

At the same time, this is still an evolution. Outcomes vary depending on data maturity, internal capabilities, and how deeply AI is embedded into business processes.

CIO&Leader: We are now talking about agentic AI and automated workflows. While experimentation is happening, CFOs are still concerned about ROI. How do you see agentic AI scaling over the next few years, and what challenges do enterprises face?

Vijayant Rai: Some of the challenges include fragmented data estates, people and process gaps, and the absence of a clear AI design in terms of business impact.

If you look at it from a high level, organisations need a more coordinated approach to AI. They need to infuse AI across all layers of the organisation instead of creating silos. They also need to ensure that business context is embedded everywhere, and that data is unified.

Different industries have different concerns. Financial services companies are focused on whether they are getting enough ROI from AI investments. Manufacturing companies are more concerned about whether they have the right skills to adopt AI.

What is common across industries is the understanding that AI is transformative if implemented correctly. At Snowflake, we try to bring predictability into the process through cost management and FinOps, helping organisations manage AI consumption and internal chargebacks.

CIO&Leader: With increasing focus on compliance, DPDP, and sovereign AI, how should enterprises approach building compliant AI architectures?

Vijayant Rai: This is extremely important, not just for AI but for any technology. Regulated industries must adhere to frameworks such as RBI regulations and consent regimes like DPDP.

One advantage with Snowflake is that we operate globally and have long-standing experience in markets like Europe and the US. All the controls required under GDPR are already built into the platform and are available globally.

When customers look at DPDP compliance, these controls are already in place. We also provide capabilities for data residency. In India, for example, we operate on AWS and Azure within Indian jurisdictions, ensuring compliance requirements are met.

Beyond technology, compliance also involves processes. That is where our partner ecosystem, including Deloitte, EY, and KPMG, plays a role in helping organisations implement governance frameworks effectively.

CIO&Leader: You run on hyperscalers but also compete with them. How does this relationship work?

Vijayant Rai: We work with all three hyperscalers globally. In India, we operate on AWS and Azure, and globally we also work with Google Cloud. We have large committed-spend contracts with them and maintain strong relationships around what we host on their infrastructure. This helps ensure optimal cost dynamics for our customers.

For a Snowflake customer, the cost of compute and storage that would typically be charged separately by hyperscalers is bundled into a single pricing model. Through our partnerships, we are able to pass on both commercial and technical benefits. Given our scale as a customer for AWS, Azure, and Google, we also have a seat at the table in terms of product roadmaps and technology direction, which allows us to maintain strong alignment.

This is a key advantage. While we compete with other players in the ecosystem, each with strong technologies, the market in India still has significant room for growth. Data and AI adoption is still in its early stages for many organisations. We position Snowflake as a fully managed AI Data Cloud.

Another important advantage of operating across multiple hyperscalers is the flexibility it provides. In regulated industries, for instance, the ability to enable multi-cloud disaster recovery is highly valuable. This is possible because we operate across different cloud environments, which is not the case for many providers. It allows us to offer both flexibility and resilience, while continuing to compete where necessary.

CIO&Leader: How do partnerships with AI companies help accelerate adoption?

Vijayant Rai: We recently announced US $200 million partnerships with Anthropic and OpenAI. Our objective is to provide customers with access to the most suitable large language models (LLMs), aligned with their specific use cases, while ensuring a seamless consumption experience.

The platform operates on a unified consumption model using Snowflake credits, which applies consistently across both analytics and AI workloads.

These partnerships enable access to more than 15 of the world’s leading LLMs. The underlying models, including those from Anthropic and OpenAI, are integrated within the platform, allowing customers to leverage them without added complexity.

Given the scale of these commercial relationships, there is also a strong engineering alignment. This ensures that advancements from these LLM providers can be efficiently integrated and delivered to customers, enabling them to benefit from continuous innovation across the ecosystem.

CIO&Leader: How do you help enterprises build responsible and trustworthy AI systems?

Vijayant Rai: We work closely with our partners across multiple engagements. The approach is not limited to technology or tools; it also involves process and industry-specific expertise. Partners play a critical role in bringing domain knowledge, whether in financial services, manufacturing, or other sectors. For instance, when integrating SAP data, we collaborate with partners who specialise in that ecosystem.

Several partners also work with us on the process layer, including decisions around what data is ingested and how it is managed within Snowflake. Alongside this, the platform itself includes built-in controls and technology features for security and governance.

A core principle of our platform is that it is designed to be easy, connected, and trusted, with trust at the centre. Customer data remains fully owned and controlled by the customer. We do not access or use that data; our role is to provide the framework for security and governance.

As new security capabilities or governance features are developed, they are integrated directly into the platform, allowing customers to benefit from continuous enhancements without additional overhead. This creates a combined model where technology-driven controls are embedded within the platform, while partners contribute to the process and implementation layer.

CIO&Leader: With the rise of open-source models, how should enterprises decide between sovereign stacks and global platforms?

Vijayant Rai: There are different considerations in each case. From the perspective of Snowflake as a global AI data cloud, a key advantage is the ability to rapidly integrate new innovations. For example, when the open-source model DeepSeek was released, it was onboarded onto the platform within a short timeframe.

As an engineering-led organisation, the focus is on continuously integrating, testing, and operationalising new large language models. For most enterprises, building and maintaining such capabilities independently at scale is complex and resource-intensive.

Organisations that attempt to build their own infrastructure or data center environments often face challenges in keeping pace with the rapid evolution of AI technologies. Global platforms help address this by absorbing the complexity of integration and updates, allowing enterprises to focus on business outcomes rather than infrastructure management.

There are, however, valid use cases for sovereign or fully controlled AI stacks, particularly in sectors such as defence or highly regulated industries, where data control and governance requirements are more stringent.

The broader trend reflects increasing demand for control over data residency, security, and model deployment. In response, there has been a shift toward localised infrastructure and architectures that align with regulatory and compliance requirements, while still enabling access to evolving AI capabilities.

CIO&Leader: Where does India stand in terms of AI adoption and scalability today? What are the key strengths and challenges that will shape its growth?

Vijayant Rai: India stands out for its large talent pool and the scale at which digital-native companies operate. Organisations such as Swiggy serve tens of millions of users daily, demonstrating a level of scale that is rare globally.

The country has already proven its ability to build population-scale digital systems through initiatives like Aadhaar, UPI, and India Stack. AI can build on this foundation to drive large-scale impact in sectors such as financial inclusion, agriculture, and healthcare.

However, adoption is still evolving. A significant portion of enterprise data remains on-premises, particularly in regulated industries, where compliance and governance requirements slow cloud adoption. Organisations are progressing in phases, but the scale of AI deployments is increasing steadily.

Overall, the opportunity remains substantial, with strong momentum expected across sectors such as financial services, manufacturing, and digital-native businesses.

CIO&Leader: What is the key difference between digital-native and traditional enterprises when it comes to data and AI adoption? What challenges do each of these segments face?

Vijayant Rai: Digital-native companies are inherently cloud-first, which means they do not face the challenge of migrating data to the cloud. They are typically engineering-led organisations with strong clarity on outcomes and a deep understanding of technology. As a result, they are able to adopt AI and analytics capabilities faster and at scale, leveraging platforms like Snowflake to support large user bases.

In contrast, traditional enterprises operate on legacy systems with data distributed across multiple environments. Their transition to the cloud is more complex, particularly in regulated industries where compliance and governance requirements must be addressed. This results in a more gradual adoption curve.

However, traditional organisations also bring unique advantages, particularly access to long-term and diverse datasets. With the advancement of generative AI, they are now able to unlock value from unstructured data sources that were previously difficult to analyse. This is enabling new use cases, including areas like fraud detection and advanced analytics.

Ultimately, while digital-native companies move faster due to their cloud-native foundation, traditional enterprises are progressing steadily. In both cases, organisations are prioritising use cases that deliver clear business value and measurable return on investment.

CIO&Leader: How can enterprises balance the need for localised data residency with the use of global LLMs? Additionally, as AI evolves, what skills should the workforce focus on, and how can organisations address concerns around job displacement?

Vijayant Rai: From a data perspective, localisation is already addressed through in-country infrastructure. Platforms like Amazon Web Services and Microsoft Azure enable data residency within India, ensuring compliance with regulatory requirements.

On the AI side, the approach is to bring models closer to where the data resides. Many cloud environments already support AI infrastructure within India, and as demand scales, this capacity continues to expand. This allows enterprises to leverage global models while maintaining control over data.

From a workforce perspective, the industry is undergoing a generational technology shift, similar to earlier transitions such as the internet, mobile, and cloud. One key change is that AI tools have become significantly more accessible, enabling users to perform tasks that previously required specialised technical skills. This has led to measurable improvements in productivity across functions, including sales, operations, and analytics.

AI is also driving innovation by enabling organisations to process and analyse large-scale datasets that were previously unmanageable. This is opening up new possibilities across industries, including areas like research and complex data-driven decision-making.

“Where critical thinking and strategic decisions are required, human intervention will be essential. Domain expertise will come from people, while AI will define how to execute.”

In terms of skills, the emphasis is shifting toward critical thinking, strategic decision-making, and domain expertise. While AI can enhance execution and provide insights, human judgment remains essential in defining objectives and interpreting outcomes. Organisations should focus on enabling employees to work alongside AI systems, equipping them with the skills required to leverage these tools effectively while adapting to evolving roles.
