AI is here to stay!

In a recent webinar organized in association with Kotak, Sreedhar Gade, Vice President of Engineering and Responsible AI at Freshworks, shared valuable insights on the evolution of artificial intelligence and its impact on enterprise operations. The session provided a roadmap for CIOs navigating the rapidly evolving AI landscape, offering practical advice on implementation strategies, cost management, and responsible governance frameworks.

The AI evolution timeline

Gade traced the journey of AI from late 2022, when ChatGPT introduced generative AI to the mainstream, through the experimental phase of 2023, to the current landscape where AI has become table stakes for businesses across industries.

“In 2023, most companies realized AI’s potential but weren’t sure how to leverage it,” explained Gade. “It was a year of skeptical proof of concepts.”

By 2024, companies began seriously incorporating AI into their strategies, with 2025 marking the transition to revenue generation, comprehensive governance frameworks, and widespread adoption across sectors from high-tech to pharmaceuticals and automotive.

Key areas of focus

The presentation highlighted several critical considerations for technology leaders:

1. Strategic platform approach

Gade emphasized the importance of establishing a centralized AI platform rather than allowing fragmented innovation across departments. “If everybody is innovating in AI, they’ll end up spending a lot of duplicate time. Create a platform where one team is doing it, and everybody learns from their experience,” he advised.

This approach not only reduces redundancy but also enables consistent governance and knowledge sharing across the organization.

2. Balancing internal and product AI

For maximum effectiveness, Gade stressed that internal AI adoption and product-facing AI innovations must evolve in tandem. “Unless you change the mindset of your employees, they’ll not be able to build something compelling,” he noted, sharing how Freshworks established an internal program called CloudWatch (“winning with AI at home”) before taking AI solutions to market.

This internal-first approach allowed Freshworks to develop use cases ranging from attrition risk indicators to advanced resume screening tools, all built by non-technical staff using the company’s AI framework.

3. Model selection and cost management

One of the most practical insights for CIOs concerned model selection and cost management. Gade advocated for matching the right AI approach to each specific use case:

“For specific niche tasks like algorithmic trading or code generation, use discriminative AI models that do one thing very well. For general tasks like content creation, use generative models like GPT-4.”

Regular model benchmarking is essential for controlling costs. Gade recommended a rule of thumb: “For every $100 I make, I should spend less than $5-10 on AI.”
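The two ideas above, routing each task to the right class of model and capping AI spend as a share of revenue, can be sketched in a few lines. This is an illustrative sketch only; the task categories and model labels are hypothetical and not part of any actual Freshworks stack:

```python
def pick_model(task: str) -> str:
    """Route a task to a model family (illustrative mapping only)."""
    niche_tasks = {"algorithmic_trading", "code_generation"}
    # Narrow, well-defined tasks get a specialist model; broad tasks get a generative one.
    return "discriminative-specialist" if task in niche_tasks else "general-generative"


def ai_spend_ok(revenue: float, ai_cost: float, cap: float = 0.10) -> bool:
    """Check spend against the rule of thumb: AI cost under 5-10% of revenue.

    The upper bound (10%) is used as the cap here.
    """
    return ai_cost <= cap * revenue


print(pick_model("code_generation"))  # discriminative-specialist
print(ai_spend_ok(100.0, 7.0))        # True: $7 of AI spend on $100 of revenue is within the cap
```

A real benchmarking loop would of course compare measured quality and per-token cost across candidate models, but the guardrail logic reduces to checks of this shape.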

4. Governance and risk management

With over 60% of CIOs incorporating AI into innovation plans but fewer than half confident in managing its risks, Gade introduced Freshworks’ AI Trust Framework, built on five pillars:

- Safety: preventing harmful outputs such as bias or abusive content
- Privacy: masking PII and ensuring data residency compliance
- Controls: implementing role-based access for sensitive data
- Traceability: providing citations to build trust in AI responses
- Security: ensuring end-to-end encryption of data

“AI governance cannot be an afterthought,” Gade emphasized. “You have to think about governance and responsible AI from the beginning.”
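To make one pillar concrete, the Privacy pillar implies a PII-masking pass before user text ever reaches a model. The following is a minimal sketch, not Freshworks’ implementation; the two regex patterns are illustrative and cover only email addresses and simple phone-number shapes:

```python
import re

# Illustrative PII patterns; a production system would cover many more categories.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}


def mask_pii(text: str) -> str:
    """Replace matched PII with typed placeholders before sending text to a model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


print(mask_pii("Reach Ana at ana@example.com or 555-010-2345."))
# Reach Ana at [EMAIL] or [PHONE].
```

The typed placeholders preserve enough context for the model to respond usefully while keeping the raw identifiers out of prompts and logs.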

Practical implementation for enterprise CIOs

Traditional product development cycles that took a year can now be compressed to less than a month with AI. Gade described Freshworks’ approach: “We run multiple models simultaneously across multiple use cases, evaluate which performs best, and go to market very fast.” This accelerated pace means enterprises can test more ideas and quickly scale successful implementations while discarding those that underperform.

For CIOs concerned with deriving ROI from AI investments, Gade outlined a shift in pricing models: “We’re moving from seat-based pricing to outcome-based pricing (paying when the agent solves a problem) and value-based pricing (paying based on improvements in customer satisfaction).”

To address concerns about AI accuracy, Gade advocated for human-in-the-loop systems that continuously improve through feedback. “When customers interact with an AI agent, they provide feedback. If the problem isn’t fully resolved, we pass that to a human agent and capture that signal to improve the AI.” According to Gade, this approach allows for safe scaling as AI systems demonstrate increasing reliability.
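The escalation-and-feedback loop Gade describes can be sketched roughly as follows. The resolution flag and log structure are placeholders for illustration, not an actual Freshworks API:

```python
from dataclasses import dataclass, field


@dataclass
class FeedbackLog:
    """Captured escalation signals, used later to tune or retrain the AI agent."""
    escalations: list = field(default_factory=list)


def handle_ticket(ticket: str, ai_resolved: bool, log: FeedbackLog) -> str:
    """Try the AI agent first; on failure, escalate to a human and record the signal."""
    if ai_resolved:
        return "resolved_by_ai"
    # A human takes over, and the failure is logged so the model can improve on it.
    log.escalations.append(ticket)
    return "escalated_to_human"


log = FeedbackLog()
print(handle_ticket("reset my password", ai_resolved=True, log=log))  # resolved_by_ai
print(handle_ticket("billing dispute", ai_resolved=False, log=log))   # escalated_to_human
print(log.escalations)  # ['billing dispute']
```

The key design point is that every handoff doubles as labeled training data: the tickets the AI could not resolve are exactly the cases the next model iteration should learn from.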

“While LLMs forget you after a session ends, agents are like a group of PhDs working specifically for you; they know your background and don’t forget,” Gade explained. “A year from now, you’ll see agents everywhere.”

Conclusion

As AI transitions from experimental technology to business essential, Gade offered a comprehensive framework for implementation. The key differentiators for success will be quickly identifying the right AI solutions for specific business needs, finding high-ROI use cases, developing smart monetization strategies, and focusing on hyper-personalization.

With collaboration emerging as a crucial factor (even tech giants like Apple, Microsoft, and Google are partnering on AI initiatives), tech leaders must balance innovation with governance to successfully navigate the AI revolution reshaping the enterprise landscape.

For technology leaders weighing their AI strategy, Gade’s parting message was clear: “AI is here to stay. This is comparable to the computer revolution of the 1950s-60s or personal computing in the 1980s. This will be ‘AI in every person’s life.’”
