AI is growing fast in India. Banks use it to detect fraud. Hospitals use it to support diagnosis. Factories use it to manage supply chains. But as adoption accelerates, so do the risks. Bias, opacity, and misuse threaten to erode public trust, making responsibility the defining challenge of India’s AI journey.

Trust is clearly the key. A lending model can deny loans unfairly. A chatbot can leak private data. An AI tool can make a mistake that costs real money. In banking, experts have observed that some AI credit-scoring tools assign women lower limits than men with the same profile. In healthcare, doctors using AI diagnostic systems have raised concerns about errors when patient histories are incomplete. Regulation can set boundaries, but in the end customers and regulators want to see whether a company can be trusted. Trust, then, is not an extra feature. It decides who wins.
Training has also not kept pace. Many programs still focus only on coding and models; they teach people to build, but not to manage bias, ethics, or risk. A Deloitte-Nasscom report projects that India’s AI talent pool will grow from 600,000 to over 1.25 million by 2027. But how many of those professionals are trained in responsible use? India’s AI workforce is strong in skills but weak in accountability. Companies often spend months retraining fresh hires so they can work safely with sensitive data, and mid-career professionals find their knowledge outdated as AI tools evolve faster than the training systems around them.
To address this, a new kind of skilling has emerged that blends technical depth with ethical awareness. Some institutions are gradually trying to change the narrative, and among them is the SAS Academy for Data & AI Excellence, which integrates Responsible AI into its curriculum not as an add-on but as a core capability. Alongside machine learning and analytics, learners explore how to manage models, handle trade-offs, and set guardrails. They don’t just build AI; they also learn when to question it. For example, students may design a GenAI tool for insurance claims and then test it for fairness across different customer groups. They may also create workflows that explain decisions in plain language, enabling managers to review and challenge outputs. This approach fosters a generation of AI professionals who are not only skilled but also careful, accountable, and transparent.
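To make that fairness test concrete, here is a minimal sketch of what such a check might look like in practice. It is illustrative only: the group labels, decisions, and the 10% threshold are hypothetical assumptions, not part of any specific curriculum or product.

```python
# A minimal fairness check: compare approval rates of a (hypothetical)
# claims model across customer groups. All names and data are illustrative.
from collections import defaultdict

# Each record: (group, model_decision), where 1 = claim approved.
# In practice these would come from the model's outputs on a held-out set.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

# Per-group approval rates.
rates = {g: approvals[g] / totals[g] for g in totals}
for group, rate in sorted(rates.items()):
    print(f"{group}: approval rate {rate:.0%}")

# Demographic-parity gap: a large gap flags the model for human review.
gap = max(rates.values()) - min(rates.values())
print(f"parity gap: {gap:.0%}")
if gap > 0.10:  # the threshold is a policy choice, not a technical constant
    print("Gap exceeds threshold -- escalate for review.")
```

The point of an exercise like this is less the arithmetic than the habit: the model's output is treated as evidence to be audited, not as a verdict to be shipped.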
Emerging Career Pathways in Responsible AI
- Analyst / Manager → Decision Intelligence Specialist
- Data Scientist → Model Validator → AI Governance Lead
- Senior Professional → Responsible AI Strategist → Chief AI Officer (CAIO)
Foundations in coding and analytics remain vital, but the differentiator is the ability to ensure AI systems are transparent, fair, and trustworthy. AI is no longer just for coders. Managers, analysts, and domain experts also need to work with it. They must be able to read outputs, check for bias, and apply judgment. In a retail setting, this could mean an analyst checking whether an AI tool recommends higher prices in one region without justification. In a hospital, it could mean a doctor deciding when to override an AI suggestion because the patient’s symptoms don’t fit. A team that mixes technical and business skills can spot problems faster and build fairer systems.
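As a sketch of the retail check mentioned above: an analyst could compare a pricing model’s recommendations by region and flag unexplained gaps. Everything here is hypothetical, the prices, regions, and the 5% tolerance included.

```python
# Hypothetical spot check: does the pricing model recommend systematically
# higher prices in one region for the same product? Data is illustrative.
from statistics import mean

# (region, recommended_price) pairs for one product, e.g. from model logs.
recommendations = [
    ("north", 102.0), ("north", 99.5), ("north", 101.0),
    ("south", 118.0), ("south", 121.5), ("south", 119.0),
]

by_region = {}
for region, price in recommendations:
    by_region.setdefault(region, []).append(price)

# Mean recommended price per region.
means = {region: mean(prices) for region, prices in by_region.items()}
for region, avg in sorted(means.items()):
    print(f"{region}: mean recommended price {avg:.2f}")

# If the spread between regions is large and no cost driver explains it,
# the analyst flags the model instead of shipping the prices.
spread = max(means.values()) - min(means.values())
if spread / min(means.values()) > 0.05:  # 5% tolerance is an assumption
    print(f"Spread of {spread:.2f} exceeds tolerance -- investigate.")
```

A check this simple is deliberately within reach of a non-coder analyst; the judgment about whether the gap is justified remains a human call.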
For India, building a responsible AI workforce is not just about creating jobs. It is about shaping the country’s leadership in the global AI landscape. As markets around the world seek trustworthy partners, India has the opportunity to lead with both scale and integrity. A workforce trained to question, explain, and safeguard AI decisions can become the foundation of ethical innovation. The goal is not only to grow quickly but to grow wisely, and that journey begins with people who understand that responsibility in AI is not a limitation but a strategic advantage.
– Attributed to Bhuvan Nijhawan, Senior Director – Education, SAS Asia Pacific