India’s AI talent challenge extends beyond brain drain; demand will soar soon: Netweb

India’s race to scale artificial‑intelligence workloads hinges on the depth of its compute stack, and Hirdey Vikram, Chief Marketing Officer and Senior Vice‑President at Netweb Technologies, sits at the intersection of that hardware‑to‑software continuum. A veteran of the domestic server and HPC market, Vikram oversees strategy for the company’s Tyrone‑branded GPU servers, Airawat supercomputing programme and the new Skylus.ai resource‑virtualisation layer—all designed, engineered and manufactured in India.

In an exclusive conversation with CIO&Leader, Vikram explains how Netweb is closing the nation’s AI‑infrastructure gap, aligning with the ₹10,300‑crore IndiaAI Mission and nurturing the next wave of home‑grown talent. He shares how Netweb has doubled down on AI‑oriented orchestration software so that universities, start‑ups and large enterprises can train large language models or run physics simulations without exporting data—or budgets—overseas.

Hirdey Vikram,
Chief Marketing Officer and Senior Vice‑President,
Netweb Technologies

CIO&Leader: With India’s AI market projected to reach $826 billion by 2030, what specific infrastructure deficiencies has Netweb identified as the most critical barriers to India’s AI development, and how are your solutions addressing these gaps?

Hirdey Vikram: India’s AI market is projected to grow exponentially, but that growth is challenged by critical infrastructure gaps such as limited access to scalable HPC resources, the high cost of GPU-based systems, and a mismatch between supply and demand for AI talent. Netweb addresses these barriers through an integrated AI infrastructure strategy. We developed Airawat, India’s fastest AI supercomputer, and a portfolio of indigenous Tyrone AI servers for training, inference, and analytics, helping enterprises fulfil their AI aspirations in a rapidly changing landscape.

To democratize AI access, we have also launched Skylus.ai, a GPU aggregation-disaggregation platform that lets organizations start small with proofs of concept and scale effortlessly. Skylus.ai helps institutions and enterprises optimize GPU usage, set up AI labs cost-effectively, and support multi-user environments with optimized resource allocation. Our systems are built to scale seamlessly with proprietary AI frameworks. Through strategic collaborations with NVIDIA, AMD, and Intel, we deliver globally competitive performance, making advanced AI and HPC accessible to a broader spectrum of Indian innovators across industries.
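(Editor's note: the aggregation-disaggregation idea Vikram describes can be pictured as a shared pool that hands out GPU slices to multiple users. The toy scheduler below is a minimal, hypothetical Python sketch of that concept; the class and method names are invented for illustration and do not represent the actual Skylus.ai interface.)

# Conceptual sketch only: pooling GPUs and leasing fractional memory
# slices to multiple users on a first-fit basis. Not the Skylus.ai API.
from dataclasses import dataclass, field

@dataclass
class Gpu:
    name: str
    total_mem_gb: int
    free_mem_gb: int = field(init=False)

    def __post_init__(self):
        self.free_mem_gb = self.total_mem_gb

class GpuPool:
    """Aggregates GPUs and allocates memory slices to named users."""

    def __init__(self, gpus):
        self.gpus = gpus
        self.leases = {}  # user -> (gpu name, slice size in GB)

    def allocate(self, user, mem_gb):
        # First-fit placement; a real scheduler would also queue and prioritize.
        for gpu in self.gpus:
            if gpu.free_mem_gb >= mem_gb:
                gpu.free_mem_gb -= mem_gb
                self.leases[user] = (gpu.name, mem_gb)
                return gpu.name
        raise RuntimeError("no GPU slice available")

    def release(self, user):
        gpu_name, mem_gb = self.leases.pop(user)
        next(g for g in self.gpus if g.name == gpu_name).free_mem_gb += mem_gb

# Example: two 80 GB GPUs shared by three teams.
pool = GpuPool([Gpu("gpu0", 80), Gpu("gpu1", 80)])
print(pool.allocate("lab-a", 40), pool.allocate("lab-b", 60), pool.allocate("lab-c", 30))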

CIO&Leader: How are Netweb’s GPU clusters and supercomputing solutions being optimized specifically for Indian enterprises that may have different computational needs or budget constraints compared to their global counterparts?

Hirdey Vikram: Our GPU clusters and supercomputing solutions are purpose-built to meet the diverse computational and budgetary needs of Indian enterprises, offering flexibility, scalability, and affordability. From deploying large-scale sovereign infrastructures like Airawat to setting up private and public AI clouds using our HCI-powered Skylus platform, we provide a full-stack AI ecosystem encompassing every aspect of an enterprise’s AI needs.

Our Skylus.ai solution enables dynamic GPU resource pooling and disaggregation, helping businesses optimize utilization and reduce infrastructure wastage. We support enterprises of all sizes, from startups and SMBs to large corporations, with modular and scalable AI systems that can be deployed on-premises or via a hybrid cloud model. Our solutions are tailored for industries such as Education, BFSI, Healthcare & Science, Manufacturing, Automotive & Industrial Design, Aerospace & Defense, Oil & Gas, and Media & Entertainment, and they feature power-efficient architectures that reduce energy consumption. With integrated orchestration tools and support for AI training and inferencing workloads, Netweb empowers Indian enterprises to build high-performance, future-ready AI applications without overstretching their budgets.

CIO&Leader: As large language models become increasingly important, what specific capabilities does Netweb offer to Indian organizations looking to develop and train their own models rather than relying on international solutions?

Hirdey Vikram: As large language models (LLMs) gain strategic importance, Netweb is enabling Indian enterprises to build, train, and operationalize their own models rather than relying solely on international solutions. On top of our AI-ready HPC infrastructure and scalable cloud platforms, we are developing an advanced orchestration layer that simplifies the entire LLM lifecycle, from pre-training and fine-tuning to deployment and governance. This platform supports high-speed interconnects, large-scale GPU clusters, and deep integration with open-source frameworks like PyTorch and TensorFlow.

Our infrastructure is optimized for prompt engineering, model quantization, pruning, and cutting-edge attention mechanisms like FlashAttention and PagedAttention to maximize throughput. For enterprise users, we provide drag-and-drop agent development, one-click deployment, RLHF-based model alignment, and real-time monitoring, making the platform usable by both technical and non-technical teams. The framework ensures full data privacy, sovereignty, and adherence to compliance standards with support for on-prem, hybrid, and public cloud deployments. This holistic approach allows Indian organizations to create their own agentic AI systems with speed, security, and complete control over their data and model evolution.
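(Editor's note: model quantization, one of the techniques Vikram lists, shrinks a model's memory footprint by storing weights in lower precision. The snippet below is a minimal sketch using PyTorch's built-in post-training dynamic quantization on a stand-in model; it illustrates the general technique only and is not taken from Netweb's stack.)

# Minimal sketch: post-training dynamic quantization in PyTorch.
# The tiny Sequential model stands in for a trained LLM checkpoint.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(768, 3072),
    nn.ReLU(),
    nn.Linear(3072, 768),
)
model.eval()

# Convert Linear layers to int8 weights; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model is a drop-in replacement for CPU inference.
with torch.no_grad():
    out = quantized(torch.randn(1, 768))
print(out.shape)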

CIO&Leader: How is Netweb Technologies positioning itself within the government’s ₹10,300 crore IndiaAI Mission, and what collaborative initiatives are you pursuing with public sector entities?

Hirdey Vikram: Netweb is striving to be a core enabler within the Government of India’s AI Mission, playing a pivotal role in shaping the country’s AI destiny. As a key contributor to Airawat, India’s fastest AI supercomputer, Netweb has successfully demonstrated its capability to build sovereign, large-scale infrastructure that powers national-level research and innovation.

We are also in active consultation with multiple central and state government agencies to design and deploy indigenous AI labs and systems that align with national priorities. One of our major focus areas is co-developing an AI Sovereign Cloud that ensures data localization, privacy, and technology independence, offering a trusted and scalable platform for LLM training, analytics, and mission-critical applications. Our collaborations span leading public and private sector institutions as well as government research bodies, enabling joint innovation across domains. By delivering end-to-end AI infrastructure, from supercomputing systems to orchestration platforms, we aim to empower the IndiaAI ecosystem with the technological backbone needed to scale AI for Bharat.

CIO&Leader: Beyond infrastructure, how is Netweb working to address India’s AI talent exodus challenge through training, research partnerships, or other initiatives?

Hirdey Vikram: India’s AI talent challenge is not just about brain drain; it is about preparing for the exponential demand that lies ahead. Just as the software boom of the late 1990s and early 2000s saw a rapid rise in training institutes that equipped millions with programming skills, a similar ecosystem is now urgently needed for AI. At Netweb, we are proactively building the foundation for a scalable AI talent pipeline. Recognizing that infrastructure should not be a barrier to learning, we are setting up plug-and-play AI Labs across educational institutions and enterprises, both government and private, accelerating hands-on, real-world AI training without the complexity of backend management. These labs are powered by our indigenous, cost-optimized HPC and GPU platforms, designed to deliver maximum learning impact with minimal investment. Such AI labs foster early exposure to LLM training, inference workflows, and model lifecycle management.

Beyond academia, we run internal AI training programs to mentor fresh talent in managing and scaling AI environments. For startups and independent researchers, we offer access to GPU clusters, leveling the playing field. Our mission is clear: enable AI innovation to stay in India, grow in India, and scale from India.

CIO&Leader: With India investing only 0.65% of its GDP in R&D compared to 3-4% in countries like the U.S., how is Netweb contributing to increasing private sector R&D investment in AI technologies?

Hirdey Vikram: While India’s R&D investment as a percentage of GDP is lower than that of many other countries, we have our own priorities. Within that ceiling, we have to identify the right opportunities for scaling. On the brighter side, we possess inherent strengths such as our available pool of human capital. It is imperative that we pair the best of this pool with world-class AI infrastructure and other crucial enablers to attract investment from the private sector.

Netweb is actively pushing the needle forward by making AI infrastructure more accessible, scalable, and investment-worthy for the private and government sectors. We believe R&D flourishes when entry barriers are low, so our strategy is to enable startups, enterprises, and research bodies to begin small and scale fast, aligning infrastructure growth with revenue traction. Today, every CEO is turning to their CIO with a pressing question: how can we leverage GenAI to boost enterprise efficiency? For CIOs, the challenge lies in solving two key problems: first, building the right data science capabilities, and second, ensuring the infrastructure is flexible enough to start small and scale seamlessly as demand grows. To address these problems, we build solutions that are modular, cost-efficient, and ready for scale, from Skylus.ai, our advanced GPU resource appliance, to full-stack AI-ready infrastructure.

Netweb invests significantly in R&D and has a demonstrated history of entering new product categories based on gaps in end-customer needs and market opportunity, focusing on innovation in compute optimization, AI workflow automation, and data management. We collaborate closely with Indian enterprises to co-develop AI applications and offer subsidized HPC access to AI startups, effectively lowering the cost of experimentation. Our role is to be the catalyst, from seed to tree, ensuring that AI investments deliver long-term innovation value.

CIO&Leader: As global tech giants increasingly target India for AI infrastructure deployment, how does Netweb differentiate its offerings as a domestic provider to maintain and grow its position in the evolving AI ecosystem?

Hirdey Vikram: In the AI gold rush, Netweb positions itself as the “shovel seller”, delivering the critical infrastructure that fuels innovation across sectors. While global tech giants are building cloud footprints in India, Netweb stands apart by offering end-to-end, made-in-India AI infrastructure, from compact lab systems and edge clusters to full-scale private cloud environments. We are not just selling hardware; we are delivering a complete stack: AI-optimized servers, orchestration middleware, and intuitive management layers tailored for Indian workloads.

Our solutions are cost-effective, power-efficient, and rapidly deployable, making them ideal for startups, enterprises, and public institutions alike. As a key partner in the IndiaAI Mission, we are deeply embedded in the government, defense, and education sectors, areas global players often overlook. With local R&D, on-ground support, and pricing that aligns with domestic budgets, we lower the barrier to entry for AI adoption. Netweb’s vision is to empower India’s AI ambitions from the ground up, ensuring homegrown success stories that scale globally.
