SAS’s Reggie Townsend on why governance must keep pace with GenAI adoption.

As enterprises race to operationalize generative AI, trust has emerged as both a catalyst and a contradiction. While organizations publicly express confidence in AI-driven systems, investments in governance, ethics, and accountability often lag behind. This widening trust gap raises uncomfortable questions about whether enthusiasm for AI is outpacing responsibility.
In a recent conversation, Jatinder Singh, Editor of CIO&Leader, speaks with Reggie Townsend, Vice President of the Data Ethics Practice at SAS, to understand why this disconnect exists—and what leaders must do to close it. Townsend offers a detailed perspective on market pressures, misplaced trust in generative AI, and the critical role of governance as an enabler rather than an obstacle to innovation. From trustworthy AI frameworks and model transparency to India’s evolving regulatory landscape and the Global South’s opportunity to shape a more equitable AI future, the conversation explores how organizations can move from AI ambition to accountable action without losing momentum. Excerpts from the interview.
CIO&Leader: Industry data shows a significant trust gap: while almost 80% of organizations claim full trust in AI, less than half invest in governance or ethics. How do you explain this discrepancy? Does it indicate superficial corporate enthusiasm without real accountability?
Reggie Townsend: I don’t believe there’s a calculated effort to move ahead with GenAI in a reckless manner. Market pressure is driving much of this, along with the fear of missing out on GenAI’s perceived immense value. As the report points out, a more human-like interface encourages engagement and increases trust. As a result, in the rush to get on the bandwagon, more structural governance considerations can become an afterthought. Again, I don’t believe there’s a desire to act irresponsibly—people simply don’t want to risk being left behind.
“There is a baseline level of AI literacy that everyone should have to understand the risks and rewards of the technology and how it shows up in daily life.” ~ Reggie Townsend
CIO&Leader: Generative AI is trusted more than traditional AI despite being less understood and more unpredictable. How can SAS help prevent real-world harm from misplaced trust in GenAI?
Reggie Townsend: We aren’t training GenAI models. We are more likely to show up in applications built on the outputs of foundational GenAI models, and that’s where we can make a difference in trustworthy AI deployment. Potential harms from GenAI are more likely to originate at the foundation-model level, where bias and hallucinations remain key concerns. Our responsibility is to identify and mitigate risks and harms where we sit in the stack.
We have trustworthy AI capabilities built into our data and AI platform, SAS Viya. These include bias detection, explainability, decision auditability, model monitoring, governance, and accountability. We also offer model cards that give users a clear, comprehensive, and standardized overview of an AI model’s components. Auto-generated model cards provide businesses and developers with an easy-to-use framework for evaluating model performance, improving transparency and supporting more informed and ethical choices.
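For readers unfamiliar with the concept, a model card is essentially a structured summary of a model’s purpose, data, performance, and known limitations. The sketch below is a rough illustration only, not SAS Viya’s actual schema or API; every field name and value here is hypothetical.

```python
import json


def make_model_card(model_name, intended_use, training_data, metrics, limitations):
    """Assemble a minimal model card as a plain dictionary.

    The fields are illustrative of what model cards typically contain
    (purpose, data provenance, performance, limitations); they are not
    the schema used by any particular vendor's tooling.
    """
    return {
        "model_name": model_name,
        "intended_use": intended_use,
        "training_data": training_data,
        "metrics": metrics,          # e.g. accuracy plus fairness gaps
        "limitations": limitations,  # known failure modes and bias risks
    }


# Hypothetical example: a credit-scoring model documented for reviewers.
card = make_model_card(
    model_name="credit-risk-v2",
    intended_use="Rank loan applications for manual review; not for automated denial.",
    training_data="2019-2023 loan book, anonymized",
    metrics={"auc": 0.87, "demographic_parity_gap": 0.04},
    limitations=[
        "Not validated for applicants under 21",
        "May underperform on thin-file credit histories",
    ],
)

print(json.dumps(card, indent=2))
```

The value of auto-generating such cards, as described above, is that the performance and fairness fields are populated from the platform’s own monitoring rather than written by hand, so the card stays current as the model changes.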
Additionally, SAS will soon launch a unified, holistic AI governance solution capable of aggregating, orchestrating, and monitoring AI systems, models, and agents. Designed for executives but practical enough for data scientists, this offering will help align AI with policies, improve operational efficiency, and enable organizations to navigate their AI journeys with confidence.
While we may not be able to influence the quality of foundation models themselves, we believe we can make a meaningful difference in how their outputs influence decisions.
CIO&Leader: Many organizations lack fundamental AI skills and data infrastructure. How realistic is it to expect them to implement “trustworthy AI” frameworks without a broader industry-wide overhaul of talent and technology?
Reggie Townsend: Implementing frameworks to ensure AI is built and deployed responsibly doesn’t require an industry-wide overhaul, though some sectors will experience more change than others. In highly regulated industries such as healthcare and banking, the shift is more about building on existing governance and privacy frameworks. We are creating technologies built on trusted and proven platforms that improve the reliability of GenAI—so this is more an enhancement than a complete reinvention.
The proliferation of AI doesn’t mean everyone needs to become a developer or researcher. There is, however, a baseline level of AI literacy that everyone should have to understand the risks and rewards of the technology and how it shows up in daily life. While GenAI lowers the barrier to entry, the most critical skill becomes the ability to interrogate outputs and responsibly integrate them into decision-making.
CIO&Leader: Some studies link ethics investment to higher ROI, but critics argue this may simply reflect better-funded companies being more ethical, rather than a causal relationship. How do you demonstrate that ethics investments truly drive financial returns?
Reggie Townsend: The study shows that companies focused on improving customer experience, expanding market share, and strengthening business resilience report significantly higher returns than those focused on cost reduction alone. Regardless of causality, enhanced relationships, expanded product capabilities that address broader market segments, and reimagined processes that support employees all qualitatively improve trust and engagement.
Numerous studies, including those from Edelman, confirm that people are more likely to buy from brands they trust. Harms caused by AI can quickly erode that trust. The strongest defense against this is an ethical and responsible approach to AI.
At SAS, we talk about having a “duty to care.” That means doing the right thing not because of a hoped-for outcome, but because it’s in the best interest of humanity. If that leads to increased revenue, that’s a positive outcome—but it shouldn’t be the guiding objective of ethical behavior.
CIO&Leader: With AI ecosystems becoming fragmented—open-source models, proprietary systems, and plug-and-play tools—how does SAS ensure unified and enforceable ethical standards across diverse technologies and geographies?
Reggie Townsend: As I mentioned earlier, SAS’s position in the AI stack doesn’t allow us to enforce standards beyond our realm of influence. Additionally, what constitutes “ethical AI” can vary depending on cultural context and local regulations. What we can do is help customers deploy AI responsibly through our technology and 50 years of accumulated experience.
We offer a robust AI governance portfolio that helps organizations assess their governance maturity and identify the steps needed to move forward. Model cards are already available to support ongoing evaluation, and our upcoming AI governance solution will give leaders greater visibility into AI usage across their organizations.
Regardless of industry or geography, our goal is to enable leaders to comply with both local laws and internal policies. Organizations can also look to risk-based frameworks such as the EU AI Act. If they are prepared to meet the most stringent regulations, they are better positioned for whatever may come next.
CIO&Leader: How transparent is SAS in auditing bias, data provenance, and decision-making flaws in AI models, particularly generative AI? Will these audits be public or independently verified?
Reggie Townsend: We are not in a position to audit foundational GenAI models developed by others, but we can improve the reliability of AI outputs. SAS Viya’s built-in trustworthy AI capabilities include explainability and transparency, ensuring users understand why a recommendation was made.
CIO&Leader: With evolving AI and data privacy regulations in India, how is SAS positioning itself to ensure compliance while promoting innovation? Can you share partnerships or initiatives in India to co-create AI solutions or develop ecosystem skills?
Reggie Townsend: As a company with 50 years of experience, we’ve witnessed disruptive technologies emerge alongside the regulations designed to govern them. We are accustomed to working across geographies with diverse regulatory requirements and have actively contributed to AI policy and legislative discussions worldwide.
We are confident that SAS is well-positioned not only to comply with Indian regulations but also to help our customers do the same. SAS is a proud member of the Coalition for Responsible Evolution of AI (CoRE-AI), where our global expertise supports both India’s leadership and the global advancement of responsible AI development. We are also looking forward to participating in India’s 2026 AI Impact Summit.
More broadly, SAS is collaborating with the Commonwealth Secretariat and the Commonwealth AI Consortium to help build a more diverse global AI workforce. Through donations of software, computing capacity, and training, we are enabling higher-education students across Commonwealth countries to learn not only how to use AI, but how to use it responsibly.
CIO&Leader: Looking ahead, how does SAS balance rapid AI deployment with ethical governance without stifling innovation, and what is its vision for responsible generative AI adoption in India over the next three to five years?
Reggie Townsend: It’s important not to frame this as governance versus innovation. Governance is an enabler of innovation. Over time, I hope the conversation evolves from “responsible and trusted AI” to simply “AI”—with trust implicitly built in. We’re not there yet, but that should be the goal.
The global AI landscape today reflects a stark divide. Development, infrastructure, and computing resources are concentrated in the Global North, while the Global South is often underrepresented in training data and language models. As a result, northern biases permeate AI systems, and many southern languages remain absent. If left unaddressed, this gap could widen.
However, this also presents an opportunity. Second-mover adoption can be a strategic advantage. Rather than catching up, the Global South can forge its own path—shaping AI around equity, sustainability, workforce development, and resilience. With deliberate choices today, technology can drive shared prosperity instead of reinforcing dependency on northern providers.
I also want to see AI play a stronger role in accessibility. A core principle of responsible AI is that everyone should benefit. I hope to see greater adoption of multimodal capabilities that accommodate not only physical differences but cognitive ones as well. Neurodivergent individuals are already reporting greater satisfaction with GenAI tools—such as note-takers, schedule assistants, and document readers—that help them adapt to work environments not designed with them in mind.
At the same time, as workplaces increasingly rely on GenAI, we must consider how to avoid unhealthy dependencies. The same technology that supports judgment can also reinforce blind spots. Leaders must remain vigilant, understanding both the benefits and the potential harms—and actively working to mitigate them.