AI Governance Is Not a Policy Exercise. It Is a Leadership Test

As enterprises move from AI experimentation to scaled deployment, the real challenge is no longer access to the technology. It is the ability to govern it with clarity, purpose, and trust. In this conversation with CIO&Leader, Reggie Townsend, Vice President, AI Ethics, Governance & Social Impact at SAS, argues that responsible AI cannot be reduced to broad principles or compliance checklists. It must be operationalised across the enterprise through oversight, risk alignment, process redesign, and cultural readiness. He makes a compelling case that governance does not slow innovation; it gives it direction. From bias monitoring and data lineage to Gen AI risk, accountability, and the growing cultural impact of AI agents, Townsend brings a pragmatic lens to one of the most urgent leadership questions facing CIOs today: how to embed judgment, not just automation, into enterprise AI. His message is clear—AI success will depend as much on trust and leadership as on technology itself.

Reggie Townsend
Vice President, AI Ethics, Governance & Social Impact
SAS

CIO&Leader: As AI systems scale across enterprises, what are the most underestimated ethical risks CIOs should be actively monitoring today?

Reggie Townsend: The question of ethics in the enterprise becomes challenging if we’re talking about a universal approach to it, because the ethics of an organisation have to differ based on what that organisation does. It’s also a difficult conversation because no one sees themselves as unethical; framed that way, it’s almost a showstopper.

This should be a conversation about enterprise governance. In that sense, again, every enterprise is different, so the way they implement governance has to be different. But what we’ve identified is that there are four core pillars related to that.

One is oversight: ensuring the right level of cross-functional executives, such as the CIO, participate in an oversight body related to AI. The second is a focus on compliance and regulatory expectations. Understanding the organisation’s risk tolerance levels is critical because not everything unethical is illegal, and not everything legal is ethical.

The third pillar is the business’s operations. What AI is introducing is a need to rethink, in some cases, current processes, and this is where many enterprises are tripping up today. It is very difficult for an enterprise of any significant size that’s been around for longer than five years to say, “We’re going to take our current way of doing things and now just AI-ify it.”

Why is it difficult? Because AI needs to pull data from your enterprise that you may not have access to, may not have cleansed, or may not have structured in a way that would be useful for AI. It also upsets processes. Right now, we have processes that involve people, and when you start bringing AI into the mix, those processes need to be rethought. That is disruptive to the business.

For example, I look at our organisation. We just went through a significant overhaul of our financial system. We made a major investment, so we’d have to think carefully about how or whether to integrate AI. Prior investment decisions have to be one of the factors organisations consider.

So when you start bringing in AI, just like any other product, you have to go through all the risk reviews, security checks, training, and other processes. It makes it very difficult to say, “We’re going to bring AI in, and everything gets better.”

The other thing I’ll add before moving to the fourth pillar is that doing AI for AI’s sake is irrational. AI has to produce something better or different than what you’re currently able to do. Otherwise, keep doing what you’re doing.

The fourth pillar of governance is culture, and it’s often overlooked. People in any enterprise display normative behaviours, so it’s important to have a good sense of how AI shows up in the enterprise and to recognise that those behaviours may need to be adjusted.

One central reason is that if you’re going to continue to depend on people, they need to know that AI is not there to replace them. And if it is, leaders should be transparent and say, “Here are your options now.” You also have to train people. Something like 70% of employees working with AI today are doing so without adequate training; they are figuring it out as they go.

While that can be helpful for personal productivity, it is less beneficial at an enterprise productivity level.

Governance—those four pillars that I described—really allows you to shepherd AI into the organisation with purpose and intended use. That would be the one area CIOs really need to focus on in 2026.

CIO&Leader: What distinguishes an organisation that has an AI governance policy from one that has truly operationalised responsible AI?

Reggie Townsend: You know, I’d say you’re going to have a hard time finding any organisation that has, quote unquote, truly operationalised responsible AI. I think most people are still trying to understand what that really means. So I can’t speak for all companies. I can talk about what we’re doing at SAS.

We started, like most, with a core set of principles we wanted to orient ourselves around: human-centricity, transparency, privacy, you name it, not unlike many other organisations. We spent a little time figuring out how to put those principles into practice, and now we’re in the process of doing just that.

One of the things that has come out of that is the establishment of an enablement and governance initiative, where we have started with a couple of core “plays,” as we call them, specific to how to use AI responsibly. These plays have everything to do with how a developer might bring AI into the development cycle, how a marketer might use content generation, and how finance or HR might use it, say, from a recruiting perspective.

So that’s one of the approaches that we’re taking. Oftentimes, however, when you hear people talk about operationalising responsible AI, what they’re really talking about is outbound—meaning they want to tell others what to do with AI, as opposed to truly incorporating those practices internally.

One of the ways we help organisations that decide this is the direction they want to take is through a classic case I’m seeing a lot now: people wanting to form AI oversight committees, very much like the oversight pillar I described earlier. We’re doing a lot of work in that regard, sharing what we’ve learned over the years about putting those cross-functional committees in place and what it really means in terms of actual policy and practice.

CIO&Leader: How should CIOs structure accountability when AI-driven decisions directly impact customers, employees, or financial outcomes?

Reggie Townsend: Well, the first thing CIOs need to do is understand the potential outcomes associated with their AI and make sure that they’re adequately testing that AI for robustness before it hits any customer.

The second thing I’d do is sandbox the AI to make sure, with a small subset of targeted users, that it’s doing what you intended. I go back to my prior statement—this should not be AI for AI’s sake.

Now, I know a lot of CIOs, understandably and rightfully so, tend to have sandboxes where they’re doing a lot of experimentation. That is the proper thing to do for a CIO and their organisation—to test what works, what doesn’t work, and understand how it gets incorporated into the organisation. But beyond that, understanding the use cases for targeting AI becomes central.

You also have to keep in mind, from a cost standpoint, that all this generative AI is token-driven and involves significant cloud consumption, which can lead to runaway costs for CIOs. Most can’t afford that. Until very recently, most didn’t even have AI as a line item in the budget.

Trying to understand how many, say, licenses to bring in for code development is a big deal, because some developers are more efficient than others. If you’re rolling these models out across the entire enterprise—for code development, agent creation, or even text summarisation—all of that translates into real cost to the business. Importantly, these are variable costs, not fixed. This month’s cost could be very different from next month’s, and three months from now it could increase significantly.
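
To make the variable-cost point concrete, here is a minimal back-of-the-envelope sketch; the per-token prices, headcounts, and growth rates are hypothetical placeholders, not vendor pricing:

```python
# Back-of-the-envelope monthly Gen AI spend. All prices and usage
# figures below are illustrative assumptions, not vendor pricing.

PRICE_PER_1K_INPUT_TOKENS = 0.003   # USD, hypothetical
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # USD, hypothetical

def monthly_cost(developers: int,
                 requests_per_dev_per_day: int,
                 avg_input_tokens: int,
                 avg_output_tokens: int,
                 workdays: int = 21) -> float:
    """Estimate one month's variable token spend for a developer rollout."""
    requests = developers * requests_per_dev_per_day * workdays
    input_cost = requests * avg_input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
    output_cost = requests * avg_output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    return input_cost + output_cost

# The same rollout, three months running: usage growth alone changes the bill.
for month, growth in enumerate([1.0, 1.5, 2.5], start=1):
    cost = monthly_cost(developers=200,
                        requests_per_dev_per_day=int(40 * growth),
                        avg_input_tokens=2_000,
                        avg_output_tokens=800)
    print(f"Month {month}: ~${cost:,.0f}")
```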

So understanding those parameters is central, which is why I go back to use cases. From a use-case perspective, instead of doing a broad deployment, you can determine return on investment.

For example, if you focus on becoming more efficient in marketing, you can establish a baseline of how things are done today, test AI against it, and then measure the outcomes. From there, you can determine whether the value generated justifies the effort.

These are the kinds of things we’re focusing on today at SAS—aggregating use cases from different business functions. It’s important to note that each function is very targeted in what it needs to do. For instance, the marketing team focuses on its priorities, but it may develop capabilities that also benefit finance or facilities.

So, going back to the oversight committee structure, the ability to view this at an aggregated level is critical. It helps identify synergies within the organisation—where a capability developed in one function can be reused elsewhere without onboarding new vendors or incurring additional variable costs. It also allows you to leverage the effort already invested in building the initial use case.

From our vantage point, that’s where real enterprise value begins to emerge when deploying AI. That’s the next logical step in everything we’re discussing.

CIO&Leader: With increasing regulatory momentum globally, how can enterprises future-proof their AI governance frameworks without slowing innovation?

Reggie Townsend: First, we should dismiss the notion that governance equates to slowing innovation. It doesn’t. In fact, it accelerates innovation by providing direction and clarity.

I think about governance in terms of scaling judgment. If we know the right things to do, we want to institutionalise them through governance. Businesses already have processes and policies today—those are forms of governance. So if we need to institute those kinds of policies and processes because of AI, we do. It allows us to move faster; it doesn’t slow us down.

CIO&Leader: Bias testing is often treated as a one-time exercise. What does continuous, lifecycle-based fairness monitoring actually look like in practice?

Reggie Townsend: It’s situationally dependent. I want to push back slightly on the notion that it’s a one-time event. If you’re talking about bias coming from a foundation model, maybe the foundation model providers should check for it. But then it’s incumbent on the folks building applications on top of those models to decide how often they review for bias.

If you are a financial services institution, for example, you may need to check your loan models for bias routinely. If you are a healthcare provider, you should check your models more frequently. If you are a construction company, maybe not. I think it’s important that we don’t universalise this idea about bias review. It has to be specific to the use case.

Some bias is okay—that’s another notion we need to dismiss. The idea that all bias is bad is not true. Sometimes there are positive biases. For example, in a healthcare setting, a woman has different biological needs than I do. It’s important that if we’re developing treatments, they target those specific needs appropriately.

A bias review helps us identify biases that may exist in a model or the data. It then allows us to determine the best mitigation strategy. Our job, as technologists, is to flag bias. Bias exists because humans exist, so we’re not going to eliminate it, nor should we attempt to. But we should flag potential negative biases associated with the model’s intended use.

That’s why it’s very important, upfront, to establish the intended use, monitor it for any variability, and correct when the model starts to drift. These are the kinds of things that come into play when you really get into the nuance. This is not about bias being entirely bad or entirely good—it’s highly nuanced and contextual.
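
To make “continuous” concrete, here is a minimal sketch of what a recurring bias check on, say, a loan-approval model might look like. The group labels, reference group, and four-fifths floor are illustrative assumptions, not a prescribed methodology:

```python
# Minimal recurring fairness check for a loan-approval model's decisions.
# Group labels, the reference group, and the four-fifths floor are
# illustrative assumptions, not a prescribed methodology.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs from a scoring batch."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def fairness_check(decisions, reference_group, floor=0.8):
    """Flag groups whose approval rate falls below `floor` times the
    reference group's rate (0.8 mirrors the common four-fifths rule)."""
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: round(r / ref, 2) for g, r in rates.items() if r / ref < floor}

# Run on every scoring batch, or on a cadence set by the use case:
batch = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
flagged = fairness_check(batch, reference_group="A")
if flagged:
    print("Review needed:", flagged)  # route to the oversight process
```

Flagging a group is only the start of the conversation: as above, the right mitigation depends on the intended use.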

CIO&Leader: Generative AI introduces hallucination and misinformation risk. What governance controls should CIOs mandate before deploying Gen AI in mission-critical workflows?

Reggie Townsend: The first thing that I’d say is, if it’s a mission-critical workflow, you should question whether or not to use Gen AI. Keep in mind that Gen AI’s underlying architecture is probabilistic.

So, if it’s a mission-critical workflow, do you want 95% accuracy or close to 100%? This is mission-critical. I think most CIOs would say, “I need 100%.” You can’t afford to be wrong 5% of the time.

So I think the fundamental question I go back to is—are we doing Gen AI for Gen AI’s sake? I understand the excitement, but I think most serious CIOs are tempering it with a combination of AI/ML, more predictive and trusted models, and the application of generative AI where appropriate.

Sometimes it could be used in mission-critical settings, but largely not, because of the unpredictability, like hallucinations and such. Where it shows up really well is in settings where you can tolerate some margin of error. It may get better over time, but right now, CIOs who are betting their careers and spending millions of dollars on solutions are going to choose what is most accurate and trusted.

CIO&Leader: Data lineage and model traceability are gaining board-level attention. How should enterprises build end-to-end visibility across complex AI systems?

Reggie Townsend: If we’re talking about board-level visibility, my experience is that you’re not having a detailed data lineage conversation at the board. At that level, you’re having a conversation about the most impactful models. This is where things like model cards and system cards come into play—to give a bird’s-eye view of what’s going on in the enterprise.
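
For illustration, the bird’s-eye summary such a card provides might carry fields like the following; the schema is a hypothetical minimum, not SAS’s model-card format:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical board-level model summary; the fields are an
    illustrative minimum, not SAS's model-card schema."""
    name: str
    owner: str
    intended_use: str
    risk_tier: str            # e.g. "high" / "medium" / "low"
    last_bias_review: str     # ISO date of the most recent fairness check
    drift_status: str         # e.g. "within tolerance" / "drifting"
    known_limitations: list[str] = field(default_factory=list)

card = ModelCard(
    name="retail-credit-scoring",
    owner="Risk Analytics",
    intended_use="Decision support for consumer loan approvals",
    risk_tier="high",
    last_bias_review="2025-11-01",
    drift_status="within tolerance",
    known_limitations=["Not validated for small-business lending"],
)
```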

We’ve got AI governance products and an entire portfolio that enable board-level stakeholders to assess risk, because this is a risk-management conversation at that level. It’s not so much about whether a model is good or bad. It’s about the risk in the enterprise and how much can be absorbed.

If the organisation starts moving outside a defined tolerance, you want to ensure the right people address it.

Now, further down in the organisation, at the operational level, data lineage and traceability become extremely important. Again, it comes back to use-case specificity. For example, in healthcare, understanding how data was collected, whether informed consent was obtained, and how that data is being used is critical. These are meaningful questions that require clear answers.

Maybe less so if we’re talking about a consumer gaming application. So again, this comes down to the use case. But the level of technical specificity in data lineage is not typically a board-level conversation.

CIO&Leader: SAS has been strengthening its AI governance and model management capabilities. How do you see these advancements helping enterprises embed ethics directly into analytics and AI workflows?

Reggie Townsend: Earlier, I said that AI governance is a way to scale judgment. Our AI governance portfolio is where that judgment is embedded. It becomes the tool that allows us to scale governance across the enterprise.

If you’re monitoring, say, 10 highly critical models, having a dashboard that shows when those models begin to drift becomes very important. As models mature and take in new data, you need to assess which variables are driving those changes.
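
As a rough illustration of the kind of drift signal such a dashboard could surface, here is a minimal sketch using the population stability index (PSI), a common industry heuristic; the bin layout and the 0.2 alert threshold are rule-of-thumb assumptions, not SAS product defaults:

```python
# Per-variable drift signal using the population stability index (PSI).
# The 10-bin layout and the 0.2 alert threshold are common rules of
# thumb, not SAS product defaults.

import math

def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a current sample of one variable."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch values above the baseline range

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if x <= edges[i + 1]:
                    counts[i] += 1
                    break
        return [max(c / len(sample), 1e-6) for c in counts]  # avoid log(0)

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def drifting_variables(baseline, current, threshold=0.2):
    """Return the variables whose PSI exceeds the alert threshold."""
    scores = {v: psi(baseline[v], current[v]) for v in baseline}
    return {v: round(s, 3) for v, s in scores.items() if s > threshold}

# Baseline = data the model was trained on; current = this month's inputs.
baseline = {"income": [30, 40, 50, 60, 70], "age": [25, 35, 45, 55, 65]}
current = {"income": [80, 90, 95, 100, 110], "age": [26, 36, 44, 56, 63]}
print(drifting_variables(baseline, current))  # expect "income" to flag
```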

Our portfolio does exactly that—it provides a bird’s-eye view, from the needs of a data scientist all the way up to board-level stakeholders. We have financial services-specific examples, insurance-specific examples, and also a more agnostic, executive-level view.

So again, it comes down to specific needs, but we have a robust portfolio that addresses varying stakeholder and technical requirements.

CIO&Leader: Looking ahead, what is the next major ethical inflection point in enterprise AI that CIOs should prepare for over the next 24 months?

Reggie Townsend: The next inflection point is cultural. There’s a lot of excitement about AI, and people are taking these ideas back into their workplaces. They’re experimenting, building, and in some cases, deploying advanced capabilities like AI agents—sometimes at scale.

That raises important questions. What data are these agents accessing? Are they confined within the organisation, or can they go outside? If they go outside, what are they bringing back in? Are they serving the intended purpose of the organisation, or something else? Are they improving collaboration or harming morale?

Let me narrow it down to a few key questions. Does the ability to do 2x, 3x, or 5x more work make employees more skilled and valuable? Or does it simply change expectations? Should organisations reward that productivity with more flexibility, or demand more output?

These are complex leadership questions. AI is no longer just a technology issue—it’s a leadership issue.

Why is culture the inflection point? Because people’s perception of AI matters. For example, in the U.S., about 40% of people are concerned that AI will impact their privacy, security, or jobs. Those people go to work—are they going to embrace AI or resist it?

At the same time, those who are excited about AI—are they using sanctioned tools, or experimenting outside enterprise boundaries with company data?

These are complex issues, and each could be a deep discussion on its own. But fundamentally, I would encourage leaders to focus on three questions of ethical inquiry:

  • For what purpose? Why are we doing this?
  • To what end? How far are we willing to take it, and what defines success?
  • For whom might it fail? Who could be negatively impacted?

If harm is introduced at any level, leaders need to rethink or mitigate that impact.

At the end of the day, trust is everything. If you erode trust through poorly governed AI, it is very difficult to rebuild. Organisations must focus on preserving and strengthening trust as they adopt AI.
