Generative AI, like any form of AI, requires human oversight to be trustworthy

Over the past twelve months or so, organizations across industries have embraced the potential of generative artificial intelligence (AI) to revolutionize their operations. These generative AI tools are perceived as potential game-changers, offering benefits such as enhanced productivity, strengthened security, and improved user experience. However, one of the most significant concerns facing IT decision-makers is ensuring that people are informed about responsible AI practices.

When people pose ethical questions about AI projects as a standard part of the process, we observe a dramatic decrease in unintended harm. In a recent conversation with Jatinder Singh, Executive Editor of CIO&Leader, Reggie Townsend, Vice President of the SAS Data Ethics Practice, discussed the primary challenges organizations confront when scaling up AI initiatives, how to tackle those challenges, and the potential risks of relying heavily on generative AI without sufficient human oversight.

Townsend spearheads a globally coordinated effort to enable employees and customers to implement data-driven systems that prioritize human well-being, agency, and equity. The US Department of Commerce has appointed Townsend to the National Artificial Intelligence Advisory Committee (NAIAC). The NAIAC’s role is to provide guidance to the president and the National AI Initiative Office on various AI-related issues.

The SAS Data Ethics Practice focuses on empowering and motivating SAS employees, its customers, and the broader analytics community to construct data-driven systems that advance human well-being, agency, and equity.

Excerpts from the interview:

CIO&Leader: As AI becomes as ubiquitous as electricity, how critical is it for organizations and CIOs scaling their AI initiatives to have a solid moral stance when building this technology?

Reggie Townsend: We’re already seeing examples of AI, like chatbots, handling easily automated tasks. I think we’ll see AI become a complementary tool, empowering people to work more effectively, accomplish more tasks, and focus on work that only humans can do.   

But we’re still far from that, and we need to figure out a lot before we get there. As impressive as AI can be, it lacks the complex thinking and emotional abilities of human beings. For any workflow using AI, humans will need to be in the loop to check for bias and fairness and to ensure people aren’t being harmed.
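
To make that human-in-the-loop check concrete, here is a minimal Python sketch of one common fairness test: comparing a model’s positive-outcome rates across groups (demographic parity). This is an illustration only, not SAS tooling; the decisions, group labels, and review threshold are all hypothetical.

```python
# Minimal sketch of a human-in-the-loop fairness check: compare a
# model's positive-outcome rates across groups (demographic parity).
# All data and the threshold below are hypothetical.

def positive_rate(decisions, groups, group):
    """Share of positive decisions received by one group."""
    picks = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picks) / len(picks) if picks else 0.0

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-outcome rate between groups."""
    rates = {g: positive_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs (1 = approved) and group labels.
decisions = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"]

gap, rates = demographic_parity_gap(decisions, groups)
print(f"Per-group approval rates: {rates}")
if gap > 0.2:  # hypothetical review threshold; a policy choice, not code
    print(f"Gap of {gap:.2f} exceeds threshold; flag for human review.")
```

A gate like this doesn’t make a system ethical by itself; it gives the humans in the loop something concrete to review, and a person, not the pipeline, decides what an acceptable gap is.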

With AI, that harm can happen at scale, so it’s essential that organizations using AI do so responsibly. That starts with identifying principles that not only guide the use of AI but also provide a flexible framework for meeting regulatory requirements, whatever form they take. For example, at SAS, our principles are human-centricity, inclusivity, accountability, robustness, transparency, and privacy and security.

CIO&Leader: What are the current challenges organizations and technology leaders face when scaling up their AI initiatives?

Reggie Townsend: The rise of generative AI has created a rush to implement for fear of being at a competitive disadvantage. It can be difficult to identify which business areas and applications are the ripest for AI that can be operationalized quickly with minimal risk, which gets us to another challenge.   

It would be reckless to rush into launching AI projects, particularly ones built on newer foundation models, without careful consideration of the risks involved. We must recognize that the data used to train large language models still comes from humans; the results of generative AI, at their core, reflect us. There is an inherent risk that these models are informed by inaccurate data, misinformation, or biases, not to mention the potential legal risks associated with copyright. When scaling up initiatives featuring foundation models, the recommendation for now is internal use within a controllable purview.

An organization that unintentionally harms vulnerable populations can substantially damage its reputation, brand, and bottom line, in addition to the human toll. Increasingly, ethical violations could come with substantial legal risks and hefty fines.  

CIO&Leader: According to a recent survey conducted by CIO&Leader, most CIOs and IT leaders agreed that generative AI is relevant to driving the future of business. Except for cybersecurity and process-improvement initiatives, other areas of operations are still largely in a cautious testing phase. What are the right steps for a CIO to evaluate the benefits of AI relevant to their organization’s unique needs?

Reggie Townsend: Fundamentally, AI is about automated decision-making. Done with a commitment to fairness, transparency, and accountability, and with humans at the center, better decision-making is beneficial everywhere. Whether a decision should be automated, and how, is the dilemma. There are obviously areas where AI needs to be heavily scrutinized and carefully regulated. Anywhere decisions are being made that affect health, well-being, finances, and freedoms, we must beware of AI leading to widespread harm.

AI technologies, including generative AI, machine learning, deep learning, and computer vision, are finding success across industries and in different parts of the business. The notion that AI should be reserved for unique breakthrough projects has evolved.   

Certain industries are finding some AI technologies more applicable than others. Machine learning and deep learning, for instance, are getting the broadest use with the most promising results. ML can detect patterns in data and make predictions without being told what to look for; deep learning does the same but gets better results with bigger and more complex data, such as video or images. As these capabilities are applied to traditional approaches to segmenting, forecasting, customer service, and other areas, organizations are finding they get better results than they would without these AI technologies.
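
As a toy illustration of that point, here is a minimal Python sketch (using scikit-learn, with an entirely hypothetical churn dataset) of a model inferring a pattern from examples rather than from hand-written rules:

```python
# Sketch: ML learns patterns from examples, not hand-coded rules.
# The tiny dataset is hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical history: [average order size, days since last purchase]
X = [[120, 5], [80, 10], [15, 60], [10, 90], [200, 3], [12, 75]]
y = [0, 0, 1, 1, 0, 1]  # 1 = customer churned

# Nobody tells the model "long gaps mean churn"; it infers the pattern.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Score a new customer the model has never seen.
print(model.predict([[14, 80]]))  # likely [1], matching the churn pattern
```

Deep learning swaps the small tree for a neural network and far more data, but the principle is the same: the patterns come from the data, not from rules someone wrote down.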

Manufacturers are having success using computer vision to identify quality issues and reduce waste. Retailers use machine learning techniques to improve forecasts and save on inventory and product waste costs. Banks are having success using conversational AI and natural language processing to improve marketing and sales.  

For generative AI capabilities, synthetic data can be particularly useful for pharma companies that lack sufficient real-world data to evaluate treatments for rare diseases. Digital twinning is perfect for supply chain challenges, allowing manufacturers or smart cities to replicate networks of sensors. 
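
For a sense of what synthetic data generation can look like at its simplest, here is a hedged Python sketch: fit the mean and covariance of a small, hypothetical real sample and resample from them. Production approaches, including whatever a pharma company would actually deploy, are far more sophisticated.

```python
# Minimal sketch of one simple synthetic-data approach: fit per-column
# statistics on a small real sample and draw new records from them.
# The variables and values are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical measurements from a small rare-disease trial:
# columns are [age, biomarker level].
real = np.array([[34, 1.2], [41, 1.5], [29, 0.9], [38, 1.4], [45, 1.7]])

mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Draw synthetic "patients" that preserve the sample's means and
# correlations, enlarging a data set too small to analyze on its own.
synthetic = rng.multivariate_normal(mean, cov, size=100)
print(synthetic[:3].round(2))
```

Real synthetic-data methods must also guard against leaking information about actual patients, which is one more reason human oversight matters here too.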

Guided by their organization’s strategic goals, CIOs should look at their organization’s challenges and inefficiencies for opportunities to apply AI, but also look at the things they do well. Optimized with AI, a strength can become a competitive superpower. 

CIO&Leader: What are the potential pitfalls of relying heavily on generative AI without adequate human oversight?

Reggie Townsend: Generative AI, particularly the newer foundation models, is trained on data that comes from humans. The results of generative AI are a reflection of us, with all our biases and imperfections.

Humans should be involved from the very beginning of any AI project. An ethical approach to AI keeps humans at the center. It convenes a diverse group of minds when planning a project. It considers not only who could be helped but also who could be harmed. It examines the risk of perpetuating historical bias. Ethical considerations don’t just pop up when AI is being deployed; they must be present from the time an idea is conceived, through the R&D process and deployment, and into the ongoing monitoring of models and outcomes.

It’s important to note that generative AI, like any form of AI, requires human oversight to be trustworthy. We’ve not reached a moment in time where AI, or any form of technology, is automatically and persistently aligned with our values. Humans must be present to ensure that. Doing so allows an organization to mitigate risk to its reputation, brand, and bottom line.  

CIO&Leader: Based on discussions at the recently concluded G20 summit in India and the growing industry emphasis on AI ethics, where do you see the future course of AI ethics heading in the next decade?

Reggie Townsend: Ethics should be as essential to AI as data. One thing that will reduce ethical concerns is training models on better data sets. Right now, an AI image generator might default to a white man when asked to create a CEO, or oversexualize a woman. That’s not surprising, given the bias that exists in images across the internet. As we get better at feeding models more representative and less biased data, the outcomes will improve.

I think we’ll also learn from case studies where AI benefitted or harmed people in unexpected ways. It will become apparent that if one organization fails in its commitment to ethics, we all fail. It will take widespread confidence in AI for it to reach its potential. There will be a learning curve, but if we are grounded in ethics and our principles, at least we know we’re working together towards a common goal of beneficial AI ubiquity.  

The next decade will also see various regulations emerge that will hopefully coalesce around some common ethical guardrails while allowing flexibility for innovation.

CIO&Leader: As more companies take a proactive stance towards responsible AI, what ripple effects might this have on industry standards, consumer expectations, and regulatory considerations?

Reggie Townsend: Companies must take a proactive stance towards responsible AI because the technology will always outpace regulation. An authentic commitment to responsibility should earn the industry a consistent voice in regulatory matters. Ultimately, it will benefit organizations to have regulatory guardrails that establish consistent rules and a level playing field. 

In the Internet Age, consumers have agreed to surrender a certain amount of ownership over their data in return for better service and a better overall digital experience. If done well, AI will only improve that, and expectations will rise. If organizations make responsibility a priority, it will increase trust in AI, its proliferation, and overall consumer acceptance.

AI is going to impact all industries, but ethics and responsibility will be particularly critical in industries where health, wealth, and freedom are at stake. Domain experts should be consulted to apply AI within a specific context. For instance, people with expertise in healthcare administration, lending, and public safety should be part of the inception and development of AI applications for health, finance, and law enforcement. These are all areas where inequities have harmed people in the past, and bias remains a serious concern. That said, domain experts aren’t just there to mitigate risk; they are also better at identifying opportunities within an industry to make a real difference. In the end, technologists need the voice of the potentially impacted and most vulnerable to make better technology.

CIO&Leader: How is SAS spearheading the responsible AI movement, setting a benchmark for ethical practices in AI development? What are the trends for 2024?

Reggie Townsend: Responsible AI requires a comprehensive approach involving people, processes, and technology. While regulation will provide the framework for instilling the responsible use of AI, widespread cooperation among developers and users of AI will be critical.   

As an early pioneer of analytics and AI, SAS has been dealing with ethical questions about these powerful technologies for nearly five decades. We bring a unique voice and perspective to conversations with our customers, as well as with the various organizations and committees shaping the future of AI.

Achieving trustworthy AI will require a large cultural shift that will take place over time. Fortunately, there is strong momentum behind trustworthy AI, and SAS is proud to be a partner and leader in those efforts. 

Looking ahead, I think we’ll continue to see governments wrestle with how best to regulate and put legal regimes in place that safeguard citizens while allowing innovation. But it will still be a struggle because of the fast-moving nature of technology. We’re hearing an increasing number of governments make statements that align with each other, and we should see regulation coalescing around certain principles like human-centricity.    

Even if it takes time to settle on the exact wording of laws, governments will likely use the leverage they have through their vast purchasing power to set de facto standards and expectations for ethical behavior.   

I also think we’ll see a growing number of non-technical roles weighing in on the AI conversation. It needs to be more than just technologists setting the agenda when there are implications for justice, well-being, and equity. We need non-technical domain experts to consider those implications and uncover risks and opportunities. 

We should see increased sophistication in how we measure and monitor the performance of AI and how we are tracking towards responsible AI goals. We need to know if a model overstepped or underperformed. For example, SAS is working on model cards that will help our customers do this sort of analysis. 
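
SAS has not published those model cards here, so the following Python sketch is purely illustrative of the kind of information a model card typically records; every field, name, and value is hypothetical:

```python
# Illustrative sketch only: a generic, hypothetical structure for the
# facts a model card records so humans can monitor a model over time.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Generic sketch of the facts a model card might record."""
    name: str
    intended_use: str
    training_data: str
    known_limitations: list = field(default_factory=list)
    fairness_checks: dict = field(default_factory=dict)  # metric -> value

card = ModelCard(
    name="loan-approval-v3",  # hypothetical model
    intended_use="Pre-screening consumer loan applications",
    training_data="2019-2023 applications, region X (hypothetical)",
    known_limitations=["Not validated for small-business loans"],
    fairness_checks={"demographic_parity_gap": 0.04},
)
print(card)
```

Documentation like this is what lets a reviewer later say whether a model overstepped or underperformed against its stated intent.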
