The trust challenge in the age of AI

So far, no AI system has come close to matching human-level intelligence reliably, and that gap fuels growing concerns about how far AI can be trusted.

Picture a future where your AI buddy is always there, ready to chat, offer guidance, and solve your problems around the clock. You trust it entirely, as you would a friend and well-wisher.

But one day, someone with malicious intent manipulates its data and tricks your AI buddy into creating harmful content. Unaware of the compromise, you continue to trust it. Sounds unsettling, doesn’t it? Such an assistant could do more harm than good, and yet this is the direction we are heading.

There is little doubt that AI is swiftly changing the way we work and how organizations function. While it undoubtedly supplies the intelligence needed to automate processes, improve productivity, and help organizations innovate and generate new revenue streams, it also risks undermining trust, a concern serious enough to threaten its future.

To date, no foolproof mechanism has enabled AI systems to match human intelligence. Take driverless cars, touted as the next breakthrough of the AI era: incidents worldwide have shown autonomous vehicles repeatedly failing to navigate situations without human intervention. In one notable case in the US, a GM Cruise self-driving car collided with a pedestrian who had already been struck by another vehicle, prompting the suspension of Cruise’s autonomous operations and ensuing legal action.

According to an IBM report titled “Building trust in AI,” which draws on conversations with 30 AI scientists and thought leaders, establishing trust in AI requires substantial effort to imbue it with moral sensibilities, make it operate with complete transparency, and educate businesses and consumers about the opportunities it presents. The report emphasizes that this must be a collaborative effort spanning scientific disciplines, industries, and governments.

Generative AI brings a further challenge. It supplements traditional approaches to AI development and introduces new complexities of its own. If the launch of Yahoo! in 1994 and Google in 1998 made searching the web easy and universally accessible, the launch of ChatGPT in November 2022 opened up an altogether new way of accessing information, one that may, in the years to come, leave Generation Alpha with little memory of what web search was all about. The shift is inevitable, and it makes finding information more intuitive.

But that’s not the real problem. The real challenge is whether these new LLMs can build the trust that conventional search engines have earned over decades. Fake information and misinformation are among the biggest concerns growing alongside AI-generated content. As threat actors evolve and gain the ability to produce text that mimics human writing, ensuring that deployed LLMs are fair, transparent, and ethically sound becomes a monumental task for organizations seeking to build user trust.

Added to this, the proliferation of manipulation and deepfakes makes it extremely difficult to separate fiction from reality. Clear guidelines and regulations around the use of AI, particularly in sensitive areas like healthcare and finance, are essential for fostering trust among users and stakeholders, but the technology is outpacing the legal and regulatory frameworks, which must therefore evolve constantly.

“One of the significant challenges for generative AI is its struggle to replicate the human touch or empathy,” noted Prof. Toby Walsh in a discussion with CIO&Leader. This gap distinguishes human-to-human interaction from machine-to-human engagement, presenting both a challenge and an untapped opportunity.

“The future is unpredictable,” remarked Dr. Pavan Duggal, a leading cyberlaw advocate. “While AI has pushed the boundaries of innovation and productivity, the absence of appropriate policies and governance models could lead to numerous challenges and legal issues that erode trust,” he emphasized in a recent interaction with CIO&Leader.

As experts and governments debate what trustworthy AI should encompass, a global consensus has yet to be reached. Yet as the next generation increasingly relies on AI for daily tasks, we stand on the threshold of an AI-driven era as sweeping as digital transformation. It is imperative that we proactively address the associated challenges and establish trustworthy models before this era fully unfolds.