Why 'responsible' use of technology is becoming an imperative

And how the IT fraternity must align itself with the new expectations

“I want Odia people to know that our goodness is being noticed. It is being valued. The world is beginning to discover it as a collective, pervasive characteristic of an entire people,” wrote Odisha Chief Minister Naveen Patnaik in a signed article that appeared in many national and regional newspapers in December. This was just after hosting what could arguably be called the best-organized Hockey World Cup ever.

It was a feel-good piece for sure, but it was unusual. It did not say anything about Patnaik’s government, his party or his leadership, even in a back-handed, indirect way.

This article came in the wake of an overwhelmingly common sentiment echoed by visiting teams, officials and spectators at the World Cup about the warmth and cooperation of the local people.

But in the end, it is not about Odia people or any other; it is really about the world at large pausing to recognize and appreciate goodness, simplicity and such human virtues. Maybe, as Patnaik pointed out, because of ‘its growing deficit’ in the world.

It is increasingly clear that the global community is beginning to realize that the mindless human pursuit of technological capability, business excellence and material wealth has reached a stage where it is threatening the very existence of the traits that made human beings ‘human’ in the first place. It is not so much about a Frankenstein; it is about the constant erosion of values, what Patnaik calls ‘loss of human touch’.

Since technology has become the prime driver of development and progress today, a lot of these trends—such as the growing deficit of trust, fears over livelihoods and fears of man-made ‘digital’ disasters—are not just being associated with technology; they are being blamed on technology.

The practitioners of technology, as a community, are only too familiar with the manifestations of these trends—the increasing vulnerability of physical infrastructure, the lack of privacy of individuals, fake news and the swaying of public opinion through it, financial fraud through data theft, possible loss of jobs…. These are social, political, as well as individual issues.

So realistic and so ominous are some of these challenges that they have now entered the technology discourse directly. ‘Ethics’ is the new holy grail in technological progress. ‘Responsible’ is beginning to become a familiar adjective in things technological.

Recent discourses—both in technology and business—make it amply clear that the conversation has moved from academicians and activists to business thought leaders and practitioners—thus changing the focus from problems and possibilities to solutions and actionables.

Globalization 4.0

For the last few years, the World Economic Forum (WEF)—in particular, its annual meeting at Davos every January—has set the global thought leadership agenda. Ideas, before making it to the action agendas of governments, businesses and official multilateral agencies and groups, are discussed and shaped at WEF.

In recent years, a lot of those big ideas have had technology ingrained in them. Take, for example, the Fourth Industrial Revolution, or Industry 4.0, now a commonly-used phrase in the business and technology community. It was the theme of WEF’s annual meeting in 2016.

The theme for this year—and from which the idea of this story originated—was Globalization 4.0, with a tagline “Shaping a New Architecture in the Age of the Fourth Industrial Revolution.”

When a serious conglomeration of global business and political leaders agrees to take up and deliberate a theme that is essentially about ‘sharing and caring’, we can understand why the Odisha CM—focused on putting his state on the world map—decided to choose the unusual topic of ‘being good’. This may well be the future.

“The unprecedented pace of technological change means that our systems of health, transportation, communication, production, distribution, and energy – just to name a few – will be completely transformed. Managing that change will require not just new frameworks for national and multinational cooperation, but also a new model of education, complete with targeted programs for teaching workers new skills. With advances in robotics and artificial intelligence in the context of aging societies, we will have to move from a narrative of production and consumption towards one of sharing and caring,” wrote Klaus Schwab, WEF founder and executive chairman, explaining Globalization 4.0.

It is not a 20,000-foot idea that just sounds good. Less than two weeks after the annual meeting, WEF published a board briefing on Responsible Digital Transformation, which presented specific findings in five ‘digital transformation’ areas, based on its consultations with business leaders and regulators through 2018.

“On many issues, business is today being increasingly challenged about its role in society. In the digital context, the responsibilities of organizations as the primary stewards of our data, or as providers of connected devices that we rely on for safety, are equally being called into question,” said the briefing foreword.

The five areas that it addressed—using a questionnaire-based toolkit that it developed—are cyber resilience, data privacy, AI, IoT, and blockchain or distributed ledger technology (DLT).

Of the five, cyber resilience is already an adopted priority for business, while IoT and blockchain—especially their socially-disruptive dimensions—are still relatively new.

From the point of view of the technology practitioner’s agenda, the ethical questions around AI and data privacy are important, immediate, and require deliberation and a specific action agenda.

Responsible AI Agenda

No other technology area has raised as many questions about ethics as Artificial Intelligence, for an obvious reason—we now have a potential replacement for the human brain.

While the concerns have been around for long, in the last three to four years there have been efforts to ‘do something about it’.

One of the first tangible initiatives in the realm of AI was a Stanford project, called The One Hundred Year Study on Artificial Intelligence, launched in 2014. It is a long-term investigation of AI and its influences on people, their communities, and society. It considers the science, engineering, and deployment of AI-enabled computing systems.

This prompted some companies to launch an industry group, the Partnership on AI. Launched in September 2016 with five top tech companies—Amazon, DeepMind/Google, Facebook, IBM, and Microsoft—as the initial members, the group has expanded to include more than 80 partners—tech companies, multilateral bodies, academic units, advocacy groups, media and other businesses working in the area. Some of the major tech names include Accenture, Intel, Salesforce, Samsung and Wikimedia.

Though the organization describes its objectives as to “conduct research, organize discussions, share insights, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advances the understanding of AI technologies”, its primary focus is on responsible use of AI.

This is evident from its work agenda: the six thematic pillars around which it is organized.

They are:

1. Safety-Critical AI
2. Fair, Transparent & Accountable AI
3. AI, Labor & the Economy
4. Collaborations between People & AI Systems
5. Social & Societal Influence of AI
6. AI & Social Good

In 2017, the MIT Media Lab and the Harvard Berkman-Klein Center for Internet and Society launched a joint project, The Ethics and Governance of AI Initiative, “that seeks to ensure that technologies of automation and machine learning are researched, developed, and deployed in a way which vindicate social values of fairness, human autonomy, and justice.”

However, the effort was still restricted to such collaborative initiatives, primarily targeted at technology makers, not users at large.

Last year, Accenture released a framework, Responsible AI & Robotics: An Ethical Framework.

“As businesses continue to expand their use of artificial intelligence, consumers will increasingly interact with digital agents. They will need to be able to put their trust in these AI systems when they apply for health insurance, student loans or a mortgage. But establishing that trust is easier said than done. There are significant challenges that your business will have to address on the way to creating trustworthy, responsible AI,” wrote Mark Wiermans, a senior Accenture executive, in a recent article titled Raising Responsible AI.

But the question is: How far can an organization go to ensure that its AI remains responsible?

What could be seen as a positive development is the recent decision by the AI non-profit OpenAI not to share the full version of GPT-2, a text-generation algorithm it developed, due to concerns over ‘malicious applications’.

However, OpenAI is a non-profit, not a commercial organization. Also, it was severely criticized, even ridiculed, by many AI researchers who accused it of creating unnecessary fear and mass hysteria around AI.

We should see a lot more such effort, targeted at enterprise IT managers, this year.

Privacy & Responsible Use of Data

International consultation on ethics around privacy of individuals is a well-discussed subject now.

One of the first initiatives at the international level was by the European Data Protection Supervisor (EDPS), EU’s independent data protection authority, which, in September 2015, published Towards a New Digital Ethics: Data, Dignity and Technology, an opinion that urged the EU and other international figures and organizations to promote an ethical approach to the development and employment of new technologies. It also formed an Ethics Advisory Group.

In June 2018, EDPS launched a public consultation on Digital Ethics. The responses contributed significantly to the agenda for the 40th International Conference of Data Protection and Privacy Commissioners which was organized by EDPS in October 2018. The theme was Debating Ethics: Dignity and Respect in Data-driven Life. The event was addressed by CEOs of Apple and Facebook as well as former CJI of India, Justice Jagdish Khehar.

In the consultation—which was otherwise largely open-ended—the respondents were asked if their organizations have any policies and/or procedures in place for ethical assessment. As many as 37% answered in the affirmative, while another 19% said such policies were under consideration.

That shows that digital ethics, especially around data privacy, has already become mainstream, at least in Europe.


Digital Ethics @ the Enterprise: Why the Time is Now

“As every company becomes a tech company, these new responsibilities will affect every digitally-enabled organization,” said the WEF briefing on Responsible Digital Transformation, referring to businesses’ responsibilities to safeguard data, prevent misuse of data and apply emerging technology responsibly. In short, digital ethics should be on the agenda of enterprise IT.

It probably is; if not, it should soon be. While the WEF and European Union debates and deliberations may still be a little too futuristic for enterprise IT, the latter often deliberates and sets its agenda based on trends identified by the likes of Gartner and Forrester. In particular, Gartner is a major influencer of the enterprise IT agenda.

The research firm has already identified digital ethics as one of the top ten strategic technology trends of 2019.

“Conversations regarding privacy must be grounded in ethics and trust. The conversation should move from ‘Are we compliant?’ towards ‘Are we doing the right thing?’,” says Gartner.

“Companies must gain and maintain trust with the customer to succeed, and they must also follow internal values to ensure customers view them as trustworthy,” it further adds.

As it becomes increasingly clear that business users of technology should look at some of these questions more proactively—compliance is reactive—it is time to set our own agenda.

By ‘own’, we mean an agenda that is in sync with the realities, expectations and moral standards of Indians. 

Otherwise, we will be forced to follow the agenda set by the developed economies, whose demographics, culture, development models, privacy concerns, data security frameworks and roles of policymakers and regulators may be very different. What is worse, we may have to follow the agenda set by the technology suppliers, not even by the enterprise IT users from those markets.

So, here are some of the basic realizations about the need for digital ethics that we must start with:

1. Something must be done
2. Regulations cannot achieve it
3. Profitable growth may be everybody’s business, but other metrics, such as customers’ privacy, are important too
4. Wider participation of stakeholders (tech developers, business users, consumers, policymakers, academia & the research community) is a must for acceptance and meaningful conversations
5. Partnership/collaboration is the best approach to move forward, at least till a more formal agenda is set
6. Both business executives and technology practitioners have to play their role in devising digital ethics practices
7. Clear actionables need to emerge, even though it may not be possible to define them fully
8. Digital ethics could be built into technology applications; it could be a filter, as security has become

Here is a suggested step-wise approach for the IT fraternity:

  • Familiarize themselves with key global issues (e.g. privacy, job loss, security)
  • See which of these are important:
      • In India
      • In their industry globally
      • For their company, based on its culture
  • Examine which of these have already been addressed by existing regulation; where regulation is in the offing; where regulation is expected, but not in the near future; and which are beyond regulation and law but remain moral issues
  • Set the clear expectation that compliance is the starting point, not the end, of the ethics journey
  • Since some of the issues originate in technology, proactively sensitize other CXOs to those issues
  • Create a filter or define the contours
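The ‘filter’ idea in the last step can be made concrete in code. The sketch below is purely illustrative: the EthicsFilter class, the check names and the proposal fields are all hypothetical assumptions, not any standard or a vendor’s framework. It shows how an ethics checklist could gate a technology project the way a security review gate does, with compliance as only one of the checks:

```python
# A minimal, hypothetical sketch of an "ethics filter" gating a project
# proposal. All names and checks here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class EthicsCheck:
    name: str
    passed: bool
    note: str = ""


@dataclass
class EthicsFilter:
    """Runs a project proposal through an ethics checklist before approval."""
    checks: list = field(default_factory=list)

    def assess(self, proposal: dict) -> list:
        # Each check mirrors a step from the article's approach: consent,
        # regulatory compliance, and moral issues beyond regulation.
        self.checks = [
            EthicsCheck("consent", proposal.get("user_consent", False),
                        "Was explicit consent obtained for this data use?"),
            EthicsCheck("regulation", proposal.get("compliant", False),
                        "Compliance is the starting point, not the end."),
            EthicsCheck("beyond_compliance",
                        proposal.get("ethics_review_done", False),
                        "Moral issues not yet covered by regulation."),
        ]
        return self.checks

    def approved(self) -> bool:
        # The gate opens only if every check passes.
        return bool(self.checks) and all(c.passed for c in self.checks)


gate = EthicsFilter()
gate.assess({"user_consent": True, "compliant": True,
             "ethics_review_done": False})
print(gate.approved())  # False: compliant, but fails the wider ethics check
```

The design point is that compliance is just one predicate among several; a proposal that is legal but fails a moral check is still held back, which is exactly the ‘beyond compliance’ expectation the checklist sets.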

It is clear that responsible use of technology—whether it has to do with data privacy or applying AI—is becoming an imperative. As goodness, caring & sharing, ethics and trust are being proposed as thrust areas for global leaders and common people alike, we at last have something to be hopeful about for the future of humanity.

We have a chance to remove the ‘deficit’ of trust and goodness that Patnaik laments. Shouldn’t we do our bit?

Read the CIO&Leader February 2019 Issue
