Carnegie Mellon is the latest organization to announce research into the possible ethical implications of artificial intelligence. From the White House to large tech companies, ethics in AI seems to be a top priority for everyone who matters.
Carnegie Mellon University today announced the establishment of a new research center focused on the ethics of artificial intelligence (AI). Funded by a $10 million gift from global law firm K&L Gates LLP, the center will study the ethical and policy issues surrounding artificial intelligence and other computing technologies. Called the K&L Gates Endowment for Ethics and Computational Technologies, the initiative will include fellowships, conferences and annual prizes.
Carnegie Mellon’s is neither the first nor the most high-profile such effort. The list of organizations that have shown their sensitivity to the issue includes the White House, the World Economic Forum, Stanford University and top tech companies.
Last month, five top tech companies (Amazon, DeepMind/Google, Facebook, IBM, and Microsoft) announced the Partnership on Artificial Intelligence to Benefit People and Society, called the Partnership on AI for short, whose primary mandate is to study ethical and legal issues in artificial intelligence (AI).
The concern is not new, though. Much of it, from fiction about robots taking over to more complex questions involving human-machine relationships and the legal and moral issues of machines making decisions, has arisen primarily from envisaged possibilities, many of which are now becoming real.
One serious project that started almost in parallel with this wave of renewed interest in AI is The One Hundred Year Study on Artificial Intelligence, launched in 2014 by Stanford University. Described as “a long-term investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society”, it aims to form a study panel every five years to assess the current state of AI. The 2016 report, the first in the series, was released some time back. The project covers the ethics of AI, in addition to other issues, and aims to provide expert-informed guidance for directions in AI research and policy around AI. In fact, this project seems to be one of the influencers of the partnership formed by the top five tech companies: Eric Horvitz, Microsoft Technical Fellow and managing director of Microsoft’s Redmond, Washington research lab, who is a standing committee member of the project, is the interim co-chair of the Partnership on AI.
So, what are the possible ethical issues that everyone is so concerned about? The World Economic Forum, a global think-tank, recently released a paper, Top 9 ethical issues in Artificial Intelligence (AI), that nicely summarizes the possible concerns. Though not exactly comprehensive—by definition it cannot be, as no one knows the future—it is the best reference list to start with.
The issues that the WEF paper, written by Julia Bossmann, president of the Foresight Institute, summarizes are the following:
- Unemployment: what happens after jobs are lost to machines?
- Inequality: how do we distribute the wealth created by machines?
- Humanity: how do machines affect our behavior and interaction?
- Artificial stupidity: how can we guard against mistakes by machines?
- Bias: how do we eliminate biases such as racism in machines?
- Security: how do we keep AI safe from adversaries?
- Unintended consequences: how do we protect against them?
- Control: will we cede control to machines at some point? How do we guard against that?
- Robot rights: how should we treat human-like robots, in terms of legal rights and emotional relationships?
Finally, in what could be one of the most telling signs that AI is here for real, the US administration has woken up to the possibility of AI creating new challenges for governance and legislation.
In a paper, Preparing for the Future of Artificial Intelligence, the White House, through the National Science and Technology Council Committee on Technology, has taken a fairly comprehensive look at the implications AI could have, including ethical and legal ones. Among other things, it talks of the need to help practitioners of AI understand the ethical issues involved. It makes 23 recommendations, three of which concern ethics. One is explicitly about ethics: schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science. The other two, one on security and another on international cooperation on humanitarian law and autonomous and semi-autonomous weapons, also touch on the issue.
With broad consensus emerging among all stakeholders on the need to study the ethical implications, AI research should hopefully become far more complete. The next challenge is to sensitize AI researchers across the world.