Tech Leaders Establish 23 Principles for Ensuring Beneficial AI

Spiraling clockwise from top left to center: Dr. Demis Hassabis of DeepMind, inventor Dr. Ray Kurzweil, Dr. Erik Brynjolfsson of MIT, LinkedIn CEO Reid Hoffman, Alphabet Chairman Eric Schmidt, Dr. Andrew McAfee of MIT, IIIM Director Dr. Kristinn R. Thórisson, Dr. Stuart Russell of UC Berkeley, Dr. Max Tegmark of MIT, and venture capitalist and Skype co-founder Jaan Tallinn.

Some of the world’s most prominent AI and high-tech leaders assembled at Asilomar, California, this January to discuss opportunities and challenges related to the future of AI, and the steps we can take to ensure the technology is beneficial for everyone. Among the 150 participants were IIIM’s Director Kristinn R. Thórisson, Tesla CEO Elon Musk, DeepMind founder Demis Hassabis, inventor Ray Kurzweil, and Alphabet Chairman Eric Schmidt.

The five-day conference, titled ‘Beneficial AI 2017’ (BAI), was hosted by the Future of Life Institute and brought together a select group of AI researchers and thought leaders from academia and industry. The question of how to align artificial intelligence with human values was tackled from different angles, drawing on the attendees’ broad range of expertise in economics, law, ethics, and philosophy.

The conference produced a list of principles for guiding the development of safe and beneficial AI technologies in the coming decades. Addressing research, ethics and values, as well as longer-term issues in AI, the list of 23 principles has already been signed by more than 2,000 people since it was published online in early February. The principles represent the beginning of a conversation aimed at building a framework for ensuring that artificial intelligence technologies benefit as many people as possible. Here you can read the complete list of the ‘Asilomar AI Principles’.

How to ensure that humans remain in control of machines as they grow smarter and more capable has long been a concern, and often the subject of fiction. Recently, leaders such as Bill Gates and Stephen Hawking have expressed such concerns. Tesla CEO Elon Musk noted in a panel at the BAI conference that “We’re headed towards either super intelligence or civilization ending.”

Apart from such long-term concerns, there is the question of how we employ the technology already at our disposal. Principle 18, ‘AI Arms Race’, states that an arms race in lethal autonomous weapons should be avoided. This is a cause that IIIM has campaigned for very actively, as the first research lab to establish and adopt an Ethics Policy for Peaceful R&D. Our ethics policy states, among other things, that IIIM will not undertake any project or activity funded by military grants or aimed at creating weapons controlled by AI.

The BAI conference continues several years of discussion on this issue. The Campaign to Stop Killer Robots was launched in 2013 to call for an international ban on the development and deployment of autonomous weapons systems. In 2015 the Future of Life Institute published an open letter calling for an international ban on “killer robots”, which scientists such as Stephen Hawking, along with 20,000 others, supported and signed. In addition, discussions on the subject have taken place within the UN Human Rights Council.

Stefano Ermon. Photo: Future of Life Institute.

Participants at the ‘Beneficial AI 2017’ conference expressed their concerns on the matter, among them Stefano Ermon, Assistant Professor in the Department of Computer Science at Stanford University: “I think that the technology has a huge potential, and even just with the capabilities we have today it’s not hard to imagine how it could be used in very harmful ways. I don’t want my contributions to the field and any kind of techniques that we’re all developing to do harm to other humans or to develop weapons or to start wars or to be even more deadly than what we already have.”

Toby Walsh. Photo: Future of Life Institute.

Toby Walsh, Professor of Artificial Intelligence at the University of New South Wales, expressed similar concerns: “It’s technologies that aren’t going to be able to distinguish between combatants and civilians, and aren’t able to act in accordance with international humanitarian law, and will be used by despots and terrorists and hacked to behave in ways that are completely undesirable. And that’s something that’s happening today.”