The Ethics of Artificial Intelligence
From its inception, IIIM has submitted proposals to – and been awarded numerous grants from – the Student Innovation Fund (NSN). This year an investigation into the ethics of artificial intelligence (AI) was led by Master's student and IIIM intern Thorbjörn Kristjánsson, under the guidance of Dr. Kristinn R. Thórisson (IIIM & Reykjavik Univ.) and Dr. Morten Dige (Aarhus Univ.). The project made some waves: it was the subject of a feature article in DV and was runner-up to the President's Student Innovation Award, one of only 12 projects shortlisted for this honor. We congratulate Thorbjörn on this success – a noteworthy achievement, as over 200 projects competed for the award.
Focus on ethical issues in the development and use of AI
AI technology is already found in telecommunications, digital photography, computer games, and robots that perform various functions such as control, baggage screening, and factory production – all quite benign uses. Recent years, however, have seen rapid development of AI for military purposes, including new weapons and various robotic technologies for war. Thorbjörn's project focuses on ethical issues in the development and use of AI for such purposes, especially in light of potential future developments and the potential for its misuse in our society. The aim of his work is, moreover, to identify and analyze risk factors associated with the rapid advancements on this front – risk factors ranging from surveillance and data mining, and their impact on privacy, to potentially catastrophic use in cyberwarfare, a new and relatively uncharted domain of warfare. Many ethical questions can be raised in the context of such uses of AI, as well as its application to espionage and surveillance, automation of energy networks, and deployment of robots in numerous new areas. With increasing autonomy and independence from their human designers, AI technologies can be expected to cross ethical lines ever more frequently.
An important part of Thorbjörn's work on the ethics of AI concerns ethical guidelines that responsible AI research laboratories might adopt – now or in the near future – if they wish to set an example for responsible behavior and take an active stance in being accountable for their own results. An IIIM technical report on this subject is planned within a few months, and we hope to make it a foundation for further defining the conduct of IIIM in its research.