IIIM’s Civilian A.I. Policy

We are proud to announce our new A.I. Policy for Peaceful R&D. The policy takes aim at two major threats to societal prosperity and peace. On the one hand, military spending continues to increase throughout the world, including spending on automated weapons development. Justified by “growing terrorist threats”, these efforts are themselves resulting in increased use of undue and unjustified force, military and otherwise, the very thing they aim to suppress. On the other hand, the growing possibility, and in many cases the clearly documented practice, of governments wielding advanced technologies to spy on their law-abiding citizens and to sidestep long-accepted public policy intended to protect private lives from public exposure has gradually become too widely accepted. In the coming years and decades, artificial intelligence (AI) technologies, and powerful automation in general, have the potential to make these matters significantly worse.

In the U.S., a large part of research funding for AI has come from the military. Since WWII, Japan has taken a clear-cut stance against military-oriented research in its universities, standing for over half a century as a shining example of how a whole nation could take the pacifist high road. Instead of more countries following its lead, the exact reverse is now happening: Japan is relaxing these constraints (1), as funding for military activities continues to grow in the U.S., China, and elsewhere. And were it not for the extremely brave actions of a single individual, Edward Snowden, we might still be in the dark about the NSA’s pervasive breach of the U.S. Constitution, trampling on civil rights that took centuries to establish.

In the past few years, the ubiquity of AI systems such as Apple’s Siri, Google’s powerful search engine, and IBM’s question-answering system Watson has led to growing interest in AI across the globe, increasing the funding available for such technologies in all their forms. We should expect a speedup, not a status quo or slowdown, in the global advancement and adoption of AI technologies across all industries. It is becoming increasingly important for researchers and laboratories to take a stance on who is to benefit from their R&D efforts: just a few individuals, groups, and governments, or the people of planet Earth in general? This is what we are doing today. This is why our Ethics Policy for Peaceful R&D exists. As far as we know, no other R&D laboratory has initiated such a policy.

REFERENCE

1. Eric Pfanner and Chieko Tsuneoka, “Japan Looks to End Taboo on Military Research at Universities: Government Wants to Tap Best Scientists to Bolster Defenses,” March 24, 2015.

IIIM’s Civilian A.I. Policy

The Board of Directors of IIIM believes that the freedom of researchers to explore and uncover the principles of intelligence, automation, and autonomy, and to recast these as the mechanized runtime principles of man-made computing machinery, is a promising approach for producing advanced software with commercial and public applications, for solving numerous difficult challenges facing humanity, and for answering important questions about the nature of human thought.

A significant part of artificial intelligence (AI) research worldwide is, and has long been, funded by military authorities or by funds assigned to various military purposes, indicating its importance to, and applicability in, military operations. A large portion of the world’s most advanced AI research is still supported by such funding, as opposed to projects directly and exclusively targeting peaceful civilian purposes. As a result, a large and disconcerting imbalance exists between AI research with a focus on hostile applications and AI research with an explicitly peaceful agenda. Increased funding for military research has a built-in potential to fuel a continual arms race; reducing this imbalance may lessen the chances of conflict arising from international tension, distrust, unfriendly espionage, terrorism, undue use of military force, and unjust use of power.

Just as AI has the potential to enhance military operations, its utility for enabling the perpetration of unlawful or otherwise undemocratic acts is unquestionable. While this is less obvious at present than the military use of AI and other advanced technologies, the falling cost of computing is likely to make highly advanced automation technology increasingly accessible to anyone who wants it. The potential for all technology of this kind to do harm is therefore increasing.

 

For these reasons, and as a result of IIIM’s sincere goal to focus its research on topics and challenges of obvious benefit to the general public, and for the betterment of society, human livelihood, and life on Earth, IIIM’s Board of Directors hereby states the Institute’s stance on such matters clearly and concisely by establishing the following Ethics Policy for all current and future activities of IIIM:

1 – IIIM’s aim is to advance scientific understanding of the world, and to enable the application of this knowledge for the benefit and betterment of humankind.

2 – IIIM will not undertake any project or activity intended to (2a) cause bodily injury or severe emotional distress to any person, (2b) invade the personal privacy or violate the human rights of any person, as defined by the United Nations’ Universal Declaration of Human Rights, (2c) be applied to unlawful activities, or (2d) commit or prepare for any act of violence or war.

2.1 – IIIM will not participate in projects for which there exists any reasonable evidence of activities 2a, 2b, 2c, or 2d listed above, whether alone or in collaboration with governments, institutions, companies, organizations, individuals, or groups.

2.2 – IIIM will not accept military funding for its activities. ‘Military funding’ is defined as any and all funds designated to support the activities of governments, institutions, companies, organizations, or groups, where those funds are explicitly intended to further a military agenda, or to prepare for or commit any act of war.

2.3 – IIIM will not collaborate with any institution, company, group, or organization whose existence or operation is explicitly, whether in part or in whole, sponsored by military funding as described in 2.2, or controlled by military authorities. For civilian institutions with a history of undertaking military-funded projects, a 5-15 rule will be applied: if, over the past 5 years, 15% or more of their projects were sponsored by such funds, they will not be considered as IIIM collaborators.
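As a rough illustration, and not part of the policy text itself, the following minimal Python sketch shows how the 5-15 screening in 2.3 might be applied to a prospective collaborator. The project records and the is_military_funded field are hypothetical assumptions made only for this example.

    # Hypothetical sketch only; the data layout is an assumption, not part of the policy.
    def passes_5_15_rule(projects):
        """Return True if fewer than 15% of an institution's projects over the
        past 5 years were sponsored by military funding as defined in 2.2."""
        if not projects:
            return True  # no recent projects, nothing to screen
        military = sum(1 for p in projects if p["is_military_funded"])
        return military / len(projects) < 0.15

    # Example: 6 military-funded projects out of 40 is exactly 15%,
    # so under this reading the institution would not qualify as a collaborator.
    recent_projects = [{"is_military_funded": i < 6} for i in range(40)]
    print(passes_5_15_rule(recent_projects))  # False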
