Two of the world's leading thinkers and innovators, Stephen Hawking and Elon Musk, along with several hundred academics and researchers, have endorsed a set of ethical principles intended to guide the development of artificial intelligence.

Tesla CEO Elon Musk has long warned that increasingly intelligent machines may one day rival and overtake human intelligence altogether. Likewise, world-renowned cosmologist Professor Stephen Hawking has cautioned humanity that such machines could pose a threat far worse than anything nuclear weapons could inflict.

These potential threats prompted Musk and Hawking to endorse the 23 Asilomar AI Principles, which were drawn up by the Future of Life Institute. The principles are designed to ensure that machines remain subordinate to humans rather than coming to dominate them. They are elucidated as follows:

The Research Goal is stated as follows: "The goal of AI research should be to create not undirected intelligence, but beneficial intelligence." The Research Funding portion reads as follows: "Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as: How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked? How can we grow our prosperity through automation while maintaining people's resources and purpose? How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI? What set of values should AI be aligned with, and what legal and ethical status should it have?"

As for the Science-Policy Link, the principles state: "There should be constructive and healthy exchange between AI researchers and policy-makers." Research Culture is stated as follows: "A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI." The portion relating to Race Avoidance reads as follows: "Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards."

The Ethics and Values portion of the principles covers a wide range of subjects, namely: safety, failure transparency, judicial transparency, responsibility, value alignment, human values, personal privacy, liberty and privacy, shared benefit, shared prosperity, human control, non-subversion, and the AI arms race.

The Asilomar AI Principles were developed after the Future of Life Institute brought together various experts for its Beneficial AI 2017 conference. These experts came from fields including robotics, physics, economics, and philosophy. To make the final list, each principle required the approval of at least 90% of the attendees. While the 23 principles cover a wide range of topics, the Future of Life Institute explains on its website: "This collection of principles is by no means comprehensive and it's certainly open to differing interpretations, but it also highlights how the current 'default' behavior around many relevant issues could violate principles that most participants agreed are important to uphold."

These 23 principles mark the beginning of efforts to ensure the safety of humanity in the age of intelligent machines. Whether humans will one day be subservient to machines remains uncertain. A dismal or a bright future hangs in the balance, and it is up to humanity to decide which way the scale tips.
