Might machines equipped with artificial intelligence spiral out of our control and destroy humanity? Those who worry include physicist Stephen Hawking and entrepreneur Elon Musk. They warn that superintelligent machines of the near future are sure to malfunction and will evolve on their own so rapidly that, at some point, they will have no use for us.
In one respect, of course, Messrs. Hawking and Musk are obviously right: Nothing made by human beings is flawless.
On the other side of the argument, however, are equally knowledgeable figures, such as Facebook’s Mark Zuckerberg and Andrew Ng, chief scientist at Baidu, known as China’s Google. They see the many ways AI will serve humans, such as by diagnosing and curing disease, expanding education, improving the environment, rescuing people from natural disasters, exploring space, helping the disabled — the list goes on — all without threatening our demise. Mr. Ng believes AI can be programmed with “moral dimensions.”
I’m not worried — at least not yet.
Most smart machines today are controlled by AI that is “narrow” or “weak,” programmed to perform a specific task, such as beating a human at chess, vacuuming a floor or driving a car. “Superintelligent” machines, which can learn, reason, intuit and perform complex tasks better and faster than humans, are in their infancy. As for machines that can take over the world …
“Worrying about it is like worrying about the overpopulation of Mars before colonists set foot there; we have plenty of time to figure it out,” says Mr. Ng, who believes it may take hundreds of years before AI surpasses human intelligence. Tech writer Jeff Goodell calls most robots today “as dumb as a lawnmower.”
Of course, the follies of human history provide many reasons to be concerned about the possible misuse of artificial intelligence. That’s why Demis Hassabis, co-founder of AI developer DeepMind, thinks it is important to assess whether each particular AI advance is designed to help and heal or threaten and destroy. He favors international guidelines, and many in the AI community recently signed an open letter calling for comprehensive research into safeguards to ensure that AI systems will be “robust and beneficial.”
Universities, companies, nongovernmental organizations and governmental offices of technology are establishing AI safety strategies and guidelines. MIT’s Media Lab, for instance, is organizing collaborations among computer scientists, social scientists and philosophers aimed at predicting and controlling any problems that arise with AI. Five of the world’s largest tech companies — Amazon, Facebook, IBM, Microsoft and Google — are writing ethical standards.
One potential safety measure is to require a clearly defined mission for each new AI program and to build in encrypted barriers to unauthorized use. DeepMind and researchers at the University of Oxford are developing a “kill switch” so that AI machines can be shut down without their knowing that humans are capable of doing so. “Interruptibility” code could prevent mistakes or misuse. For instance, it could be used to stop a medical robot from killing someone genetically prone to cancer in order to “cure” the disease, or a military robot from killing noncombatants, or an unscrupulous hacker from creating havoc.
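To picture the idea, here is a small, purely illustrative sketch in Python. It does not reflect DeepMind’s or Oxford’s actual work; the names (MissionPolicy, InterruptibleAgent, interrupt) are hypothetical stand-ins, assumed only to show how a narrowly defined mission plus an operator-held interrupt flag might constrain a machine that never inspects the flag itself.

```python
# Illustrative sketch only: a toy "interruptible" control loop in the spirit
# of the kill-switch idea described above. All names are hypothetical and do
# not describe any real system.

import threading
import time


class MissionPolicy:
    """A clearly defined mission: only whitelisted actions are allowed."""

    def __init__(self, allowed_actions):
        self.allowed_actions = set(allowed_actions)

    def permits(self, action):
        return action in self.allowed_actions


class InterruptibleAgent:
    """Wraps an agent's action loop so a human operator can halt it at any time.

    The agent never reads or reasons about the interrupt flag, so it has no
    incentive to learn about, or work around, the shutdown mechanism.
    """

    def __init__(self, policy):
        self.policy = policy
        self._interrupted = threading.Event()  # set only by the human operator

    def interrupt(self):
        """The human-held kill switch."""
        self._interrupted.set()

    def run(self, planned_actions):
        for action in planned_actions:
            if self._interrupted.is_set():
                print("Operator interrupt received; halting safely.")
                return
            if not self.policy.permits(action):
                print(f"Action '{action}' is outside the mission; refusing.")
                continue
            print(f"Executing permitted action: {action}")
            time.sleep(0.1)  # stand-in for real work


if __name__ == "__main__":
    policy = MissionPolicy(allowed_actions={"scan_patient", "report_findings"})
    agent = InterruptibleAgent(policy)

    # The operator can flip the switch from another thread at any moment.
    threading.Timer(0.15, agent.interrupt).start()
    agent.run(["scan_patient", "administer_untested_drug",
               "report_findings", "report_findings"])
```

The point of this sketch, under those assumptions, is that the shutdown signal lives entirely on the human side of the loop: the program refuses anything outside its stated mission and stops the moment the operator says so, without ever being told that such a switch exists.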
In short, doomsday scenarios remain far in the distance and likely avoidable. And in fact, some AI machines might be capable of making better moral decisions than humans. Ronald Arkin, an AI expert at Georgia Tech, points out that AI-powered military robots, for example, might be ethically superior to human soldiers because they would not rape, pillage or make poor judgments under stress. Machine ethics is a whole new field of research that studies how human values can be engineered into technology.
Governments around the world are well aware of the potential dangers of artificial intelligence. Efforts are underway at the United Nations to develop what would essentially be a multilateral arms-control treaty to limit the construction and deployment of autonomous killer robots. Nonproliferation treaties limiting the development of nuclear, biological and chemical weapons have an uneven record of success, but they have without doubt created a world with far fewer of these weapons than there otherwise would have been.
Dystopian depictions of machines ruling the planet seem overwrought, comparable to arguments in the early 1900s that planes would fall out of the sky and cars would produce nothing but carnage. That said, these dark visions do serve as an urgent warning, one that demands the implementation of rigorous ethical standards, technological safeguards and regulatory oversight.
The life- and world-changing benefits of artificial intelligence appear infinite. We should be careful, but we should not let fear shape our future.
Henry Friedlander is a rising senior at Shady Side Academy.
First Published: July 29, 2017, 4:00 a.m.