What the Experts Think About the Existential Risks of Artificial Intelligence

Elon Musk, Bill Gates, Stephen Hawking, and others signed an open letter calling for safety standards in industries dealing with artificial intelligence. In particular, they called for research and development of fail-safe control systems that could prevent malfunctioning AI from doing harm, possibly existential harm, to humanity.
 
“In any of these cases, there will be technical work needed in order to ensure that meaningful human control is maintained,” the letter reads. Though itself measured in tone, the letter was seen by many as the panic of a few machine-phobic Luddites, and in the ensuing backlash the press was inundated with stories quoting AI researchers who dismissed such concerns.
 
“For those of us shipping AI technology, working to build these technologies now,” Andrew Ng, an AI expert then at Baidu, told Fusion, “I don’t see any realistic path from the stuff we work on today, which is amazing and creating tons of value, but I don’t see any path for the software we write to turn evil.”