Predictions of when machines will make us obsolete seem to come either from AI evangelists or from doom-mongers with little practical experience of the field. Now, though, researchers have carried out the largest-ever survey of machine learning experts on the subject.
The advent of AI that can outperform humans at various tasks will have a dramatic impact on society, so forecasting when particular skills or jobs will be automated could be invaluable for policymakers.
But the field is so fiendishly complex and has so many specialized sub-disciplines that very few people are in a position to forecast when these breakthroughs will come. So instead, researchers at Oxford University’s Future of Humanity Institute decided to crowdsource the problem.
They contacted 1,634 researchers who had published papers at NIPS and ICML in 2015, the two leading machine learning conferences, and asked them to complete a survey on the topic; 352 researchers responded.
When all the researchers’ answers were combined, the aggregate forecast was that there is a 50 percent chance that “unaided machines can accomplish every task better and more cheaply than human workers” within 45 years, and a 10 percent chance of it occurring within nine years.
Interestingly, there was a large discrepancy between the predictions of Asian respondents, who expect this to occur in 30 years, and North Americans, who expect it to take 74 years.
And when the question was worded slightly differently to gauge when all human labor would be automated rather than just when it could be, the aggregate forecast was a 50 percent chance in 122 years from now and a 10 percent chance within 20 years.
The survey also asked for predictions of when machines would take over a few specific activities: translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053).
However, the limited usefulness of specific predictions like these is illustrated by the fact that back in 2015 the researchers predicted it would take until 2027 for an AI to beat a human at the board game Go. Google DeepMind’s AlphaGo beat a top-ranked professional the following year and the world’s number one this year.
Perhaps more interesting are some of the broader findings of the survey, such as a perception that progress in machine learning is accelerating. More than two-thirds of respondents said progress was faster in the second half of their career and only 10 percent said progress was faster in the first half.
There was little support for one of the mainstays of AI evangelism, though. The “intelligence explosion” is the idea that once AI reaches human-level intelligence, including in developing AI itself, its ability to operate in parallel and at far greater speeds than humans will lead to rapid growth in its capabilities.
When asked how likely it was that AI would perform vastly better than humans in all tasks two years after machines overtook human capabilities, the median probability was just 10 percent. When asked whether there would be explosive global technological improvement two years after that point, the median probability was 20 percent.
Unsurprisingly, the vast majority of respondents thought machines outperforming humans would have a positive impact on humanity. But 48 percent also said there should be more research aimed at minimizing the risks of AI.
While the results of the survey are informative, it’s important to remember that machine learning researchers are inherently enthusiastic about the technology. That means they’re liable to overestimate the speed of progress, while simultaneously underestimating the potential negative implications.
They are also probably not well qualified to judge how technological advances will interact with things like politics, economics, and human psychology. Just because a machine can do something doesn’t necessarily mean it will. Many factors beyond technological readiness determine whether AI will be widely adopted.
Nevertheless, the perspective of those on the bleeding edge of AI research is an important one. While they may have blind spots, they’re certainly better positioned to pass judgment than many of the commentators weighing in on the debate. Let’s just hope their optimism is well-founded.