Calum Chace has done considerable research into the issues surrounding AI, in particular superintelligence, for his novel Pandora’s Brain. PlanetTech News interviews him about some of these topics, and their role in science fiction.
As a sci-fi author yourself (Pandora’s Brain), what are some examples of science fiction that you think depict future technologies (in particular AI) most accurately? What are some that may be far off the mark in envisioning AI?
The science fiction writer I most admire is Greg Egan, an Australian who lives in Perth. In particular I love his short stories, and two of his novels, Permutation City and Diaspora. He is unusual in that he takes seriously the idea that a superintelligence may arrive soon, which will quickly become very much smarter than us. Permutation City is about that shift happening, and Diaspora is about a world where it took place a while ago.
Egan studied physics to a high level, and he doesn’t shy away from using it in his fiction. Some of his more recent work (Schild’s Ladder, for instance) goes over my head.
There are no photos of Egan on the net, which is a considerable achievement for a successful author. There is a joke that he is himself an AI.
Another author whose work I very much enjoy is Iain M. Banks. His Culture universe is an engaging near-utopia where superintelligent AIs and enhanced humans get along very well and have exciting adventures together. It seems something of a cheat to me, though, because I can’t see what is in it for the AIs. But he was a brilliantly talented writer and his books are great fun. Consider Phlebas was written first and is set first chronologically, but devotees suggest that The Player of Games is the one to start with.
Among up-and-coming authors, I like William Hertling. His Avogadro series also addresses the arrival of artificial general intelligence in a sophisticated way.
With all the developments in narrow AI, has an example of artificial general intelligence ever been created, even a very limited one?
No.
Perhaps I should be less dogmatic, and say, not as far as we know.
But no.
What do you anticipate might be the key developments in AI over the next 10 years and where might they come from?
In the next ten years AI will get better and better at sensing and interpreting the world for us, and presenting us with useful knowledge about what is around us and what is out of reach. Think iPhones on steroids.
It will also become increasingly obvious that we are heading towards widespread technological unemployment as AIs become able to do most types of work better than we can. Can we keep inventing new types of work? Maybe, maybe not. Despite confident forecasts, I don’t think we know yet.
Also, more people will become aware of the potential threat of superintelligence, and the need to allocate significant resources to the so-called Friendly AI (FAI) project of making sure that the first superintelligence is beneficial for humanity. If FAI succeeds, humanity’s future is marvellous beyond imagining. If not, we’re pretty much screwed.
The galaxy may not care whether or not humans survive, but there’s no reason why we shouldn’t.
How have public perceptions of and attitudes to the idea of AI and super-intelligence changed during the time you have been talking about these issues?
When I first read Kurzweil in 1999 and started talking to friends and colleagues about the promise and peril of AGI, most of them thought I was batshit crazy. Now they get it. The difference is Nick Bostrom’s book, Superintelligence, published a little over a year ago. That got Stephen Hawking, Elon Musk and Bill Gates talking about it, and now it’s all over the mainstream media. Hooray.
There are currently some opinions from high profile people that AI must be very tightly controlled as it could be potentially very dangerous for the world as a whole. What is your take on this?
They’re right, although we don’t know what kind of control is feasible or desirable. The main approaches to FAI discussed today are controlling the entity directly (making it an AI in a box, or an Oracle AI, for instance) and controlling it indirectly by setting its motivations. FAI is a really, really hard problem, but hey, we’re an ingenious species, and we’ve got time, probably a few decades.
It’s also important that there is no backlash against AI. It’s a science which has already improved our lives enormously, and will do so more and more each decade. And for good or bad, it’s a genie that can’t be put back in the lamp.
Has AI been progressing faster or slower than you would have anticipated at the turn of the millennium? What if anything, has really surprised you about AI and technological progress in general?
Humans are really good at habituation, so we take recent progress for granted. But what I treasure most is the fact that the world’s knowledge is at my fingertips. When I started work it wasn’t, and I wasted so much time finding out things that I now discover by pressing a button. Or I couldn’t find them out at all. You have to stop and think about how magical that is.
Which approach do you think will eventually win out in the creation of super-intelligence; improvements in narrow AIs, or whole brain simulations/emulations or a combination?
My hunch is that deep learning, or whatever comes next, will be the source of the first AGI. Consider what DeepMind achieved with that Atari game player, and what Google has done with self-driving cars and the systems that can discover the existence of a category called cats. It’s stunning.
That said, the brute force approach of modelling cortical columns is moving along quickly too, so it’s impossible to say.
What are your current activities in the area of futurology?
I’ve nearly completed the first draft of a non-fiction introduction to AI. It covers AI, AGI, ASI (superintelligence) and FAI. I’m hoping to publish that fairly soon.
I’ve also written the first draft of the sequel to Pandora’s Brain, in which Matt has a load more adventures. It needs a lot of work, but I hope to get it out this year as well.
Busy busy busy!