According to the eminent theoretical physicist Stephen Hawking, the biggest danger facing mankind is artificial intelligence. In the near future, possibly even within our lifetime, machines will surpass our intelligence. They will evolve at an exponential rate. What will become of the human race when we have been surpassed?
It sounds like science fiction, I know. In fact it is a common science fiction theme, explored in movies ranging from WarGames to The Terminator to The Matrix. Could an artificially intelligent machine arise and wipe out humanity?
It would be easy to dismiss these concerns, but they've been echoed by some pretty prestigious people: Elon Musk, Steve Wozniak, and even Bill Gates are concerned about superintelligent machines being developed in the near future.
How soon? Take a look at this video from Boston Dynamics. It might be sooner than you think.
And despite this, I'm not that concerned. Why not? Because Linus Torvalds isn't. I have a huge amount of respect for Stephen Hawking and the others. But when it comes to what's going on inside the guts of a supercomputer, you simply can't beat Linus Torvalds, the creator of the Linux kernel.
Yeah, Bill Gates is a marketing and managing genius who put Microsoft at the top of the computer heap. Steve Wozniak helped to create Apple and make it what it is today. But Linus? 476 of the 500 fastest computers in the world run Linux. That speaks volumes. He builds the software that the vast majority of supercomputers and robots use.
What does Linus think of the artificial intelligence apocalypse? "It's science fiction, and not very good Sci-Fi at that, in my opinion." He goes on to explain that the kind of artificial intelligence we will see will likely be targeted AI with little in common with human intelligence. Human-like AI machines are simply "not easy to productise" and not very reliable. They will likely be created and exist in small numbers in labs, but won't have much use in the real world.
I tend to agree with Linus on this one. But even if artificial intelligence does appear and it does outstrip us, here's my other question: why would it be logical for such a system to wipe us out? An AI system does not need to compete with humanity for resources, nor will it likely have human emotions or drives. A computer AI can exist in a virtual space as easily and comfortably as in our world (more so, even). So why would it care or bother to take over this world?
A rogue AI is more likely to take over a few terabytes of space on some hard drive and create its own virtual world, one that has little in common with ours, and do its own thing than to try to wipe us out. Who knows, maybe one already has.
In the meantime, in case you are suddenly nervous about the present generation of robots and computers, take a look at this video: