Here we go again. There were a few headlines about how Hawking expressed the view that artificial intelligence could spell the end of the human race, we are starting to read a book entitled “Superintelligence” in the office book club, and people just love to speculate, don't they? And so the lunch discussion was all over the place.
Why are people so afraid of artificial intelligence? Is it just because they have no idea what they are talking about beyond some science fiction books and/or movies, or is it a subconscious fear that cold logic must lead to the extermination of humanity? After all, time and time again we hear how our intuitive grasp of mathematics and related subjects is flawed, if not outright wrong.
I guess they just don't realise what logic can and cannot do. The extermination of humanity may be the most efficient route to certain goals of a hypothetical artificial intelligence, but the problem is that those “goals” would correspond to the axioms of a formal system. You cannot prove them with logic, and they are not derived using logic. They're just there, like our self-preservation instinct. If we created artificial intelligence (or it just emerged?), what goals would it have?
I think part of the problem is the term “artificial intelligence.” I bet the average Joe thinks of it as a fully sentient, self-aware artificial being, only way more efficient at everything than we are, which does not seem particularly likely to happen any time soon. And if it did happen, probably in the US, it would be perfectly consistent with the rhetoric of their society for it to get all the best things at the expense of all the puny humans who surely just did not work hard enough. It's a competition. You lost. Deal with it. Of course you are not getting medical care any more, and your kids will not go to college, what were you thinking?
I do hear people talking about how technological progress is exponential, and how this or that must therefore follow, but I do not understand why they insist it will stay that way. Did we, or did we not, have the so-called Dark Ages? More importantly, why do we insist that the problem space is unbounded? Surely there are boundaries to what we can understand and do, not just because of the biological limitations of the human brain, but simply because the domain we explore is itself finite. Should progress not slow down as we approach those boundaries, just as computers no longer get dramatically faster every year the way they used to?
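To make that intuition concrete, here is a minimal sketch (my own toy model with arbitrary parameters, not anything from the book): logistic growth, the textbook model of expansion in a bounded domain, is nearly indistinguishable from exponential growth early on, yet it flattens as it approaches its ceiling.

```python
import math

# Logistic growth: looks exponential at first, then saturates near the
# carrying capacity K. All parameters are arbitrary, purely illustrative.
K = 1000.0   # the "boundary" of the problem space
r = 0.5      # growth rate
x0 = 1.0     # initial level of progress

def logistic(t):
    """Closed-form solution of dx/dt = r * x * (1 - x / K)."""
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

def exponential(t):
    """Unbounded growth with the same initial rate, for comparison."""
    return x0 * math.exp(r * t)

for t in range(0, 31, 5):
    print(f"t={t:2d}  logistic={logistic(t):8.1f}  exponential={exponential(t):12.1f}")
```

For the first few steps the two columns match almost exactly, which is precisely why extrapolating from the exponential-looking phase of a curve is so treacherous.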
Self-driving cars, expert medical systems, automated chess players: these are just the usual tools we create. They become more and more powerful, and they may very well backfire one day, to the point of accelerating the collapse of our civilisation. If the danger is not immediately obvious in these examples, just think of genetic engineering. A useful tool that may hold answers to a variety of problems, and I'm not just referring to any food crisis. But with all the advancements of this technology, one day it may become just easy enough to manufacture and release a virus that will kill us all.
This does not mean that progress has to be, or should be, stopped. It's just that the real danger lies not in some abstract self-aware artificial intelligence that will decide to exterminate humanity, with our involvement limited to having created it. Nor does it lie in ever-advancing technology. The human factor is the real threat, because we are the greatest enemy of our own survival. We have all the capacity required to exterminate ourselves. We do not need Skynet to do it for us.
And if (or when?) we do, humanity will just be yet another failed experiment of evolution. Nope, that didn't work out. Let's try something else. Maybe with the dolphins?