Enough thoughtful AI researchers (including Yoshua Bengio and Yann LeCun) have criticized the hype about evil killer robots or "superintelligence" that I hope we can finally lay that argument to rest. This article summarizes why I don't currently spend my time working on preventing AI from turning evil. http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/
+Eric Laukien, I am a little concerned about whether you can accept my opinion regarding the heuristic nature of deep learning. (Since I have been following neural networks since their inception in the '60s, I am probably as up to date on the subject as anyone, unless I have missed some of your dazzling contributions.) I am still waiting for a rational explanation of how it will result in new software for controlling your killer robotic systems. Appealing to cognitive brain simulation because of imagined similarities is an optimistic wish to shorten the process of brain evolution and sensory integration. Mar 10, 2015
+Pedro Marcal I never said that I believed in the killer robots. You are putting words in my mouth. It is so clear that GOFAI is behind deep learning; I do not understand how you can be so certain that it is the wrong approach. A perfect reinforcement learning system is AGI, and deep learning can be combined very well with reinforcement learning, as DeepMind has shown with their Atari experiments. Also, please look at deep learning architectures such as Hierarchical Temporal Memory, which attempts to mimic biology closely. Instead of me giving reasons why deep learning can lead to AGI, please (since you are attacking it) give reasons why it cannot, rather than these strange uninformed assumptions. Mar 10, 2015
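[The comment above refers to combining deep learning with reinforcement learning in the style of DeepMind's Atari work. A minimal, dependency-free sketch of that idea follows: Q-learning where the value function is a small neural network trained by gradient descent on the temporal-difference error. The chain environment, network size, and hyperparameters here are all illustrative assumptions, not DeepMind's actual setup, which used convolutional networks, experience replay, and a target network.]

```python
import random
import math

random.seed(0)

N_STATES, N_ACTIONS, N_HIDDEN = 5, 2, 8  # toy chain MDP, stand-in for Atari

# Tiny one-hidden-layer network: one-hot state -> tanh hidden -> Q-values.
W1 = [[random.uniform(-0.1, 0.1) for _ in range(N_STATES)] for _ in range(N_HIDDEN)]
b1 = [0.0] * N_HIDDEN
W2 = [[random.uniform(-0.1, 0.1) for _ in range(N_HIDDEN)] for _ in range(N_ACTIONS)]
b2 = [0.0] * N_ACTIONS

def q_values(s):
    """Forward pass: return (Q-values for state s, hidden activations)."""
    h = [math.tanh(W1[j][s] + b1[j]) for j in range(N_HIDDEN)]
    q = [sum(W2[a][j] * h[j] for j in range(N_HIDDEN)) + b2[a] for a in range(N_ACTIONS)]
    return q, h

def step(s, a):
    """Chain environment: action 1 moves right, 0 left; reward 1 on reaching the end."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

for episode in range(500):
    s, done = 0, False
    while not done:
        q, h = q_values(s)
        # Epsilon-greedy action selection (exploration vs. exploitation).
        a = random.randrange(N_ACTIONS) if random.random() < EPSILON else q.index(max(q))
        s2, r, done = step(s, a)
        # Bootstrapped TD target, as in Q-learning.
        target = r if done else r + GAMMA * max(q_values(s2)[0])
        # Gradient of 0.5 * (Q(s,a) - target)^2 w.r.t. the network weights.
        dq = q[a] - target
        dpre = [dq * W2[a][j] * (1 - h[j] ** 2) for j in range(N_HIDDEN)]
        for j in range(N_HIDDEN):
            W2[a][j] -= ALPHA * dq * h[j]
            W1[j][s] -= ALPHA * dpre[j]  # one-hot input: only column s gets gradient
            b1[j] -= ALPHA * dpre[j]
        b2[a] -= ALPHA * dq
        s = s2

# Greedy action in each non-terminal state after training.
print([max(range(N_ACTIONS), key=lambda a: q_values(s)[0][a]) for s in range(N_STATES - 1)])
```

The only difference from tabular Q-learning is that the lookup table is replaced by a learned function approximator, which is the same structural move that lets deep networks scale the idea to raw pixels.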
+Eric Laukien, the whole discussion started around killer robots, which boils down to controlling a complex robotic system with a robotic operating system (see for example the work of Rodney Brooks, the pioneer of modern robotics). Dr. Ng stated that he was not worried about killer robots; I tried to explain that deep learning was not the right approach because it was heuristic by design. I think Dr. Dietterich, in this discussion, pointed this out in a gentler way by trying to explain the problem that had to be solved. In the final analysis, we have to develop robust software that has the dual purpose of implementation and protection from infection. Deep learning in my opinion has its own niche, such as image processing. Can it be used for automatic programming? Not in the traditional sense, but it may be possible by some other means, AGI? Mar 11, 2015
One Congressman’s Crusade to Save the World From Killer Robots
https://plus.google.com/+PaulMahofski/posts/28zifN1Lngr Mar 20, 2015
Have you read this article series: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html, http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html? The best arguments for taking the prospect of superintelligence, and the risk thereof, seriously are not presented in typical newspaper articles and shallow debates about this topic. But the arguments of people like Nick Bostrom, who wrote the book Superintelligence (which addresses all the claims and arguments in the article you link to), are worth taking seriously. Like the field of AI itself, the field of AI friendliness is a cumulative one, where advances build upon previous ones. We don't know how long it will take, and there are central theoretical issues that can be worked on now. I'm not suggesting that you yourself should rush to progress the field of friendly AI theory, but dismissing the need for such work will make things harder for the people and organizations trying to progress this field, like the Machine Intelligence Research Institute. May 28, 2015
I strongly suggest you read MIRI's technical research agenda (https://intelligence.org/files/TechnicalAgenda.pdf). It lists several currently tractable problems in AI that can be researched to decrease the risk of hostile superintelligence. Jul 10, 2015