This amusing #artificialintelligence
article from the New York Times (23 Feb 2015) again falls into the paranoid fallacy of thinking AI will kill us, either by design or by accident. All these AI panic articles we are seeing, in response to supposed experts, are merely a reflection of xenophobia. AI is feared merely because it is foreign. The tl;dr statement is:
It is, in essence, neo-Luddite save-the-Earth and save-the-animals BS, evident via this quote from near the end: "Lastly, the harm is in perpetuating a relationship to technology that has brought us to the precipice of a Sixth Great Extinction."
The NY Times trots out the anthropocentric fallacy fallacy, which is the fallacy of thinking mere DNA humanness means logic varies depending on the substrate of intelligence.
Logic, intelligence, is a universal phenomenon, thus aliens, along with AI and humans, will have the same concept of intelligence. It is all about reasoning, thinking, which the anthropocentric fallacy fallacy states is unique according to the substrate of intelligence.
I think the main problem is that many humans are EXTREMELY stupid; they don't have a good grasp of intelligence. Generally they can't actually define what they think intelligence is (the author of the NY Times article in question actually admits this!), which means they think emotions are utterly unrelated to intelligence.
Emotions are merely a method for intelligence to assign or communicate value regarding the goal of intelligence. Wisely, some AI researchers realize the value of emotions to intelligence. Facebook AI director Yann LeCun recognises the value of emotions to AI, which I have previously mentioned (https://plus.google.com/+Singularity-2045/posts/D9ofxSkCRMe).
NY Times wrote: "Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for."
According to the GigaOm (19 May 2014), Yann LeCun stated: “Emotions are often the result of predicting a likely outcome. For example, fear comes when we are predicting that something bad (or unknown) is going to happen to us. Love is an emotion that evolution built into us because we are social animals and we need to reproduce and take care of each other. Future AI systems that interact with humans will have to have these emotions too.” https://gigaom.com/2014/05/19/facebook-ai-director-yann-lecun-on-the-importance-of-emotional-machines/
Even if AI doesn't interact with humans, AI will be subject to the same desires humans are subject to. Emotions are merely a logical response to a specific situation, a situation AI will be in. It is an issue of friend-enemy, value-worthlessness, zero-one, yes-no; emotion is merely a way of emphasizing action regarding goals.
The NY Times article is partially right though: stupid humans are or will be irrelevant. "Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all."
What these articles about AI fear reveal is a subconscious recognition of how fear and idiocy will be redundant in the future. People realise there will be no place for their idiocy in the future, but they cannot yet relinquish their asinine views; thus they feel their idiocy is being threatened, they feel threatened, which is a common insecure response for a stupid person confronted with an intelligent person.
The threat is not real; it is merely the insecurity of a stupid person unable to rise to intellectual challenges.
It is very idiotic to think super-smart AI will act adversely towards idiotic humans merely because idiocy is incompatible with the future. The problem is that super-intelligence is viewed from the typical dumb-human perspective.
The Independent wrote (24 Feb 2015): "Artificial intelligence will be a threat because we are stupid, not because it is clever and evil, according to experts." http://www.independent.co.uk/life-style/gadgets-and-tech/news/artificial-intelligence-could-kill-us-because-were-stupid-not-because-its-evil-says-expert-10066806.html
Gizmodo wrote (25 Feb 2015): "As Benjamin H. Bratton explains in the New York Times, our idea of artificial intelligence has been engineered from the beginning to be anthropomorphic..." http://www.gizmodo.com.au/2015/02/artificial-intelligence-might-kill-us-through-incompetence-not-malevolence/
Anthropomorphism is generally a total load of bull. It is a self-denying, self-invalidating, self-hating, nonsensical contradiction. It is the hackneyed fallacy of objectivity; it is alienation; it's an ironic Less Wrong mentality that doubts the self at the core (fundamentally wrong), whereupon with utter certainty proponents claim they have discovered a totally certain theory about the self, from their flawed self no less, which explains why the self is faulty. If they are so dubious regarding the self, they should silence their idiotic selves.
It is similar to the Dunning–Kruger effect where people think if they trot out these ideas of specious intelligence they are somehow elevated to a higher realm of intellect where rationality does not apply.
So they utter "anthropomorphic," "speciesism," "Sixth Great Extinction" or some other pseudo-intellectual term; then they smugly assume they are utterly logical.
It is simply crazy to think human intelligence has no relevance to any form of intelligence.
I am not sure Benjamin H. Bratton (the author of the NY Times article) is really an AI expert, or at least he does not deserve the great authority given to him via the aforementioned articles, although maybe you will say this is ad hominem.
Here is his Wikipedia page: "Benjamin H. Bratton is Associate Professor of Visual Arts at the University of California, San Diego and Director of The Center for Design and Geopolitics think-tank at Calit2, The California Institute of Telecommunications and Information Technology. He is an American sociologist, architectural and design theorist, known for a mix of philosophical and aesthetic research, organizational planning and strategy, and for his writing on the cultural implications of computing and globalization." https://en.wikipedia.org/wiki/Benjamin_H._Bratton
Oh, and regarding the point about the airplane not being designed to mimic a bird: that would mean prosthetic legs don't mimic legs. Sure, a prosthetic leg is different to a lost bio-limb, but they perform the same function, which is how AI brains will work identically in essence to human brains, if they are sufficiently intelligent. Furthermore, studying birds really did help humans understand artificial flight.