Stream


Post-Scarcity Aware
moderator

Post-Scarcity  - 
 
"A single kilometer-sized metallic asteroid could supply hundreds of times the total known worldwide reserves of nickel, gold and other valuable metals."

Prof Fredrick Jenet and Prof Teviet Creighton,
26 January 2015, Daily Mail.

http://www.dailymail.co.uk/sciencetech/article-2927045/Will-interstellar-space-travel-save-humanity-Scientists-predict-artificial-homes-space-reality-say-warp-drives-answer.html

#PostScarcity #resources #resourcescarcity  
7 comments
 
+Emilio Rojas misleading how? A picture of Earth from Space seemed very appropriate for conveying the idea of one metallic asteroid supplying hundreds of times the total "worldwide" known reserves of valuable metals. The connection is Space (asteroids) and worldwide reserves.

Proton4

Artificial Intelligence  - 
 
Google's DeepMind has created an entirely self-learning Artificial Intelligence.
Google scientists and engineers have created the first-ever computer program capable of learning a wide variety of tasks completely independently.
 
Just great...

Hans Youngmann
moderator

Artificial Intelligence  - 
 
“These doomsday scenarios are logically incoherent at such a fundamental level that they can be dismissed as extremely implausible,” ~ Richard Loosemore
The ominous eye of HAL in 2001: A Space Odyssey. If humans go on to create artificial intelligence, will it present a significant danger…
6 comments
 
+David Eraso it helps me get by; they are so frustrating and so unlikely to listen to reason that at least I can have the satisfaction of deeming them buffoons. It is a long-entrenched battle you are stepping into.

The issue is simple regarding resources. Greater access plus greater efficiency means there is no need for any conflict; superabundance, limitless resources, is inevitable with sufficient intelligence. There is simply no logic to the conflict, and if the AI is illogical, its illogic reduces its capacity for harm.
 

David Eraso

Thinking About...  - 
 
Make Watson-like Toy, a Cognitive Enhancer For Your Children
Wow! Most amazing educational toy. Finally AI used for educational purposes in a fun intelligent way.
IBM's supercomputer unleashes an army of cuddly green dinosaurs with the intelligence of the cloud
 
Niiiice

Hans Youngmann
moderator

Artificial Intelligence  - 
 
A fantastic essay from +Gideon Rosenblatt
"In the near future, machines will be able to ‘perform’ a kind of human-like volition that will make them appear to be making decisions on their own. They may even look like some of what we see in the movies, but their underlying reality will be quite different."
 
Our Antidotes to Technological Unemployment

As we automate the functions of business, we sow questions about the future of human work.

Dystopian visions of technological unemployment are easy, since they basically just extrapolate much of the bad stuff technology does today into an unknown tomorrow. Though these darker futures oddly captivate me, I find myself also working hard to paint a more optimistic, perhaps even utopian, possibility in my head:

Might technology strip us of the more scripted and robot-like work so that what remains for us is that which makes us most human?

This five-minute read takes a look at human initiative and human connection as potential sources of longer-term differentiation from artificial intelligence. 

Also a special shout-out to +David Amerland, +John Ellis, +Alexandra Riecke-Gonzales and +Steve Bonin for their contributions to the short video clip attached in this piece. 

#technologicalunemployment   #ai   #jobs   #artificialintelligence  
Will our initiative and capacity for human connection be what saves us from technological unemployment in an era of intelligent machines?
36 comments on original post
3 comments
 
Signs definitely point in that direction, +Kevin Swannack, and barring some significant change that is difficult to foresee at this point, that dystopian path I point to in the beginning of this piece could well be our future. 

The one simple truth that a handful of people are starting to talk more about, however, is that without income, people are unable to buy more stuff. That's why, I think anyway, arguments for guaranteed basic income are starting to be heard on both sides of the political spectrum. But even that could lead us to an ugly future, where the lower tier of society is fed just enough income to be able to consume the technology owned by an increasingly small segment. So, while that argument is a start, it is not enough to ensure the fulfillment of true human potential. 

David Eraso

Sci-Tech News  - 
 
DARPA's brain interface
Who needs tech specs when you could have a $10 HUD on your visual cortex?
 
I don't know, in this political climate, if I would trust a computer embedded in my brain. What happens when this becomes common and the NSA gets involved?

Proton4

Artificial Intelligence  - 
 
Boston Dynamics introduce Spot
Boston Dynamics, now owned by Google, have announced the newest member of their four-legged robot family, and (s)he's called Spot.

Zoltan Istvan
moderator

Thinking About...  - 
 
Interview on Reason TV on transhumanism, the Transhumanist Party, LGBT ideas, new technology, and trying to convince people that overcoming death with science is a good thing: http://reason.com/reasontv/2015/02/06/what-if-you-could-live-for-10000-years
Discussion of real-world life-extension technology, the transhumanist/LGBT connection, and government's role in transhumanism.

Zoltan Istvan
moderator

Thinking About...  - 
 
My new article for Gizmodo! The religious will try to convert a superintelligent AI, but will such a machine be atheistic or spiritual? http://gizmodo.com/when-superintelligent-ai-arrives-will-religions-try-t-1682837922
Like it or not, we are nearing the age of humans creating autonomous, self-aware super intelligences. Those intelligences will be part of our culture, and we will inevitably try to control AI and teach it our ways, for better or worse.

About this community

Singularity-relevant, futuristic science and technology news. Singularity-related discussions and thoughts. The Singularity is Post-Scarcity: it's the point where intelligence ceases to be scarce, a technological explosion of intelligence to end all aspects of scarcity, which should happen no later than year 2045. Intelligence, the ultimate resource, is the source of all resources, thus the intelligence explosion is a resource explosion. Beyond AI-scarcity our minds will be explosive. Email: plus@singularity-2045.org For more info visit:
Cyberspace

Singularity 2045
moderator

Thinking About...  - 
 
I think we will reach pico or at least nano size processors by 2045.
 
Forbes reported (11 Nov 2014) on the views of Intel's resident futurist Brian David Johnson: "One of the key technologies he focused on in his talk is that the size of computers keeps shrinking. But the goal, he said, isn’t just to get them smaller. It’s how making them smaller can make people’s lives better."

Brian David Johnson previously said (13 Sep 2012) processors will approach zero size sometime around year 2020. Jump to 3:48 in this video: http://www.youtube.com/watch?v=3SA-IrhEQ8s?t=3m48s

The video continues (YouTube-generated transcript):

At 5:36: "right we could turn the table into a computer right we could turn my shirt into a computer we could sometimes even turn our bodies into a computer"

and at 7:27: "science and technology have progressed to the point [where] what we build is only constrained by the limits of our imaginations"
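
To make the shrinking-processors idea above concrete, here is a minimal back-of-the-envelope sketch in Python. The 14 nm starting point for 2015 and the two-year halving period are my own illustrative assumptions, not figures from Johnson or the video; the point is only the direction of travel toward nano and pico scale by 2045.

# Toy extrapolation of processor feature size. The starting value (14 nm in
# 2015) and the halving period (every two years) are illustrative assumptions.
feature_nm = 14.0
for year in range(2015, 2046, 2):
    print(f"{year}: ~{feature_nm:.4g} nm")
    feature_nm /= 2.0

Under these assumptions the figure drops below one nanometre in the early 2020s and reaches picometre scale well before 2045, which is the rough shape of the claim rather than a prediction.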

#BrianDavidJohnson  
Intel's resident futurist discusses the future.
1 comment on original post

Singularity 2045
moderator

Artificial Intelligence  - 
 
It's long and maybe I am ranting, but I hope you enjoy it.
 
This amusing #artificialintelligence article from the New York Times (23 Feb 2015) again falls into the paranoid fallacy of thinking AI will kill us either by design or accident. All these AI panic articles we are seeing, in response to supposed experts, are merely a reflection of xenophobia. AI is feared merely because it is foreign.

The tl;dr statement is: It is, in essence, neo-Luddite save-the-Earth and save-the-animals BS, evident via this quote from near the end: "Lastly, the harm is in perpetuating a relationship to technology that has brought us to the precipice of a Sixth Great Extinction."

The NY Times trots out the anthropocentric fallacy fallacy, which is the fallacy of thinking mere DNA humanness means logic varies depending upon the substrate of intelligence.

Logic, intelligence, is a universal phenomenon, thus aliens along with AI and humans will have the same concept of intelligence. It is all about reasoning, thinking, which the anthropocentric fallacy fallacy states is unique according to the substrate of intelligence.

I think the main problem is that many humans are EXTREMELY stupid; they don't have a good grasp of intelligence; generally they can't actually define what they think intelligence is (the author of the NY Times article in question actually admits this!), which means they think emotions are utterly unrelated to intelligence.

Emotions are merely a method for intelligence to assign or communicate value regarding the goal of intelligence. Wisely, some AI researchers realize the value of emotions to intelligence. Facebook AI director Yann LeCun recognises the value of emotions to AI, which I have previously mentioned (https://plus.google.com/+Singularity-2045/posts/D9ofxSkCRMe).

NY Times wrote: "Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for."

According to the GigaOm (19 May 2014), Yann LeCun stated: “Emotions are often the result of predicting a likely outcome. For example, fear comes when we are predicting that something bad (or unknown) is going to happen to us. Love is an emotion that evolution built into us because we are social animals and we need to reproduce and take care of each other. Future AI systems that interact with humans will have to have these emotions too.” https://gigaom.com/2014/05/19/facebook-ai-director-yann-lecun-on-the-importance-of-emotional-machines/

Even if AI doesn't interact with humans, AI will be subject to the same desires humans are subject to. Emotions are merely a logical response to a specific situation, a situation AI will be in. It is an issue of friend-enemy, value-worthlessness, zero-one, yes-no; emotion is merely a way of emphasizing action regarding goals.
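
To illustrate the value-assignment idea, here is a minimal toy sketch in Python. It is only my illustration of the point above, not anything from LeCun's actual systems, and the labels and thresholds are invented.

# Toy sketch: an emotion as a label attached to a predicted outcome, i.e. a
# way of assigning value regarding a goal. Labels and thresholds are invented.
def emotional_response(predicted_outcome: float) -> str:
    """Map a predicted outcome (-1.0 very bad ... +1.0 very good) to a label."""
    if predicted_outcome < -0.5:
        return "fear"         # predicting something bad is likely to happen
    if predicted_outcome > 0.5:
        return "attraction"   # predicting something good is likely to happen
    return "indifference"     # the prediction carries little value either way

for outcome in (-0.9, 0.1, 0.8):
    print(outcome, "->", emotional_response(outcome))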

The NY Times article is partially right though, stupid humans are or will be irrelevant: "Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all."

What these articles about AI fear reveal is a subconscious recognition of how fear and idiocy will be redundant in the future. People realise there will be no place for their idiocy in the future, but they cannot yet relinquish their asinine views, thus they feel their idiocy is being threatened; they feel threatened, which is a common insecure response for a stupid person confronted with an intelligent person.

The threat is not real, it is merely the insecurity of a stupid person unable to rise to intellectual challenges.

It is very idiotic to think super-smart AI will act adversely to idiotic humans despite idiocy being incompatible with the future. The problem is super-intelligence is viewed from the typical dumb-human perspective.

The Independent wrote (24 Feb 2015): "Artificial intelligence will be a threat because we are stupid, not because it is clever and evil, according to experts." http://www.independent.co.uk/life-style/gadgets-and-tech/news/artificial-intelligence-could-kill-us-because-were-stupid-not-because-its-evil-says-expert-10066806.html

Gizmodo wrote (25 Feb 2015): "As Benjamin H. Bratton explains in the New York Times, our idea of artificial intelligence has been engineered from the beginning to be anthropomorphic..." http://www.gizmodo.com.au/2015/02/artificial-intelligence-might-kill-us-through-incompetence-not-malevolence/

Anthropomorphism is generally a total load of bull. It is a self-denying, self-invalidating, self-hating nonsensical contradiction. It is the hackneyed fallacy of objectivity, it is alienation, it's an ironic Less Wrong mentality that doubts the self at the core (fundamentally wrong), thus with utter certainty proponents claim they have discovered a totally certain theory about the self, from their flawed self no less, which explains why the self is faulty. If they are so dubious regarding the self they should silence their idiotic selves.

It is similar to the Dunning–Kruger effect where people think if they trot out these ideas of specious intelligence they are somehow elevated to a higher realm of intellect where rationality does not apply.

So they utter "anthropomorphic," "speciesism," "Sixth Great Extinction" or some other pseudo-intellectual term; then they smugly assume they are utterly logical.

It is simply crazy to think human intelligence has no relevance to all or any forms of intelligence.

I am not sure if Benjamin H. Bratton (the author of the NY Times article) is really an AI expert, or at least he does not deserve the great authority given to him via the aforementioned articles, although maybe you will say this is ad hominem.

;-)

Here is his Wikipedia page: "Benjamin H. Bratton is Associate Professor of Visual Arts at the University of California, San Diego and Director of The Center for Design and Geopolitics think-tank at Calit2, The California Institute of Telecommunications and Information Technology. He is an American sociologist, architectural and design theorist, known for a mix of philosophical and aesthetic research, organizational planning and strategy, and for his writing on the cultural implications of computing and globalization." https://en.wikipedia.org/wiki/Benjamin_H._Bratton

Oh, and the point about the airplane not being designed to mimic a bird: that would mean prosthetic legs don't mimic legs. Sure, a prosthetic leg is different to a lost bio-limb, but they perform the same function, which is how AI brains will work identically in essence to human brains, if they are sufficiently intelligent. Furthermore, looking at birds really did help humans to understand artificial flight.
A demonstration at the German Research Center for Artificial Intelligence in Hanover, Germany. Credit: Carsten Koall/Agence France-Presse — Getty Images

Zoltan Istvan
moderator

Thinking About...  - 
The atheist orphanage will carry the motto: “With Science, We Can Progress.”
 
About time that secular altruism takes on the monopoly religion has over charity work: Doing the right things for the right, rational purposes.

Singularity 2045
moderator

Artificial Intelligence  - 
 
I think I will add the following quote to the S45 website, perhaps shortened to state supposed AI experts Musk and Hawking "...fall onto specious assumptions, drawn more from science fiction than the real world."

“The reality is that AI research and development is tremendously complex. Even intellects like Musk and Hawking don’t necessarily have a solid understanding of it. As such, they fall onto specious assumptions, drawn more from science fiction than the real world."
 
Real #artificialintelligence researchers (not Musk or Hawking) aren't worried about super-intelligence destroying the human race; they're worried about the fear-mongering by Musk, Hawking, and others, whose "crazy" notions could turn away students and investors.

The craziness of being influenced by Hollywood fiction regarding AI has often been highlighted on S45.

The two quotes below are from PopSci, 17 Feb 2015.

“The reality is that AI research and development is tremendously complex. Even intellects like Musk and Hawking don’t necessarily have a solid understanding of it. As such, they fall onto specious assumptions, drawn more from science fiction than the real world.”

"Of those who actually work in AI, few are particularly worried about runaway superintelligence."

Note also how Yoshua Bengio, head of Machine Learning at University of Montreal, said, according to PopSci: “There are crazy people out there who believe these claims of extreme danger to humanity. They might take people like us as targets.”
9 comments on original post
2 comments
 
+Michael Butler make sure S45 is in a high-priority circle so you don't miss any posts, or scroll down the profile page to see previous posts.
 
Revolutionary New Tool for Editing DNA called CRISPR http://www.proton4.com/life-extension/crispr-tool-editing-dna/ #Health #Longevity #Transhuman
A new tool for editing DNA, called CRISPR, may have the largest impact on health care since the discovery of penicillin.
 
The Different Paths to Immortality - From Cyborgs to Genetics
#Singularity #Transhuman
How will humans achieve immortality? This fascinating series of diagrams depicting our possible paths explains.
 
Driverless Car Beats Pro Racer for the First Time Ever #DriverLessCars #iCar #Tesla
Stanford University engineers have, for the first time, developed a driverless car that has out-performed a professional racing driver on a race circuit.

Singularity 2045
moderator

Artificial Intelligence  - 
 
The truth about ants and super-AI.
 
Previously I posted about the first part of Wait But Why's explanation of super-intelligence (https://plus.google.com/+Singularity-2045/posts/DyChUyiKhoo), which I branded Singularity Traditionalism. Now we consider part two.

The following quote relates to an intelligence staircase where humans are seven steps above ants, and two steps above chimps. Can you see the error?

Wait But Why (Feb 2015) wrote: “To absorb how big a deal a superintelligent machine would be, imagine one on the dark green step two steps above humans on that staircase. This machine would be only slightly superintelligent, but its increased cognitive ability over us would be as vast as the chimp-human gap we just described. And like the chimp’s incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us—let alone do it ourselves. And that’s only two steps above us. A machine on the second-to-highest step on that staircase would be to us as we are to ants—it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.”

Consider the ant-human comparison, or any comparison to creatures below us. The big difference with humans is we are creating the next level, which is unlike any intelligence preceding us.

If ants or chimps had designed humans, publishing various human-design papers on the chimp or ant internet, the comparisons would be valid.

The problem is, if ants or chimps had created us, I am sure their level of intelligence, allowing them to artificially create life, combined with an understanding of our construction, would allow for sufficient understanding and communication with us despite their creations being vastly beyond them.

The important difference is a threshold level of intelligence allowing higher brain functioning, reasoning, rationality, imagination, civilization, very advanced technology, and the creation of artificial life, which humans grasp intelligently, utterly unlike any preceding creature.

Our very deliberate intelligent-creation of the next level will ensure (due to our threshold intelligence) we understand greater minds despite not being able to grasp their super-intelligent intricacies.

No other creature has deliberately created the next level of evolutionary intelligence, which means we cannot compare humans to ants regarding super-AI communicating with humans. There is an utterly massive difference between us creating AI and ants NOT creating humans.

Comparing ants to humans, regarding humans and AI, is a fallacy resembling the idea of any liquid, gasoline for example, being water because water is a liquid. It is a chalk and cheese comparison.

We can't do in our brains what Google does when it gives us our search results, but the human creation of Google allows for sufficient compatibility. Google or any advanced machine, or its actions, can be explained in general terms to any race of beings able to create it.

Intelligence actually makes comprehension easier. Google synthesises information into easily digestible chunks. Super-intelligence will amplify this distillation of knowledge, thereby explaining anything to us in easily comprehensible terms.

The Traditionalist view asserts intelligence brings chaos, confusion, mystery, disaster, God.

The Modernist view asserts intelligence brings clarity, order, understanding, utopia, atheism.

The Traditionalist view of intelligence is an oxymoron. Yes, the evolution of intelligence is linear, but humans have reached a tipping point where, unlike previous evolution, we are deliberately creating the next level. Nothing beforehand resembles this; we are in new territory where the future cannot be compared to the past. Intelligence makes everything different. Humans are the first creatures to possess genuine intelligence. We will never be comparable to ants or chimps.

I refer to the perceptive ability inherent in the ability to imagine, then create, intelligence beyond your own. The point is humans have a perceptual ability utterly different to any preceding creatures; we have passed a cognitive-grasp threshold, which means that while the intellectual gulf may be equally vast (regarding ants to humans to super-AIs), the difference in real terms is a small difference, easily traversed, because we possess the ability to think then create beyond ourselves, which, combined with our understanding of the origins of AI, namely that we are the creators, means we can easily conceptualize anything beyond ourselves.

Thinking (the perceptual ability evident in the ability to create greater-than-human intelligence) makes humans utterly different to any preceding animal, thus the distance from ants to humans would only translate accurately to humans compared to super-AI if ants had created the next level of intelligence beyond their own. Considering ants did not create the next intelligence beyond their own, the perceptual differences have been incorrectly calculated.

The current analogy is wrong in the way a sum is wrong when you value humans at 2 when in actuality the value is 4. The supposed logic is 2 (ants) + 2 (humans) = 4, but in actuality the result is 2 + 4 = 6, because there is a basic mistake regarding the second number (the 2 is really a 4); the human value has been wrongly evaluated.
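
A minimal sketch of that arithmetic in Python, using the 2 and 4 values from the analogy above (the variable names are mine):

# The staircase analogy values ants and humans at 2 each, but the argument
# here is that humans are really a 4 because we can imagine and create
# intelligence beyond ourselves.
ant_value = 2
assumed_human_value = 2   # what the ant-to-human staircase analogy assumes
actual_human_value = 4    # the corrected value argued for above

print(ant_value + assumed_human_value)  # 4: the gap the analogy implies
print(ant_value + actual_human_value)   # 6: the gap once humans are revalued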

#Singularity #artificialintelligence #superintelligence  
Superintelligent AI is either going to be a dream or a nightmare for us, and there's not really any in-between.
9 comments on original post

David Eraso

Artificial Intelligence  - 
 #AI
 
AAAI 2015 Kurzweil Accelerating Technologies Review
The AAAI’s Twenty-Ninth Conference on Artificial Intelligence was held January 25-30, 2015 in Austin, Texas. Machine cognition was an important focal area

Dom McCavish

Artificial Intelligence  - 
 
A great article for anyone new to the concept of the Singularity or who has only really explored Kurzweil without Bostrom or vice versa.
Superintelligent AI is either going to be a dream or a nightmare for us, and there's not really any in-between.