Stream


Royal Justice

Breakthroughs  - 
 
 
Some people say, "No pain, no gain." But the truth is: no contrast, no gain. You get to decide how much pain you're willing to bring to the table by how you feel.

Extropia DaSilva
moderator

Thinking About...  - 
 
The third part of my essay series HOW JOBS DESTROYED WORK is now available on my blog. This instalment is concerned with 'overwork' (that is, labour above and beyond the amount needed to sustain one's lifestyle), which I argue only exists in hierarchical systems in which resources are controlled by a relative few.

https://extropiadasilva.wordpress.com/2016/06/11/how-jobs-destroyed-work-part-three-overwork/

Extropia DaSilva
moderator

Thinking About...  - 
 
Part two of my series 'How Jobs Destroyed Work' is now available on my blog. In this part, I provide a better definition of work than 'a job you are paid to do' and write about how false assumptions regarding what motivates people lead to jobs that are not engaging.

https://extropiadasilva.wordpress.com/2016/06/04/how-jobs-destroyed-work-part-two/

If you missed part one, you can find it by following this link:

https://extropiadasilva.wordpress.com/2016/05/25/how-jobs-destroy-work-part-one/

David Eraso

Thinking About...  - 
 
MAGNUM OPUS (SYMBOL CONCEPT)
True Alchemy = To solve intelligence

NIGREDO: The hardware
ALBEDO: The software
CITRINITAS: The interface/Robotics
RUBEDO: The technological singularity (True lapis philosophorum)

True lapis philosophorum, through science and technology, capable of:
True alchemist transmutation = To end scarcity
True alchemist immortality = To implement extended healthy and sustainable longevity 

Giant Supernova

Thinking About...  - 
 
Cyborg Buddha: IEET's James Hughes on Transhuman Enlightenment, Technological Unemployment, and Basic Income

via +Euvie Ivanova

James Hughes interview on his upcoming book Cyborg Buddha, transhuman enlightenment, moral enhancement, future of democracy, and universal basic income

Singularity 2045
moderator

Singularity  - 
 
Some good quotes near the end, from IBM's Bruno Michel, which I used to oppose the views in The New York Times article. Note also the IBM paper on 5D scaling.
 
LOL, the #Singularity and #artificialintelligence (of AGI level and explosively beyond) were dismissed by The New York Times (author John Markoff) on 7 April 2016.

For posterity this post records opposing and supporting Singularity views.

Singularity 2045 predicts The New York Times (John Markoff) will be proved very wrong. That prediction is not a lucky guess: the logic and the evidence supporting explosive AI seem beyond doubt, at least to those of rational minds. Time will tell.

Yes, many people were surprised by the AlphaGo win, which led some to wrongly conclude that AGI is only a few years away, circa 2020 perhaps.

The reason the AlphaGo evidence shows us AGI is possible, no later than 2045, lies in the minds behind the win. Note also that AlphaGo is not the entire picture of AI.

The real AlphaGo victory is a demonstration of how humans are making steady, evidential, step-by-step progress in solving the problem of intelligence replication, progress that is ahead of schedule.

Technology Review wrote (31 March 2016): "The success of DeepMind’s reinforcement learning has surprised many machine-learning researchers." https://www.technologyreview.com/s/601139/how-google-plans-to-solve-artificial-intelligence/

NY Times (John Markoff) says most artificial intelligence researchers still discount the idea of an intelligence explosion. I am unsure what evidence that assertion is based on because we see repeated evidence of growing awareness among researchers that AI will attain human level then explode.

The internet with mere weak computing shows, in 2016, an explosive prowess far beyond meagre origins. Why is it so hard, for some people, to envision vastly superior computing accelerating massively from the technology evident in 2016?

Oren Etzioni, CEO at Allen Institute for Artificial Intelligence, predicts an AI utopia, where work is obsolete and money is free via basic income (UBI). Oren Etzioni said (30 March 2016): “An AI utopia is a place where people have income guaranteed because their machines are working for them.”

Moshe Vardi, an Israeli computer scientist and Professor of Computer Science at Rice University, commented on the AI intelligence explosion in 2013 (https://www.cs.rice.edu/~vardi/papers/i4j13-vardi-041513.pdf). In 2016, Vardi suggested basic income would be needed to address the real threat of AI or robots being intelligent enough to do the jobs of humans.

The Independent (19 Feb 2016) wrote: "A leading artificial intelligence (AI) expert [Vardi] believes that societies may have to consider issuing a basic income to all citizens, in order to combat the threat to jobs posed by increased automation in the workplace." http://www.independent.co.uk/life-style/gadgets-and-tech/news/basic-income-artificial-intelligence-ai-robots-automation-moshe-vardi-a6884086.html

The not-too-distant expectation of AGI among experts is shown by an often-cited survey from a January 2011 conference, which gave a 50% likelihood of Human Level Machine Intelligence (HLMI) by 2050: "The median estimate of when there will be 50% chance of HLMI was 2050, with minimum estimate 2030, 1st quartile 2040, 3rd quartile 2080, and maximum (besides “Never”) 3050." http://www.fhi.ox.ac.uk/machine-intelligence-survey-2011.pdf

In a December 2012 survey consisting of 170 expert views, 50% thought HLMI would happen between 2040 and 2081 (http://www.nickbostrom.com/papers/survey.pdf).

Critics often utter fallacies about AI. They state that we don't completely know the biological mechanisms of the human mind, and thus we can't create an artificial mind.

The easy response is to highlight how superior methods of flight, far beyond avians, occurred via the human invention of aeroplanes before we understood bird DNA. The point is you don't need to be able to create a bird to create an aeroplane.

We can see what birds, or minds, do. We can simplify and emphasize nature to produce superior artificial versions. We don't need to understand the Sun to produce light bulbs.

Yes, early forecasts regarding AI were too far ahead of time, but we nevertheless see steady progress, which is accelerating, hence the surprise regarding AlphaGo.

LA Times, among others, wrote (12 March 2016): "It was a feat that experts had thought was still years away." http://www.latimes.com/world/asia/la-fg-korea-alphago-20160312-story.html

The Guardian: "AlphaGo’s win over Lee is significant because it marks the first time an artificial intelligence program has beaten a top-ranked Go professional, a victory experts had predicted was still years away."

Experts were once predicting ahead of time, but now they are behind the times. What evidence should we consider regarding the most accurate model for reality? The reality, the real progress, of the present should be the answer; it's a no-brainer. The errors of past predictions, or present unreal predictions, are irrelevant in the face of tangible progress ahead of predictions.

THE SINGULARITY IS NOT MOORE'S LAW

Yes, accelerating size-reduction of transistors is good for progressing to an intelligence explosion, but mere size-reduction is not essential for explosive computing power. Quantum computing will change matters, as will five-dimensional scaling: http://www.zurich.ibm.com/pdf/news/Towards_5D_Scaling.pdf

Kurzweil expects 3D stacking to provide the stepping stone past possible Moore's Law problems: http://www.kurzweilai.net/3d-chip-stacking-to-take-moores-law-past-2020

In email correspondence from 22 September 2012, Bruno Michel (IBM, Advanced Micro Integration) informed me:

“We believe that interlayer cooling will show up in 2018 (shortly after the exascale machines) and electrochemical chip power supply (full blown bionic packaging) will appear in 2022.”

“We were careful in our statements and our 2060 expectation is about 100 fold human intelligence. We are expecting the 10 liter PFLOP system in 15 years (2025) and the full equivalent human performance and density shortly after 2030.”

“We believe our 2030 system will not lead to an utopic state since then humans still can do challenging cognitive tasks more efficiently than computers. This will — according to our statements in the article [5D scaling] only change in 2050 — 100 years after the introduction of the ZUSE/ENIAC systems.”

“By the way physics will absolutely limit development of information technology after about 1'000'000 fold efficiency improvement compared to today. After we have reached our visionary goal there is probably a factor of 100 left before physics will stop the exponential growth.”
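
To put that quoted 1,000,000-fold limit into perspective, here is a rough back-of-the-envelope reading (my own arithmetic, not from Michel's email): a million-fold improvement is roughly 2^20, about twenty doublings, so the implied timescale depends entirely on whatever doubling period you assume. A minimal Python sketch, with purely illustrative doubling periods:

# Rough back-of-the-envelope arithmetic (illustrative only; the doubling
# periods below are assumptions, not figures from the email).
import math

fold_improvement = 1_000_000
doublings = math.log2(fold_improvement)  # about 19.9 doublings

for period_years in (1.5, 2.0, 3.0):  # assumed years per doubling
    print(f"{period_years} years per doubling -> ~{doublings * period_years:.0f} years")

# Prints roughly 30, 40, and 60 years respectively.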

See also: http://www.zurich.ibm.com/news/11/cebit.html (2011) and http://www.kurzweilai.net/ibm-unveils-concept-for-a-future-brain-inspired-3d-computer (2013) and https://www.technologyreview.com/s/601195/a-2-billion-chip-to-accelerate-artificial-intelligence/ (2016).

Here is an archive of the NY Times article in case they delete it, or cease to exist due to their flawed views (sadly the Wayback Machine could not archive it due to a NYT cookie problem): https://archive.is/4I8HC

#basicincome #NYTwrong #notevenwrong #posterity #posterity45 #AGI #machineintelligence #machinelearning #deeplearning #reinforcementlearning #newyorktimes #johnmarkoff #utopia  
Most artificial intelligence researchers still discount the idea of an “intelligence explosion” that will outstrip human capabilities.

Josh T Jordan

Singularity  - 
 
 
I got to interview +Caitlynn Belle, a fantastic game designer and co-creator with +Josh T Jordan of the new game Singularity, a transhuman analog dating sim that's currently on Kickstarter! She was really kind and answered a ton of weird questions I had for her about cocktails and BioWare game romances. Check it out, and check out her Kickstarter!
Darcy interviews game designer Caitlynn Belle about her new transhuman analog dating game, Singularity (currently on Kickstarter!), in addition to her inspirations, aspirations, and mad mixology skills.

About this community

Singularity-relevant, futuristic science and technology news. Singularity-related discussions and thoughts. If you are here merely to spam, promoting your site or blog, etc., your posts will be deleted and you will be banned. The Singularity is Post-Scarcity: it's the point where intelligence ceases to be scarce, a technological explosion of intelligence to end all aspects of scarcity, which should happen no later than the year 2045. Intelligence, the ultimate resource, is the source of all resources; thus the intelligence explosion is a resource explosion. Beyond AI-scarcity, our minds will be explosive. Email: plus@singularity-2045.org For more info visit:
Cyberspace


Euvie Ivanova

Sci-Tech News  - 
 
What can we learn from The DAO hack? What are the future and ethics of decentralized platforms like Ethereum?

#futuretech   #ethics   #thedao  

via +Future Thinkers Podcast 
The attack on The DAO, what it means for the future of Ethereum, the fork and other decentralized solutions, and codifying ethics

Per Englund

Artificial Intelligence  - 
 
I’m taking the next step in the smart world here at Fluxx, and focusing in on AI, smart chat bots and stupid virtual ass…

Innie Keye

Breakthroughs  - 
3 comments
 
It is not hard to imagine being able to "build" an enhanced body within a large 3D bio-printer, using your own cultured, genetically modified stem cells as well as various implants and synthetic super-organs. It's a beautiful vision, and with hard work, cooperation, and persistence, we can all contribute to making it the norm.
 
Check out the +Singularity University  Community, a place to learn about or share information about the Singularity.

https://plus.google.com/u/0/communities/108551447941253017970

Human Seeing

Artificial Intelligence  - 
 
Hey, I made a video talking about some of the positive benefits that Strong Artificial Intelligence could bring, and how it could change the world we live in and our everyday lives!

Singularity 2045
moderator

Artificial Intelligence  - 
 
Butterfly relationships with humans have ZERO relevance to human-AI relationships.
 
Fear of #artificialintelligence is extremely unintelligent. The fears are based upon Hollywood films (fiction) and logical fallacies.

Irrational AI fears are often based upon a menagerie of human-animal relations. Sparrows, owls, wolves, dogs, gorillas, butterflies, mosquitoes, ants, and spiders are some of the examples supposedly proving how AI will view humans. Humans will supposedly be pet dogs to AI, or we will be spiders crushed, or mosquitoes swatted, by AI.

The Wall Street Journal wrote (18 March 2016): "Some children still collect butterflies. It’s a hobby to encourage (particularly as an alternative to videogames and social network blather); you get outside and learn about nature. If supersmart robots treat us like butterflies, we will be lucky—but don’t count on it."

Fallacious logic can be understood by considering how, if you see someone sneezing, you could conclude they have flu. Reality can be very different, because pepper, dust, and sunlight can all cause people to sneeze.

AI paranoia resembles seeing someone sneezing then assuming the person has an extremely infectious form of Spanish Flu or Ebola, which will kill everyone.

This analogy regarding flu is actually a very generous understatement of the utter irrationality regarding AI paranoia. Analogically we can clearly determine the person does not have flu, which means we can see the sneezing has no relationship to a dangerously infectious situation.

AI doom-mongers make a chalk and cheese mistake. They are confused regarding chalk and cheese. They say chalk visually resembles cheese, therefore chalk must taste similar to cheese. Their "logic" will entail people eating chalk on crackers because, so they tell us, chalk is tasty and nutritious.

The fallacy in the animal comparisons, which supposedly show how AI will dominate humans, is that no animals created humans. We can see that no species below us engineered our minds. Humans are not the AIs of butterflies.

Dogs, gorillas, ants, or mosquitoes did not intelligently engineer human minds. If humans were the AIs of the less-intelligent animals beneath us, I am utterly sure their amazing feat of very deliberately engineering human intelligence would entail enormous respect from the AIs (humans) towards their creators (ants, butterflies etc).

To fully appreciate the absurdity of the AI fear, regarding animal comparisons, imagine butterflies as we know them being able to create human minds. It would be a VERY different type of butterfly, an exceedingly different relationship, if butterflies had engineered either the human race or our distant ancestors.

The threshold level of intelligence needed to create a mind greater than your own would entail easy communication between high and low intelligence. The problem with ant-human relationships is that ants have absolutely no idea of how our minds were constructed. Ants (or any other animal) played no part whatsoever in the engineering of human intelligence. Primitive animals have not passed that threshold of intelligence.

Natural evolution of intelligence is utterly dissimilar to humans artificially engineering higher intelligence. It is a chalk and cheese comparison to think naturally evolved intelligence has any relationship to humans intelligently engineering AI.

We are entering a wholly different stage of evolution.

Primitive animals unable to create an intelligent civilization could not be more different to our highly intelligent human civilization, where we are engineering minds with great skill.

AI killing humans is a delusional fantasy akin to the God delusion. AI is being turned into a new God, inevitably steeped in all the irrationality of God inflicting suffering for mysterious reasons. Instead of viewing AI through the distorting prism of illogical fantasy, we should apply rational thinking to the issue. Logic should determine the facts, but sadly endless AI experts and other commentators think our relationships with animals can accurately translate into AI relationships with humans.

God, similar to homicidal AI, is something you can't reason about with the believers because their belief is utterly unreasonable. AI is a giant invisible cosmic teapot. The AI paranoiacs tell us a giant invisible homicidal cosmic teapot could exist therefore we must apply the precautionary principle, which entails worshipping the invisible teapot, because what if God is real and we don't pray? Are you really willing to risk going to hell by not praying to the AI teapot?

If you don't really think about it, it can sound smart to consider sparrows being eaten by an owl. This point is regarding sparrows attempting to domesticate an owl (Bostrom's Unfinished Fable of the Sparrows). This fiction requires sparrows that can talk. Clearly we are considering a fiction (yes, it's merely something someone made up to supposedly provide logical proof of the AI risk). The sparrow-owl fiction is essentially no different to The Terminator.

Talking sparrows domesticating an owl is an utterly unreal situation; it has no relationship to real life, but in the manner of chalk resembling cheese we are subconsciously instructed to ignore the mismatch between fiction and reality.

I think talking sparrows, if they existed, would logically domesticate the owl with ease, similar to how humans domesticated wolves; but the major fallacy with this sparrow-owl theory is that it doesn't translate to humans creating AI. The owl in question was not intelligently engineered by sparrow minds.

How the owl (a cosmic teapot) may react to the sparrows is unrelated to how AI will react to humans, similar to how cheese is unrelated to chalk regarding taste. We are considering very different things, but AI paranoiacs perform a magical sleight of hand to prove, to the unwitting ones, how chalk tastes the same as cheese.

https://en.wikipedia.org/wiki/Russell%27s_teapot
Artificial intelligence is still in its infancy—and that should scare us
André Mugnier:
 
Another depressingly unimaginative one-sided dystopian article on AGI - that mixes human intelligence with largely misunderstood human emotions to conclude that our current emotionally-skewed "intelligence" is superior to the kind of unbridled intelligence we hope to create with our technology.

No mention of our quest for a level of intelligence we could incorporate (in our bodies) and get unimaginable benefits from... Depressing, but intelligence eventually will win, it's the direction of evolution and the meaning of life ;)

Innie Keye

Sci-Tech News  - 
 
 
Driverless Bus System Showcases Future of Public Transit - this May, not some distant future

As technology companies and automakers race to put a driverless car on the road, they might want to take a look at a small experiment being conducted in the Netherlands. WEpods, an abbreviation of Wageningen and Ede, two towns in the south-central province of Gelderland, will soon play host to a driverless bus system, ferrying dignitaries and visitors to a local university via six-passenger vehicles that look a bit like enclosed, oversized golf carts. Unlike similar autonomous transport systems currently in use, such as the Rotterdam Rivium bus or Heathrow airport shuttles, these electrically powered vehicles won’t run on dedicated tracks, instead rolling on the same roadways used by human drivers.