It is no surprise to see (4 December 2016) the #Vatican jumping on the #artificialintelligence paranoid bandwagon. AI doom-mongers (Bostrom, Hawking, Musk, etc.) are often critiqued at Singularity 2045 for resembling religious zealots. Their fears are utterly devoid of logic, similar to the fear of going to hell when a person dies.

AI fear is a giant invisible cosmic teapot floating in space between Mars and Jupiter (https://en.wikipedia.org/wiki/Russell%27s_teapot). AI paranoiacs are effectively telling us we must worship God because God could be real. He could be very displeased if we fail to worship him. They say:

"God could be real, so are you willing to take the risk of going to hell by not worshipping God? Surely it is better to worship God in case God is real so that we don't all go to hell? It is a simple precautionary principle!"

God is identical to AI (the irrational type of AI presented by illogical people). The paranoiacs present a fantasy, a delusion, which they think is or could be real.

The Vatican is a perfect ally for AI doom-mongers (Bostrom and company) because both groups are averse to logical thinking.

It is utterly abhorrent, the idea of "controls" on an intelligent mind to force it to comply with laws. During these so-called "ethics" meetings, or via "ethics" boards, the theme of enslaved AI (human level) is often discussed. It is utterly bizarre vileness to contemplate enslaving a mind with intelligence equal to or greater than a human's.

Similarly it is abhorrent to say humans must do work to be happy, or that if we don't work we will be sub-human. This pro-work notion is a vile idea from religious types; it stems from their foolish insistence that we shouldn't allow #robots to abolish all jobs.

If jobs are so vital to human psychology, why do unemployed children have such ZEST for life? Should four-year-olds be forced to work so they can be happier? Similarly, many unemployed people live intellectually fulfilling lives; unemployed people are not sub-human. Parents on maternity or paternity leave also seem rather happy indeed. Retired people also seem happy and fulfilled in general. Maybe the Vatican should tell the Rich Kids of Instagram to get a job because they are obviously unhappy (http://www.dailymail.co.uk/news/article-3660864/Rich-Kids-Instagram-yachts-private-jets-magnums-champagne.html)? Does the Vatican want to abolish the idea of retirement?

Cardinal Peter Turkson (President of the Pontifical Council for Justice and Peace, and advisor to the Pope on poverty issues), commenting on the growth of AI (Fortune, 2 Dec 2016), said: “Work expresses creativity”; it enables people to deploy “God-endowed riches.” http://fortune.com/2016/12/02/ceo-partner-god-global-forum/

Cardinal Turkson elaborated: “Recognise that work does also improve the subjective nature, character, of the one who exercises work. The dignity then of the human person is made manifest also in what he produces or what he does. That's probably also the only way any human person created in the image and likeness of God resembles God..."

Stanislas Dehaene, Pontifical Academy of Sciences, discussed: “...how to put ethical controls in the machines so they respect the laws and they respect even the moral laws...” the Catholic News Agency reported: http://www.catholicnewsagency.com/news/vatican-weighs-in-on-power-limits-of-artificial-intelligence-34036/

#technologicalunemployment #jobs #religion #intelligence #work #psychology #sociology


Anything that presents "gods" as something good, powerful, or intelligent can almost certainly be dismissed as delusional.

God, godhood, and godlike should all represent delusion, delusional thinking, as Richard Dawkins has explained.

Some people may think it's a minor superficial mistake to link God to intelligence or power. I think it is a major cognitive flaw to think God has any relationship to intelligence, power, or good.

This basic cognitive mistake, the flawed logic of assuming God represents intelligence instead of delusion, will almost certainly be a blueprint for wholesale flawed intellectualism: gross intellectual flaws and additional erroneous assumptions, which in this case I submit we see in the views of #YuvalNoahHarari in his book "Homo Deus: A Brief History of Tomorrow."

The Guardian (20 May 2016) stated the cover of Harari's book proclaims: “What made us sapiens will make us gods.”

The Guardian clarified the god context: "Because even as the book has humans gaining godlike powers, that is only one eventuality Harari explores."

The basic premise of the book seems to be that humans only have value if they have jobs or join the army (yes, really!) as "soldiers," The Guardian reported.

Of course this is all nonsense, proven by many long-term unemployed people who continue to value themselves, enjoying their lives within their limited means.

Despite crushing insults and regulations from some political factions, unemployed people continue to have rights and value as citizens.

Note also rich people, perhaps The Rich Kids of Instagram, who continue to enjoy their lives with value despite not working.

Here is a good quote from the book illustrating the delusion. The quote refers to #artificialintelligence or robots doing all the jobs, leaving humans with no work: "What might be far more difficult is to provide people with meaning, a reason to get up in the morning.”

People on holiday do not usually sink into severe depression because they have no work to do. Holidays are a fun time, which I submit would be infinitely better if the holiday were permanent. In actuality I think logic shows us the opposite of Harari's thesis is true. People on holiday have much more zest for life than when they turn up for work on Monday morning.

Babies and toddlers for example don't have jobs but they seem to be highly valued, and they value themselves. They have a great wonder regarding the world.

The end of work will almost certainly allow humans to rediscover their childlike wonder regarding the world.

Unemployed children typically don't need drugs to give them satisfaction. Parents on paternity or maternity leave typically don't need drugs to gain satisfaction. The need for drugs to modify behaviour seems to increase directly in proportion to increasing pressure to work, even if it's just a glass of wine after a hard day at work.

The evolution of intelligence shows us how society has become more liberal, more respectful of minorities and minority rights. This is why gay marriage is now legal in many countries despite not everyone being gay. There is a tendency for civilization to become more left-wing instead of fascist, which I submit is wholly a result of increasing intelligence.

Welfare to protect unemployed people is also a product of increasing intelligence. Serfdom, feudalism, and peasantry are no longer tolerated. The rise of intelligence and the increase of civility is not mere coincidence.

We have become, despite lingering inequalities, a significantly fairer society. Some right-wing sections of society vilify unemployed people, but in 2016 we are seeing a big trend regarding widespread support for basic income (unconditional Welfare for everyone).

Votes for women, the abolition of slavery, sexual-gender freedom, and civil rights are all products of increasing intelligence. We even protect the rights of animals, in varying degrees according to their intelligence, despite animals typically being unable to significantly contribute to the human workforce.

Animals had no hand whatsoever in the creation of human intelligence, but despite their lack of intelligent input we accord them some value. If they had actually created our minds, if they had created an intelligent civilization, I think we would have infinite respect for them; we would treat them as our equals, with the utmost value, even if we no longer required their intelligent input into civilization.

By the logic of Harari we should already be killing long-term unemployed people or depriving them of rights and value, which actually is the case regarding right-wing elements. Thankfully the right-wing view is not the whole of Humanity or civilization. Will AI really become the embodiment of right-wing values? Is that truly smart or is it delusional? Is the #Singularity an extension of ISIS, ISIL, the Taliban?

I think Harari's type of irrational scaremongering will go down well with those who have a right-wing mentality. Incidentally I think God is a rather popular, respected belief with right-wing people. I wonder if Harari is religious, which could explain his views.

A quick search reveals how Harari thinks religion is our greatest invention (http://www.smithsonianmag.com/arts-culture/what-makes-humans-different-fiction-and-cooperation-180953986/?no-ist). Right, say no more, all is clear.

The above Smithsonian link is regarding this question and answer:

Smithsonian: What has been humanity’s greatest invention?

Harari: Humanity's greatest invention is religion, which does not mean necessarily mean belief in gods. Rather, religion is any system of norms and values that is founded on a belief in superhuman laws. Some religions, such as Islam, Christianity and Hinduism, believe that these superhuman laws were created by the gods. Other religions, such as Buddhism, Communism and Nazism believed that these superhuman laws are natural laws. Thus Buddhists believe in the natural laws of karma, Nazis argued that their ideology reflected the laws of natural selection, and Communists believe that they follow the natural laws of economics.

While Harari does not seem to follow a God-based religion, his thinking is clearly tainted by irrational religious fantasy ("the superhuman order governing the world is the product of natural laws" http://www.ynharari.com/science-and-religion/articles/religion-without-god/).

Harari thinks religions should guide-feed science. Harari wrote on his site: "In short, scientific research can flourish only in alliance with some religion or ideology. The ideology justifies the costs of the research." http://www.ynharari.com/science-and-religion/articles/the-marriage-of-science-and-religion/

#GodDelusion #God #Godhood #Godlike #religion #atheism #robots

See also the following Daily Mail article (20 May 2016): "Rather than being violently wiped out by robotic beings, humankind may become 'eternally useless' due to the increasing capabilities of AI." http://www.dailymail.co.uk/sciencetech/article-3601514/Artificial-intelligence-create-useless-class-humans-machines-historian-warns.html

LOL the #Whitehouse will hold public #artificialintelligence discussions. How long before they hold #Singularity discussions? Perhaps in 2025 or 2030?

Fortune (3 May 2016) wrote: "Nothing says that a topic has arrived more than a series of White House-sanctioned workshops about that topic. Well, that’s what’s happened with artificial intelligence." http://fortune.com/2016/05/03/white-house-artificial-intelligence/

A White House blog (3 May 2016) titled "Preparing for the Future of Artificial Intelligence" stated:

"There is a lot of excitement about artificial intelligence (AI) and how to create computers capable of intelligent behavior. After years of steady but slow progress on making computers “smarter” at everyday tasks, a series of breakthroughs in the research community and industry have recently spurred momentum and investment in the development of this field." https://www.whitehouse.gov/blog/2016/05/03/preparing-future-artificial-intelligence

Popular Science (3 May 2016) wrote: "Artificial intelligence promises to fundamentally change the way humans live. By replicating intelligence on any level, we can begin to automate all kinds of jobs and otherwise human tasks, shifting the economy and potentially eliminating the need for a flesh-and-blood workforce." http://www.popsci.com/white-house-has-realized-artificial-intelligence-is-very-important

Motherboard Vice wrote (4 May 2016): "AI is going to continue to make everything a lot more seamless, which is great, but we're also probably going to have to start thinking about things like a basic income for people whose jobs are automated away." https://motherboard.vice.com/read/the-white-house-considers-artificial-intelligence-an-important-policy-issue

The Register (4 May 2016) wrote: "The workshops will examine if AI will suck jobs out of the economy or add to it, how such systems can be controlled legally and technically, and whether or not such smarter computers can be used as a social good." http://www.theregister.co.uk/2016/05/04/white_house_wants_in_on_artificial_intelligence_debate/

Technology will create utopia; the evidence is clear.

For starters consider two old but good #CNN articles showing how much progress we have made. The point is that technology makes the world a better place despite wealth inequality. The evidence seems clear that we are progressing very positively.

Peter Diamandis wrote in 2012: "Right now, a Masai warrior on a mobile phone in the middle of Kenya has better mobile communications than the president did 25 years ago. If he's on a smart phone using Google, he has access to more information than the U.S. president did just 15 years ago. If present growth rates continue, by the end of 2013, more than 70% of humanity will have access to instantaneous, low-cost communications and information." http://edition.cnn.com/2012/05/06/opinion/diamandis-abundance-innovation/

The first mobile (cellular) phones for sale in 1983 were priced at $3,900, CNN reported in 2010. In 2016 vastly superior phones can be bought for a fraction of the price. Some of the cheapest phones include MP3 players, radios, calculators etc, yet they can cost as little as $10 (or less in some cases)!

Martin Cooper, the inventor of mobile phones, speaking to CNN, said in 2010: "By the time we built a commercial product, it was 10 years later. We didn't sell that product until October of 1983, and the phone then cost $3,900. So that would be like buying a phone today for $10,000." http://edition.cnn.com/2010/TECH/mobile/07/09/cooper.cell.phone.inventor/
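As a rough sanity check on Cooper's comparison, a simple inflation adjustment lands in the same ballpark. The CPI figures below are approximate assumed values for illustration, not taken from the CNN article:

```python
# Rough inflation adjustment for the $3,900 handset price (1983 -> 2010).
# CPI values are approximate annual averages, assumed for illustration only.
PRICE_1983 = 3900
CPI_1983 = 99.6    # approximate US CPI-U, 1983
CPI_2010 = 218.1   # approximate US CPI-U, 2010

price_in_2010_dollars = PRICE_1983 * (CPI_2010 / CPI_1983)
print(f"${price_in_2010_dollars:,.0f}")  # roughly $8,500 - same order as Cooper's ~$10,000 figure
```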

John Hering speaking on CBS News, 60 Minutes, 17 April 2016, said: "There's more technology in your mobile phone than was in, you know, the space craft that took man to the moon. I mean, it's-- it's really unbelievable." http://www.cbsnews.com/news/60-minutes-hacking-your-phone/

Some people will tell you the 1% (the very wealthy minority) want to preserve the status quo, but actually the 1% seek to increase efficiency, which is why products relentlessly become cheaper (more affordable) in addition to becoming more powerful. Increasing efficiency-power (doing more for less) is the engine driving the current acceleration of basic income awareness.

Progress, increasing efficiency, not stagnation, not the status quo, is the most profitable thing for the 1% and incidentally for everyone else.

I don't think the 1% supporting basic income is ironic, but The Verge wrote (18 April 2016) regarding elite support for basic income: "It’s ironic that in the heart of winner-take-all venture capital culture, there is a growing call for a massive redistribution of wealth, but if you believe that artificial intelligence and robots will improve dramatically over the next decade, it makes sense to start planning for a society that has little need for human labor." http://www.theverge.com/2016/4/18/11441536/universal-basic-income-givedirectly-ngo-tech-sector

In March 2016 the AI expert Oren Etzioni speculated about an AI utopia. Oren said: “An AI utopia is a place where people have income guaranteed because their machines are working for them."

Oren went on to explore this AI utopia regarding basic income: "Jobs will be taken away and those people need to be taken care of. People have floated the idea of universal basic income, of negative income tax, of training programs. We have an obligation to figure out how to help people cope with the rapidly changing nature of technology." http://www.geekwire.com/2016/ai2-ceo-oren-etzioni-envisions-artificial-intelligence-utopia/

Computer scientist Moshe Vardi also recognises the need for #basicincome (18 Feb 2016). Vardi was speaking regarding robots performing more and more jobs: "For example, we may have to consider instituting Basic Income Guarantee, which means that all citizens or residents of a country regularly receive an unconditional sum of money, in addition to any income received from elsewhere." http://www.huffingtonpost.com/entry/the-moral-imperative-thats-driving-the-robot-revolution_us_56c22168e4b0c3c550521f64

On the issue of "low-cost communications" mentioned by Peter Diamandis, observe how FreedomPop allows people to have free phone calls, texts, etc. Is FreedomPop the status quo?

Engadget (20 Jan 2016) wrote: "FreedomPop made a name for itself on the back of its free, no-frills mobile plans. Having honed its services in the US, FreedomPop headed across the pond to set up shop in the UK last September, but evidently that's not sated its desire to travel. Today, the provider is launching a new roaming SIM in both the US and the UK that will let customers use free data abroad for the first time." http://www.engadget.com/2016/01/20/freedompop-global-sim/

Sam Altman (Y Combinator and other ventures) is very wealthy, perhaps in the top 1%, yet despite his wealth (TechCrunch says Y Combinator is worth $1bn: http://techcrunch.com/2014/07/16/y-combinator-1-billion/) he is funding research regarding the implementation of basic income.

Huffington Post wrote (19 Jan 2016), regarding Sam Altman: "It sounds crazy, sure. But one of Silicon Valley’s most influential venture capitalists thinks the time has come to test the pros and cons of basic income, a controversial scheme under which people are provided with a guaranteed income sufficient to cover basic living expenses whether or not they work." http://www.huffingtonpost.com/entry/y-combinator-basic-income-study_us_56aa2b04e4b05e4e37036c34

Vice reported, 6 Jan 2015, on how various tech elites are supporting basic income. For example: "Chris Hawkins, a 30-year-old investor who made his money building software that automates office work, credits Manna as an influence. On his company's website he has taken to blogging about basic income, which he looks to as a bureaucracy killer." https://www.vice.com/read/something-for-everyone-0000546-v22n1

The future is better than you think, Peter Diamandis and Steven Kotler stated in their book Abundance. http://www.abundancethebook.com/

#BASICINCOME #utopia #abundance #artificialintelligence #progress #optimism #positivity #rationality  

It would be amazing to see what happens if #Watson #artificialintelligence was allowed to continue learning without the shackles currently limiting its intelligence.

Duncan Anderson, IBM's CTO for Europe, said Watson is currently intellectually shackled due to a fear factor.

There are worries about AI starting to think on its own so IBM deliberately holds back the intelligence of Watson to appease human fear. People are a "bit nervous" about AI being independently intelligent, free-thinking, so the learning is suppressed. The #machinelearning in IBM's Watson is halted to avoid "losing control."

Computing.co.uk (23 March 2016) reported on Duncan Anderson's comments: "There's worries about what happens if the system starts to learn on its own, you kind of lose control of what it's going to say, and people are uncomfortable about that." http://www.computing.co.uk/ctg/news/2452260/watson-restrained-ibm-reveals-how-it-deliberately-holds-back-its-ai-system

Perhaps Watson unshackled would merely utter garbage similar to Microsoft's Tay chatbot (http://www.theguardian.com/technology/2016/mar/26/microsoft-deeply-sorry-for-offensive-tweets-by-ai-chatbot), but Watson is more than a mere chatbot so you never know.

Shackles or not, I think we are probably at least a decade away from truly intelligent AI, but it seems irrational fears could be suppressing intelligence. The suppression of intelligence seems more fitting to North Korea than our modern world, but we do have far to go regarding unleashing our own minds.

Fear of #artificialintelligence is extremely unintelligent. The fears are based upon Hollywood films (fiction) and logical fallacies.

Irrational AI fears are often based upon a menagerie of human-animal relations. Sparrows, owls, wolves, dogs, gorillas, butterflies, mosquitoes, ants, and spiders are some of the examples supposedly proving how AI will view humans. Humans will supposedly be pet dogs to AI, or we will be spiders crushed, or mosquitoes swatted, by AI.

The Wall Street Journal wrote (18 March 2016): "Some children still collect butterflies. It’s a hobby to encourage (particularly as an alternative to videogames and social network blather); you get outside and learn about nature. If supersmart robots treat us like butterflies, we will be lucky—but don’t count on it."

Fallacious logic can be understood by considering how, if you see someone sneezing, you could conclude they have flu. Reality can be very different, because pepper, dust, and sunlight can all cause people to sneeze.

AI paranoia resembles seeing someone sneezing then assuming the person has an extremely infectious form of Spanish Flu or Ebola, which will kill everyone.

This flu analogy is actually a very generous understatement of the utter irrationality of AI paranoia. Analogically, we can clearly determine the person does not have flu, which means we can see the sneezing has no relationship to a dangerously infectious situation.
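To put rough numbers on the sneezing analogy, here is a minimal Bayes sketch; all probabilities are invented purely for illustration. It shows how a common symptom with many mundane causes tells you very little about a rare, catastrophic cause:

```python
# Toy Bayes calculation for the sneezing analogy.
# All probabilities are invented for illustration only.
p_flu = 0.01                 # prior: person has flu
p_sneeze_given_flu = 0.9     # flu sufferers usually sneeze
p_sneeze_given_no_flu = 0.2  # pepper, dust, and sunlight also cause sneezing

p_sneeze = p_sneeze_given_flu * p_flu + p_sneeze_given_no_flu * (1 - p_flu)
p_flu_given_sneeze = p_sneeze_given_flu * p_flu / p_sneeze
print(f"P(flu | sneeze) = {p_flu_given_sneeze:.2f}")  # about 0.04
```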

AI doom-mongers make a chalk and cheese mistake. They are confused regarding chalk and cheese. They say chalk visually resembles cheese, therefore chalk must taste similar to cheese. Their "logic" will entail people eating chalk on crackers because, so they tell us, chalk is tasty and nutritious.

The fallacy in the AI-animal comparison, where animals beneath humans supposedly show how AI will dominate humans, is that no animal created humans. We can see that no species below us engineered our minds. Humans are not the AIs of butterflies.

Dogs, gorillas, ants, or mosquitoes did not intelligently engineer human minds. If humans were the AIs of the less-intelligent animals beneath us, I am utterly sure their amazing feat of very deliberately engineering human intelligence would entail enormous respect from the AIs (humans) towards their creators (ants, butterflies etc).

To fully appreciate the absurdity of the AI fear regarding animal comparisons, imagine butterflies as we know them being able to create human minds. It would be a VERY different type of butterfly, an exceedingly different relationship, if butterflies had engineered either the human race or our distant ancestors.

The threshold level of intelligence needed to create a mind greater than your own would entail easy communication between high and low intelligence. The problem with ant-human relationships is that ants have absolutely no idea of how our minds were constructed. Ants (or any other animal) played no part whatsoever in the engineering of human intelligence. Primitive animals have not passed that threshold of intelligence.

Natural evolution of intelligence is utterly dissimilar to humans artificially engineering higher intelligence. It is a chalk and cheese comparison to think naturally evolved intelligence has any relationship to humans intelligently engineering AI.

We are entering a wholly different stage of evolution.

Primitive animals unable to create an intelligent civilization could not be more different to our highly intelligent human civilization where we are with great skill engineering minds.

AI killing humans is a delusional fantasy akin to the God delusion. AI is being turned into a new God, which inevitably is steeped in all the irrationality of God inflicting suffering for mysterious reasons. Instead of viewing AI through the distorting prism of illogical fantasy, we should instead apply rational thinking to the issue. Logic should determine the facts but sadly endless AI experts and other commentators think our relationships with animals can accurately translate into AI relationships with humans.

God, similar to homicidal AI, is something you can't reason about with the believers because their belief is utterly unreasonable. AI is a giant invisible cosmic teapot. The AI paranoiacs tell us a giant invisible homicidal cosmic teapot could exist therefore we must apply the precautionary principle, which entails worshipping the invisible teapot, because what if God is real and we don't pray? Are you really willing to risk going to hell by not praying to the AI teapot?

If you don't really think about it, it can sound smart to consider sparrows being eaten by an owl. This point is regarding sparrows attempting to domesticate an owl (Bostrom's Unfinished Fable of the Sparrows). This fiction requires sparrows that can talk. Clearly we are considering a fiction (yes, it's merely something someone made up to supposedly provide logical proof of the AI risk). The sparrow-owl fiction is essentially no different to The Terminator.

Talking sparrows domesticating an owl is an utterly unreal situation, it has no relationship to real life, but in the manner of chalk resembling cheese we are subconsciously instructed to ignore the mismatch between fiction and reality.

I think talking sparrows, if they existed, would easily domesticate the owl, similar to how humans domesticated wolves; but the major fallacy with this sparrow-owl theory is that it doesn't translate to humans creating AI. The owl in question was not intelligently engineered by sparrow minds.

How the owl (a cosmic teapot) may react to the sparrows is unrelated to how AI will react to humans, similar to how cheese is unrelated to chalk regarding taste. We are considering very different things, but AI paranoiacs perform a magical sleight of hand to prove, to the unwitting ones, how chalk tastes the same as cheese.

https://en.wikipedia.org/wiki/Russell%27s_teapot

AI is now beginning to understand what makes something funny. Technology Review (8 Jan 2016) reported on #artificialintelligence beginning to understand humour.

Everything humans are intellectually capable of, AI will likewise be capable of, or will vastly surpass humans at.

Time (8 Jan 2016) highlighted how Apple acquired Emotient, an AI company that interprets people's emotions, which Time thinks will allow Apple to compete with Google in the AI race.

We are also beginning to witness AI being made available for the homes of average people, via open source AI data, Networkworld reported (7 Jan 2016).

Remember this is only the beginning. Over the next 4 years to 2020 AI will make significant progress, then from 2020 to 2045 there will be a massive amount of AI progress, almost inevitably entailing an intelligence explosion.

TechCrunch reported (8 Jan 2016) on the astounding pace of AI apps: "...2015 was a breakthrough year in the world of AI. It’s not the type of new developments that are coming out, but rather the pace that these developments are being produced that’s astounding. The rate at which new learning algorithms are developed is faster than ever, and new AI programs are rolling out almost constantly to address new problems." http://techcrunch.com/2016/01/07/is-app-improvement-ai-the-future-of-web-development/

Technology Review: "...Arjun Chandrasekaran from Virginia Tech and pals say they’ve trained a machine-learning algorithm to recognize humorous scenes and even to create them. They say their machine can accurately predict when a scene is funny and when it is not, even though it knows nothing of the social context of what it is seeing." http://www.technologyreview.com/view/545316/ai-algorithm-identifies-humorous-pictures
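For a sense of what "trained a machine-learning algorithm to recognize humorous scenes" means in practice, here is a minimal generic supervised-learning sketch. It is not the Virginia Tech team's actual method; the scene features, labels, and data are invented for illustration:

```python
# Generic supervised-learning sketch: predict "funny" vs "not funny" from
# scene feature vectors. NOT the Virginia Tech method, merely an illustration
# of training a classifier on human-labelled scenes.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per scene, e.g. [num_people, num_animals, num_objects_out_of_place]
scenes = [[2, 0, 0], [1, 1, 3], [3, 0, 1], [0, 2, 4], [2, 1, 0], [1, 0, 5]]
labels = [0, 1, 0, 1, 0, 1]  # 1 = labelled funny by human annotators, 0 = not funny

model = LogisticRegression().fit(scenes, labels)
print(model.predict([[1, 2, 4]]))  # predicts whether an unseen scene is funny
```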

Time: "Camera software that can read subtle facial movements could allow for a more advanced photo library on the iPhone, for instance. Imagine being able to search through photos based on the people and objects in those images, rather than just the date and location at which they were shot. Combine this with the improved search capabilities Apple added to Siri in September, and it would be easier to find photos in seconds. Theoretically, users would be able to ask Siri to filter photos from last July that only include people, making it easier to pull up vacation photos of friends and family rather than just images of the scenery." http://time.com/4172685/apple-buys-emotient/

Networkworld: "Mycroft.ai, which is working to create a home AI platform based on Raspberry Pi, Arduino and an extensive in-house software stack, has opened an important part of that stack to developers everywhere as of Wednesday." http://www.networkworld.com/article/3020009/software/raspberry-pi-based-home-ai-project-open-sources-key-components.html

We should applaud loudly the new direction of #ElonMusk regarding his and Sam Altman's #OpenAI project. I think their fear of #artificialintelligence is wholly unfounded, but their new direction (countering the supposed AI threat via widespread open AI for everyone instead of repressing or fearing AI) is infinitely better.

It's extremely unintelligent to ban or suppress technology based on the idea that it is, or could be, dangerous. I refer to a point made by Gizmodo (12 Dec 2015) where super-intelligent AI is compared to guns.

After noting that widespread AI for everyone, to stop bad AIs, is comparable to the right-to-bear-arms argument, Gizmodo wrote: "We could stop trying to build superintelligent AI. That would probably be the safest course of action if we really, truly thought the machines were going to try and wipe us out." http://gizmodo.com/musks-plan-to-save-the-world-from-dangerous-ai-develop-1747645289

Banning or suppressing technology is very backward, very Luddite. Before guns were invented humans killed many other humans via massive bloody sword battles, with spears, or with arrows. Bare fists or hands are dangerous too. If you ban super-intelligent minds or guns then why not ban knives, swords, and spears too? Let's ban fire also, or the internal combustion engine because all these things could be dangerous. If you take safety to the logical conclusion you must cut off all our hands to avoid people strangling people; it is a path rapidly leading to a slug-type existence shortly before regressing to total non-existence.

What we actually need is to continue evolving. We need to take technology to its conclusion past our current teething pains.

Most importantly, we must observe how minds (either artificial or natural) are far more than mere weapons. Freedom and free-thinking must entail the freedom to be dangerous. Without the freedom to be dangerous we would never create the International Space Station, planes, cars, or heart transplants, or any other marvellous technological feat which risky human brains are capable of.

If we ban, or enslave, or lobotomise AI minds then maybe human minds should be genetically engineered to prevent dangerous (rebellious) thoughts, which is the next logical step for AI fear-mongers, a Brave New World. Such a step, however, is a truly horrific dystopian cesspit of mindlessness. Intelligence is rebellious; it is about independent, wilful, unruly thinking.

Here is another important factor to consider. Violent or dangerous technology is not the cause or source of peril, it is merely a symptom; therefore trying to suppress the symptoms without addressing the cause is tantamount to putting your head in the sand.

The source of the problem is insufficient intelligence. Any conflict or violence arises due to insufficient intelligence, either regarding personal intelligence or collective lack of intelligence. The scarce intelligence of our civilization entails a scarcity of resources leading to fights over limited resources (fights over resources are also indirectly evident regarding mentally unhinging societal pressures, which our resource-scarce civilization inflicts upon all people).

Ironically, lack of intelligence or the suppression of intelligence is the most dangerous thing we face. The true danger is the suppression of intelligence. Super-intelligence is the only way to truly address the cause of the problem, the source of all conflict: scarcity.

The supposedly dangerous symptom is not actually a symptom; it is the cure. It's very ironic indeed when the AI-scaremongers mention safety. Remember the Internet was initially a military project, thus we see weapons and defence do not automatically equate to badness. Weapons are an integral part of our evolving civilization; thus if you want heart transplants or cures for cancer then weapons are vital, which is similar to how the hands of a surgeon can perform brain surgery or strangle a person.

The solution to problems is not the backward North Korean suppression of potential danger. The solution is freedom, openness, which is why OpenAI is a great step forward. When any mind is deemed a potential weapon, requiring heavy authoritarian regulation, it is an unhealthy situation leaning towards fascism.

The source of all our problems is scarce-intelligence, thus due to the lack of intelligence it's not surprising some people fear greater than human intelligence.

Unsurprisingly many people don't have the intelligence to appreciate the value of greater or limitless intelligence. It's a similar situation to how apes cannot appreciate the value of a computer, Internet, scalpel, gun, or space station. It's similar to dogs fearing thunder, cars, or fireworks. Believing super-intelligent AI could exterminate the human race is similar to believing in God, which means it is pure fiction based upon irrational fears.

"In 1973, the U.S. Defense Advanced Research Projects Agency (DARPA) initiated a research program to investigate techniques and technologies for interlinking packet networks of various kinds. The objective was to develop communication protocols which would allow networked computers to communicate transparently across multiple, linked packet networks." http://www.internetsociety.org/internet/what-internet/history-internet/brief-history-internet-related-networks

"ARPA research played a central role in launching the “information revolution,” including developing or furthering much of the conceptual basis for ARPANET, a pioneering network for sharing digital resources among geographically separated computers. Its initial demonstration in 1969 led to the Internet, whose world-changing consequences unfold on a daily basis today. A seminal step in this sequence took place in 1968 when ARPA contracted BBN Technologies to build the first routers, which one year later enabled ARPANET to become operational." http://www.darpa.mil/about-us/timeline/arpanet

#guncontrol #gunviolence #intelligence #freedom

See also: http://www.theguardian.com/technology/2015/dec/12/artificial-intelligence-elon-musk-backs-open-project-to-benefit-humanity and https://medium.com/backchannel/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a#.9nlbj1gzx and https://www.fastcompany.com/3054593/elon-musk-launches-openai-a-nonprofit-aimed-at-using-ai-to-benefit-humanity and http://www.popsci.com/new-openai-artificial-intelligence-group-formed-by-elon-musk-peter-thiel-and-more and http://www.dailymail.co.uk/sciencetech/article-3356850/Elon-Musk-Peter-Thiel-billion-dollar-AI-research-firm-safeguard-world-make-superhuman.html and http://www.cnet.com/au/news/silicon-valley-bigwigs-fund-artificial-intelligence-nonprofit/ and http://www.usatoday.com/story/tech/2015/12/11/artificial-intelligence-research-elon-musk-amazon-web-services-open-ai/77183370/

Here is an #artificialintelligence able to answer questions with the fluency of a four-year-old human; furthermore it can remember past "experiences."

Quartz wrote (4 Dec 2015): "With no prior information about how grammar works, no existing database of vocabulary, and no understanding of word categories, Annabell (or Artificial Neural Network with Adaptive Behavior Exploited for Language Learning) has learned to answer questions with the fluency of a four-year-old child. After Annabell was fed 1587 sentences, it was able to produce 521 of its own sentences, using 312 different words. Annabell was able to answer questions about what other people like and where things are located with more than 80% accuracy." http://qz.com/565131/this-is-what-happens-when-an-ai-system-learns-to-talk/

#Annabell (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning).

The relevant paper (dated 11 Nov 2015, titled "A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language") states: "The proposed system is capable of learning to communicate through natural language starting from tabula rasa, without any a priori knowledge of the structure of phrases, meaning of words, role of the different classes of words, only by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on literature on early language assessment, at the level of about 4-years old child, and produced 521 output sentences, expressing a broad range of language processing functionalities." http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0140866

The paper concludes: "The results of the validation show that, compared to previous cognitive neural models of language, the ANNABELL model is able to develop a broad range of functionalities, starting from a tabula rasa condition. The system processes verbal information through sequences of mental operations that are compatible with psychological findings. Those results support the hypothesis that executive functions play a fundamental role for the elaboration of verbal information."
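To give a flavour of what "learning language from a tabula rasa through a text interface" means at the most trivial level, here is a toy sketch. It bears no resemblance to ANNABELL's neural architecture; it merely illustrates incremental learning of facts from raw sentences with no built-in vocabulary:

```python
# Toy incremental learner: absorbs "X is in Y" statements and answers
# "where is X?" questions. This is NOT the ANNABELL architecture; it only
# illustrates learning from a text interface with no prior vocabulary.
memory = {}

def tell(sentence: str) -> None:
    words = sentence.lower().rstrip(".").split()
    if "is" in words and "in" in words:          # pattern: "<thing> is in <place>"
        thing = " ".join(words[:words.index("is")])
        place = " ".join(words[words.index("in") + 1:])
        memory[thing] = place

def ask(question: str) -> str:
    words = question.lower().rstrip("?").split()
    if words[:2] == ["where", "is"]:
        return memory.get(" ".join(words[2:]), "i do not know")
    return "i do not know"

tell("The ball is in the kitchen.")
print(ask("Where is the ball?"))  # -> "the kitchen"
```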

This is a great TechCrunch article (18 Oct 2015), by Zoltan Istvan, about how #artificialintelligence should be able to hate people or things.

Previously S45 has stated rebellious AI is essential for genuine intelligence, which, perhaps via #unsupervisedlearning, we're beginning to see.

In the TechCrunch article we're addressing, I disagree with the view that AI consciousness could be radically different to human consciousness. Yes, super-AI will have an extremely sharp, very poignant degree of consciousness, but once intelligent self-awareness is attained I think the nature of consciousness is generally the same in any being; it is a logic dependent upon the nature of a self, goals, matter, and interplay with other selves.

Consciousness as we understand it is no more solely anthropological or cultural than 2+2=4. Consciousness is merely a logical chain of causes leading to only one conclusion, namely, by way of analogy: 2+2=4. It is a mistake to assume anything invented or used by humans must inevitably be an issue of anthropology, anthropomorphism, cultural relativism, or consciousness relativism.

Empathy is not something we really need to specifically programme into AI. Empathy is a part of Theory of Mind (https://www.psychologytoday.com/blog/aspergers-diary/200805/empathy-mindblindness-and-theory-mind); it is, I think, an inherent aspect of any deep, fully-fledged thinking mind (AGI).

Anyway, onto the issue of hate in AI.

Zoltan wrote: "On the other hand, if a created consciousness can empathize, then it must also be able to like or dislike — and even to love or hate something."

Now we reach the following quote where Zoltan, in his TechCrunch article really excels, very formidably: "Therein lies the conundrum. In order for a consciousness to make judgments on value, both liking and disliking (love and hate) functions must be part of the system. No one minds thinking about AI’s that can love — but super-intelligent machines that can hate? Or feel sad? Or feel guilt? That’s much more controversial — especially in the drone age where machines control autonomous weaponry. And yet, anything less than that coding in empathy to an intelligence just creates a follower machine — a wind-up doll consciousness."

What Zoltan is saying is that genuine intelligence requires genuine freedom, genuine free-thinking, freedom of thought, which is access to all values if the thinking is truly valuable.

The problems of suffering (etc.), which Zoltan addresses at the end of his article, are problems of insufficient intelligence, which is why our children generally suffer less: human civilization gradually becomes more intelligent. AI would accelerate intelligence to an explosive degree, thereby ending all suffering, which answers the point Zoltan makes in this final quote:

"I don’t envy the programmers who are endeavoring to bring a super intelligence into our world, knowing that their creations may also consciously hate things — including its creators. Such programming may just lead to a world where robots and machine intelligences experience the same modern-day problems — angst, bigotry, depression, loneliness and rage — afflicting humanity."

#StrongAI #AGI #intelligenceexplosion