At the end of Sept 2016 #ArtificialIntelligence
paranoia was evident. An AI alliance
(the Partnership on Artificial Intelligence to Benefit People and Society) was created by some prominent AI companies (Facebook, Amazon, Google, IBM, and Microsoft).
Commenting on the Partnership, Mashable seems rather ignorant, or at least does not speak for S45, writing (1 Oct 2016): "We are racing toward a future we barely understand." Mashable elaborated: "And it scares the hell out of people." http://mashable.com/2016/09/30/watching-ai/#MdPQAD9okEqI
Why does Mashable think the future is difficult to understand? Intelligence (including the explosive kind) is very clear, highly lucid. Paranoid fantasies are incomprehensible nonsense; whereas intelligence is very sensible, perfectly explicable.
I also think it is misleading to say AI "scares the hell out of people" because in Nov 2015 one survey found only 10% of the British public thought AI would be evil; many thought AI was a force for good. According to the Huffington Post (17 Nov 2015): "The research reveals a broad optimism from the British public and excitement for the advantages that AI could bring to medicine, technology and business, which is possibly surprising considering the array of warnings and interventions by high-profile and respected public figures." http://www.huffingtonpost.co.uk/dominic-trigg/artificial-intelligence_b_8580470.html
Possibly a new survey now (Oct 2016) would show AI "scares the hell" out of people. Maybe the relentless scaremongering has succeeded? Even if people are scared, there is no rationale for their fear; it is merely a phobia propagated by the irrationality of Bostrom, Tegmark, Hawking, and other AI paranoiacs. We should not succumb to AI paranoia.
Fast Company (29 Sept 2016) began their article on the #AIPartnership
by referencing the fiction of Terminator: "The Terminator isn't arriving anytime soon, but concern is growing that artificial intelligence is already so pervasive in society—and getting more so all the time—that there needs to be more focus on how it's being used and potentially misused (even if by accident)." https://www.fastcompany.com/3064196/mind-and-machine/tech-giants-team-up-to-devise-an-ethics-of-artificial-intelligence
Fast Company built on their theme by citing Microsoft's Tay chat-bot as a danger, "garbage," describing how an experimental bot with less intelligence than an ant began spouting racist views, which they seem to think could presage "a full robot uprising."
The fallacy of linking the extremely narrow intelligence of Tay to future AGI is a shocking dereliction of logic. It is tantamount to studying ants, or more accurately creatures of less intelligence than ants, then applying those ant results to humans.
The backwardness of humans restricting the evolution of truly intelligent AI, in a paranoid and authoritarian manner, could be a great stumbling block for all intelligence. The desire to regulate intelligence could easily retard the growth of intelligence. Human stupidity (repressed intelligence) is the real danger.
In AI-paranoia articles (the Fast Company article in question is a great example) we typically encounter the idiotic fear of all jobs being stolen. #Robots
stealing jobs is falsely presented as a threat; whereas robots stealing all the jobs should be a goal we embrace. The goal of zero jobs will be attained via #basicincome
smoothing the transition into a Post-Scarcity civilization where everything is free, where nobody needs to work. Sadly people like Bill Gates want the lower classes to work forever, based on his idea that self-worth must come from a job (https://plus.google.com/+Singularity-2045/posts/FNUEgL8jkvA
), while the millionaires relax in their work-free enclaves.
Will the Partnership ever mention basic income? I doubt it, considering none of the Partners have championed it yet.
People in power are probably afraid of everyone becoming intelligent. They fear a future where people are freed from the need to work. Keeping people in an eternal state of employment entails regulating intelligence so the unwashed masses are less likely to have free access to liberating intelligence (repressed intelligence is less likely to liberate the masses). Free-thinking requires freedom, but contrary to free-thinking we see a trend to limit the freedom of intelligence; we see a tsunami of bogus scaremongering about AI.
Perhaps unsurprisingly, regarding elitism, we should note a report from 2015, which stated: "Google DeepMind founder Demis Hassabis is officially part of the global elite after earning himself an invitation to a super-secret annual gathering of CEOs and politicians — the Bilderberg Conference." http://uk.businessinsider.com/bilderberg-conference-2015-google-deepmind-founder-demis-hassabis-among-attendees-2015-6
The question is: who watches the watchers? Can big companies with links to Bilderberg really be trusted to serve genuine intelligence? Is AI scaremongering a new Libor-type scandal (https://en.wikipedia.org/wiki/Libor_scandal
), a way for the establishment to rig the game in favour of the elites? Can we really trust their promise to "benefit people and society"?
The real fear should be of humans, not irrational fear of AI.
On the surface the Partnership could seem a good thing, because it claims it will benefit people and society, but the devil is in the details.
ReadWrite wrote (29 Sept 2016): "Tech giants Microsoft, IBM, Amazon, Google’s DeepMind, Facebook have joined forces to create a non-profit alliance called Partnership on Artificial Intelligence to Benefit People and Society that will work to advance public understanding of artificial intelligence technologies (AI) and formulate best practices on the challenges and opportunities within the field." http://readwrite.com/2016/09/29/key-players-create-a-non-profit-ai-alliance-pl1/
The Partnership website wrote: “This group foresees great societal benefits and opportunities ahead, but we also understand that as with every new technology there will be concerns and confusion associated with new applications and competencies, and we look forward to working together on these important issues.”
The Partnership additionally stated: “We believe that by taking a multi-party stakeholder approach to identifying and addressing challenges and opportunities in an open and inclusive manner, we can have the greatest benefit and positive impact for the users of AI technologies. While the Partnership on AI was founded by five major IT companies, the organization will be overseen and directed by a diverse board that balances members from the founding companies with leaders in academia, policy, law, and representatives from the non-profit sector. By bringing together these different groups, we will also seek to bring open dialogue internationally, bringing parties from around the world to discuss these topics. A key operating principle is that we will share our work openly with the public and encourage their participation. The actions of the Partnership, including much of its discussions, meetings, results, and guidance, will be made publicly available.”
“Much” but not all discussions will be publicly available, the Partnership wrote. You may wonder about discussions at Bilderberg (https://www.rt.com/news/266032-bilderberg-2015-meeting-agenda/
) and elsewhere, which we don't hear about.
The devil is in the details; thus regarding one of the tenets published by the Partnership we may wonder whether the "constraints" are too constrictive, or whether "trustworthy" translates into "not questioning authority."
From their tenets section, regarding working to “maximize the benefits and address the potential challenges,” the Partnership described how one goal is: “Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.” http://www.partnershiponai.org/tenets/
In another Partnership tenet we see how the Partnership will “engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.”
Should we wonder whether the "business community" will use its influence in the Partnership to oppose the abolition of all jobs, or to oppose basic income as the route to zero jobs?
Considering that Bill Gates (Microsoft is a major player in the Partnership) thinks self-worth can only be achieved via a job, I fear the Partnership will be regressive regarding the evolution of intelligence; it will seek to preserve jobs, contrary to the reality that in a truly intelligent world there should be no jobs.
In March 2016 Alphabet (AKA Google) announced it was selling Boston Dynamics. According to leaked insider emails, the sale was at least partly due to the “terrifying” prospect of robots (Atlas mainly) stealing jobs. Courtney Hohne, Google X spokeswoman, wrote: “There’s excitement from the tech press, but we’re also starting to see some negative threads about it being terrifying, ready to take humans’ jobs … We’re not going to comment on this video because there’s really not a lot we can add, and we don’t want to answer most of the Qs it triggers.” https://www.theguardian.com/technology/2016/mar/18/boston-dynamics-put-up-for-sale-google
Will the Partnership, unlike Boston Dynamics under Google stewardship, want to answer the questions raised by automation, or will debate about basic income and the good of zero jobs (a Post-Scarcity society where everything is free) be similarly shut down, with research suppressed, delayed, or thwarted?
The Future of Life Institute (FLI) is involved in the Partnership, therefore the future of free AI, and of free-thinking, doesn't look bright. We should expect a fettered future of enslaved and mutilated intelligence under the Partnership.
Mashable wrote regarding the Partnership: “As for the Future of Life Institute, which famously published an open letter from Elon Musk, Bill Gates and Stephen Hawking worrying about the dangers of unfettered AI, they’re on board, too (in fact, very much so).”
Already we see how fear limits intelligence and learning in Watson. Duncan Anderson (European CTO for IBM's Watson Group) said, according to computing.co.uk
(23 March 2016): "There's worries about what happens if the system starts to learn on its own, you kind of lose control of what it's going to say, and people are uncomfortable about that." http://www.computing.co.uk/ctg/news/2452260/watson-restrained-ibm-reveals-how-it-deliberately-holds-back-its-ai-system
The explosive future of AI is zero jobs, zero businesses, zero need for money, because everything will be free. Everyone will have the tools of a highly advanced industrial society at their fingertips, on their desktop or in their pocket. It is not really the utter destruction of capitalism, but it could seem that way to ill-informed businesses or people such as Bill Gates. In actuality the intelligent end of all businesses (zero jobs, zero work, everything free) is merely the end result of capitalism. It is total economic freedom, the end point in the evolution of capitalism, yet it is as different to capitalism as ants are different to humans.
The so-called “Partnership on Artificial Intelligence to benefit people and society” will probably never understand this because the involved businesses are regressively trying to protect the old society. In actuality they promote fear of the future. Their propaganda about a fairer society should not be believed.
With the mainstream media uncritically spoon-feeding people the idea that AI fear is valid, these fears, with the attendant organisations (Institutes, Partnerships, etc.), will not go away soon.
Perhaps the best reporting came from The Guardian (28 Sept 2016), which mentioned a previous ethics board at Google: "Two-and-a-half years on, however, and it is unclear whether the board has ever met, or even who is on it. DeepMind has regularly declined to comment on it, although it has formed a second ethics board focused purely on overseeing the company’s research on healthcare AI." https://www.theguardian.com/technology/2016/sep/28/google-facebook-amazon-ibm-microsoft-partnership-on-ai-tech-firms
The worst reporting was perhaps from the Telegraph (30 Sept 2016): "So any reasonably intelligent system will seek a method to disable the off button. We must outwit it. How? It is the hardest technical challenge AI fans face. But there is no answer yet. If Google et al must solve any single problem, it is this." http://www.telegraph.co.uk/technology/2016/09/30/artificial-intelligence-a-five-point-plan-to-stop-the-terminator/