Previously I have posted about the Erdős discrepancy problem. A computer, a narrow AI, solved this problem, but the solution was too long for humans to check. According to the Daily Mail, humans had been unable to solve the problem for eighty years.
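For readers new to the problem: the discrepancy of a ±1 sequence is the largest absolute partial sum you can find along any homogeneous arithmetic progression d, 2d, 3d, …. Here is a minimal sketch of that definition (my own illustration for small sequences, not the SAT-solver approach the computer actually used):

```python
def discrepancy(x):
    """Discrepancy of a ±1 sequence x (a 0-indexed list):
    the max of |x_d + x_{2d} + ... + x_{kd}| over all d >= 1 and k."""
    n = len(x)
    best = 0
    for d in range(1, n + 1):
        partial = 0
        for m in range(d, n + 1, d):  # 1-based indices d, 2d, 3d, ...
            partial += x[m - 1]
            best = max(best, abs(partial))
    return best

# An alternating sequence keeps partial sums small along d = 1, but along
# d = 2 every term has the same sign, so the discrepancy grows with length.
alt = [(-1) ** i for i in range(12)]  # +1, -1, +1, -1, ...
print(discrepancy(alt))  # 6 (from the progression 2, 4, ..., 12)
```

The 2014 computer result concerned sequences like these: it showed that no ±1 sequence of length 1161 can keep its discrepancy as low as 2.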

I've been thinking about narrow AI because I wondered whether human-level AI is absolutely essential for the intelligence explosion (Singularity). Already in 2014 there are a few examples of computer intelligence surpassing human intelligence, and these narrow-AI programs will logically be refined into vastly more competent narrow AIs.

Even if human-level AI is impossible, I am sure narrow AI alone will be sufficient for the intelligence explosion, the Singularity. Consider how in 2009 "Robot Adam" (the Robot Scientist) designed and then carried out its own experiments. Adam made a discovery that had eluded human scientists since the 1960s.

The Daily Mail wrote regarding Robot Adam: "A robot called Adam that can think up theories and test them with almost no human help has become the first machine to make a new scientific discovery."

Consider also how Watson won Jeopardy!. IBM researchers subsequently programmed Watson for another breakthrough: the development of debating skills. Engadget wrote:

"IBM's Watson supercomputer is already good at finding answers to tough questions, but it's going one step further: it can now argue an issue when there's no clear answer. A new Debater feature lets the machine take a given topic, scan for relevant articles, and automatically deduce the pros and cons based on the context and language of any claims. In a demo, Watson took 45 seconds to scour millions of Wikipedia articles and make cases both for and against limiting access to violent video games. It's likely that many people would take much longer, even if they're well-informed on the subject."

PopSci also addressed Watson's debating capacity: "The computer's new Debater function is what it sounds like: after being given a topic, Watson will mine millions of Wikipedia articles until it determines the pros and cons of a controversial topic, and will then enumerate the merits of both sides. Argument over. Move along."
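IBM has not published Debater's internals, but the pipeline the quotes describe — scan documents, find claims about a topic, and sort them into pros and cons — can be caricatured in a few lines. Everything below (the cue-word lists, the sample text) is invented purely for illustration; the real system uses vastly richer language analysis:

```python
# Toy caricature of a pro/con argument miner: split documents into
# sentences mentioning the topic, then classify each by stance cue words.
# Cue lists and sample text are invented, not from Watson.
import re

PRO_CUES = {"benefit", "improve", "protect", "help"}
CON_CUES = {"restrict", "censor", "harm", "ineffective"}

def mine_arguments(topic, documents):
    pros, cons = [], []
    for doc in documents:
        for sentence in re.split(r"(?<=[.!?])\s+", doc):
            if topic.lower() not in sentence.lower():
                continue
            s = sentence.lower()
            if any(cue in s for cue in PRO_CUES):
                pros.append(sentence)
            elif any(cue in s for cue in CON_CUES):
                cons.append(sentence)
    return pros, cons

docs = [
    "Limits on violent games may protect young children. "
    "Critics say such limits censor artistic expression."
]
pros, cons = mine_arguments("limits", docs)
print(pros)  # the sentence containing "protect"
print(cons)  # the sentence containing "censor"
```

The gap between this toy and Watson — which must weigh context, negation, and the credibility of claims — gives a sense of why Debater was considered a breakthrough.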

Consider how in 2013 Google's Deep Learning system began thinking for itself in ways its programmers could not understand. The Register wrote: "This means the internet giant may need fewer experts in future as it can instead rely on its semi-autonomous, semi-smart machines to solve problems all on their own."

Note also that in 2014 there were at least two examples of AI helping to extend life, via cancer detection and drug discovery.

Note how in 2014 a chatbot was reported to have passed the Turing Test. VentureBeat wrote: "Since programmers began seriously grappling with the impending reality of intelligent computers in the 1950s, pioneering inventor Alan Turing said that the first big milestone would come when we cannot distinguish between computers and humans in conversation."

Setting narrow AI aside and considering general computing alone, we can see that humans, via mere computers, are achieving marvellous breakthroughs. Year after year, human-driven breakthroughs happen at an accelerating pace. In 2014 researchers discovered a technique to create matter from light. Engadget wrote:

"Researchers at Imperial College London have discovered a technique that should produce electrons and positrons by colliding two sets of super-energetic photons. To create the first batch of photons, you have to first blast electrons with a laser, and then shoot them at a piece of gold; you produce the other batch by firing a laser at the inside of a small gold can to produce a thermal radiation field. If you collide the two photon sources inside the can, you should see electrons and positrons spilling out."
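The process the researchers propose to realise is two-photon (Breit–Wheeler) pair production, $\gamma\gamma \to e^+e^-$. For two head-on photons with energies $E_1$ and $E_2$, a pair can be created only when the collision energy reaches twice the electron rest energy:

\[
s = 4 E_1 E_2 \ge (2 m_e c^2)^2
\quad\Longleftrightarrow\quad
E_1 E_2 \ge (m_e c^2)^2 \approx (0.511\ \mathrm{MeV})^2 ,
\]

which is why both photon sources in the experiment must be so energetic.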

Or consider #graphene transistors. Gizmodo wrote: "The Berkeley Lab's new device is only six atomic layers thick—hence the 2D nomenclature—and leverages graphene as the gate, source, and drain; as well as hexagonal boron nitride (h-BN) as an insulator and molybdenite (molybdenum disulfide) as the channel. Each single-atom-thick layer was first mechanically exfoliated (shaved off a larger block of material) then laid carefully on a flexible silicon wafer. Van der Waals forces hold the six layers together rather than, say, chemical covalent bonds. Since each layer is individually generated and then placed on the substructure, researchers are able to minimize structural flaws at the molecular level."

In 2014 we have significant scientific progress, via humans empowered by computers, happening at an accelerating rate. We also have some amazing rudimentary narrow AIs, none more than five years old. Let's ignore the possibility of human-level AI and focus on what is already possible. If we merely add 31 years of progress and refinement to what we already know we can do, I am sure we will see an intelligence explosion.

Human-level AI would make the intelligence explosion absolutely certain, but a mere 31 years' worth of acceleration applied to 2014 technology could, I think, be sufficient on its own. We are on the edge of a 3D-printing, robotics, synbio, narrow-AI, Internet-of-Things, ubiquitous-computing revolution. Everything is becoming smart, and this low-level narrow intelligence could easily be enough, collectively, to ensure explosive rewards in 2045.

#computing #bioinformatics #data #narrowAI #bigdata #robotadam #Erdősdiscrepancy  
