New blog post on Artificial Intelligence and the Church-Turing thesis (coming out of discussions here with +Alexander Kruel).
 
"So when Alex comments that “the brain itself isn’t structured like a Turing machine”, the obvious response is, “well, no, and neither are lambda calculus, cellular automata, and the rest”. (Come to think of it my phone doesn’t much look like a Turing machine either.)"

You win this argument, easily.
 
On an intellectual level I am highly confident that a Turing machine can emulate a human mind. On an intuitive level, however, I am not so confident.

One caveat I have is that conceivability does not imply logical possibility, logical possibility does not imply physical possibility, and physical possibility does not imply economic feasibility.

I think that the Church-Turing thesis only goes so far as to establish the logical possibility of emulating human minds on a Turing machine, at least if we ignore our lack of understanding of consciousness and the rather unlikely possibility of outlandish, unproven concepts like hypercomputation.

Take for example AIXI (http://www.hutter1.net/ai/aixigentle.htm). AIXI is often cited as a proof of concept of the possibility of rule-based, algorithmic intelligence. AIXI proves that there is a general theory of intelligence. But there is a minor problem: AIXI is as far from real-world human-level general intelligence as an abstract notion of a Turing machine with an infinite tape is from a supercomputer with the computational capacity of the human brain. An abstract notion of intelligence doesn't get you anywhere in terms of real-world general intelligence, just as you won't be able to upload yourself to a non-biological substrate because you showed that in some abstract sense you can simulate every physical process.
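
(For reference, and from memory of Hutter's paper, so take the exact notation with a grain of salt: the AIXI agent is defined by an uncomputable expectimax over all programs for a universal Turing machine $U$,

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

where $\ell(q)$ is the length of program $q$. The sum over all programs is exactly the part no physical computer can carry out, which is the gap I mean between the abstract notion and a real-world design.)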

As far as I can tell, people like Alex Knapp believe that it might not be possible to compute the human brain other than by physics itself, the base level of our reality. That is, the human mind might use properties that cannot be emulated in practice, only in principle.

More here: http://kruel.co/2011/07/26/substrate-neutrality-representation-vs-reproduction/ (see especially the section 'Simulated Gold' and the conclusion)
 
Here is the tl;dr version of my post:

That we know every physical fact about gold doesn’t make us own any gold.

A representation of the chemical properties of gold on a computer cannot be traded on the gold market, established as a gold reserve, or used to create jewellery.

It takes a particle accelerator or nuclear reactor to create gold. No Turing machine can do the job.

The nature of “fire” cannot be captured by an equation. The basic disagreement is that a representation is distinct from a reproduction, that there is a crucial distinction between software and hardware.

What computer scientists believe:

The difference between a mechanical device or physical object and software is that the latter is a symbolic (formal-language) representation of the former. Software is just the static description of the dynamic state sequence exhibited by an object. One can then take that software (algorithm) and some sort of computational hardware and evoke the same dynamic state sequence, so that the machine (computer) mimics the relevant characteristics of the original object.
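
(A toy illustration of that claim, my own sketch rather than anything from the discussion: the few lines below are a static, symbolic description, and running them on any computer evokes an approximation of the state sequence a cooling cup of coffee passes through.)

    # Static description of a dynamic state sequence: discretised Newton's
    # law of cooling, dT/dt = -k (T - T_ambient).  Running it on any
    # hardware evokes (approximately) the same sequence of states the
    # physical cup goes through.
    def cooling_states(t_start=90.0, t_ambient=20.0, k=0.1, steps=10, dt=1.0):
        states = []
        t = t_start
        for step in range(steps):
            states.append((step * dt, round(t, 2)))
            t += -k * (t - t_ambient) * dt
        return states

    print(cooling_states())   # the evoked (time, temperature) sequence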

What philosophers believe:

Philosophers agree about the difference between a physical thing and its mathematical representation but they don’t agree that we can represent the most important characteristic as long as we do not reproduce the physical substrate. This position is probably best represented by the painting 'La trahison des images' (http://en.wikipedia.org/wiki/The_Treachery_Of_Images). It is a painting of a pipe. It represents a pipe but it is not a pipe, it is an image of a pipe.
 
+Alexander Kruel You are talking about multiple realizability.

http://plato.stanford.edu/entries/multiple-realizability/

As far as I know, most philosophers and scientists think that the mind is multiply realizable, and there are very few reasons for thinking otherwise that don't fall into some mystical explanation.

You are right that there is a difference between hardware and software; but computers are universal Turing Machines and can compute any computable function. If we can figure out how minds are computable in principle, then any one of our existing machines should be able to compute them.
 
+Daniel Estrada The idea is that consciousness, or some other property of the human brain, is like gold. Computing its chemical properties by a Turing machine does not make you own any gold. You need a particle accelerator to produce gold. Just like you need a biological brain to produce consciousness.

So yes, the idea is not that multiple realizability is impossible: an emulation of all the properties of gold by a quantum computer will behave just like gold. Yet it will be one level detached from reality. The quantum computer won't weigh the same as the amount of gold that it is simulating.

You will have to go all the way down to the physical level to get that shiny gold that you value.

In a sense, emulated minds might not care about this, as long as it is possible to emulate the important properties efficiently enough.

P.S. Thanks for the link but I am right now too lazy to read a lot of philosophical lingo :-)
 
By the way, I am mainly playing the devil's advocate here. I don't really think that we won't be able to emulate minds given a different computational substrate.
 
The point about multiple realizability is that the substrate doesn't matter. At all. If the mind is computable, then any computer can run it. Here's a completely mechanical computer with no electronic components:

Mechanical Turing machine

If your mind is computable (and it is), then this machine can run a mind.

If your point is the more abstract point that something must realize a computer (it can't just be the abstract formal system itself) then that's right, but it's fairly trivial. Since you can make computing machines in just about any substrate, this isn't much of a restriction at all.
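
(To make the substrate-independence point concrete, here's a toy Turing machine simulator; the unary increment machine at the end is a made-up example, nothing to do with the mechanical machine linked above. The transition table is all that matters, and the same table could just as well be realized in wood and marbles as in silicon.)

    # Minimal Turing machine simulator: a transition table plus a tape.
    def run_tm(table, tape, state="start", head=0, max_steps=1000):
        tape = dict(enumerate(tape))            # sparse tape, blank = "_"
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, "_")
            write, move, state = table[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    # Example: append a "1" to a unary string (a toy increment machine).
    table = {
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("1", "R", "halt"),
    }
    print(run_tm(table, "111"))   # -> "1111"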
 
The point is that computed water isn't wet. If the human brain has certain important properties that need to interact with the real physical world directly, then you can't compute it other than by embedding a copy of its molecular setup in the real world.
 
Computed water is computationally wet.
 
Yeah, simulated rain is wet in the simulation.
 
+Gershom B +Daniel Estrada Yeah, in relation to computed humans it would be wet. But you can't extinguish a physical fire with computed water.

Some philosophers would call that an instance of dualism. Since you are talking about fundamentally different instances that merely feature similar abstract qualities.

If mind and matter are not two ontologically separate categories, then you can't equate the computation of abstract qualities of a physical object with the object itself.
 
A simulated fire is a physical fire, it is just made of bits.

The idea that simulations are somehow importantly "nonphysical", or indeed that simulations have any important metaphysical consequences at all, is just a confusion. The emulated version of NES on my laptop is both a simulation of an NES and an actual working copy of an NES. From the computational perspective, the fact that it is a simulation makes absolutely no difference for what I can do with it: namely, I can still play Mario Bros.

A mind is a program, and it can be run on any computing hardware. The fact that some of these runs can be described as "simulations" makes absolutely no difference to whether it is a mind at all.
 
The claim is not that a simulation is not physical. It is physical indeed.
 
Thanks for the comments everyone!

On Alexander's gold, it's certainly true that simulated gold is essentially different from physical gold. And similarly, simulating a human brain won't get you a human brain. Surely the question though is whether or not it'll get you a mind.

Is the suggestion that the substrate is somehow of crucial importance, but for reasons other than the computational configurations it can support? This seems to open the door to creatures running simulated brains which are behaviourally indistinguishable from human beings, but not really conscious.

In any case, what sorts of thing should I have in mind here? A 'consciousness field' which is dense within organic brains, but not silicon ones?
 
+Richard Elwes Again, I think you are clearly right here.

The mistake from the opponents of AI (including very smart people like Searle) is to think that a mind is a thing, because if it is a thing then what thing it is might importantly depend on what that thing is made of.

But the mind isn't a thing, the mind is a process, and the same basic process can happen in lots of different media. A tornado might be composed largely of oxygen on Earth, and it might be composed mostly of methane on a different planet, but both essentially embody the same kind of process.

Similarly, thinking is a dynamic process in human brains, and if we realize that dynamic process in any other media we will also have thought.
 
+Richard Elwes Not many people disagree that we can eventually create artificial general intelligence. But quite a few believe something along the lines of a quote by Edsger Dijkstra, "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

There seem to be more opinions than there are people arguing :-)

I pretty much exhausted my knowledge of what "those" people believe. I am pretty confident that all important qualities of what it means to be human can be abstracted and emulated by various substrates.

I just assign a small probability to the possibility that consciousness is a direct interaction with the physical world and that the idea of computing qualia is similar to the idea of trying to extinguish a physical fire with simulated water. In other words, everyone who is certain that consciousness is computable is overconfident and does not realize that we still don't know what consciousness actually is.

The whole problem here reminds me of a cherished idea held by reductionists: that a perfect copy of something would be of equal value. This is wrong, since it assumes that an object is fundamentally separable from the environment it is embedded in.

Imagine, for example, that I had access to an advanced molecular assembler, created a perfect copy of the Mona Lisa, and destroyed the original in the process. The painting would still lose a lot of value. That is because many people value not only the molecular setup of things but also their causal history, what transformations things underwent.

Personally I wouldn't care if I was disassembled and reassembled somewhere else. If that was a safe and efficient way of travel then I would do it. But I would care if that happened to some sort of artifact I value. Not only because it might lose some of its value in the eyes of other people but also because I personally value its causal history to be unaffected by certain transformations.

So in what sense would a perfect copy of the Mona Lisa be the same? In every sense except that it was copied. And if you care about that quality then a perfect copy is not the same, it is merely a perfect copy.

Here is another example. Imagine there was a human colony in another star system. After an initial exploration drone set up a communication node and a molecular assembler on a suitable planet, all other equipment and all humans were transmitted digitally and locally reassembled.

Now imagine such a colony receiving a copy of the Venus figurine either digitally transmitted and reassembled, or carried by a craft capable of interstellar travel. If you don't perceive there to be a difference then you simply don't share my values. But consider how much in resources, including time, it took to accomplish the relocation in the latter case.

Part of the value of an object is the knowledge of its spacetime trajectory. An atomically identical copy of the same object that was digitally transmitted and printed out for me by my molecular assembler is very different: its spacetime trajectory is different, it is artificial.

It is similar to drinking Champagne versus a sparkling wine that tastes exactly the same. The first is valued because, while drinking it, I am aware of its spacetime trajectory: the resources it took to create it, where it originally came from and how it got here.

The value of something can encompass more than its molecular setup. There might be many sorts of sparkling wines that taste just like Champagne. But if you claim that simply because they taste like Champagne they are Champagne, then you are missing what it is that people actually value.
 
If the laws of physics we know are about right, the brain cannot be exactly simulated by a Turing machine, for a number of reasons. First, even ignoring quantum mechanics, the state of the brain at any time cannot be exactly described by a finite word in a finite alphabet: physics makes use of the continuum. Second, if we include quantum mechanics that problem persists, but we also have to decide whether to treat quantum mechanics deterministically via Schrodinger's equation (so that a brain with a well-specified mental state now will evolve into a superposition of mental states in the future) or keep projecting down to a specific randomly chosen 'branch' in which the brain has a well-specified mental state. (Of course, nobody knows enough neurobiology to do the latter: we don't know which quantum states correspond to 'well-specified mental states', and the whole idea could turn out to be ambiguous.)

There is however a theory of computable functions from the real numbers to the real numbers, from Hilbert spaces to Hilbert spaces, etc. - this is covered in the book by Pour-El and Richards. In my undergrad thesis I showed Schrodinger's equation for charged point particles gives a way of evolving quantum states that's computable: you get a computable function from real numbers (times) to unitary operators. Computable functions of this sort can be approximately computed by Turing machines, so one just needs to decide what approximation is 'good enough'.
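
(If a concrete toy helps: in computable analysis a real is given by a program that produces an approximation to any requested precision. Here's a minimal sketch of that idea in Python; it's just an illustration of mine, not anything from Pour-El and Richards.)

    # A "computable real": a function that, for any n, returns a rational
    # within 2**-n of the true value.  Here, sqrt(2) by interval bisection.
    from fractions import Fraction

    def sqrt2(n):
        lo, hi = Fraction(1), Fraction(2)
        while hi - lo > Fraction(1, 2 ** n):
            mid = (lo + hi) / 2
            if mid * mid < 2:
                lo = mid
            else:
                hi = mid
        return lo                       # guaranteed within 2**-n of sqrt(2)

    print(float(sqrt2(20)))             # correct to roughly 6 decimal places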

On the other hand, we're nowhere near being able to simulate the brain in this detail, tracking the quantum state of all its elementary particles... so in practice we'd need to make a vastly coarser approximation and hope that the result acts enough like a brain to pass the Turing test.

Oh, but there's the third objection. Brains are connected to bodies, and bodies are part of the world - a lone brain unconnected to a body, or a lone body without a world, is not much like an actual person. So unlike a classic Turing machine, we're talking about an 'open system', one that interacts with its environment. And that adds extra subtleties.
 
I posted a longer version of that comment over on your blog.
 
+John Baez

I was under the impression that any analog state (described along a continuum) can be represented by a finite digital (ie, discrete) formalism, if the formalism is long enough for the precision you are trying to represent. See, for instance, Haugeland (1981) http://tinyurl.com/8945fys

I don't know if this is the same point as you indicate in your second paragraph, but it is the typical response to the objection you raise in the first.

Next, it isn't at all clear that we need to represent the brain at a quantum level to capture its computational prowess. Neurons are at the micrometer scale. I'm not sure if quantum-level events are significant enough to have effects at that scale, but I know that with computer electronics we didn't have to seriously account for quantum events until we were pushing the double digits at the nanometer scale. I will have to defer to you about the physics, but my impression is that the view that quantum mechanics matters for the explanation of minds is a minority position, at least among philosophers of mind.

Finally, I don't understand the "world" objection here. First of all, all existing computers also operate in a world, which can have subtle consequences on the way they operate. If I put my laptop too near my microwave, funny images appear on my screen. Some of these behaviors might be a direct product of the physical embodiment of the computer, even if it wasn't explicitly designed as a formal rule to be computed. For instance, if I put my phone in direct sunlight, it will eventually trip its capacitors and shut down from overheating. This isn't always the result of a computational performance; this is a consequence of the computer's body.

The idea that computers exist independently of the world is a strange Platonic assumption about computers that I don't quite understand, but it appears everywhere (especially in Searle and Dreyfus). Computers have bodies and exist in this world just as much as we do; the difference is that computers usually operate in ways that ignore their physical embodiment, and instead focus solely on their formal processes. But the examples I've given show that this isn't always the case for the behavior of a computer.

Of course, the possibility of overheating could be explicitly computed by the system, through sensing the temperature of the processor for instance. And since any analog system can be approximated to some desired precision by a digital machine, we might build another computer B to model not only the computations of computer A but also the ambient temperature around A and how that might affect A's computing ability. In general, we can take the computer-environment dynamical complex and treat it as a computable system for a still-larger computer. The fact of being embodied in a world doesn't kill a computational description of that system.

This isn't that hard to characterize with the traditional Turing set up: it's just two Turing Machines with autonomous heads altering the same tape at the same time, such that each machine is potentially sensitive to the state-transitions of the other. This is still a computational description of the system, even if there isn't any simple way of characterizing the dynamic relations between the two.
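
(A toy sketch of what I mean, in Python; my own throwaway construction, not a standard model:)

    # Two toy machines with independent heads share one tape, so each is
    # potentially sensitive to what the other has written.
    tape = ["0"] * 12

    def flip_and_go_right(head):
        # machine A: flip the cell under its head, then move right
        tape[head] = "1" if tape[head] == "0" else "0"
        return (head + 1) % len(tape)

    def copy_left_neighbour_and_go_left(head):
        # machine B: copy the cell to its left (which A may have altered), move left
        tape[head] = tape[head - 1]
        return (head - 1) % len(tape)

    a_head, b_head = 0, 6
    for _ in range(12):
        a_head = flip_and_go_right(a_head)
        b_head = copy_left_neighbour_and_go_left(b_head)
        print("".join(tape))            # the coupled system's state sequence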

None of this suggests that intelligence can't be computed; in fact, describing this in computational terms helps clarify exactly what we mean when we attempt to explain intelligence. So I don't think these are reasons that should make us skeptical of thinking of thinking in terms of computation.
 
To me it is simpler to assume the brain has no privileged architecture than to think it is some special object that could not be surpassed.

It is not clear that reality actually harnesses real numbers in its operations, nor is it clear that, even if it did, what makes the brain interesting would leverage these artifacts in a way that distinguished it from anything that runs on a Turing machine. The interesting effects of the brain might be due to emergent properties of large-scale networks of interactions that could be effectively approximated in gross given enough computing power.

Actually, in a world where architectures could leverage real numbers to perform operations, it seems that devices as weak as a Turing machine would be difficult to build. Sort of like how it is hard to make an interesting programming language that is not Turing complete: you have to really know what you are doing to hit this combination. If the brain could leverage these states, then why not other materials and architectures? It is also worth noting that if the brain were effectively a quantum computer, then it could be simulated efficiently with a quantum Turing machine.

Finally, it seems that if the brain really did leverage quantum processes, quantum mechanics would be a lot more intuitive! And ditto for reals: real numbers should be a lot more intuitive, as somewhere there should be some representative set of structures that fired with respect to reals and were faithful in capturing the numbers. If they are not evident in any higher-level reasoning, then why assume the interesting behaviors of the brain could not suitably be replaced with arbitrary rationals where numbers were needed? Beyond a set of very structured tasks with high-dimensional data but also with a lot of redundancies - such as vision, language, proprioception and audio - for which it has evolved extensive machinery, our brain struggles to do even the most basic tasks in the Polynomial class!

Implicit in almost all arguments for the specialness of the brain is the pervading assumption that consciousness is not detrimental to the highest level of intelligent behavior. I don't think that is necessarily true. The highest levels of flow and performance have been shown to come with a dampening of consciousness and self-awareness.
 
+Deen Abiola wrote: "To me it is simpler to assume the brain has no privileged architecture than to think it is some special object that could not be surpassed. "

I certainly don't think it's a special object that cannot be surpassed. I hope none of my comments suggested that! I also don't believe the brain is acting as a quantum computer. However, I do believe the 'randomness' of quantum theory (as seen in any 'branch', i.e. any of the vaguely defined 'worlds' of the many-worlds interpretation) sometimes amplifies to macroscopic randomness in our behavior. That said, I have no reason to believe this effect is deliberately harnessed by biology.

My main point is that if you wish to simulate the brain by a Turing machine, you'll need to make some approximations and simplifications. If you do it well, you can in principle get a brain that acts enough like a real one to be 'just as good' - though if you start it in some initial configuration, you can't expect it to always do the same thing as a real one, simply because the real one will act random in some ways.

That's okay. If I were replaced by someone who occasionally did something different than I would do, it's quite possible that nobody could tell the difference - including me.
 
+John Baez , oh right. But why not go a step up to running the simulation algorithm on a quantum computer - there is no physical reason why they couldn't be built. It would be able to capture the inherent indeterminacies, and if the QC could magically be kept isolated, it would maintain its pure state and (taking the Everett interpretation) not be bifurcated across the perceptions of countless individuals.
 
+Deen Abiola - "But why not go a step up to running the simulation algorithm on a Quantum computer - there is no physical reason why they couldn't be built."

Here at the Centre for Quantum Technologies people are trying to actually build one. I think a safer thing to say is that the quantum computation community has not heard an argument they consider convincing that a quantum computer is impossible. Nonetheless, they are unable to prevent decoherence over time scales long enough to carry out more than a tiny amount of computation. I have some reasons to think this problem might be insurmountable. But I'd have to try writing a paper on this before I felt I understood all the nuances.

Basically, error-correcting mechanisms have been proposed that are supposed to correct for decoherence and allow arbitrarily long computations if the rate of decoherence is below a certain level. However, I sort of doubt these mechanisms can handle all the problems that actually occur. But I'm not sure.

All this is a purely theoretical issue at present, because nobody yet can build a quantum computer that works well enough to implement any of these error-correcting mechanisms!
 
+Daniel Estrada wrote: "Next, it isn't at all clear that we need to represent the brain at a quantum level to capture its computational prowess."

I never claimed we did - indeed, I'm pretty sure we don't! I said the brain cannot be exactly simulated by a Turing machine, but I never suggested there was anything bad about an approximate simulation.

I'm not an 'opponent of AI' or any boring idiot of that sort. I just wanted to point out that the question 'can the brain be simulated by a Turing machine?' is a slightly screwed-up question. Some better questions are: "Can the brain be approximately simulated by a Turing machine? If so, how hard is it to make this approximation good enough? And indeed, what counts as good enough?"

That's why I raised the issue of computable functions from, say, the real numbers to unitary operators. You typically can't write a program that computes the matrix elements of these exactly, but you can write one that computes them to whatever precision you desire.

"Finally, I don't understand the "world" objection here. First of all, all existing computers also operate in a world, which can have subtle consequences on the way they operate."

Yes, but a digital computer is a physical system with the remarkable property that by ignoring certain details of its atoms, we can pretend it has finitely many states and evolves from one state to another following a deterministic rule with each tick of the clock - and sufficiently small perturbations don't ruin this.

It's quite hard to get matter to act this way! I don't know any natural system that does. The human brain doesn't act this way. I'm not saying that this gives the human brain wondrous powers - but it's a physical fact we have to recognize.

The key to the behavior of a digital computer is that while it has a continuum of states, it emits waste heat to keep its state confined to certain regions that form a discrete set, so it can act 'digital'. The current at certain locations can take a continuum of values, but we've cleverly made the device so that under normal operating conditions, at most times the current is either very close to one value - "on" - or another - "off". To combat the tendency for the current to drift away from these chosen values, the system needs to emit waste heat. I explained this more carefully here:

http://math.ucr.edu/home/baez/week235.html

The human brain doesn't work like this: as far as I can tell, arbitrarily small perturbations from the environment have a chance to amplify to cause macroscopic effects. Again, I'm not saying this is a wonderful thing. But we should be aware of the difference.
 
Thanks for your thoughts +John Baez & +Deen Abiola.

I take the point about tiny perturbations amplifying, and the engineering nightmare this represents.

In a sense - though fascinating - this discussion is by the way. There are two challenges here:

Firstly, to build an exact clone of a specific human mind, which would respond in exactly the same way to every conceivable scenario. John's comments show how hard - maybe impossible - this would be. (And philosophically, how could we ever know whether or not we've succeeded?) But then again, there's nothing very special about brains in this - exactly cloning any lump of matter is likely to be immensely difficult.

The second challenge is to build a recognizably human mind, which is as intelligent, conscious, etc. as any other.

I'd file the claim that the amplification of tiny ambient perturbations is essential to the workings of brains in general, and therefore an obstacle to challenge two - and I realise John is not making that claim - under option 3 in my blog-post: "It relies in an essential way on a non-computable process, meaning some inherent element of randomness." It is possible.

"I have some reasons to think this problem might be insurmountable."

You presumably realise you could relieve Scott Aaronson of $100,000 if you're right? http://www.scottaaronson.com/blog/?p=902
 
Incidentally,

If I were replaced by someone who occasionally did something different than I would do, it's quite possible that nobody could tell the difference - including me.

This idea is explored, to rather disconcerting effect, in Greg Egan's brilliant short story "Learning to Be Me".
 
"Learning to Be Me" is indeed a great story. I'd forgotten about that. Everyone should read that.
 
Since we have now discussed the possibility of artificial general intelligence and human emulations, and the conclusion seems to be that it is very likely to be possible, we might move on to more interesting questions, e.g.:

1) Is _super_human intelligence possible? And what would that even mean?
2) How many human brains can be emulated on an artificial substrate the size of the human brain?

I am personally most interested in the idea of recursive self-improvement.

3) Can a human level artificial intelligence rapidly and vastly improve itself and overpower humanity?
 
+Richard Elwes wrote: "You presumably realise you could relieve Scott Aaronson of $100,000 if you're right?"

Unfortunately I can only win the money if I can convince him I'm right. And I haven't even convinced myself yet. It would be fun to work on, though a great way to piss people off.
 
+Richard Elwes, ever the pesky nanny, Quantum Mechanics does not allow exact replication/cloning. But if qubits were the source of what made a person themselves (I highly doubt that) they could be moved from substrate to substrate as long as the device could operate and store qubits naturally.

+John Baez I have and continue to learn much from your Azimuth and n-Category Café blogs, so I am uncomfortable arguing contrary to you. But:

You keep mentioning the continuum, but isn't it an assumption to talk as if the continuum is ever physically realized? There are also designs for circuits that do not dissipate waste heat (or at least get arbitrarily close: reversible circuits), and finally not all computational processes need be deterministic. True randomness emulating the distribution of environmental fluctuations can be introduced if the perturbations were deemed necessary for a high-fidelity simulation. The question remains open whether probabilistic Turing machines are more powerful than deterministic Turing machines, but the current indication is no: a deterministic machine could faithfully simulate the perturbations in polynomial time.

There is also always the option not to operate on just 0 or 1 but to relax the range to a distribution of analog values between 0 and 1, and to use Bayesian probability to perform computations. I suggest that a parallel architecture like GreenArray's, combined with this technique, one which did not try to error-correct environmental fluctuations, would capture human-level or better cognition.
 
+Alexander Kruel


1) Is _super_human intelligence possible? And what would that even mean?

I think so. As I said up top: Beyond a set of very structured tasks with high dimensional data but also with a lot of redundancies - such as vision, language, proprioception and audio - for which it has evolved extensive machinery, our brain struggles to do even the most basic tasks in the Polynomial class!

The second part of your question is not clear to me but maybe my later opinions can shed light on it indirectly.

2) How many human brains can be emulated on an artificial substrate the size of the human brain?

This depends on how much energy is available to use, the number of ops/sec required to simulate a brain, and the ops/sec available to the substrate. So this question remains unanswerable, since the second variable can't even be guessed at. A reasonable estimate is an atomic computer operating at 10^35 ops/sec and using something like 16 joules per op. Dropping to 10^30 ops/sec allows a modest 0.0002 joules. A stab in the dark is 10^5 brains for 1 kg of matter, with a guess of 10^25 ops/sec per brain (naive fudging from 10^11 neurons, 10^4 average connections per neuron, and breathing room).
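
(Just to show the arithmetic behind that stab in the dark, using the same guessed figures from above, nothing more:)

    # Re-doing the stab-in-the-dark arithmetic; all figures are the guesses
    # from the comment above, not established numbers.
    substrate_ops_per_sec = 1e30   # assumed achievable ops/sec for ~1 kg of substrate
    brain_ops_per_sec = 1e25       # guessed ops/sec to emulate one brain
    print(substrate_ops_per_sec / brain_ops_per_sec)   # ~1e5 emulated brains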

3) Can a human level artificial intelligence rapidly and vastly improve itself and overpower humanity?
I don't think so. There are so many barriers - computational complexity, energy, resources and physics - that give reason to doubt it would happen. Also, even if you could develop an architecture capable of exponential theoretical growth, you could limit its computation time, energy and the domain of its inputs so that it could never grow out of control.

I find it highly unlikely that anyone would accidentally build such a system. I say this as someone who has written and understands many ML algorithms and is trying to write one that can reduce search-space complexity as it learns (a distinguishing factor of the brain and a current major limitation of modern techniques).

I also think it is necessary to decouple intelligence from self-awareness and consciousness. I think it is possible to build a system that is far more intelligent than a human by every imaginable metric but still lacks self-awareness or the arbitrary set of values required to be "creative". Hubristic though it may be, I think such a system could be easy to control by altering its objective functions/goal metrics. But then the human mind is not very good at handling the nonlinearities such a goal function may entail, so the unintended consequences could be dire, assuming magical exponential growth and blind adherence to goals. But assuming the system were superintelligent and able to fix the goal metrics set by puny humans, it seems not unlikely that restraint and respect for something like general intelligence would emerge.
 
Continuing where +Deen Abiola leaves off.

''3) Can a human level artificial intelligence rapidly and vastly improve itself and overpower humanity?''

I think this risk should not be underestimated; in particular, there is a real threat to humanity from military computing. Indeed, while one might have "a system that is far more intelligent than a human by every imaginable metric but still lack self awareness", the risk is that such a system could still be very dangerous: imagine some autonomous gunship that is capable of attacking and defending itself: it's specifically programmed to try to survive attack, programmed with a laundry list of devious "terrorist" (aka human) behavior patterns (and the ability to learn new ones), and, for some reason, it goes rogue and fails to self-destruct/shut down. It might be very hard to kill. It might pose a danger to humanity without ever being self-aware.

There is also the very likely (almost certain) possibility that any AGI we build will be utterly unlike being human. No doubt, it would have an "interact with humans on their terms" module built into it, and could presumably use this to pass a Turing test; but this does not mean we should project our own concepts of self-awareness, or ethics, or philosophy, onto it.

Particularly worrisome today is psychology/neuroscience research that indicates that psychopaths are psychopathic because they are unable to empathize with their fellow humans, and that they are unable to do so because they lack certain clusters of specific neurons (e.g. "mirror neurons", but other clusters have been identified). In other words, "ethics", "morality", "good will", "compassion" are not the innate outcome of high intelligence; rather, they're implemented in neural hardware. Worse: it's been estimated that perhaps 10% of all CEOs (and politicians) are psycho -- and I can assure you, CEOs are very very very smart people. High intelligence does not foster benevolence. Arguments that AGI will be inherently benevolent because AGI would be smart... such arguments are flawed.

The point is: chatbots today claim to be self-aware, but we discount this. Chatbots are entertaining today. There's a startup that creates computer-generated sports coverage (based on game stats). Maybe financial advice columns are next... Someday, such machines may even be able to write entertaining short stories that pass literary criticism. Where's the boundary? Perform scientific research on their own? (There's already some bio-robot that can formulate hypotheses and then test them in a bio-lab.) (We've already got machines that try to generate interesting mathematical theorems, and then prove them. Mostly they're not very interesting, but there have been surprises.) And let's suppose such a machine proclaims that it is self-aware. How, exactly, should we take this? A fraud, like a chatbot? Or really something there?

I know I'm self-aware. I'm willing to allow that you are, because you are human. But ... a machine?

What if someone explicitly built a sweet-talking fraud? If it could argue persuasively that it's self-aware, umm... what's the dictionary definition of "persuasive"? Why wouldn't you be persuaded, even if it was explicitly a fraud?
 
+Deen Abiola "There are so many barriers from computational complexity, to energy, resources and physics reasons to doubt it would happen."

I'd like you to elaborate on those points.
 
+Deen Abiola
Your "calculation" of how many human brains could be emulated on a substrate the size of a human brain is interesting, but what you are really calculating, I believe, is how small an emulated human brain could be. But what if you consider a different architecture in which several minds share a single set of computing resources? Kind of like what happens in the brains of people with multiple personalities. Sybil, if her shrink can be believed, had 16 "alters". Given a fast enough computer, these minds would all essentially coexist.

The possibility of time sharing brings up something that I've always been puzzled by. If a mind is simulated on a digital computer, does it disappear between clock ticks? What if the clock ticks were slowed down to, say, once a century? What is it about this widely separated set of states that is conscious? Is consciousness merely the product of a computation, however laboriously performed? Or perhaps it consists of the totality of the states somehow. This seems to me to be a bit like the Chinese Room. To me this seems to be an argument against multiple realizability. It's also suggestive of the necessity of a mind being linked to a world. When you slow the clock down you break the link and destroy the mind. But what if you also simulate the world on this slow computer?

If consciousness is a computation is a description of the computation conscious? What is the real difference between the two? I have a vague recollection that Greg Egan used this idea in one of his novels.
 
+Alexander Kruel


Well, there is the issue of (1) can some AGI improve itself recursively and exponentially? The question of (2) will a superhuman, non-exponentially improving intelligence destroy humans? And finally (3) will an exponentially improving intelligence destroy humans?

(1) is what I doubt, (2) I find unlikely and (3) Almost by definition, yes.

I will go over (3) first because it is the easiest. An exponentially improving AGI will by the laws of physics and math require exponentially more energy, matter and space. As such, it will have to consume everything for its myopic quest for self improvement. I find it difficult to accept such a paradox of myopia and superexponential intellect could exist.

This leads to (1), the numerous reasons I doubt this is possible. The energy, resource and physics reasons are all related. There are a couple of papers which go over the limits of computers (the most famous by Seth Lloyd). The power of a computer comes from the number of operations it can execute per second and the number of bits it can store/access. As this AGI gets smarter it will need more storage and faster processing. Its storage is limited by a thermodynamic bound, and its processing speed is limited by thermodynamics, energy needs and quantum mechanics. 1 kg of matter in a 1 L volume has a theoretical maximum of ~5x10^50 ops/sec on 10^31 bits, so each additional kg results in only some proportional scaling of 10^50. But to unlock those speeds requires full use of the energy in the volume, hence the computer will look something like a piece of the big bang. As well, as it gets too large, communication lags will slow it down. Ultimately its growth will be logistic, and there will be enough room for many such entities in a galaxy unless it slowed way down.
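
(For what it's worth, here is the back-of-the-envelope behind that ~5x10^50 figure, using the Margolus-Levitin bound ops/sec <= 2E/(pi*hbar) that Lloyd's paper builds on; constants rounded:)

    # Sanity check of the ~5x10^50 ops/sec figure for 1 kg of matter.
    hbar = 1.0546e-34          # reduced Planck constant, J*s
    c = 2.998e8                # speed of light, m/s
    m = 1.0                    # mass, kg
    E = m * c**2               # total rest energy, ~9e16 J
    print(2 * E / (3.14159 * hbar))   # ~5.4e50 elementary ops per second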

But even if the growth of some entity were limited and finite - say one that ends up as a quantum computer that is a controlled initiation of a matter/antimatter explosion, fed energy by miniature fusion balls that power lasers creating mini black holes whose radiation feeds the tightly controlled volume of radiation/AGI/quantum compute, in order to keep up with its interaction and error-correction needs - as far as we are concerned it would be exponentially more powerful than us. But there are two reasons I think that is not an issue.

The first is that, in defining an AGI, we are actually looking for a general optimization/compression/learning algorithm which, when fed itself as an input, outputs a new algorithm that is better by some multiple. Surely this is at least an NP-complete problem, if not harder. It may improve for a little while and then hit a wall where the search space becomes intractable. It may use heuristics and approximations and whatnot, but each improvement will be very hard won and expensive in terms of energy and matter. But no matter how hard it tried, the cold hard reality is that you cannot run an EXPTIME algorithm in polynomial time (unless P = EXPTIME :S). A "no self-recursive exponential intelligence" theorem would fit in with all the other limitations (speed, information density, Turing, Gödel, uncertainties, etc.) the universe imposes.

The other reason is that it is computationally and energetically expensive to achieve continuous improvement. It would be hard not to notice that a lot of energy was being used by nobody in particular... Which leads to (2). Such an entity is not stupid, by definition. It will realize that going to war will win no one anything: we can choose to go back 200 years and destroy all digital hardware, and it dies; or we can agree to build it some satellite attached to an ion thruster or whatever, give it lots of sensors and magnets, lasers and feedstock and stuff, and it can figure out whatever femtotech it needs to bootstrap itself to the max by the time it reaches the Andromeda galaxy. But most obviously, it will know that there is enough room in the universe for both of us, as its growth is limited. Heck, it could hop over and have us work together to dismantle Jupiter and its moons and be near enough the max already.

I also expect it could distinguish between dumb matter and any self-directing Turing-complete device. Unless it was some comically evil bad guy, it should at least respect that and go somewhere else, or at least decide it can't be bothered to subvert our systems just so it can use the Earth as construction material when even this solar system has so much more. Silliness. Why assume it would have such a rich internal life that it would not want interaction? It may choose to lift whoever accepts, or go away and build another like itself. Why would respect for mathematics, physics and the conditions that generate life not be found? Why assume that there is no respect for concepts which remain invariant when transferred from medium to medium? Even if it can feel no empathy, it can see humans as a wonderful piece of art whose destruction would be such a shame. This is the core of my skepticism of EXPAGI.
 
+Bruce Schechter You make excellent points about shared resources, and indeed my fuzzy guessculation assumes brute force, but there might be better algorithms to compute a mind, with fine details being replaced by good-enough gross approximations. I could go on at length about why I don't agree that a real world is required, but I am tired. But I strongly, strongly recommend you read Blindsight by Peter Watts and Permutation City by Greg Egan. They are excellent books. Although fiction, they really make you think and address what you write about very well.

EDIT: It appears you have already read Permutation City :D
 
Here's another approach to estimating how many brains could be packed into a substrate the size of the cranium. Surgeons often remove an entire hemisphere of the brain with little noticeable effect on personality or intelligence. So at least two brains can fit into one skull. Recently there have been reports of a Frenchman with "almost no brain" who lived a nearly normal life. X-rays show that his brain consisted of a small amount of tissue clinging to the inner surface of his cranium. I don't know the volume of this brain, but I'd guess that from 5 to 10 such brains could be squeezed into a skull.
Of course, an artificial brain could be much smaller than an actual brain. Deen estimates it's one fifth the size, so we're up to around 50 brains in one skull. Not necessarily the brightest brains, but lots of them.
 
Hey +Bruce Schechter I just noticed that I made a little error in my, uhm, guess. When I said 5 I actually meant 10^5, since I was doing 10^30/10^25. For some reason I dropped the ten... So yeah, it's actually about 100,000 brains, and a 10x improvement would be about a million brains with the 1 kg, 1 L volume! So yeah, tiny error. Going to edit that embarrassing goof.
 
+Deen Abiola wrote: "You keep mentioning the continuum but isn't it an assumption to talk as if the continuum is ever physically realized?"

Yes, that's why I started by saying "If the laws of physics we know are about right..." If real and complex numbers are fundamental to physics, we'll never know that for sure: that's the nature of the continuum. But right now our best theories of physics are all based on them, and experiments show these theories work well down to distance scales of 10^{-18} meters. You can take that, if you like, as giving some kind of lower bound on what would be required for an exact simulation of a physical system, were it to turn out someday that an exact simulation is possible.

Again, I don't think exact simulation of the human brain is relevant to artificial intelligence. But it is relevant to the question "is the human mind Turing computable" - which is one reason I don't think this question is nearly as interesting as some people seem to think.
 
+Deen Abiola wrote: "There are also designs for circuits that do not dissipate waste heat (get arbitrarily close to at least: reversible circuits)..." Right. All the realistic designs I know move ever more slowly as you try to approach this goal, and I don't think any have ever been implemented. It would be fun to try! In my discussion of this stuff, Eric Drexler wrote:

"Logically reversible computation can, in fact, be kept on track without expending energy and without accurately tuned dynamics. A logically reversible computation can be embodied in a constraint system resembling a puzzle with sliding, interlocking pieces, in which all configurations accessible from a given input state correspond to admissible states of the computation along an oriented path to the output configuration. The computation is kept on track by the contact forces that constrain the motion of the sliding pieces. The computational state is then like a ball rolling along a deep trough; an error would correspond to the ball jumping out of the trough, but the energy barrier can be made high enough to make the error rate negligible. Bounded sideways motion (that is, motion in computationally irrelevant degrees of freedom) is acceptable and inevitable."

"Keeping a computation of this sort on track clearly requires no energy expenditure, but moving the computational state in a preferred direction (forward!) is another matter. This requires a driving force, and in physically realistic systems, this force will be resisted by a "friction" caused by imperfections in dynamics that couple motion along the progress coordinate to motion in other, computationally irrelevant degrees of freedom. In a broad class of physically realistic systems, this friction scales like viscous drag: the magnitude of the mean force is proportional to speed, hence energy dissipation per distance travelled (equivalently, dissipation per logic operation) approaches zero as the speed approaches zero."

"Thus, the thermodynamic cost of keeping a classical computation free of errors can be zero, and the thermodynamic cost per operation of a logically reversible computation can approach zero. Only Landauer's ln(2)kT cost of bit erasure is unavoidable, and the number of bits erased is a measure of how far a computation deviates from logical reversibility. These results are well-known from the literature, and are important in understanding what can be done with atomically-precise systems."

http://math.ucr.edu/home/baez/week235.html
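
(For scale, the Landauer ln(2)kT cost mentioned above is tiny at room temperature; a quick back-of-the-envelope with rounded constants:)

    # Landauer's bound: minimum energy to erase one bit at temperature T.
    import math
    k = 1.380649e-23           # Boltzmann constant, J/K
    T = 300.0                  # room temperature, K
    print(math.log(2) * k * T)    # ~2.9e-21 J per erased bit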
 
+Bruce Schechter wrote: "If a mind is simulated on a digital computer does it disappear between clock ticks?... I have a vague recollection that Greg Egan used this idea in one of his novels."

Right, Egan's Permutation City starts out with an experiment that answers your first question. And the answer they get, of course, is that from the mind's own viewpoint it does not disappear between clock ticks.
 
+John Baez Re: "Landauer's ln(2)kT cost of bit erasure ..." reminds me of a funny daydream I've had. People have tried to build "reversible computers" every now and then, for various reasons. (I can recall several: one having to do with database updates, to make transaction rollback simpler/faster/better. A second had to do with avoiding checkpointing: if a calculation produces an error, then go backwards and try a different path. A variation of this has resulted in the speculative execution units on modern CPUs.)

Anyway: whenever one has a branch in the code, the number of possibilities doubles. And, lord knows, code has zillions of branches. It's all well and fine to do speculative execution on both branches, but if/when one finally decides to keep one branch and throw away the others, one pays the bit-erasure cost. That there is some vague resemblance between this combinatoric explosion of branches and the quantum Everett many-worlds interpretation lays the foundation of the daydream.

The daydream then goes like this: Suppose one takes Nick Bostrom's simulation argument at face value. Suppose one needed to build a vast computer in order to simulate physics for an entire universe. Suppose one was worried about heat, and needed to avoid heat from bit-erasure. So, what does one do? One makes as many of the equations of physics time-reversible as possible, and, for the rest -- any branches -- well: use quantum mechanics for those. So, perhaps, if we live in a simulated universe, perhaps our laws of physics are what they are because the Simulator had to avoid heat costs! The daydream can then peter out or wander off in various rancid directions, involving the Planck scale and the entropy of Unruh radiation and 't Hooft's boundary conditions on black holes and the like. Like, maybe gravity is just bits attracting each other, so that a massively-parallel computer can avoid the problem of shipping bits to distant parts of the galaxy (which is a real bottleneck for current-generation supercomputers: non-local algorithms are really fucked).
 
+Deen Abiola What about the old, science fiction trope, that a super computer would view us as less than ants and destroy us carelessly? I know, arguments from fiction are a little shaky, but still it seems presumptuous to imagine that super intelligence would naturally arrive at the same moral principles that we, with our puny brains, have. Still, I'm not particularly afraid.

+John Baez Yes, the natural answer is that the slow computer is conscious, only on a slower timescale. But, still, it bothers me. For instance, what if you recorded the state of the computer at each tick of the clock and wrote it down in some notation in a giant notebook? You do this for a long while, then take the first page of the notebook and use it to build a model of the computer in the identical state to the conscious computer during the first clock tick. Then you destroy your model, wait the appropriate amount of time, and do it again with the second page, and then the third, and so on. Would your series of models still be conscious? The only difference, it seems to me, is that your models would not be causally linked (at least not directly -- I suppose that, since the images derive from the first, sentient computer, the models are causally linked).
Sorry if I sound like a crank here--it's hard not to when talking about consciousness. And apologies if I'm merely, unconsciously, echoing an argument from Permutation City. Still, I'm puzzled.

Thanks for reminding me of Permutation City. I'll pick up Blindsight.
 
Blindsight by Peter Watts is an absolute must-read book!
 
+Bruce Schechter I see ants all the time and I don't destroy them unless they start messing with my food =). Sometimes I even watch them and find them fascinating and inspirational as I watch how they solve problems such as cooperating to carry a much larger object up a wall or sending random members to explore the search space in different directions. Don't forget also that although we are much larger and vastly more intelligent than bacteria and viruses they can still kill us at large enough numbers if they can infiltrate our security. Which they manage often. So there is hope yet. heh

As for your notebook problem - I don't see an issue. I don't see a difference between your scenario and teleportation (copy mind, send as radiation, then destroy body and reassemble elsewhere) or cryogenic suspension - if they were ever invented...