Consciousness: Digging into Computational Equivalence and Physical Equivalence

TL;DR
* The underlying physical properties of a computational system matter.
* Computational equivalence is not enough to host consciousness.
* Real integration in time and space is key.
* This matters and needs to be testable.
Last week I enjoyed an interesting thread in which I discussed computational versus physical equivalence as it pertains to the hard problem of consciousness. It was initiated by a post (https://plus.google.com/u/0/110186693922408613972/posts/X2kuVjNsRN7) where +Nate Gaylinn pondered recent developments in deep learning and whether brain-like algorithms could ever be conscious. It was also the first time in a discussion that I was told I was fundamentally wrong in the same breath as being told I had spoken truthfully. The topic interests me, so I thought I'd expand on it here for wider consumption and criticism.

Semantics

Consciousness
To be fair, I think when Nate referred to consciousness he meant it in the sense of self-consciousness: being self-aware, or aware of oneself as an individual entity. The term consciousness more properly refers to deep and fundamental states of awareness or perception - the qualia of conscious experience, or, to give a classic example, the raw conscious experience of the redness of the colour red. Self-consciousness is important of course, but it sits at a higher level than raw conscious sensations by demanding - almost by definition - the smooth integration of a myriad of different conscious sensations at once. At the same time, I think it is less of a jump to propose an algorithm running on a computer being aware of itself as an independent entity than to propose an algorithm running on a computer experiencing a raw conscious sensation such as the colour red.

Technology
We’ll be discussing hypotheticals below, so project any technology out to some arbitrary point in the future when the capabilities would exist. Assume we have computers fast enough to run algorithms simulating the functioning of a human brain in real time, whether that is a cellular, molecular, or deeper model being computed. Assume we have neuromorphic hardware that broadly approximates the architecture of the human brain, with artificial neurons packed as densely as a brain's, in as many or more layers than a brain, and with adaptive synapses that form, strengthen, and break connections as needed. Don’t get bogged down criticising the limitations or appearance of current technology.

Concepts
Neural correlates of consciousness - certain regions of the visual cortex have been found to be critical in processing visual stimuli, and are critical for the conscious perception of colour. The study "Categorical clustering of the neural representation of colour" (http://www.jneurosci.org/content/33/39/15454.full) is an example of work in this space. Damage or remove these regions and you will not experience colour; depending on the extent of the damage you may or may not retain the full breadth of remaining visual experience such as motion, shape, structure, depth, and pattern. More background here: http://en.wikipedia.org/wiki/Neural_correlates_of_consciousness
The Philosophical Zombie - a human indistinguishable from a normal human in every way except that it lacks any conscious sensation. It processes sensory information and responds exactly as a normal human would; asked to describe an apple, both say red, but only the normal human enjoys the rich conscious sensation of the redness of the apple - the zombie does not. http://en.wikipedia.org/wiki/Philosophical_zombie
The Chinese Room - a thought experiment proposed to refute the possibility that a digital computer could ever experience consciousness, itself spawning many refutations and criticisms. http://en.wikipedia.org/wiki/Chinese_room
Let's suppose we build a brain on advanced neuromorphic hardware whose fundamental architecture copies the human brain, with a suitably dense array of artificial neurons in similar numbers of layers, connected by a dynamic network of synaptic connections. I think it is straightforward to assume that if you feed visual sensory data into this artificial brain in a similar way as you would a normal brain, we'd expect the artificial brain to experience deep conscious sensations, such as the redness of red as perceived from an apple. Unless of course you're in the camp that argues that only biological substrates can produce conscious sensations; that is a position I don't find reasonable and won't be dealing with here, because it is only tangentially related.
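To make the idea of such artificial neurons a little more concrete, here is a toy leaky integrate-and-fire neuron in Python. This is a drastic simplification, not a model of any particular neuromorphic hardware, and every parameter value is purely illustrative:

```python
# A drastically simplified leaky integrate-and-fire neuron -- a toy
# stand-in for the artificial neurons discussed above, not a model of
# any specific neuromorphic hardware. All parameters are illustrative.

def lif_run(input_current, dt=1.0, tau=10.0, v_rest=0.0,
            v_thresh=1.0, v_reset=0.0):
    """Integrate an input-current trace; return the membrane-potential
    trace and the time steps at which the neuron spiked."""
    v = v_rest
    spikes, trace = [], []
    for t, i_in in enumerate(input_current):
        # Leak back toward the resting potential, plus injected current.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:          # threshold crossed: emit a spike
            spikes.append(t)
            v = v_reset            # reset the membrane after spiking
        trace.append(v)
    return trace, spikes

# A constant drive makes the neuron fire at a regular rate.
trace, spikes = lif_run([0.3] * 20)
```

Networks of units like this, connected by adaptive weights, are the usual abstraction of both neuromorphic hardware and biological circuits.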
In a way, the neuronal connections and their strengths can be thought of as software, with the neurons themselves behaving in a certain dynamic way. Whether we consider this neuromorphic hardware or a real organic brain, this behaviour can be abstracted, modelled, and reduced to suitable algorithms. These algorithms and the models they represent can be run on a conventional computer in order to simulate or emulate the brain in question, processing the same sensory information in the same way and so producing the same outputs in behaviour and experience. (Henry Markram's Blue Brain Project is an (incomplete) example of this latter kind of brain simulation, as is the current Human Brain Project.)

The Difference
Except the conventional computer isn't processing the information in exactly the same way. In the case of neuromorphic or normal brain hardware there are massively parallel neural networks carrying a dense cascade of signals through the substrate. In the case of the conventional Turing-complete computer there is a sequential, step-by-logical-step processing of the information, in and out of memory and so on. Even if one postulates computation so fast that the sequential processing of the computer is as fast as (or even a million times faster than) brain-like hardware in processing the same inputs to produce the same outputs - updating the old state to the new state to some arbitrary accuracy over some tiny time interval - it is still a sequential, step-by-step processor, and different in kind to the massively parallel cascade of information occurring through the neuromorphic substrate.
It may be computing the information in the same way, but it is processing it in a different way. Both systems may be computationally equivalent, but I think they are obviously not physically equivalent, and I think this difference matters.
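The distinction can be sketched in a few lines of Python: two routines that are computationally equivalent (identical inputs always yield identical outputs) while the process that produces those outputs differs. The caveat, of course, is that both versions actually run sequentially on the interpreter - the "parallel" routine only models the conceptual simultaneity, and the network and weights are invented for illustration:

```python
# Toy threshold-network update: unit i fires if its weighted input
# exceeds a threshold. Both routines compute identical outputs
# (computational equivalence), but one mimics a simultaneous cascade
# while the other mimics a step-by-step von Neumann machine
# (physically different processes).

def parallel_style_update(weights, state, threshold=0.5):
    # Conceptually simultaneous: every unit reads the same old state at once.
    return [1 if sum(w * s for w, s in zip(row, state)) > threshold else 0
            for row in weights]

def sequential_style_update(weights, state, threshold=0.5):
    # Explicit step-by-step loop, one unit at a time, in and out of "memory".
    old = list(state)          # snapshot, as a serial simulator would keep
    new = []
    for row in weights:
        total = 0.0
        for w, s in zip(row, old):
            total += w * s
        new.append(1 if total > threshold else 0)
    return new

W = [[0.0, 0.6, 0.2],
     [0.6, 0.0, 0.0],
     [0.3, 0.3, 0.0]]
s = [1, 0, 1]
assert parallel_style_update(W, s) == sequential_style_update(W, s)
```

Nothing in the input-output behaviour distinguishes the two; the argument here is that what happens in the substrate while the answer is being produced might still matter.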
To understand where I think this difference lies, I believe it is important to dig down to a deep fundamental level and consider what is physically going on as charge carriers move dynamically through the substrate - in the case of electrical signals, asking how the behaviour of the electrons and holes differs between the two systems.
In the neuromorphic, brain-like hardware we have a simultaneous cascade of charge carriers moving along parallel interconnected arrays of neurons (whether artificial or organic), conveying a signal - a distinct pattern of electrical activity - through the substrate bulk, often in waves. In the serial processing chips of the conventional computer, the physical, atomic behaviour of charge carriers moving around and changing voltage states in transistors and memory elements is very different, and lacks any resemblance to the pattern seen in neuromorphic hardware. No matter how fast that computer may be, it is still operating to change the states of transistors and memory elements in a sequential manner, step by step inducing charge carriers and currents to travel along the chip to make voltage changes to discrete transistors and other components.
Even if the cascade of charge carriers is found to cause, via diffuse electric fields and the like, changes in other parts of the system or in discrete components of the neuromorphic network, and this phenomenon is included in the algorithmic models to account for it, the result will be the same: more accurate computational equivalence, perhaps, but still fundamentally physically different in the nature of the information processing. This is true even though, as Nate pointed out, the neuromorphic system is essentially a hardware optimisation of some aspect of the algorithms running on a conventional computer.
In whatever form it takes, consciousness - raw conscious sensation and experience, the only thing you can ever be sure of - must be a fundamental property of the physical nature of the Universe. This can be interpreted as a form of panpsychism. As such, I believe that arguments based purely on computational equivalence - for example, "both systems compute the information in the same way and therefore will exhibit consciousness in the same way" - entail a leap of blind faith that ignores the subtle physical differences.

A Thought Experiment to Test It
A thought experiment that I came up with a while ago is as follows:
1. Imagine an advanced brain computer interface with nodes that can interface directly with potentially every individual neuron, whether in a neuromorphic substrate or more relevantly in a real brain.
2. Each node can wirelessly communicate to the other nodes as needed and also to external computational devices with arbitrarily negligible latency.
3. Identify every input neuron and output neuron for (for example) those regions of the visual cortex that are known to be responsible for enabling the conscious sensation of colour.
4. Run an accurate algorithmic model and simulation of those same regions on an external, conventional computational substrate.
5. Activate the BCI to block the activity of those identified regions of the visual cortex.
6. The BCI records input signals to those regions and instead sends them to the inputs of the external simulation.
7. The external simulation sends output signals to the relevant outputs overseen by the BCI in the visual cortex.
8. Run a suitable battery of colour tests and observe and record subjective experience. Do you perceive colour when processing is handed off to the external substrate? What do you perceive of a multi-coloured scene when just the “red” processing is handed off externally? What if processing is handed off for only the left or the right eye, covering one eye then the other while observing a colourful scene?
9. Reset to normal (or switch off the BCI) and again record subjective experience, particularly memory of the event and of the colours experienced in the tests. Do you remember colours you didn’t experience, or vice versa?
10. Repeat the experiment with a neuromorphic brain-like substrate performing the external processing instead of the conventional computer.
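The wiring of steps 1-10 can be sketched schematically in Python. Everything here is hypothetical and named for illustration only - `colour_region`, `external_simulation`, and the `BCI` class stand in for the brain region, its algorithmic model, and the interface; the point is only the shape of the hand-off, not any real neuroscience:

```python
# A schematic sketch of the thought experiment above: a "BCI" intercepts
# the inputs to a blocked brain region and routes them to an external
# simulation, whose outputs are injected back. All names and transforms
# are invented for illustration.

def colour_region(signal):
    """Stand-in for the visual cortex colour region's processing."""
    return [s * 2 for s in signal]          # arbitrary toy transform

def external_simulation(signal):
    """Computationally equivalent model run on an external substrate."""
    return [s * 2 for s in signal]          # identical input-output mapping

class BCI:
    def __init__(self, region, external=None):
        self.region = region
        self.external = external            # None = normal operation

    def process(self, signal):
        if self.external is not None:
            # Steps 5-7: block the region, reroute its I/O externally.
            return self.external(signal)
        return self.region(signal)

stimulus = [1, 2, 3]
normal = BCI(colour_region)
handed_off = BCI(colour_region, external=external_simulation)

# Step 8: the outward behaviour is identical either way; whether the
# subjective experience of colour survives the hand-off is the question.
assert normal.process(stimulus) == handed_off.process(stimulus)
```

By construction the two configurations are behaviourally indistinguishable, which is precisely why only the subject's reported (and remembered) experience in steps 8 and 9 can discriminate between the hypotheses.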
This example obviously demands advanced technology that we are nowhere near developing. But I do wonder whether simpler, cruder tests with current or near-term technology might shed light on, and offer answers to, this question.

My Ideas Here are Not Original Of Course
The idea for this post was originally to re-share some of my comments from Nate's interesting post and edit them into a more coherent form. I suspect my initial comments and thinking were influenced by Max Tegmark and a recent video/post on consciousness that I'd intended to get back to but never found the time for. In the process of writing this I embarked on more, and more detailed, research, reading articles and watching videos related to these topics. Along the way I discovered (or rediscovered, in a lot more detail, after remembering I'd briefly come across it some time ago) the Integrated Information Theory of Consciousness (basics: http://en.wikipedia.org/wiki/Integrated_information_theory) as espoused by Giulio Tononi and by Christof Koch (http://www.wired.com/2013/11/christof-koch-panpsychism-consciousness/all/), whose work and talks I've always enjoyed.
After looking into the Integrated Information Theory (IIT) of Consciousness, it seems I'd found myself in the company of Tegmark and in the camp of Tononi and Koch - far more intelligent and accomplished thinkers in this space, who had wrapped a formalism around these concepts and described them far more articulately than I ever could, and much earlier than I had ever imagined. IIT has quite a bit to say about computational and physical equivalence. While I am still ascending its learning curve, I do understand that it implies that conventional computers processing brain-like algorithms as we have described can never be conscious in the sense we have been discussing. They might be self-conscious in the broadest sense, and they might compute sensory inputs in the same way and respond in the same manner, but they won't host a rich subjective conscious experience.
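To give a flavour of the kind of "integration" IIT cares about, here is a toy Python sketch that measures mutual information across a fixed bipartition of a two-unit system. This is a crude stand-in for IIT's Φ, not the real measure (which minimises over all partitions and works with cause-effect structure), and the two example distributions are invented for illustration:

```python
import math
from itertools import product

def mutual_information(joint):
    """Mutual information (in bits) between the two halves of a
    two-unit system, given its joint state distribution as a dict
    mapping (a, b) -> probability."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():         # marginal distributions
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

# "Integrated" toy system: two units whose states always agree.
integrated = {(0, 0): 0.5, (1, 1): 0.5}
# "Disintegrated" toy system: two fully independent units.
independent = {(a, b): 0.25 for a, b in product([0, 1], [0, 1])}

print(mutual_information(integrated))    # the parts carry shared information
print(mutual_information(independent))   # the whole reduces to its parts
```

The integrated system scores 1 bit across the cut while the independent one scores 0: information that exists only in the whole, not in the parts taken separately, is the intuition IIT formalises (far more carefully) with Φ.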
This also implies a form of panpsychism, although I suspect that label is at this stage more of a placeholder until future empirical tests better narrow down and describe the phenomenon.

Why This Matters
As we continue along a technological trajectory that holds ubiquitous, dominant machine intelligence in its future - machine intelligences comprising both engineered AIs and uploaded humans of various forms - it is important that we get this right. It will not be biological humans that travel and expand through the Galaxy; it will be intelligent machines and substrate-independent minds. If conventional computer architectures remain dominant and are incapable of giving rise to consciousness, one of our defining traits, then not only would uploads be pointless, but we would risk saturating the Universe with intelligence at the expense of any conscious experience to accompany it. That would be the definition of a travesty. A Universe without conscious experience would not be a Universe worth having; intelligence needs to saturate the Universe with consciousness, not instead of it.
This is why we need to be sure of the physical basis of consciousness, and why we need to find ways to test it, predict it, control it, build it, and engineer it. It may be that my thoughts are ignorant and misguided, the IIT is wrong, and conventional computer architectures can indeed host rich conscious experiences - great! But we need to be sure. If however these thoughts are accurate and the theory turns out to be right, then two things are apparent: (i) by all means we will engineer intelligences on conventional computer architectures to create a myriad of useful tools, but (ii) we will also need to engineer robust neuromorphic architectures of sufficient detail to host rich conscious experiences, in order to repair brains, host uploads, and make the whole (post)human endeavour worthwhile.

Having My Mind Changed
Before this week I carried a cognitive legacy on these matters influenced by Daniel Dennett, and was reasonably firm in my opinion that philosophical zombies could not exist and that the Chinese Room would indeed be conscious. Basically: in the former case, a system that behaved exactly the same would by default have to embody all the same properties, including consciousness; in the latter, by similar reasoning, considering the system as a whole you would have to grant it consciousness.
As I dug into these matters while preparing this post I found myself climbing the fence away from this firm belief and surveying both sides anew. And as I considered new concepts, ideas, memes, and models to do with raw physical phenomena and the Integrated Information Theory of consciousness, I found myself climbing back down the other side - much to my surprise. I found Tononi and Koch far more compelling, considered, and elegant, and they changed my mind.
I now believe that in very specific circumstances the philosophical zombie can exist (and that we will probably create their equivalents at some point) and that the Chinese Room does not and cannot exhibit consciousness (and neither can conventional computers, no matter how accurate the brain-like algorithms they run).
But I changed my mind once - maybe a particularly insightful and erudite commenter can convince me to change it again? Until we have more advanced technology, better able to test and produce evidence one way or another, we are left to debate and theorise.

Interesting Side Considerations
* When discussing accurate simulations running on computers this classic xkcd comic comes to mind: http://xkcd.com/505/
* In the original thread +Shrewd Simian mentioned a study involving evolved circuits that I remembered, in which the chips had evolved seemingly useless features that turned out to be useful - unconnected logic gates that nonetheless exerted important influences on the other, connected logic gates. This was interesting to think about once again when considering consciousness, computational substrates, and the physical behaviour of charge carriers. An example explanation can be found here: http://www.damninteresting.com/on-the-origin-of-circuits/
* How the hell did you manage to read all of that in such an attention-deficit age!? Bravo if you made it to the end ;)

#consciousness #computation #integratedinformationtheory