Randy Gallistel, Co-Director of the Rutgers Center for Cognitive Science, gave a really interesting talk at Ohio State on Friday afternoon. Here are some of the key points.

1. While most (but not all) cognitive scientists and neuroscientists assume that the brain is computational in nature, very little is known about how the brain actually computes and stores information in memory (e.g., how does the brain encode and integrate spike trains at time 1 with spike trains at time 2?).

2. According to Gallistel, read/write memory is an essential element of computation. For example, many well-understood behaviors, such as dead reckoning, can easily be modeled with a read/write memory system but are very difficult for neural nets with no read/write memory.
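To make the dead-reckoning point concrete, here is a minimal sketch of path integration with explicit read/write state. All names here are illustrative (not from the talk): the agent writes each leg of its outbound path into a stored displacement vector, then reads that vector back to compute a direct route home.

```python
import math

# Read/write memory: the agent stores and updates a home vector explicitly,
# rather than encoding it implicitly in connection weights.
memory = {"x": 0.0, "y": 0.0}  # current displacement from the nest

def take_step(heading_deg, distance):
    """Integrate one leg of the outbound path into the stored position (a write)."""
    rad = math.radians(heading_deg)
    memory["x"] += distance * math.cos(rad)
    memory["y"] += distance * math.sin(rad)

def home_vector():
    """Read the stored position back out and compute the direct route home."""
    dist = math.hypot(memory["x"], memory["y"])
    bearing = math.degrees(math.atan2(-memory["y"], -memory["x"]))
    return dist, bearing

# Outbound trip: 3 units east, then 4 units north.
take_step(0, 3)
take_step(90, 4)
d, b = home_vector()  # a direct shortcut home, not a retrace of the path
```

The point of the sketch is that the integration step requires reading a previously stored value, combining it with new input, and writing the result back, which is exactly the operation a pure weight-change scheme has trouble expressing.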

3. While some memories can be represented in the brain through long-term potentiation (LTP) and changes in synaptic connectivity, it is doubtful that such mechanism(s) can account for read/write memory. Rather, Gallistel argues that such a memory system probably lies at the molecular level.

The following link is an interview with Randy Gallistel where he discusses these issues and his book titled Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience. I highly recommend listening to this interview if you are interested in cognitive neuroscience and you are not familiar with Gallistel's position. Special thanks goes out to +Ginger Campbell for putting together such a great interview!
 
Interesting talk, but I'm skeptical. Also, he looks like a goat.
 
Well, you look like a tortoise +Kevin Darby (don't change your profile pic or everyone will think I'm losing it)! Skeptical is good. What don't you buy?
 
I don't think it's a strong argument to say that because computers have read/write memory, brains must as well, and that it must work the same way it does in a computer, just because the neural/neuropsych people don't have a strong explanation for some phenomena. Plus, he had no real alternative other than an educated guess, which had no support and which he didn't even mention in the talk.
 
+Kevin Darby - I might be in the minority here, but I do not have a problem with his argument. Here's my take. We have these very well understood behaviors in relatively simple organisms (e.g., ants, bees, scrub-jays, etc.), and for many years we assumed that the learning and subsequent memory traces must occur at the level of the synapse (plasticity). Given that we can't model these very simple behaviors, I am all in favor of trying something new: maybe we're looking for our car keys under the wrong streetlight.

I really like the idea that memory can be stored at the molecular level. It has many more degrees of freedom, it can store more information, and there are reasons to believe that a read/write mechanism can operate at that level (e.g., DNA information can be passed down from generation to generation using a system compatible with read/write memory). Thus, while I agree with you, Kevin, that there are holes that need to be filled, I am also a fan of trying something new when the old assumptions simply do not seem plausible.

For example, take the scrub-jays. These birds will cache thousands of nuts, worms, etc. in the ground. Months later they remember exactly where each bit of food is, how long the food has been there, what type of food is buried at each spot, and who watched them bury it. Keep in mind, unlike neural nets that take thousands of epochs for weights (connections between nodes) to stabilize, these birds are remembering this information in one shot. I totally agree with Gallistel here: such a behavior is more likely to be accounted for by a read/write system (even though we do not yet understand this system) than by a system that stores information at the synapse.
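The one-shot contrast can be sketched in a few lines. This is only an illustration of the computational claim (the field names and values are invented for the example, not taken from the talk): a single write suffices to store a structured what-where-when-who episode, with no training epochs at all.

```python
# One-shot episodic write: each caching event is stored in a single step,
# unlike network weights that need many training epochs to stabilize.
cache_memory = {}

def cache(location, food, time, observer=None):
    """Write one caching episode to memory in a single operation."""
    cache_memory[location] = {"food": food, "time": time, "observer": observer}

def recall(location):
    """Read an episode back out; None if nothing was cached there."""
    return cache_memory.get(location)

# Two caching events, each stored in one shot.
cache((12, 7), "worm", time=0, observer="rival jay")
cache((3, 40), "peanut", time=2)

episode = recall((12, 7))  # what, when, and who watched, from a single write
```

Nobody is claiming the jay's brain literally contains a hash table; the sketch just shows why one-shot, addressable storage is so natural for this behavior and so awkward for slow, incremental weight changes.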
 
Yeah, the molecular idea is interesting, but I'm waiting for some evidence. I really don't know much about neural nets, or about birds, so I'm not in any way qualified to judge. But however the brain works, it's so much more complex than any network or model, with thousands upon thousands of inputs and many patterns of activation going on at any given time, so I'm not sure it's fair to rule networks out just because they don't demonstrate the same behavior as an organism. And I'd be interested to know how information is transferred between neurons in his theory. He did mention that transfer within the cell itself is extremely fast since everything is so close together, and that's well and good, but I don't think it would do organisms much good to have memories encoded in microRNA in individual cells without a way to transfer the information across cells, brain regions, and muscular systems. Of course, again, I know very little about this, so there's probably a simple explanation that someone more versed in biology would know, but as far as I know he didn't really address it.
 
There have been some interesting public debates on the topic of memory here at Illinois between Neal Cohen and Paul Gold. I'm definitely becoming increasingly interested in the topic (coming from vision), and hope to integrate it into my research some point after grad school.
 
+Eamon Caddigan - what aspects of memory are Neal Cohen and Paul Gold debating?

+Kevin Darby - a couple of quick points. First, no one is ruling out neural networks or the notion that memory can be stored at the synapse. Gallistel is simply saying that read/write memory, if it exists in the brain, probably does not occur at the synapse. Thus, we need to keep examining memory in the traditional sense as well as search for alternatives.

Second, many people have explicitly or implicitly made the argument that neural nets cannot solve some problems because they have only a handful of nodes, whereas the brain has millions of neurons. While this is probably true at some level, if I recall correctly, adding nodes to a network often results in poorer learning. In many situations there appears to be an optimal number of nodes for a given task, with too many or too few markedly attenuating learning. That said, cascade-correlation networks are interesting. These networks start out with a few nodes and recruit new nodes when they get stuck and can't solve the problem with the current architecture.

Third, I agree with you. I am also out of my comfort zone (especially at the molecular level), and your question concerning how information stored in neuron 1 is passed to neuron 2 is an interesting one. There are a lot of smart people on G+. I'll try to recruit some of these people into this discussion.
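The recruit-when-stuck idea can be shown in a toy sketch. Caveat: real cascade-correlation (Fahlman and Lebiere's algorithm) trains each candidate unit to maximize its correlation with the residual error; for brevity this sketch just recruits randomly weighted units, keeping only the cascade structure, where each new hidden unit sees all inputs plus all earlier hidden units.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so a network with no hidden units gets stuck.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

features = np.hstack([X, np.ones((4, 1))])  # inputs plus a bias column
hidden_units = []

for step in range(20):
    # Train only the output weights (least squares); existing features are frozen.
    w, *_ = np.linalg.lstsq(features, y, rcond=None)
    err = np.mean((features @ w - y) ** 2)
    if err < 1e-3:
        break  # solved; stop recruiting
    # Stuck: recruit a new hidden unit fed by all inputs AND all earlier
    # hidden units (the "cascade"), then retrain the output layer.
    v = rng.normal(size=features.shape[1])
    hidden_units.append(v)
    features = np.hstack([features, np.tanh(features @ v)[:, None]])

print(f"final error {err:.2e} after recruiting {len(hidden_units)} unit(s)")
```

The network starts minimal and grows only when its current architecture provably can't do the job, which is the design choice that makes these models interesting for the too-many-nodes-hurts-learning problem above.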
 
Prof. Cohen is a cognitive neuroscientist who primarily studies hippocampal amnesics, and Prof. Gold is a biological psychologist who uses animal models. It was a few years ago that I attended the first of a couple of talks, and I'm sad to say that I hadn't yet gotten the grounding necessary to understand the details of their arguments.

I recall that Paul was more "molecular" and Neal more "synaptic", but it's possible that I'm misremembering, and almost certain that I'm oversimplifying.
 
I have no problem with Gallistel's assertion that (a) read-write memory is necessary to explain many behaviors and (b) it probably doesn't exist only in synapses.

I take issue, however, with his use of the word "the" in "the read/write memory mechanism." I find no good reason to assume that there is only one mechanism. Indeed, psychology has characterized many different types of memory (working memory, hippocampus-dependent memory, etc.) that have different constraints, different time constants, and evidence of differences in their physiological and anatomical correlates.

Some read-write memory may be synaptic, some may be "molecular", some may be astrocyte-mediated, some may be network-mediated and may rely on a variety of molecular, synaptic, and architectural dynamics and constraints.