How many different kinds of cells are there in the brain? At least 133 kinds, including two types of neurons not recognized before, according to a pair of studies featured on the cover of this week’s issue of the journal Nature.
The “parts list” builds on 15 years of work at Seattle’s Allen Institute, focused on analyzing genetic activity in nearly 24,000 of the 100 million brain cells in the mouse cortex. Each cell type exhibited a different combination of genes that were turned on or off.
“This is by far the most comprehensive, most in-depth analysis of any region of the cortex in any species,” senior study author Hongkui Zeng, executive director of structured science at the Allen Institute for Brain Science, said in a news release. “We can now say that we understand the distribution rules for its parts list.”

The region of the cortex that Zeng and her colleagues studied is responsible for processing visual and motor function. Other regions should follow similar rules of organization, the researchers said.
“With all these data in hand, we can start to learn new principles of how the brain is organized, and ultimately, how it works,” Zeng said.
Researchers at the Howard Hughes Medical Institute’s Janelia Research Campus in Virginia used the Allen Institute’s gene expression data as well as the physical shapes of brain cells to identify two new types of pyramidal tract neurons involved in movement. Then they monitored the cells’ activity in live mice to figure out their function.
One of the neuron types plays a role in preparing for a movement, for example, the lick of a tongue. The other type works to trigger the movement itself.
Janelia’s Karel Svoboda, senior author of the motor neuron study, said that tracking gene expression is “a very efficient way of getting at cell types.”
“That’s really what the Allen Institute is at the core,” Svoboda said. “The motor cortex study is the first salvo in a different type of cell type classification, where gene expression information, structural information and measurements of neural activity are brought together to make statements about the function of specific cell types in the brain.”

The newly published studies could well point the way to a comprehensive catalog of brain cells, which will help researchers get a better grip on how all those different types of cells work together to give rise to sensory perception, motor function and ultimately consciousness.
Neuroscientists use a variety of methods to characterize brain cells, including their physical shape and the pattern of their electrical activity. But analyzing gene expression is arguably the best way to do a cell-by-cell characterization.
“It’s only through recent advances in technology that we can measure the activity of so many genes in a single cell,” said Bosiljka Tasic, associate director of molecular genetics at the Allen Institute for Brain Science and principal author of the cell-type study.
“Ultimately, we are also working to study not only gene expression, but many of the cells’ other properties, including their function, which is the most elusive, the most difficult to define.”
In a Nature commentary, Aparna Bhaduri and Tomasz Nowakowski of the Broad Center for Regeneration Medicine and Stem Cell Research say the two studies demonstrate the “transformative potential” of brain cell atlases like the ones that are the Allen Institute’s specialty.
“They make a strong case for conducting similar studies of more cell types and of the brains of animals of different species, including humans, at various ages,” Bhaduri and Nowakowski write.
One recent study, based in part on Allen Institute data, identified a type of brain cell called the rosehip neuron that doesn’t seem to exist in mice and may be linked to higher-order cognition.
Bhaduri and Nowakowski say such studies could yield fresh insights into the vulnerability of different types of cells to different diseases, and guide stem-cell researchers as they create brain cells in the lab for research into those diseases as well as new types of drugs.

Neuroscientists are coming closer to understanding why some bad moods seem to tumble uncontrollably through your head like a collapsing chain of dominoes. One misbegotten thought after another drives you to imagine frightful things to come or to relive your shameful past:
Remember that one thing five years ago? Wow, I really am a loser.
The spiral into such a mood may occur in a brain network that connects two key regions involved with memory and negative emotions, says psychiatrist Vikaas Sohal at the University of California, San Francisco.
In a study he co-authored, published Thursday in Cell, Sohal says he was able to tell if someone’s mood was getting worse just by looking at whether this network was active or not.
Psychiatrists have previously used MRI scans to probe the human brain and the world of emotions within it. This technology can show how brain activity changes within a few seconds, but the brain tends to work a lot faster than that—neurons can fire dozens of times a second.
MRI readings might miss things that happen too quickly. Implanted electrodes, however, can measure changes in brain activity up to 1,000 times a second. So when U.C.S.F. neurosurgeon Edward Chang popped into Sohal’s office with an idea to use internal electrodes to elucidate the neurological underpinnings of mood, Sohal was delighted.

The brain surgery needed to implant electrodes is too risky to perform on healthy individuals for a study like this, but Chang works on epilepsy patients who need them anyway. When other treatments do not work, temporarily implanted electrodes can show what part of the brain is causing seizures, allowing Chang to cut that section out during surgery. By asking such patients to report their moods every few hours, the team hoped they could use the electrodes to get a rare window into emotion and the deep brain. “We know that mood is somewhere in the brain,” Sohal says. His goal was to see “if we can find patterns of activity that tell us what mood is.”
Chang implanted electrodes on the surfaces and inside the brains of 21 patients with epilepsy, recording the brains’ activity continuously for seven to 10 days. Then Sohal scoured the recordings for instances when electrodes in different parts of a brain showed synchronized patterns of electrical activity. Electrical activity of the brain looks like wiggles from each electrode when displayed on a graph, Sohal says.
“You ask, ‘Okay, do the size of those wiggles and the locations of the peaks go up together in sync across two electrodes?’ If they do, it suggests those brain regions are communicating. We call that a network,” Sohal says.
One particular network connecting the hippocampus (an area linked to recollection) and the amygdala (an area linked to negative feelings) began appearing over and over, Sohal says.
“That was our first big ‘Aha!’ moment,” Sohal says. Whenever these two brain regions created synchronized electrical pulses that fluctuated between 13 and 30 times a second, people reported their moods getting worse. “We basically found that when there is less activity in this network, mood is more positive. When there’s a lot of activity in this network, mood is negative.”
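The synchrony test Sohal describes, checking whether the wiggles from two electrodes rise and fall together, amounts to correlating two signals. A minimal sketch in Python: two toy “electrodes” carry a shared 20 Hz oscillation (inside the 13–30 Hz band the study highlights), and a Pearson correlation decides whether they count as a network. The signals, cutoff, and phase lag are all invented for illustration.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

fs = 1000  # samples per second, matching the ~1,000 Hz electrode rate
t = [i / fs for i in range(fs)]

# Two toy recordings sharing a 20 Hz rhythm, one slightly lagged.
hippocampus = [math.sin(2 * math.pi * 20 * x) for x in t]
amygdala = [math.sin(2 * math.pi * 20 * x + 0.2) for x in t]

r = pearson(hippocampus, amygdala)
# A high correlation suggests the two regions form a "network" in
# the sense Sohal describes; 0.8 is an arbitrary cutoff chosen here.
print(f"correlation = {r:.2f}, in sync: {r > 0.8}")
```

Real analyses would first band-pass filter each signal to the 13–30 Hz range before correlating; that step is omitted here because the toy signals are already pure 20 Hz tones.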

The finding brings scientists closer to understanding how the brain creates bad moods, says Brendon Watson, a psychiatrist and neuroscientist at the University of Michigan who was not involved with the study.
“There’s a major open question in psychiatry: How do you construct emotion or mood? People have a very vague idea of what it means to perceive or have an emotion in the brain,” he says, calling the new study “a great step for neuroscience.”
Sohal says his team’s findings spark ideas about how the brain generates negative moods. It is possible, for example, that when these two brain regions work together they create a vicious cycle that drags you down a bad road. “It’s easy to imagine that you might be feeling bad, and then remembering bad experiences, and then feeling worse,” Sohal says. “It’s speculative, but that’s really at the heart of how we think about experiences related to depression and anxiety.”
If that is right, doctors might figure out how to interrupt that cycle with deep-brain stimulation or electroshock therapy for people with major depressive and anxiety disorders, Watson says.
“If this is the part of the brain that makes you feel bad, maybe you could reverse how that’s firing and get yourself to feel better,” he says, adding that it will be a long slog before this knowledge could be used in the clinic.
“You would need to show that the network correlates with depression and bipolar episodes,” he says, “then study [this therapy] in rats and maybe, if you could convince patients, try studying it in people.”

It’s easy to imagine that emotion gets in the way of the most difficult decisions. Get rid of this cumbersome human artifact and surely people would be able to make cold, calculating choices in the most exacting of situations.
Not so. Neuroscientists have long studied people with brain injuries that prevent them from experiencing emotions. But instead of being precise, ruthless killers, these people are paralyzed by indecision.
The truth is that when it comes to everyday choices, deciding between cheese and ham in your sandwich, for example, it doesn’t matter how much cold hard logic you bring to bear; these decisions are ultimately emotional.
But what of more detailed calculations like those involved in mathematics or chess? Surely they can’t be governed by fickle human emotion?
Actually, they can, say Thomas Guntz at the University of Grenoble in France and a few colleagues. These guys have measured the changes in emotional state experienced by chess players as they tackle increasingly difficult problems. And they say that emotions play a key role in helping players solve complex problems.

The ability to automatically measure changes in human emotional states has advanced by leaps and bounds in recent years. Changes in pupil size are an indicator of concentration levels. Heart rate is a measure of arousal and can be monitored by looking for changes in the color of facial skin.
Body posture and gestures also indicate emotional changes, and these are straightforward to monitor with 3-D cameras such as the Kinect. All this can be correlated with the object of a person’s attention, as measured by head orientation and eye gaze.
Together, these indicators provide a comprehensive overview of an individual’s emotional state and how it changes from moment to moment.
Guntz and co turned this powerful gaze to the emotional state of 30 expert and intermediate chess players as they solved increasingly challenging chess puzzles. Each puzzle required the player to checkmate an opponent. Puzzles that can be solved in one to three moves are considered easy, while those that require four to six moves are considered challenging.
As the players tackled each problem, the team recorded changes in gaze, body posture, cardiac rhythm, facial expression, and so on. They then used this data to infer how each player’s emotional state changed during the task.
For example, the player’s basic emotional state (happiness, sadness, anger, fear, disgust, or surprise) can be judged from his or her microexpressions; changes in cardiac rhythm suggest changes in arousal; and the rate of self-touching is a measure of stress.
“Our results revealed an unexpected observation of rapid changes in emotion as players attempt to solve challenging problems,” the researchers say.
For this reason, they think emotions must play a role in the decision-making process.
“Our current hypothesis is that the rapid changes in emotion are an involuntary display in reaction to recognition of previously encountered situations during exploration of the game state,” they say.

This must play a crucial role in pruning the decision tree of potential moves, think Guntz and co. The way advanced chess players do this pruning is very different from the thought process beginners use. Over time, expert players learn to recognize certain patterns of play or positions of strength and weakness.
This pattern recognition significantly simplifies the process of deciding on the next move. Instead of considering all the pieces separately, the top players consider them in groups called chunks. Top players are thought to store up to 100,000 of these chunks in long-term memory. When playing a game, they transfer these chunks into short-term memory, where the reasoning takes place.
And that’s where players ought to run into trouble. There is a well-known limit on the amount of information that humans can store in short-term memory. Back in the 1950s, the American psychologist George Miller showed that we can store between five and nine chunks that way. Beyond that, we are overwhelmed.
So how do chess players manage 100,000 chunks when they can only hold a handful in their working memory at any one time?
They use emotion, say Guntz and co. When a player spots a chunk he or she has seen before, the valence associated with it causes it to be brought to the fore for further analysis or rejected as a bad option.
In this way, top players use emotion to move relevant chunks from long-term to short-term memory and back again. And it is this change in emotional state that the team was able to record.
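The mechanism described above can be caricatured in a few lines of code: a large long-term store of chunks, each tagged with an emotional valence, from which only a handful (Miller’s five-to-nine limit) are promoted into working memory, the most emotionally salient first. All the chunk names, valences, and the capacity of seven are illustrative choices, not values from the paper.

```python
import heapq

WORKING_MEMORY_LIMIT = 7  # mid-point of Miller's five-to-nine range

# Long-term memory: recognized patterns ("chunks"), each tagged with a
# learned emotional valence (positive = promising, negative = danger).
# An expert is thought to hold up to ~100,000 such entries.
long_term_memory = {
    "hanging queen": 0.95,
    "back-rank weakness": 0.90,
    "exposed king": 0.80,
    "pinned knight": 0.70,
    "rook on open file": 0.60,
    "passed pawn": 0.55,
    "bad bishop": -0.45,
    "doubled pawns": -0.30,
    "locked pawn chain": 0.10,
}

def recall(store, limit=WORKING_MEMORY_LIMIT):
    """Promote the most emotionally salient chunks into working memory.

    Salience here is the absolute valence: strongly positive AND
    strongly negative chunks both demand attention."""
    return heapq.nlargest(limit, store, key=lambda k: abs(store[k]))

working_memory = recall(long_term_memory)
print(working_memory)
```

Running this leaves the two least emotionally charged chunks behind in long-term memory: the valence tags act as the index that decides what reaches the limited workspace where reasoning happens.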
That has huge implications for our understanding of human decision-making and for machine intelligence in general.
Guntz and co are careful to temper their result with the suggestion that their work is still in its early stages and more needs to be done.
But it provides a curious new way to think about the problem of decision-making and how machines could do it more effectively.
Until now, machines have mainly used increasingly powerful computational resources to make decisions. That effectively drains the mystery from problems like checkers, chess, and more recently Go. But ask them to choose between ham and cheese in a sandwich and they’re stumped.
Emotions clearly provide some kind of indexing system that allows us to access certain memories more quickly. Understanding how that works and how it can be applied to machines is an important goal.

In 2013, Google cofounder and CEO Larry Page announced the formation of a new Alphabet entity dedicated to solving the pesky puzzle of mortality. Since then, the billion-dollar longevity lab known as Calico (short for California Life Company) has been trying to tease apart the fundamental biology of aging in the hopes of one day defeating death.
The hyper-secretive research venture has released few details about what it actually does inside its Silicon Valley lab, but there have been hints. One of the company’s first hires was renowned geneticist Cynthia Kenyon, a UC San Francisco researcher who 20 years ago doubled the lifespan of a lab roundworm by flipping a single letter in its DNA.
Shortly after joining Calico, Kenyon recruited a UCSF bioinformatics postdoc named Graham Ruby. He didn’t want to dig into worm genetics or study the company’s colony of long-lived naked mole rats. He wanted to first ask a much broader question: how big a role do genes play, anyway, in determining how long someone lives? Other scientists had tried to ask that question before, with conflicting results. To clear things up would require getting much, much more data. So Calico went to the biggest family history database in the world: the consumer genetics and genealogy firm Ancestry.
In 2015, the companies inked a research partnership to investigate the human heredity of lifespan, with Ruby leading the charge to sift through Ancestry’s vast forest of family trees. What he found by analyzing the pedigrees of more than 400 million people who lived and died in Europe and America going back to 1800 was that although longevity tends to run in families, your DNA has far less influence on how long you live than previously thought.
The results, published Tuesday in the journal Genetics, are the first research to be made public from the collaboration, which ended quietly in July and whose terms remain confidential.

“The true heritability of human longevity for that cohort is likely no more than seven percent,” says Ruby. Previous estimates for how much genes explain variations in lifespan have ranged from around 15 to 30 percent.
So what did Ruby uncover that previous studies had missed? Just how often amorous humans go against the old adage that “opposites attract.”
It turns out that through every generation, people are much more likely to select mates with similar lifespans than random chance would predict. The phenomenon, called assortative mating, could be based on genetics, or sociocultural traits, or both.
For example, you might choose a partner who also has curly hair, and if the curly-haired trait winds up being somehow associated with long lifespans, this would inflate estimates of lifespan heritability passed on to your kids. Same thing for non-genetic traits like wealth, education, and access to good health care. People tend to choose partners in their same income bracket with the same terminal degree, both of which are associated with living longer, healthier lives.
The first hint that something other than genetics or a shared family environment might be at work came when Ruby tried looking at in-law relatives. His analysis started with a set of family trees comprising 400 million individuals. The data had been cleaned, de-identified, and stitched together by genealogists and computer scientists at Ancestry based on subscriber-generated public information.
Using the basic laws of heredity (everyone inherits half their DNA from one parent and half from the other, repeated across generations), Ruby’s team looked at how related two people were and how long they lived. They investigated parent-child pairs, sibling pairs, various cousins, and so on. Nothing much surprising popped out there.
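The kinship logic above can be made concrete with a deliberately naive sketch: expected relatedness coefficients from the laws of heredity, paired with lifespan correlations for each pair type, and a least-squares slope as a crude heritability estimate. The lifespan correlations below are invented numbers shaped like the pattern the article describes (in-laws correlating almost as strongly as blood relatives); none of them come from the study.

```python
# Expected genetic relatedness under the basic laws of heredity
# (each child inherits half of each parent's DNA).
relatedness = {
    "parent-child": 0.500,
    "full siblings": 0.500,
    "first cousins": 0.125,
    "siblings-in-law": 0.000,  # no expected shared DNA
}

# HYPOTHETICAL lifespan correlations for each pair type.
lifespan_corr = {
    "parent-child": 0.12,
    "full siblings": 0.14,
    "first cousins": 0.06,
    "siblings-in-law": 0.10,  # nearly as high as blood relatives
}

# A classic (and here deliberately naive) heritability estimate:
# the least-squares slope of phenotypic correlation vs. relatedness.
pairs = list(relatedness)
xs = [relatedness[p] for p in pairs]
ys = [lifespan_corr[p] for p in pairs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"naive heritability estimate: {slope:.2f}")
```

The point of the sketch is the in-law row: a substantial correlation among people who share no DNA is the signature of assortative mating (or shared environment), and ignoring it inflates the heritability that gets attributed to genes.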
But when Ruby looked at in-laws, things started to get weird. Logic suggests you shouldn’t share significant chunks of DNA with your siblings’ spouse, say your brother’s wife or your sister’s husband. But in Ruby’s analysis, people connected through a close relative’s marriage were almost as likely to have similar lifespans as people connected through blood.
“I sort of kick myself for being surprised by this,” says Ruby. “Even though no one has shown the impact of assortative mating to such an extent before, it aligns well with how we know human societies are structured.”

The research could affect the entire field of longevity studies. Ruby says it doesn’t invalidate any prior work on identifying individual genes involved in aging or age-related diseases, but it does suggest that finding more of those genes is going to be much harder going forward. To find them, scientists will need huge cohorts to reach sufficient statistical power. That shouldn't be a problem for Calico, which in addition to the family trees also had access to de-identified DNA information from millions of Ancestry's genotyped customers as part of the research partnership.
The companies have at least one more paper on the genetics of longevity currently under peer review. A spokesperson for Ancestry said that in accordance with the original terms of the agreement, the partnership between it and Calico concluded with the submission of the research covering these findings. Calico is free to pursue any leads that emerged from the analysis, but the company isn't saying anything about what those might be at the moment. (A Calico spokesperson declined to comment on the Ancestry collaboration beyond the results of today's publication.)
For now, the big takeaway seems to be that humans have more control over how long they live than their genes do. It’s all the other things that families share, homes and neighborhoods, culture and cuisine, access to education and health care, that make a much bigger difference in the set of numbers that might one day grace your tombstone.
Maybe that’s why Ancestry’s chief scientific officer Catherine Ball says the company has no plans to offer a longevity score in any of its DNA testing products any time soon. “Right now a healthy lifespan looks to be more of a function of the choices that we make,” she says. She points to places in the data where lifespans took big hits—for males during World War I, and then in two waves in the latter half of the 20th century as men and then women took up a cigarette habit.
“Don’t smoke, and don’t go to war. Those are my two pieces of advice,” she says. And maybe make time to exercise. Ball already has a Tuesday morning workout penciled in her calendar. This time, she says, she won’t be canceling it at the last minute.

Not knowing is an uncomfortable experience. As human beings, we are naturally curious. We seek to understand, predict and control – it helps us learn and it keeps us safe. Uncertainty can feel dangerous because we cannot predict with complete confidence what will happen. As a result, both our hearts and minds may race.
While it is quite natural to experience uncertainty as uncomfortable, for some it is seemingly unbearable. Psychologists have even suggested that finding it difficult to cope with the experience of not knowing (also known as intolerance of uncertainty) could seriously affect our mental health, occurring alongside a number of conditions.
But does it play any part in causing them? My review, published in Cognitive Therapy and Research, aimed to find out.
It’s easy to see how the concept of uncertainty is linked to mental health. If uncertainty can feel dangerous, then it might feed our worry and anxiety. What’s more, if getting rid of that feeling of uncertainty feels essential, then the compulsion to wash our hands again and again to make sure they are clean and safe might also feel essential.
And if we ultimately feel unable to cope with the change and unpredictability life throws at us, then it’s understandable that we are at risk of feeling defeated and depressed.

The science
By looking at the scientific evidence as a whole, I asked whether intolerance of uncertainty really has the far-reaching influence on mental health difficulties that has been suggested. And importantly, does it cause those difficulties?
The answer is not straightforward. Overall, the evidence is full of mixed findings and there are strikingly few studies that actually test what happens to a person’s mental health when their ability to tolerate uncertainty changes. Such change does seem possible.
We see it in the lab, such as when people are encouraged to think of uncertainty as a problem versus something that can be accepted. And we see it in therapy, through treatments like cognitive behavioural therapy which helps people manage their problems by changing how they think or act.
We are certainly not at the point where we can confidently explain what role our response to uncertainty plays in our mental health, but we can cautiously offer some possibilities based on the research as a whole.
While the findings are mixed, the best evidence that intolerance of uncertainty may cause mental health difficulties is for anxiety. In fact, a number of studies have found it may cause or increase symptoms of anxiety. That’s because when we struggle to cope with the experience of uncertainty, our minds may worry and come up with an increasing number of frightening possibilities.
The struggle with uncertainty might also help us understand depression. Some evidence suggests that we may find that our mood is more negative when we feel less able to cope with the unknown. But low mood is only part of the experience of depression, so fuller investigation is needed.
Perhaps surprisingly, there is little evidence to support the idea that difficulty in dealing with uncertainty plays a part in causing the compulsions and obsessions seen in OCD. But, of the difficulties that have been explored, this is also the area with the least research.

Practical implications
Understanding what underpins mental health difficulties is important because it can help us understand how to provide better support for the many of us who have these experiences.
Mental health difficulties are common; in fact, they often occur together. This raises the question: are they really separate things? Over recent years, psychologists have started suggesting that what underpins one mental health difficulty may actually also be shared across others. There is some support for this suggestion. For instance, the process of thinking repeatedly and unhelpfully about our concerns may lead to both anxiety and depression.
So while these difficulties look different on the surface, underneath the same processes may be at work. This is an exciting idea. Instead of having countless treatments, we could have support that targets these shared processes and is helpful for a wide range of issues. But first, we need to be sure what the shared processes are, and serious work is going into efforts to gain this understanding.
Our ability to weather the uncertainty that life presents us with is one process that might be shared across different mental health difficulties. If so, then this understanding could helpfully add to the therapeutic toolkit across different difficulties, a possibility that is already being explored.
For example, cognitive behavioural therapy that reduces intolerance of uncertainty might help improve a range of mental health problems. What’s more, intolerance of uncertainty may also play a broader role, such as in eating disorders and psychosis. But right now, there’s too much guesswork and not enough evidence directly testing these ideas.
Ultimately, people deserve to be supported to make the changes that will help them the most. And so, we need research that clearly shows what those areas of change should be.
After some intriguing initial research on the links between uncertainty and mental health, it is clear that this is an area worth figuring out. Until then, we will all have some uncertainty to bear.

In 2007, The New York Times published an op-ed titled “This Is Your Brain on Politics.” The authors imaged the brains of swing voters and, using that information, interpreted what the voters were feeling about presidential candidates Hillary Clinton and Barack Obama.
“As I read this piece,” writes Russell Poldrack, “my blood began to boil.” Poldrack is a neuroscientist at Stanford University and the author of The New Mind Readers: What Neuroimaging Can and Cannot Reveal about Our Thoughts (out now from Princeton University Press).
His research focuses on what we can learn from brain imaging techniques such as fMRI, which measures blood flow in the brain as a proxy for neural activity. And one of the clearest conclusions, he writes, is that activity in a particular brain region doesn’t actually tell us what the person is experiencing.
The Verge spoke to Poldrack about the limits and possibilities of fMRI, the fallacies that people commit in interpreting its results, and the limits of its widespread use. This interview has been lightly edited for clarity.

When did “neuroimaging” start to be everywhere?
My guess is around 2007. There were results coming out around 2000 and 2001 that started to show that we can probably start to decode the contents of somebody’s mind from imaging. These were mostly focused on what the person was seeing, and that doesn’t seem shocking, I think. We know a lot about the visual system but it doesn’t seem uniquely human or conscious.
In 2007, there were a number of papers that showed that you can decode people’s intentions, like whether they were going to add or subtract numbers in the next few seconds, and that seemed like really conscious cognitive stuff. Maybe that was when brain reading really broke into awareness.

A lot of your book is about the limits of fMRI and neuroimaging, but what can it tell us?
It’s the best way we have of looking at human brains in action. It’s limited and it’s an indirect measure of neurons because you’re measuring blood flow instead of the neurons themselves. But if you want to study human brains, that works better than anything else in terms of pinpointing anything.

What are some of the technical challenges around fMRI?
The data are very complex and require a lot of processing to go from an MRI scanner to the things you see published in a scientific paper. And there are things like the fact that every human brain is slightly different and we have to align them all to get them to match.
The statistical analysis is very complex and there have been a set of controversies in the fMRI world about how statistics are being used, interpreted and misinterpreted. We’re doing so many tests, we have to make sure we’re not fooling ourselves with statistical flukes. The false-positive rate we try to enforce is 5 percent.
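Keeping the false-positive rate at 5 percent across thousands of simultaneous tests is the multiple-comparisons problem Poldrack alludes to. The simplest (and most conservative) correction is Bonferroni: divide the threshold by the number of tests. The p-values below are invented to illustrate the mechanics; fMRI pipelines use more refined corrections, but the logic is the same.

```python
ALPHA = 0.05  # the 5 percent family-wise false-positive rate

# Hypothetical uncorrected p-values from several independent tests
# (in real fMRI, there is one per voxel or region, often thousands).
p_values = [0.0001, 0.004, 0.011, 0.030, 0.049]
n_tests = len(p_values)

# Bonferroni: each individual test must clear alpha / n_tests to keep
# the chance of ANY false positive across the family at alpha.
threshold = ALPHA / n_tests
significant = [p for p in p_values if p < threshold]
print(f"threshold = {threshold:.3f}, survive correction: {significant}")
```

Note that three of the five results would have looked “significant” at the uncorrected 0.05 level but fail once the correction is applied, which is exactly the statistical-fluke trap being described.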

What about generalizability? How well can you generalize from one person’s results to say, “this happens in all humans”?
It depends on the nature of what you’re trying to generalize. There are large-scale things that we can make generalizations about. Pretty much every healthy adult human has visual processing going on in the back of the brain, stuff like that. But there’s a lot of fine-grained detail about each brain that gets lost. You can generalize coarse-grained things, but the minute you want to dig into finer-grained, you have to look at each individual more closely.

In the book, you talk a lot about the fallacy of “reverse inference.” What is that?
Reverse inference is the idea that presence of activity in some brain area tells you what the person is experiencing psychologically. For example, there’s a brain region called the ventral striatum. If you receive any kind of reward, like money or food or drugs, there will be greater activity in that part of the brain.
The question is, if we take somebody and we don’t know what they’re doing, but we see activity in that part of the brain, how strongly should we decide that the person must be experiencing reward? If reward was the only thing that caused that sort of activity, we could be pretty sure. But there’s not really any part of the brain that has that kind of one-to-one relationship with a particular psychological state. So you can’t infer from activity in a particular area what someone is actually experiencing.
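Reverse inference is, at bottom, a base-rate problem, and Bayes’ rule makes the trap concrete. The probabilities below are invented for illustration (they are not measurements of the ventral striatum): even a region that responds reliably to reward licenses only a weak inference if it also responds to many other things.

```python
# Invented illustrative numbers:
p_activity_given_reward = 0.9   # the region responds reliably to reward
p_activity_given_other = 0.3    # ...but also fires in many other states
p_reward = 0.1                  # base rate: reward processing is rare

# Bayes' rule: P(reward | activity)
p_activity = (p_activity_given_reward * p_reward
              + p_activity_given_other * (1 - p_reward))
p_reward_given_activity = p_activity_given_reward * p_reward / p_activity
print(f"P(reward | activity) = {p_reward_given_activity:.2f}")
```

With these numbers, seeing activity raises the probability that the person is experiencing reward to only 25 percent, far from certainty, which is why observing a blob of activity cannot by itself tell you what someone is experiencing.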
You can’t say we saw a blob of activity in the insula, so the person must be experiencing love.

What would be the correct interpretation then?
The correct interpretation would be something like we did X and it’s one of the things that causes activity in the insula.
But we also know that there are tools from statistics and machine learning that can quantify how well you can predict one thing from another. Using statistical analysis, you can say, “we can infer with 64 percent accuracy whether this person is experiencing X based on activity across the brain.”
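The kind of quantified decoding described here can be illustrated with a toy decoder: a nearest-centroid classifier trained on simulated “activity patterns” for two invented mental states, then scored on fresh simulated trials to produce an accuracy figure. Everything below (the patterns, the noise level, the region count) is fabricated for illustration and bears no relation to real fMRI data.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

N_REGIONS = 10

def make_pattern(base, noise=1.5):
    """Simulated activity across N_REGIONS regions around a base pattern."""
    return [b + random.gauss(0, noise) for b in base]

# Two invented mental states with distinct mean activity patterns.
REWARD = [1.0] * 5 + [0.0] * 5
NEUTRAL = [0.0] * 5 + [1.0] * 5

train = ([(make_pattern(REWARD), "reward") for _ in range(50)]
         + [(make_pattern(NEUTRAL), "neutral") for _ in range(50)])

def centroid(patterns):
    """Mean activity in each region across a set of patterns."""
    return [sum(col) / len(col) for col in zip(*patterns)]

centroids = {
    label: centroid([p for p, lab in train if lab == label])
    for label in ("reward", "neutral")
}

def decode(pattern):
    """Classify a pattern by its nearest centroid (squared distance)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(pattern, centroids[lab])))

# Accuracy on fresh simulated trials: the kind of number
# ("we can infer with X percent accuracy") quoted above.
test_set = ([(make_pattern(REWARD), "reward") for _ in range(50)]
            + [(make_pattern(NEUTRAL), "neutral") for _ in range(50)])
accuracy = sum(decode(p) == lab for p, lab in test_set) / len(test_set)
print(f"decoding accuracy: {accuracy:.0%}")
```

The accuracy lands well above chance but below 100 percent, mirroring the point in the interview: decoding across the whole brain supports probabilistic statements, not certainties about a single region.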

Is reverse inference the most common fallacy when it comes to interpreting neuroscience results?
It’s by far the most common. I also think sometimes people can misinterpret what the activity means. We see pictures where it’s like, there’s one spot on the brain showing activity, but that doesn’t mean the rest of the brain is doing nothing.

You write about “neuromarketing,” or using neuroscience techniques to see if we can see the effect of marketing. What are some of the limits here?
It hasn’t been fully tested yet. Whenever you have science mixed with people trying to sell something (in this case, the technique of neuromarketing itself), that’s ripe for overselling. There’s not much widespread evidence really showing that it works. Recently there have been some studies suggesting you can use neuroimaging to improve the ability to figure out how effective an ad is going to be. But we don’t know how powerful it is yet.
Our ability to decode from brain imaging is so limited and the data are so noisy. Rarely can we decode with perfect accuracy. I can decode if you’re seeing a cat or a house with pretty much perfect accuracy, but anything interestingly cognitive, we can’t decode. But for companies, even if there’s just a 1 percent improvement in response to the ad, that could mean a lot of money, so a technique doesn’t have to be perfect to be useful for some kind of advantage. We don’t know how big the advantage will be.
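The economics behind “a 1 percent improvement could mean a lot of money” can be made concrete with a back-of-the-envelope calculation. Every figure below is invented for illustration; the interview does not give any real numbers.

```python
# Back-of-the-envelope illustration: a tiny lift in ad response can still be
# worth real money at scale. Every figure here is invented.
ad_spend = 250_000_000     # hypothetical annual campaign budget, in dollars
revenue_per_dollar = 1.04  # hypothetical baseline return per ad dollar
lift = 0.01                # the 1 percent improvement mentioned above

extra_revenue = ad_spend * revenue_per_dollar * lift
print(f"extra revenue from a 1% lift: ${extra_revenue:,.0f}")  # → $2,600,000
```

At that hypothetical scale, even a marginal and imperfect decoding technique could pay for itself, which is the point being made about why advertisers might adopt it anyway.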

One interesting point you make is that there are some issues with the increasingly common statement that addiction is a brain disease. What’s the issue here?
Addiction causes people to experience bad outcomes in life and so to that degree it’s like other diseases, right? It results directly from things going on in one’s brain. But I think calling it a “brain disease” makes it seem like it’s not a natural thing that brains should do.
Schizophrenia is a brain disease in the sense that most people behave very differently from someone with schizophrenia, whereas addiction I like to think of as a mismatch between the world we evolved in and the world we live in now. Lots of diseases, like obesity and type II diabetes, probably also have a lot of the same flavor.
We evolved this dopamine system meant to tell us to do more of things we like and less of things we don’t like. But then if you take stimulant drugs like cocaine, they operate directly on the dopamine system. They’re this evolutionarily unprecedented stimulus to that system that drives the development of new habits. So it’s really the brain doing the thing it was evolved to do, in an environment that it’s not prepared for.

Going back to reverse inference for a second, how long do you think it’ll be before we actually are able to decode psychological states?
It depends on what you’re trying to infer. Certain things are easier. If you’re talking about the overall ability to make reverse inferences about any kind of mental state, I’m not sure that we’re going to be able to do that with the current brain imaging tools.
There are just fundamental limits on fMRI in terms of its ability to see brain activity at the level that we might need to see it. It’s an open question and we’re certainly learning a lot about what you can predict and part of that is going to be development of better statistical models.
Ultimately, fMRI is a limited window into the biology and without a better window into human brain function, it’s not clear to me that we will be able to get to perfect reverse inference with this tool.

Robert King spent 29 years living alone in a six by nine-foot prison cell.
He was part of the Angola Three, a trio of men kept in solitary confinement for decades and named for the Louisiana state penitentiary where they were held. King was released in 2001 after a judge overturned his 1973 conviction for killing a fellow inmate. Since his exoneration, he has dedicated his life to raising awareness about the psychological harms of solitary confinement.
“People want to know whether or not I have psychological problems, whether or not I’m crazy: ‘How did you not go insane?’” King told a packed session at the annual Society for Neuroscience meeting here this week.
“I look at them and I tell them, ‘I did not tell you I was not insane.’ I don’t mean I was psychotic or anything like that, but being placed in a six-by-nine-by-12-foot cell for 23 hours a day, no matter how you appear on the outside, you are not sane.”
There are an estimated 80,000 people, mostly men, in solitary confinement in U.S. prisons. They are confined to windowless cells roughly the size of a king bed for 23 hours a day, with virtually no human contact except for brief interactions with prison guards. According to scientists speaking at the conference session, this type of social isolation and sensory deprivation can have traumatic effects on the brain, many of which may be irreversible.
Neuroscientists, lawyers and activists such as King have teamed up with the goal of abolishing solitary confinement as cruel and unusual punishment.

Most prisoners sentenced to solitary confinement remain there for one to three months, although nearly a quarter spend over a year there; the minimum stay is usually 15 days. The most common reasons for being sent to solitary are preventive measures, which can be indefinite, or punishment, which is more likely to have a set end point. Several states have passed legislation limiting who can be placed in solitary confinement (excluding, for example, mentally ill and juvenile offenders) and for how long. The United Nations recommends banning solitary confinement for more than 15 days, saying any longer constitutes torture.
Even in less extreme cases than that of the Angola Three, prolonged social isolation (feeling lonely, not just being alone) can have severe physical, emotional and cognitive consequences. It is associated with a 26 percent increased risk of premature death, largely stemming from an out-of-control stress response that results in higher cortisol levels, increased blood pressure and inflammation. Feeling socially isolated also increases the risk of suicide.
“We see solitary confinement as nothing less than a death penalty by social deprivation,” said Stephanie Cacioppo, an assistant professor of psychiatry and behavioral neuroscience at the University of Chicago, who was on the panel with King.
For good or bad, the brain is shaped by its environment, and the social isolation and sensory deprivation King experienced likely changed his. Chronic stress damages the hippocampus, a brain area important for memory, spatial orientation and emotion regulation. As a result, socially isolated people experience memory loss, cognitive decline and depression.
Studies show depression results in additional cell death in the hippocampus as well as the loss of a growth factor that has antidepressant-like properties, creating a vicious cycle. When sensory deprivation and an absence of natural light are thrown into the mix, people can experience psychosis and disruptions in the genes that control the body’s natural circadian rhythms.
“Social deprivation is bad for brain structure and function. Sensory deprivation is bad for brain structure and function. Circadian dysregulation is bad,” said Huda Akil, a professor of neuroscience at the University of Michigan who was also on the panel. “Loneliness in itself is extremely damaging.”

King has experienced lasting cognitive changes from his time in solitary confinement. His memory is impaired and he has lost his ability to navigate, both of which are signs of damage to the hippocampus. At one point he was unable to recognize faces, but that problem has passed.
Cacioppo speculated that social areas of his brain that were not being used, like those involved in facial recognition, might have atrophied during his time in solitary. Supporting this idea, recent research conducted in mice by neuroscientist Richard Smeyne at Thomas Jefferson University in Philadelphia and presented at the conference revealed that after one month of social isolation, neurons in sensory and motor regions of the brain had shrunk by 20 percent.
The question remains as to whether these neuronal changes are permanent or can be reversed. Akil, however, said she doubts “you can live through that experience and come out with the same brain you went in with, and not in a good way.”
King said he survived the ordeal because he recognized that his case was “politicized” and bigger than himself. He and many supporters believe the Angola Three were targeted and falsely convicted because they were members of the Black Panther Party. Their cases were later taken up by the United Nations as an example of the inhumanity of solitary confinement.
According to Cacioppo, King’s connection to a larger group and larger purpose likely gave him the resilience to survive the ordeal.
“Collective identity is protective against individual loneliness,” she noted.
By pairing their research with King’s experience, the neuroscientists on the panel hope to move the needle on people’s perspectives and policy around the issue.
Jules Lobel, a professor of law at the University of Pittsburgh and the sole lawyer on the panel, thinks they can: neuroscience research played a role in a class-action lawsuit he won against solitary confinement in California.
“Neuroscience can not only be a powerful tool for understanding the human condition,” he said, “but can also play an important role in changing the conditions that humans live under.”

Think of a pink elephant riding a scooter. If your brain were scanned right now, neuroscientists would see a region of your spongy thinking organ lighting up. What they wouldn’t see is a pink elephant riding a scooter. A thought in a brain scan is very different from your experience of thinking.
The chasm between the brain (grey matter) and the mind (thinking) is one of the biggest scientific mysteries.
It’s a problem that has stymied neuroscientists, computer scientists, philosophers and physicists.
A paper out today in the journal Science offers a possible piece of the puzzle. Building on Nobel Prize-winning research in neuroscience, a team of researchers from the Max Planck Institute for Human Cognitive and Brain Sciences in Germany and the Kavli Institute for Systems Neuroscience in Norway proposes that the human thought process relies on the brain’s navigation system.
One of the researchers, Edvard I. Moser, shared the Nobel Prize for the discovery that place cells in the hippocampus and grid cells in the entorhinal cortex allow an organism to position itself and navigate through space.
It may be helpful to think of the grid cells as a GPS map and the place cells as the blue dot representing where you are on that map. Place and grid cells activate when you’re navigating through your environment. As you move through the world, each new position in geographical space is reflected by a unique pattern of neural activity inside your brain, generating a mental map of a particular location that can be recalled whenever you return to the same spot.
Grid and place cells don’t just spark up when you’re navigating through the world. Grid cells are active when you learn a new concept. This piece of evidence helped build the hypothesis that knowledge is organized in a spatial fashion.

“We believe that the brain stores information about our surroundings in so-called cognitive spaces. This concerns not only geographical data but also relationships between objects and experience,” says the paper’s senior author, Christian Doeller, in a press release.
First author Jacob Bellmund says grid and place cells that physically exist in the hippocampus and the entorhinal cortex are the neural substrates of cognitive spaces.
“Grid and place cells are copiously studied in the context of navigation, but they’re also studied a lot in the context of memory or imagination, different cognitive abilities. We propose some ideas for how [grid and place cells] might do all these things using similar computations,” says Bellmund.
He suggests these cells produce a trajectory through cognitive space, a coordinate system for our thoughts where we collect and piece together the properties of a concept.
Our train of thought can be considered a path through the spaces of our thoughts, along different mental dimensions.

Bellmund hypothesizes that objects sharing similar properties are positioned closer together on a cognitive map.
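A toy sketch of that hypothesis: if concepts are points along mental dimensions, similar concepts lie closer together in the space. The concepts, feature dimensions and coordinates below are all made up for illustration and do not come from the paper.

```python
import math

# Toy "cognitive space": each concept is a point along two invented mental
# dimensions (say, size and ferocity), each scaled from 0 to 1.
concepts = {
    "housecat": (0.15, 0.30),
    "lynx":     (0.35, 0.55),
    "elephant": (0.95, 0.40),
}

def cognitive_distance(a, b):
    # Euclidean distance between two concepts in the toy space.
    return math.dist(concepts[a], concepts[b])

# Concepts sharing similar properties sit closer together than dissimilar ones.
print(cognitive_distance("housecat", "lynx") <
      cognitive_distance("housecat", "elephant"))  # → True
```

In this picture, a “train of thought” would be a path through such a space, and an unexpected association, like a comedian’s punchline, a long jump between distant points.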
Yet any comprehensive theory of how we think will have to account for the thought process of cognitive outliers. For instance, skilled comedians routinely make unanticipated associations between wildly different concepts.
“It’s a small world, but I wouldn’t want to have to paint it” is one example from the brain of comedian Steven Wright. Bellmund says he hasn’t seen the studies on comedian thought processes, but if he were to hazard a guess based on this model, the surprise of a joke may have something to do with a comedian’s ability to rapidly connect remote but meaningful associations between very distant locations in cognitive space. Bellmund insists this is just a guess; it’s not a topic his team has explored in their research.
There’s also an important semantic distinction relevant to this paper and the scientific vernacular. The researchers here are offering a theoretical model of how the mind works. But it is not yet a scientific theory. A scientific theory can be tested, verified and used to make predictions. The authors of today’s paper propose a compelling hypothesis based on neuroscience that helps get us closer to a unifying scientific theory of mind.

Excessive stress during fetal development or early childhood can have long-term consequences for the brain, from increasing the likelihood of brain disorders and affecting an individual’s response to stress as an adult to changing the nutrients a mother may pass on to her babies in the womb.
The new research suggests novel approaches to combat the effects of such stress, such as inhibiting stress hormone production or “resetting” populations of immune cells in the brain.
The findings were presented at Neuroscience 2018, the annual meeting of the Society for Neuroscience and the world’s largest source of emerging news about brain science and health.
Childhood stress increases the chance of developing anxiety, depression, or drug addiction later in life by two to four times, while stress during pregnancy may increase the child’s risk of developing autism spectrum disorder, as well as several other psychiatric illnesses.
Scientists are discovering more about the mechanisms through which childhood or fetal stress disrupts brain development and leads to these disorders, which may help reveal new therapeutic strategies.

Today’s new findings show that:
In a mouse model of autism spectrum disorder caused by maternal infection during pregnancy, renewing fetal brain immune cells alleviates symptoms of the disorder (Tsuneya Ikezu, abstract 030.09)
Stress before or during pregnancy can alter gut bacteria in women and mice, which in the mice reduces critical nutrients reaching fetuses’ brains (Eldin Jašarevic, abstract 500.14)
Early life stress changes chromatin structure in a brain reward region in mice, making them more vulnerable to stress as adults (Catherine Pena, abstract 500.01)
In rat pups, stress-induced deficits in social behavior and amygdala development occur only when the mother is present (Regina Sullivan, abstract 783.14)
Early life stress accelerates the development of the fear response in young mice, but the effect can be prevented by blocking stress hormone production (Kevin Bath, abstract 499.01)
“The research presented today demonstrates the long-lasting and far-reaching effects of stress during early development, from the populations of bacteria in the gut to the way DNA is folded in the nucleus,” said press conference moderator Heather Brenhouse, PhD, of Northeastern University, an expert in the effects of early life trauma.
“Understanding how stress impacts developing biological systems may lead to new, patient-specific approaches to treatment and better outcomes.”

Earlier this year, scientists identified the existence of a brand new DNA structure never before seen in living cells. That's right, it's not just the double helix.
The discovery of what's described as a 'twisted knot' of DNA in living cells confirms our complex genetic code is crafted with more intricate symmetry than just the double helix structure everybody associates with DNA. Importantly, the forms these molecular variants take affect how our biology functions.
“When most of us think of DNA, we think of the double helix,” said antibody therapeutics researcher Daniel Christ from the Garvan Institute of Medical Research in Australia back in April, when the discovery was made.
“This new research reminds us that totally different DNA structures exist and could well be important for our cells.”
The DNA component the team identified is called the intercalated motif (i-motif) structure, which was first discovered by researchers in the 1990s, but up until now had only ever been witnessed in vitro, not in living cells.
Thanks to Christ's team, we now know the i-motif occurs naturally in human cells. That means the structure's significance to cell biology, previously questioned because it had only been demonstrated in the lab, demands new attention from researchers.

If your only familiarity with DNA shapes is the dual helical spirals made famous by Watson and Crick, the configuration of the intercalated motif could come as a surprise.
“The i-motif is a four-stranded ‘knot’ of DNA,” explained genomicist Marcel Dinger, who co-led the research.
“In the knot structure, C [cytosine] letters on the same strand of DNA bind to each other, so this is very different from a double helix, where ‘letters’ on opposite strands recognise each other, and where Cs bind to Gs [guanines].”
According to Garvan's Mahdi Zeraati, the first author of the new study, the i-motif is only one of a number of DNA structures that don't take the double helix form, including A-DNA, Z-DNA, triplex DNA and Cruciform DNA, and which could also exist in our cells.
Another kind of DNA structure, called G-quadruplex (G4) DNA, was first visualised in human cells in 2013 by researchers who used an engineered antibody to reveal the G4 within cells.

In the April study, Zeraati and fellow researchers employed the same kind of technique, developing an antibody fragment (called iMab) that could specifically recognise and bind to i-motifs.
In doing so, it highlighted their location in the cell with an immunofluorescent glow.
“What excited us most is that we could see the green spots, the i-motifs, appearing and disappearing over time, so we know that they are forming, dissolving and forming again,” said Zeraati.
While there's still a lot to learn about how the i-motif structure functions, the findings indicate that transient i-motifs generally form late in a cell's 'life cycle', at a point known as the late G1 phase, when DNA is being actively 'read'.
The i-motifs also tend to appear in what are known as 'promoter' regions, areas of DNA that control whether genes are switched on or off, and in telomeres, the protective structures at the ends of chromosomes that are associated with ageing.
“We think the coming and going of the i-motifs is a clue to what they do,” said Zeraati.
“It seems likely that they are there to help switch genes on or off, and to affect whether a gene is actively read or not.”

Now that we definitively know this new form of DNA exists in cells, it'll give researchers a mandate to figure out just what these structures are doing inside our bodies.
As Zeraati explains, the answers could be really important, not just for the i-motif, but for A-DNA, Z-DNA, triplex DNA, and cruciform DNA too.
“These alternative DNA conformations might be important for proteins in the cell to recognise their cognate DNA sequence and exert their regulatory functions,” Zeraati explained to ScienceAlert.
“Therefore, the formation of these structures might be of utmost importance for the cell to function normally. And any aberration in these structures might have pathological consequences.”
The findings have been reported in Nature Chemistry.