
Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary  - 
Microbes that make methane are taking chemists on a road less traveled: Of two competing ideas for how microbes make the main component of natural gas, the winning chemical reaction involves a molecule less favored by previous research, something called a methyl radical. Reported in the journal Science, this research is important for understanding not only how methane is made, but also how to make things from it. Read more at

* * *

"Methane is an interesting substance because it's both a fossil fuel and a potentially renewable fuel that can come from microbes," said study lead Stephen Ragsdale of the University of Michigan, Ann Arbor. "In addition, detailed knowledge of the chemical steps involved in making methane could lead to major breakthroughs in designing energy efficient catalysts for converting methane into liquid fuels and other chemicals."

This study demonstrates one of a very few known instances of nature using a highly reactive methyl radical in its biological machinations.

"We were totally surprised," said computational chemist Simone Raugei, a coauthor at the Department of Energy's Pacific Northwest National Laboratory. "We thought we'd find evidence for other mechanisms."

Origin story: More than 90 percent of methane is (and has been) generated by microbes known as methanogens, members of the archaea, a group of microbes similar to bacteria. To make the gas, methanogens use a particular protein known as an enzyme. Enzymes aid chemical reactions in living organisms the way synthetic catalysts do in industrial chemical conversions. The enzyme can also run the reaction in reverse, breaking down methane so the microbe can harvest energy from it.

Scientists know a lot about this microbial enzyme. It creates the burnable gas by slapping a hydrogen atom onto a molecule called a methyl group. A methyl group contains three hydrogens bound to a carbon atom, just one hydrogen shy of full-grown methane.

To generate methane, the enzyme pulls the methyl group from a helper molecule called methyl-coenzyme M. Coenzyme M's job is to nestle the methyl group into the right spot on the enzyme. What makes the spot just right is a perfectly positioned nickel atom, which is largely responsible for transferring the last hydrogen.

How the nickel atom does this, however, has been debated for decades in the highly complex world of chemical reactions. Different possible paths create different fleeting intermediate molecules, but the reaction happens too fast for scientists to distinguish between them.

The path chemists have most often sided with involves the nickel atom on the enzyme directly attacking the methyl group and stealing it from coenzyme M. The resulting methyl-nickel molecule exists only temporarily, until the methyl in turn steals a hydrogen atom from another molecule in the enzyme's workspace, coenzyme B, and becomes methane. Many experiments lend support to this idea.

A second possibility, backed by a much smaller group of supporters, proceeds via a methyl radical. Radicals (aka free radicals) are unstable molecules that have an unpaired electron. They can do a lot of damage by breaking down weaker bonds in molecules.

It's that unpaired electron that causes problems. Bonds between atoms routinely involve two electrons, like a pair of ballroom dancers. The unpaired electron will do everything in its power to find a second, just as a single dancer in search of a partner will cut into another couple.

In this path to methane, the nickel atom bonds to a sulfur atom in coenzyme M rather than the methyl group. This knocks the methyl away and sends it off sans an electron. Hungry and irritated, the methyl radical immediately snags a hydrogen atom from coenzyme B, generating methane.

Process of elimination: To find out which mechanism was correct, the UM-PNNL team came up with a way to rule one out. First, they had to slow the reaction down. They slowed it a thousandfold by hobbling the second half of the path to methane, after the intermediate forms. Doing so let the intermediate build up.

Then, they performed a spectroscopic analysis called electron paramagnetic resonance (EPR) spectroscopy at the University of Michigan that allowed them to distinguish between the two intermediate molecules. If the reaction created the methyl-nickel molecule, methyl-nickel would show up as a blip in their EPR spectrum. If the reaction created a methyl radical that sauntered off, the molecule remaining with the protein — nickel bound to coenzyme M — would not register at all.

The team found no blip in the EPR profile of the post-reaction products, making the methyl radical the most likely intermediate. To be sure, the team performed additional biochemical analyses that ruled out other potential molecules. Another test showed that the structure of the major intermediate was nickel bound to coenzyme M, the expected result if the reaction took the methyl radical path.

"The impact of radicals on living matter, such as biological material, can be devastating, and involvement of a methyl radical, one of the most unstable radicals, is truly surprising," said Raugei, "For this to happen and make methane 100 percent of the time, the protein has to perform and control this reaction with an extremely high degree of precision, placing that methyl radical specifically beside only one atom — the hydrogen atom bound to the sulfur of coenzyme B."

Energy block: To further substantiate their results, the team modeled the reaction computationally. They zoomed in on the action within the enzyme, known as methyl-coenzyme M reductase.

"We found that the methyl radical required the least amount of energy to produce, making that mechanism the frontrunner yet again," said Bojana Ginovska, a computational scientist on the PNNL team.

In fact, one of the other intermediates required three times as much energy to make as the methyl radical, clearly putting it out of the running.
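The significance of a barrier difference like this can be made intuitive with the standard Arrhenius relation, which converts an energy gap into a rate ratio. The sketch below is illustrative only: the 10 and 30 kcal/mol figures are invented for the example and are not values reported by the study.

```python
import math

def relative_rate(delta_ea_kcal, temp_k=298.0):
    """Arrhenius factor: how much slower a pathway with an extra
    barrier of delta_ea_kcal (kcal/mol) proceeds at temperature T."""
    R = 1.987e-3  # gas constant in kcal/(mol*K)
    return math.exp(-delta_ea_kcal / (R * temp_k))

# Hypothetical numbers for illustration only: if the methyl-radical
# path costs 10 kcal/mol and a competing intermediate 30 kcal/mol
# (three times as much), the competing path is slower by:
suppression = relative_rate(30.0 - 10.0)  # ~2e-15 at room temperature
```

A barrier three times higher doesn't make a path three times slower; because the dependence is exponential, it effectively shuts the path off.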

Modeling the reaction computationally also allowed the team to look inside the reductase. Experiments had shown that the reaction happens faster at higher temperatures; the model revealed why: parts of the protein that help move the reaction along bring the nickel closer to the methyl-coenzyme M. Shorter distances let things happen faster.

The team used high performance computing resources at two DOE scientific user facilities: EMSL, the Environmental Molecular Sciences Laboratory at PNNL, and NERSC, the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory.

The results might help researchers, including Ragsdale and Raugei, learn to control methane synthesis — either in the lab or in bacteria that make it in places like the Arctic — and how to break it down.

"Nature has designed a protein scaffold that works very precisely, efficiently and rapidly, taking a simple methyl group and a seemingly innocent hydrogen atom and turning it into methane as well as running that reaction in both directions," said Ragsdale. "Now how can chemists design a scaffold to achieve similar results?"

"It would be a major breakthrough if we were able to devise a biomimetic strategy to activate methane, which means to turn it into more useful fuels," said Raugei. "If nature figured out how to do it in mild conditions, then perhaps we can devise an inexpensive way to design catalysts to convert methane into liquid fuels like we use in our vehicles and jets."

This work was supported by the Department of Energy's Office of Science and ARPA-E.

Reference: Thanyaporn Wongnate, Dariusz Sliwa, Bojana Ginovska, Dayle Smith, Matthew W. Wolf, Nicolai Lehnert, Simone Raugei, and Stephen W. Ragsdale. The Radical Mechanism of Biological Methane Synthesis by Methyl-Coenzyme M Reductase. Science, May 20, 2016. DOI: 10.1126/science.aaf0616

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary  - 
Using a comprehensive system-wide approach, a team of researchers studied metabolic pathways that regulate lipid accumulation in a genetically tractable yeast species. This research could improve lipid yields and enhance the efficiency of biofuel production. Learn more at #bioenergy

* * *

The yeast Yarrowia lipolytica has the ability to accumulate a large amount of lipids when nitrogen is limited. This, along with its amenability to genetic methods, has made it an attractive model for the production of high-value lipids for biofuel production. However, relatively little is known about factors that regulate enzymatic pathways responsible for lipid accumulation in this species.

To address this knowledge gap, a team of researchers from Pacific Northwest National Laboratory (PNNL) integrated metabolome, proteome, and phosphoproteome data to characterize lipid accumulation in response to limited nitrogen in Y. lipolytica.

The researchers used a microscopy system that integrates nonlinear two-photon excitation, laser-scanning confocal microscopy, and fluorescence lifetime imaging at EMSL, the Environmental Molecular Sciences Laboratory, a national scientific user facility. In the first global study of protein phosphorylation in Y. lipolytica, the researchers focused their analysis on changes in the expression and phosphorylation state of regulatory proteins, including kinases, phosphatases, and transcription factors.

They found that lipid accumulation in response to nitrogen limitation results from two distinct processes: the higher production of malonyl-CoA from excess citrate increases the pool of building blocks for lipid production, and the decreased capacity for β-oxidation reduces the consumption of lipids. The findings provide new genetic targets that could be manipulated to improve lipid yields in future metabolic engineering efforts.

The impact? A better understanding of the metabolic pathways that regulate lipid accumulation in yeast could be harnessed to improve lipid yields and enhance the efficiency of biofuel production.

This work was supported by the U.S. Department of Energy’s Office of Science, Office of Biological and Environmental Research (BER), including support of EMSL, an Office of Science User Facility; BER Genomic Science program; William Wiley Distinguished Postdoctoral Fellowship; and BER-funded Pan-omics program at PNNL.
interesting research..must read and thanks for sharing

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary  - 
Will your electric car's battery fade on that long, lonely stretch of road? For EV drivers with this worry, the answer could be lightweight lithium-sulfur batteries. They hold twice the energy of the batteries on store shelves today, but they won't hold a charge for long. Through the Joint Center for Energy Storage Research, PNNL scientists discovered how a salt used in the battery's electrolyte plays a critical role in allowing lithium-sulfur batteries to hold a charge after more than 200 uses. This research offers needed design principles for creating long-lasting, high-capacity batteries. Read more at

* * *

Scientists found that the salts used in the battery's liquid electrolyte make a big difference. When a salt called LiTFSI is packed into the liquid, a test battery can hold most of its charge for more than 200 uses. The LiTFSI helps bind up lithium atoms and sulfur on the electrode but quickly releases them. In contrast, a similar liquid ties up the lithium and sulfur but doesn't release them. The result is an electrode that quickly degrades; the battery fades after a few dozen uses.
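To get a rough sense of what "holding most of its charge for more than 200 uses" demands, the average per-cycle retention implied by an overall capacity retention can be computed directly. The 80% figure below is an assumption for the sake of the example, not a number reported by the study.

```python
def per_cycle_retention(overall_retention, cycles):
    """Average per-cycle capacity retention implied by an overall
    retention after a given number of charge/discharge cycles."""
    return overall_retention ** (1.0 / cycles)

# Illustrative only (not figures from the study): keeping 80% of
# capacity over 200 cycles implies ~99.9% retention per cycle,
# while fading to 80% in just 40 cycles implies ~99.4%.
good = per_cycle_retention(0.80, 200)
poor = per_cycle_retention(0.80, 40)
```

The point of the exercise: because losses compound, a long-lived battery can afford to lose only a tiny fraction of a percent of its capacity on each cycle.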

To determine the influence of electrolytes in lithium-sulfur batteries, the team did experiments with both LiTFSI and a similar electrolyte, called LiFSI, which contains less carbon and fluorine. After continually measuring the amount of energy that the battery held and released, the team did a post-mortem analysis to study the electrodes. They did this work using instruments at DOE's EMSL, an Office of Science user facility.

They discovered that with the LiTFSI, the electrode's lithium atoms became bound up with sulfur. The result is lithium sulfide (LiSx) forming on the electrode's surface. With LiFSI, lithium sulfate (LiSOx) formed. By calculating the strength with which the compounds clung to the lithium, they found that the lithium sulfide easily broke apart to release the lithium. However, the lithium sulfate was hard to separate. The oxygen in the lithium sulfate was the culprit.

"By conducting a macroscopic compositional analysis combined with simulations, we can see which bonds are easily broken and what will happen from there," said PNNL's Dr. Ji-Guang (Jason) Zhang, who led the study. "This process lets us identify the electrolytes behavior, guides us to design a better electrolyte, and improve the cycle life of lithium-sulfur batteries."

Why is this important? One of the concerns with electric cars is long, lonely stretches of highway. Drivers don't want to be stranded between charging stations, and this concern can factor into their decision to buy lower emission vehicles. The results of this study add another important page into the design guide for high-energy lithium-sulfur batteries.

What's Next? For the researchers, the next step is developing an electrolyte additive that forms a protective layer on the lithium anode's surface, protecting it from the electrolyte.

This work was supported by the Joint Center for Energy Storage Research (JCESR), an Energy Innovation Hub funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences. 

Vitaliy Kaurov

• General/Interdisciplinary  - 
Ritual human sacrifice promoted and sustained the evolution of stratified societies

A group of researchers ran statistical analyses on human sacrifice data gathered from 93 groups in the Pacific Island region.

ABSTRACT: "Evidence for human sacrifice is found throughout the archaeological record of early civilizations1, the ethnographic records of indigenous world cultures, and the texts of the most prolific contemporary religions. According to the social control hypothesis, human sacrifice legitimizes political authority and social class systems, functioning to stabilize such social stratification. Support for the social control hypothesis is largely limited to historical anecdotes of human sacrifice, where the causal claims have not been subject to rigorous quantitative cross-cultural tests. Here we test the social control hypothesis by applying Bayesian phylogenetic methods to a geographically and socially diverse sample of 93 traditional Austronesian cultures. We find strong support for models in which human sacrifice stabilizes social stratification once stratification has arisen, and promotes a shift to strictly inherited class systems. Whilst evolutionary theories of religion have focused on the functionality of prosocial and moral beliefs, our results reveal a darker link between religion and the evolution of modern hierarchical societies."



Original NATURE article:


"Most of us would agree that human sacrifice is a bad idea. Yet many ancient civilizations (and some more modern ones) engaged in religious rituals that involved sacrificing people. Why do so many societies evolve a system of human sacrifice, despite the obvious moral drawbacks? A group of social scientists has just published a statistical analysis in Nature that reveals how this grisly practice has fairly predictable results, which benefit elites in socially stratified cultures.

The group examined 93 Austronesian cultures in the Pacific Islands, drawing information from the Pulotu Database of Pacific Religions to determine which groups had human sacrifice and when. Previous analysts have suggested that human sacrifice helps to maintain social stratification. In this new study, the researchers wanted to understand the relationship between human sacrifice and social stratification over time.

To do that, they created statistical models using Bayesian methods, testing to see how human sacrifice affected societies that fit into three buckets: egalitarian, moderately stratified, and highly stratified. They write:

"Evidence of human sacrifice was observed in 40 of the 93 cultures sampled (43%). Human sacrifice was practiced in 5 of the 20 egalitarian societies (25%), 17 of the 46 moderately stratified societies (37%), and 18 of the 27 highly stratified societies (67%) sampled."

The researchers ran these societies through several different probabilistic models, exploring how the cultures had changed over time and what role (if any) human sacrifice played in those changes.

What they found is probably not too surprising, though it is revealing. Human sacrifice has the effect of maintaining stability in highly stratified cultures, and it can also turn a moderately stratified society into a highly stratified one. Interestingly, egalitarian societies that introduced human sacrifice did not become stratified.
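The sampling counts quoted above reduce to a quick arithmetic check, which reproduces the percentages the authors report:

```python
# Counts quoted from the paper: (societies with human sacrifice, societies sampled)
counts = {
    "egalitarian": (5, 20),
    "moderately stratified": (17, 46),
    "highly stratified": (18, 27),
}

for level, (with_sacrifice, total) in counts.items():
    pct = 100.0 * with_sacrifice / total
    print(f"{level}: {with_sacrifice}/{total} = {pct:.0f}%")

overall = 100.0 * (5 + 17 + 18) / (20 + 46 + 27)  # 40/93, about 43%
```

The gradient across the three buckets (25% to 37% to 67%) is the raw pattern that the Bayesian phylogenetic models then test for a causal direction.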

Human sacrifice, in other words, is a useful tool for elites who want to maintain their power in a stratified society. This is especially true in the Austronesian context, where religious and political leaders were often the ones doing the sacrificing, and the sacrificial humans were generally slaves or people with low social standing (see figure below for some of their calculations).

It's worth noting that this finding reflects religious human sacrifice in Austronesian cultures—which are themselves fairly diverse—and may not be relevant to cultures where humans are sacrificed by the state or during racial/ethnic genocide. It's also important to keep in mind that it's extremely hard to quantify human sacrifice and social status, so these findings represent only the broadest and most abstracted trends.

Still, it's a finding worth pondering. As the researchers conclude in their paper:

"Unpalatable as it might be, our results suggest that ritual killing helped humans transition from the small egalitarian groups of our ancestors, to the large stratified societies we live in today."

They also point out that their work calls into question recent research on the pro-social role of religion in the development of civilization. "Religious rituals also played a darker role in the evolution of modern complex societies," they write. If the "ritual" you're talking about is human sacrifice, that seems like a fair assessment."
We still use human sacrifice we will probably sacrifice a presidential candidate to keep the status quo

Malthus John

• General/Interdisciplinary  - 
Yea, or nay?
Is human health influenced by solar cycles?

Yet another scientific study claims that it is. The latest zeroes in on giant cell arteritis and rheumatoid arthritis. [1] Others have speculated about links to many other ailments, from headaches to heart attacks. [2,3]

Of course, correlation is not causation, and the potential explanations are numerous. One example invokes magnetic storms and our internal compass system, while another connects the Schumann resonances to melatonin. [4,5]

What about you? Have you ever noticed a health problem and correlated it with solar storms? Most haven't, since you'd have to hear about the idea before thinking to look, unless perhaps you're an aurora chaser who noticed the coincidence. In the roughly 12 years this kind of information has been readily available, I have not had a headache that was not preceded by solar activity (though not all storms cause headaches). That's enough to spark my interest and motivate further study, which is much how it works in science!

[1] Do solar cycles influence giant cell arteritis and rheumatoid arthritis incidence?
[2] Magnetic Storms Affect Humans As Well As Telecommunications
[3] Severe Geomagnetic Storm & Flares Causing Widespread Sleep Disturbances & Other Human Health Effects
[4] Are stress responses to geomagnetic storms mediated by the cryptochrome compass system?
[5] Schumann Resonances, a plausible biophysical mechanism for the human health effects of Solar-Geomagnetic Activity

#sciencesunday   #solarcycles   #geomagneticstorms   #health   #rheumatoidarthritis   #giantcellarteritis   #space   #plasma   #physics   #science  
What began as a chat between husband and wife has evolved into an intriguing scientific discovery. The results, published in May in BMJ (formerly British Medical Journal) Open, show a “highly significant” correlation between periodic solar storms and incidences of rheumatoid arthritis (RA) and ...
Yes, that is central to what I wrote, and the studies are clear on it. It's misunderstood by the press and the general public much more than within science. +Dave Sharley

What's interesting are the 100s of papers referenced that provide plausible causal dynamics.

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary  - 
Revolutionary new electronic devices require novel material systems. Scientists from PNNL and the University of Minnesota showed that combining two oxide materials in one particular orientation gives rise to a densely packed sheet of highly mobile electrons. The density of these electrons – the highest ever observed at the junction of two materials – may well help create a new class of electronic devices. Learn more at

* * *

By depositing alternating, ultra-thin layers of NdTiO3 and SrTiO3 on a crystalline surface, and investigating their properties experimentally and theoretically, the researchers demonstrated that a very high density of mobile electrons can be generated and confined within the SrTiO3 layers. The mobile electrons jump from the NdTiO3 layers, where they cannot easily move, into the SrTiO3 layers, where they are free to move.

Why do the electrons jump? A certain number must jump from NdTiO3 into SrTiO3 to stabilize the combined material system. The most stable charge states of the neodymium (Nd) and titanium (Ti) ions in NdTiO3 cannot be reached without electron rearrangement, and part of this rearrangement involves some electrons jumping across the junction into the adjacent SrTiO3 layers. However, when the NdTiO3 layer reaches a certain thickness, it becomes energetically favorable for additional loosely bound electrons in the NdTiO3 layer to spill over into the adjacent SrTiO3 layer, like water running over a waterfall. Once this happens, the SrTiO3 layers become conducting channels with a high density of mobile electrons.
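A rough sense of scale for the electron density at such an oxide junction comes from the textbook polar-discontinuity argument (familiar from LaAlO3/SrTiO3 interfaces): if roughly half an electron per interface unit cell transfers, the sheet density follows from the lattice constant alone. The half-electron figure and lattice constant below are standard back-of-the-envelope assumptions, not numbers reported by this study.

```python
# Back-of-the-envelope polar-discontinuity estimate (illustrative only).
a = 3.905e-8            # SrTiO3 in-plane lattice constant in cm (assumed)
electrons_per_cell = 0.5  # half an electron transferred per unit cell (assumed)

n_sheet = electrons_per_cell / a**2  # electrons per cm^2

# ~3e14 electrons/cm^2, a couple of orders of magnitude above the
# ~1e12 cm^-2 typical of conventional semiconductor 2D electron gases.
```

This is why oxide interfaces are attractive for devices like plasmonic transistors: the achievable sheet densities are far beyond what doped semiconductors provide.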

Why is this important? New kinds of electronic devices that exhibit novel functionalities are constantly being sought to expand our technology base. One such device, which cannot be fabricated with existing electronic materials, is a high-frequency plasmonic field-effect transistor. Such a device could turn a large electronic signal on and off very fast, something not achievable with traditional semiconductor materials such as silicon, but it requires a conducting channel with a far higher density of mobile electrons. The interface between NdTiO3 and SrTiO3 provides exactly such a channel, even though neither oxide conducts electricity as a pure material.

What's Next? This work is part of ongoing research into the electronic, magnetic, and optical properties of doped metals at Pacific Northwest National Laboratory.

Work at the University of Minnesota was supported primarily by the National Science Foundation through the Materials Research Science and Engineering Center under awards DMR-0819885 and DMR-1420013. The band offset work at Pacific Northwest National Laboratory was supported by the Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. The computational modeling at Pacific Northwest National Laboratory was supported by the PNNL Laboratory Directed Research and Development (LDRD) program.

Chad Haney

• General/Interdisciplinary  - 
How can you use x-ray CT to image a leech?
Joy and Luck of Fixing a Leech
A group of researchers has used microCT to examine and describe a new species of leech, which they named after Amy Tan (the author of The Joy Luck Club). The most important part is kind of buried in the Science Daily piece.

For objects smaller than a human, CT has horrible soft-tissue contrast. MRI has significantly better soft-tissue contrast, regardless of the size of the sample. However, microCT has better spatial resolution than preclinical or 'microMRI', if you will. This group used a microCT that is not intended for live specimens and can reach 5 micron resolution.

The research presented here is a fantastic example of how science works, i.e., building on previous work, especially work from a different area. For those of you who remember high school biology, you probably had to dissect a frog or something from a jar. That stinky liquid is formaldehyde, which is what preserves the specimen. It's known as a fixative. So this group looked at fixatives that are typically used in scanning electron microscopy. One of them was osmium tetroxide, which deposits the heavy metal osmium in tissue to give better contrast. The recipe that worked best used AFA (alcohol, formalin (a solution of formaldehyde), and acetic acid) as the primary fixative, followed by osmium tetroxide.

Unlike the BaSO4 method I wrote about earlier, this method involves soaking the sample for several hours (6-12).

The other key part is in the visualization and image analysis tools. Identifying the various internal organs uses a tool called segmentation. Sometimes it's automated and sometimes you have to do it manually.
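As a toy illustration of what segmentation does, the sketch below labels connected bright regions in a tiny synthetic image using a simple threshold plus flood fill. This is only a minimal stand-in for the idea; real microCT segmentation pipelines are far more sophisticated (and, as noted, often manual).

```python
from collections import deque

def segment_by_threshold(image, threshold):
    """Label 4-connected pixel regions brighter than a threshold.

    A minimal stand-in for the segmentation step used to pick out
    organs in a scan; each bright connected region gets its own label.
    """
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    n_regions = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and labels[r][c] == 0:
                n_regions += 1  # found a new region: flood fill it
                labels[r][c] = n_regions
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] > threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = n_regions
                            queue.append((ny, nx))
    return labels, n_regions

# Toy "slice": two bright blobs on a dark background.
img = [[0.0] * 8 for _ in range(8)]
for r in range(1, 3):
    for c in range(1, 3):
        img[r][c] = 1.0   # "organ" one
for r in range(5, 8):
    for c in range(5, 7):
        img[r][c] = 0.8   # "organ" two
labels, n = segment_by_threshold(img, 0.5)  # n == 2 separate regions
```

Automated tools do essentially this kind of region labeling, just with much smarter criteria than a single brightness threshold.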

Since I'm heading out to walk my dog, I'll keep this short and give you a few links if you wish to read more.

Medical Imaging 101 pt 2: CT

Fast CT from GE Healthcare

BaSO4, X-ray Contrast

Medical visualization, it's what I see and do

GE phoenix v|tome|x s scanner

Here's the Science Daily article.

Full article here:
1. Michael Tessler, Amalie Barrio, Elizabeth Borda, Rebecca Rood-Goldman, Morgan Hill, Mark E. Siddall. Description of a soft-bodied invertebrate with microcomputed tomography and revision of the genus Chtonobdella (Hirudinea: Haemadipsidae). Zoologica Scripta, 2016. DOI: 10.1111/zsc.12165

h/t +rasha kamel 
I'm happy to share, +Gary Ray R. Maybe someday I'll get some interesting discussion going with strangers interested in learning some science.

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary  - 
Seashells and lobster claws are hard to break, but chalk is soft enough to draw on sidewalks. Though all three are made of calcium carbonate crystals, the hard materials include clumps of soft biological matter that make them much stronger. Research published in Nature Communications reveals how soft clumps get into crystals and endow them with remarkable strength. Read more at

* * *

The research shows that such clumps become incorporated via chemical interactions with atoms in the crystals, a mechanism that earlier understanding did not anticipate. By providing insight into the formation of natural minerals that are composites of both soft and hard components, the work will help scientists develop new materials for a sustainable energy future based on this principle.

"This work helps us to sort out how rather weak crystals can form composite materials with remarkable mechanical properties," said materials scientist Jim De Yoreo of the Department of Energy's Pacific Northwest National Laboratory. "It also provides us with ideas for trapping carbon dioxide in useful materials to deal with the excess greenhouse gases we're putting in the atmosphere, or for incorporating light-responsive nanoparticles into highly ordered crystalline matrices for solar energy applications."

Calcium carbonate is one of the most important materials on earth, crystallizing into chalk, shells, and rocks. Animals from mollusks to people use calcium carbonate to make biominerals such as pearls, seashells, exoskeletons, or the tiny organs in ears that maintain balance. These biominerals include proteins or other organic matter in the crystalline matrix to convert the weak calcium carbonate to hard, durable materials.

Scientists have been exploring how organisms produce these biominerals in the hopes of determining the basic geochemical principles of how they form, and also how to build synthetic materials with unique properties in any desired shape or size.

The strength of a material depends on how easy it is to disrupt its underlying crystal matrix. If a material is compressed, then it becomes harder to break the matrix apart. Proteins trapped in calcium carbonate crystals create a compressive force — or strain — within the crystal structure.

Unlike the strain that makes muscles sore, this compressive strain is helpful in materials, because it makes it harder to disrupt the underlying crystal structure, thereby adding strength. Scientists understand how forces, stress and strain combine to make strong materials, but they understand less about how to create the materials in the first place.

The leading explanation for how growing crystals incorporate proteins and other particles is by simple mechanics. Particles land on the flat surface of calcium carbonate as it is crystallizing, and units of calcium carbonate attach over and around the particles, trapping them.

"The standard view is that the crystal front moves too fast for the inclusions to move out of the way, like a wave washing over a rock," said De Yoreo.

That idea's drawback is that it lacks the details needed to explain where the strain within the material comes from. The new results from De Yoreo and colleagues supply those details.

"We've found a completely different mechanism," he said.

To find out how calcium carbonate incorporates proteins or other strength-building components, the team turned to atomic force microscopy, also known as AFM, at the Molecular Foundry, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory. In AFM, the microscope tip delicately runs over the surface of a sample like a needle running over the grooves in a vinyl record. This creates a three-dimensional image of a specimen under the scope.

The team used a high concentration of calcium carbonate that naturally forms a crystalline mineral known as calcite. The calcite builds up in layers, creating uneven surfaces during growth, like steps and terraces on a mountainside. Or, imagine a staircase. A terrace is the flat landing at the bottom; the stair steps have vertical edges from which calcite grows out, eventually turning into terraces too.

For their inclusions, the team created spheres out of organic molecules and added them to the mix. These spheres, called micelles, are clusters of molecules that roll up like roly-poly bugs based on the chemistry along their bodies — pointing outward are the parts of the molecules that play well chemically with both the surrounding water and the calcite, while tucked inside are the parts that don't get along with the watery environment.

The first thing the team noticed under the microscope is that the micelles do not randomly land on the flat terraces. Instead they only stick to the edges of the steps.

"The step edge has chemistry that the terrace doesn't," said De Yoreo. "There are these extra dangling bonds that the micelles can interact with."

The edges hold onto the micelles as the calcium carbonate steps close around them, one after another. The team watched as the growing steps squeezed the micelles. As the step closed around the top of the micelle, first a cavity formed and then it disappeared altogether under the surface of the growing crystal.

To verify that the micelles were in fact buried within the crystals, the team dissolved the crystal and looked again. Like running a movie backwards, the team saw micelles appear as the layers of crystal disappeared.

Finally, the team recreated the process in a mathematical simulation. This showed them that the micelles — or any spherical inclusions — are compressed like springs as the steps close around them. These compressed springs then create strain in the crystal lattice between the micelles, leading to enhanced mechanical strength. This strain likely accounts for the added strength seen in seashells, pearls and similar biominerals.
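The spring picture lends itself to a quick Hooke's-law estimate. The sketch below is purely illustrative (the stiffness, compression and contact area are invented numbers, not values from the study):

```python
# Hooke's-law sketch: a spherical inclusion compressed by closing crystal steps
# acts like a spring, exerting stress on the surrounding lattice.
def inclusion_stress(stiffness_n_per_m, compression_m, contact_area_m2):
    """Stress (Pa) from a Hookean inclusion: sigma = F / A with F = k * x."""
    force = stiffness_n_per_m * compression_m
    return force / contact_area_m2

# Invented numbers: an effective stiffness of 1 N/m, a 2 nm compression,
# and a ~300 nm^2 contact patch between micelle and crystal.
sigma = inclusion_stress(1.0, 2e-9, 3e-16)
print(f"{sigma / 1e6:.1f} MPa of compressive stress")
```

Stresses of this kind, summed over many buried micelles, are the sort of lattice distortion the simulation links to the enhanced strength.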

"The steps capture the micelles for a chemical reason, not a mechanical one, and the resulting compression of the micelles by the steps then leads to forces that explain where the strength comes from," said De Yoreo.

This work was supported by the Department of Energy Office of Science and the National Institutes of Health.
"These compressed springs then create strain in the crystal lattice between the micelles, leading to enhanced mechanical strength."

So are shells strong like prestressed concrete - extra compression is released when subjected to tensile force?

ULg Reflexions

• General/Interdisciplinary  - 
How do listeners perceive whether a #singer is in tune or not? Tunefulness is challenging to define objectively. In spite of the difficulties involved, Pauline Larrouy-Maestri, a researcher at the Max Planck Institute and a scientific collaborator with the department of psychology of the +Université de Liège (ULg), has succeeded in doing so. By quantifying objective criteria for judging singing accuracy with the help of computer programs, and by comparing the subjective judgements of #music professionals and #laymen, she evaluated how the two groups perceive accuracy. This research greatly alters the commonly held idea that music professionals are better equipped to judge voice accuracy.
Read more about it:

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary  - 
To make "clean" fossil fuel burning a reality, researchers have to pull carbon dioxide out of the exhaust gases that rise from coal or natural gas power plants and store or reuse it. For the capturing feat, scientists are studying special scrubbing liquids that bind and release the gas, but some of the most promising ones thicken when binding carbon dioxide, rendering them inefficient and expensive to use. Now, researchers have used computer modeling to design these liquid materials so that they retain a low viscosity after sponging up carbon dioxide, based on a surprise they found in their explorations. Read more at

* * *

Although the chemists still have to test the predicted liquid in the lab, being able to predict viscosity will help researchers find and design cheaper, more efficient carbon capture materials, they report in the Journal of Physical Chemistry Letters on March 28.

"We're hoping to drive down the operational costs," said chemist Roger Rousseau of the Department of Energy's Pacific Northwest National Laboratory. "With a lower viscosity, we can run the process at lower, optimal temperatures, and the cost to implement drops astronomically."

Carbon Sponge: Coal and natural gas power plants currently emit substantial quantities of greenhouse gases, but carbon capture and storage technologies provide one promising path to reducing those emissions.

To separate the gas, researchers are designing materials that can reversibly — and only — bind carbon dioxide. Exhaust fumes are mixed with the material in some way, the material collects the greenhouse gas like a sponge, and then other processes would, figuratively, squeeze the sponge to get the carbon dioxide back out — for storage out of the atmosphere or reuse in fuels or chemicals production.

Frontrunners among such materials are liquids known as carbon dioxide binding organic liquids, or CO2BOLs for short (pronounced co-balls). CO2BOLs have an advantage over many technologies — they are not water-based. Water-based materials that collect carbon dioxide require much more energy to pull the carbon dioxide back out, because the process involves heating.

Researchers have studied CO2BOLs for many years, but had trouble overcoming a thickening problem. The CO2BOLs by themselves are similar in viscosity to water, but once they bind carbon dioxide, they become thick like cold honey. The more carbon dioxide they collect, the thicker they get.

The high viscosity is a drawback in many respects: Pumping thick liquids through pipes to a facility to collect the carbon dioxide requires a lot of energy and pulling the gas out of the liquid requires a lot of heat. Also, operators cannot collect as much carbon dioxide as the CO2BOL will hold; they have to keep the viscosity thin enough to pump the fluid. Economic calculations suggest using CO2BOLs inefficiently like this would be expensive.

"The high viscosity would significantly increase the cost compared to conventional carbon capture technology," said PNNL chemist Dave Heldebrant, a coauthor on the study who is leading the chemical synthesis part of the team.

Calculated Viscosity: Not willing to give up on a promising technology, Rousseau and his colleague Vassiliki-Alexandra "Vanda" Glezakou took to the computer. They decided to see if they could predict the viscosity of a molecule with molecular simulations. That would allow them to design low-viscosity CO2BOLs that could then be tested in the lab.

They started by seeing what was going on with a simple CO2BOL and carbon dioxide. They chose a molecule called IPADM and simulated its binding with carbon dioxide, with the computer program keeping track of the positions and movements of many thousands of atoms in the simulation.

They found that a neutral carbon dioxide molecule landed on a particular place on the neutral IPADM, forming IPADM-CO2. When this happened, the electrons shuffled around between atoms. This created a spot within the IPADM-CO2 with a positive charge and another spot with a negative charge. The result is an overall neutral molecule that has separated positive and negative charges, what chemists call a zwitterion.

Glezakou, Rousseau and colleagues performed multiple simulations, but mixed (computationally) an increasing percentage of carbon dioxide into the IPADM molecules. They saw a relation between how the charges moved around on the molecules and their viscosity. This allowed them to develop an equation that could calculate viscosity from various chemical characteristics of their CO2BOLs. They validated their equation by comparing calculated viscosities of different CO2BOLs with known values.
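The article doesn't reproduce the equation itself, but the workflow (regress a simulated, charge-related descriptor against known viscosities, then predict new candidates) can be sketched in a few lines. The descriptor name and every data value below are invented for illustration:

```python
import numpy as np

# Hypothetical training data: a charge-separation descriptor from simulation
# versus measured log10 viscosity (cP) for several known CO2BOLs.
descriptor = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
log_visc = np.array([0.9, 1.6, 2.4, 3.1, 3.9])

# Fit a linear relation: log10(viscosity) = slope * descriptor + intercept.
slope, intercept = np.polyfit(descriptor, log_visc, 1)

def predict_viscosity_cp(d):
    """Predicted viscosity (cP) for a candidate's simulated descriptor value."""
    return 10 ** (slope * d + intercept)

# A candidate with a smaller charge separation is predicted to be thinner.
print(predict_viscosity_cp(0.2) < predict_viscosity_cp(0.6))  # → True
```

Validation then proceeds as in the study: compare calculated viscosities for held-out CO2BOLs against known values.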

Their analysis also showed them that the zwitterion is what ramps up the viscosity. The internal positive and negative charges can interact with such charges on other IPADM-CO2 molecules, preventing them from freely moving around. Without the -CO2, the IPADM is thin as water. With it, honey.

Z to A: Could the researchers get rid of those charges? In other zwitterions, the researchers knew that protons often moved around, sometimes forming a neutral molecule. They wondered if they could force CO2BOLs to do this as well by changing the molecular scaffold to push back the proton to the negative part of the molecule once it captured carbon dioxide, resulting in a neutral acid.

"The minute we saw that the acid form was more stable for some CO2BOLs, we knew instantly that we could change the molecular structure of candidate CO2BOLs to make that happen more often," said Glezakou. "Nobody was considering the neutral acid form for these systems before. The conventional idea was that CO2BOLs would always be an ionic liquid, but clearly it doesn't have to be. But the trick is to have the two parts of the molecule that receive the proton in close proximity and able to share the proton."

If this idea panned out, viscosity would no longer be an issue for non-ionic carbon capture solvent systems.

The team simulated IPADM binding to carbon dioxide again, this time including two variations on IPADM, each with small chemical changes that would influence where electrons traveled through the bound structure.

They simulated 25 percent of the CO2BOLs binding carbon dioxide and determined how easily the neutral acids formed. The team saw that, unlike IPADM, the two variant CO2BOLs unexpectedly formed the neutral acid easily, almost as easily as they formed the zwitterion. One in particular, called EODM, was about 50 percent zwitterion and 50 percent neutral acid.

Reality check: The scientists then determined what form IPADM-CO2 took in real life. Comparing the values of calculated viscosities with experimentally determined viscosities, the chemists concluded that IPADM-CO2 most commonly forms the zwitterion.

But what would happen to IPADM-CO2's viscosity if the neutral acid formed? The chemists calculated the predicted viscosity of IPADM-CO2 and its two variations. For all three, the predicted viscosity dropped by more than half.

Armed with what they now know has to happen to their CO2BOLs, the chemists are shifting their focus towards molecules that keep the charged parts near each other and that are more likely to form the neutral acid variation. Some initial measurements in the lab confirmed the findings, with more to come.

"If we could cut the viscosity by 50 percent or more, CO2BOLs would be acting in their optimal range and be far more efficient," said Heldebrant.

"This is a fine example of how fundamental science and molecular level insights can accelerate technology in real time," said Glezakou. "And this was possible through the unique integration of cutting-edge theory and experiment."

This work was supported by the Department of Energy Office of Fossil Energy.

The Journal of Physical Chemistry Letters paper may be found at

Daniel Montesinos

• General/Interdisciplinary  - 
How Sci-Hub disrupted the scientific editorial business model

Technology is disrupting many business models (think Uber, AirBnB, Spotify), and scientific publications are next. But this time no one is making a profit. This time the scientists are the ones benefiting, at least for now.

I am guessing that at some point universities will just refuse to pay access fees, and then all publishers will have to go full Open Access. Publishers are in fact already transitioning, but they keep both pay-per-view and Open Access articles within the same journal in an attempt to preserve both income streams for as long as they can. When access fees are gone, we'll have to fight the next battle: reasonable publication fees.

“A lawsuit isn’t going to stop it (Sci-Hub), nor is there any obvious technical means. Everyone should be thinking about the fact that this is here to stay.”

"It is easy to understand why journal publishers might see Sci-Hub as a threat. It is as simple to use as Google’s search engine, and as long as you know the DOI or title of a paper, it is more reliable for finding the full text."

"Many users can access the same papers through their libraries but turn to Sci-Hub instead—for convenience rather than necessity"

"The flow of Sci-Hub activity over time reflects the working lives of researchers, growing over the course of each day and then ebbing—but never stopping—as night falls."

And don't miss the opportunity to zoom in to your town and see how many people are using Sci-Hub.
An exclusive look at data from the controversial web site Sci-Hub reveals that the whole world, both poor and rich, is reading pirated research papers.
This reminds me of something I read yesterday in a related subject on Paywalls

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary  - 
Researchers examined the chemical identity and 3-D position of atoms in soft biological materials with a new approach using atom probe tomography, or APT. An extension of this new specimen preparation technique can further enhance the APT study of organic and inorganic materials and nanoparticles relevant to energy and the environment. Learn more at

* * *

The chemical identity and 3-D position of individual atoms in inorganic materials can be revealed using the powerful APT technique, which combines mass spectrometry with advanced microscopy. However, use of APT to study soft biological materials has been limited due to difficulties in specimen preparation.

To address this problem, researchers from EMSL, the Environmental Molecular Sciences Laboratory, and Pacific Northwest National Laboratory (PNNL) developed an advanced specimen preparation approach to study soft biological materials using APT. The new approach involves embedding ferritin in an organic polymer resin that lacks nitrogen, providing chemical contrast for visualizing atomic distributions. The team used the Helios Nanolab dual-beam focused ion beam/scanning electron microscope (FIB/SEM) at EMSL, a Department of Energy (DOE) scientific user facility, to carve and lift out an appropriate sample for APT analysis.

Then, using EMSL’s APT, they directly mapped the distribution of phosphorus at the surface of the ferrihydrite mineral, thereby providing insight into the role of phosphorus in stabilizing the ferrihydrite structure. The robust sample preparation method can be directly extended to further enhance the study of biological, organic and inorganic nanomaterials relevant to energy or the environment.

Why is this important? An extension of the new specimen preparation technique can further enhance the APT study of organic and inorganic materials and nanoparticles relevant to energy and the environment.

Publication: D.E. Perea, J. Liu, J. Bartrand, Q. Dicken, S.T. Thevuthasan, N.D. Browning and J.E. Evans, “Atom probe tomographic mapping directly reveals the atomic distribution of phosphorus in resin embedded ferritin.” Scientific Reports (2016). [DOI: 10.1038/srep22321]

This work was supported by the U.S. Department of Energy Office of Science (Office of Biological and Environmental Research), including support of the +Environmental Molecular Sciences Laboratory (EMSL) at PNNL; and the Chemical Imaging Initiative conducted under the Laboratory-Directed Research and Development Program at PNNL.

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary  - 
Proteins are at the core of life. In living things, they are architect and engineer. They are the wrenches and machines that build an organism's varied parts, building those parts out of other proteins of many sizes and shapes. They form the power plants in cells, run the plants, make energy and store energy. Now, scientists developed a system to make synthetic polymers with the versatility of proteins. Based on an inexpensive industrial chemical, these synthetic polymers might one day be used to create materials with functions as limitless as proteins, which are involved in every facet of life. Read more at

* * *

Researchers hoping to design new materials for energy uses have developed a system to make synthetic polymers — some would say plastics — with the versatility of nature's own polymers, the ubiquitous proteins. Based on an inexpensive industrial chemical, these synthetic polymers might one day be used to create materials with functions as limitless as proteins, which are involved in every facet of life.

Reporting in Angewandte Chemie International Edition March 14, researchers reveal a method to produce polymers that mimic proteins in the versatility of their raw ingredients and how those ingredients link together to form a larger structure.

"Proteins are sequence-defined polymers and have a whole variety of exquisite functions," said materials scientist Jay Grate of the Department of Energy's Pacific Northwest National Laboratory. "But natural materials are unstable. That's good for nature, but if we want stable, long-lasting materials, we need to make our own sequence-defined polymers."

Stuff of life: Because of their versatility, proteins are some of researchers' favorite tools. Many drugs are re-engineered proteins such as converted antibodies (for example, drugs whose names end in -mab). The problem, however, is that proteins are also short-lived. Nature designed them to be temporary and recyclable. Any environment proteins find themselves in is full of things — often other proteins — that break them down.

One way to get around these butchers is to design a material that behaves like proteins but is not actually protein. To that end, researchers are pursuing materials that mimic the building blocks of proteins — amino acids. Amino acids bestow on proteins their tremendous variety and versatility.

Those qualities are what plastic protein researchers are trying to emulate. Amino acids come in 20 or so variations. Each has the same backbone, from which juts a group of atoms called a side chain that gives the amino acid its particular chemical characteristic. The amino acid backbones snap together like beads on a string, the side chains arranged in a particular order for each protein.

But proteins aren't floppy pearl necklaces. The beads fold in upon themselves to form structured objects. Some proteins end up looking like balls, some like capital Ys, others like olive wreaths.

These shapes come about because the side chains and the protein backbone stick to other side chains and backbone regions like Post-Its stick to one another. The folding and sticking are very specific, like origami, resulting in a particular structure rather than a tangled mess.

Threes: Grate needed three things to mimic proteins: raw components with a backbone that can support a large variety of side chains; the ability to put the side chains in a particular order; and the stickiness, which chemists refer to as non-covalent bonds.

He had been working with an industrial chemical called cyanuric chloride for unrelated purposes, but his understanding of its chemical properties made him think it might be a good starting point. Cyanuric chloride is a molecule that has three convenient places it could be extended. Two of them can link together, like two people holding hands, to form the backbone. The third can house a side chain. All together, Grate called the resulting molecule a triazine-based polymer, or TZP for short.

Although such a polymer would be immune to protein-destroying entities, Grate expects that other things in the environment, such as bacteria, will break it down, based on TZP's chemical nature. So the material would last, but not forever.

Idea in hand, Grate and PNNL chemist Kai-For Mo had to develop a way to synthesize TZP. They made a variety of monomers by adding different side chains to cyanuric chloride, with each monomer a single building block analogous to an amino acid. For this study, they created five different side chains. Then they found they could add one monomer at a time relatively simply by changing the temperature at which they performed the chemical reactions, among other synthesis tricks.

After synthesizing polymers six monomers long, called a 6-mer in polymer parlance, the researchers verified their creations. They used analytical instruments to show that the polymers were the right size, had the right side chains, and the side chains were in the right order. They also synthesized a 12-mer to show this method works with longer polymers.

To see whether TZPs would fold in a manner analogous to proteins, PNNL computer scientist Michael Daily simulated small TZPs, singly and interacting with each other. A 6-mer folded neatly in half, forming a straight rod three monomers long and two monomers wide, like a hairpin. Similarly, two 3-mers lined up along each other, partially intertwined like a zipper.

The stickiness holding these "nanorods" together came from non-covalent bonds between backbone atoms, the same types of bonds nature uses so that proteins take their proper shapes. And as in protein structures, the TZP side chains were arranged in specific positions around the exterior of the rods formed by the backbone-backbone interactions.

The next step is to create a larger library of side chains, which currently numbers ten. Then the researchers must make longer polymers and show that they really do take useful shapes. Once researchers understand the rules for how to get specific shapes with TZPs that also assemble into larger structures, they can design materials with desired functions — for example, a membrane for a battery, a catalyst for a fuel cell, or even a therapeutic drug.

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary  - 
PNNL scientists knew the titanium alloy made from a low-cost process they had previously pioneered had very good mechanical properties, but they wanted to make it even stronger. Using powerful electron microscopes and a unique atom probe imaging approach, they were able to peer deep inside the alloy's nanostructure … and gain the understanding to create the strongest titanium alloy ever made. Learn more about this research, published today in Nature Communications, at; watch the PNNL video at

* * *

An improved titanium alloy — stronger than any commercial titanium alloy currently on the market — gets its strength from the novel way atoms are arranged to form a special nanostructure. For the first time, researchers have been able to see this alignment and then manipulate it to make the strongest titanium alloy ever developed, and with a lower cost process to boot.

They note in the Nature Communications paper that the material is an excellent candidate for producing lighter vehicle parts, and that this newfound understanding may lead to creation of other high strength alloys.

Mixing it up: At 45 percent the weight of low carbon steel, titanium is a lightweight but not super strong element. It is typically blended with other metals to make it stronger. Fifty years ago, metallurgists tried blending it with inexpensive iron, along with vanadium and aluminum. The resulting alloy, called Ti185, was very strong — but only in places. The mixture tended to clump, just like any recipe can. Iron clustered in certain areas, creating defects known as beta flecks in the material and making it difficult to produce this alloy reliably at commercial scale.

About six years ago, PNNL and its collaborators found a way around that problem and also developed a low-cost process to produce the material at an industrial scale, which had not been done before. Instead of starting with molten titanium, the team substituted titanium hydride powder. By using this feedstock, they reduced the processing time by half and they drastically reduced the energy requirements — resulting in a low-cost process in use now by a company called Advance Materials Inc. ADMA co-developed the process with PNNL metallurgist Curt Lavender and sells the titanium hydride powder and other advanced materials to the aerospace industry and others.

Modern Blacksmiths: Much like a medieval blacksmith, researchers knew that they could make this alloy even stronger by heat-treating it. Heating the alloy in a furnace at different temperatures and then plunging it into cold water essentially rearranges the elements at the atomic level in different ways thereby making the resulting material stronger.

Blacksmithing has now moved from an art form to a more scientific realm. Although the underlying principles are the same, metallurgists are now better able to alter the properties based on the needs of the application. The PNNL team knew if they could see the microstructure at the nano-scale they could optimize the heat-treating process to tailor the nanostructure and achieve very high strength.

"We found that if you heat treat it first with a higher temperature before a low temperature heat treatment step, you could create a titanium alloy 10-15 percent stronger than any commercial titanium alloy currently on the market and that it has roughly double the strength of steel," said Arun Devaraj, a materials scientist at PNNL. "This alloy is still more expensive than steel but with its strength-to-cost ratio, it becomes much more affordable with greater potential for lightweight automotive applications," added Vineet Joshi, a metallurgist at PNNL.

Devaraj and the team used electron microscopy to zoom in to the alloy at the hundreds of nanometers scale — about 1,000th the width of an average human hair. Then they zoomed in even further to see how the individual atoms are arranged in 3-D using an atom probe tomography system at EMSL, the Environmental Molecular Sciences Laboratory, a DOE Office of Science User Facility located at PNNL.

The atom probe dislodges just one atom at a time and sends it to a detector. Lighter atoms "fly" to the detector faster, while heavier ones arrive later. Each atom's identity is determined from the time it takes to reach the detector, and its position from where it lands. Thus scientists can construct an atomic map showing where each individual atom sits within the sample.
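The identification step is straightforward time-of-flight physics: an ion of charge state n accelerated through voltage V obeys neV = (1/2)mv^2, and with v = L/t the mass-to-charge-state ratio is m/n = 2eV(t/L)^2. A minimal sketch (the 5 kV voltage and 10 cm flight length are illustrative values, not instrument specifications):

```python
# Time-of-flight mass identification: m/n = 2 * e * V * (t / L)^2.
E_CHARGE = 1.602e-19  # elementary charge, C
AMU = 1.661e-27       # atomic mass unit, kg

def mass_per_charge_amu(voltage_v, flight_len_m, time_s):
    """Mass-to-charge-state ratio (amu) recovered from an ion's flight time."""
    return 2 * E_CHARGE * voltage_v * (time_s / flight_len_m) ** 2 / AMU

# Round trip: compute the flight time a 48 amu, singly charged ion (e.g. Ti+)
# would need at 5 kV over 0.1 m, then recover its mass from that time.
t = 0.1 * ((48 * AMU) / (2 * E_CHARGE * 5000)) ** 0.5
print(round(mass_per_charge_amu(5000, 0.1, t), 1))  # → 48.0
```

Heavier ions yield longer flight times, which is exactly the "lighter atoms fly faster" behavior described above.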

By using such extensive microscopy methods, researchers discovered that the optimized heat-treating process created micron-sized and nano-sized precipitate regions — known as the alpha phase, in a matrix called the beta phase — each with high concentrations of certain elements.

"The aluminum and titanium atoms liked to be inside the nano-sized alpha phase precipitates, whereas vanadium and iron preferred to move to the beta matrix phase," said Devaraj. The atoms are arranged differently in these two areas. Treating the alloy first at a higher temperature of 1,450 degrees Fahrenheit achieved a unique hierarchical nanostructure.

When strength was measured by applying tension and stretching the material until it failed, the treated alloy achieved a 10-15 percent increase in strength, which is significant, especially considering the low cost of the production process.

If you take the force you are pulling with and divide it by the cross-sectional area of the material, you get a measure of tensile strength in megapascals. Steel used to produce vehicles has a tensile strength of 800-900 megapascals, whereas the 10-15 percent increase achieved at PNNL puts Ti185 at nearly 1,700 megapascals, or roughly double the strength of automotive steel at nearly half the weight.
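A quick strength-to-weight check, using the article's strength figures and approximate handbook densities (the densities are assumptions, not stated in the article):

```python
# Specific strength = tensile strength / density; densities are approximate.
steel = {"tensile_mpa": 900, "density_g_cm3": 7.9}    # automotive steel
ti185 = {"tensile_mpa": 1700, "density_g_cm3": 4.5}   # treated Ti185 alloy

def specific_strength(material):
    """Tensile strength per unit density (MPa per g/cm^3)."""
    return material["tensile_mpa"] / material["density_g_cm3"]

ratio = specific_strength(ti185) / specific_strength(steel)
print(f"Ti185 offers roughly {ratio:.1f}x the strength-to-weight of steel")
```

With these numbers the advantage comes out at a bit over threefold, consistent with double the strength at roughly half the weight.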

The team collaborated with Ankit Srivastava, an assistant professor in Texas A&M's materials science and engineering department, to develop a simple mathematical model explaining how the hierarchical nanostructure can produce such exceptionally high strength. Comparing the model with the microscopy and processing results led to the strongest titanium alloy ever made.

"This pushes the boundary of what we can do with titanium alloys," said Devaraj. "Now that we understand what's happening and why this alloy has such high strength, researchers believe they may be able to modify other alloys by intentionally creating microstructures that look like the ones in Ti185."

For instance, aluminum is a less expensive metal, and if the nanostructure of aluminum alloys can be seen and hierarchically arranged in a similar manner, that would also help the auto industry build lighter vehicles that use less fuel and emit less of the carbon dioxide that contributes to climate warming.

DOE's Vehicle Technologies Office — Propulsion Materials Program supported this research using capabilities developed under PNNL's internally funded Chemical Imaging Initiative. 
Hi Rohit, the approach is intended to be applicable to manufacturing ... so as with other metal fabrication, any shape required by industry.

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary  - 
Inexpensive materials called MOFs (metal-organic frameworks) pull gases from air or other mixed gas streams, but fail to do so with oxygen. Now, a research team from PNNL, the Environmental Molecular Sciences Laboratory at PNNL, University of Amsterdam and Argonne National Laboratory's Advanced Photon Source has overcome this limitation. They've created a composite of a MOF and a helper molecule in which the two work in concert to separate oxygen from other gases simply and cheaply. This scientific advancement can impact a variety of applications, including fuel cells, food packaging, oxygen sensors and other industrial processes. Read more about this research at

* * *

The results of this research, reported in today's Advanced Materials, might help with a wide variety of applications, including making pure oxygen for fuel cells, using that oxygen in a fuel cell, removing oxygen in food packaging, making oxygen sensors, or for other industrial processes. The technique might also be used with gases other than oxygen as well by switching out the helper molecule.

Currently, industry uses a common process called cryogenic distillation to separate oxygen from other gases. It is costly and uses a lot of energy to chill gases. Also, it can't be used for specialty applications like sensors or getting the last bit of oxygen out of food packaging.

A great oxygen separator would be easy to prepare and use, be inexpensive and be reusable. MOFs, or metal-organic frameworks, are materials containing lots of pores that can suck up gases like sponges suck up water. They have shown potential in nuclear fuel separation and in lightweight dehumidifiers.

But of the thousands of MOFs out there, less than a handful absorb molecular oxygen. And those MOFs chemically react with oxygen, forming oxides — think rust — that render the material unusable.

"When we first worked with MOFs for oxygen separation, we could only use the MOFs a few times. We thought maybe there's a better way to do it," said PNNL materials scientist Praveen Thallapally.

The new tack for Thallapally and colleagues at PNNL involved using a second molecule to mediate the oxygen separation — a helper molecule would be attracted to but chemically uninterested in the MOF. Instead, the helper would react with oxygen to separate it from other gases.

They chose a MOF called MIL-101 that is known for its high surface area — making it a powerful sponge — and its lack of reactivity. One teaspoon of MIL-101 has the same surface area as a football field. The high surface area comes from a MOF's pores, which is where reactive MOFs work their magic.

MOFs that react with oxygen need to be handled carefully in the laboratory, but MIL-101 is stable at ambient temperatures and in the open atmosphere of a lab. For their helper molecule, they tried ferrocene, an inexpensive iron-containing molecule.

The scientists made a composite of MIL-101 and ferrocene by mixing them and heating them up. Initial tests showed that MIL-101 took up more than its weight in ferrocene and at the same time lost surface area. This indicated that ferrocene was taking up space within the MOF's pores, where they need to be to snag the oxygen.

Then the team sent gases through the black composite material. The material bound up a large percentage of oxygen, but almost none of the added nitrogen, argon or carbon dioxide. The material behaved this way whether the gases went through individually or as a mix, showing that the composite could in fact separate oxygen from the others.
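Results like this are often summarized as an ideal selectivity, the ratio of how much of each gas the material takes up under the same conditions. A minimal sketch with hypothetical uptake values (the study's actual numbers are not reproduced here):

```python
# Toy illustration with hypothetical numbers: ideal selectivity of a sorbent
# computed from single-component uptakes, a common figure of merit for
# gas separations like the one described above.

def ideal_selectivity(uptake_a_mmol_g, uptake_b_mmol_g):
    """Ratio of component A uptake to component B uptake, same conditions."""
    return uptake_a_mmol_g / uptake_b_mmol_g

# Hypothetical uptakes (mmol per gram of composite), for illustration only.
o2_uptake = 1.8    # oxygen, strongly bound
n2_uptake = 0.05   # nitrogen, barely taken up

print(ideal_selectivity(o2_uptake, n2_uptake))  # O2/N2 selectivity of about 36
```

A large ratio like this is what "bound up a large percentage of oxygen, but almost none of the nitrogen" translates to quantitatively.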

Additional analysis showed that heating caused ferrocene to decompose in the pores to nanometer-sized clusters, which made iron available to react with oxygen. This reaction formed a stable mineral known as maghemite, all within the MOF pores. The maghemite could then be removed so the MOF could be used again.

Together, the results on the composite showed that a MOF might be able to do unexpected things — like purify oxygen — with a little help. Future research will explore other combinations of MOF and helper molecules. 

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary  - 
Harvested from what goats, horses and sheep leave behind in the meadow, anaerobic gut fungi help herbivores digest stubborn plant material. Now, a research team reports in the journal Science that these same fungi could lead to less-expensive biofuel and bio-based products. Learn more about this research at

* * *

Nature's figured it out already -- how to best break down food into fuel. Now scientists have caught up, showing that fungi found in the guts of goats, horses and sheep could help fill up your gas tank too.

The researchers report in the journal Science on Feb. 18 that these anaerobic gut fungi perform as well as the best fungi engineered by industry in their ability to convert plant material into sugars that are easily transformed into fuel and other products.

"Nature has engineered these fungi to have what seems to be the world's largest repertoire of enzymes that break down biomass," said Michelle O'Malley, lead author and professor of chemical engineering at the University of California, Santa Barbara.

These enzymes — tools made of protein — work together to break down stubborn plant material. The researchers found that the fungi adapt their enzymes to wood, grass, agricultural waste, or whatever scientists feed them. The findings suggest that industry could modify the gut fungi so that they produce improved enzymes that will outperform the best available ones, potentially leading to cheaper biofuels and bio-based products.

To make the finding, O'Malley drew upon two U.S. Department of Energy Office of Science User Facilities: the Environmental Molecular Sciences Laboratory (EMSL) at PNNL and the DOE Joint Genome Institute. O'Malley's study is the first to result from a partnership between the two facilities called Facilities Integrating Collaborations for User Science, or FICUS. The partnership allows scientists around the world to draw on capabilities at both Office of Science user facilities to get a more complete understanding of fundamental scientific questions. O'Malley's team also included scientists from PNNL, DOE JGI, the Broad Institute of MIT and Harvard, and Harper Adams University.

"By tapping the RNA sequencing and protein characterization capabilities at the respective facilities, we have advanced biofuel research in ways not otherwise possible," said Susannah Tringe, DOE JGI deputy for User Programs. "This collaborative program was established to encourage and enable researchers to more easily integrate the expertise and capabilities of multiple user facilities into their research. FICUS offers a one-stop shopping approach for access to technology infrastructure that is rapidly becoming a model for collaboration."

The latest omics technologies and transcontinental teams aside, these findings would not have been possible without the most humble of substances.

The Poop Scoop: Companies want to turn biomass like wood, algae and grasses into fuel or chemicals. The problem: the matrix of complex molecules found in plant cell walls (lignin, cellulose and hemicellulose) combines to create the biological equivalent of reinforced concrete. When industry can't break down this biomass, it pretreats it with heat or chemicals, or throws it away. Both options add to the cost of the finished product.

Many farm animals have no trouble breaking down these same molecules, which inspired the research team to investigate. Their search started at the Santa Barbara Zoo and a stable in Massachusetts, where they collected manure from goats, horses and sheep. The fresher the sample, the better, for this barnyard bounty held live specimens of biomass-eating fungi.

As some of the world's first nucleus-containing single-celled organisms, anaerobic gut fungi have been around since before the dinosaurs. Scientists have long known they play a significant role in helping herbivores digest plants. One reason has to do with the swarming behavior of some fungi. When the fungi reproduce, they release dozens of spores with tail-like appendages called flagella. These baby fungi swim around like tadpoles and find new food in the gut. They then trade tails for root-like structures called hyphae, which dig into plant material. Then foliage becomes food.

O'Malley and her colleagues knew the fungi's hyphae excrete proteins, called enzymes, that break down plant material. Like tools in a toolbox, the more diverse the enzymes, the better the fungi can take apart plants and turn them into food. If industry can harness fungi with such a toolbox, it can more effectively break down raw biomass.

"Despite their fascinating biology, anaerobic gut fungi can be difficult to isolate and study," said Scott Baker, EMSL's science theme lead for Biosystem Dynamics and Design. "By utilizing the cutting-edge scientific capabilities at EMSL and JGI, O'Malley showed how the huge catalog of anaerobic gut fungi enzymes could advance biofuel production."

To find that prize, the research team needed a map. Well, two maps. So armed with a scoop of poop, they took a deeper look into the gut fungi.

Fungi that are the cream of the crop: In the hands of scientists, a list of enzymes produced by gut fungi is the first step to unlocking their biofuel-producing potential. Like monks in a monastery copying religious texts, messenger RNA molecules transcribe the genetic information needed to make proteins, including enzymes. So the DOE JGI sequenced the mRNA of several gut fungi to come up with their transcriptome, which represents all the possible proteins they could make.

O'Malley compared this effort to re-assembling a map from its pieces, only without seeing the complete picture. Since not all proteins are enzymes, the researchers needed to cross check their map with another one. Enter the EMSL, where researchers created that second map that identified enzymes the fungi actually produced. This so-called proteome acted like landmarks that matched up to JGI's map, highlighting the biomass-degrading enzymes in the transcriptome.
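The two-map cross-check described above amounts to intersecting the transcriptome (every protein the fungus could make) with the proteome (proteins actually detected), then filtering for biomass-degrading annotations. The gene IDs and annotations below are invented for illustration; real pipelines work with thousands of sequences.

```python
# Sketch of the transcriptome/proteome cross-check, with made-up data.
transcriptome = {
    "gene001": "glycoside hydrolase",   # biomass-degrading
    "gene002": "ribosomal protein",
    "gene003": "cellulase",             # biomass-degrading
    "gene004": "kinase",
}
proteome_detected = {"gene001", "gene003", "gene004"}  # proteins actually seen

biomass_keywords = ("hydrolase", "cellulase", "xylanase")

# Keep genes that were both detected as proteins and annotated as
# biomass-degrading enzymes.
confirmed_enzymes = {
    gene: annot
    for gene, annot in transcriptome.items()
    if gene in proteome_detected
    and any(keyword in annot for keyword in biomass_keywords)
}
print(confirmed_enzymes)  # {'gene001': 'glycoside hydrolase', 'gene003': 'cellulase'}
```

The intersection is what lets the proteome act as "landmarks" on the transcriptome map: only candidates supported by both datasets survive the filter.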

Together, the maps from JGI and EMSL pointed to the treasure trove of enzymes gut fungi can produce. Compared to the industrial varieties, which top out around 100 enzymes, gut fungi can produce hundreds more. Of note, the fungi produce enzymes better at breaking down xylan, a hemicellulose found in wood. And when the scientists changed the fungi's diet from canary grass to sugar, the fungi responded by changing the enzymes they produced. In other words, the fungi can update their enzyme arsenal on the fly.

"Because gut fungi have more tools to convert biomass to fuel, they could work faster and on a larger variety of plant material. That would open up many opportunities for the biofuel industry," said O'Malley, whose study was funded by the U.S. Department of Energy Office of Science, the U.S. Department of Agriculture and the Institute for Collaborative Biotechnologies. Additionally, O'Malley was the recipient of a DOE Office of Science Early Career Award within the Biological and Environmental Research Program.

O'Malley will present her findings at the DOE JGI's 11th Annual Genomics of Energy & Environment Meeting in Walnut Creek, California, on March 24. Registration is still open for the meeting.

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary  - 
In the United States, 90% of electrical power comes from power plants that use heat-conversion (thermoelectric) systems, fueled by coal, gas, oil and nuclear generators. These plants require fresh water to generate steam and for cooling. PNNL scientists have developed a new modeling tool to understand the impact of human activities on temperatures of complex river and stream systems. Learn more at

* * *

As a bellwether for water quality, stream temperature is regulated to protect aquatic ecosystems. PNNL scientists developed a new modeling tool to understand stream temperature, with their sights on how it may be influenced by climate change and human activities.

Applying the new module in a river transport model and coupling it with a generic water management model within an Earth system model framework, they were able to closely mimic the observed stream temperature variations from over 320 river gauges across the contiguous United States.

Their additional analysis focused on reservoir operations, which they found could cool stream temperature in the summer low-flow season, from August to October, by as much as 1 to 2°C by shifting the timing of streamflow to boost summer flows.

"Our new capability lays a solid foundation for future studies on the water-energy-land nexus," said Dr. Hong-Yi Li, a PNNL hydrologist who led the study. "It opens exciting opportunities to evaluate our options for managing resources in an evolving environment."

PNNL researchers developed a new large-scale stream temperature model within the Community Earth System Model (CESM) framework. The model was coupled with the MOdel for Scale Adaptive River Transport (MOSART), which represents river routing, and a water management model. The coupled models allowed reservoir operations and withdrawal impacts on stream temperature to be explicitly represented in a physically based and consistent way. The models were applied to the contiguous United States, driven by observed meteorological forcing.

The team evaluated model-simulated streamflow and stream temperature against observations at over 320 U.S. Geological Survey (USGS) river gauges in the United States. They showed that including water management in the models improves the agreement between the simulated and observed streamflow and stream temperature at a large number of stream gauge stations. The space and time variation of stream temperature was systematically analyzed at regional to national scales.
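Agreement between simulated and observed series at gauge stations is commonly scored with a metric such as the Nash–Sutcliffe efficiency (NSE), where 1.0 is a perfect match and 0.0 means the model does no better than predicting the observed mean. The numbers below are invented for illustration; the study's actual scores are not reproduced here.

```python
# Minimal sketch of a standard hydrology skill score, with made-up data.
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - (sum of squared errors) / (variance of the observations)."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    variance = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / variance

obs = [12.0, 14.5, 13.0, 11.5, 10.0]   # e.g. stream temperature in °C (invented)
sim = [11.8, 14.0, 13.4, 11.9, 10.3]   # a hypothetical model's output

print(round(nash_sutcliffe(obs, sim), 3))  # close to 1, i.e. a good fit
```

In a study like this one, comparing such a score with and without the water management component quantifies how much that component improves the simulation.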

Through sensitivity experiments, this study revealed the relative influence of climate and water management on both streamflow and stream temperature. Further, it uncovered the notable impacts of reservoir operations on stream temperature during August-September-October when changes in stream temperature have critical effects on water-cooling thermoelectric power production and aquatic ecosystems.

Why is this important? In the United States, 90 percent of electrical power is generated using heat-conversion (thermoelectric) systems fueled by coal, gas, oil, and nuclear generators. These power plants withdraw roughly as much fresh water as farms do, using it to generate steam and for cooling.

Because of the sensitivity of stream life, regulations are in place—especially for the protection of fish—on the temperature of water discharged from power plants. In turn, this means that stream temperature is an important limitation on energy production. Such a constraint is particularly critical during low water flows and drought conditions, which are projected to be more widespread and prolonged in a warmer climate.

Researchers increasingly use Earth system models to understand climate change impacts. Adding a stream temperature module can provide important information to evaluate potential changes in river temperature. This information is extremely valuable because stream temperature can impact thermoelectric power production that uses wet-cooling systems. The water's eventual release back into the stream system can put aquatic and coastal ecosystems in jeopardy.

What's Next? PNNL researchers are now examining the relative contributions of climate change and human activities on stream temperature variations and subsequently on wet-cooling thermoelectric power production in the coming decades. They are also extending MOSART to simulate how sediment, carbon, and other nutrients move from the landscape through the rivers and into the ocean.

Arran Frood

• General/Interdisciplinary  - 
If you are interested in greener alternatives to fossil fuels, then do check out Plants v petrol, the new video from UK bioscience funders BBSRC.

The 01:30 animation outlines how tinkering with the mechanism for photosynthesis to make it more efficient (evolution isn't perfect!) could lead to better liquid fuels for cars and other vehicles.

More details here:

And there are more in the Plants Vs series here:

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary  - 
You can learn a lot from the green goop that thrives in ponds – including how to create a form of clean, alternative energy. PNNL researchers collected near-complete genomic information for the microbes that make up two diverse communities built around blue-green algae. These are the most complex communities to have their genomes detailed to date. The researchers also studied how these diverse communities interact and coexist. This new understanding has important implications for how such communities could be used in the future as an energy source. Learn more at

* * *

Tiny microbes are hiding big secrets. Scientists often use a collection or community of microbes to study molecular functions, but the more complex the community, the more difficult it is to tease out functions and interactions. Now, scientists at Pacific Northwest National Laboratory have peered into two microbial communities of blue-green algae to collect near-complete genomic information for all 20 members in each community.

These communities, called unicyanobacterial consortia, or UCCs, are the most complex communities to have their genomes described in detail to date. As recently reported in Applied and Environmental Microbiology, the researchers made two surprising discoveries that will help scientists better understand the communities' functions and interactions.

For one thing, the community composition was more varied than typical analyses would have estimated. PNNL's more in-depth analysis showed that usual approaches can underestimate the true diversity and function present within a community. For another, closely related organisms had tiny differences in their genomes. Normally, such microdiversity would put these organisms in competition with each other, and one would drive the other to extinction. Yet these organisms were coexisting in their communities, suggesting they have different functions.

To coax the microbial communities into giving up their secrets, PNNL scientists borrowed a technique called "genome reconstruction" from the environmental field. The relatively new approach allows scientists to segregate species into bins to reconstruct their genomes. Because the same organisms were present in each community at different abundances, the scientists could more easily parse out genomic information.
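The key observation above — the same organisms present at different abundances in each community — can be sketched as a toy grouping exercise: contigs from the same organism should show correlated abundance profiles across the two communities. The contig names, abundances, and greedy grouping below are all invented for illustration; real genome-reconstruction pipelines use far more sophisticated clustering.

```python
# Toy sketch of differential-abundance binning, with made-up data.
contig_abundance = {
    # contig: (abundance in community 1, abundance in community 2)
    "c1": (50, 5),
    "c2": (48, 6),    # profile similar to c1 -> likely the same organism
    "c3": (3, 40),
    "c4": (4, 42),    # profile similar to c3 -> likely the same organism
}

def assign_bins(abundances, tolerance=5):
    """Greedy grouping: a contig joins a bin when its abundance profile is
    within `tolerance` of the bin's first member in every community."""
    bins = []  # list of (representative profile, [member contigs])
    for contig, profile in abundances.items():
        for rep, members in bins:
            if all(abs(p - r) <= tolerance for p, r in zip(profile, rep)):
                members.append(contig)
                break
        else:
            bins.append((profile, [contig]))
    return [members for _, members in bins]

print(assign_bins(contig_abundance))  # [['c1', 'c2'], ['c3', 'c4']]
```

Because each organism has a distinct abundance profile across the two UCCs, the profiles separate the contigs into per-organism bins from which genomes can be reconstructed.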

The team then investigated the function of individual organisms and predicted interactions. To confirm the process's accuracy and specificity, researchers compared the sequenced genomes of isolated organisms from the UCCs. The results appeared fairly accurate, lending support for other scientists to use this method.

"The UCCs are excellent systems from which to learn about how microbial communities behave," said Dr. William Nelson, the PNNL microbiologist who led the study. "They are much simpler than natural communities, making it easier to interpret what's going on. We have them growing in the lab, so we can perform experiments on them. And, now we have the genome sequences of all the organisms, which gives us a better ability to both make predictions and interpret results."

Why is this important? Microdiversity refers to the differences among organisms that have highly similar physical or genetic characteristics. Current thinking assumed that microdiversity in the communities PNNL studied would be minor. However, PNNL's research shows microdiversity is a fundamental property of microbial communities. These differences can have a profound impact on understanding how UCCs function, particularly as these communities are being considered for use as a form of clean energy.

"The idea behind this project was to characterize the interactions between organisms: to get a complete understanding of how they work together, how they form a community greater than the sum of its parts," said Nelson. "Our work shows that these communities are more diverse than anyone expected."

What's Next? PNNL scientists are looking into how the small differences in genome sequences translate to function. The UCCs will form the basis for upcoming experiments exploring interactions between community members, including how microdiversity is maintained and how it affects community function. In addition, the results of the current study can serve as a foundational dataset and resource for future investigations to understand interactions among microbial communities, particularly those important to human health and the environment.

This work was sponsored by the U.S. Department of Energy's (DOE) Office of Science, Biological and Environmental Research via the Genomic Sciences Program.

Gary Ray R

• General/Interdisciplinary  - 
The Man Who Discovered The Doppler Effect

I wanted to share this excellent post by +annarita ruberto, who writes today about Christian Andreas Doppler, the man who discovered the Doppler effect.

The Doppler effect (or Doppler shift) is the change in frequency of a wave (or other periodic event) for an observer moving relative to its source. It is named after the Austrian physicist Christian Doppler, who proposed it in 1842 in Prague. It is commonly heard when a vehicle sounding a siren or horn approaches, passes, and recedes from an observer. Compared to the emitted frequency, the received frequency is higher during the approach, identical at the instant of passing by, and lower during the recession. (Wikipedia)
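For sound, that approach/pass/recede behavior follows from the classic Doppler formula, f_observed = f_source · (v + v_observer) / (v − v_source), with velocities counted as positive toward the other party. The siren values below are illustrative.

```python
# Classic (non-relativistic) Doppler shift for sound in air.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def observed_frequency(f_source, v_source=0.0, v_observer=0.0):
    """Frequency heard by the observer. v_source > 0 means the source moves
    toward the observer; v_observer > 0 means the observer moves toward
    the source."""
    return f_source * (SPEED_OF_SOUND + v_observer) / (SPEED_OF_SOUND - v_source)

siren = 440.0  # Hz, an illustrative siren pitch
print(observed_frequency(siren, v_source=30.0))   # higher while approaching
print(observed_frequency(siren))                  # unchanged at the instant of passing
print(observed_frequency(siren, v_source=-30.0))  # lower while receding
```

A siren approaching at 30 m/s (about 67 mph) is shifted upward by roughly 10 percent, which is why the pitch drop as a vehicle passes is so audible.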

Christian Andreas Doppler is renowned primarily for his revolutionary theory of the Doppler effect, which has deeply influenced many areas of modern science and technology, including medicine. His work has laid the foundations for modern ultrasonography and his ideas are still inspiring discoveries more than a hundred years after his death.

Christian Andreas Doppler: A legendary man inspired by the dazzling light of the stars
Today in Mathematics History: Christian Andreas Doppler

Born: 29 November 1803 in Salzburg, Austria
Died: 17 March 1853 in Venice, Italy

Christian Andreas Doppler was an Austrian mathematician and physicist. He is celebrated for his principle — known as the Doppler effect — that the observed frequency of a wave depends on the relative speed of the source and the observer. He used this concept to explain the color of binary stars.

Doppler was raised in Salzburg, Austria, the son of a stonemason. He could not work in his father's business because of his generally weak physical condition. After completing high school, Doppler studied philosophy in Salzburg and mathematics and physics at the k. k. Polytechnisches Institut (now Vienna University of Technology) where he began work as an assistant in 1829. In 1835 he began work at the Prague Polytechnic (now Czech Technical University), where he received an appointment in 1841.

He published widely, but was known as a harsh instructor who was not popular among his students. 

In 1842, Doppler gave a presentation called "Über das farbige Licht der Doppelsterne" ("On the colored light of the double stars and certain other stars of the heavens") at the Royal Bohemian Society of Sciences. The paper theorized that since the pitch of sound from a moving source varies for a stationary observer, the color of the light from a star should alter according to the star's velocity relative to Earth. This principle came to be known as the "Doppler effect." The Doppler effect has been used to support the Big Bang Theory and is often referenced in weather forecasting, radar and navigation.

Doppler left Prague in 1847 and accepted a professorship in mathematics, physics and mechanics at the Academy of Mines and Forests in the Slovakian town of Banska Stiavnica. When revolution broke out in the region in 1848, Doppler was forced to return to Vienna.

In 1850, Doppler was appointed head of the Institute for Experimental Physics at the University of Vienna. One of his students there was Gregor Mendel, known for his tremendous contributions to the field of genetics, who did not impress Doppler at the time. Another member of faculty, Franz Unger, served as a mentor to Mendel.

He was often ill and died while convalescing in Venice, Italy, on March 17, 1853.

References and further reading

Read the biography on the MacTutor website>>

► Image source>>

#Christian_Andreas_Doppler #DopplerEffect #history_of_mathematics #history_of_science