Stream


Intellects

• General/Interdisciplinary
 
What Smoking Does to an Unborn Baby

#ultrasound #smoking #nervoussystem

Originally Shared by +SciConcilium 
 

Smoking kills slowly but surely, and it can potentially change the whole life of an unborn child. 4D ultrasound scans show delayed development of the central nervous system in fetuses carried by smoking mothers. How that affects the child's life in the long term is still an open question.
"After studying their scans at 24, 28, 32 and 36 weeks,  foetuses whose mothers smoked continued to show significantly higher rates of mouth movement and self-touching than those carried by non-smokers."

Original Study: Acta Paediatrica http://onlinelibrary.wiley.com/doi/10.1111/apa.13001/abstract
Source: DailyMail http://ow.ly/KJp1g 

#ultrasound4d #fetusgrowth #smoking 
 
I am amazed people still smoke

SciConcilium

• General/Interdisciplinary
 
US cancer drug lobby and solutions revealed

The high prices of cancer drugs are affecting the way patients are treated. Doctors from the Mayo Clinic Cancer Center report that American patients pay 50 to 100 percent more for the same patented drug than patients in other countries. How these high prices are calculated is not clear. Beyond the obvious costs of cancer research, many people do not realize that these drugs do not operate in a free market: different cancer drugs must be used in combination, so there is no real competition to set prices. Among their recommendations, the doctors propose new methods of drug price negotiation, allowing the FDA to recommend drug prices, permitting the importation of drugs for personal use, and allowing Medicare to negotiate drug prices.

Source: EurekAlert - http://ow.ly/KqBY1
Original Study: Mayo Clinic Proceedings - Oncologists Reveal Reasons for High Cost of Cancer Drugs in the US, Recommend Solutions - http://ow.ly/KxDaE

#oncology #cancer #drugs #price #lobby #cartel #FDA #Medicare
 
#naturescience #scientificresearchpublishing  #Financial
An Econophysics Model of Financial Bubbles
Usually financial crises go along with bubbles in asset prices, such as the US housing bubble of 2007. This paper attempts to build a mathematical model of financial bubbles from an econophysics perspective, and thus a new one. First, I find that agents identify bubbles only with a time delay, and I demonstrate that the detection of bubbles differs between the individual and the collective point of view. Second, I use these findings to propose a new definition of asset bubbles in finance. Finally, I extend the model to the study of asset price dynamics with news. In conclusion, the model provides unique insights into the properties and development of financial bubbles.
If you want to read more, please read here:
http://www.scirp.org/journal/PaperInformation.aspx?PaperID=53636&utm_campaign=google&utm_medium=wt
Published by Scientific Research Publishing (SCIRP)
Facebook link: https://www.facebook.com/pages/Scientific-Research-Publishing/495692727119674
Linkedin link: https://www.linkedin.com/company/scientific-research-publishing-inc.-usa
2 comments
 
+Gary Ray R Thanks for supporting me and giving your valuable suggestions.

SciConcilium

• General/Interdisciplinary
 
DNA Quantum Jitters

Human mistakes are what make the human race creative and ever-adapting, and the fiber of mistakes is sewn into human DNA. DNA is bound to make mistakes, but at exactly the rate that keeps us evolving yet surviving. A study published in Nature shows that the frequency of quantum jitters matches the expected rate of mutation in DNA. This is the first study to link changes at the atomic level to genetic mutations. These mutations drive the process of evolution, but at a higher rate they lead to cancer. The fine tuning of quantum jitters is essential for evolution and survival.

"This tiny movement, or “quantum jitter,” takes such an enormous amount of energy that bases are successful at accomplishing the feat only once out of every 10,000 or so attempts.
Even then, they can only hold their new shape for a very short period of time—50 to 200 microsecond—before the hydrogens pop back into their original position.
The researchers looked back at previous biological studies and found that these rare alternative states appeared in the DNA about as often as the polymerase machinery’s copying errors.
This is a remarkable study that illuminates a fundamental mechanism responsible for the random mutations that drive evolution and contribute to cancer,”

Original Study: Nature http://www.nature.com/nature/journal/vaop/ncurrent/full/nature14227.html
Source: Futurity http://www.futurity.org/quantum-jitters-dna-873672/

#DNA #Quantumjitters #mutations #cancer #evolution 

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary
 
PNNL researchers used a computational model to estimate state-by-state increases in the water requirements of U.S. electric power producers through 2095. The study shows that two mitigation strategies – carbon capture and storage and nuclear power – will have a less favorable impact on water consumption than strategies that rely on renewable energy and water-saving technologies. The climate scenarios also project a decline in electric-sector water withdrawal through the century. Read more at https://www.pnnl.gov/science/highlights/highlight.asp?id=3936.
 
* * *
 
"This detailed accounting of technologies and geographical information through the end of the century will help inform scientific and policy questions at the heart of the U.S. water-energy nexus," said co-author Dr. Mohamad Hejazi, climate researcher working at the Joint Global Change Research Institute (JGCRI), a partnership between PNNL and the University of Maryland.
 
In this study, PNNL researchers used the Global Change Assessment Model (GCAM), a technologically detailed model of the economy, energy, agriculture and land use, water, and climate systems. The researchers extended the model to simulate electricity and water systems at the state level (GCAM-USA).
 
Under a set of seven climate scenarios, they used the model to estimate future state-level electricity generation and consumption, and their associated water withdrawals and consumption. These seven scenarios had extensive detail on the generation fuel portfolio and the cooling technology mix, with the associated water-use intensities of both.
 
The scenarios allowed the researchers to investigate the implications for the future water demands of the U.S. electric sector of factors that could play out in the future: socioeconomic development and growing electricity demands, cooling-system transitions, adoption of water-saving technologies, climate mitigation policy, and electricity trading options. All scenarios project a decline in electric-sector water withdrawal through the century.
 
The climate scenarios revealed several water impacts. For instance, in areas such as the Southwest, where water can be a scarce resource, the research looked at the trade-off between water withdrawal and water consumption. Drought in 2008 caused several power plants to shut down for days due to lack of cooling water. In such cases, increased use of closed-loop cooling systems will mean less water withdrawal but relatively high water consumption. And in coastal regions in California and elsewhere, regulations that require phasing out once-through cooling systems may improve conditions for marine life, but result in greater use of freshwater, challenging local watersheds. These changes will also add substantial renovation costs to the power plants in those areas.
 
The research found that climate mitigation strategies such as nuclear power and carbon capture and storage will increase water consumption. Strategies that support renewable energy and water-saving technologies will reduce it. The study's high level of geographic and technology detail provides a platform to address scientific and policy relevant and emerging issues at the heart of the water-energy nexus.
 
Why is this important?  Currently, U.S. water requirements for electricity generation account for nearly half of total freshwater withdrawals. With a changing climate, steadily growing electricity demands and limited supplies in many water-scarce areas pose a significant challenge. While electricity production is likely to increase in the near future, it is less certain how the U.S. electric sector's water demand will change. Some energy production technologies are less water intensive than others. This study sheds light on the interactions between the electricity and water systems, both state-wide and nationally.
 
What’s next? Interesting questions remain about the mechanisms of sectoral water competition outside the United States. These questions include incorporating desalinated water and groundwater, evaluating different climate mitigation and adaptation policies, and assessing environmental impacts of energy-sector transformation processes. Future research can expand the GCAM-USA framework to other countries and regions. 
4 comments
 
+Pacific Northwest National Laboratory (PNNL) OK, that's honorable; we have a debate.
Now, just for precision: geothermal energy is in fact also nuclear. Its origin is the decay of naturally occurring radioactive elements, and extraction of this radiogenic heat comes at a price: the proliferation of enormous quantities of slightly radio-polluted water. Therefore the entire nomenclature of terms needs to be deeply revised.
Furthermore, the term "renewable energy" is a blatant abuse; no treatise of physics I know of defines how to renew energy.
Solar cells, for example, have very poor efficiency. Most of the incident energy is re-radiated as a low-temperature, high-entropy by-product unfit for other uses, which contributes to the desertification effect observed in the vicinity of large solar installations on farmland. The complexity of the water cycle therefore grows by about an order of magnitude.
Wind energy is not free either, since it slows and modifies atmospheric air flows, creating a local discontinuity.
So any attempt to look at the problem with a spherical-cow methodology is pretty nonsensical, while attempts to create full models are unrealistic. That leaves a significant grey zone of uncertainty that won't be resolved unless, of course, an exact model can validate its extent. Reductio ad absurdum.
Therefore pretending is the only option that fuels such articles, and I would greatly appreciate this being stated, because, as it happens, politics is no science and proves very prone to turning "maybes" into binding laws favoring biased economic interests.
Thanks for your reading.

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary
 
According to new PNNL research, natural gas powered solid oxide fuel cells – located at the point of use to produce electricity for facilities like big box stores or hospitals – could provide both economic and environmental benefits. Read more at http://www.pnnl.gov/news/release.aspx?id=4185.
 
* * *
 
Instead of drawing electricity from the power grid, facilities could use natural gas-powered solid oxide fuel cells to lower their electric costs, increase power reliability, reduce greenhouse gas emissions, and maybe even offset costs by selling excess fuel cell-generated power back to the power grid. Such an energy future could be possible — assuming fuel cell lifespans are improved and enough systems are produced to reach economies of scale — according to a cost-benefit analysis published in the journal Fuel Cells.
 
If such advances are made, PNNL researchers conclude natural gas solid oxide fuel cells could play a significant role in meeting future energy demand. The technology could help meet the 10 percent increase in electricity the nation will need in the next decade. Meeting that increase, estimated by the U.S. Energy Information Administration, will require 68 gigawatts of additional generating capacity.
 
PNNL's study focused on distributed generation, where fuel cells are located right at the individual facilities they power. This is different than the traditional central generation approach to energy, where large power plants are often located far away from end users.
 
Instead of burning fuel like combustion engines, solid oxide fuel cells oxidize it electrochemically. Each cell is made of ceramic materials, which form three layers — an anode, a cathode and a solid electrolyte, much like a battery. Multiple cells must be assembled into a fuel cell stack to achieve the desired power output.
 
Solid oxide fuel cells are inherently highly efficient in converting fossil fuels to electrical energy and PNNL's unique system design, which includes anode recycling, steam reforming, and pressurization, advances the technology even further.
 
"On the anode side of the fuel cell, we recycle waste heat in a steam reformer to squeeze even more energy out of the fuel - about 25 percent more chemical energy compared to typical solid oxide fuel systems," said Larry Chick a materials engineer at PNNL. "The stack operates under high pressure - about the equivalent of being 230 feet under water. That increases the power density, which reduces the size of the stack by about 60 percent and lowers the fuel cell's overall cost significantly."
 
The researchers based their cost modeling study on a small-scale solid oxide fuel cell system designed, built, and tested at PNNL and a larger, conceptual system of 270 kilowatts, which is enough to power a large retail facility or light industry. Cost estimates are expressed in 2012 dollars.
 
The study showed that for the same power output, a natural gas solid oxide fuel cell would cost almost one-third less to build than a centralized natural gas combined cycle plant.
 
"We were intentionally conservative as we calculated the cost of both building and operating natural gas solid oxide fuel cells and other types of generation," said PNNL economist Mark Weimar. "For instance, in comparing the solid oxide fuel cell to a 400 megawatt natural gas combined cycle plant, we assumed that the larger, central generation plant would pay cheaper wholesale prices for natural gas compared to smaller, distributed generation fuel cells, which we estimated would pay retail or almost double the wholesale cost."
 
The authors report that if stack life improvements are made and mass manufacturing is achieved, natural gas solid oxide fuel cells can be cost-competitive with natural gas combined cycle plants, which are projected to generate electricity at a total cost of 6.5 cents per kWh. They calculated natural gas fuel cells would have a total electricity cost of 8.2 cents per kWh.
 
When researchers factored in the additional benefits of distributed generation, it brought the cost down to 5.3 cents per kWh. Those benefits stem from the fact that fuel cells don't have the extra costs and power losses associated with transmission and distribution power lines that central power plants experience.
 
Higher efficiency, lower emissions: The high efficiency of natural gas fuel cells means fewer greenhouse gas emissions as well. The PNNL prototype showed 56 percent electrical conversion efficiency compared to 32 percent from conventional coal plants and 53 percent from natural gas combined cycle plants. The study shows that the natural gas fuel cell system would produce 15 percent less carbon dioxide per kWh than a modern natural gas combined cycle power plant.
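
The arithmetic behind those efficiency-to-emissions numbers is simple: CO2 intensity is the fuel's emission factor divided by the conversion efficiency. Here is a minimal sketch; the emission factors are standard approximate values, not figures from the PNNL study, so the outputs are illustrative only.

# CO2 intensity of generation = fuel emission factor / electrical efficiency.
# Emission factors are approximate textbook values in kg CO2 per kWh of
# fuel (thermal) energy, not numbers from the PNNL study.
EMISSION_FACTOR = {
    "natural gas": 0.181,
    "coal": 0.34,
}

def co2_per_kwh(fuel, efficiency):
    """kg of CO2 emitted per kWh of electricity produced."""
    return EMISSION_FACTOR[fuel] / efficiency

print(co2_per_kwh("coal", 0.32))         # conventional coal: ~1.06 kg/kWh
print(co2_per_kwh("natural gas", 0.53))  # NG combined cycle: ~0.34 kg/kWh
print(co2_per_kwh("natural gas", 0.56))  # NG solid oxide fuel cell: ~0.32 kg/kWh

Note that raw efficiency alone accounts for only about a 5-6 percent gap between the fuel cell and the combined cycle plant; the 15 percent figure in the study presumably also reflects system-level differences beyond conversion efficiency.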
 
Additionally, since a distributed generation natural gas fuel cell system would be installed on site, some of the heat from the fuel cell could be used to heat water or interior spaces. If just 20 percent of the fuel cell heat replaced the use of grid electricity for heating, then the fuel cell system would produce 22 percent less carbon dioxide than large-scale natural gas combined cycle plants tied to the grid.
 
Currently, fuel cell stacks last only about two years. Over time, as fuel and oxygen are constantly pumped in and run over the catalyst in the cells, the chemicals start to degrade and the system wears down. The study noted fuel cell stacks would need to last six to eight years to be competitive, and uses a 15-year lifespan in the study's cost comparison table.
 
"With additional research, the limited stack life can be overcome," Chick said. "It's a matter of conducting reliability testing on integrated systems and using advanced characterization techniques to figure out what is degrading the performance of the stacks over time. The Department of Energy's Solid Oxide Fuel Cell program has been achieving targeted improvements over the last decade, so things are moving in the right direction."
15 comments
 
+Brad Steeg
Maybe I did not explain clearly my thoughts on this natural gas solid oxide fuel cell (NGSOFC) tested by PNNL, or you have not read closely the article on the economics of a standard installation of the optimal 270 kW fuel cell. The main problem of this small unit based on natural gas is that its cost per kWh is not competitive, without CO2 capture, with the cost of a big 400 MW natural gas turbine integrated into a general power grid and explicitly equipped with a suitable plant to capture and recycle the CO2 emitted. The small 270 kW installation is competitive only with an impossible and never-proposed micro-turbine, both without CO2 capture, for possible use as small decentralized units connected or not (autonomous) to the power grid.
The same article considers that the best prospect for installing 270 kW decentralized and independent NGSOFCs is, optimistically, 250 production units in all the USA, which is nothing. It means that this project of small decentralized NGSOFCs is unrealistic and not economically convenient, while the big turbine plant, connected to the general grid, is much more economical and ecologically correct through capture of the CO2 emitted.
Clearly the small independent fuel cell is, furthermore, ecologically unacceptable as polluting.
That is all, without any need for a discussion of the opportunity to capture and recycle the greenhouse gases emitted, or not, by small and limited NGSOFC units.
 
 
A new twist on an old tool lets scientists use light to study and control matter with 1,000 times better resolution and precision than previously possible.

Physicists at the University of Michigan have demonstrated "ponderomotive spectroscopy," an advanced form of a technique that was born in the 17th century when Isaac Newton first showed that white light sent through a prism breaks into a rainbow.

The researchers started with atoms of the soft metal rubidium. In rubidium atoms, just one electron occupies the outer valence shell. With finely tuned lasers, they excited this outer electron enough to move it 100 times farther away from the atom's nucleus. This turned it into what's called a Rydberg atom – a giant that exhibits not only greater size, but also much stronger interactions. Because of these properties, Rydberg atoms are candidates for the circuits of future quantum computers.

Next, the researchers generated a lattice of laser beams that formed a sort of egg carton of light. This lattice is what gave rise to the ponderomotive force that's essential to their approach. The ponderomotive interaction is present in all light fields, but the researchers found that by pulsing the laser beams at certain rates over time, they could use the field both to trap the whole Rydberg atom by holding fast to its outer electron, and to induce in that atom a real quantum leap that would be forbidden with traditional spectroscopy.
 
Thanks for the post +University of Michigan Business Engagement Center
The paper is available open access.
Forbidden atomic transitions driven by an intensity-modulated laser trap
http://arxiv.org/pdf/1409.4087v1.pdf

SciConcilium

• General/Interdisciplinary
 
Black plague history must be rewritten; rats were not the villains.

Centuries of hatred have been directed at black rats because of their supposed role in spreading the black plague. The European mid-14th-century black plague (bubonic plague) was thought to have been spread by rats when fleas (Xenopsylla cheopis) carrying the plague bacterium jumped from rats to humans. However, recent findings published in the Proceedings of the National Academy of Sciences suggest Asian gerbils may be the ones to blame. Warm weather in Asia allowed another plague-carrying rodent – the giant gerbil – to thrive. The explosive combination of growing trade between Asia and Europe and warm weather conditions helped the black plague enter European ports, promoting the disease's dispersion throughout the whole continent. Scientists will now analyze bacterial DNA from European plague victims to trace its variation from Central Asia.

Source: BBC.com - http://ow.ly/JyYH1

Original Study: Proceedings of the National Academy of Sciences - Climate-driven introductions of the Black Death and successive plague epidemics into Europe - http://ow.ly/JyYLN 

#plague #blackplague #bubonic #rats #europe #asia #gerbils #centralasia
23 comments
 
I agree. 
We fight a constant fight against excessively inflated headlines and reporting in science articles.  The media wants to get those eyeballs and a good overblown headline will do it every time.

I think the original journal article states what the study was about much better than the headlines and discussion suggest.

Climate-driven introduction of the Black Death and successive plague reintroductions into Europe
http://www.pnas.org/content/early/2015/02/20/1412887112.full.pdf+html

Thanks 

Scott Lewis

• General/Interdisciplinary
 
What better way to easily support good science communication than to nominate one of the best accounts on Twitter for an award?
 
I nominate @RealScientists for a Shorty Award!

Why? Well for one, it's got to be one of the best ways to connect with scientists from nearly every field over on Twitter. 

Each week, a scientist or science communicator takes over the account and engages directly with the public, talking about their field or leading a conversation on a particular topic. It's a fantastic way to learn more about science with those that are truly passionate about their subject area. 

Please head on over to the +Shorty Awards site and give them a nomination! 

Find them all over the Internet here:
Twitter - http://twitter.com/realscientists
Facebook - https://www.facebook.com/RealScientists
Website - http://realscientists.org 
#ShortyAwards   #Twitter   #RealScientists   #ScienceSunday   #Science   #STEM   #STEAM  

Justin Chung

• General/Interdisciplinary
 
Yesterday's DSCOVR launch was scrubbed just minutes before the scheduled liftoff time. For those interested, scrub details are in the original post below. The next launch attempt is today at 6:07pm EST. Edit: Monday's launch was called off due to weather and Tuesday's due to upper-level winds. The next launch attempt is now tomorrow (Wed) at 6:03pm EST.

DSCOVR will succeed NASA's ACE (Advanced Composition Explorer) in supporting solar observations and will provide 15 to 60 minutes of warning time to improve predictions of geomagnetic storm impact locations.
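
That 15-to-60-minute window is just the travel time of the solar wind from the Sun-Earth L1 point (about 1.5 million km sunward of Earth) to Earth. A quick back-of-the-envelope sketch, using typical solar wind speeds rather than mission figures:

L1_DISTANCE_KM = 1.5e6  # approximate distance from Earth to the L1 point

def warning_minutes(solar_wind_km_s):
    """Travel time of the solar wind from L1 to Earth, in minutes."""
    return L1_DISTANCE_KM / solar_wind_km_s / 60

for speed in (400, 800, 1600):  # slow wind, fast wind, fast CME
    print(f"{speed} km/s -> {warning_minutes(speed):.0f} min of warning")
# prints roughly 62, 31 and 16 minutes, matching the quoted 15-60 minute range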

Here's a good Q&A on NOAA's DSCOVR Mission:
www.nasa.gov/content/goddard/qa-on-noaas-dscovr-mission

For additional info and details on DSCOVR:
www.nesdis.noaa.gov/DSCOVR
 
DSCOVR Launch Today Scrubbed. Next Attempt Tomorrow!

Today's DSCOVR (Deep Space Climate Observatory) launch on a SpaceX Falcon 9 at Cape Canaveral Air Force Station's Space Launch Complex 40 was scrubbed just minutes before the scheduled liftoff time due to what was first reported as "tracking issues."

NOAA (National Oceanic and Atmospheric Administration) elaborated: "There were two issues: a first stage transmitter and an issue with a range radar." SpaceX's Elon Musk tweeted, "Prob good though. Will give us time to replace 1st stage video transmitter (not needed for launch, but nice to have)."

NASA later blogged, "Today's launch of the DSCOVR mission is scrubbed due to loss of the Air Force's Eastern Range radar, which is required for launch." The second attempt is tomorrow (Monday) at 6:07pm EST, 3:07pm PST. NASA TV will begin countdown and launch coverage at 5pm EST. [blogs.nasa.gov/dscovr]

#dscovr #nasa #noaa #spacecraft #satellite #spacex #falcon9 #rocket #space #science #technology #scienceeveryday #sciencesunday

American Scientist

• General/Interdisciplinary
 
 
Marie Curie was a renowned physicist and chemist who did pioneering research on radioactivity. She won two Nobel Prizes: she was the first woman to win one, and the first person – and still the only woman – to win twice. Alongside all of her research and the strides she made in the scientific world, she was also a mother.

How does having children potentially affect a woman's career in science, technology, math and engineering fields? Read one of our most popular articles, "When Scientists Choose Motherhood" by Wendy M. Williams and Stephen J. Ceci: http://www.amsci.org/issues/pub/when-scientists-choose-motherhood

#womenshistorymonth   #womenintech   #Womeninscience   #History   #Science   #motherhood   #children  
 
Curie married twice. That's enough!

SciConcilium

• General/Interdisciplinary
 
New screen delivers real-time holograms

Most people don't realize it, but holography dates from 1947. It was such an important discovery that the scientist and inventor Dennis Gabor was awarded the Nobel Prize in Physics for it in 1971. Overcoming long-standing technical limitations, researchers in the UK have developed a new display able to make real-time holograms a reality. The scientists used nanostructures that act as antennae in the display, which can be operated using streams of liquid crystals. The new technology takes advantage of how light interacts with the electrons that float freely in metals, a phenomenon known as plasmonics. Although it's not yet possible to use this for videoconferencing, it is possible to make holograms of recorded images quickly.

Source: Science alert - http://ow.ly/KrP8v
Original Study: Physica Status Solidi - Engineered pixels using active plasmonic holograms with liquid crystals - http://ow.ly/KEbYW

#holograms #liquidcrystals #light #plasmonics #sciencefiction
 
Engineering!

Pacific Northwest National Laboratory (PNNL)

• General/Interdisciplinary
 
The quality of institutions that shape worldwide investment risks could substantially change the difficulty of curbing global warming. New research published in Nature Climate Change indicates that these risks may shift efforts to reduce emissions from developing to developed countries. The research also found that these factors substantially change the cost of cutting global carbon dioxide emissions in half by 2050. Read more at http://www.pnnl.gov/science/highlights/highlight.asp?id=3942.
 
* * *
 
“… investment banks would charge higher interest on a loan to a firm that builds a wind farm in India, compared to, say, the U.S., because banks treat the Indian investment as financially more risky," said Gokul Iyer, lead author and researcher, working at the Joint Global Change Research Institute (JGCRI), a partnership between PNNL and the University of Maryland.
 
To understand the implications of real-world data, researchers used the Global Change Assessment Model (GCAM), a technologically detailed climate-energy-economy integrated assessment model developed at JGCRI. Taking into account the large variation in investment risks in real-world technology decisions, they used GCAM to analyze conditions that affect where and how energy investors undertake investments.
 
Computing the impacts on costs and geographical distribution, the research found that accounting for such variations increases the estimated cost of halving global carbon dioxide emissions by 2050 by up to 40 percent.
 
Further, the research concludes that major efforts to bring about institutional reforms will be a critical element of a larger global effort to address climate change. Absent reforms, the effort to mitigate global warming would be ineffective in developing countries, and hence, the majority of mitigation effort would shift to developed countries.
 
Why is this important? Location, location, location: when investment decisions are being made, the real estate mantra applies. Institutional qualities, such as the infrastructure to support development and industry, are important factors in investment decisions. This study found that when international differences in institutional quality are accounted for in technology investments, the costs of reducing carbon dioxide emissions are substantially higher, and the increases are borne primarily by industrialized countries.
 
"Our study looks at how the variation in investment risks impacts which technologies are actually used to reduce emissions and where those emission reductions occur," said Iyer.
 
What's Next? The study looked at one variable—quality of national institutions—among many that affect investment decisions. Although an important factor in the cost of limiting global warming to 2°C, other factors might have different or even counteracting effects. Future model research should use real-world assumptions to understand the implications of other factors and their interactions with each other.
4 comments
 
I agree with your view that the problem of the global cost of CO2 capture and reduction on a world scale is not correctly posed, especially when it is stated in general terms that the cost of capturing and converting CO2 will be much higher in the developed, technically advanced countries, and that those countries will pay much more for reductions in greenhouse gas emissions, for themselves and for the others too. The reason given is that the overall cost of the same chemical process for CO2 reduction is higher from every point of view, including finance, in advanced countries than in poor ones. It would follow that the advanced countries will end up paying to install plants for CO2 capture and conversion both for themselves and for the others.
This generic point of view is not correct, as it assumes that the process of capturing and converting CO2 to new uses is the same and identical everywhere, only with different costs. That idea is wrong because, just as there are many processes, of different scales, that generate CO2, so there are many complementary processes to capture and recycle it, with different costs, quite apart from where the plant is installed.
A discussion of capturing greenhouse gases cannot be as generic and global as in this study, based only on the finance necessary for a global investment in CO2 recovery, because it is a mistake to lump together different processes in different countries with completely different costs of CO2 capture.
It is the same mistake we make when we continuously discuss, everywhere and all the time, the absolute necessity of reducing CO2 emissions: in the end everyone agrees, but no one knows or proposes what to do, or where and when to proceed to install a plant somewhere and begin meeting this absolute necessity.
The only sure thing is that something must be done, but no one decides what to do or installs a plant anywhere to recover the greenhouse gas.
Similarly now, when someone says that generic costs are higher in one place than in another, it is said again that it is strongly necessary to do something, but no one is in a position to pay more at home than in the neighbouring country. So again no one does anything; only words and economic studies are produced.

Thanks for your attention.  

SciConcilium

• General/Interdisciplinary
 
Magnetic Stimulation of Neurons

Neurodegenerative disorders like Parkinson's and Alzheimer's can lead to neuronal inactivity and, potentially, neuron death. Scientists have used magnetically activated nanoparticles to stimulate neurons in a mouse model. Non-invasive magnetic stimulation was able to activate targeted areas of the mouse brain. The preliminary data show encouraging prospects for brain disorders, but questions remain about how nanoparticles would affect brain function in the long term.

Original Study: Science http://www.sciencemag.org/…/early/2015/03/11/science.1261821
Source: Verge http://ow.ly/Km2Eh

#Alzheimer #Parkinson #neurodegeneration #brain #magneticstimulation #nanotechnology

SciConcilium

• General/Interdisciplinary
 
Breakthrough in energy harvesting could power ‘life on Mars’

Since life on Earth is becoming too mainstream, it is time to start thinking of colonizing other planets and possibly starting civilizations there. However, to satisfy our human needs we will require sources of energy generation at those locations – and this is exactly what researchers at Northumbria University are working on. The proposed energy-harvesting plan uses the Leidenfrost effect, the phenomenon that occurs when a liquid comes into contact with a surface much hotter than its boiling point. Through this method, researchers can generate energy from carbon dioxide: by using blocks of dry ice and trapping the evaporated gas, they hope to use the vapour to power an engine. The technique is attractive because of its effectiveness in extreme and foreign environments. Should the idea work, it will provide immense assistance in long-term space exploration missions. Although dry ice is not an abundant resource on Earth, there is strong evidence that it is plentiful on the red planet. Dr. Rodrigo Aguilar, a main contributor to this project, said: “One thing is certain; our future on other planets depends on our ability to adapt our knowledge to the constraints imposed by strange worlds, and to devise creative ways to exploit natural resources that do not naturally occur here on Earth.”

Original Paper: Nature Communications http://www.nature.com/…/150…/ncomms7390/full/ncomms7390.html
Source: http://ow.ly/K0o3p

#energy #mars #exploration #Leidenfrosteffect

Ted Ewen
owner

• General/Interdisciplinary
 
Just a Public Service Announcement
Royal Society archives free online until the end of March
 
Awesome birthday present from the Royal Society. "[W]hat better way to mark the 350th anniversary of the world’s first science journal than to make all Royal Society content freely available, to everyone?

Yes, you read that right… readers can access our complete collection online, without the need for a subscription, between now and the end of March.
[...]
Seminal research papers include accounts of Michael Faraday’s ground-breaking series of electrical experiments, Isaac Newton’s invention of the reflecting telescope and the first research paper published by Stephen Hawking.

Early papers contain fascinating descriptions of how Captain James Cook preserved the health of his crew aboard the HMS Endeavour and the astonishment of 18th century society at the performance of an eight year-old Mozart.

More recently, our topical publications have covered such issues as the discovery of the Higgs boson, the impact of climate change on vector-borne diseases, and the latest developments in bioinspiration.

So, what are you waiting for?"
5 comments
 
This collection is full of awesome.  

SciConcilium

• General/Interdisciplinary
 
First human head transplantation possible within two years

Some find it an aberration, a step into playing God and perturbing the natural order. Others see it as the last resort for being set free from a malfunctioning body. Sergio Canavero of the Turin Advanced Neuromodulation Group in Italy has expressed his interest in helping people for whom a head transplant might be the only solution. People may not know or remember, but Robert White took the first steps at Case Western Reserve University School of Medicine in Cleveland, performing the equivalent procedure – a monkey head transplant – in 1970. A new article on the technical procedures to be used in humans was published in Surgical Neurology International and is already raising many ethical issues. Dr. Canavero believes all the technical processes are already well defined and that a human head transplant could be performed as early as 2017.

Source: NewScientist - http://ow.ly/JEOg2
Original study: Surgical Neurology International - Surg Neurol Int 2015, 6:18 - “The "Gemini" spinal cord fusion protocol: Reloaded” - doi.org/2c7

#headtransplant #organrejection #ethics #humantransplantation
41 comments
 
What I can't get about it is that the procedure's proponents MUST have found a way to reconnect severed nerves, independently of how "complete" the transplant will be (brain only, or the original spinal cord too).
If they can do that, people with spinal lesions or badly severed limbs or even missing eyes are going to be very happy.

Gary Ray R
owner

• General/Interdisciplinary
 
Secrets of the Shamsheer Sword

In the article below I discuss a recent study of a famous shamsheer sword. As a metallurgist, I find this is probably my favorite topic.
 
Opening the Secrets of the Shamsheer Sword

For people who are interested in metallurgy, metals, or history, the study of old metal weapons is a fascinating subject.  Old swords, designed with the finest metallurgical craftsmanship of their era, are a rare treat.  Metal swords had to be lightweight, strong, and tough, and had to hold a sharp edge.  Designing a blade that is thin and curved takes master sword-making skills that we are only now rediscovering.

The sword that was studied is called a shamsheer sword (it has many spellings; I will use shamsheer).  Wiki says about shamsheer swords:

A Shamshir (from Persian شمشیر shamshir) also Shamsher, Shamsheer and Chimchir, is a type of sabre with a curve that is considered radical for a sword: 5 to 15 degrees from tip to tip. The name is derived from Persian شمشیر shamshīr, which means "sword" (in general). The radically curved sword family includes the shamshir, scimitar, Talwar, kilij, Pulwar and the Turko-Mongol saber.  ⓐ

Originally Persian swords were straight and double edged, just as the Indian khanda. The curved scimitar blades were Central Asian in origin. The earliest evidence of curved swords, or scimitars, is from the 9th century, when these weapons were used by soldiers in the Khurasan region of Central Asia. The sword now called "shamshir" was introduced to Iran by Turkic Seljuk Khanate in 12th century and was later popularized in Persia by the early 16th century, and had "relatives" in Turkey (the kilij), the Mughal Empire (the talwar), and the adjoining Arabian world (the saif) and (the sam-saam).  ⓐ

It is a terrible weapon designed for slashing at an enemy; and it is an example of skilled metallurgical knowledge.  The advancement of the study of metal has been driven by the need for better weapons.  From the abstract of the study: 

The evolution of metallurgy in history is one of the most interesting topics in Archaeometry. The production of steel and its forging methods to make tools and weapons are topics of great interest in the field of the history of metallurgy. In the production of weapons, we find almost always the highest level of technology. These were generally produced by skilled craftsmen who used the best quality materials available. Indian swords are an outstanding example in this field and one of the most interesting classes of objects for the study of the evolution of metallurgy   ⓑ   

Scientists at the UK's Science & Technology Facilities Council (STFC) used two different approaches to study the sword: they took a small sample from an already damaged section and studied it by classic metallurgical methods, then examined the whole sword with modern, high-tech, non-destructive neutron techniques and compared the results.

Indian swords don't get a lot of cultural respect compared to the works of Spain or Japan, but a new study used two different approaches to analyze a shamsheer, a 75-centimeter-long sword from the Wallace Collection in London, and found that it was a work of master craftsmanship.

The study, led by Eliza Barzagli of the Institute for Complex Systems and the University of Florence in Italy, used metallography and neutron diffraction to test the differences and complementarities of the two techniques. The shamsheer was made in India in the late eighteenth or early nineteenth century and is of Persian origin. The base design spread across Asia and eventually gave rise to the family of similar weapons called scimitars that were forged in various Southeast Asian countries.  ⓒ

The sword in question first underwent metallographic tests at the laboratories of the Wallace Collection to ascertain its composition. Samples to be viewed under the microscope were collected from already damaged sections of the weapon. The sword was then sent to the ISIS pulsed spallation neutron source at the Rutherford Appleton Laboratory in the UK. Two non-invasive neutron diffraction techniques not damaging to artefacts were used to further shed light on the processes and materials behind its forging.  ⓒ

The antique sword, which is from the Wallace Collection in London, was tested by scientists using two ISIS instruments – INES, which focuses on material science, archaeometry and detector tests, and ENGIN-X, which is more commonly used to test major engineering components such as aircraft wings or train wheels. INES was used to determine the composition and microstructure of the metals used; and ENGIN-X showed how the steel was formed to distribute strain on the blade.  ⓓ

With the ability to test the entire blade, scientists were able to discover just how well the master sword makers of India understood metallurgy.  Descriptions of the testing devices are given in the references below.

It was established that the steel used is quite pure. Its high carbon content of at least one percent shows it is made of wootz steel. This type of crucible steel was historically used in India and Central Asia to make high-quality swords and other prestige objects. Its band-like pattern is caused when a mixture of iron and carbon crystalizes into cementite. This forms when craftsmen allow cast pieces of metal (called ingots) to cool down very slowly, before being forged carefully at low temperatures. Barzagli's team reckons that the craftsman of this particular sword allowed the blade to cool in the air, rather than plunging it into a liquid of some sort. Results explaining the item's composition also lead the researchers to presume that the particular sword was probably used in battle.  ⓒ

Scientists were also able to tell that two different forging methods were used in making the blade.  

I’ll end with a quote from one of the scientists who worked on this study.

"Experiments like these are necessary to study the history of science, and to learn what technology was known at different points in history and in different cultures," stated Dr Joe Kelleher, an instrument scientist for ENGIN-X, on the significance of applying modern techniques to delicate historical artefacts, especially those related to warfare, which is itself a key driving force behind historical technological development. "The craftsmen often did not record their methods and in some cases actively protected their trade secrets."   ⓓ



ⓐ Wiki Shamshir
http://en.wikipedia.org/wiki/Shamshir

ⓑ  Applied Physics A  (Behind Paywall)
Characterization of an Indian sword: classic and noninvasive methods of investigation in comparison
http://link.springer.com/article/10.1007%2Fs00339-014-8968-0

ⓒ  Science 2.0
http://www.science20.com/news_articles/shamsheer_indian_sword_is_a_masterpiece_of_bladesmithing-153071

ⓓ  Science Technology Research Council Press Release
Secrets of India’s master sword-makers revealed
http://www.stfc.ac.uk/3500.aspx

ENGIN-X is a dedicated engineering science facility at ISIS. The beamline is optimized for the measurement of strain, and thus stress, deep within a crystalline material, using the atomic lattice planes as an atomic "strain gauge".
http://www.isis.stfc.ac.uk/instruments/engin-x/engin-x2900.html
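
To make the "strain gauge" idea concrete, here is a minimal sketch of the arithmetic, using a simplified monochromatic Bragg geometry with made-up numbers; ENGIN-X actually uses time-of-flight neutrons, but the strain calculation is the same idea.

import math

def lattice_spacing(wavelength_angstrom, two_theta_deg):
    """Bragg's law, lambda = 2 * d * sin(theta), solved for the spacing d."""
    theta = math.radians(two_theta_deg / 2)
    return wavelength_angstrom / (2 * math.sin(theta))

def strain(d, d0):
    """Relative shift of the lattice spacing from its unstrained value."""
    return (d - d0) / d0

d0 = lattice_spacing(1.540, 90.00)  # hypothetical unstrained reference peak
d = lattice_spacing(1.540, 89.95)   # same peak, shifted by residual stress
print(f"strain = {strain(d, d0):.1e}")  # ~4e-4, a typical residual-strain scale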

INES is a powder diffractometer, built and managed by the Italian National Research Council (CNR) within the cooperation agreement with STFC. It is a general purpose diffractometer and is mainly devoted to materials characterization (structure refinement and phase analysis), cultural heritage studies and equipment tests.
http://www.isis.stfc.ac.uk/instruments/ines/

IMAGE: ISIS scientist Dr Francesco Grazzi setting up the ENGIN-X measurements
(Credit: STFC)
12 comments
 
I am interested in the history of swords and only wanted some places to look for more information.  As always asking for references.  No offence meant to anyone.  Now I have a lot of reading to do, and thanks for taking interest, everyone. 

John Parrott

• General/Interdisciplinary
 
 
We live in an age when all manner of scientific knowledge—from climate change to vaccinations—faces organized and often furious opposition. So what's causing reasonable people to doubt the data?
14 comments
 
From what I can tell, this is a giant case of throwing the baby out with the bath-water, and it's happening because people can't tell the difference between science and R&D.

Science is a method employed to increase understanding in a structured way. R&D is a process whereby companies come up with new ways to make money.

Each process is carried out by people with  similar skill sets, but the results can diverge, and that comes down to the intent behind the work. Science is usually carried out because someone has observed something and wants to know why. R&D is not about understanding but about understanding just enough to make money.

Because people have no visibility into the process, they can't measure intent for themselves, so they equate the two – usually because they are reported equivalently in the media. And because unscrupulous people have abused R&D to make money while doing harm at the same time, people then assume science is capable of the same harm.

Francesco Busiello
moderator

• General/Interdisciplinary
 
The science of ice cream

Who doesn't love ice cream? It's sweet, delicious and refreshing. Frozen yet creamy. It's no wonder that Americans consume more than 20 litres (about 42 pints) per person a year(!).

I love ice cream. I'm the kind of person that would eat ice cream at any time, no matter the weather.  You could dunk me into a frozen lake and then ask me if I'd like some ice cream and I'd probably say yes. If I have to die of hypothermia, I might as well do it while eating ice cream.

But what is ice cream? What is it made of? And who invented it?

A short history of ice cream

Iced drinks and myths

We're not entirely sure how ice cream was invented or by whom. The earliest evidence of iced food is from a couple of thousand years ago: the Persians used to eat grape juice mixed with ice. The ice was stored in specially built evaporative coolers the size of a small building, called yakhchals.

The Romans also enjoyed mixing fruit juices with ice taken from the mountains. But, to get from the precursors of the modern granita (or other iced drinks, like the frappuccino) to proper ice cream took more than a thousand years.

There are several myths on the origin of ice cream. Some say that Marco Polo witnessed ice creams being made on one of his trips to China and then introduced them to Italy. A version of this myth involves the Mongol riders taking provisions of cream in animal-skin satchels. During the winter, in the sub-freezing temperatures of the steppe, the galloping of the horses churned the cream and turned it into ice cream. As the Mongols conquered China, this knowledge spread and was well known by the time of Marco Polo's little jaunt to Cathay.

Other accounts tell the tale of a cook under Charles I inventing ice cream, with Charles then offering the cook a lifetime pension in exchange for never giving up the recipe to the royal treat. These, however, are just myths. There is no historical evidence giving them any credence.

In fact, the easiest way to trace the history of ice cream is to follow the development of refrigeration. As Chris Clarke writes in his book "The science of ice cream", the history of ice cream can be divided into 5 stages:

1. Cooling food and drink by mixing it with snow or ice.
2. The discovery that dissolving salts in water produces cooling.
3. The discovery (and spread of knowledge) that mixing salts and snow or ice cools even further.
4. The invention of the ice cream maker in the mid-19th century.
5. The development of mechanical refrigeration in the late 19th and early 20th centuries.
-Chris Clarke, "The Science of Ice Cream", page 4

Salt and ice

There are no chemical reactions involved in making ice cream, but plenty of physical transformations. At its core, ice cream is what the name implies: very cold dairy cream. However, if we were to freeze pure cream it would just become a big, hard block of frozen dairy.

To avoid this, sugar is added to the cream. The sugar decreases the melting point of the mixture, which makes it possible to have a solution of milk and ice in which the water is only partially frozen. The free liquid water contributes enormously to the creamy texture of ice cream.

However, this causes an issue. Since the melting point of the cream and sugar mixture is lower than that of water, we can't simply use ice to freeze it. Ice on its own sits at a temperature of 0 degrees Celsius (32 Fahrenheit), and the cream and sugar mix needs to be cooled far below that. If we were to place some cream and sugar in a container, and then submerge the container in a simple bath of ice and water, ice cream would never form.

Which brings us to an important discovery in the realm of refrigeration: there is a way to make ice water colder than zero degrees centigrade. By adding salt.

Anyone who lives in a cold climate will probably be very familiar with the effect of salt on ice. Salt is used to melt snow or ice on pavements and roads.

So, if salt is added to an ice bath, it decreases the bath's melting point. A saturated solution of ice and salt reaches a temperature of -21.1 degrees Celsius (-6 F). It is then possible to use this salted ice bath, which is at a sub-zero temperature, to freeze cream and sugar.
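
If you want to see where numbers like -21.1 degrees come from, the ideal-solution estimate is the freezing-point depression formula ΔT = i × Kf × m. Here's a rough sketch in Python; the linear formula only really holds for dilute solutions, so the near-saturation value is illustrative:

KF_WATER = 1.86          # cryoscopic constant of water, K*kg/mol
VANT_HOFF_NACL = 2       # NaCl dissociates into two ions
MOLAR_MASS_NACL = 58.44  # g/mol

def freezing_point_c(salt_grams, water_kg):
    """Ideal-dilute estimate of the freezing point of salted water."""
    molality = (salt_grams / MOLAR_MASS_NACL) / water_kg
    return -VANT_HOFF_NACL * KF_WATER * molality

for grams in (50, 150, 300):
    print(f"{grams} g NaCl per kg water -> {freezing_point_c(grams, 1.0):.1f} C")
# 300 g/kg (near saturation) gives about -19 C, in the right
# ballpark of the -21.1 C eutectic mentioned above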

There is a detailed description of the effect of salt on ice in an Arabic medical textbook from 1242. Around the same period a book was also published containing recipes for sorbets.

This knowledge eventually spread to Italy. In the late 16th century, a Neapolitan scientist called Giambattista Della Porta "discovered" the cooling effect of a salted ice bath. This knowledge spread around Europe and by the mid-17th century it was not uncommon to be served frozen ice desserts at banquets. These desserts were still more granita-like and lacked the cream that gives the name to my favourite treat.

Putting the cream in ice-cream

The first written mentions of ice cream as frozen dairy appear around the end of the 17th century. Ice cream was served at a feast in Windsor in 1671 (though only at Charles II's table).

The recipes from this time show a development from the simple ice and flavouring mixtures to more complex ones involving dairy cream. Another recipe from the time detailed the ultra-rich glace au beurre — literally iced butter — which involved 40 egg yolks per litre of cream!

Another important development was the realisation that constant stirring of the sugar and cream mix would decrease the size of the ice crystals. As we'll see later, this is fundamental to the creamy texture of ice-cream.

Mechanisation and mass production

The next big step involves a woman called Nancy Johnson, who lived in Philadelphia in the 19th century.

In 1843 she was awarded a patent for the first mechanised ice-cream maker. Until then ice-cream had to be made by hand and in small batches. It was a tedious, laborious and inefficient process.

Johnson's ice-cream maker was composed of a bucket to hold the salt and ice and a sealed cylinder for the ice cream mix. The mix could be continuously stirred by a hand-cranked rotating spatula.

This design was later improved by William Young of Baltimore, who added the rotation of the sealed cylinder in the brine to improve cooling. As Harold McGee puts it: "The Johnson-Young freezer allowed large quantities of ice cream to be made with simple, steady mechanical action."

The beginning of the mass production of ice cream is usually attributed to Jacob Fussell, a Baltimore milk dealer, who started using his seasonal surplus of cream to make ice cream on a grand scale, which allowed him to sell it at a far lower price. He founded the first ice cream factory in 1851 in Baltimore before expanding to New York, Washington and Boston.

The advent of modern refrigeration increased the mass availability of ice cream.

In fact, modern industrial ice cream freezers are not that different from the Johnson-Young machine. They still have a barrel with a rotating scraper enclosed in a cooling bath. However, the coolants used have a lower temperature than ice and salt (liquid ammonia, an often used one, has a temperature of -30 C, almost 10 degrees lower than a salted ice bath) and the barrel in which the ice cream is formed is now horizontal rather than vertical and allows for continuous use. Ice cream mix is pumped in at one end and ice cream is pumped out at the other.

Since the rate of cooling is dependent on the difference between the temperature of the mix and the coolant, the lower temperature of the coolant allows faster cooling. This produces smaller ice crystals which improves texture. In fact, some restaurants offer almost-instant ice cream, made right at your table by using liquid nitrogen. As liquid nitrogen has a ridiculously low temperature (-196 C to be exact), the ice cream mix turns into ice cream almost instantly. Artisanal ice cream, like the gelato found in many Italian ice cream parlours, is still made in a batch process.
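
How much does the coolant temperature matter? A lumped Newton's-law-of-cooling sketch (dT/dt = -k(T - T_coolant)) gives a feel for it. The rate constant k is arbitrary here, and real freezing also involves latent heat, so only the ratios between coolants are meaningful:

import math

def time_to_cool(t_start, t_target, t_coolant, k=1.0):
    """Time (in units of 1/k) to cool from t_start to t_target (deg C)."""
    return -math.log((t_target - t_coolant) / (t_start - t_coolant)) / k

for name, t_cool in [("salted ice bath", -21.1),
                     ("liquid ammonia", -30.0),
                     ("liquid nitrogen", -196.0)]:
    print(f"{name}: {time_to_cool(4.0, -18.0, t_cool):.2f} time units")
# liquid nitrogen comes out roughly 18x faster than the brine bath,
# which is why tableside liquid-nitrogen ice cream is nearly instant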

The industrialisation of ice cream brought several refinements (though some might say not all of them were positive). To achieve an ever smoother texture, manufacturers began to add other ingredients such as gelatin or powdered milk.

In the United Kingdom, during the Second World War, the use of dairy cream to make ice cream was banned for rationing purposes, and British manufacturers began using vegetable oil as a fat replacement. Even though the ban was lifted after the war, the British public had become so used to the taste of vegetable oil ice cream (gross) that some manufacturers still use it. Of course, it helps that vegetable oil is also cheaper. (As a side note, British chocolate is also often made with vegetable fat alongside the cocoa butter. Some countries have very strict requirements on what can be sold as "chocolate", and chocolate made with vegetable oil cannot be sold as such there. But that's a topic for a different article.)

Manufacturers also began to add stabilisers to ensure minimal texture disruption during transport and storage in home freezers, which are much less stable and reliable than industrial ones. Other modern ice cream additives include emulsifiers, flavourings and colourants. (More on stabilisers and emulsifiers in the next section, where we'll have a look at the structure of ice cream.)

One definitely positive effect of industrialisation is the widespread use of pasteurisation, which vastly reduces the risk of spoilage and makes ice cream a safer food.

The science of ice cream

Ice cream, at its most basic, is composed of three elements: air bubbles created by the mixing and churning, ice crystals of pure water, and the concentrated cream left behind as the water freezes into those crystals. It is both an emulsion (a mixture of water and fat) and a foam. In fact, it contains all three states of matter: solid, liquid and gas.

But first, let's have a look at the fundamental ingredient of ice cream: cream.

Cream is milk that has been enriched with fat. Traditionally this happens naturally under the action of gravity: if milk is left to sit for a few days, the fat globules rise and form a layer at the top of the container (fat is lighter than water). This layer can then be skimmed off and either used as cream or churned to make butter. These days cream is separated from milk using centrifuges.

Compared to milk, which has roughly equal amounts of protein and fat, cream is much fattier and richer: about 10 parts of fat to 1 of protein. The fat in milk and cream is suspended as globules, each covered by a membrane of phospholipids.

Phospholipids are emulsifiers: they keep the fat dispersed in the water and stop the globules from sticking to each other. When cream is churned, these membranes break and the fat pools into a large mass: butter.

Let's get back to the micro-structure of ice cream.

Ice crystals form from the water in the cream as the mix is frozen. The size of the crystals determines the smoothness of the ice cream. Large crystals will give the ice cream a coarse and grainy texture. About three quarters of the water in the mix is frozen into crystals at -18 C (0 F).

The rest of the water forms a thick, highly concentrated liquid solution with sugars, milk proteins and stabilisers (if used). This solution forms a matrix in which the other particles are suspended.
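
Why doesn't all the water freeze? As ice forms, the sugars concentrate in the remaining liquid, lowering its freezing point further. Here's a rough Python sketch of this freeze concentration, assuming ideal freezing-point depression and sucrose as the only solute. Real mixes also contain lactose, milk salts and proteins, so treat the numbers as indicative only: this toy model freezes more of the water at -18 C than the roughly three quarters quoted above.

```python
# A rough sketch of freeze concentration: as ice forms, sugar
# concentrates in the remaining liquid and lowers its freezing point.
# Assumes ideal freezing-point depression (dTf = Kf * molality) and
# sucrose as the only solute -- both simplifying assumptions.

KF = 1.86          # cryoscopic constant of water, C*kg/mol
M_SUCROSE = 342.3  # molar mass of sucrose, g/mol

def frozen_fraction(temp_c, sugar_g, water_g):
    """Fraction of the original water frozen at temp_c, at equilibrium."""
    if temp_c >= 0:
        return 0.0
    # The remaining liquid sits exactly at its freezing point, so:
    # -temp_c = KF * (moles of sugar / kg of liquid water)
    moles_sugar = sugar_g / M_SUCROSE
    liquid_water_g = 1000 * KF * moles_sugar / -temp_c
    return max(0.0, 1 - liquid_water_g / water_g)

# An illustrative mix: 150 g sugar dissolved in 600 g water.
for t in (-2, -5, -10, -18):
    print(f"{t:>4} C: {100 * frozen_fraction(t, 150, 600):.0f}% "
          "of the water frozen")
```

The key qualitative point survives the simplifications: some water always stays liquid, and the colder the ice cream, the less of it there is.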

Air bubbles are introduced into the ice cream by mixing and, in some industrial processes, by directly injecting air during freezing. Air is fundamental: it disrupts the matrix formed by the ice crystals and the cream, making the ice cream easier to scoop and bite into. The increase in volume due to the air bubbles is called the overrun and is measured as a percentage of the original mix volume. Obviously, the higher the overrun, the less dense the ice cream. Soft-serve ice cream, for example, has an overrun of as much as 100%. Artisanal ice creams tend to have a much lower quantity of air.
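
Computing overrun is simple arithmetic, as in this little sketch. The soft-serve figure comes from the paragraph above; the gelato figure is an assumed, illustrative value.

```python
# Overrun: the volume increase from whipped-in air, measured as a
# percentage of the original (unfrozen) mix volume.

def overrun_percent(mix_volume, ice_cream_volume):
    """Volume increase as a percentage of the original mix volume."""
    return 100.0 * (ice_cream_volume - mix_volume) / mix_volume

print(f"soft-serve:   {overrun_percent(1.0, 2.0):.0f}% overrun")
print(f"dense gelato: {overrun_percent(1.0, 1.3):.0f}% overrun")  # assumed
```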

Finally, the fat globules from the concentrated cream stabilise the air bubbles and prevent their collapse, much as they do in whipped cream. They also provide plenty of creaminess and flavour. Other emulsifiers can be added to the mix to improve stability; egg yolk is sometimes used too, both as an emulsifier and for flavour. For example, "base gialla" is an Italian ice cream base which contains eggs.

Stabilisers are large molecules that increase the viscosity of ice cream. This has several beneficial effects: it increases the perceived smoothness in the mouth, slows melting, stabilises the foam and makes the ice cream easier to pump in an industrial setting. Too much stabiliser and the ice cream becomes too firm and chewy. Sometimes, however, this is desirable: chewy Turkish ice cream is made with the addition of a natural stabiliser, salep (a flour ground from orchid tubers).

Thawing and refreezing ice cream causes the ice crystals to grow as they melt, coalesce and reform. Poorly stored and transported ice cream will have a very coarse and unpleasant texture. It's always disappointing to buy ice cream at a supermarket only to find it composed of large crunchy crystals. I've found that different supermarkets and shops treat their ice cream with different levels of care, and I avoid buying it from those I don't trust.

To make good ice cream one must consider the delicate balance of all the ingredients and how they affect the micro-structure. The less water in the mix, the easier it is to make smooth ice cream. But too much sugar and the ice cream may end up syrupy, with a higher risk of the sugars crystallising (which gives an unpleasant texture). Too much fat and the mixing action may churn it into butter.

Industrial ice cream is made by first preparing the mix with the desired balance of milk solids, fat, water and additives. The mix is then frozen in an industrial freezer, which often also introduces air bubbles by injection. As the mix gets colder, its viscosity increases. There comes a point (at around -5 C) at which the heat introduced by the mixing blades equals the heat taken away by the coolant. At this stage the ice cream can't be cooled any further by the freezer, and only about half of the water is frozen.

The ice cream is quickly extruded and hardened by blowing -40 C air on it. During this process some of the liquid water migrates to the already formed ice crystals. The ice cream can then be packaged or formed into a variety of shapes and confections.

Serving and storing ice cream

Keeping ice cream in a home freezer for a long time can alter its structure and flavour. Home freezers are not always very stable: their internal temperature can fluctuate, which encourages ice crystals to grow. The fat in the ice cream can also absorb off-flavours from other items in the freezer and, if dried out by the freezer air, can go rancid.

It's best to store ice cream at -18 C (0 F) or below, which is a typical home freezer temperature. Industrial cold stores are usually around -25 C.

When serving ice cream, it's best to wait a little while. At a warmer temperature of around -13 C the ice cream is softer and easier to scoop. At -18 C your taste buds are numbed by the cold and pick up less flavour, so letting the ice cream warm up a little can improve the taste.

How to (not) make ice cream at home

As I was researching this article I had the idea of making ice cream at home the old-fashioned way. I don't own an ice cream machine, but really, at its core ice cream is just milk, cream and sugar. How hard can it be? (I mean, I've just spent several thousand words describing the complexity of ice cream.)

The project was doomed from the start. My trusty kitchen thermometer decided to take an unannounced vacation and measured the unsalted ice bath as a balmy 22 degrees Celsius instead of the expected 0 C. Fantastic. It meant I couldn't measure how cold my salted ice bath was, or whether my ice cream mix was getting colder.

Undeterred, I poured about four trays of ice cubes into a salad bowl and added a little water and a LOT of salt. Then I mixed about 500 grams of milk, 100 g of cream and 150 g of granulated sugar and poured the mix into a metal pot. I was trying to make an "Italian-style" ice cream, with a fat content of around 6-7%.
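
For the curious, here's a quick back-of-the-envelope check of that fat content. The fat fractions for milk and cream are my assumptions based on typical supermarket products; check the labels on yours.

```python
# Back-of-the-envelope fat content of the mix. Fat fractions are
# assumed: whole milk ~3.5% fat, the cream taken as ~35% fat.

ingredients = {
    # name: (grams, fat fraction)
    "milk":  (500, 0.035),
    "cream": (100, 0.35),
    "sugar": (150, 0.0),
}

total_g = sum(grams for grams, _ in ingredients.values())
fat_g = sum(grams * fat for grams, fat in ingredients.values())
print(f"fat: {fat_g:.1f} g of {total_g} g total "
      f"= {100 * fat_g / total_g:.1f}%")
# -> fat: 52.5 g of 750 g total = 7.0%, at the top of the 6-7% range
```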

I was not using any emulsifiers or stabilisers because I could not find any at the supermarket, and I didn't use egg yolk as an emulsifier because:

a) I was trying to make a so-called "base bianca", one of the ice-cream bases used by Italian gelatai, which can then be turned into a bunch of different flavours (including what I was making, stracciatella), rather than the egg-containing "base gialla";

b) using egg yolk would require heating the mixture to pasteurise it, and I could not really be bothered (besides, I wouldn't have been able to measure the temperature anyway, stupid broken thermometer).

So far, a disaster.

I placed the metal pot in the ice bath and started whisking. Within seconds, much like a marriage to a stripper in Las Vegas, I realised that this was not going to work out. There was far too much ice cream mix: it would have taken forever, and far more ice, to freeze it all.

I decided to pour half of the mix into an ice-cube tray and place it in the freezer. At least I would have something to compare the result to once I was done.

I used ice-cube trays because I figured that, thanks to the increased surface area, the mix should freeze faster than if I just froze it in a big bowl. Faster freezing should mean smaller crystals.

I kept on working on the mix in the metal pot. I whisked, I stirred vigorously. At some point it looked like it was getting a bit thicker, but I couldn't really tell for sure. I added more ice and more salt and kept whisking. Nothing.

Forty-five minutes in, all I had was boredom, a sore wrist and a metal pot of slightly cold sugary milk. Boredom won out: I mixed in some chopped chocolate, poured what I had into another ice-cube tray and stuck it in the freezer.

A couple of hours later, the moment of truth. Have you ever had milk ice cubes? Because that is exactly what I made. It actually tasted quite nice: milky and sugary. It's a shame the crystals were enormous; it was not creamy or chewy at all. I did find another use for the "ice-cream" cubes, though: they make a pretty good iced cappuccino.

Lessons learned:

There is a reason why we use emulsifiers and stabilisers in ice cream.
An ice bath is useless if you can't tell how cold it is.
Making ice cream is more difficult than just freezing some milk, cream and sugar.

Don't let my misadventure dissuade you from making ice cream at home. With a little more preparation and care it's definitely possible. Also, now I'm hungry for ice cream.

-Francesco

You can find this article with more links, pictures and information on my blog: http://piecubed.co.uk/science-ice-cream/

References and further reading

Harold McGee, "On Food and Cooking", ISBN 978-0684800011
Robert T. Marshall, H. Douglas Goff and Richard W. Hartel, "Ice Cream", ISBN 978-0306477003
Luciano Ferrari, "Gelato and Gourmet Frozen Desserts - A Professional Learning Guide"
American Chemical Society, "Ice Cream - The finer points of physical chemistry and flavor release make this favorite treat so sweet"
University of Guelph, Department of Food Science, "Ice Cream"
Prof. H. Douglas Goff, "Finding science in ice cream", Dept. of Food Science, Univ. of Guelph (PDF)
Chris Clarke, "The Science of Ice Cream"
Chris Clarke (2003), "The Physics of Ice Cream", Physics Education