Ward Plunet
58,584 followers
Ward Plunet's posts

Agents that imagine and plan: from DeepMind

In two new papers, we describe a new family of approaches for imagination-based planning. We also introduce architectures which provide new ways for agents to learn and construct plans to maximise the efficiency of a task. These architectures are efficient, robust to complex and imperfect models, and can adopt flexible strategies for exploiting their imagination.
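
The papers' specific architectures aren't reproduced here, but the core idea (scoring candidate actions by rolling them forward through a learned, possibly imperfect, model of the environment) can be sketched in a few lines. Everything below, including the noisy linear "learned model", imagine_step, imagined_return and plan, is a hypothetical stand-in for illustration, not DeepMind's code.

# Illustrative sketch only, not the architecture from the papers: a planner
# that scores candidate actions by imagining rollouts through a learned,
# imperfect environment model. Here the "learned model" is a noisy linear
# stand-in for a network trained from experience.
import numpy as np

rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.normal(size=(4, 4))   # imagined state transition
B = 0.5 * rng.normal(size=(4, 2))               # imagined effect of actions
w = rng.normal(size=4)                          # imagined reward weights

def imagine_step(state, action):
    # One step of the imperfect model: predicted next state and reward.
    next_state = A @ state + B @ action + rng.normal(scale=0.01, size=4)
    return next_state, float(w @ next_state)

def imagined_return(state, first_action, horizon=5):
    # Roll a candidate first action forward, then a default action afterwards.
    total, s, a = 0.0, state, first_action
    for _ in range(horizon):
        s, r = imagine_step(s, a)
        total += r
        a = np.zeros(2)
    return total

def plan(state, candidates):
    # Pick the candidate action with the highest imagined return.
    scores = [imagined_return(state, a) for a in candidates]
    return candidates[int(np.argmax(scores))]

state = rng.normal(size=4)
candidates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
print("chosen action:", plan(state, candidates))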

Why the future of deep learning depends on finding good data

The punch line:

...That’s right, rather than working towards the goal of getting as much training data as possible, the future of deep learning may be to work towards unsupervised learning techniques. If we think about teaching babies and infants about the world, this makes sense; after all, while we do teach our children plenty, much of the most important learning we do as humans is experiential, ad hoc—unsupervised.


How physical exercise prevents dementia

Numerous studies have shown that physical exercise seems beneficial in preventing cognitive impairment and dementia in old age. Now, in one of the first studies of its kind worldwide, researchers have explored how exercise affects brain metabolism. As expected, physical activity influenced brain metabolism: it prevented an increase in choline. The concentration of this metabolite often rises as a result of the increased loss of nerve cells that typically occurs in Alzheimer's disease. Physical exercise led to stable cerebral choline concentrations in the training group, whereas choline levels increased in the control group. The participants' physical fitness also improved: they showed increased cardiac efficiency after the training period. Overall, these findings suggest that physical exercise not only improves physical fitness but also protects cells.

link: https://www.sciencedaily.com/releases/2017/07/170721090107.htm

Why sugary drinks and protein-rich meals don't go well together

Having a sugar-sweetened drink with a high-protein meal may negatively affect energy balance, alter food preferences and cause the body to store more fat, according to a study published in the open access journal BMC Nutrition. Dr. Shanon Casperson, lead author of the study from the USDA-Agricultural Research Service Grand Forks Human Nutrition Research Center, USA, said: "We found that about a third of the additional calories provided by the sugar-sweetened drinks were not expended, fat metabolism was reduced, and it took less energy to metabolize the meals. This decreased metabolic efficiency may 'prime' the body to store more fat." The researchers found that including a sugar-sweetened drink decreased post-meal fat oxidation, the process that kick-starts the breakdown of fat molecules, by 8%. If a sugar-sweetened drink was consumed with a 15% protein meal, fat oxidation decreased by 7.2g on average. If a sugar-sweetened drink was consumed with a 30% protein meal, fat oxidation decreased by 12.6g on average. While having a sugar-sweetened drink increased the amount of energy used to metabolize the meal, the increased expenditure did not offset the additional calories from the drink. Dr. Casperson said: "We were surprised by the impact that the sugar-sweetened drinks had on metabolism when they were paired with higher-protein meals. This combination also increased study subjects' desire to eat savory and salty foods for four hours after eating."

Interpreting neurons in an LSTM network

A few months ago, we showed how effectively an LSTM network can perform text transliteration. For humans, transliteration is a relatively easy and interpretable task, which makes it a good setting for examining what the network is doing and whether its approach resembles how humans handle the same problem. In this post we'll try to understand: What do individual neurons of the network actually learn? How are they used to make decisions?
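
The post's own model and data aren't shown here; as a rough, hypothetical illustration of the kind of probing involved, the snippet below runs a character sequence through a small untrained PyTorch LSTM and prints one hidden unit's activation at every character, which is the raw material for asking what an individual neuron responds to. The vocabulary, sizes and the chosen neuron index are arbitrary.

# Hypothetical probe, not the transliteration network from the post: record
# per-character hidden activations of a character-level LSTM so that
# individual units can be inspected one at a time.
import torch
import torch.nn as nn

vocab = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz ")}
embed = nn.Embedding(len(vocab), 16)
lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)

text = "an example input"
ids = torch.tensor([[vocab[c] for c in text]])      # shape (1, seq_len)

with torch.no_grad():
    outputs, _ = lstm(embed(ids))                    # shape (1, seq_len, 32)

activations = outputs[0]                             # shape (seq_len, 32)
neuron = 7                                           # arbitrary unit to inspect
for ch, value in zip(text, activations[:, neuron]):
    print(f"{ch!r}: {value.item():+.3f}")            # activation at each character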

Graphcore’s AI chips now backed by Atomico, DeepMind’s Hassabis

Graphcore is building what it calls an IPU — an “intelligence processing unit” — dedicated processing hardware designed for machine learning tasks, rather than the serendipitously repurposed GPUs that have been helping to drive the AI boom thus far, or the vast clusters of CPUs needed (but not well suited) for such intensive processing. It’s also building graph-framework software for interfacing with the hardware, called Poplar, designed to mesh with different machine learning frameworks so that developers can easily tap into a system it claims will increase the performance of both machine learning training and inference by 10x to 100x versus the “fastest systems today”. Toon says Graphcore is hoping to get the IPU into the hands of “early access customers” by the end of the year. “That will be in a system form,” he adds. “Although at the heart of what we’re doing is we’re building a processor, we’re building our own chip — leading edge process, 16 nanometer — we’re actually going to deliver that as a system solution, so we’ll deliver PCI express cards and we’ll actually put that into a chassis so that you can put clusters of these IPUs all working together to make it easy for people to use.”

In making decisions, are you an ant or a grasshopper?

In one of Aesop's famous fables, we are introduced to the grasshopper and the ant, whose decisions about how to spend their time affect their lives and future. The jovial grasshopper has a blast all summer singing and playing, while the dutiful ant toils away preparing for the winter. Findings in a recent publication by UConn psychology researcher Susan Zhu and colleagues add to a growing body of evidence that, although it may seem less appealing, the ant's gratification-delaying strategy should not be viewed in a negative light. "This decision strategy can be harder or more time-consuming in the moment, but it appears to have the best outcome in the long run, even if it isn't fun," says Zhu. The ant is what the researchers would call a maximizer. A maximizer is someone who makes decisions that they expect will impact themselves and others most favorably: they seek to "maximize" the positive and make the best choices imaginable. Yet the ant may consider so many variables that the same tendency to maximize benefit can lead to difficulty in making decisions. Previous research had suggested as much, finding that maximizers were less happy overall, had higher stress levels, and sometimes regretted the decisions they made. Zhu, by contrast, suggests that maximizing has beneficial consequences. "Maximizers are forward thinking, conscientious, optimistic, and satisfied," she says. "Though a lot of work and thought go into those decisions, maximizing has beneficial outcomes."

Learning to Learn

A key aspect of intelligence is versatility – the capability of doing many different things. Current AI systems excel at mastering a single skill, such as Go, Jeopardy, or even helicopter aerobatics. But, when you instead ask an AI system to do a variety of seemingly simple problems, it will struggle. A champion Jeopardy program cannot hold a conversation, and an expert helicopter controller for aerobatics cannot navigate in new, simple situations such as locating, navigating to, and hovering over a fire to put it out. In contrast, a human can act and adapt intelligently to a wide variety of new, unseen situations. How can we enable our artificial agents to acquire such versatility? There are several techniques being developed to solve these sorts of problems and I’ll survey them in this post, as well as discuss a recent technique from our lab, called model-agnostic meta-learning.
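
Model-agnostic meta-learning itself isn't reproduced here; as a rough sketch only, the snippet below shows the characteristic inner-adapt / outer-update structure using a first-order simplification on a toy sine-regression task family. The task distribution, network size and learning rates are arbitrary choices made for the example, not values from the post.

# Rough first-order sketch of the MAML training structure on a toy task
# family (not the authors' implementation): adapt a copy of the model to
# each task, then update the shared initialization from the adapted copies.
import copy
import torch
import torch.nn as nn

def sample_task():
    # A random sine-regression task: y = A * sin(x + phase).
    A, phase = torch.rand(1) * 4.9 + 0.1, torch.rand(1) * 3.14159
    def data(n=10):
        x = torch.rand(n, 1) * 10 - 5
        return x, A * torch.sin(x + phase)
    return data

model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn, inner_lr, tasks_per_step = nn.MSELoss(), 0.01, 4

for step in range(1000):
    meta_opt.zero_grad()
    for _ in range(tasks_per_step):
        task = sample_task()
        fast = copy.deepcopy(model)                     # task-specific copy
        xs, ys = task()                                 # support set
        grads = torch.autograd.grad(loss_fn(fast(xs), ys), fast.parameters())
        with torch.no_grad():                           # one inner gradient step
            for p, g in zip(fast.parameters(), grads):
                p -= inner_lr * g
        xq, yq = task()                                 # query set, same task
        loss_fn(fast(xq), yq).backward()                # grads land on the copy
        with torch.no_grad():                           # first-order meta-gradient
            for p, fp in zip(model.parameters(), fast.parameters()):
                p.grad = fp.grad.clone() if p.grad is None else p.grad + fp.grad
    meta_opt.step()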

Adderall might improve your test scores – but so could a placebo


Each participant took a batch of cognitive tests four times. On two of these occasions they were given 10 milligrams of Adderall, while they were given a placebo the other times. With each treatment, they were once told they were getting medication, and once told they were getting a placebo. Compared with placebo, Adderall produced a slight improvement on two tests, relating to memory and attention, out of 31 tests in total. But simply believing that they were taking a medication – regardless of whether they were or not – had a stronger effect, improving performance on six tests. The students performed least well when they were told they had taken a placebo – even when they had actually taken Adderall. “Expectation seemed to have more of an effect on objective performance than the actual medication state,” says Fargason. But neither the drug nor the belief that they were taking it boosted the volunteers’ performances in more complex tests of cognitive ability. “In terms of the value for learning, it’s not clear it really would make that much of a difference,” says Fargason.

ASML enabling Moore’s law scaling and cost reduction out to 1 to 2 nanometers in mid-2020s

ASML has a presentation describing how it sees EUV enabling continued scaling, with falling costs, down to 2 nanometers.