In other words, with the completion of the Higgs boson experiments, unknown particles (such as dark matter) and unknown forces (such as dark energy) must lie in high-energy, short-range, short-lived, or weakly interacting regimes that give them no influence over day-to-day human life. A complete theory of physics at human-relevant energy and distance scales, however, rules out phenomena that would require unknown particles or unknown forces: telekinetic force, clairvoyant vision, telepathic communication, or spirit-forces poking at your brain to drive you around.
IBM has an article in Science about their TrueNorth neural net chip. The chip has 4096 processors, each of which has 256 integrate-and-fire spiking neurons (with binary output), each with 256 inputs. Each processor can compute all 256 neurons 1000 times per second. Neuron states are binary (spikes) and synaptic weights are 9-bit signed integers with a 4-bit time delay. The overall peak performance is 4096x256x256x1000 = 268 GSops (billion synaptic operations per second). The power consumption is supposed to be around 100 mW.
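As a quick sanity check on those figures (a back-of-the-envelope calculation from the numbers quoted above, nothing measured here):

```python
# Peak throughput implied by the TrueNorth spec quoted above;
# all inputs are the article's numbers, nothing is measured here.
processors = 4096
neurons_per_processor = 256
synapses_per_neuron = 256
updates_per_second = 1000

peak_sops = (processors * neurons_per_processor
             * synapses_per_neuron * updates_per_second)
print(f"{peak_sops / 1e9:.0f} GSops peak")  # 268 GSops
```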
The NYT and Wired have articles about the announcement in which I express my skepticism of the approach. I feel that I need to clarify my position.
First, I'm all in favor of building special-purpose chips to run neural nets. In fact, I have worked on this myself. My NYU lab started the NeuFlow project, which implements convolutional neural nets in hardware (see http://www.neuflow.org/ ). NeuFlow was initially implemented at NYU on a Xilinx Virtex-6 FPGA (http://yann.lecun.com/exdb/publis/index.html#farabet-suml-11 ), and later turned into an ASIC design by Eugenio Culurciello and his team at Purdue (http://yann.lecun.com/exdb/publis/index.html#pham-mwscas-12 ).
The NeuFlow design implements convolutional networks, which we know can produce state-of-the-art performance on a number of vision tasks such as object recognition, semantic segmentation, and obstacle detection. So we know that the hardware could be practically useful and was worth the effort and the cost (mostly paid for by ONR).
Now, what's wrong with TrueNorth? My main criticism is that TrueNorth implements networks of integrate-and-fire spiking neurons. This type of neural net has never been shown to yield accuracy anywhere close to state of the art on any task of interest (like, say, recognizing objects from the ImageNet dataset). Spiking neurons have binary outputs (like neurons in the brain). The advantage of spiking neurons is that you don't need multipliers (since the neuron states are binary). But to get good results on a task like ImageNet you need about 8 bits of precision on the neuron states. Getting that kind of precision with spiking neurons requires waiting multiple cycles so the spikes "average out", which slows down the overall computation.
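To illustrate why that averaging is slow, here is a toy sketch of rate coding (my own illustration, not the actual TrueNorth neuron model): a deterministic integrate-and-fire unit encodes a value x in [0, 1] as a spike rate, and reading x back to roughly 8-bit precision means counting spikes over about 256 timesteps, since the readout error shrinks like 1/steps.

```python
def spike_count(x, steps):
    """Toy deterministic integrate-and-fire unit: emit a binary
    spike each time the membrane potential crosses threshold 1.0."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += x          # constant input drive x in [0, 1]
        if v >= 1.0:    # threshold crossed: spike and reset
            spikes += 1
            v -= 1.0
    return spikes

# Recover x from the spike rate; the error shrinks like 1/steps,
# so ~8-bit precision (error < 1/256) needs ~256 timesteps.
x = 0.7
for steps in (8, 64, 256):
    estimate = spike_count(x, steps) / steps
    print(steps, round(abs(estimate - x), 4))
```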
Getting excellent results on ImageNet is easily achieved with a convolutional net using something like 8 to 16 bits of precision for the neuron states and weights. This requires hardware multipliers. But the structure of convolutional nets makes it easy to organize the computation so that memory traffic is minimized.
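For contrast, here is a minimal sketch of the multiplier-based alternative (illustrative only, not the NeuFlow datapath): an 8-bit fixed-point dot product, the inner loop of a quantized convolution, with integer products accumulated and a single rescale at the end. The scale choice is an assumption for inputs in [-1, 1].

```python
import random

def quantize(x, scale):
    """Round a real value to a signed 8-bit integer at a fixed scale."""
    q = round(x / scale)
    return max(-128, min(127, q))

def int8_dot(qa, qb, scale_a, scale_b):
    # Integer multiply-accumulate (what the hardware multipliers do),
    # then one rescale back to real units at the end.
    acc = sum(a * b for a, b in zip(qa, qb))
    return acc * scale_a * scale_b

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(64)]
w = [random.uniform(-1, 1) for _ in range(64)]
scale = 1 / 127  # assumed scale for values in [-1, 1]

exact = sum(a * b for a, b in zip(x, w))
approx = int8_dot([quantize(v, scale) for v in x],
                  [quantize(v, scale) for v in w], scale, scale)
print(exact, approx)  # the 8-bit result tracks the float result closely
```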
An ASIC implementing the NeuFlow architecture with the same fabrication technology as TrueNorth would be capable of about 3000 GSops (billion synaptic operations per second) at about 1 W. That's 3.3e-13 Joules/Sop, similar to TrueNorth's 3.7e-13 J/Sop (268 GSops in 0.1 W).
These estimates are somewhat speculative, but fairly conservative. I'm assuming 16 bit states and weights for NeuFlow, which is considerably more accurate than TrueNorth's 1-bit states and 9-bit weights. Further improvements can be obtained with dynamic precision (variable bit depth). We know we can get away with 8 bits in the early layers without much loss of accuracy.
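Put side by side (both figures as stated above; the NeuFlow number is my speculative estimate, not a measurement), the energy per synaptic operation works out to:

```python
# Energy per synaptic operation, from the figures quoted in the text.
neuflow_sops, neuflow_watts = 3000e9, 1.0      # speculative ASIC estimate
truenorth_sops, truenorth_watts = 268.4e9, 0.1 # peak spec at ~100 mW

print(f"NeuFlow:   {neuflow_watts / neuflow_sops:.1e} J/Sop")    # 3.3e-13
print(f"TrueNorth: {truenorth_watts / truenorth_sops:.1e} J/Sop") # 3.7e-13
```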
Science (paywall): http://www.sciencemag.org/content/345/6197/668
For a comparison of NeuFlow ConvNets with an AER (spiking neuron) ConvNet implementation, see: http://yann.lecun.com/exdb/publis/index.html#farabet-frontiersin-12
- Site Reliability Engineer, 2013 - present: Site Reliability Engineering for the Ads Quality systems.
- Naspers, Senior Software Engineer at MIH Internet Africa, 2009 - 2013: Developer for the Mocality.com directory, working on the back-end infrastructure with a focus on performance, reliability and monitoring.
- Naspers, Search Engine Systems Engineer at 24.com, 2008: Systems Engineer on the SearchSA South African search engine.
- University of Cape Town, Master of Science: Bioinformatics, 2006 - 2008
- University of Cape Town, Bachelor of Science Honours: Applied Mathematics, 2005
- University of Cape Town, Bachelor of Science: Computer Science, Physics, Applied Mathematics, 2001 - 2004
The Kiss - Gustav Klimt - Google Cultural Institute
"The Kiss", probably the most popular work by Gustav Klimt, was first exhibited in 1908 at the Kunstschau art exhibition.
Love is . . . 'almost like a psychological disorder'
Love might be good for the heart, and even your mood, but it is in the brain where the drama really plays out.
NASA | GPM: Engineering Next Generation Observations of Rain and Snow
For more information: http://www.nasa.gov/content/goddard/ready-set-space-nasas-gpm-satellite-begins-journey
For South Africans Abroad: A Guide to Voting in the 2014 Elections
A guide for South African expats on everything you need to know in order to vote abroad in the 2014 national elections.
Scientists Discover a Jewel at the Heart of Quantum Physics - Wired Science
"This is completely new and very much simpler than anything that has been done before," said Andrew Hodges, a mathematical physicist.
Headspace | Guided Meditation & Mindfulness App | Headspace
Try the free guided meditation app and learn how to unwind using meditation and mindfulness techniques with Andy Puddicombe.