Post has attachment
The call for applications to the 2017 Telluride Neuromorphic Cognition Engineering Workshop is open. This year's workshop is centered around the theme of Neuromorphic Autonomous Agents. Please go to https://academicjobsonline.org/ajo/jobs/8908 for details on how to apply. The application deadline is April 2, 2017.

Post has attachment
Feb. 15 Application Deadline - 2017 Nengo Summer School

[All details about this school can be found online at http://www.nengo.ca/summerschool]

The Centre for Theoretical Neuroscience at the University of Waterloo is inviting applications for our 4th annual summer school on large-scale brain modeling. This two-week school will teach participants how to use the Nengo software package to build state-of-the-art cognitive and neural models to run in simulation and on neuromorphic hardware. Nengo has been used to build what is currently the world's largest functional brain model, Spaun [1], and provides users with a versatile and powerful environment for designing cognitive and neural systems to run in simulated and real environments. For a look at last year's summer school, check out this short video: https://goo.gl/EkhWCJ

We welcome applications from all interested graduate students, research associates, postdocs, professors, and industry professionals. No specific training in the use of modeling software is required, but we encourage applications from active researchers with a relevant background in psychology, neuroscience, cognitive science, robotics, neuromorphic engineering, computer science, or a related field.

[1] Eliasmith, C., Stewart T. C., Choo X., Bekolay T., DeWolf T., Tang Y., Rasmussen, D. (2012). A large-scale model of the functioning brain. Science. Vol. 338 no. 6111 pp. 1202-1205. DOI: 10.1126/science.1225266. [http://nengo.ca/publications/spaunsciencepaper]

*Application Deadline: February 15, 2017*

Format: A combination of tutorials and project-based work. Participants are encouraged to bring their own ideas for projects, which may focus on testing hypotheses, modeling neural or cognitive data, implementing specific behavioural functions with neurons, expanding past models, or providing a proof-of-concept of various neural mechanisms. Hands-on tutorials, work on individual or group projects, and talks from invited faculty members will make up the bulk of day-to-day activities. A project demonstration event will be held on the last day of the school, with prizes for strong projects!

Topics Covered: Participants will have the opportunity to learn how to: build perceptual, motor, and sophisticated cognitive models using spiking neurons; model anatomical, electrophysiological, cognitive, and behavioural data; use a variety of single-cell models within a large-scale model; integrate machine learning methods into biologically oriented models; interface Nengo with various kinds of neuromorphic hardware (e.g. SpiNNaker); interface Nengo with cameras and robotic systems; implement modern nonlinear control methods in neural models; and much more…
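As a flavor of what "building models with spiking neurons" involves, here is a minimal leaky integrate-and-fire (LIF) neuron in plain NumPy. This is an illustrative sketch only, not Nengo's actual API, and all parameter values are arbitrary:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    input_current: array of input values, one per time step of size dt.
    Returns the indices of the time steps at which the neuron spiked.
    """
    v = 0.0
    spikes = []
    for i, current in enumerate(input_current):
        v += (dt / tau) * (-v + current)  # leaky integration toward the input
        if v >= v_th:                     # threshold crossing produces a spike
            spikes.append(i)
            v = v_reset                   # membrane potential resets after a spike
    return spikes

# A constant suprathreshold input produces regular spiking.
spikes = simulate_lif(np.full(1000, 1.5))
```

Tools like Nengo wrap populations of such neurons, handle the encoding/decoding of represented values, and compile the result to simulators or neuromorphic hardware.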

Date and Location: June 4th to June 16th, 2017 at the University of Waterloo, Ontario, Canada.

Applications: Please visit http://www.nengo.ca/summerschool, where you can find more information regarding costs, travel, and lodging, along with an application form listing the required materials.

If you have any questions about the school or the application process, please contact Peter Blouw (pblouw@uwaterloo.ca). We look forward to hearing from you!

Post has attachment
Dear all

** Apologies for cross-posting

I would like to draw your attention to a new special issue of JETCAS on "Low-Power, Adaptive Neuromorphic Systems: Devices, Circuits, Architectures and Algorithms" that I am co-editing with colleagues. The manuscript submission deadline is April 30, 2017. I look forward to receiving your contributions.

IEEE JOURNAL ON
EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS

CALL for PAPERS
Low-Power, Adaptive Neuromorphic Systems: Devices, Circuits, Architectures and Algorithms
Guest Editors:
• Arindam Basu* (arindam.basu@ntu.edu.sg), Nanyang Technological University
• Tanay Karnik (tanay.karnik@intel.com), Intel
• Hai Li (hai.li@duke.edu), Duke University
• Elisabetta Chicca (chicca@cit-ec.uni-bielefeld.de), Bielefeld University
• Meng-Fan Chang (mfchang@mx.nthu.edu.tw), National Tsing Hua University
• Jae-sun Seo (jaesun.seo@asu.edu), Arizona State University
(* Corresponding editor)

Scope and Purpose
The recent success of deep neural networks (DNNs) has renewed interest in bio-inspired machine learning algorithms. A DNN is a neural network with multiple layers (typically two or more) in which the neurons are interconnected by tunable weights. Though these architectures are not new, the availability of large datasets, vast computing power, and new training techniques that prevent the networks from over-fitting (such as unsupervised initialization, rectified linear units as the neuronal nonlinearity, and regularization using dropout or sparsity) have led to their great success in recent times. DNNs have been applied to a variety of fields, such as object and face recognition in images, word recognition in speech, and even natural language processing, and their success stories keep growing every day.
However, the common training methods in deep learning, such as back-propagation, tune the weights of a neural network based on the gradient of an error function, which requires a known output value for every input. Such supervised learning methods are difficult to apply to real-time sensory input data, which are mostly unlabeled. In addition, the training and classification phases of deep neural networks are typically separated: training occurs in the cloud or on high-end graphics processing units, while the weights or synapses are fixed during deployment for classification. This makes it difficult for the network to continuously adapt to input or environment changes in real-world applications.
By adopting the unsupervised and semi-supervised learning rules found in biological nervous systems, we anticipate enabling adaptive neuromorphic systems for many real-time applications with large amounts of unlabeled data, similar to how humans analyze and associate sensory input. Energy-efficient hardware implementation of these adaptive neuromorphic systems is particularly challenging because of the intensive computation, memory, and communication required for online, real-time learning and classification. Cross-layer innovations in algorithms, architectures, circuits, and devices are needed to enable adaptive intelligence, especially on embedded systems with severe power and area constraints.
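The dependence on labels can be made concrete with a toy gradient-descent example (all data and values here are invented for illustration): the weight update is driven by the gradient of an error measured against a known target, so it cannot even be computed for unlabeled inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))      # 100 input samples, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                     # labels: exactly what unlabeled data lacks

w = np.zeros(3)                    # weights to be learned
lr = 0.1                           # learning rate
for _ in range(200):
    err = X @ w - y                # error requires the known target y
    grad = X.T @ err / len(X)      # gradient of the mean squared error
    w -= lr * grad                 # gradient-descent weight update
```

With the labels available, `w` converges to `true_w`; remove `y` and the update rule has nothing to descend on, which is the gap the unsupervised and semi-supervised rules discussed above aim to fill.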
Topics of Interest
This special issue invites submissions on all aspects of adaptive neuromorphic systems across algorithms, devices, circuits, and architectures. Possible scalability to human-brain-scale computing with energy-efficient online learning is desired. Submissions are welcome on the following or other related topics:
• Spin-mode adaptive neuromorphics with devices such as spin-transfer nano-oscillators, domain-wall memory, tunneling magnetoresistance, the inverse spin Hall effect, etc.
• Memristive-technology-based learning synapses and neurons
• Neuromorphic implementations of synaptic plasticity, short-term adaptation and homeostatic mechanisms
• Self-learning synapses (STDP and variants) and self-adaptive neuromorphic systems
• High fan-in scalable interconnect fabric technologies mimicking brain-scale networks
• Circuits and systems for efficient interfacing with post-CMOS memory based learning synapses
• Design methodology and design tools for adaptive neuromorphic systems with post-CMOS devices
• Algorithm, device, circuit, and architecture co-design for energy-efficient adaptive neuromorphic hardware
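For readers unfamiliar with the STDP rule mentioned in the topics above, a common pair-based form can be sketched as follows; the amplitudes and time constant are illustrative, not taken from any particular paper:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Weight change for a pre/post spike-time difference dt = t_post - t_pre.

    Causal pairings (pre fires before post, dt > 0) potentiate the synapse;
    anti-causal pairings depress it, with exponential decay in |dt|.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # potentiation (LTP)
    return -a_minus * np.exp(dt / tau)       # depression (LTD)

dw_causal = stdp_dw(0.005)       # pre 5 ms before post: positive change
dw_anticausal = stdp_dw(-0.005)  # post 5 ms before pre: negative change
```

Because the update depends only on locally observable spike times, not on a labeled error signal, rules of this family are a natural fit for the on-chip, unsupervised adaptation this special issue targets.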
Important Dates
1. Manuscript submissions due: April 30, 2017
2. First decision: July 15, 2017
3. Revised manuscripts due: August 15, 2017
4. Final Decision: October 15, 2017
5. Final manuscripts due: November 15, 2017
Request for Information
Arindam Basu (arindam.basu@ntu.edu.sg)
https://mc.manuscriptcentral.com/jetcas



Best regards,
Arindam

http://www3.ntu.edu.sg/home/arindam.basu/


Post has attachment
A spike-based neuromorphic stereo-vision architecture that shows how spike timing helps resolve open problems (e.g. false negatives in stereo correspondence): http://www.nature.com/articles/srep40703

I’ve accumulated some interesting(?) links over the last few years while following Deep Learning's development. FYI: the lists aren’t prioritized in any manner, nor are they complete; if you know of additional ones that should be noted, please let me know. I also apologize in advance if I've associated anyone with the wrong university or company; some of these change faster than I can keep up with, and I've slowed down in tracking them over the last few months.

Researcher websites:
http://www.iro.umontreal.ca/~bengioy/yoshua_en/index.html Yoshua Bengio - Deep Learning and AI researcher at University of Montreal
http://yann.lecun.com/exdb/publis/index.html#farabet-frontiersin-12 Yann LeCun - Deep Learning (especially CNNs) researcher at NYU; leads Facebook’s AI Research
https://www.facebook.com/yann.lecun/posts/10152184295832143 Yann LeCun’s thoughts on IBM’s TrueNorth chip
http://people.idsia.ch/~juergen/ Juergen Schmidhuber - Deep Learning (RNNs LSTMs) and AI researcher at IDSIA in Switzerland
https://engineering.purdue.edu/elab/ Eugenio Culurciello - Deep Learning hardware developer with a focus on image processing
http://www.eecs.berkeley.edu/Faculty/Homepages/jordan.html Michael Jordan - AI researcher at UC Berkeley
http://www.cs.toronto.edu/~hinton/ Geoffrey Hinton – Deep Learning and AI pioneer at University of Toronto and Google
http://www.demishassabis.com/ Demis Hassabis – AI Researcher at Google’s Deep Mind
http://www.izhikevich.org/ Eugene Izhikevich – Researcher at Brain Corp, part of Qualcomm now
http://cs.stanford.edu/people/ang/ Andrew Ng – Stanford researcher, now heading Baidu’s research
http://redwood.berkeley.edu/bruno/ Bruno Olshausen – Neuroscience researcher at UC Berkeley
http://www.cs.toronto.edu/~ilya/ Ilya Sutskever – Deep Learning and AI researcher at OpenAI
http://techlab.bu.edu/members/gail/ Gail Carpenter – neural modeling researcher
http://www.cns.bu.edu/Profiles/Grossberg/ Stephen Grossberg – neural modeling researcher


Some Companies:
http://www.enlitic.com/ Focusing on medical applications
http://www.kaggle.com/home Data Science Competitions
http://www.nervanasys.com/ Part of Intel now
http://deepmind.com/ Demis Hassabis’ company (part of Google)
http://www.hrl.com/laboratories/cnes/cnes_main.html Hughes Research Labs; participated in the DARPA SyNAPSE program
http://www.research.ibm.com/cognitive-computing/neurosynaptic-chips.shtml#fbid=cbfHKFQN-N_ IBM TrueNorth site
https://www.qualcomm.com/news/onq/2013/10/10/introducing-qualcomm-zeroth-processors-brain-inspired-computing Qualcomm Zeroth Processor site
http://numenta.org/ Numenta focuses on Hierarchical Temporal Memory (HTM) architectures (Generative model, Anomaly detection)
http://numenta.com/
http://www.teradeep.com/ Eugenio Culurciello’s company
http://www.braincorporation.com/ Eugene Izhikevich’s company (part of Qualcomm?)
https://www.facebook.com/Engineering/ Facebook’s Engineering page
http://www.dmtk.io/ Microsoft’s Distributed Machine Learning Toolkit page
https://nnaisense.com/ Juergen Schmidhuber’s company
https://openai.com/about/ OpenAI website
https://www.microsoft.com/en-us/research/ Microsoft Research page
https://www.microsoft.com/en-us/research/product/cognitive-toolkit/ Microsoft’s Cognitive Toolkit page
https://research.google.com/index.html Google Research page
https://www.tensorflow.org/ Google TensorFlow site
https://aws.amazon.com/machine-learning/ Amazon Cloud Machine Learning site
https://cloud.google.com/datalab/ Google Cloud Data Science site
https://cloud.google.com/ml/ Google Cloud Machine Learning site


Some Research Centers:
http://speech.fit.vutbr.cz/ University of Brno speech research group
http://www.cs.nyu.edu/~yann/ NYU’s Computational & Biological Learning Lab where Yann Lecun works
http://apt.cs.manchester.ac.uk/projects/SpiNNaker/ SpiNNaker project website – a hardware platform for modeling the human brain
http://www.idsia.ch/ Swiss AI Lab IDSIA – where Schmidhuber works
http://cs.stanford.edu/research Stanford’s Research website


Software Websites and Repositories:
http://caffe.berkeleyvision.org/ Caffe website
https://github.com/baidu-research/warp-ctc Baidu’s speech recognition repository
https://github.com/kjw0612/awesome-deep-vision Deep Learning resources for computer vision
https://github.com/Microsoft/CNTK Microsoft Computational Network Toolkit (CNTK) for deep learning and machine learning; runs on Windows
https://github.com/fchollet/keras Keras Github repository, Deep Learning for Python
https://github.com/vlfeat/matconvnet matconvnet Github repository, CNNs for MatLab
https://github.com/zhongkaifu/RNNSharp Github site for RNNSharp toolkit
http://sourceforge.net/p/rnnl/wiki/Home/ RNNLIB repository by Alex Graves (Google Deepmind) for RNNs
http://sourceforge.net/projects/currennt/ CURRENNT repository for RNNs
http://deeplearning.net/software/theano/ Theano website
http://deeplearning.net/software/pylearn2/ PyLearn 2 machine learning library built on Theano
http://pybrain.org/pages/home Machine Learning Library for Python
http://kaldi-asr.org/ Kaldi – open-source ASR project from the JHU HLT Center of Excellence
https://github.com/kaldi-asr/kaldi Kaldi Github repository


Tutorials:
http://neuralnetworksanddeeplearning.com/index.html Michael Nielsen’s online book on neural nets; a good introduction to the concepts
http://deeplearning.stanford.edu/tutorial/ Stanford’s UFLDL deep learning tutorial
http://deeplearning.net/tutorial/ deeplearning.net tutorials (Theano-based)
http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/ RNN tutorial
http://www.deeplearningweekly.com/pages/open_source_deep_learning_curriculum Open Source Deep Learning Curriculum
https://www.coursera.org/learn/machine-learning Stanford Machine Learning class
https://www.coursera.org/learn/practical-machine-learning JHU Machine Learning class


Other Worthwhile sites:
http://en.wikipedia.org/wiki/SyNAPSE about the SyNAPSE project
http://brainarchitecture.org/ Brain Architecture Project
http://www.ine-web.org/ Institute of Neuromorphic Engineering website
https://www.humanbrainproject.eu/ Human Brain Project
http://www.technologyreview.com/featuredstory/522476/thinking-in-silicon/ MIT Technology Review article on DARPA SyNAPSE
https://amplab.cs.berkeley.edu/2014/10/22/big-data-hype-the-media-and-other-provocative-words-to-put-in-a-title/ Critique of Deep Learning Hype
http://www.ntu.edu.sg/home/egbhuang/ Extreme Learning Machines website at NTU, Singapore
http://deeplearning.net/ Deep Learning Clearing house website with links to lots of good stuff
http://deeplearning.net/software_links/ Deep Learning Software Links
https://developer.nvidia.com/cuda-zone NVidia CUDA site
https://developer.nvidia.com/devbox NVidia Deep Learning Hardware and Software site
http://launch.ceva-dsp.com/cdnn/ CEVA specialized hardware for DNN processing in mobile devices
http://lucida.ai/ Open Source Personal Assistant website from U of Michigan
http://allenai.org/ Allen Institute for AI
http://www.deeplearningbook.org/ Online Deep Learning textbook by Goodfellow, Bengio, and Courville



CALL FOR SUBMISSIONS:
(FIRST REMINDER)

The Second Misha Mahowald Prize for Neuromorphic Engineering.

The Misha Mahowald Prize recognizes outstanding research in neuromorphic engineering in a broad sense: not only neurally inspired hardware, but also neuromorphic software, algorithms, and architectures can compete for the award.

The Prize is awarded by a jury of international experts and carries a cash prize of USD 3000.

The inaugural prize was awarded in 2016 to IBM Research - Almaden for their ground-breaking project on the neuromorphic processor TrueNorth.

The competition is open to any individual or research group worldwide. A description of any type of neurally-inspired hardware, software, or algorithm may be submitted. The award is for an original, ground-breaking contribution to neuromorphic engineering. The work of individuals and groups will be considered equally. Only one winner is announced each year. There are no runners-up. Revised resubmissions are encouraged.

To apply:

Send an extended abstract in English of up to two DIN A4 pages, containing:

• Applicant(s) and affiliation(s)
• Contact person information
• Project title
• Brief description of the work, its novelty, and its potential impact, including images/tables/original paper links
• Link to a video, if applicable (authors must arrange for unrestricted online viewing of video)

Send the document as a PDF file (max. size 2 MB) to

prize@mahowaldprize.org

If a video is included in the submission, a download link to the original source file should be included.

The submission deadline is February 1, 2017.

2017 Jury:

• Prof. Dr. Steve Furber, University of Manchester
• Dr. Dan Hammerstrom, DARPA
• Prof. Dr. Christof Koch, CSO, Allen Institute for Brain Science
• Dr. Dharmendra Modha, IBM Research - Almaden
• Dr. Eric Ryu, Master, Samsung Electronics
• Prof. Dr. Terrence Sejnowski, Salk Institute (head of the Jury)

The Prize is sponsored by iniLabs, a technology company based in Switzerland that invents, produces, and sells neuromorphic technologies for research. iniLabs plays no role in selecting the winner.

The award is named for Misha Mahowald, a creative and influential pioneer who passed away before she could see the field flourish. She created some of the first neuromorphic circuits, including the silicon retina and the silicon neuron.



Post has attachment
Phys.org: How the brain recognizes faces: Machine-learning system spontaneously reproduces aspects of human neurology. http://google.com/newsstand/s/CBIwhf7i3TA

Post has attachment
Dear colleagues,

I am very pleased to let you know that our open-access article for the Wiley Encyclopedia of EEE is now available at:

http://onlinelibrary.wiley.com/doi/10.1002/047134608X.W8328/full

Thanks to everybody for getting us to this point, and special thanks to those who pushed to have it open access.
Best wishes to everybody,

Bernabe http://www.imse-cnm.csic.es/~bernabe

Post has attachment
New PhD open positions for the NeuroAgents project are available at INI:
https://www.ini.uzh.ch/positions/jobs
