Rupesh Kumar Srivastava
641 followers
Rupesh Kumar's posts

Post has attachment
13 delicious videos & slides from our NIPS 2016 symposium on "Recurrent Neural Networks and Other Machines that Learn Algorithms" are now available!
http://people.idsia.ch/~rupesh/rnnsymposium2016/program.html

Post has attachment
Recurrent Highway Networks CRUSH the benchmarks in our new update on arXiv (https://arxiv.org/abs/1607.03474), with perplexity on Penn Treebank improving to 66 and entropy on raw Wikipedia improving to 1.32 BPC. That's just the headline.

We have also expanded the related work and overhauled the entire Experiments section, easily training networks with depths of 1 to 10(!) in the recurrent transition (while keeping model sizes fixed), which was not possible before.
CC: Razvan Pascanu, Kyunghyun Cho

But wait, there's more! We are also releasing source code in TensorFlow and Torch7 to reproduce the results and to help you use these networks on your own problems: https://github.com/julian121266/RecurrentHighwayNetworks/
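For intuition about what "depth in the recurrent transition" means, here is a minimal NumPy sketch of one RHN time step (the coupled-gate variant, c = 1 - t, described in the paper). Weight names and shapes here are illustrative assumptions; the released TensorFlow/Torch7 code is the reference implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rhn_step(x, s_prev, params):
    """One time step of a Recurrent Highway Network (coupled gates,
    c = 1 - t). `params` holds one tuple of weights per highway layer;
    the input x is injected only at the first layer."""
    s = s_prev
    for l, (Wh, Wt, Rh, Rt, bh, bt) in enumerate(params):
        xh = x @ Wh if l == 0 else 0.0
        xt = x @ Wt if l == 0 else 0.0
        h = np.tanh(xh + s @ Rh + bh)   # candidate update
        t = sigmoid(xt + s @ Rt + bt)   # transform gate
        s = h * t + s * (1.0 - t)       # highway: carry gate c = 1 - t
    return s                            # new hidden state

# Example: recurrence depth 5, input dim 3, hidden dim 4.
rng = np.random.default_rng(0)
depth, din, dh = 5, 3, 4
params = [(rng.normal(size=(din, dh)), rng.normal(size=(din, dh)),
           rng.normal(size=(dh, dh)), rng.normal(size=(dh, dh)),
           np.zeros(dh), np.zeros(dh)) for _ in range(depth)]
s = rhn_step(rng.normal(size=din), np.zeros(dh), params)
```

Note how the input enters only at the first highway layer while all layers run within a single time step; that stack is the recurrence depth that the paper scales from 1 to 10.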

Post has shared content
We have a fantastic lineup of speakers at the upcoming RNN symposium! Submission of poster abstracts is now open. Deadline: 15 October.

NIPS 2016 Symposium: Recurrent Neural Networks and Other Machines that Learn Algorithms (Thursday, December 8, 2016, Barcelona) - Call for Posters

Soon after the birth of modern computer science in the 1930s, two fundamental questions arose: 1. How can computers learn useful programs from experience, as opposed to being programmed by human programmers? 2. How can parallel multiprocessor machines be programmed, as opposed to traditional serial architectures? Both questions found natural answers in the field of Recurrent Neural Networks (RNNs), which are brain-inspired general-purpose computers that can learn parallel-sequential programs or algorithms encoded as weight matrices.

The first RNNaissance NIPS workshop dates back to 2003: http://people.idsia.ch/~juergen/rnnaissance.html . Since then, a lot has happened. Some of the most successful applications in machine learning (including deep learning) are now driven by RNNs such as Long Short-Term Memory (LSTM): speech recognition, video recognition, natural language processing, image captioning, time series prediction, etc. Thanks to the world's most valuable public companies, billions of people can now access this technology through their smartphones and other devices, e.g., in the form of Google Voice or on Apple's iOS. Reinforcement-learning and evolutionary RNNs are solving complex control tasks from raw video input. Many RNN-based methods learn sequential attention strategies.

At this symposium, we will review the latest developments in all of these fields, and focus not only on RNNs, but also on learning machines in which RNNs interact with external memory, such as neural Turing machines, memory networks, and related memory architectures such as fast weight networks and neural stack machines. In this context we will also discuss asymptotically optimal program search methods and their practical relevance.

Our target audience has heard a bit about RNNs, the deepest of all neural networks, but will be happy to hear a summary of the basics again and then delve into the latest advanced topics to see and understand what has recently become possible. All invited talks will be followed by open discussions, with further discussions during a poster session. Finally, we will also have a panel discussion on the bright future of RNNs and their pros and cons.

A tentative list of speakers can be found at the symposium website: http://people.idsia.ch/~rupesh/rnnsymposium2016/index.html



Call for Posters

We invite researchers and practitioners to submit poster abstracts for presentation during the symposium (minimum 2 pages, no upper page limit). All contributions related to the symposium theme are encouraged. The organizing committee will select posters to maximize quality and diversity within the available display space.

For submissions, non-anonymous abstracts should be emailed to rnn.nips2016@gmail.com by the corresponding authors. Selected abstracts will be advertised on the symposium website, and posters will be visible throughout the symposium. NIPS attendees will interact with poster presenters during the light dinner break (6:30-7:30 PM). The submission deadline is October 15, 23:59 CET.



Jürgen Schmidhuber & Sepp Hochreiter & Alex Graves & Rupesh Srivastava


#artificialintelligence
#deeplearning
#machinelearning
#computervision



Post has shared content
PostDoc Jobs 2016: Join the Deep Learning team (since 1991) that has won more competitions than any other. We are seeking postdocs for the project RNNAIssance, based on this tech report on "learning to think": http://arxiv.org/abs/1511.09249 . The project is about general-purpose artificial intelligence for agents living in partially observable environments, controlled by reinforcement-learning recurrent neural networks (RNNs) and supported by unsupervised predictive RNN world models. Location: The Swiss AI Lab, IDSIA, in Switzerland, the world's leading science nation and the most competitive country for the 7th year in a row. Competitive Swiss salary. Preferred start: As soon as possible. INTERVIEWS: Berlin (April 29 - May 1), NYC (May 2-7), Beijing (May 12-13), London (May 17-20), or in Switzerland, or by video. More details and instructions can be found here: http://people.idsia.ch/~juergen/rnnai2016.html

#artificialintelligence
#deeplearning
#machinelearning
#computervision

Post has shared content
The groups of Juergen Schmidhuber, Luca Gambardella, and myself have joined forces to present the first work using Deep Neural Networks to enable an autonomous vision-controlled drone to recognize and follow forest trails. The video is narrated, so turn on your loudspeakers and enjoy it! More info on the DNN training and testing in the FAQs below. By the way, the paper is currently nominated for the AAAI Best Video Award; the video with the most likes on YouTube wins, so if you like it, please give us a thumbs-up on YouTube!

Paper:
A. Giusti et al., A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots, IEEE Robotics and Automation Letters, 2016.
PDF: http://rpg.ifi.uzh.ch/docs/RAL16_Giusti.pdf
Project webpage and datasets: http://www.leet.it/home/giusti/website/doku.php?id=wiki:forest

FAQs:

What is the paper about?
We present the first work using a Deep Neural Network (DNN) image classifier running onboard our vision-controlled drone to recognize and autonomously follow forest trails. Unlike previous works, which relied on image saliency or low-level features, our DNN-based image classifier operates directly on pixel-level image intensities and outputs the direction of the trail with respect to the heading direction of the drone. If a trail is visible, the software steers the drone in the corresponding direction.
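To make the setup concrete, here is a minimal PyTorch sketch of such a three-way direction classifier (trail to the left of, straight ahead of, or to the right of the drone's heading). The layer sizes, the 101x101 input resolution, and the class ordering are illustrative assumptions, not the paper's exact architecture; see the project webpage for the real model and data:

```python
import torch
import torch.nn as nn

class TrailDirectionNet(nn.Module):
    """Hypothetical sketch: one camera frame in, three direction logits out."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 101x101 input shrinks to 9x9 feature maps after the three blocks.
        self.classifier = nn.Linear(32 * 9 * 9, num_classes)

    def forward(self, x):  # x: (batch, 3, 101, 101) RGB frames
        return self.classifier(self.features(x).flatten(1))

# Steering rule: the predicted class indexes a yaw command.
# 0 = turn left, 1 = go straight, 2 = turn right.
net = TrailDirectionNet()
frame = torch.rand(1, 3, 101, 101)
direction = net(frame).argmax(dim=1).item()
```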

How did we train the classifier?
In order to gather enough data to train our DNN classifier, we hiked for several hours along different trails in the Swiss Alps and took more than 20,000 images of trails using cameras attached to a helmet (Fig. 4 in the paper). This effort paid off: when tested on a new, previously unseen trail, the DNN found the correct direction in 85% of cases; in comparison, humans faced with the same task guessed correctly 82% of the time.

Real time and onboard?
Yes. The classifier ran in real time onboard a smartphone-class processor (an Odroid quad-core computer) on our custom-made vision-controlled quadrotor. Both visual odometry (based on SVO) and control also ran onboard.

Why do we want drones to follow forest trails?
To save lives. Every year, hundreds of thousands of people get lost in the wild worldwide. In Switzerland alone, around 1,000 emergency calls per year come from hikers, most of whom are injured or have lost their way. Drones are an efficient complement to human rescuers: they can be deployed in large numbers, are inexpensive and prompt, and thus minimize both the response time and the risk of injury for those who are lost and for those who work in rescue teams.

Is the training and testing data available for research?
Yes, from the project webpage.

More on Deep Learning: http://www.scholarpedia.org/article/Deep_Learning

#computervision
#deeplearning
#machinelearning
#artificialintelligence
#robotics
#drones

https://youtu.be/umRdt3zGgpU

Post has shared content
How to Learn an Algorithm (video). I review 3 decades of our research on both gradient-based and more general problem solvers that search the space of algorithms running on general-purpose computers with internal memory. Architectures include traditional computers, Turing machines, recurrent neural networks, fast weight networks, stack machines, and others. Some of our algorithm searchers are based on algorithmic information theory and are optimal in asymptotic or other senses. Most can learn to direct internal and external spotlights of attention. Some of them are self-referential and can even learn the learning algorithm itself (recursive self-improvement). Without a teacher, some of them can reinforcement-learn to solve very deep algorithmic problems (involving billions of steps) infeasible for more recent memory-based deep learners. And algorithms learned by our Long Short-Term Memory recurrent networks defined the state of the art in handwriting recognition, speech recognition, natural language processing, machine translation, image caption generation, etc. Google and other companies made them available to over a billion users.

The video was taped on Oct 7, 2015, during MICCAI 2015 at the Deep Learning Meetup Munich: http://www.meetup.com/en/deeplearning/events/225423302/ Link to video: https://www.youtube.com/watch?v=mF5-tr7qAF4

Similar talk at the Deep Learning London Meetup of Nov 4 2015: http://www.meetup.com/Deep-Learning-London/events/225841989/ (video not quite ready yet)

Most of the slides for these talks are here: http://people.idsia.ch/~juergen/deep2015white.pdf

These also include slides for the AGI keynote in Berlin (http://agi-conf.org/2015/keynotes/), the IEEE distinguished lecture in Seattle (Microsoft Research, Amazon), the INNS BigData plenary talk in San Francisco, the keynote for the Swiss eHealth summit, two MICCAI 2015 workshops, and a recent talk at CERN (some of the above were videotaped as well).

Parts of these talks (and some of the slides) are also relevant for upcoming talks in the NYC area (Dec 4-6 and 13-16) and at NIPS workshops in Montreal:

1. Reasoning, Attention, Memory (RAM) Workshop, NIPS 2015 https://research.facebook.com/pages/764602597000662/reasoning-attention-memory-ram-nips-workshop-2015/

2. Deep Reinforcement Learning Workshop, NIPS 2015 http://rll.berkeley.edu/deeprlworkshop/

3. Applying (machine) Learning to Experimental Physics (ALEPH) Workshop, NIPS 2015 http://yandexdataschool.github.io/aleph2015/pages/keynote-speakers.html

More videos: http://people.idsia.ch/~juergen/videos.html

Also available now: Scholarpedia article on Deep Learning: http://www.scholarpedia.org/article/Deep_Learning

Finally, a recent arXiv preprint: On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models. http://arxiv.org/abs/1511.09249

#machinelearning
#artificialintelligence
#computervision
#deeplearning


Post has shared content
Again and again, signs that Nature needs to reevaluate what it is. 
On Dark Matter And Dinosaurs

Let me begin by saying there is no evidence that dark matter killed the dinosaurs. None whatsoever. Unfortunately, the idea was posted on Nature's blog, and from there it went to Scientific American and elsewhere. Social media picked up the story, and it has spread like a prairie wildfire. The actual preprint is much less sensational (and doesn't mention dinosaurs), but it is still very speculative.

The idea comes from the fact that the Sun does not follow a flat orbit around the galaxy. Instead, its motion wobbles above and below the galactic plane, crossing the galactic plane every 35 million years. This isn’t unusual, as lots of stars follow similar paths, but it has led some to speculate that perhaps this periodicity could explain periodic mass extinctions in the geologic record.
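For context on where that 35-million-year figure comes from: near the midplane, the Sun's vertical motion is roughly a harmonic oscillation, and the plane is crossed twice per period. A back-of-the-envelope sketch (a standard textbook approximation, not from the post itself):

$$\ddot{z} \approx -\nu^2 z, \qquad \nu^2 = 4\pi G \rho_0, \qquad T = \frac{2\pi}{\nu}.$$

With a local disk density of roughly $\rho_0 \approx 0.1\,M_\odot\,\mathrm{pc}^{-3}$, this gives $T \approx 80$ Myr, so plane crossings every $T/2 \approx 40$ Myr, the same order as the quoted 35 million years (the exact interval depends on the oscillation amplitude and the density profile).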

The problem is, there isn't any strong evidence for cyclic mass extinctions. Some analyses of the data have hinted at a pattern, but the correlation isn't very strong. Of course, that hasn't stopped people from proposing everything from companion stars to Nibiru to explain these periodic extinctions. There have been similar proposals that every time the Sun crosses the galactic plane, the Oort cloud is disrupted, causing comets to sweep into the inner solar system and bombard the Earth.

What's new here is that the authors propose that dark matter within the plane of the galaxy is doing the disrupting. As I wrote about last week, there is a hint of dark matter seen in gamma-ray observations of the center of our galaxy. One model that could account for these gamma rays is a type of dark matter that would lie within the galactic plane. So if this type of dark matter exists, and if it disrupts the Oort cloud when the Sun crosses the galactic plane, and if that disruption flings comets into the inner solar system to bombard the Earth, and if that bombardment causes periodic mass extinctions, then you should see some evidence in the geologic record.

So what evidence is there? None. Well, not quite none. If you assume the model is true and then look for a periodicity in the cratering record of Earth, you find that the cratering record agrees with the model about three times better than it agrees with random cratering. Scientifically, that isn't very convincing data. It makes for a mildly interesting paper, but it's mostly speculation at this point.

But Nature and several other websites have decided to take this speculative idea, add the word dinosaurs to the title, and imply that scientists are proposing dark matter killed the dinosaurs. No one is proposing that. It's link-bait noise that makes the job of communicating real science all the more difficult. So if you see one of these sensationalized titles, don't share it on social media. Tell your friends who share the articles that it's speculative nonsense. Hopefully we can drown out this noise and get back to real science.

Because honestly, science is interesting enough without the hype.

Paper: Lisa Randall, Matthew Reece. Dark Matter as a Trigger for Periodic Comet Impacts. arXiv:1403.0576 [astro-ph.GA] (2014).