Pablo Torre
99 followers
Pablo Torre's posts

Post has shared content
Spark it up... 
#Spark 2.0 prepares to catch fire! Check out improved performance, SparkSession & more: http://ow.ly/9gR1300CCcY (via Databricks / Ian Pointer)
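
A minimal PySpark sketch of the new SparkSession entry point the post mentions; this is my own illustration rather than code from the linked article, and the app name is an arbitrary placeholder.

    from pyspark.sql import SparkSession

    # Spark 2.0 unifies SQLContext and HiveContext behind a single SparkSession.
    spark = (SparkSession.builder
             .appName("spark2-demo")   # hypothetical app name
             .getOrCreate())

    df = spark.range(5)   # trivial DataFrame with one column "id", values 0..4
    df.show()
    spark.stop()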

Post has shared content
Pretty cool way to speed up training time and improve generalization of the resulting net. :) It's a win-win!

Post has shared content
Giant Multi-Headed 3D Printer Can Create Massive Objects in One Pass
Video: https://vimeo.com/157523884#t=0s

#3dprinter   #3dprinting

Post has shared content
lol... 
Computers with a sense of humor? Not a joke. http://hp.nu/QJxin 

Post has shared content
Dear Lord Jesus, teach us to know You, love You and to serve You.  Amen.


#lord   #jesus   #teach   #know   #love   #serve  

Post has attachment
4096 is possible!! :D

http://2048game.com/

Post has attachment
This class teaches you how to learn better! :)
It's free and starts next week. :)

Post has shared content
Recently I gave a dozen talks on "Deep Learning" in New York and the Bay Area: for Yahoo, SciHampton, Google, SciFoo at the Googleplex, Stanford University, the ML meetup in San Francisco, ICSI, UC Berkeley, the ML meetup in the Empire State Building, and IBM Watson. Similar material was used for invited plenary talks / keynotes at KAIST 2014 (Korea), ICONIP 2014 (Malaysia), and INNS-CIIS 2014 (Brunei). Links to the videos are listed below.

Typical title and abstract:

Deep Learning RNNaissance

Machine learning and pattern recognition are currently being revolutionised by "Deep Learning" (DL) Neural Networks (NNs). I summarise work on DL since the 1960s, and my own work since 1991. Our recurrent NNs (RNNs) were the first to win official international competitions in pattern recognition and machine learning; our team has won more such contests than any other research group. Our Long Short-Term Memory (LSTM) RNNs helped to improve connected handwriting recognition, speech recognition, machine translation, optical character recognition, image caption generation, and other fields. Our Deep Learners were also the first to win object detection and image segmentation contests, and achieved the world's first superhuman visual classification results. We also built the first reinforcement-learning RNN-based agent that learns complex video game control from scratch, based on high-dimensional vision. DL is now of commercial interest (Google spent over $400M on the start-up DeepMind, co-founded by our student). Time permitting, I'll also address curious/creative machines and theoretically optimal, universal, self-modifying artificial intelligences.

Talk slides:
http://www.idsia.ch/~juergen/deeplearning2014slides.pdf

Outline of slides:

- First Deep Learning (DL) (Ivakhnenko, 1965)
- History of backpropagation: Bryson, Kelley, Dreyfus (early 1960s), Linnainmaa (1970), Speelpenning (1980), Werbos (1981), Rumelhart et al (1986), others
- Recurrent neural networks (RNNs) - the deepest of all NNs - search in general program space!
- 1991: Fundamental DL problem (FDLP) of gradient-based NNs (Hochreiter, my 1st student, now prof); a minimal numerical sketch of this vanishing-gradient effect follows this outline
- 1991: Our deep unsupervised stack of recurrent NNs (RNNs) overcomes the FDLP: the Neural History Compressor or Hierarchical Temporal Memory / related to autoencoder stacks (Ballard, 1987) and Deep Belief Nets (Hinton et al, 2006)
- Our purely supervised deep Long Short-Term Memory (LSTM) RNN overcomes the FDLP without any unsupervised pre-training (1990s, 2001, 2003, 2006-, with Hochreiter, Gers, Graves, Fernandez, Wierstra, Gomez, others)
- How LSTM became the first RNN to win controlled contests (2009), and set standards in connected handwriting and speech recognition
- Industrial breakthroughs of 2014: Google / Microsoft / IBM used LSTM to improve machine translation, image caption generation, speech recognition / text-to-speech synthesis / prosody detection
- 2010: How our deep GPU-based NNs trained by backprop (3-5 decades old) + training pattern deformations (2 decades old) broke the MNIST record
- History of feedforward max-pooling (MP) convolutional NNs (MPCNNs, Fukushima 1979-, Weng 1992, LeCun et al 1989-2000s, others)
- How our ensembles of GPU-based MPCNNs (Ciresan et al, 2011) became the first DL systems to achieve superhuman visual pattern recognition (traffic signs), and to win contests in image segmentation (brain images, 2012) and visual object detection (cancer cells, 2012, 2013) / fast MPCNN image scans (Masci et al, 2013)
- Why it's all about data compression
- 2014: 20 year anniversary of self-driving cars in highway traffic (Dickmanns, 1994)
- Reinforcement Learning (RL): How NN-based planning robots won the RoboCup in the fast league (Foerster et al, 2004)
- Our deep RL through Compressed NN Search applied to huge RNN video game controllers that learn to process raw video input (Koutnik et al, 2013)
- Formal theory of fun and creativity
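
A minimal NumPy sketch (mine, not from the talk) of the vanishing-gradient effect behind the 1991 FDLP bullet above: backpropagating through many tanh steps multiplies the error signal by per-step factors below 1, so it decays exponentially. All constants here are arbitrary.

    import numpy as np

    # Backprop through T steps of a scalar tanh RNN: each step multiplies the
    # error signal by d/dh tanh(w*h) = w * (1 - tanh(w*h)**2), a factor < 1 here.
    T, w, h = 50, 0.9, 0.5
    grad = 1.0
    for _ in range(T):
        grad *= w * (1.0 - np.tanh(w * h) ** 2)
    print(grad)  # ~3e-7: the error signal has all but vanished after 50 steps

    # LSTM's gated "constant error carousel" keeps this per-step factor near 1
    # (forget gate ~ 1), so the same 50-step product stays O(1) instead.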


Videos of DL talks in the US, with variations due to questions from the audience:

1. New York City Machine Learning Meetup hosted by ShutterStock in the Empire State Building:
https://www.youtube.com/watch?v=6bOMf9zr7N8
Also at Vimeo:
http://vimeo.com/113402131

2. ICSI, Berkeley:
https://www.youtube.com/watch?v=h4FqFss9hEY

3. Bay Area ML Meetup hosted by upsight.com in downtown San Francisco:
https://vimeo.com/105972440

4. Google, Palo Alto (voice and slides only):
http://youtu.be/obGrn1oVJsY

An earlier DL talk at the ML Meetup at ETH Zurich (January 2014):
https://www.youtube.com/watch?v=JSNZA8jVcm4


More in the invited DL Survey (88 pages, 888 references):
http://www.idsia.ch/~juergen/deep-learning-overview.html
http://arxiv.org/abs/1404.7828
Published online by Neural Networks (2014):
http://authors.elsevier.com/a/1Q3Bc3BBjKFZVN
Hardcopy to appear in Vol. 61, pp. 85–117, January 2015

No public videos exist of the DL talks for SciFoo, Stanford, Berkeley Computer Vision, Yahoo, IBM Watson, KAIST, ICONIP, INNS-CIIS. Other videos (not listed here) taped at SciHampton and UC Berkeley had a different focus, namely the formal theory of fun & beauty & creativity; more on this here:
http://www.idsia.ch/~juergen/creativity.html
http://www.idsia.ch/~juergen/interest.html

Numerous earlier videos:
http://www.idsia.ch/~juergen/videos.html

#machinelearning
#artificialintelligence
#computervision
#deeplearning
http://www.idsia.ch/~juergen/deeplearning.html

I'm taking Berkeley's AI class on edX. After going over the first couple of weeks of lectures and studying A* search, I keep seeing a conceptual similarity between search heuristics and error gradients; however, googling for the two terms has returned fewer answers than I expected.

So I'm wondering if you guys can help me clarify this: are error gradients a type of heuristic?
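
A small Python sketch to make the analogy concrete (the helpers a_star and gradient_descent are hypothetical, written for this post): A* ranks frontier nodes by g(n) + h(n), while gradient descent steps along -dE/dw. Both use a local scalar signal to guide a search, but h(n) is an estimate of remaining cost to a goal, whereas the gradient is the exact local slope of the error surface.

    import heapq

    def a_star(start, goal, neighbors, h):
        """Generic A*: neighbors(n) yields (next_node, step_cost); h is the heuristic."""
        frontier = [(h(start), 0.0, start, [start])]
        seen = set()
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            for nxt, cost in neighbors(node):
                heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
        return None

    def gradient_descent(dE, w0, lr=0.1, steps=100):
        """Follow the negative error gradient dE from initial weight w0."""
        w = w0
        for _ in range(steps):
            w -= lr * dE(w)   # the gradient plays the "which way next" role
        return w

    # Toy usage: a three-node graph with a zero heuristic (degenerates to
    # Dijkstra), and minimising E(w) = (w - 3)^2, whose gradient is 2*(w - 3).
    graph = {'A': [('B', 1.0), ('C', 4.0)], 'B': [('C', 1.0)], 'C': []}
    print(a_star('A', 'C', lambda n: graph[n], h=lambda n: 0.0))  # ['A', 'B', 'C']
    print(gradient_descent(lambda w: 2 * (w - 3), w0=0.0))        # close to 3.0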

 