Profile

Lucas Walter
Attended University of Washington
Lives in Seattle
1,001 followers | 704,562 views

Stream

 
I didn't get timestamps, but at one point Ng says intelligent computers are a long way off, then at another expresses surprise at the last five years of progress.

Is this on YouTube? I found some videos from +NVIDIA here: https://www.youtube.com/playlist?list=PLZHnYvH1qtOaW8_l6XgrFqqp2V2Q3hMly but not this one.
 
My GPU Technology Conference talk has been posted online.  The talk includes several new ideas/results that I haven't talked about before, including a new demo of Deep Speech, new face recognition results (far better than human-level), and a simple slide showing how I think about developing ML systems. Even though it's a long, full-length talk, and even if you have heard me speak elsewhere on Deep Learning, I hope many of you will find this worth watching!  Go to: http://www.ustream.tv/recorded/60113824

Lucas Walter

commented on a video on YouTube.
 
Projector-mapping pixelated textures onto real-world objects to make them look like they are somewhat primitively 3D rendered is a really cool idea.

Lucas Walter

commented on a video on YouTube.
 
2:03 - Car tracking on a highway.

2:20 - Optical flow estimation. What is different from the old optical flow? (A classical baseline sketch follows after this thread.)

2:32 - DTAM. Is this monocular, getting the depth/structure from motion?
Woldo Wondare:
+Lucas Walter Regarding DTAM, as far as I know, yes, it's calculating depth from monocular images. (Your question is a week old; I guess you figured that out already?)
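On the 2:20 question, the "old" optical flow usually means classical dense methods. Here is a minimal baseline sketch using OpenCV's Farneback implementation for comparison; this is my own example, not anything from the video, and "highway.mp4" is a placeholder path.

```python
# Minimal classical dense optical flow baseline (Farneback) with OpenCV.
# My own sketch for comparison against learned methods; "highway.mp4" is a placeholder.
import cv2
import numpy as np

cap = cv2.VideoCapture("highway.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense flow: one (dx, dy) vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Visualize flow direction as hue, magnitude as brightness.
    hsv = np.zeros_like(frame)
    hsv[..., 0] = ang * 180 / np.pi / 2
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imshow("flow", cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
    prev_gray = gray
```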

Lucas Walter

commented on a video on YouTube.
 
The view of the rocket and scenery at 1:30 and 1:57 is the best.

Lucas Walter

commented on a video on YouTube.
 
+alex graves on Hallucination with RNNs

49:28 - A recurrent neural network trained on black-and-white Atari video games (Enduro here, and River Raid at 52:10) simulates its own inputs; it generates the game being played. The quality of the imagined game reflects the fidelity of the RNN model. It's possible to preserve external joystick control, though it's said the car/aircraft sometimes moves in the wrong direction. ('Hey Sheldon, what are you doing?' 'Playing Super Mario on a poorly trained neural network approximation.')

(Was this shown in that DeepMind promotional video from a few weeks ago, or was that just the nn playing the real game? I should watch that)
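A rough structural sketch of what's being described, in plain numpy (my own toy, not from the talk; sizes and weights are made up and untrained): an RNN consumes the current frame plus a joystick action and predicts the next frame, and "hallucination" is just feeding its own prediction back in while the action still comes from outside.

```python
import numpy as np

# Toy action-conditioned recurrent predictor: untrained random weights,
# just the forward structure. All sizes are made-up placeholders.
F, A, H = 32 * 32, 4, 256          # flattened frame, one-hot action, hidden state
rng = np.random.default_rng(0)
Wx = rng.normal(0, 0.01, (H, F + A))   # input -> hidden
Wh = rng.normal(0, 0.01, (H, H))       # hidden -> hidden (recurrence)
Wo = rng.normal(0, 0.01, (F, H))       # hidden -> predicted next frame

def step(frame, action, h):
    """One RNN step: consume (frame, action), update state, predict next frame."""
    x = np.concatenate([frame, action])
    h = np.tanh(Wx @ x + Wh @ h)
    pred = 1 / (1 + np.exp(-(Wo @ h)))  # pixel probabilities in [0, 1]
    return pred, h

# "Hallucinating" the game: the predicted frame becomes the next input,
# while the action still comes from an external joystick.
h = np.zeros(H)
frame = rng.random(F)                   # stand-in for a real first frame
for t in range(10):
    action = np.eye(A)[t % A]           # stand-in for joystick input
    frame, h = step(frame, action, h)
print(frame.shape)
```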

Lucas Walter

 
I want a robot that sits in my basement and goes through every box, photographs and enters every item into an online database that is easy to search from my phone, scans every document (and why not the books too), and calculates the cost per cubic inch to store any given item (as a function of rent/mortgage) vs. the risk-adjusted price of re-acquiring a copy any number of years in the future after getting rid of it now.
4 comments
Ha! Sure you can. I've actually been meaning to write an app to do the database integration. Hand the kid a phone with a camera and a bar code printer, do a picture inventory of the boxes, slap bar code labels on each one, the pictures are keyed to the labels, you go through and annotate the pictures (or better still, use image recognition), and then you can either just know where all your crap is, or else eBay it or Amazon it to pay for the teenager's hourly wage.
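A back-of-the-envelope version of the keep-or-toss arithmetic from the post above; this is my own sketch and every number in it (rent, probability of needing the item again, price drift) is a made-up placeholder.

```python
# Rough keep-vs-toss arithmetic: cost to store an item for N years
# (pro-rated from rent per square foot) vs. the risk-adjusted cost of
# re-buying it later. All inputs are made-up placeholders.

def storage_cost(volume_cuin, rent_per_sqft_month, ceiling_ft=8.0, years=5):
    """Pro-rate monthly rent down to the item's volume."""
    cuin_per_sqft = 12 * 12 * ceiling_ft * 12      # cubic inches in one sq-ft column
    cost_per_cuin_month = rent_per_sqft_month / cuin_per_sqft
    return volume_cuin * cost_per_cuin_month * 12 * years

def reacquisition_cost(price_now, prob_needed=0.2, annual_price_change=-0.05, years=5):
    """Expected cost of buying a replacement in `years`, if it's needed at all."""
    future_price = price_now * (1 + annual_price_change) ** years
    return prob_needed * future_price

box_of_books = dict(volume_cuin=18 * 12 * 12, price_now=60.0)
store = storage_cost(box_of_books["volume_cuin"], rent_per_sqft_month=2.50)
rebuy = reacquisition_cost(box_of_books["price_now"])
print(f"store for 5 years: ${store:.2f}, expected re-buy cost: ${rebuy:.2f}")
print("toss it" if rebuy < store else "keep it")
```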

Lucas Walter

commented on a video on YouTube.
 
The audio quality is horrible but the presentation is very interesting. Is the same talk duplicated in another video, perhaps from a different time and place?

45:54 - Large convolutional net: 650e3 neurons, 832e6 synapses, 60e6 parameters (a quick parameter-count sketch follows below).

How are we doing on dedicated NN co-processors (FPGAs and GPUs are playing that role now)? Or on shifting general-purpose CPU design towards running these more quickly?
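The gap between 832e6 synapses and 60e6 parameters comes mostly from weight sharing in the convolutional layers. A quick sanity-check of that distinction for a single conv layer; the layer sizes below are hypothetical, not the actual net from the talk.

```python
# Parameters vs. connections ("synapses") for one convolutional layer.
# Weight sharing means a conv layer has far fewer parameters than connections,
# which is roughly how a net can have ~832e6 synapses but only ~60e6 parameters.
# The layer sizes below are hypothetical examples, not the net from the talk.

def conv_layer_counts(in_ch, out_ch, kernel, out_h, out_w):
    params = out_ch * (in_ch * kernel * kernel + 1)            # shared weights + biases
    connections = out_h * out_w * out_ch * in_ch * kernel * kernel
    neurons = out_h * out_w * out_ch
    return params, connections, neurons

params, connections, neurons = conv_layer_counts(
    in_ch=96, out_ch=256, kernel=5, out_h=27, out_w=27)
print(f"neurons: {neurons:.3g}, connections: {connections:.3g}, params: {params:.3g}")
print(f"connections per parameter: {connections / params:.1f}")
```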

Lucas Walter

commented on a video on YouTube.
 
25:57 - Random forests and the Kinect. They used computer-generated depth images to get labels for body parts automatically. Were those derived from motion capture originally, though?

I didn't understand the randomized feature part. Surely the random offsets are then stored and used thereafter when the derived tree is applied? And the summation of the individual trees would tend to smear out the random features into perhaps something resembling a Gaussian around the selected point?
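To make the randomized-feature question concrete: in the Kinect body-part work (Shotton et al.), each feature is a depth difference at two pixel offsets scaled by the depth at the query pixel, and the specific offsets selected during training are indeed stored in the tree's split nodes and reused at test time; the randomness is only in the training-time search. A rough, simplified sketch of that feature (my own, not the authors' code):

```python
import numpy as np

def depth_feature(depth, x, y, u, v):
    """Depth-difference feature from the Kinect body-part paper (simplified).

    u and v are pixel offsets chosen randomly during training; dividing by
    depth[y, x] makes the feature roughly depth-invariant. The (u, v) pair a
    split node ends up using is stored in the node and reused at test time.
    """
    d = depth[y, x]
    def probe(offset):
        oy = int(round(y + offset[1] / d))
        ox = int(round(x + offset[0] / d))
        if 0 <= oy < depth.shape[0] and 0 <= ox < depth.shape[1]:
            return depth[oy, ox]
        return np.inf  # off-image probes get a large constant in the paper
    return probe(u) - probe(v)

# Training samples many random (u, v) offsets per node and keeps the one with
# the best information gain; at test time nothing is random anymore.
rng = np.random.default_rng(0)
depth = rng.uniform(0.5, 4.0, size=(240, 320))   # fake depth map in meters
u, v = rng.uniform(-50, 50, 2), rng.uniform(-50, 50, 2)
print(depth_feature(depth, x=160, y=120, u=u, v=v))
```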

Lucas Walter

 
In 2011 a milestone was reached: Apple and Google spent more on lawsuits and payments involving patents than they did on research and development of new products.
books.google.com -  Following his blockbuster biography of Steve Jobs, The Innovators is Walter Isaacson’s revealing story of the people who created the computer and the Inter...
Have to love that system

Lucas Walter

 
Levine and Morgan did what everyone on Wall Street did when they wanted to find out what was going on inside a rival bank: They invited some of its employees in for job interviews.

Elsewhere, a single individual finds out what secretive companies were doing with HFT by finding their software employees' profiles on LinkedIn (presumably by sending connection requests that the targets blindly accepted, though the researcher already had some industry credentials).

Also, a (serious) software candidate at Goldman Sachs is asked whether 3,600 is a prime number, along with another question I've forgotten that isn't findable with Amazon's search inside this book.
 
I just finished Michael Lewis’ fantastic new book "Flash Boys: A Wall Street Revolt" and enjoyed it immensely.

In my opinion, the reason Flash Boys is doing so well (number 1 on the New York Times bestseller list as I write this) is because of story. Lewis has taken the complex and arcane world of trading stocks in real time, measured in milliseconds, a subject most would find boring and complicated, and turned it into a story filled with heroes and villains and conflict.

What business content creators can learn from Flash Boys

As I was reading the book, I was enjoying elements usually found in fiction within this nonfiction book. I was thinking how these techniques can be used by any nonfiction writer, but typically are not.

Writing about interesting characters, creating conflict, making a hero, and having elements that challenge a reader to want to learn more can make your own writing come alive.

Read Flash Boys for the story. But learn from it to make your own content better too.

More on my blog
http://www.webinknow.com/2014/04/flash-boys-and-the-power-of-story.html

Lucas Walter

commented on a video on YouTube.
 
Cool, that looks pretty good. I want to try it out with live people over a low-quality green screen (one that has lots of folds, shadows, and other artifacts).
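As a starting point for that experiment, here is a minimal chroma-key sketch with OpenCV; it's my own example (the file names are placeholders), and a wrinkled, shadowed screen would need looser hue/value bounds or something smarter than a fixed threshold.

```python
import cv2
import numpy as np

# Minimal green-screen key: threshold green in HSV, clean the mask a little,
# then composite the subject over a background. "subject.png" and
# "background.png" are placeholder file names.
frame = cv2.imread("subject.png")
background = cv2.imread("background.png")
background = cv2.resize(background, (frame.shape[1], frame.shape[0]))

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# Wide-ish green range; a folded, shadowed screen needs looser S/V bounds.
lower = np.array([40, 60, 60])
upper = np.array([85, 255, 255])
green = cv2.inRange(hsv, lower, upper)

# Morphological open/close to knock out speckle from folds and shadows.
kernel = np.ones((5, 5), np.uint8)
green = cv2.morphologyEx(green, cv2.MORPH_OPEN, kernel)
green = cv2.morphologyEx(green, cv2.MORPH_CLOSE, kernel)

mask = cv2.bitwise_not(green)                      # 255 where the subject is
fg = cv2.bitwise_and(frame, frame, mask=mask)
bg = cv2.bitwise_and(background, background, mask=green)
cv2.imwrite("composite.png", cv2.add(fg, bg))
```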

Lucas Walter

commented on a video on YouTube.
 
6:39 "I've read the sift paper about 5 times now and still have no idea what it is doing"

10:00 - 'One learning algorithm' hypothesis.  The brain has just one learning algorithm and applies it all over the place.  It is possible to reuse parts of the brain to take on the tasks of other parts.

11:30 - Tongue seeing.  Low resolution gray-scale image transmitted though a device that stimulates the tongue (I imagine the tongue has a relatively high density of touch sensors vs. skin elsewhere, and is conveniently also located on the head).  It only takes a few minutes (?) to start seeing with this.  This would be great to try out.

18:20 supervised vs semi vs. self taught

25:00 sparse coding.

32:00 hierarchical sparse coding.  Intriguing pictures of face models built from object parts (of eyes and ears and mouths) which are themselves built from simple basis features that are mostly edges of different orientations.  It would be great to download a library of these component parts.
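For the sparse coding part at 25:00, here is a tiny sketch of the inference step (my own, plain numpy ISTA, not the method from the talk): given a fixed dictionary D, find sparse codes a minimizing 0.5*||x - D a||^2 + lambda*||a||_1. The edge-like basis functions shown in the lecture come from alternating this kind of inference with dictionary updates.

```python
import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Sparse coding inference via ISTA: argmin_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, ord=2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))                  # stand-in dictionary (e.g. 8x8 patches)
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
x = D[:, 3] * 1.5 + D[:, 40] * -0.8             # a "patch" made of two atoms
a = sparse_code(x, D)
print(np.argsort(-np.abs(a))[:4])               # atoms 3 and 40 should dominate
```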
People
Have them in circles
1,001 people
Jimmy Gunawan, cai xiaoyang, Jan Z P, Ben Nielsen, Shashank Singh, Mustafa Çakır, Ivan DeWolf, Lee Nelson, Koeller Holtman
Work
Occupation
Engineer
Skills
C++, OpenCV
Places
Map of the places this user has lived
Currently
Seattle
Story
Introduction
Currently working on underwater 3D laser scanners and various machine vision/computer vision systems.  I used to do avionics for a small aerospace company.


Education
  • University of Washington
Reviews
41 reviews

"There was dog shit and piss on the floor." (reviewed in the last week)
"Now reopened in Hillman City at Rainier & S Meade St." Quality: Excellent, Appeal: Excellent, Service: Excellent. (reviewed 2 months ago)
Other reviews posted a month ago, 2 months ago, 7 months ago, and a year ago have no text shown.