Abhinav Shrivastava
781 followers
“Do not seek to follow in the footsteps of the men of old; seek what they sought.” - Matsuo Basho

Posts

Post has shared content
Big news... Surreal Vision, co-founded by +Richard Newcombe​​​, +Renato Salas-Moreno​​​ and +Steven Lovegrove​​​, all previously PhD students from my lab, has been acquired by Oculus. They will be setting up an amazing new lab in Seattle to research the future of SLAM and perception for mixed reality.

Original start-up website: http://surreal.vision/

Post has shared content
SORTING - A visualization of the most famous sorting algorithms
http://sorting.at/
Photo
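For anyone curious what the site is animating: insertion sort is one of the classics such visualizations usually include. A quick illustrative sketch in Python (my own, not code from sorting.at):

def insertion_sort(items):
    """Sort a list in place by growing a sorted prefix one element at a time."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:   # shift larger elements one slot right
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key                 # drop the key into its place
    return items

print(insertion_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]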

Post has shared content
I took a few clips of soapy water freezing instantly during the recent cold wave, when temperatures dropped as low as -26 C (with a wind chill of around -40 C). My original plan was to film frozen soap bubbles, but the wind kept instantly blowing them away or popping them. I ended up just filming the film :)

http://youtu.be/NPWle_Qd1XQ

Post has shared content
On Research@Google!!!
I Crawl, I See, I Learn: Teaching Computers to Think

Human beings are remarkably good at learning. Every day, our brain processes visual data from the world around us, learning common sense relationships and developing inferences about new things we may have never seen before.  For example, when faced with an image of a strange creature, we infer that it is an insect despite never having seen it before, based on its physical characteristics and the context in which the image is taken. 

But is it possible for a computer to learn common sense from visual data in a fashion similar to humans, just by browsing images found on the internet? Researchers at the Robotics Institute (http://goo.gl/se0wTo) and Language Technologies Institute (http://goo.gl/9aPxwH) of Carnegie Mellon University (CMU) believe so.

Assistant Research Professor Abhinav Gupta (http://goo.gl/fG0Rbm) aims to build the world’s largest visual knowledge base with the Never Ending Image Learner (NEIL) program. Gupta, working alongside PhD student and former Google intern Abhinav Shrivastava (http://goo.gl/og37Ho) and PhD student Xinlei Chen (http://goo.gl/tQ8sCv), has developed NEIL to automatically extract visual knowledge from the Internet, enabling it to learn common sense relationships between objects and categories. For example:

Deer can be a kind of / look similar to Antelope
Car can have a part Wheels
Sunflower is/has Yellow  
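These examples read naturally as (subject, relation, object) triples. As a rough illustrative sketch only (not NEIL's actual code or schema, and with invented relation names), such facts could be stored and looked up like this:

# Illustrative only: store relationships like the examples above as
# (subject, relation, object) triples; the relation names here are made up.
from collections import defaultdict

triples = [
    ("Deer", "looks_similar_to", "Antelope"),
    ("Car", "has_part", "Wheel"),
    ("Sunflower", "has_attribute", "Yellow"),
]

# Index by subject so everything known about a concept is easy to look up.
knowledge = defaultdict(list)
for subject, relation, obj in triples:
    knowledge[subject].append((relation, obj))

print(knowledge["Car"])        # [('has_part', 'Wheel')]
print(knowledge["Sunflower"])  # [('has_attribute', 'Yellow')]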

Recipient of a Google Focused Research Award (http://goo.gl/hn59r) and #9 in CNN’s top 10 ideas of 2013 (http://goo.gl/cFXK1F), NEIL runs 24 hours a day, 7 days a week, using small amounts of human labeled visual data in conjunction with a large amount of unlabeled data to iteratively learn reliable and robust visual models. NEIL is then able to develop associations between different concepts, using these learned relationships to further improve the visual models themselves. This is in contrast to machine learning approaches where concepts are isolated and recognized independently. 
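To make the idea concrete, here is a toy, self-contained sketch of that kind of semi-supervised bootstrapping loop: start from a few labeled examples, fit a very simple model, absorb only the unlabeled points the model is confident about, and repeat. Everything here (the 2-D features, the nearest-centroid "model", the confidence margin) is invented for illustration and is not NEIL's implementation.

import random

def centroid(points):
    """Mean of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def self_train(labeled, unlabeled, rounds=5, margin=2.0):
    """labeled: list of (feature, label) pairs; unlabeled: list of features."""
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        # "Train" a model: one centroid per class.
        by_class = {}
        for feat, lab in labeled:
            by_class.setdefault(lab, []).append(feat)
        centroids = {lab: centroid(feats) for lab, feats in by_class.items()}

        # Label unlabeled points only when the decision is confident
        # (the nearest centroid wins by a clear margin).
        remaining = []
        for feat in pool:
            dists = sorted((distance(feat, c), lab) for lab, c in centroids.items())
            if len(dists) > 1 and dists[1][0] - dists[0][0] >= margin:
                labeled.append((feat, dists[0][1]))   # confident: add as training data
            else:
                remaining.append(feat)                # ambiguous: leave for a later round
        pool = remaining
    return centroids, labeled

# Tiny synthetic example: two well-separated clusters standing in for web images.
random.seed(0)
seed = [((0.0, 0.0), "cat"), ((10.0, 10.0), "dog")]
unlab = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(20)] +
         [(random.gauss(10, 1), random.gauss(10, 1)) for _ in range(20)])
models, grown = self_train(seed, unlab)
print(len(grown), "labeled examples after self-training")

The margin test plays the same role the paragraph above describes: only confidently labeled data is folded back into training, so the models can grow from a small labeled seed without drifting onto noisy web images.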

To date, NEIL has learned to identify 1,500 objects and 1,200 scenes, and has made 2,500 associations between scenes and objects, from the millions of images found on the Internet. Gupta et al. hope that NEIL will go on learning new relationships between different concepts without the need for human-labeled data, and in the process develop the common sense needed for better perception, reasoning, and decision making.

To learn more details about NEIL and see what it has learned, visit the program page linked below.

Post has shared content
Nice read..
Some thoughts on the future of object detection...

Post has attachment
Check out our new project -- NEIL: A Never Ending Image Learner!! www.neil-kb.com

He runs 24x7, trying to make sense of the images he sees on the web!!

Post has shared content
Exactly my thoughts!
Ed Chi and other Googlers show, in a recent CHI paper reporting on a usability study, that people ignore social annotations when searching (e.g. one of your friends clicked or liked a web page that is in your search results) and, in the rare cases when searchers pay attention to them, people don't find them useful.  That has some pretty serious implications for Facebook's partnership with Bing and Google's increasingly annoying habit of slopping Google+ all over Google web search.
