Profile

Abhinav Shrivastava
Lives in Pittsburgh
741 followers | 36,879 views
People
Have him in circles: 741 people
NCR Tutors, Amey Dharwadker, Saurabh Gupta, Mohammad Arif, JobSearch Mine, nakul chettri, Sugandha S, Shubham Mishra, hariballabh agrawal
Work
Occupation
Graduate Student
Basic Information
Gender
Male
Story
Tagline
“Do not seek to follow in the footsteps of the men of old; seek what they sought.” - Matsuo Basho
Introduction
I am a graduate student in Artificial Intelligence and Robotics at the Robotics Institute, Carnegie Mellon University (CMU), working under the supervision of Prof. Alexei (Alyosha) Efros, Prof. Martial Hebert, and Prof. Abhinav Gupta.
Before coming to CMU, I completed my B.Tech. in Computer Science and Engineering at Jaypee Institute of Information Technology (JIIT), India.

For more detailed information, visit www.abhinav-shrivastava.info
Bragging rights
A computer-vision hacker, aspiring roboticist, and graduate student in Artificial Intelligence and Robotics @ RI, CMU.
Places
Map of the places this user has lived
Currently
Pittsburgh
Previously
Mountain View - Redmond - New Delhi

Stream


Abhinav Shrivastava

Shared publicly  - 
 
Nice read..
 
Some thoughts on the future of object detection...

Abhinav Shrivastava

Shared publicly  - 
 
Check out our new project -- NEIL: A Never Ending Image Learner!! www.neil-kb.com

It runs 24x7, trying to make sense of the images it sees on the web!!
PITTSBURGH—A computer program called the Never Ending Image Learner (NEIL) is running 24 hours a day at Carnegie Mellon University, searching the Web for images, doing its best to understand them on its own and, as it builds a growing visual database, gathering common sense on a massive scale.

Abhinav Shrivastava

Shared publicly  - 
 
Exactly my thoughts!
 
Ed Chi and other Googlers show, in a recent CHI paper reporting on a usability study, that people ignore social annotations when searching (e.g. one of your friends clicked or liked a web page that is in your search results) and, in the rare cases when searchers pay attention to them, people don't find them useful.  That has some pretty serious implications for Facebook's partnership with Bing and Google's increasingly annoying habit of slopping Google+ all over Google web search.
Debadeepta Dey: Finally someone got it.

Abhinav Shrivastava

Shared publicly  - 
 
Glenn Thomas originally shared:
 
so true.

Abhinav Shrivastava

Shared publicly  - 
 
 
I took a few clips of soap water freezing instantly during the recent cold wave, when temperatures dropped as low as -26 C (with a "feels like" of -40 C). My original plan was to film frozen soap bubbles, but the wind kept instantly blowing them away or blowing them up. I ended up just filming the film :)

http://youtu.be/NPWle_Qd1XQ

Abhinav Shrivastava

Shared publicly  - 
 
On Research@Google!!!
 
I Crawl, I See, I Learn: Teaching Computers to Think

Human beings are remarkably good at learning. Every day, our brain processes visual data from the world around us, learning common sense relationships and developing inferences about new things we may have never seen before.  For example, when faced with an image of a strange creature, we infer that it is an insect despite never having seen it before, based on its physical characteristics and the context in which the image is taken. 

But is it possible for a computer to learn common sense from visual data in a fashion similar to humans, just by browsing images found on the Internet? Researchers at the Robotics Institute (http://goo.gl/se0wTo) and Language Technologies Institute (http://goo.gl/9aPxwH) of Carnegie Mellon University (CMU) believe so.

Assistant Research Professor Abhinav Gupta (http://goo.gl/fG0Rbm) aims to build the world’s largest visual knowledge base with the Never Ending Image Learner (NEIL) program. Gupta, working alongside PhD student and former Google intern Abhinav Shrivastava (http://goo.gl/og37Ho) and PhD student Xinlei Chen (http://goo.gl/tQ8sCv), has developed NEIL to automatically extract visual knowledge from the Internet, enabling it to learn common sense relationships between objects and categories. For example:

Deer can be a kind of / look similar to Antelope
Car can have a part Wheels
Sunflower is/has Yellow  

Recipient of a Google Focused Research Award (http://goo.gl/hn59r) and #9 in CNN’s top 10 ideas of 2013 (http://goo.gl/cFXK1F), NEIL runs 24 hours a day, 7 days a week, using small amounts of human labeled visual data in conjunction with a large amount of unlabeled data to iteratively learn reliable and robust visual models. NEIL is then able to develop associations between different concepts, using these learned relationships to further improve the visual models themselves. This is in contrast to machine learning approaches where concepts are isolated and recognized independently. 

To date, NEIL has learned to identify 1,500 objects and 1,200 scenes, and has made 2,500 associations between scenes and objects, from the millions of images found on the Internet. It is the hope of Gupta et al. that NEIL will learn new relationships between different concepts without the need for human-labeled data, in the process developing the common sense needed for better perception, reasoning, and decision making.
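The bootstrapping scheme described above, a small labeled seed combined with confident self-labeling over a large unlabeled pool, can be sketched in a few lines. This is only an illustrative toy on synthetic 2-D points, not NEIL's actual pipeline; every name and number here is made up:

```python
import numpy as np

# Minimal self-training sketch: start from a tiny labeled seed, then
# repeatedly promote the most confident unlabeled points into the labeled
# set and retrain a nearest-centroid classifier on the grown set.

rng = np.random.default_rng(0)

# Two synthetic "visual concept" clusters standing in for image features.
cluster_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
cluster_b = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(50, 2))
unlabeled = np.vstack([cluster_a, cluster_b])     # large unlabeled pool

# Tiny human-labeled seed: one example per concept.
labeled_x = np.array([[0.0, 0.0], [4.0, 4.0]])
labeled_y = np.array([0, 1])

for _ in range(5):                                # a few bootstrap rounds
    # "Train": centroid of each concept's currently labeled examples.
    centroids = np.array([labeled_x[labeled_y == c].mean(axis=0)
                          for c in (0, 1)])
    if unlabeled.shape[0] == 0:
        break
    # Distance of every unlabeled point to each centroid.
    dists = np.linalg.norm(unlabeled[:, None, :] - centroids[None, :, :],
                           axis=2)
    pred = dists.argmin(axis=1)                   # predicted concept
    conf = -dists.min(axis=1)                     # closer = more confident
    # Promote the 10 most confident points into the labeled set.
    top = np.argsort(conf)[-10:]
    labeled_x = np.vstack([labeled_x, unlabeled[top]])
    labeled_y = np.concatenate([labeled_y, pred[top]])
    unlabeled = np.delete(unlabeled, top, axis=0)

print(labeled_x.shape[0])  # prints 52: the labeled set grew from 2 seeds
```

The key design point, as in the post, is that the model's own confident predictions become new training data, so the visual models and the labeled set improve together iteratively.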

To learn more details about NEIL and see what it has learned, visit the program page linked below.

Abhinav Shrivastava

Shared publicly  - 
 
Kudos to the new work of +Saurabh Singh, +Carl Doersch, +Abhinav Gupta & +Alyosha Efros at SIGGRAPH '12!
 
Our new paper at SIGGRAPH '12
Jacob Aron, technology reporter. (Images: TOIVANEN/Rex Features & Londonstills.com/Rex Features). It is a Hollywood cliché that every window in the French capital has a g...

Abhinav Shrivastava

Shared publicly  - 
 
Rob Gordon originally shared:
 
Be nice - or we'll bring you some democracy!

Abhinav Shrivastava

Shared publicly  - 
 
Robert Scoble originally shared:
 
Oh, great, now we'll need Waze and Trapster even more. Speaking of which, I have a video coming tonight on Trapster.