Our Google Glass Video has been watched over 10M times. Extremely exciting. We are asking for feedback and suggestions. What do you want Glass to be for YOU?
Project Glass: One day...
 
It's exciting times at the Googleplex +Sebastian Thrun. I think what many of us are looking for from +Project Glass is for it to be natural and intuitive: something with a minimal learning curve and clear benefits that can't be found in anything we already carry with us each day. Big expectations, but we know you and the team can achieve them.
 
Sir, can I have one... (for free or for trial)? As for advice:
I want the interface to be simple, or else it will be overkill.
 
My main interest is in using +Project Glass as an app development environment: http://goo.gl/UrdhI. I'd also love to be driving the marketing for this effort. HUGE opportunity for integration and branding. :)
 
Way to discover cemeteries. :) Last week at our office we even thought it could become a "real game interface".
 
I'd love Glass to be my real-time language translator using voice recognition and OCR (à la Word Lens). Can't wait to see what wonderful things your team does with it.
 
One word: Simplicity. It has to quickly and easily become a part of our normal lives. This includes some sort of workaround for people who already wear prescription glasses (like me). The goal should be that people shouldn't have to even think about using GG; it should be so ingrained into their actions that they can use it subconsciously.
 
I do not want another screen that does what my phone can already do (notify me of messages, give me directions, and so on). What Project Glass can do that smartphones cannot is know what I am looking at and display relevant information (like the subway example in the video). Obviously there would need to be some AI behind-the-scenes determining what deserves an automatic pop-up and what should be requested, but I would love to be able to just look at a building or some other landmark, say "What is that?" and have the answer appear right next to it.
 
I agree with +Ian Netto, the potential for Google Translate integration is huge.

Personally, I think facial recognition and display of relevant information for people within our G+ Circles would be incredible (though it would obviously be an opt-in feature).
 
Don't forget to implement face recognition in order to find offenders in the streets and, what is more, inside buildings.
 
Since it is on the head anyway it would be interesting to see sound dealt with via bone conduction!
 
If you're wondering why you haven't gotten much in the way of real, lengthy suggestions, it's because most people think this is vaporware, similar to Corning's "A Day Made of Glass" videos. If you want people to believe this is a real product, you need to prove it. Normally a concept video is fine, but this project is a little too 'out there' for many people to take seriously. You'll get more discussion when you let people try it.

I'm working on a Project Glass blog post though, I'll + you when it's done.
 
(oops! I accidentally deleted my previous post trying to edit it...)

I would like to see Google Glass become an interface to an augmented reality.

Interactive recreations of historical events in the actual locations where they happened. Augmented-reality FPS games would be great, like paintball but without the paint, and not limited in play space. Live-action role-playing games where players' avatars could be reprojected onto their bodies.

Virtual tagging (graffiti), or full-on retexturing of the real world, even adding virtual 3D models to it, would be great. Then sharing those visual and interactive "layers" of the world with friends and/or the public.

Virtual tour guides that could walk with you and show you around a city that you have traveled to.
 
+Ron Amadeo Honestly, I would just like to see a video of the real thing in action, even if it cannot yet be used by anyone outside Google.
 
+Sebastian Thrun, firstly, I think that so far you've been nailing the physical aesthetics, keep it up! I agree with two main aspects of the above posts:

- Simple and graceful UI
- Supportive of app development

The latter has interesting implications when you really put it together with Android and +Google Play. What I mean is that I'd love to see +Project Glass be the precursor to the next smartphone form factor. Even if the glasses can "only" do most of what a smartphone can do, the convenience and coolness factors alone will make them a hit. Then when you take the added functionality and possibilities and combine them with a supportive development platform, it could really take off.

I've been very enthusiastic about "smartglasses" for a while and have plenty more to share if you're interested :)

(+David Jacobs you'll probably also enjoy giving some input here too!)
 
Having followed mobile computing efforts since the early days of Steve Mann and the MIT Borglab (sadly, largely offline these days), what I'm looking for in terms of this technology is something that provides me with a number of things.
1) It has to give me uncomplicated access to the information and communications channels I want to use. That's an interface issue; if I have a voice interface that I have to try half a dozen times to get to recognize an email address or URL, then it's not working well, let alone for entering longer messages or annotations. But I suspect that more complex communications are a "version 2.0" feature, and that early implementations will be concerned with, for lack of a better term, more "Twitter-like" communications.
2) It has to be something that is effortlessly integrated into my daily routine. The word effortlessly is important; if I have to find a plug to recharge from every two or three hours, then I'm effectively tethered. If I have to lug around a heavy battery pack or bulky processor connected to a visor by wires, that's going to impact my ability to carry/use it comfortably for extended periods of time, and is probably not going to go over well in some social circumstances (until, of course, everyone's using them; look at what BlackBerry did for bringing smartphones into the business world).

Beyond those two elements, you start getting into specific application domains, and that requires more knowledge of the hardware involved, or, for that matter, of how that hardware can be extended. The video seems to show GPS capability, a built-in camera, some kind of wireless connectivity, voice input... it's a good start. From that alone, I can imagine use cases; I'm personally horrible at remembering names. A simple face-recognition "app" could easily call up a reminder of who the person you're talking to is, when you last met, or bring up their contact info ("Hang on, let me email you the details..."). Day planning, exercise monitoring (there are already smartphones that do that with Bluetooth heart-rate monitors, so why not?)... there's any number of things that smartphones can do now, but with a clunky and sometimes socially awkward way of interfacing.
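To make the name-reminder idea concrete, here's a toy sketch in Python. Everything in it is invented for illustration: the embeddings would really come from whatever face recognizer the device or cloud provides, and here they're just hand-picked vectors with a made-up distance threshold.

```python
import math

# Hypothetical contact database: name -> (face embedding, last-met note).
# Real embeddings would come from a recognizer; these are made-up 3-D vectors.
CONTACTS = {
    "Alice": ([0.1, 0.9, 0.2], "met at I/O 2011"),
    "Bob":   ([0.8, 0.1, 0.4], "met at the gym"),
}

def whoami(embedding, threshold=0.5):
    """Return (name, note) for the closest known face, or None if no
    contact is within `threshold` distance (i.e. an unknown person)."""
    best_name, best_dist = None, float("inf")
    for name, (known, _note) in CONTACTS.items():
        dist = math.dist(embedding, known)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_dist <= threshold:
        return best_name, CONTACTS[best_name][1]
    return None

print(whoami([0.12, 0.88, 0.25]))  # close to Alice's stored embedding
print(whoami([0.5, 0.5, 0.9]))     # far from everyone -> None
```

The HUD overlay would just render the returned note next to the person; the interesting part is that a simple nearest-neighbor lookup over embeddings is enough for a first version.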

There's a lot of potential here, and I'm finally seeing the holy grail (well, MY holy grail, anyhow...) of a device that can be used for ubiquitous, mobile personal computing without being heavily encumbered or constantly clutching at your smartphone or tablet. I'll definitely be watching this project with great interest, and I'll probably be one of the first in line to get my hands on the technology when it finally becomes available to the public. Here's hoping that's sooner, not later.
 
I want that staple of video games where there is a little 'radar' overhead view circle at the edge of vision with color coded moving dots for people and vehicles and map lines for walls and streets.

Also, don't forget to put in a rear-facing camera, which could act as a rearview mirror.
 
+Lucas Walter, a radar-like Google Maps HUD display with what information IS available would still be pretty sweet! I don't think getting vehicles on there (beyond general traffic data) would be realistic just yet, but people through social check-ins (e.g. Foursquare), Latitude, and the like could work!
 
What I'd love is the ability to "project" on surfaces (even if just flat, rectangular surfaces), with synchronisation between multiple glasses. Applications are surely numerous, but a couple are below.

Productivity: project a "computer screen" on a wall. Now developers can't complain about not having enough screen real estate. Share part or all of your screen (a physical area), or each person sees something different.

Entertainment: project a film on a physical surface (e.g. TV screen), with synchronisation between multiple pairs of glasses. You get the versatility of the glasses (different people can watch different things; large screen at low cost) while maintaining the feeling of a shared experience.
 
My suggestions:
1. Bluetooth connectivity (so that we can stream videos via mobile)
2. Facebook connectivity (to make it more popular)
3. Memory storage (so that we can store something, at least 1 GB)
4. Must be comfortable, and people with vision problems should be able to wear it.
 
I love the idea of the glasses and I'd like to try them to form an opinion. I think they would help a lot of people, because they provide information on everything from food to tours. That's fantastic for anyone who has problems with orientation or finding information. I feel great about it.
 
+Sebastian Thrun, thanks for sharing the update and asking for suggestions... I can't wait for this next big (read: small) thing after the internet.
My suggestions...
1. Journalling the day, with review and organization: includes events, faces, information; intelligent use of inputs (GPS, camera, voice, gesture) to organize the info.
2. Reading books, selecting from a book to clip the info, text to speech.
3. Face-recognition-related apps: name, event, follow-up.
4. Many more Google Glass apps...

5. Intuitive overlay of information and navigation (sure, Google knows this well).
6. Intuitive sharing between glasses.
7. Can't wait for a Hangout with Glass...

Though very excited, I would be keen to check the following:
safety (emissions, battery, etc. on a wearable device; clinical data?)
privacy (indicators)
cost

Thanks
Suresh
 
My suggestion:

I'm guessing it already has gyroscopes for tilt, an accelerometer and of course GPS.

So all I'm asking for is a fast enough processor so that we developers can develop AR games that will actually feel like the characters/objects are merged with our world, instead of today's standard of looking at them through a phone/tablet.

How amazing it would be to walk into your local park and chase and catch "Pokémon" or some non-intellectual-property-violating equivalent, with each digital representation actually interacting not with one person, but with multiple people at the same time. Or a child wearing Google glasses meeting and greeting a wizard, who is about to take him on an adventure in his local area. All we really need is the hardware to build these things: a fastish processor that can handle 3D OpenGL, GPS, and 3G or higher.

Although there is still the problem of GPS inaccuracy, so a character might be positioned differently for one person than for another. This might shatter the illusion if one observer watches another player talking to thin air, because the character he sees is not in front of the other player.

Galileo may solve that in the future, though.
 
+James Anderson I think the positioning problem could be solved with some local networking between pairs of glasses + kinect-like sensors + a particle filter algorithm.
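To illustrate what I mean, here's a minimal 1-D particle filter that fuses two wearers' noisy fixes of the same landmark. The noise figures are made up, and a real version would fold in the kinect-like range measurements as extra weighting terms; this is only a sketch of the fusion step.

```python
import math
import random

random.seed(1)  # deterministic for the example

def fuse_fixes(fixes, sigma=5.0, n=2000):
    """Estimate a shared 1-D position from several noisy GPS fixes.
    Particles start spread around the first fix; each subsequent fix
    reweights them by a Gaussian likelihood, then we resample."""
    particles = [random.gauss(fixes[0], sigma) for _ in range(n)]
    for z in fixes[1:]:
        weights = [math.exp(-((p - z) ** 2) / (2 * sigma ** 2))
                   for p in particles]
        particles = random.choices(particles, weights=weights, k=n)
    return sum(particles) / n  # posterior mean estimate

# Two wearers report the same landmark at 100 m and 104 m along a street;
# the fused estimate lands between the raw fixes, so both glasses can
# render the character at the same agreed position.
est = fuse_fixes([100.0, 104.0])
print(round(est, 1))
```

The point is that once the glasses exchange their fixes over a local link, they can converge on one shared estimate instead of each trusting its own GPS.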
 
One more suggestion from my side: Google could create 3D maps by capturing this information, as long as that is covered in the privacy policy.
 
I'd like to be able to look at a barcode and (if I want) be told places nearby or online where I can get the item cheaper.
 
Could these glasses along with robotics help surgeons (across the world) lend their expertise across borders?
 
Basic physical safety system: if an increasing number of people are using this, the chances of physical contact between one person and another would increase too. Say the system knows that your "glass" is overloaded with messages and you are actually walking or driving too fast; it could give out alerts like "Slow down... you're going to bump into the lady in front of you" or "Right side, fast car coming", etc. It would be very dreadful if people got into accidents while trying to enjoy technology.
 
* recognizing people I already know and showing me the name (useful at parties)
* giving me Google navigation arrows and pulse/speed/cadence data from ANT+ sensors while cycling
* voice control (Google voice input is not working for my voice/languages yet :((( )
 
An SDK is what developers are seeking at the moment. A couple of ideas come to mind on how to take advantage of this unique device.
 
There are already many very good ideas!
I could imagine it supporting me in reading books or articles: it sees what I'm reading and gives me additional information (summary, comments, details on the content, further articles, ...). Perhaps I could ask questions about parts of a text and others could answer.

Finding things could be improved, too. Let's stay with the book example. You're in the library and you are searching for a book. The glasses first direct you to the shelf and then highlight the book in the shelf. (Similar thing with products in stores, sight in a city, ...).

There could be several do-it-yourself instructions, e.g. how can I repair my dishwasher? The glasses show each step by highlighting the parts and give information on what to do.

What about a first-person view in sports? E.g. a player wears the glasses during a game. Viewers can switch their TV channel to the player's first-person view, or the player records the game and can improve skills by analyzing the video...

Advertising could be improved too. The glasses see that something fell on the floor and broke; thus the glasses display stores where you can buy the same thing or similar objects.

Well, there are many scenarios where the glasses can be useful ;-)
 
My biggest hope is that Google Glass will be extensible. Clearly, developers will be hungry for a SDK.

But what sort of potential could you unlock by pairing that with a simpler option for "normal people"? That HTML was so approachable, and that millions of non-technical people could "scratch their own itch", is one of the reasons the internet grew so large and so quickly in the nineties. There is a reason people still speak with affection about HyperCard, and one of the reasons Office is so successful is that people can easily extend their content creation with scripting to solve business problems.

Is there an opportunity to build on the work Google has already put into App Inventor?
 
I think the main thing we'll need software-wise is, as others have mentioned, SDK support. Give it the ability to communicate with the phones we're already carrying with us. (Android@Home?) Offload some (most?) of the heavy processing to our dual- and quad-core phones.

Hardware-wise, I would love for Glass to either be wifi/bluetooth-only or have the option of a wifi/bluetooth-only version. I would hate having to pay yet another monthly fee when it will only be used in conjunction with the phone I'm already paying for. Another thing that is borderline make-or-break for me is for it to have induction or pogo-pin charging and to come with a docking station. I already have too many micro-USB charging cables on my nightstand and I do not want to add another one.
 
Hi +Sebastian Thrun, I am a mountain biker. I finished the Craft Bike Transalp in 2010 and 2011. The tracks we rode were amazing, but finding them again is very difficult. So my dream is to wear glasses with a see-through screen that puts the signs for where to go right into my sight!
 
I think everything that's in the demo video would be an awesome start! One important thing: we need to be able to pay for it instead of it being advertising-based. It needs a small but easily reachable off switch.
 
Hey +Sebastian Thrun, it would be great if it had facial recognition so that if we forgot someone's name it could whisper it into our ear or show it on the screen. That way you wouldn't ever have to worry about forgetting someone's name. I'm not sure how you would keep the information. It would have to be only the people you tagged in pictures on Google+ probably to keep people from getting upset about their privacy.
 
You should capitalize on already existing Google products:
1- Google Translate = universal translator
2- Navigation = outdoor/indoor mapping
3- Video chat = Google Talk / Messenger
4- Some on-demand AR layers, like Wikipedia info and weather.

You should be really, really careful in opening it to third-party developers... at least at first. Take the Google+ approach... and wait until you really think you've nailed it.
 
A few potential applications:

Being able to see an informational layer "over" a map I see in the real world. For example, what additional helpful things can be presented on top of a subway map, a mall directory, or a campus map? These maps already contain data and context; how can you do more by building on top of this?
Annotations. MST3K proved you can turn trash into treasure by adding annotations. Could Google Glass sync with a movie audio track and show me witty/sarcastic comments to make a "chick flick" more bearable? How about streaming "color commentary" for a game, or translated lyrics at the opera? Aside from streaming/syncing, which could be computationally/bandwidth/power intensive, perhaps I could scroll through a sequence of annotations for, say, a Shakespearean tragedy. Speaking a search would be impolite at a play, while silently scrolling through a series of annotations might not be. What sort of interesting art could be created if a dramatist could create annotations intended to be streamed alongside a play, say to give us additional insight into a character's interior life, background, or motivations? What possibilities exist if members of the audience each follow randomly assigned characters this way, creating a tension between our shared knowledge and our particular knowledge?

(step by step "sit, stand, kneel" directions so that people not born into a particular faith community can know what to do at a wedding)

Interior directions. Turn by turn directions and GPS are great--outside. How can you help me inside a convention center, mall, museum, or airport?
 
I would use this to perfect eye-tracking tech. Canon uses it for focusing, and so should you. You need to know what you're looking at by looking at it. Then use forward tracking to watch your hands and sync the retinal tracker to the forward camera. Run everything in the cloud so you don't need to deal with the cost, heat, and processing issues. It would be extremely useful in education, for training like a real/AR operation game. Robotics control, programming teach points for machines, a rugged physical-training/sports version. How about business? Presentations, scanning in documents and business cards, taking info off applications, recreating basic 3D models, monitoring operations during meetings, facial capture for new clients synced to contacts, facial recognition to remember acquaintances, HUD translation at business meetings with foreign clients. Translate and display responses phonetically in the other language. Use eye tracking to navigate the camera in video games (huge). I could go on... I'd focus on cramming it full of sensors and high-speed communications.

Very exciting guys, keep it up.
 
It would be great if it could recognize faces and then display tags about the person on their body so I can recall facts easily, sort of like a CRM but for everyone. Remember b-days, likes, etc... "Never forget a face again!" Also... maybe easily add someone on Google+ if you meet them and they have an account.
 
Hand them out at trade shows with everyone's registration info loaded. Then you break the ice by having background info on people and knowing you are talking to the right people. It cuts down on time and increases profitability.
 
could also help you make connections you may have missed!
 
One thing it should have for sure is Bluetooth. Some hard-of-hearing people already have BT-enabled hearing aids; there is no way they can use plug-in earphones. So the Google glasses should handle this by design. Also, for some applications, it should be able to access an intranet/extranet from a WiFi access point. For some applications, it would not be desirable to access content through the extranet.

I disagree with Patrick Rochon: for the product to gain industry acceptance it must be extensible, and third-party developers should be more than welcome.
 
A way to interact with the world. Augmented reality, and a gesture interface as a kind of interface for work, for example: a 3D rendering of a graph that I can interact with via voice and gestures. We could show a math formula as a graph, and programs too (Lisp, anybody? :)). So I would like to sit on the bank of a whitewater river, moving my hands and making money for a living, all at the same time. And a color-coded water-velocity app for kayakers, please.
 
Embedded Google Goggles functionality, perhaps? As in, you could use the device's built-in camera to scan a book cover and have the Amazon page pop up.
 
I think integration with your Android cell phone should be a must-have. And integration with social networks would do wonders; using NFC you could add info on people you have just met. If you're meeting someone, that someone could become "visible" to you on Google Maps if he/she wished to. Associate it with Google Goggles and expand it: don't just recognize where you are; recognize shopping tastes, recommend products, show reviews on the product you're looking at. Hell, why not create a fashion advisor? Look in the mirror and see if what you are wearing looks acceptable (color-wise). Overall, should this product become available, it will be a major game changer. There is a world of possibilities awaiting its release!
 
Since you guys have the Android Market... you could make it like Layar, where you can add different layers of augmented reality that are useful to you.
 
It would be wonderful to be in any country or place in the world and, with Google Glass, know everything about it... food, language, streets... shops... whether the place is dangerous or not... airlines, travel times...
 
One concern I have is that this would work best in urban areas: areas with lots of things in close proximity and usually decent wireless coverage. The product would have to encompass pattern recognition: the things we do that we don't realize we do on a consistent basis, in order to build up a database of relevant data so it can be cached and pre-fetched for easier access. Additionally, something like Google Labs for Google Maps, which allows you to pre-cache a map area, would be a good fit. I would like to see a daily planner pop up to show me my day at a glance, then have it go away. Also, it would be good if the device had a driving mode so it only shows minimal information while driving, such as directions and turn-by-turn navigation; perhaps in driving mode, instead of showing messages, it does audio. I worry about distractions while driving.
Exciting elements I see are business uses for this product. For example: as an IT professional, I do a lot of system monitoring. It would be great if the monitoring tools I currently use could use an API to tie into Project Glass and display alerts on systems. Perhaps I have a customized personal web page that links to monitoring data, and I can have it display the page when needed via a voice-command macro. These are just the tip of the iceberg of all the things I could see doing with this technology.
 
Well, +Sebastian Thrun, I can't really say I want it to be anything more (or other) than what is portrayed in the video...because you guys have pretty much covered all bases already! Seriously, I can't think of a thing to add. Make that reality and you'll have made magic.
 
I'd love to be able to extend the capability of Glass by connecting it to other hardware in a manner similar to the Android Open ADK (though using a wireless interconnect instead of USB). It would be very exciting to me to have Glass as the center of a personal area network consisting of sensors or other peripherals that I could then integrate into the HUD in a seamless, natural way. It seems to me that the Glass will become a very personal extension of each wearer, so having the ability to customize and extend it will be vital.

Just a subset of the things I'd like to build for mine if this becomes a reality:
- Biotracking: a set of sensors to measure respiration, pulse, oxygenation, and other non-invasively measurable indicators and an inconspicuous display for those. They probably wouldn't be displayed at all unless they were out of range, but if the system detects that you were exercising, for example, it could help you keep your pulse and respiration in the optimum range for the type of workout.
- Environmental monitoring: a set of sensors to measure environmental characteristics: air quality, presence of hazardous substances, radiation level, etc. The data from these would be used to alert the wearer to any dangerous or abnormal conditions. Data from other wearers could be (anonymously) combined to map out where the safest areas are nearby.

I'm very excited to see Project Glass take shape; it resonates deeply with my interests. Thank you for keeping us updated on the project's progress.
 
I'd love it to be a source of information: not just my phone's notifications, but including facial recognition to give me info about my contacts, or recognizing places and giving me information.
 
Real-time transcripts: combined with a directional microphone (controllable by the wearer, which is important for improving the accuracy of voice recognition), real-time transcripts of conversations could be generated. That would be very helpful for hearing-impaired users, as well as for other users for life-logging applications or for documenting important conversations (e.g. minutes of a meeting).
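As a toy illustration of the minutes-of-meeting part: assuming the recognizer already emits timestamped text snippets (the snippets below are invented), grouping them into paragraphs at long pauses might look like this.

```python
# Hypothetical recognizer output: (seconds, text) pairs.
SNIPPETS = [
    (0.0, "ok let's start"),
    (2.1, "first item is the budget"),
    (9.8, "moving on to hiring"),   # long pause before this -> new paragraph
    (11.0, "two open positions"),
]

def to_minutes(snippets, pause=5.0):
    """Group timestamped snippets into paragraphs, breaking wherever the
    gap between consecutive snippets exceeds `pause` seconds."""
    paragraphs, current, last_t = [], [], None
    for t, text in snippets:
        if last_t is not None and t - last_t > pause:
            paragraphs.append(" ".join(current))
            current = []
        current.append(text)
        last_t = t
    if current:
        paragraphs.append(" ".join(current))
    return paragraphs

for p in to_minutes(SNIPPETS):
    print("-", p)
```

The hard part is obviously the recognition itself; once you have timestamped text, turning it into readable minutes is mostly simple segmentation like this.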

Communicating with other devices: near-field communication techniques or protocols should help us converse with the surrounding world. E.g. if we enter a restaurant, a virtual menu (e.g. a Subway menu) could automatically pop up; we then virtually make a sandwich of our choice, and this information is relayed to the sandwich artist in the restaurant, who prepares it to your specifications. The idea is creating a virtual manifestation of many physical entities and linking the two in real time.

Sharing encrypted sensor information: an information system that allows the user to selectively provide his encrypted sensor information to be used by other applications of his or her choice (after the data is aggregated and anonymized) in real time. This could spur many useful applications, but taking care of security is very important: as a lot of data is generated (potentially more than on a conventional smartphone), this is absolutely essential.

Alternative input mechanisms: although the video touts voice-based commands, they are still infeasible for many users and many locations (e.g. public spaces). Virtual keyboards may or may not work for a number of users, but input based on finger movements (using a sensory apparatus such as special rings worn on the fingers) may just work out better for a certain section of the population. Instead of developing something like this itself, if the company creates the right environment, many innovative developers would automatically work on products for different demographics. Input from a number of environments should be possible. Personally, I would love to convert a table into a virtual keyboard (e.g. by laying down a marked piece of paper that I can carry). If I start typing on it, rings on my fingers would interpret and convey this information to the screen. Using the smartphone screen as an input medium is also better for things such as browsing websites through the glasses.

Collaboration without the cloud in a proximal environment: the ability to sync multiple glasses to collaborate on editing tasks (such as photos, documents, etc.). The idea is that instead of collaborating via a cloud-based system such as Google Docs, this should happen locally for all glasses in a given environment (e.g. when conversing in a meeting). When you point your finger at a certain part of a picture as you observe it from your perspective within your glasses, the finger movement should be taken as input (as a cursor) that can be relayed to any other person looking at the same object within their perspective. This means that if I am explaining the parts of a car to someone else, I can view the 3D object within my glasses, interact with it using my bare hands, and explain it to someone else looking at the same object from their perspective as well (this may be difficult to achieve with limited hardware).
 
Probably asked already but what if you already wear conventional glasses?
 
If you need a tester in a small city, pick me. I already use my Android phone for way more than most people do so this would just be the next step in the progression.
 
Modularity: I want to be able to choose what features to use/add/remove.
SDK: there are an infinite number of applications for this hardware; many of them can help improve people's quality of life and productivity at work.
Integration with existing (cell and tablet) and future (robotic car) hardware.
Alternative input methods: a bunch of people staring into the void and talking to no one isn't ideal :)
... and obviously I'm waiting for a course on Programming Google Glass at Udacity.
 
Human-computer interaction is the only thing left for us to work on...
I believe Project Glass will be a giant leap in that direction.
It's one of those things which has an infinite number of applications...
I am dying to make my very own Glass out of the original code... :)
Is it gonna be open source...??
 
Project Glass is so cool! I think Google has got something here. I would definitely get these when they come out (provided that A. I can afford them, B. they stay with their current design and colors {I like the white}, and C. they don't require a smartphone or data plan).

These would especially be cool on the augmented-reality side of things. The camera in the front would be perfect for that. Imagine having a "holographic screen and keyboard" (not really, just images) on the glass and then having the camera detect specific hand movements, thus simulating typing on a keyboard and providing input! It would be awesome. I also think these would be cool if Google did something like Amazon did with the Kindle and incorporated free 3G in them. They could practically and easily steal the floor right out from underneath the major cell phone companies with free 3G and free Google Voice service! These would become the must-have device!

I have absolutely dreamed of having something like these, and now humanity may actually get to! I could easily see almost EVERYONE having a pair within 2-3 years after these go on shelves.

Even if these just hook up to your phone for basic HUD information (provided they support standard headsets & non-Android devices), I would still love them! I told my parents to take a rain check on my birthday present this year and wait till these come out to get me a pair.

Google Glass: The Future... :D
 
The medical implications of this are astounding, particularly for patients with early dementia (imagine Glass showing you the way home) or while examining a patient in a clinic: all of the data literally right in front of you!
 
If these do require a phone to work, please make them compatible with MOST if not ALL phones, not just Android. Not everyone has an Android phone, let alone a smartphone, and I would hate it if I couldn't use these. Also, as I stated in an earlier comment, give them free 3G like Amazon did with the Kindle; it'd make things easier (no contracts, phone pairing, or wifi to worry about). Some people have suggested pairing with one's phone and using its processor to run tasks on the glass, but I disagree. Cloud, yes; Android device required, eh... NO. Keep it simple and it won't need a big processor. We'd still need our smartphones for bigger tasks, and it would make these a lot more accessible to more people. Also, start over with the voice commands if you're still using Google Voice Search. I had a heck of a time getting it to recognize my voice (I'm an American, btw). Team up with Nuance if you have to! And replace that horrid voice that comes with Android devices. Keep things in the cloud when necessary as well; don't rely too much on on-device memory. This would be much more useful as a standalone device than as an accessory, and I bet it may sell more as a standalone device, too.

Also give it an augmented reality keyboard feature as an alternative to voice commands. Have the camera detect a large enough flat surface and then have the glass display a keyboard on that surface. Allow users to type on that keyboard to input information when necessary. Also, just a side thought: use the glass to display augmented reality holographic screens that a user can interact with. (I wouldn't care if you didn't do that.)
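The AR-keyboard idea above splits into two parts: a vision pipeline that finds the flat surface and reports fingertip "taps," and a simple mapping from tap positions to keys. As a toy sketch of only the second part, assume a hypothetical pipeline already reports taps as (x, y) coordinates normalized to the projected keyboard area (0.0 to 1.0); everything here is illustrative, not any real Glass API:

```python
# Toy sketch: map normalized fingertip tap positions onto a virtual
# key grid. The (x, y) tap coordinates are assumed to come from some
# hypothetical surface-detection pipeline, which is not shown here.

KEY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_at(x, y):
    """Return the character under a normalized (x, y) tap position."""
    row = min(int(y * len(KEY_ROWS)), len(KEY_ROWS) - 1)
    keys = KEY_ROWS[row]
    col = min(int(x * len(keys)), len(keys) - 1)
    return keys[col]

def type_taps(taps):
    """Turn a sequence of tap coordinates into typed text."""
    return "".join(key_at(x, y) for x, y in taps)
```

For example, a tap near the top-left corner maps to "q" and one near the top-right to "p"; the hard engineering is in detecting the taps reliably, not in this lookup.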
 
If you could add two more cameras to track the user's eyes, that would be good. You would be able to overlay the navigation system on the real environment, like showing an arrow directly on the road, and you would also know precisely where the user is looking. In that case, I think the glasses will be able to show the name and brief information about whatever the user is looking at, such as mountains or buildings.
 
I'd like to have access to a Google collective brain, so that each time I wonder whether something is true or false, or want more information on a subject, I can receive real-time information from verified sources. It's a brand new paradigm for learning anything: constant access to verified information overlaid on our visual field. But once information becomes omnipresent, anywhere and anytime, you have the responsibility to provide real information verified as true knowledge, or, if that's not possible, to make it obvious. Then we can make a different use of our internal brain memory, as there's no need to store on our local hard drive (the brain) information that is always available live online. The most difficult part of your glass technology will be the Siri-like (and hopefully far better) Google Assistant, providing the user with a natural language interface. The retina-scan display will provide a floating virtual screen before our eyes with very little energy.
 
Between Gmail, my Chromebook, Google Maps, my Android phone, Google Voice, Google Search, Google Docs, and Google Plus, I'm essentially storing my brain on Google. It kind of reminds me of The Matrix, where you "dial up" how to fly a helicopter. I am not sure I am ready for a doctor to "dial up" how to do brain surgery before they work on me, but for other things, this would be awesome.
 
It is time to pass the Turing Test... time to have sufficient artificial intelligence to answer any natural language question accurately and then decide what to display as the answer in the user's visual field... This part remains the biggest challenge!
 
I would hope to see a "Layer"-like capability where you can see services all around you. I'd like to see comments people made about a restaurant hovering by the door as I walk in. I'd like to scan a crowd and see public profiles as thought bubbles above each person who has the glasses or the app, and immediately know when "friends" are near me. Google Glass should eventually become like the highly integrated pseudo-exocortical interface of Manfred Macx in Charles Stross's "Accelerando."
 
Sebastian, how do I become involved in this exciting project? Just one of the areas I would be interested in is its application in the classroom, specifically higher education. I currently teach graduate students and am a former HP'er.
 
I would love to see these have the capability to work with prescription lenses.
 
So many great applications. But what about all the things we don't want these glasses to do? For every great, useful application I also see an abusive one. I'm afraid this technology will only make our already invasive world more so. Privacy will soon become the rarest of luxuries when the world is populated with millions (billions?) of data-mining Google-bots. But hey, they are pretty cool! ;)
 
+Sebastian Thrun One fun application that just crossed my mind would be for rock climbers. The glasses could show you where the next bolt, potential hold, etc., above your position is.
You could also prerecord a sort of safety/TODO checklist that is then displayed point by point whenever you start a potentially risky sequence or operation. That could actually apply to a lot of situations other than rock climbing.
 
GG could be the revolution in mobile devices; boy, I'd like to be on this project. There are a lot of evolutionary steps involved: at first it's only a display device, so battery, sensors, processing power, data storage, connectivity, etc. have to be elsewhere, perhaps being integrated into the device one by one.
And of course the display itself has to evolve. I hope Google starts with displays integrated into normal glasses (nearly full field of view, 3D, and a traditional good-looking design). Later on the display could be contact lenses, beamed onto the retina, or a direct neural interface :-)
 
1) Built-in "Google Goggles" to get information about the things you look at.
2) Create an "information security" and sharing policy from the start.
3) Implement gesture recognition.
4) The ability to translate the speech of a person standing in front of us.
5) Finally, perhaps the most important point... Glass is the start of an entirely new area. Before jumping straight to the model of collecting data and matching advertisements, try to think about a new business model that can be applied here. For example, allow people to record video and map interior areas (museums, sites, shops, etc. ...). I still have a few ideas I can share in private...
 
My favourite part is the "subway" part in the video. Great. :)
Also, the glasses should have a "silent mode": sometimes you cannot talk, but you would definitely still like to access the map, read messages, etc.
I was thinking about this idea for the last couple of years.
Really happy that Google will put this idea into reality.
 
Suggestions for current possibilities and future ones:

Current possibilities: navigation while driving a car (until we all have self-driving ones, of course) could be less distracting through the glasses. A rear-view camera that pans as you pan your head. While walking around, storing real-world locations and having them overlaid for later use (nice restaurants, little shops). Those locations should be easily shared with other people for meeting up. Adding event-based reminders (if in area X, meeting person Y, show reminder Z).
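The event-based reminder suggestion ("if in area X, meeting person Y, show reminder Z") is essentially a small rule engine over the wearer's current context. A minimal sketch, with all data shapes and field names invented for illustration:

```python
# Minimal sketch of context-triggered reminders: each rule lists the
# context conditions ("when") that must all hold before its text
# ("show") is surfaced. The context dict (area, person, ...) would come
# from some hypothetical location/recognition pipeline, not shown here.

def matching_reminders(reminders, context):
    """Return the reminder texts whose every condition matches context."""
    shown = []
    for rule in reminders:
        if all(context.get(key) == value
               for key, value in rule["when"].items()):
            shown.append(rule["show"])
    return shown

reminders = [
    {"when": {"area": "downtown", "person": "Alice"},
     "show": "Return her book"},
    {"when": {"area": "office"},
     "show": "Submit expense report"},
]
```

With that rule set, entering downtown while recognizing Alice would surface "Return her book," while simply arriving at the office would surface the expense-report reminder regardless of who is nearby.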

Future possibilities: include an eye tracker used to control the interface better (a wink could be a mouse-click event). Include a pico projector to show other people previously recorded situations/whiteboards. Include an outward-facing 3D camera (close-range and Kinect-like) that detects hand motion for writing on virtual screens or manipulating virtual objects. Transcribe all audio on the fly and allow selection of text for quick searches (someone talks about a subject; from the transcription, its name can be selected and looked up on Wikipedia).
 
That particular prototype from Vuzix uses half-silvered mirrors to fuse the virtual display onto the user's vision...
 
This is something I've been wanting for quite some time. In the long run I can see integrating these with a scaled-down Kinect-style device (not sure how practical that is, but whatever...) and a Bluetooth connection to a cellphone, and now you have a full Minority Report computer with a fully portable display.

I would love to work on something like this...
 
What about Majel's Google Assistant? It would be a useful building block for Project Glass's objectives. I would be very interested in testing any Majel function in my Google search, and even better in French natural language...
 
We are working on a serious application for a heads up display such as yours. How soon can we test with our system?
 
Display or read aloud files in Google Drive. But then you need an intuitive way to navigate through pages and files.
Next requirement: Ability to connect a wireless keyboard and mouse to edit those documents, type URLs and browse the net.
Next requirement: Please add me to the pilot program.
 
Dating plugin: show five tag words on every person you see (if they allow that option, of course). It would help me see the world better and not only look at appearances.
 
Again, the camera has to have zoom. Not digital zoom; optical zoom.
 
I think the main problem with Google Glass isn't what it can do, it's how it will get things done. I really love the idea, but I don't think it's practical yet, because I don't think people will go out in the street wearing those "weird" goggles (and yes, I know the revealed model is only a prototype). However, even if that noticeable size is needed for the goggles to function properly, they could still be used as a hands-free tool by bikers (in-helmet projection, communication...) or other drivers, where the bulk matters less. There are a lot of great ideas related to Google Glass, but people still need to know its true capabilities better to give you ideas you can really use.

P.S.: As an electrical and computer engineering student who has started to learn how computer vision and related fields work, I must say that the objectives of Project Glass aren't easy to accomplish. You have all my gratitude for trying (and, eventually, succeeding). Good luck :)
 
+Mark Lodes I want it too, and also to see my own software working on it. It's documentation software for assembling and disassembling bigger machines. That could really help engineers...
 
Combining Glass with Google's other services (Search and now cloud storage) holds unlimited potential. Some ideas:

- Monitor health, as in the cases Mark mentioned above
- On demand language translation services, provided in text. This is much less intrusive than having to pop out your phone and hold it up to someone while they are talking.
- Automatically process formulas and other such data (looking at an equation would show a popup with the answer, for example)
- Apps that are more specialized to a field. This in my mind is the killer feature. Hospitals could distribute software for doctors to call up information about dosage, drugs, symptoms, pull up and search through charts just by looking at a patient's door, etc. Mechanics could pull up part names for vehicles. Customer service could check stock, waitresses could pull up table seating, the list goes on. Almost every field can benefit!
- Locate nearby services of a certain type, with mapping as shown in the video. You can do this with phones already, but to be honest, these features would be used even more often if they were more tightly integrated into our lives with Glass. You can do most of the things a tablet does on a laptop, desktop, or cell phone, but that hasn't stopped tablets from rising.

The fewer steps people need to take to access the data they want, the more useful it is. Most people can't imagine leaving the house without their smartphone now, and I believe Glass will eventually be seen the same way.
 
Has anybody thought about the possibility of using hand and finger gestures as a user input device? Even to the extent of having a virtual keyboard overlay so you can send text messages without having to speak aloud or type on an external keyboard.
 
For ALL the information needed on Google Glasses, I suggest watching Futurama S06E03, "Attack of the Killer App." That's what we need in these augmented reality glasses :D
 
Make them secure; it would be annoying if people could "hack 'em".
 
Hi Sebastian. First, a big thank you for the AI class in the fall. I'm now using it in my packing-screen software at work!

Now, Google Glasses. You ARE Sebastian Thrun, so duh: integrate the glasses with the self-driving car. When you stare "too long" (relative to your normal pattern of looking at things) at a coffee shop while on the road (a sign saying coffee, online GIS data, etc.), the glasses pick up on it and suggest routing you past the nearest coffee drive-thru (and of course pre-order and pay for your favorite kind of coffee).

Should be easy... can't wait to see it in action :-)

Does Google have an official suggestion box for stuff? Maybe call it the glass box :-)
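The "stare too long" trigger described above boils down to dwell-time detection over a gaze stream. A hedged sketch, assuming a hypothetical feed of (timestamp_seconds, gaze_target) samples; nothing like this is a real Glass interface:

```python
# Sketch of dwell-based triggering: given time-ordered samples of what
# the wearer is looking at, report each target the gaze stayed on for
# at least `threshold` seconds. Sample format and threshold are assumed.

DWELL_SECONDS = 2.0

def dwell_triggers(samples, threshold=DWELL_SECONDS):
    """Return targets the wearer dwelt on for >= threshold seconds."""
    triggers = []
    start_t, current, fired = None, None, False
    for t, target in samples:
        if target != current:
            # Gaze moved: restart the dwell timer on the new target.
            start_t, current, fired = t, target, False
        elif not fired and t - start_t >= threshold:
            triggers.append(target)
            fired = True  # fire at most once per continuous dwell
    return triggers
```

A real system would then feed each triggered target (e.g. "coffee shop") into the routing suggestion, and would need smoothing, since raw gaze data flickers between targets far more than this sketch assumes.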
 
I noticed in the interview, as well as in the photos of the prototypes, that there aren't any earbuds. If we're going to be able to listen to music and video chat like in the video, maybe you should incorporate wrap-around headphones.
 
Can you sign up to be a tester for this gadget? :) I would love to test it in Denmark; I don't have any problem wearing glasses :P
 
+Project Glass +Sebastian Thrun 
I think the glasses can usher in a new realm of endless possibilities. While driving on a long trip, they could offer navigation, upcoming food stops with Zagat ratings and current special offers or discounts, the nearest clean bathroom, a park for the kids to play at, etc.

In the office, they could be used to highlight key figures during a sales presentation, share documents in a meeting, collaborate on a drawing, or pull down relevant information on a customer that helps you find the best way to connect with them. Maybe you two share a hobby that would never have come up in a typical business meeting.

At home, they could be used to aid with cooking by suggesting recipes based on the food in the fridge, or to photograph a wild bird in the backyard to get loads of information on it to share with your children.
They could be used as a learning tool (flash-card style) to study with during a few quiet minutes.

This is only the beginning....
Let me know if you want external testers!! I'm all in!
 
Glass can help bring information from your fingertips within sight ;-) If implemented properly, it can put the information people need for their daily jobs within easy reach. A good example is emergency medical services personnel, who could receive information via Glass and send information back to the hospital using the same device. I am very excited about this project!
 
I'm conducting research that examines the social implications of these wearable, location-enabled technologies and the educational potential of these devices.
 
I work in law enforcement, and after watching this video I can see the endless applications for law enforcement. Is Google going to work on a law enforcement edition?
 
I designed and implemented my first wearable AR system in the early 80's. Monocular info overlay display, on-body system and software, etc.

The prototype worked very well, and my invention specifications included many of the features that are more easily accomplished today (thanks to Moore's Law), detailed years before any AR patent filings.

The initial test scenario was to rapidly identify and verify VIP guests at a celebrity nightclub, authorize their entry, and display an escort map to the VIP room and private tables. Interesting that I thought of a "social engineering" application as the most effective use of the system.
Probably too much time spent at Studio 54 when I was younger.

The prototype cost me over $6K in parts alone, and the nightclub business wasn't a viable market for such a pricey perk for their important customers. A doorman with a good memory was the appropriate solution to the "VIPs kept waiting" problem at that time.

I detailed many other expansive applications for the platform (and AR as it is referred to today) in general, some of which were tested, others that were only proposed in my notes and diagrams.

I demonstrated the system at several trade shows and conferences before moving on to other areas of interest.
 
Education. Think of a big data repository with unlimited access for Google Glass users that allows for location, facial recognition, video and audio capture, sensor data, and more, in an assessment and recognised-prior-learning context. Augmediated with task overlays. Love of learning made easy and accessible, popular, creative, and affordable. I will be presenting a paper on this at http://www.veillance.me
 
Not sure if anyone has posted this idea, or even if this is the right place... Anyway, how about an art application in which the user can select different options from a palette and trace or draw what they see...