Brandyn White
3,158 followers
PhD Student/Faculty Research Assistant @ UMD, Entrepreneur, Open Source Enthusiast

Brandyn's posts

Post has shared content
We’ve expanded our testing program to a total of four U.S. cities over the last several months, so it’s time to add more vehicles to our fleet. We’re planning to more than double our fleet with the initial addition of about 100 new 2017 Chrysler Pacifica Hybrid minivans, and we hope the first few will be on the road by the end of this year.

This collaboration with Fiat Chrysler Automobiles (FCA) is the first time we’ve worked directly with an automaker to create our vehicles. FCA will design the minivans so it’s easy for us to install our self-driving systems, including the computers that hold our self-driving software, and the sensors that enable our software to see what’s on the road around the vehicle. The minivan design also gives us an opportunity to test a larger vehicle that could be easier for passengers to enter and exit, particularly with features like hands-free sliding doors.

In the coming months, our team will collaborate closely with FCA engineers. This experience will help both teams better understand how to create a fully self-driving car that can take you from A to B with the touch of a button. Collaborations like these are an important part of realizing the potential of self-driving technology to improve road safety and make transportation more accessible for millions of people.
Photo

Post has shared content
OK Glass, we’re coming to the UK

Probably the question we’ve heard more than any other is: when will Glass be available outside the US? Well, we’re starting out by dipping our toes across the pond. 

Beginning today, we’re extending our open beta Explorer program to the UK. The world sees the UK as a center (actually, a centre) of innovation. It has produced some of the greatest technology inventors and inventions of the last century, and people on the ground are always excited to explore new products and ideas.  

So, if you are here in the UK and fancy a demo, RSVP to try Glass at our London Demo Days on June 27-28 at http://londonthroughglass.splashthat.com/ or join the Explorer Program and get Glass at www.google.co.uk/glass.

We can’t wait to see you in a few days. In the meantime, get a glimpse of London life #throughglass in this video.

What apps have you developed in the past?

Not long after Glass was released, I created the WearScript project (wearscript.com) with the goal of simplifying development on Glass, integrating it with other peripherals and hardware devices (e.g., Pebble, Myo, eye trackers), and exploring research topics (e.g., computer vision, augmented reality, crowdsourcing, accessibility).  Along the way we've developed Glassware for a variety of use cases, some of which I've documented on my YouTube channel.

How did you first become involved with Glass?

My #ifihadglass application was selected, and prior to receiving Glass I had started development on what would become WearScript.  The first project I worked on once I had Glass was a system that lets visually impaired users ask questions about what is in front of them and have those questions answered by crowdsourced workers.  Along with this, I developed a way for a sighted user to verbally annotate the scene around them; when a visually impaired user glances at it, the annotation is read to them using text-to-speech.  This built on a paper I co-authored (VizWiz), and my current research focus is exploring new ways that wearables can assist visually impaired users.
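The annotate-then-readback flow can be sketched roughly as follows. This is an illustrative Python mock-up under my own naming (AnnotationStore, annotate, nearby are all hypothetical), not the actual WearScript or VizWiz code; on the real device the print would go to Android's text-to-speech engine and positions would come from the Glass sensors.

```python
import math

class AnnotationStore:
    """Illustrative store for location-tagged spoken notes."""

    def __init__(self):
        self._notes = []  # list of (lat, lon, text)

    def annotate(self, lat, lon, text):
        """Sighted user records a note at a location."""
        self._notes.append((lat, lon, text))

    def nearby(self, lat, lon, radius_m=25.0):
        """Return note texts within radius_m of the user's position."""
        hits = []
        for nlat, nlon, text in self._notes:
            # Equirectangular approximation: adequate at these scales.
            dx = math.radians(nlon - lon) * math.cos(math.radians(lat))
            dy = math.radians(nlat - lat)
            if 6371000.0 * math.hypot(dx, dy) <= radius_m:
                hits.append(text)
        return hits

store = AnnotationStore()
store.annotate(38.9869, -76.9426, "Entrance to the CS building is on your left")

# When the wearer glances around a tagged position, read matching notes aloud.
for note in store.nearby(38.9870, -76.9426):
    print("TTS:", note)  # stand-in for the text-to-speech call
```

The proximity check is the whole trick: annotation is just a write, and readback is a radius query triggered by where the wearer is looking.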

What’s different about your approach to developing for Glass versus other platforms?

When developing web or mobile applications, many developers are familiar with, and have good intuition for, the best practices and user-experience considerations.  Since Glass is a new form factor, however, I think it's more important that developers explicitly walk through the user experience at every stage of design.  A large part of that is understanding how to keep interactions with Glass quick, how to make information easy to digest, and how to make notifications timely.  For me that means prototyping designs early (I use WearScript for this) and iterating with prototypes on the hardware.  Glassware is more than just the graphical layout; it really requires experiencing the interface to get a good sense of how effective it will be.  It's also important to have a good understanding (even at a high level) of what Glass can physically do and what the SDK provides.  New capabilities are always being added, and after each XE release it's as simple as checking the change log to stay current.

How did you collaborate with WWF to create the Glassware?

Prior to meeting, I had an idea of their high-level goals and reviewed field manuals, training documents, and field note sheets I could use to understand how they currently perform their job.  We brainstormed over a Hangout to pin down the general scope of the task and had an in-person meeting where I showed demos of the user-interface elements we had discussed.  Then I developed a prototype and went through feedback iterations, adding new features and tweaking user-interface elements.  The goal was to get feedback early and often.  About 25% of my time was spent working with them directly and 75% on development.

What was the biggest challenge in developing the Glassware?

This project is unique in that the Glassware is intended to be operated in locations without widely available internet access. That constrained a few of the features (e.g., uploading reports live to a server for analysis); however, collecting notes from the device whenever internet access is available is still significantly faster than manually inputting handwritten field notes.  As internet access improves worldwide this is becoming less of an issue, but for now it's something to keep in mind for similar applications.
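A minimal sketch of that offline-first pattern: notes are appended to a local journal on the device and flushed in bulk whenever connectivity is available. The class name, file format, and upload hook here are my own illustration, not the actual Glassware.

```python
import json
import os
import tempfile

class FieldNoteQueue:
    """Illustrative append-only journal for field notes collected offline."""

    def __init__(self, path):
        self.path = path

    def record(self, note):
        """Append a note locally; needs no connectivity."""
        with open(self.path, "a") as f:
            f.write(json.dumps(note) + "\n")

    def flush(self, upload):
        """Send every queued note via upload(note); clear the journal on success."""
        if not os.path.exists(self.path):
            return 0
        with open(self.path) as f:
            notes = [json.loads(line) for line in f]
        for note in notes:
            upload(note)  # if this raises, the journal is kept intact
        os.remove(self.path)
        return len(notes)

# Usage: record in the field, flush once back in internet range.
queue = FieldNoteQueue(os.path.join(tempfile.mkdtemp(), "notes.jsonl"))
queue.record({"species": "tiger", "count": 1})
queue.record({"species": "elephant", "count": 3})
sent = []
print(queue.flush(sent.append))  # → 2
```

Keeping the journal until upload succeeds is the important design choice: a failed sync costs nothing, and a successful one is still one bulk transfer rather than per-note round trips.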

What would you do differently next time?

With more time, I think there are many other ways Glass could benefit WWF.  Their SMART monitoring program (http://www.sospecies.org/sos_projects/mammals/tigers2/smart_tiger/) stands out as a good candidate.


Any tips for the developers who will be working with nonprofits?

- Understand their needs and identify areas where Glass can make a substantial impact.  It may not be (and almost certainly won't be) the flashiest application that helps them the most.
- Keep reliability in mind when exploring more complex applications.  It's relatively easy to make things that are too difficult to use on a daily basis.
- Stick to the developer guidelines; it'll prevent future incompatibilities.  If you aren't planning on long-term maintenance, it's the best way to ensure that your Glassware has a long shelf life.

Post has attachment
We put together a video showing our latest work with WearScript (wearscript.com), including: 1) using Pebble, Myo, and magnets as input devices (controlling Glass and playing Tetris); 2) using Glass and WearScript to facilitate an improv comedy show; 3) projects from our WearScript hackathon at the MIT Media Lab (using an eye tracker to measure arousal, a Pebble to play Pokemon, and a game with Myo and networked LED bulbs); 4) new augmented-reality features that run in real time on Glass (image registration and AR tag detection); and 5) some examples of home automation with WearScript (Bluetooth beacons to control lights by proximity, having Glass control its own power supply, voice-activated carrot juicing, and taking selfies).

Post has attachment
I'm working on a project with visually impaired users and needed a way to use Tango hands-free.  Here is a case I put together that allows it to be chest-mounted: https://github.com/wearscript/wearscript-things/blob/master/tango_case/case_straps.stl .  We also added Tango support to http://WearScript.com (video coming), so if anyone wants to use it to hack with Tango, I can add you to the repo (especially if you want to do Glass+Tango work, that would help a lot).  Here's a picture of what it looks like; a rubber band goes over those pins to secure it, but it's a good fit even without one.
Photo