These words from +Brad Abrams sum up why I'm into ubiquitous computing.
Some #ThursdayThoughts on the impact of voice technology from Googler Brad Abrams:

Convergence Approaching

Yesterday, I spent the day at a workshop on some fairly mundane technology. Important. Useful. New, but not bleeding edge. But somewhat mundane.

No fewer than five people at the workshop asked me about Glass. One person asked me to take their picture several times after I demoed it. One pulled out their phone and started searching eBay listings.

While I was there, news about Dialogflow (one of the tools behind the Assistant), voice interactions, and Glass came out of Google's Next conference in San Francisco.

This isn't really saying "Glass is back". We've known for a year that Glass was still around. This isn't a consumer model. It isn't even running the Assistant, which is my eventual hope. But it is a demonstration of how some companies see the vision and are working to make it a reality, one step at a time.

https://www.wired.com/story/google-glass-is-backnow-with-artificial-intelligence/

Do you hear the same thing I hear? Does your Alexa or Google Home hear the same thing? Probably not, which is why you're hearing a lot of four-letter words.

My thoughts on Laurel, Yanni, and the Alexa that started sharing a conversation with someone else.

http://www.iaflw.com/2018/05/hear.html

Dueling Technologists at DevFestDC

+Noble and I have known each other for about 5 years, but in two weeks we'll be facing off! We'll be debating the role of some of our favorite technologies in our lives. Which ones? Get your tickets to DevFest DC and find out!

https://devfestdc.org/

Five Years with Glass

Five years ago today, I took a small trip to the Google offices in Chelsea Market to pick up my pair of Google Glass.

Five years.

Today, I'm sitting at I/O 2018, waiting to hear from Google what is next with this technology they launched at I/O 2012.

A few people will be confused - what am I talking about? Glass was cancelled, wasn't it? Just a handful of us are still wearing it, with a batch more using it in niche industries and specializations.

Yes... but Glass was more than just the form factor. It was an idea. A new way of working with technology. As we have repeated over and over, Glass was meant to be "there when you need it, out of the way when you don't".

Today, we see that same concept applied to many other form factors. We have smart watches, smart speakers, and smart displays. There when you need them. Out of your way to experience the physical world when you don't. There to take a note, ask a question, or quickly respond to a message - so you can get back to the things you enjoy in your life.

At the heart of those form factors is what I think is the crucial technology - the Google Assistant. There with its own commands, expandable with Actions built by the community, and on a growing number of devices courtesy of Android Things.

At I/O 2016 and 2017, we started to learn about the Assistant. We started to get our feet wet. This year, I think we will learn a lot more. We're understanding how to use the Assistant better and what more we need from the platform. We'll hear about how Android and the Assistant will be working even closer together, and how developers will be able to take advantage of that. Developers will have to learn how to adapt to a host of form factors - from voice-only devices to mobile devices to the living room TV to items in our kitchens.

I've renamed this collection to reflect that - Glass is still the form factor I love, but more and more I think the crucial concept is that Glass showed us we can use our voice to interact with our digital world. That our devices need to seamlessly integrate with us and our world. And that our Assistant is there to help us do great things.

"But what," I hear you asking, "about Glass? Will we see the Assistant on Glass?"

I hope so. Oh, how I hope so! We know Alexa is getting a face-wearable form factor this year. We saw Intel try and give up. I have been reserved in past years... but I'm hopeful that this is the year we see the new version of Glass we've been waiting for - with the full power of the Assistant always there to help us... and waiting patiently on our heads when we're interacting with the rest of the world.

I'll be posting in my I/O 2018 collection over the next few days. Follow me there or at https://prisoner.com/io/

If you're developing for the Assistant... I have something new.

Multivocal is a library for node.js that takes a different approach to developing for the Assistant.
Introducing Multivocal

I'm pleased to announce that Multivocal 0.6.0 is released. (More like a pre-release.)

What is Multivocal?

Multivocal is a node.js library that takes a new approach to helping you build for Actions on Google and Dialogflow (both v1 and v2). It is largely configuration-driven - the phrases, cards, lists, and so forth that you intend to send back to the user are stored as templates in a configuration file. Your handler functions primarily focus on loading the values that you'll use in the templates. Using Firebase Cloud Functions, Google Cloud Functions, your own Express server, or AWS Lambda is as simple as changing one line.
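
Here's a rough sketch of the shape of a Multivocal webhook. It is a simplified illustration rather than a copy-paste recipe - the configuration keys and function names shown here are approximate, so lean on the documentation at https://multivocal.info/ for the exact API:

```javascript
// Simplified illustration - names are approximate; see multivocal.info for
// the authoritative configuration keys and function names.
const Multivocal = require('multivocal');

// Responses live in configuration as templates, not in handler code.
new Multivocal.Config.Simple({
  Local: {
    und: {                              // "und" = the default locale
      Response: {
        "Action.welcome": [
          "Welcome to my Action!",
          "Hello! Good to see you again."
        ]
      }
    }
  }
});

// Choosing a deployment target is a one-line decision - Firebase Cloud
// Functions here; Google Cloud Functions, Express, or AWS Lambda look similar.
exports.webhook = Multivocal.processFirebaseWebhook;
```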

Multivocal handles most of the boilerplate for you as well, letting you focus on what you want to say. It handles things like unknown input, no input, and the different ways users can quit. It keeps track of how many times a user has visited, or even run a particular Intent, in the current session or ever, and lets you send different messages based on these criteria.

Follow this page to hear about further updates, and check us out at https://multivocal.info/ which contains links to the npm entry and the GitHub repository, as well as some initial documentation and examples. (More documentation and examples are forthcoming - we're targeting feature and documentation completeness with version 1.0.0.) We also have plans to support the Actions SDK, the chat platforms supported by Dialogflow, and even Alexa.

Multivocal is open source - check it out, try it, and provide feedback and patches!

In-Conversation Media

The Assistant and Actions on Google team have been on a tear this year! But I think today's announcement has a feature that is particularly notable.

On the surface, the new Media Control seems a lot like Alexa's feature that lets you play long-form audio through the voice agent. There is one huge difference, however. When you start media playback through Alexa - that's it. Your skill can't return to conversation mode. So Alexa can't play some music, ask you how you liked it, have a conversation with you about it, and then play something else.

With the Assistant - you can. This media is just like everything else in a conversation. A conversation with music! How radical!

With the Media Control, the response you send includes a message (textToSpeech or ssml) and the URL of audio to play. When the audio is done playing, your webhook gets a message that the audio is complete - just like any other message your webhook might receive. You then reply - with more text, with a question, with audio, with whatever you want! While the audio is playing, the user can interrupt with "Hey Google" and a command, and that is sent to your Action.
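
To sketch roughly how that flow might look with the actions-on-google Node.js client library and Dialogflow: the intent names, example URL, and response wording below are my own placeholders, and the helper names should be checked against the official docs and samples rather than taken as canonical.

```javascript
// Rough sketch using the actions-on-google (v2) library with Dialogflow.
// Intent names and the audio URL are placeholders for illustration.
const { dialogflow, MediaObject, Suggestions } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

// Start playback: spoken text plus the audio URL, in a single response.
app.intent('play.hold.music', (conv) => {
  conv.ask('Let me work on that. Here is some hold music while you wait.');
  conv.ask(new MediaObject({
    name: 'Hold music',
    url: 'https://example.com/hold-music.mp3',   // placeholder URL
  }));
  // Devices with screens expect a suggestion chip alongside media playback.
  conv.ask(new Suggestions('Cancel'));
});

// When playback finishes, a Dialogflow intent wired to the
// actions_intent_MEDIA_STATUS event is triggered, and the conversation
// simply continues like any other turn.
app.intent('media.status', (conv) => {
  const mediaStatus = conv.arguments.get('MEDIA_STATUS');
  if (mediaStatus && mediaStatus.status === 'FINISHED') {
    conv.ask('Thanks for waiting - your results are ready. Want to hear them?');
  } else {
    conv.ask('Should we pick up where we left off?');
  }
});

exports.fulfillment = functions.https.onRequest(app);
```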

The implications for this are huge.

This can be used as a "launch screen", to let the user know immediately that they're using your Action while you cache some resources about the user and get ready for their next request.

Have a process that will take a few seconds? Launch some hold music!

Want to give your users some time pressure on a quiz game? Play the countdown music and challenge them to reply before it's over.

Can you think of a few real-time voice-driven games where you want some background music while they play? I can, and now I have a way to implement it.

And I think we've just scratched the surface of how to use this new component. Someone correct me if I'm wrong, but I think this is the first new VUI element that has launched since the Actions on Google platform itself launched. There are some really powerful things we can do with it, and I'm excited to see where we take it.

Kudos to the team on a great launch! +Brad Abrams +Ido Green +Leon Nicholls +Silvano Luciani

https://developers.googleblog.com/2018/03/new-creative-ways-to-build-with-actions.html

Five Years of Voice and +Google Docs

Five years ago today, I was learning how to build for Google Glass. My instructors, including +Timothy Jordan and +Jen Tong, weren't just trying to help us understand the style and syntax - we had to learn a whole new way to design, because our users would be using our programs in ways most of them never had before.

What I created in just 24 hours was a tool to let you add and view a to-do list on a spreadsheet... using just your voice. It was rough, and had bugs, and didn't win the hackathon... but it convinced me that voice would be a powerful tool in a wide range of applications.

Vodo evolved out of this prototype and this belief.

Voice is now the heart of the Google Assistant, and I've been working to get back to my original vision for Vodo, and go beyond it. Vodo Drive is available for the Assistant now, but has quite a few bugs. We've squashed many of those, have added a couple of new features, and are getting ready for the new release as soon as we resolve an issue with the Assistant configuration.

But I'm using this test version on a daily basis, and I wanted to show a little bit of my favorite new feature.
It was five years ago today at the Glass Foundry event in NYC that Allen first used voice to control a spreadsheet. Here's a little taste of the next version of Vodo Drive.