Allen “Prisoner” Firstenberg
Allen's posts

Get Your Assistant App Discovered

Listen to what I say
What do I mean?
Surprise me

Explicit Triggering
- “talk to appname”. Every app gets this and it goes to the welcome intent
- This is the phrase to use to tell people how to start
- “ask appname to do something”. Action phrase jumps right into activity.
- be sure to set up and test action phrases
- action package
- - Set queryPatterns in a trigger in an intent
- API.AI
- - Select the intent to trigger. List out in user says part
- - Actions on Google Integration and set as additional triggering intent
- Back in the console…
- - List up to five example invocations
- - These examples show in the directory listing
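The action-package route can be sketched as JSON; the action name, intent name, and query phrases here are hypothetical, and the exact schema may differ from the current Actions SDK format:

```json
{
  "actions": [
    {
      "name": "ORDER_WIDGET",
      "description": "Order a widget",
      "fulfillment": { "conversationName": "widget-shop" },
      "intent": {
        "name": "com.example.widgetshop.ORDER",
        "trigger": {
          "queryPatterns": [
            "order a widget",
            "buy a widget from Widget Shop"
          ]
        }
      }
    }
  ]
}
```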

What if they don’t even know where to begin?
The user can express an intent; the Assistant figures out what they mean and matches it to the most relevant app.
Implicit Triggering
- Currently, this suggests a good app and offers to connect them.
- In the future it will find some good options and suggest them.
- This ends up being a ranking problem, similar to what they’ve done in web search. But they still need to find the right signals.
- - Think about what the user has done
- - Think about what other users have done
- - what about other properties such as search?
- The action patterns are things that can be used to feed into this
- Brand verification (website or android app) links to website and lets them make the implicit link

What’s next is coming… eventually. This is how they’re thinking about what comes next.
Serendipitous Discovery (in dialog and not in conversation)
- For example, a query about something (like a restaurant) will give the result… and might also give action items to continue the conversation (like booking a table or looking at menus).
- Suggest user actions
- Non-intrusive and non-invasive
- “What can your app do so we can offer it up in situations like this”

Explore tab in Assistant app
- Browse things to do by topic
- Launch directly into some apps

Graphics are also a signal as to quality actions

Marketing
- “Sometimes we showcase you. We are looking for things to showcase.”
- They use quantitative analysis. But also personal qualitative experiences.
- “Did I smile? Did I laugh?” “Would I tell someone?” “Would I use it?” “Does it get better?”
- “Does it act as advertised?”
- “Can I pick up where I left off?”
- on the downside: “Was I confused?” “Did it feel slow?” “Did it handle my silence or messages poorly?”
- conversational hygiene. “Did it recover well if I spoke poorly?” “Did it help me out if I was confused?”
- - No match. If it didn’t understand, did it help to guide them along?
- - No input. If you didn’t get a response, did you recover from it conversationally?

Checklist for discoverability
- Do your branded verification
- Create an informative directory listing. Relevant. Actionable language
- Add action phrases
- Create an awesome app




Pullstring

[This one is especially for +Jason Salas, who had many good conversations with me over the years about similar subjects. I also had a similar chat with +Ade Oshineye and +Abraham Williams the day before, and some subsequent conversations on this topic, which I'll post separately.]

www.pullstring.com - building conversational interfaces
can build actions for Google Home.

Robert McKee - how to build screenplay characters through dialogue.
Under what pressure do you put a character, and at what point do they reveal their character?

How does the agent react when it doesn’t understand the question, or doesn’t know the answer?
This is pushing the conversational interface under pressure. It shows the character of the agent.
This communicates the tone, mood, and style of what you are trying to build.

Conversation: an informal exchange of ideas with spoken words.
We cannot separate having a conversation from thinking about whom we are having that conversation with. (“No. I am your father.” “To be or not to be”) Public, private, fictional, historical, we can place the person.

We learn about each other through language. Subtext and context. Tone of voice. Prosody. How formal or informal is our language?

Personification
- Making non-human things human.
- We project human traits on things.
- If we don’t create it, people will create one for it.
- Good: Your audience will personalize and invest themselves in it.
- Bad: People may associate negative qualities to it.
- “It is like letting people pick random colors and button locations of your GUI”. It might work, but you no longer control it.
- [It is almost a shame that “Google Assistant” doesn’t have the personality clearly associated with it. But that might be the point.]

“One can think of computer conversation kind of like interactive screenwriting. But we only write half the lines, and someone else writes the interleaving lines.”
This isn’t true in GUI.
Language is unconstrained.

This craft comes from the linear narrative arts.
The other part comes from gaming - engagement and retention.

Fidelity
- Adjacent to real-life full-fidelity human-human conversation
- People text and talk to each other
- Computer communication happens alongside this
Agency
- Take our cues from fiction.
- Characters and behaviors are effective because they can be created to match the situation
- [We build our characters to fit the plot]
Creativity
- “Creativity is just connecting things” - Steve Jobs

Content
- Language is imprecise, but can be incredibly efficient
- We invented language to communicate with complexity and subtlety.
- Still need to focus on content.
Intent
- Usually we focus on intent. That isn’t bad… but…
- It is a big CS focus
Context
- Piecewise-linear. Local context. Need to keep the past 10 or so elements in the conversation.
- Manage complexity

[I’m getting lost here. I’m going to have to re-watch this. A few dozen times.]

“We never want a black box in a creative agenda.”

Balance
- The more personality you offer, the more you engage. The more memorable you will be
- But… this may polarize or segment your audience.
- There is no one right answer. Average personalities are likely not the right answer.
- But… consider a plurality of personalities to get a good match with >95% of your potential audience.

What about gender?
- There is no one answer, since it becomes a cross-cultural question.
- [This was a really good question from a reporter, and we chatted about it a lot afterwards. I'll write thoughts on this later, too.]

“We will find the personalities that appeal to us.”


In Conversation, There Are No Errors

“Design is not just what it looks and feels like. Design is how it works.” - Steve Jobs

Humans can take cues from each other to get back on track in real time. But for computers, you need to plan for it. So we need to treat errors as turns in the conversation.

Any number of things can go wrong from the user perspective, for any number of reasons.

It becomes too easy to become formulaic.
Errors are opportunities.

Leverage how people expect voice conversations to work.

For any given prompt - you have to think what could go wrong. In general, two paths:
- No input. Timeout
- Input that we can’t handle. No match.

How to prevent errors.
- Using power of spoken language
- Be ready for questions about the question. Help in the moment. Ask the question again, or
- Categorize (these are available in API.AI)
- - Repeat
- - Help. Consider that you need to take the original action if they just go there without followup intents. API.AI can handle this
- - Quit. This can be an opportunity for reengagement, or to let them know how to come back or that their state is saved. Or generally to summarize what happened as they leave.
- Use built in capabilities. Conversation helpers
- - Ask for sign in. Ask for date-time. etc.
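As a sketch of the "categorize" step, here is a minimal classifier for repeat/help/quit meta-requests. The phrase lists are my own illustrations; a real app would define these as API.AI intents rather than string matching:

```javascript
// Classify meta-requests ("repeat", "help", "quit") before normal
// intent matching. Phrase lists are illustrative only.
function categorizeUtterance(utterance) {
  const text = utterance.trim().toLowerCase();
  const categories = {
    repeat: ['repeat', 'say that again', 'what did you say'],
    help: ['help', 'what can i do', 'what are my options'],
    quit: ['quit', 'i quit', 'goodbye', 'stop'],
  };
  for (const [category, phrases] of Object.entries(categories)) {
    if (phrases.some((p) => text === p || text.includes(p))) {
      return category;
    }
  }
  return 'normal'; // fall through to regular intent matching
}
```

Note that a quit match is a good place for the reengagement ideas above: summarize what happened and tell the user how to come back.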

“I don’t know”
“I give up…”
- It could mean they need help, or it could mean that they just want to get past the current point to go on
- This might mean that you need to use the same phrase in different intents / contexts

[I see that their quit examples have “I quit” but not “quit” itself.] [Later, when I asked, I was told that they know about the “Quit” problem and are working on it.]

There are very simple defaults for no-input and no-match.
- No-input just repeats
- No-match just goes to default fallback intent which has some standard basic responses.

No-input. User didn’t say anything, got distracted, or other things
- If we don’t need the input, change how we approach things or just end.
- Possibly reframe the question rather than repeating it verbatim.
- - How much help do they need?
- - Answer unasked question
- How hard do you try? Can ask a few different times.
- - Default quits after last attempt to ask
- Consider why they didn’t respond.

No-match
- Is the answer needed? If no, maybe just end.

Conversation exchange
- who owns the info?
- App has info
- - User may be trying to find out
- User has the info
- - App will try to find out
- - May have an idea of the range or what valid answers are. Need to consider how important it is
- Shared knowledge
- - Gatekeeping?
- - May need to explain how to find out or why you need it.
- - In context help

Rapid reprompts. Quick signals that you didn’t understand
- “Sorry”
- “Say that again?”
- But you need to be looking for the right things. If you’re not, you can frustrate them

Situational context
- May need to be clear what you want
- May need to break it down into smaller parts while you ask
- Answer unasked questions.
- Be proactive, but flexible

- Track user progress and behavior
- - Counters
- Design reprompt strategy
- - Generic fallback with rapid reprompts
- - Context sensitive fallbacks, possibly to pivot to new action (like asking if they are done)
- - Add variability or reframe as you reprompt
- - Possibly escalate and pivot as they get frustrated
- Know when to quit
- - Max error prompts
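The reprompt strategy above (rapid reprompt, then reframe, then in-context help, then quit) could be sketched like this; all the prompt text and the MAX_ERRORS value are made up:

```javascript
// Escalating reprompts: vary the wording, add help as errors mount,
// and bail out gracefully at a maximum. Prompt text is illustrative.
const MAX_ERRORS = 3;

function nextReprompt(errorCount) {
  const prompts = [
    'Sorry, what size did you want?',                      // rapid reprompt
    "I didn't catch that. Small, medium, or large?",       // reframe with options
    "You can say small, medium, or large, or say 'quit'.", // in-context help
  ];
  if (errorCount >= MAX_ERRORS) {
    return { prompt: "Let's try again another time. Goodbye!", endConversation: true };
  }
  return { prompt: prompts[errorCount], endConversation: false };
}
```

The per-conversation error counter would be kept in your app's conversation state and reset on any successful match.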

Beyond the basics…
- Variability. Disguise errors. Make the whole conversation be more engaging, not just your errors.
- Threading. Remember what happens along the way.
- - Context
- - How the conversation evolved over time. (Tracking, context, counters.)
- - Lists of prompts for randomization
- - Leverage dynamic values
- - Track which prompts are played and don’t re-use
- Watch what users are doing.
- - Training feature. See what users are hitting and what new intents you might need.
- - [Plus we have these new analytics where we can track conversations.]
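The prompt-randomization idea (keep a list of prompts, track which have been played, and don't re-use one until all have been used) might look like this minimal sketch:

```javascript
// Pick prompts at random, but never repeat one until all have been played.
class PromptPool {
  constructor(prompts) {
    this.all = prompts;
    this.remaining = [...prompts];
  }
  next() {
    if (this.remaining.length === 0) {
      this.remaining = [...this.all]; // every prompt played; start over
    }
    const i = Math.floor(Math.random() * this.remaining.length);
    return this.remaining.splice(i, 1)[0];
  }
}
```

In a real app the `remaining` list would be persisted in conversation state between webhook calls.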

Reminding them how far they’ve gotten, or that they don’t have far to go, will help them get back on track.


Transactions with the Google Assistant

[The kicker for this is something that wasn't mentioned until the very very VERY end. This is still in Beta - and only available on the phone-based Assistants right now. There are a lot of details to work out before it comes to the Home, including how Notifications might work.]

[It also frustrates me that things like Auth and Notifications keep getting lumped under Transactions. Yes, both are important for Transactions... but they're also very independent.]

There were some gaps with Actions
- Was only available on Home. That’s expanded.
- No transactions. Fixed today.

Key points
- Provide needed info
- Create new account and link
- Updated with notifications
- Assistant order history

What makes a good assistant
- who you are. important elements about you.
- always with you
- easy to use
- A real assistant has more than information - they can do things.

What is a transaction
- purchases
- reservations
- appointments

Three components. Independent, but can work together.
- Payments
- Identity
- Re-engagement

Payments and identity are big friction points where people may drop off.
- 1/3 abandon purchase during credit card entry.
- But if they already have a saved payment method with Google,
- Google can pass you a payment credential, which you can use.
- No Google fee - just payment processor fee. [This is big!]
- Or use your own payment processing (points, loyalty card, etc)

Identity
- 54%-90% drop off if there is account friction
- Google has identity info already, so can expedite
- Google sends an Identity Token to find an existing user in your system, or you use the info provided (via Google sign-in)
- You can ask them to accept additional OAuth2 scopes to complete the sign-in. [!!! I was just told this can’t be done] [Turns out the person who told me was behind the times. I need to test this, but I have it on good authority this CAN be done.]

Re-engagement
- All transaction history is in a single place
- Can initiate from transaction history or asking assistant (???)
- Provide order updates. [HOW?] [I was told afterwards that this is calling a Google URL with a POST. But this is still pretty thin.]

Building transactional apps
- Use a Helper to get confirmation or information controlled by Assistant, but in your voice
- - You tell Assistant what you want
- - Assistant passes that back on the next request
- What kind of Helpers do we have?
- - Ask for address
- - Propose order. Get back with payment credentials.
- - Confirm order. With summary
- - Link to account
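A sketch of the Helper round-trip for asking for an address, based on my reading of the conversation webhook JSON; the field names may not match the current API exactly:

```javascript
// Sketch: ask the Assistant's address Helper for the user's delivery
// address. You tell the Assistant what you want, and the answer comes
// back on the next request. Field names are my assumption of the
// conversation webhook shape, not confirmed API.
function askForDeliveryAddress(reason) {
  return {
    expectUserResponse: true,
    expectedInputs: [{
      possibleIntents: [{
        intent: 'actions.intent.DELIVERY_ADDRESS',
        inputValueData: {
          '@type': 'type.googleapis.com/google.actions.v2.DeliveryAddressValueSpec',
          addressOptions: { reason },
        },
      }],
    }],
  };
}
```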

[When confirming the order, do we have a time limit to reply saying the order is confirmed? What if dealing with our payment processor takes longer?] [I asked this afterwards and was told that they are dealing with this in three ways: (1) they are increasing the reply time, (2) they are creating a way to say "Wait a bit, I'm working on it," and (3) you can always send the reply saying you've accepted the order and then send a notification later.]

Followup actions
- Important updates trigger an alert to the user. [What about on Home?]
- All are available in the history
- You can attach followup actions.
- - Email
- - Phone
- - URL. Possibly deep links to websites or apps.
- - Intents puts you back in the conversation, and which intent you start in
- POST requests to an Actions on Google endpoint
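Since order updates were described as "a POST to a Google URL," here is a guess at what building that update body might look like. The field names are assumptions on my part, not confirmed API:

```javascript
// Sketch: build the order-update payload mentioned above. The field
// names are guesses at the shape, so treat this as illustrative only.
function buildOrderUpdate(orderId, state, label) {
  return {
    orderUpdate: {
      googleOrderId: orderId,
      orderState: { state, label }, // e.g. 'FULFILLED', 'Order delivered'
      updateTime: new Date().toISOString(),
    },
  };
}
// A real app would POST JSON.stringify(buildOrderUpdate(...)) with an
// OAuth2 bearer token to the Actions on Google endpoint.
```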

Account linking/creation
- If there is no access token sent to us, we can ask for a sign-in
- We can use JWT to find out if they exist,
- - If not, create the account and return a new OAuth2 then.
- We can ask for additional scopes
- If we can’t support JWT, we can use older web sign-in
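As a sketch of the JWT step: the payload segment of the token carries the identity claims. This decodes it for illustration only; real code must verify the token's signature against Google's certificates before trusting any of it:

```javascript
// Sketch: pull identity claims out of a JWT (the Identity Token).
// DECODE ONLY, for illustration. Production code must verify the
// signature against Google's public certs before trusting the claims.
function decodeJwtPayload(token) {
  // JWT segments are base64url-encoded; map to standard base64 first.
  const segment = token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/');
  const json = Buffer.from(segment, 'base64').toString('utf8');
  return JSON.parse(json); // e.g. { sub, email, name, ... }
}
```

With the claims in hand, you look up the `sub` or `email` in your user store; if the user doesn't exist, you create the account from the provided info.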

Launch on phone. Expand to Home soon. [!!!]


Smart Home and Google Assistant

“Google Assistant will be at the center of driving intelligent interactions with IoT devices”
- Right now, on/off
- Doesn’t consider context

Conversations > Commands

“A user should be able to say multiple things to a device.”

We need context. This is why we have the Google Home Graph.
- Stores and provides contextual data about the home and devices.
- We need better contextual understanding of where you are

Structure
- address
- managers
Room
- are in a structure
- labels
Device
- are in a room
- type, traits, state
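A sketch of what a device entry in a SYNC response might look like, with made-up ids and names; the type and trait strings follow the smart home documentation, but the overall shape is my assumption:

```javascript
// Sketch of a smart home SYNC response: each device reports its type,
// traits, and a room hint for the Home Graph. Ids and names are made up.
const syncResponse = {
  requestId: 'req-123',
  payload: {
    agentUserId: 'user-abc',
    devices: [{
      id: 'light-1',
      type: 'action.devices.types.LIGHT',
      traits: ['action.devices.traits.OnOff'],
      name: { name: 'Kitchen light' },
      roomHint: 'Kitchen',
      willReportState: false,
    }],
  },
};
```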

Smart Home Apps
- Built with Actions on Google
- This is how everyone has open access

How it works
- Smart Home partners start with hardware
- add a lightweight layer of cloud services
- Google provides NLP, Home Graph, and detailed handling of each device. “What does it mean to be a washing machine and how to interact with one?”

Flow for setup
- You get OAuth registration
- Request for devices, which are returned and stored in the Home Graph
- You don’t need to deal with state management

Flow for execution
- They handle TTS and parsing
- Send commands
- Get response
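The execution flow above could be sketched as a handler that applies an OnOff command and reports the resulting state. `deviceStates` is a stand-in for your cloud service's real state store, and the request/response shapes are my assumption:

```javascript
// Sketch: Google parses "turn on the kitchen light" into a structured
// command; we apply it and report the resulting state back.
const deviceStates = { 'light-1': { on: false, online: true } };

function handleExecute(requestId, commands) {
  const results = [];
  for (const command of commands) {
    for (const device of command.devices) {
      for (const exec of command.execution) {
        if (exec.command === 'action.devices.commands.OnOff') {
          deviceStates[device.id].on = exec.params.on;
        }
      }
      results.push({ ids: [device.id], status: 'SUCCESS', states: deviceStates[device.id] });
    }
  }
  return { requestId, payload: { commands: results } };
}
```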

Basic device types and traits
- Lights, switches
- Composite for types/traits
More defined device types coming
Custom traits also coming soon

Actions SDK can be used because they define the grammar.

Deploy
- Send Google devices for testing
- Google tests integration
- Google certifies integration. They want to make sure things work as expected. Latency is ok. etc.
- Launched

What’s next?
- Build apps: developers.google.com/actions/smarthome/
- New app features and device type support coming soon
- Look at the assistant SDK



Slowly getting closer

Back at I/O 2012, +Hermine Ngnomire asked how many women were there. I took this picture of a line into the men's room (with no line into the women's room) to give a rough idea.

I ran into the ever-awesome +Natalie Villalobos this afternoon and she reported that 25% of the attendees are women (or, as she liked to put it "halfway to half").

Silicon Valley has a diversity problem - and Google is part of that problem. But it is also good to see that they are trying to be part of the solution.


"Hey, aren't you the guy who did that show with +Dan McDermott​ a few years ago?"

queue all the things

I tried to get here early this morning to do some IoT codelabs. Apparently a hundred or so other people had the same idea.

Start the Presses

I ran into a reporter from Blloom Burg [sic - according to his badge] who sat down and commented "We've talked before about Glass, right?"

Turns out we had, a few years ago.

I asked him what he thought the big news item was from I/O this year. "AR/VR is cool, but it's incremental. I think the Assistant is the big thing."

We chatted a bit about Glass, and I opined that Assistant was a natural evolution from Glass. And, of course, he wanted an elaboration.

I referenced the excellent presentation from earlier today about multi-modal interfaces: finding the right way to present the right data at the right time. These were questions we started asking with Glass, and it continues today with the Assistant.


Web Security with Google

Number of hacked sites increased by 32% in 2016

What to do when you’re hacked
- You’ll most likely get a notification, possibly through the Chrome red screen. Or from your users. Or from usage spikes
- Quarantine. Take the site offline or take parts off. Change usernames and permissions. You don’t want to be hacked more while you’re trying to fix it.
- - Talk to hosting provider and other teams to get help
- Identification. It isn’t always easy to figure out what is wrong or the vulnerability.
- Cleanup. Remove the files. Make sure your site is running correctly again.
- Patching. Close the vulnerabilities to make sure they can’t do it again. Easiest thing? Just update everything. Tell Google to remove flags.
- Review and move forward
- - Back up your site. But make sure you also close the vulnerability
- - g.co/searchconsole
- - stay updated

How google can help
- g.co/hackedwebmasters
- g.co/webmasterhelpforum
- @googlewmc

Understanding how hacks work is useful for understanding why they happen.
- Redirects to black hat controlled site, sometimes only for search engines
- Involves names that look close, but not identical, to common sites
- Obfuscated page
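One pattern above, names that look close but not identical to common sites, can be checked mechanically with edit distance. This is my own illustrative sketch, not anything Google presented:

```javascript
// Flag domains that are one or two edits away from a trusted name,
// which is a classic lookalike pattern. The trusted list is illustrative.
function editDistance(a, b) {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)));
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,       // deletion
        d[i][j - 1] + 1,       // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)); // substitution
    }
  }
  return d[a.length][b.length];
}

function isLookalike(domain, trusted = ['google.com', 'paypal.com']) {
  return trusted.some((t) => {
    const dist = editDistance(domain, t);
    return dist > 0 && dist <= 2; // close, but not identical
  });
}
```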

Learning to protect site
- New hands-on web security course. 12 lectures
- User data
- - Authentication
- - Session
- - HTTPS
- Web attacks
- How to handle and sanitize 3rd party content

SQL Injection lecture sneak preview
