Wayne Piekarski
2,805 followers
Developer Advocate at Google
Wayne's posts

Post has shared content
Check out my blog post about the new Android Things Developer Preview 2 release. We now have USB Audio, native Peripheral API, support for Intel Joule hardware, and a TensorFlow sample.

The screenshot features my sweet dog Mila, and her breed is correctly detected by the TensorFlow sample code. Pretty cool!
Today we are releasing Developer Preview 2 for Android Things, bringing new features and bug fixes to the platform. We are committed to providing regular updates to developers, and aim to have new preview releases approximately every 6-8 weeks. DP2 includes many bug fixes and new features, such as USB Audio and a native Peripheral API. DP2 also adds support for the Intel Joule platform. In addition, we have created a highly requested sample that shows how to use TensorFlow for object recognition and image classification on Android Things.

See the blog post https://goo.gl/uLXxTZ for more information, and join our Google+ community at https://goo.gl/PC9RBw. #AndroidThings

Post has attachment
Over the holidays, I decided that the time was right to finally build a proper flight simulator at home. It uses three 42" LCD displays arranged around the pilot to cover a 270-degree field of view, and has a proper yoke, pedals, and throttle for control. All the rendering is done by a single Intel i7 PC with an Nvidia GTX 660 from 2 years ago. The new X-Plane 11 beta supports multiple displays really well. With the three large displays, you get an incredibly immersive experience, and you can easily look out the left and right windows. You can "feel" the aircraft as it rolls around; it is really awesome.

I've been waiting to do something like this since 1990 as a kid, when I played Flight Simulator 4 on my 286 with a little 14" CRT monitor. But anything bigger was impossibly expensive. Around 2002, when I was a PhD student at the University of South Australia, I borrowed a bunch of expensive projectors and built an immersive setup from a cluster of 4 PCs synchronized with FSUIPC. It was pretty nice, but the projectors made a full wraparound difficult, and it required too much space and equipment. Now it is 2017, 42" televisions are cheap, and a single GPU from 2 years ago can easily drive four outputs at 1920x1080. I used some shelving racks in my garage to hold the monitors, and some custom wood frames to support all the controls.

This project has been a good learning experience to see how it feels and what needs to be improved. I can use more custom wood frames to bring the monitors tighter together, but I'm pretty happy with it so far. So here are some panoramic photos and videos that try to show what it looks like ... enjoy!
2017-01-07

Post has attachment
With the availability of Actions on Google, developers can now create their own conversation actions for the Google Assistant. However, there is a lot more to designing an Action than just asking simple questions and parsing the responses. It turns out that human conversations are far more complex than many of us would think. Imagine trying to explain to a robot how to walk.

In this video, +Nandini Stocker from Google's Conversation Design Team explains the challenges of designing voice user interfaces, so that the interaction feels natural for the user. It is a very important topic, but a very new area that many developers don't have much experience in. Nowadays we have graphical UI guidelines that have been refined over many decades, but we are still in the early days of something similar for speech. Check out the video, and then let's use our community https://g.co/actionsdev to have a conversation ... about conversation!

https://www.youtube.com/watch?v=MSUPVbbhIGA

Post has attachment
Announcing updates to Google’s Internet of Things platform: Android Things and Weave

Today we are announcing a full range of solutions to make it easier to build secure smart devices and get them connected. We are releasing a Developer Preview of Android Things, an operating system for connected devices that has the support and scale of existing Android developer infrastructure. You can now develop IoT software using Android Studio and the Android SDK. We are also updating the Weave platform to provide an easy way to add cloud connectivity and management to devices, and enable access to Google services like the Google Assistant and many more over time.

Learn more about Google’s IoT platform from our blog post at https://goo.gl/eENGtu

#AndroidThings #Weave

Welcome to our new IoT developer community!

In this community, you can learn more about working with Android Things, Weave, and Google Cloud Platform. I am your Developer Advocate, and I am here to keep you up to date, answer questions, and pass feedback along to the product teams. This community is monitored and supported by me and many other members of the different product teams.

You can use this community to ask development questions, discuss tools and new techniques, share source code, and request features and fixes you would like. Over time it will evolve to meet the needs of everyone participating.

We also maintain StackOverflow tags at stackoverflow.com/questions/tagged/android-things and stackoverflow.com/questions/tagged/weave for specific coding questions.

In general we want to keep this community quite open, but please do not advertise your apps or services to this community unless you make the source code available under an open source license.

Welcome and I hope you enjoy this community!

We just updated the Actions on Google Client Library for Node.js to version 1.0.3, which introduces a change that might require those of you using Actions SDK (and not API.AI) to update your code. With this update, references to “inDialogTriggers” were removed, as that functionality is currently unsupported. To help with next steps, we've also updated our samples so that they use this new version of the library. Please refer to the samples as you build your own conversation actions. We apologize for any inconvenience and look forward to seeing what you build!

Post has shared content
Today, we have launched a new developer platform to build Conversation Actions using Actions on Google, to bring your services to the Google Assistant on Google Home. While it is possible to directly parse strings from the user, there is a really impressive developer tool, API.AI, which makes this experience even easier. You define sentences using the API.AI interface, and it handles all the parsing and interaction with the user for you.

It is really easy to get started with Actions on Google, and you can learn more from our blog post https://goo.gl/RjhcOF and the introduction video below. Join our new G+ community at https://g.co/actionsdev to learn more with other developers.

#ActionsOnGoogle  
Actions on Google: Introduction to Conversation Actions

Starting today, you can build Conversation Actions using Actions on Google. This developer platform allows you to bring your services to the Google Assistant on Google Home. Using the Actions SDK, developers can directly parse requests and construct responses that adhere to the Conversation API. Developer tools such as API.AI make the experience even easier, and you can use a graphical user interface to define the conversation.
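To give a rough feel for what "parse requests and construct responses" means, here is a minimal sketch of building a Conversation API-shaped response in plain Node.js. The field names (expect_user_response, expected_inputs, final_response, and the assistant.intent.action.TEXT intent) are my approximation of the early v1 wire format, not the authoritative schema; in practice the Actions SDK client library or API.AI assembles this for you.

```javascript
// Illustrative sketch only: builds a response object shaped roughly like
// the Actions on Google Conversation API. Field names are approximate
// assumptions based on the v1-era format, not the official schema.
function buildConversationResponse(speech, expectUserResponse) {
  const response = { expect_user_response: expectUserResponse };
  if (expectUserResponse) {
    // Keep the conversation open: prompt the user and declare which
    // intent we expect their next input to match.
    response.expected_inputs = [{
      input_prompt: {
        initial_prompts: [{ text_to_speech: speech }]
      },
      possible_intents: [{ intent: 'assistant.intent.action.TEXT' }]
    }];
  } else {
    // Close the conversation with a final spoken response.
    response.final_response = {
      speech_response: { text_to_speech: speech }
    };
  }
  return response;
}

// Example: ask the user a follow-up question, keeping the dialog open.
const reply = buildConversationResponse('What city are you in?', true);
console.log(JSON.stringify(reply, null, 2));
```

The point of the sketch is the two response modes: asking a question keeps the mic open for another turn, while a final response ends the conversation.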

It is really easy to get started with Actions on Google, and you can learn more from our blog post https://goo.gl/RjhcOF and the introduction video below. Join us at our new G+ community at http://g.co/actionsdev to keep up to date and share ideas with other developers.

#ActionsOnGoogle
