Pascal Hartig
Software Engineer at Twitter; Contributor to TodoMVC, TasteJS, Yeoman


Post has attachment
I've been playing with the location APIs and ran into an issue with my physical Google Home that doesn't happen on the simulator. I've left some more details here:

Has anyone else seen this?

How do I whitelist my action for British English?

I have an action that already works for US English, but when I change my Google Home to British English and try to activate it, I get the response "Sorry, but [name] isn't available in British English."

Disable voice interpolation?

I've got a bot that provides train departure times in London. One of the possible responses is "District Line train to Richmond in 2 minutes."

Now, some part of the pipeline tries to be clever and pronounces "Richmond in" as "Richmond, Indiana". I use a simple string from my fulfillment service under `fulfillment.speech`.

How do I prevent this from happening?
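One possible workaround (an assumption on my part — the thread doesn't include a fix) is to return SSML rather than a plain string in `fulfillment.speech`, inserting a short pause so the TTS engine doesn't read "Richmond in" as a place name:

```json
{
  "speech": "<speak>District Line train to Richmond<break time=\"200ms\"/> in 2 minutes.</speak>"
}
```

Alternatively, rephrasing the response template (e.g. "In 2 minutes: District Line train to Richmond") avoids the ambiguous word order entirely.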

Post has shared content
In this video, +Colt McAnlis gives a refreshingly candid take on the enum "problem". Almost everything about this video is fantastic. Almost. Watch it before reading more, because it does a great job of outlining the pros and cons of enum usage:

Ok. So what's wrong here?

In the middle of the video, an absolutely ridiculous and sensational number is dropped whose sole purpose is to create a shock statistic, leading to an incorrect perception of an enum's effect: 2556 bytes.

What app, in the entire history of apps written for Android, has ever had a dex size of 2556 bytes? Zero. Not one. Ever.

The video goes on to show that adding an enum bloats this fictitious app to a whopping 4188 bytes. Why, that's basically 2x! I added a single enum and my app doubled in size!

Open Android Studio, go to File > New Project, select a minSdk of 16, select a 'Blank Activity' template, and click Finish. On a clean compile, how large is the dex file of this completely empty app? Two million, five hundred and twelve thousand, five hundred and eighty-two bytes. That's 2,512,582 bytes. Nearly 1000x larger than the "base" example used in the video.

Of course, this size stems from the default dependency on the extremely useful AppCompat, which in turn depends on the also-useful fat cow that is support-v4. If you remove these two dependencies, what does our dex size become? The answer may surprise you: who cares? It's an empty app.

Even if this library-free app perfectly matched the 2556 bytes mentioned in the video, adding an enum would be completely justified, as it would be the only thing in the app.

Whatever random SHA of Square Cash I have sitting on my machine currently clocks in at 6.4MB of dex. How much of that is from enums? Maybe it's 0.01MB. Or maybe it's 0.001MB.

Like I said, this video accurately presents the pros and cons of using enums, and it does show the relative size difference, which is what matters. It is a good video. But the overall dex size comparison is needless and serves only to mislead you into believing the impact is greater than it really is, which destroys the credibility the video built.

As a library developer, I recognize that these small optimizations should be made, since we want to have as little impact as possible on the consuming app's size, memory, and performance. But it's important to realize that choosing an Iterator allocation over an indexed loop, a HashMap over a binary-searched collection like SparseArray, or an enum in your public API over integer values is perfectly fine where appropriate. Knowing the difference so you can make informed decisions is what's important, and the video nearly nails that except for this one stupid stat.
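To make the trade-off concrete, here's a minimal sketch (my own illustration, not from the video) of the two API styles being weighed: integer constants, which add almost nothing to the dex, versus an enum, which costs a class plus an object per constant but gives compile-time safety:

```java
import java.util.Locale;

// Hypothetical illustration of the trade-off: int constants vs. an enum.
public class ConnectionState {
    // Integer-constant style: nearly free in dex size, but any int is accepted.
    public static final int STATE_IDLE = 0;
    public static final int STATE_CONNECTING = 1;
    public static final int STATE_CONNECTED = 2;

    // Enum style: each constant is a real object plus class metadata in the
    // dex, but only these three values can ever be passed.
    public enum State { IDLE, CONNECTING, CONNECTED }

    // Nothing stops a caller handing this method 42.
    static String describeInt(int state) {
        switch (state) {
            case STATE_IDLE: return "idle";
            case STATE_CONNECTING: return "connecting";
            case STATE_CONNECTED: return "connected";
            default: return "unknown(" + state + ")";
        }
    }

    // The compiler rejects anything that isn't a State.
    static String describeEnum(State state) {
        return state.name().toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        System.out.println(describeInt(42));            // unknown(42)
        System.out.println(describeEnum(State.IDLE));   // idle
    }
}
```

In a public library API the enum's few hundred extra bytes buy type safety for every consumer, which is usually the right trade.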

Post has shared content
Really happy to see Google taking energy consumption in Chrome seriously.
One of the big complaints about Chrome currently is that it's a battery hog, especially on Mac where Safari seems to do better.

The team has been working on addressing this; here are some cases that have recently been improved on trunk:

Before: Renderers for background tabs had the same priority as for foreground tabs.
Now: Renderers for background tabs get a lower priority, reducing idle wakeups on various perf tests, in some cases by significant amounts (e.g. 50% on one test).

Before: On a Google search results page, using Safari's user agent to get the same content that Safari would, Chrome incurs ~390 wakes over 30s and 0.3% CPU usage vs. Safari’s 120 wakes over 30s and 0.1% CPU usage.
Now: 66% reduction in both timer firings and CPU use. Chrome is now incurring ~120 wakes over 30s and 0.1% CPU use, on par with Safari.

Before: On, Chromium incurs ~1010 wakeups over 30s vs. Safari's ~490 wakes.
Now: ~30% reduction in timer firings. Chrome is now incurring ~721 wakeups over 30s.

Before: On, Chromium incurs 768 wakeups over 30s and consumes ~0.7% CPU vs. Safari's 312 wakes over 30s and ~0.1% CPU.
Now: ~59% reduction in timer firings and ~70% reduction in CPU use. Chrome is now incurring ~316 wakeups over 30s, and 0.2% CPU use, on par with Safari at 312 wakes, and 0.1% CPU use.

The Chrome team has no intention of sitting idly by (pun intended) when our users are suffering. You should expect us to continually improve in this area.

Post has attachment
It's gonna take me a few days to recover from this, but the #LondonCoffeeFestival was so worth the visit.

Post has shared content
I don't know why they call C++ strict and Haskell lazy, it's the wrong way round.

It's easy to write a Haskell program to input a list and start doing work on it before the user has even finished typing it in. This is the default in Haskell. It's eager to get stuff done. The norm in C++ is to wait until the user has finished entering the string before doing any work. Haskell is great if you want to compose a sequence of operations on lists. You don't have to wait for the entire operation on the first list to finish before starting work on the next because Haskell just can't wait to start evaluating the final result.
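A small sketch of that composition behavior (my own example, not from the post): operations over a conceptually infinite list run only as far as the consumer demands.

```haskell
-- Composing list operations over an infinite list: nothing here is
-- computed until a consumer demands it.
doubled :: [Int]
doubled = map (*2) [1..]      -- [1..] is infinite; no work happens yet

firstFive :: [Int]
firstFive = take 5 doubled    -- forces exactly five elements, no more

main :: IO ()
main = print firstFive        -- prints [2,4,6,8,10]
```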

The Haskell laziness page ( discusses how thunks are used to implement laziness. But really a thunk is a mechanism that allows Haskell to be eager. Rather than try to evaluate code that could cause a computation to block, Haskell puts that stuff in a thunk so it can get on with evaluating the part that's going to be most productive from the point of view of any consumers further down the evaluation chain.
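To illustrate the thunk mechanism (again my own sketch): a bound expression is stored unevaluated, so even an expression that would fail can be passed around freely as long as nothing forces it.

```haskell
-- A thunk defers work: binding an expression does not evaluate it.
boom :: Int
boom = error "never forced"

-- 'length' walks the list's spine but never forces its elements,
-- so the errors inside are never evaluated.
spineOnly :: Int
spineOnly = length [boom, boom, boom]

main :: IO ()
main = print spineOnly   -- prints 3
```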

It's all a matter of perspective. If you're a consumer of output from a Haskell program it looks eager but if a Haskell program is the consumer of your input it looks lazy.

(I was motivated to write this because of the tweet: )

Update: Repost with different permissions.