Mark Knichel
Mark's posts

This is an excellent framework that is used in a lot of products at Google.

One thing the README doesn't cover is that you can easily integrate this with your favorite module loader so that event handlers are loaded on demand only when they are needed, which is really useful for large web apps.
I'm super excited that we just open sourced JsAction – a tiny library for declarative event delegation. Thanks +Rui Lopes and +Nathan Naze  for taking the time to untangle it from our code base and open sourcing it.

My current – not yet open source – framework is based on JsAction, which is basically a way to say in the DOM: I care about clicks on this DOM node, please let me know under this "name" when there was such a click.
<button jsaction="click:userClickedLoginButton">

The beauty is that JsAction itself is tiny and can be easily inlined into a page. One can then late load the actual implementations that handle events – but the page is immediately responsive when it arrives at the user. This allows for implementing truly fast web pages, where everything is server side rendered and, after the HTML arrives at the browser, you do exactly nothing. Compare this to e.g. jQuery's event delegation, where you have to at least run this code
$(document).delegate('.loginButton', 'click', …)
which is not a lot of work but still non-zero and you have to download the code which does these explicit mappings.
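The delegation-plus-late-loading pattern can be sketched in a few lines. This is a minimal illustration of the idea, not the real JsAction API: a single "contract" records events by action name and replays them once the real handler arrives from a lazily loaded module.

```javascript
// Minimal sketch (not the real JsAction API): one contract receives all
// events, queues the ones whose handlers aren't loaded yet, and replays
// them when a module registers the real implementation.
function createEventContract() {
  const handlers = new Map(); // action name -> handler function
  const queue = [];           // events that arrived before their handler

  return {
    // Called from one global event listener; `action` would come from the
    // element's jsaction-style attribute, e.g. "click:userClickedLoginButton".
    dispatch(action, event) {
      const handler = handlers.get(action);
      if (handler) {
        handler(event);
      } else {
        queue.push([action, event]); // remember it for later replay
      }
    },
    // Called when a lazily loaded module registers its implementation.
    register(action, handler) {
      handlers.set(action, handler);
      for (const [a, e] of queue.splice(0)) {
        if (a === action) handler(e);
        else queue.push([a, e]); // still waiting on a different handler
      }
    },
  };
}
```

Because the contract is tiny, it can be inlined into the page, and no click is lost while the heavier handler code downloads in the background.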
CC +Google Chrome Developers 

Interesting impact on performance from browser resource prioritization - thank you +Shubhie Panicker
Interesting results from an experiment I ran recently to switch Google+ from downloading CSS via XHR to using the link tag.

We were using XHR to download CSS (this was to work around an issue in IE - exceeding the limit on the number of rules in a single stylesheet; for convenience we used XHR for all browsers). When it came to our attention that XHRs are requested at low priority (unless they are sync), we decided to run an experiment to see its impact on G+ latency.

In SPDY capable browsers this resulted in a big latency improvement. In Chrome 27 we saw a 4x speedup at the median, and 5x at 25th percentile. In Firefox 21 we saw a 5x speedup at median, and 8x at 25th percentile. No significant impact was seen at the 99th percentile.

In conclusion, in a SPDY world, XHRs are requested at low priority (except sync XHRs), and are opaque to the browser. Using declarative markup like <link> enables the browser to do appropriate resource prioritization.

Managing Page State using HTML5 History

Back in November I posted about how Google+ works behind the scenes. Many of you asked for follow-up posts, so today I wanted to talk more about how Google+ manages the state of the page using JavaScript and HTML5 History.


As I mentioned in the last post, we use JavaScript to update the page instead of full page reloads for improved responsiveness. The state of the page you are currently viewing is then stored as a “history token” in the URL. The token encodes the type of page you are on, but it also contains placeholders, such as the ID of the photo album you are viewing (e.g. /photos/116040565882702717981/albums/5676280311081199169, which refers to a specific album by a specific user). There is a mapping from history tokens to “views” - a view is simply all the logic needed to take a history token and display the right content. A view is also associated with a JavaScript bundle and a set of data.

In the client, views are updated through client side navigates. When you click to navigate around Google+, some JavaScript decodes the history token from the URL and maps it to the correct view. If the data and the JavaScript bundle for that view aren’t loaded yet, the system loads them before continuing on to display the requested page. This JavaScript then adds the history token to the browser history and updates the URL so that the back and forward buttons take you to the correct page. The server can also decode the history token, so it can render any link generated from the client.
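The token-to-view mapping described above can be pictured as a small route table. This is a hedged sketch; the table, view names, and parameter names are hypothetical, not Google+'s actual code:

```javascript
// Hypothetical route table: each entry maps a history-token pattern to a
// view name plus the placeholder values embedded in the token.
const routes = [
  {
    pattern: /^\/photos\/(\d+)\/albums\/(\d+)$/,
    view: 'album',
    params: (m) => ({userId: m[1], albumId: m[2]}),
  },
  {pattern: /^\/stream$/, view: 'stream', params: () => ({})},
];

// Decode a history token into a view and its placeholder values.
function decodeToken(token) {
  for (const route of routes) {
    const match = route.pattern.exec(token);
    if (match) return {view: route.view, params: route.params(match)};
  }
  return null; // unknown token: fall back to a full page load
}
```

Because the mapping is just data, the same table can be mirrored on the server, which is what lets the server render any link the client generates.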

Writing our own history

To manage history tokens, Google+ uses the HTML5 History API when browsers support it, but falls back to fragment based history when they do not. The HTML5 History API allows JavaScript to update any part of the URL without causing the page to reload, and provides a clean interface for updating the browser history stack. The spec is implemented by most modern browsers, but Google+ also supports older browsers that do not implement it. HTML5 History, when available, provides us with 3 main benefits:
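The feature-detect-and-fall-back step looks roughly like this. The sketch injects the history and location objects so the logic is testable; in a real page you would pass window.history and window.location:

```javascript
// Sketch of the fallback described above: use history.pushState when the
// browser provides it, otherwise store the token in the URL fragment.
function makeNavigator(hist, loc) {
  return function navigateTo(token) {
    if (hist && typeof hist.pushState === 'function') {
      hist.pushState({token}, '', token); // clean URL, e.g. /stream
    } else {
      loc.hash = token;                   // fallback, e.g. #/stream
    }
  };
}
```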

1. Cleaner URLs

Fragment based history must append history tokens to the URL fragment, so when you navigate to the stream page, the history token is tacked onto the URL after a “#”. Browsers that support HTML5 History instead produce a plain URL without the fragment, which is shorter and cleaner.

2. URLs that can be server side rendered

HTTP servers do not receive URL fragments, so they are not able to render the correct page when provided with a link generated by the fragment based history mechanism. However, servers can render the URLs produced by HTML5 History since it does not use fragments.

3. Crawlable URLs

Since these URLs can be properly server side rendered, search engines can crawl public pages and display them in search results, such as profiles and public posts.

Supporting all URL formats

Since Google+ supports both ways to manage browser history, the site must be compatible with both URL formats. Some JavaScript in the head of the page contains logic to fix the URL when an HTML5 History browser receives a URL generated by a non-HTML5 History browser, and vice versa. Additionally, since the server doesn’t receive URL fragments, when the hash fragment indicates that a different page was intended than the one the server rendered, the client hides the current page using CSS, does a client side navigate to the correct page in the background, and then unhides the page. This all remains invisible to the user.
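The URL fix-up decision can be sketched as a pure function (the names here are hypothetical): given the current path and fragment, it returns the URL the page should really be at, or null if the URL is already in the right format for this browser.

```javascript
// Hedged sketch of the fix-up step described above.
function canonicalUrl(path, hash, supportsHtml5History) {
  const fragmentToken = hash.startsWith('#') ? hash.slice(1) : hash;
  if (supportsHtml5History && fragmentToken) {
    return fragmentToken;   // e.g. /#/stream -> /stream
  }
  if (!supportsHtml5History && path !== '/') {
    return '/#' + path;     // e.g. /stream -> /#/stream
  }
  return null;              // already in the right format
}
```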

Hope you enjoyed this post! Leave a comment with any questions, and let us know if there are any other parts of Google+ that you want to hear about.

Hi everyone! I’m an engineer on the Google+ infrastructure team. When +Joseph Smarr made an appearance on Ask Me Anything back in July, many of you wanted to hear more about Google+'s technology stack. A few of us engineers decided to write a few posts about this topic and share them with you.

This first one has to do with something we take very seriously on the Google+ team: page render speed. We care a lot about performance at Google, and below you'll find 5 techniques we use to speed things up.

1. We <3 Closure

We like Closure. A lot. We use the Closure library, templates, and compiler to render every element on every page in Google+ -- including the JavaScript that powers these pages. But what really helps us go fast is the following:

- Closure templates can be used in both Java and JavaScript to render pages server-side and in the browser. This way, content always appears right away, and we can load JavaScript in the background ("decorating" the page, and hooking up event listeners to elements along the way)

- Closure lets us write JavaScript while still utilizing strict type and error checking, dead code elimination, cross module motion, and many other optimizations

(Visit the Closure project site for more information on Closure)

2. The right JavaScript, at the right time

To help manage the JavaScript that powers Google+, we split our code into modules that can be loaded independently and asynchronously of each other. You will only download the minimum amount of JavaScript necessary. This is powered by 2 concepts:

- The client contains code to map the history token (the text in the URL that represents what page you are currently on) to the correct JavaScript module.

- If the JavaScript isn’t loaded yet, any action in the page will block until the necessary JavaScript is loaded.

This framework is also the basis for making client side navigates in Google+ work without reloading the page.
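The two concepts above can be sketched together with hypothetical module names: a history-token-to-module map, plus a cache so each module's JavaScript is fetched at most once.

```javascript
// Hypothetical token -> module map and load-once cache (a sketch of the
// idea, not Google+'s actual module system).
const moduleForToken = {'/stream': 'stream', '/photos': 'photos'};
const moduleCache = new Map(); // module name -> loaded module

// `loader` stands in for whatever actually downloads and evaluates the
// module's JavaScript bundle.
function loadModule(token, loader) {
  const name = moduleForToken[token];
  if (!moduleCache.has(name)) {
    moduleCache.set(name, loader(name)); // download only on first use
  }
  return moduleCache.get(name);
}
```

An action that arrives before its module is loaded simply waits on this load instead of failing, which is what makes the "block until the JavaScript is loaded" behavior possible.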

3. Navigating between pages, without refreshing the page

Once the JavaScript is loaded, we render all content without going back to the server, since doing so is much faster. We install a global event listener that listens for clicks on anchor tags. If possible, we convert that click to an in page navigate. However, if we can’t client side render the page, or if you middle-click or control-click the link, we let the browser open the link as normal.

The anchor tags on the page always point to the canonical version of the URL (i.e. the HTML5 History form of the URL), so you can easily copy and share links from the page.
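The decision that global click listener makes can be sketched as a small predicate. Field names follow the standard DOM MouseEvent; the client-side-renderable check is a hypothetical stand-in:

```javascript
// Sketch: should a click on an anchor become an in-page navigate, or be
// left to the browser?
function shouldIntercept(event, canRenderClientSide) {
  if (event.button !== 0) return false; // middle/right click: open normally
  if (event.ctrlKey || event.metaKey || event.shiftKey) {
    return false;                       // user wants a new tab/window
  }
  return canRenderClientSide;           // only intercept renderable pages
}
```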

4. Flushing chunks (of HTML)

We also flush HTML chunks to the client to make the page become visible as soon as the data comes back, without waiting for the whole page to load.

We do this by
- Kicking off all data fetches asynchronously at the start of the request
- Only blocking on the data when we need to render that part of the page

This system also allows us to start loading the CSS, JavaScript, images, and other resources as early as possible, making the site load faster and feel more responsive.
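The flushing strategy above can be sketched as follows, with `write` standing in for flushing an HTML chunk to the client; every fetch starts immediately, and rendering waits only when the next chunk's data is not ready yet:

```javascript
// Sketch of chunked flushing: kick off all data fetches up front, then
// emit each chunk in page order, blocking only at render time.
async function renderInChunks(write, fetchers) {
  const pending = fetchers.map((fetch) => fetch()); // all fetches start now
  for (const promise of pending) {
    write(await promise); // wait only when this chunk must render
  }
}
```

Because the slow fetches overlap with the fast ones, the first chunks of the page can reach the browser while later data is still in flight.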

5. iFrame is our friend

To load our JavaScript in parallel and avoid browser blocking behavior, we load our JavaScript in an iframe at the top of the body tag. Loading it in an iframe adds some complexity to our code (nicely handled through Closure), but the speed boost is worth it.

On a side note, you may have noticed that we load our CSS via an XHR instead of a style tag - that isn’t for optimization reasons; it’s because we hit Internet Explorer’s max CSS selector limit per stylesheet!

Final Comments

This is just a small glimpse of how stuff works under the covers for Google+ and we hope to write more posts like this in the future. Leave your ideas for us in the comments!