Hi everyone! I’m an engineer on the Google+ infrastructure team. When +Joseph Smarr made an appearance on Ask Me Anything back in July (http://goo.gl/GbdYv), many of you wanted to hear more about Google+'s technology stack. Several of us engineers decided to write a series of posts about this topic and share them with you.

This first one has to do with something we take very seriously on the Google+ team: page render speed. We care a lot about performance at Google, and below you'll find five techniques we use to speed things up.

1. We <3 Closure

We like Closure. A lot. We use the Closure library, templates, and compiler to render every element on every page in Google+ -- including the JavaScript that powers these pages. But what really helps us go fast is the following:

- Closure templates can be used in both Java and JavaScript to render pages server-side and in the browser. This way, content always appears right away, and we can load the JavaScript in the background, "decorating" the page and hooking up event listeners to elements along the way (there's a sketch of this after the list).

- Closure lets us write JavaScript while still benefiting from strict type and error checking, dead-code elimination, cross-module code motion, and many other optimizations.

(Visit http://code.google.com/closure/ for more information on Closure)
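
To make the decorating flow concrete, here is a rough sketch of the client half. These are not our real template or class names -- treat it as an illustration: a function generated by the Closure Templates compiler produces the markup, and if the server already rendered that markup, we skip straight to attaching listeners.

  goog.require('goog.dom');
  goog.require('goog.events');

  // plus.templates.streamItem stands in for a function generated by the
  // Closure Templates compiler from a .soy file; the same .soy source
  // also compiles to a Java method, so the server and the browser render
  // identical markup.
  function renderOrDecorate(container, data) {
    if (!container.firstChild) {
      // No server-rendered markup yet: render client-side.
      container.innerHTML = plus.templates.streamItem(data);
    }
    // Either way, "decorate": hook listeners up to the existing DOM
    // instead of re-rendering it.
    var link = goog.dom.getElementByClass('expand-link', container);
    goog.events.listen(link, goog.events.EventType.CLICK, function(e) {
      e.preventDefault();
      // ...expand the stream item in place...
    });
  }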

2. The right JavaScript, at the right time

To help manage the JavaScript that powers Google+, we split our code into modules that can be loaded asynchronously and independently of each other, so you only download the minimum amount of JavaScript necessary. This is powered by two concepts:

- The client contains code to map the history token (the text in the URL that represents which page you are currently on) to the correct JavaScript module.

- If that JavaScript isn’t loaded yet, any action on the page will block until the necessary JavaScript is loaded.

This framework is also the basis for making client-side navigation in Google+ work without reloading the page.
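
A minimal sketch of the first concept, using the Closure Library's module manager (the token-to-module table and renderView are made-up placeholders, not our real code):

  goog.require('goog.module.ModuleManager');

  // Map the leading segment of the history token to the module that
  // knows how to render that view (illustrative names only).
  var MODULE_BY_TOKEN = {
    'stream': 'stream',
    'photos': 'photos',
    'profile': 'profile'
  };

  function onHistoryToken(token) {
    var moduleId = MODULE_BY_TOKEN[token.split('/')[0]] || 'stream';
    // execOnLoad runs the callback immediately if the module is already
    // loaded; otherwise it fetches the module's JavaScript first.
    goog.module.ModuleManager.getInstance().execOnLoad(moduleId, function() {
      renderView(token);  // made-up: the module renders the new view
    });
  }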

3. Navigating between pages without refreshing the page

Once the JavaScript is loaded, we render all content without going back to the server, since that is much faster. We install a global event listener that listens for clicks on anchor tags. If possible, we convert that click to an in-page navigation. However, if we can’t render the page client-side, or if you middle-click or control-click the link, we let the browser open the link as normal.

The anchor tags on the page always point to the canonical version of the URL (i.e. the URL you would get with HTML5 history), so you can easily copy/share links from the page.
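
A hedged sketch of that global listener (canRenderClientSide and navigateTo are placeholders for the real routing code):

  document.addEventListener('click', function(e) {
    // Respect middle-clicks and modified clicks: the browser handles those.
    if (e.button !== 0 || e.ctrlKey || e.metaKey || e.shiftKey) {
      return;
    }
    // Walk up from the click target to the enclosing anchor, if any.
    var el = e.target;
    while (el && el.tagName !== 'A') {
      el = el.parentNode;
    }
    if (!el) {
      return;
    }
    var url = el.getAttribute('href');
    if (canRenderClientSide(url)) {  // placeholder routing check
      e.preventDefault();
      // The anchor already points at the canonical URL, so pushing it
      // into HTML5 history keeps the address bar copy/share-friendly.
      window.history.pushState(null, '', url);
      navigateTo(url);  // placeholder client-side render
    }
  }, true);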

4. Flushing chunks (of HTML)

We also flush HTML chunks to the client so the page becomes visible as soon as the data comes back, without waiting for the whole page to load.

We do this by:
- Kicking off all data fetches asynchronously at the start of the request
- Only blocking on the data when we need to render that part of the page

This system also allows us to start loading the CSS, JavaScript, images, and other resources as early as possible, making the site load faster and feel more responsive.
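
Our servers are Java, but the idea translates to any stack. Here is a hedged Node-style sketch (all of the fetch/render helpers are made up) of kicking the fetches off eagerly and blocking only at render time:

  var http = require('http');

  http.createServer(function(req, res) {
    // 1. Kick off all data fetches asynchronously at the start.
    var streamData = fetchStream(req);    // made up; returns a Promise
    var profileData = fetchProfile(req);  // made up; returns a Promise

    // 2. Flush the <head> immediately so the browser starts downloading
    //    CSS, JavaScript, and images while the data is still in flight.
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.write('<html><head>' +
        '<link rel="stylesheet" href="/static/plus.css">' +
        '<script src="/static/plus.js" async></script>' +
        '</head><body>');

    // 3. Block on each piece of data only when its chunk is rendered.
    streamData.then(function(data) {
      res.write(renderStreamChunk(data));   // made-up template call
      return profileData;
    }).then(function(data) {
      res.write(renderProfileChunk(data));
      res.end('</body></html>');
    });
  }).listen(8080);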

5. iFrame is our friend

To load our JavaScript in parallel and avoid browser blocking behavior (http://goo.gl/lzGq8), we load our JavaScript in an iframe at the top of the body tag. Loading it in an iframe adds some complexity to our code (nicely handled through Closure), but the speed boost is worth it.
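
A sketch of the technique (the script URL is made up): create a same-origin iframe at the top of the body and load the main JavaScript inside it, so its download and evaluation don't block the host page:

  var iframe = document.createElement('iframe');
  iframe.style.display = 'none';
  document.body.insertBefore(iframe, document.body.firstChild);

  // Write a script tag into the iframe's own document. The script loads
  // in parallel with the parent page's resources, and the code inside
  // the iframe reaches back out through window.parent.
  var doc = iframe.contentWindow.document;
  doc.open();
  doc.write('<script src="/static/plus-main.js"><\/script>');
  doc.close();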

On a side note, you may have noticed that we load our CSS via an XHR instead of a style tag. That is not for optimization reasons; it's because we hit Internet Explorer’s limit on the number of CSS selectors per stylesheet!
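
(That limit is 4095 selectors per stylesheet in IE 6-9.) One plausible sketch of the workaround -- file name and chunk size made up -- is to fetch the CSS with an XHR and split it across several style elements so no single sheet exceeds the cap:

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/static/plus.css', true);
  xhr.onreadystatechange = function() {
    if (xhr.readyState !== 4 || xhr.status !== 200) {
      return;
    }
    // Naive split on rule boundaries; real code would count selectors.
    var rules = xhr.responseText.split('}');
    var RULES_PER_SHEET = 2000;  // comfortably under the 4095 cap
    for (var i = 0; i < rules.length; i += RULES_PER_SHEET) {
      var css = rules.slice(i, i + RULES_PER_SHEET).join('}') + '}';
      var style = document.createElement('style');
      // (Old IE wants style.styleSheet.cssText instead of a text node.)
      style.appendChild(document.createTextNode(css));
      document.getElementsByTagName('head')[0].appendChild(style);
    }
  };
  xhr.send();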

Final Comments

This is just a small glimpse of how stuff works under the covers for Google+ and we hope to write more posts like this in the future. Leave your ideas for us in the comments!
 
Would this just-in-time loading of Javascript explain why the scroll-bar on notifications has been broken for so long?
 
+Peter da Silva I have noticed that as well. I only use the Chrome extension to view notifications because of that problem.
 
Thanks a lot for the insight +Mark Knichel!

I have a few questions though. While the site is usually pretty zippy and responsive, I've noticed that sometimes, when I click on the +1 button or load my notifications, it returns an error saying there was a problem completing the action (and my +1 would transition from marked back to unmarked).

I'd sent a feedback regarding the issue a while back. The problem seemed to auto correct itself after a page refresh or if I re-opened the browser. I'd got that error a few times when I was using a slow connection but I haven't seen that error in a while and assume that it has been corrected.

What I want to ask is: did that error have something to do with the way the +1 or notification click event was handled? I've seen that both the global +1 button and the one that is 'integrated' into the site send out JavaScript calls, so is Closure involved in the +1 action?

PS: I apologize in advance if my question is unrelated or just plain stupid. :D

EDIT: +Peter da Silva has pointed out another source of recurring errors that seems to manifest as a result of JIT JavaScript loading. I wanted to mention that in order to clarify my own question. Thanks Peter! :)
 
You may <3 Closure, but the result is a web page with a completely inscrutable view source experience meaning that the presumed awesomeness at hand is useless for learning from, thus ruining one of the best parts of the web. Sure it's fast, but...meh.
 
Where was the photo taken? It looks like a beautiful place :)
 
I suppose the use of iframes explains why Google+ is not that accessible to screen readers?
 
+Chris Dent : Please understand that static analysis and optimization are necessary to make complex web apps tractable to develop and efficient to load and run. There's an inherent tradeoff between this fact and the ability to run the client side of the app directly from source, which I believe is what you're asking for. When you're faced with this sort of tradeoff, the only reasonable answer is to focus on user experience.

But there's a way out of this tradeoff -- open-source as much of the stack as possible so that we can all learn from each other's experiences. This is precisely what Google engineers have been doing for some time, and will continue to do, e.g.:
- http://code.google.com/closure/compiler/
- http://code.google.com/closure/library/
- http://code.google.com/closure/templates/
- http://code.google.com/p/closure-stylesheets/

In the end, "view source" is an extremely blunt instrument for code sharing. The answer is open-source, not hamstringing user experience.
 
Nice, concise overview, Mark! You make it sound so easy...
Thanks for all you guys do to make our experience addictive!
 
I have never been a huge fan of using frames to increase concurrent connections. Would it not be better to use sub-domains to increase the number of concurrent connections, or do you get better results by combining the two methods?

Can you explain more about how you have made your JavaScript modular? As a developer, modularity is something that makes your application more maintainable, but the way to actually implement the modular concept is not always clear. Do you have some global namespace, such as var plus = {}, with methods, getters, and setters for pre-loading -- such as plus.getNamespace('x.y.z'), which checks whether that namespace has already been loaded and, if not, loads it asynchronously and fires some event -- or do you have a different approach?

In regards to flushing chunks of HTML, I have usually found this to be a bad design principle: if I start loading the head of the document, then the header and sidebar, and then the main feed module hits an error, you're stuck on a half-loaded page. Composing the page in memory and then flushing it all at once seems to work better for my applications, although I do understand why you pre-flush content.

Thanks for the talk, and hope my question can get some answers.
 
Thanks for the share! Always cool to hear from the engineers!
 
Thanks for this invaluable technical information. I like the ideas of event listeners, Closure, and modularity. I'm really interested to know the answers to the following questions:
1 - Is there a RESTful web service API used together with a JavaScript library like jQuery, or a SOAP web service API?
2 - Have you utilized the Semantic Web and Linked Data on Google+ or any Google product?
 
+Vandré Brunazo : Not to my knowledge (see the "We <3 Closure" bit above). Both Closure and GWT are in heavy use at Google, but for historical reasons some projects use one or the other. We hope to bring these worlds together at some point, but it's going to take a long time because of all the historical momentum involved.
 
Thanks for the lengthy and informative response +Joel Webber, but code-sharing is not what I'm concerned about. I'm very pleased that Google is open sourcing parts of its stack. That's great.

What I'm moaning about is that it results in a lack of transparency in how stuff works and fits together, which, in a technological setting, is a valuable part of the user experience. One of the marvelous aspects of an open web that is nicely resource-oriented, with cool URIs and all that, is that it is open on so many dimensions. Open for inspection and discovery.

An excessive enhancement application like google+ is just that, an application, not a participant in the open web. I'd much rather see the activities performed via google+ be done by myriad tools which access the web of stuff which is google+. Is the distinction clear? In its current incarnation google+ is the mass of stuff which presents the page, becomes the web app, and is the user experience. This of course leads to optimizations but at a cost that I don't much care for.

That make any more sense? I've had a rant along these lines building up for a while, but it still hasn't gelled to anything quite coherent. I suppose the way over-simplified way to put it is that I just don't like applications, especially on the web. I prefer tools.
 
+Robert Pitt We define a module for an area of the product, such as a module for Photos and one for Profiles. Since those modules may share some dependencies, we do some pre-processing and use the dependency graph to figure out what the common sets of code are, and ensure that we only load those once.
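
Roughly, with made-up module and file names, that pre-processing boils down to intersecting the modules' dependency sets and hoisting the common files into a base module that is downloaded only once:

  // Made-up dependency sets, as produced by a walk of the deps graph.
  var deps = {
    photos:  ['dom.js', 'events.js', 'photogrid.js'],
    profile: ['dom.js', 'events.js', 'profilecard.js']
  };

  // Files required by every module are hoisted into a shared base module.
  function computeBaseModule(deps) {
    var names = Object.keys(deps);
    return deps[names[0]].filter(function(file) {
      return names.every(function(m) {
        return deps[m].indexOf(file) !== -1;
      });
    });
  }

  computeBaseModule(deps);  // -> ['dom.js', 'events.js']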

+Vandré Brunazo Nope, we do not use GWT for Google+.

+Amir Keshavarz We have created our own internal API, based on Closure, for making requests.
 
What's hot from google? Now Google is telling us "what's hot?" Well.....that's the way they roll.......FAIL. Efff....."What's Hot."
 
+Chris Dent You have mentioned a good area here, which is transparency and the open web.
One aspect of transparency, which I'm not sure you meant, is accountability and credibility. I don't think Google+ has done anything regarding accountability, which is a vision for the web in an article by Weitzner and Berners-Lee. Another aspect of transparency, which I think is your concern, needs a standard representation of data passing through various systems. The internal structure does not matter; however, when one wants to share information on the Web, standard representation does matter.
 
i like all of the above, but i like shell scripts that automate it better :D
 
+Chris Dent : (note that I don't work directly on G+ or represent the team's views -- I do, however, work on web development tooling)

I have to admit I take some exception to your characterization of the G+ app as "excessive enhancement", and to your assertion that this somehow makes it not part of the "open web". If your characterization of the "open web" means that applications (and let's face facts -- that's what things like G+, Facebook, Gmail, and so forth are by their nature) must be served entirely as static documents with light script "enhancement", then I'm afraid your views are at odds with usability and performance. Most "content" on the web can be (and is) served as "documents", but you seem to be asserting either that (a) interactive applications should be shoved through the awkward medium of static documents, or (b) that "web applications" shouldn't exist at all. Neither of these is tenable for real-world use-cases.

I do agree that web applications should expose their data through public APIs and open standards wherever possible, and I know that the G+ team is working hard on that. In many cases, the open standards don't exist yet, and must be hammered out over time. I think +Amir Keshavarz is making this point to some extent. But you cannot expect every user-facing interface to be exposed through an awkward document interface.
 
some smarties work at google, that's for sure
 
I hope not to see any more comments like this. I am not interested and this is boring. This is not how you grow your social network.
Be more fun, or at least tell me stuff like this in a way that makes me care. (or just don't)
 
Envy..i can't do that...
 
Hey, are you guys at google looking for interns from December 26th till January 24th? I'm an electrical engineer at UMass-Dartmouth
 
I believe Google+'s success/differentiation may come from external developers deploying compelling applications on the platform (just like it did for FB, Windows, etc.). The more developers, the better; and the easier it is to program, the lower the development cost and the greater the interest in the platform.

Have you guys thought about creating development tools as good as Visual Studio to aid with the coding? How about extensive, easy-to-understand documentation and free training to help not just the full-time developer, but also hobbyist and beginner programmers? I believe some of the big magnets may come from gaming and media consumption (just like the real world). There should be a pretty robust, easy-to-use set of APIs for developing graphics/media-intensive applications. Android app interaction should probably be integrated seamlessly. The problem for some engineers is that we are pretty comfortable with complicated things; however, simplicity/practicality is the name of the game when it comes to business.
 
doesn't bother me one bit. in fact, i sleep well at night knowing google knows all.
 
Thanks so much for posting this Mark!
 
Thanks for sharing. Even at this high-level description, it is interesting and useful. When you "load modules", do you use Asynchronous Module Definition (AMD) or something similar? I'm trying to figure out what I should learn about JS module use.
 
+Mark Knichel Interesting point about the iframe. Are there any benefits to using this technique instead of appending the script tags to the DOM dynamically with the "async=true" param, a la RequireJS?
 
Thanks for sharing; eager to see more such posts from you folks.
 
What development tools are in your arsenal? Particularly, what is your IDE of choice and how is tooling for Closure Templates/CSS in it?
 
Amazing use of scripting... I would like to know: how do you handle events on the page? Are they similar to GWT or jQuery?
 
I like Google+ so far. But iframes are deprecated in XHTML and use different attributes in HTML5 than in HTML 4, or am I mistaken on that? So it's cool, but it's more non-standard web coding. In the old days we used to do this kind of caching with framesets. I've used iframes as well to fudge things in web apps, but the problem with combining that with a heavily client-side scripted application is that you begin to lose the power of the markup-language model: you lose the ability to repurpose your original XML content across devices and in your web pages, and the power of markup languages in delivering all that. I guess the hope here is that HTML5 will supplant all these JavaScripting circus tricks as it evolves? I hope so......
 
+Chris Dent Like it or not, the browser got overloaded from a networked document viewer into a portable application sandbox a while ago. JavaScript is basically nothing more than inefficient-but-portable bytecode now. I'm not sure one can simply place the blame on web application developers over regular OS application developers however. Java certainly never lived up to its initial "write once, run anywhere" hype. Outside of web browsers, there isn't really a good alternative for deploying applications to everyone, everywhere, instantly yet.
 
+Mark Knichel Hi! I noticed that data in Google+ is sent as arrays, not objects. I understand that most operations on arrays are more efficient, but for debugging, objects are more convenient than arrays. For example, the entry {id : 1, name : "tom"} is obvious, but [1, , "tom", ,] is obscure.
I suppose that in development and bug-fixing you use objects and convert them to arrays at the compilation stage. Am I right? If so, can you describe this process in more detail?

Thanks.
 
True David. I couldn't agree more. I think that is why Flash is dead, they say, and HTML5 is the next promise. The nightmare of JavaScripted web solutions isn't so much that they are creating web apps....they are really still just scripts....breaking the markup-language model that HTML and the Internet are designed to be, and pushing that to the client as another pseudo-desktop model. It's not content delivery. It's also a HUGE security breach, layering in scripted solutions like this and opening up the sandbox to more and more risky injections and other attacks. Your modern servers and broadband networks could deliver lightning-fast page updates and super-cached content now via XML and even XSLT, if we moved toward that model and the browsers supported it better. HTML5 could move us back to that by supplying the interactive piece with less scripting, but we need the transformation model of content, CSS, and XML....and less interactive eye candy. More Web Standards....less ECMAScript. It's not HTML. Let's have HTML5 provide that....not these gigantic script libraries, I say.
 
+Mitch Stokely I'd say Flash is dead simply because it wasn't open source. Vendors like Apple are going to tire of a technology if they aren't allowed to fix all the security holes or valgrind memory leaks another party keeps introducing. I'm very skeptical of the long term viability of just iterating standards, HTML5, HTML6, HTML7, ... HTML20, etc. It carries the downsides of "design by committee": the more standards introduced, the longer it will take to implement them, and the longer it will take to introduce new ones, especially if backwards compatibility must be preserved.

What I hope to see is competing open source, networked sandboxes that can specialize in different areas, such as advanced graphics rendering. For OpenGL applications, right now compiled C is far more portable than WebGL+JavaScript. What I think we'll see is a simple sandbox for OpenGL LuaJIT scripts, as you can send the entire binary runtime and standard library for that language in less than the size of a PNG image, and anything can be rendered in OpenGL, negating the need to ever do a "text shadow and gradients" standard update.
 
I will look into OpenGL. Sounds interesting. My point is people are trying to turn the client-server HTTP protocol model of the Web into an "app delivery system". It was never designed for that. That's why we have so many security issues now, and proprietary apps and scripts and applets and compiled crap running down batteries on phones and devices. And it's why, even with open source, we are back to where we were 10 years ago, with competing vendors building applets and plugins and players and running proprietary code in broken sandboxes that leave end users vulnerable. You also saw how applets, plugins, ActiveX, and Flash over the past 15 years have not solved the interactivity conundrum online. The iPhone desktop app-store model is also taking us back to older ideas......to move forward we've got to get free of operating systems, players, script libraries, applets, Flash, video players, and devices, and go back to XML and HTML and the client-server markup model. HTML5 has the chance to not only keep the markup (and script) standards in place but build on them by adding a unified, interactive, markup-based system that provides interactivity run safely by the browsers, but based on markup design. We just need Google and all these vendors to make sure they all participate and push that; otherwise, we are back to what it was in 1999....another browser war with unsecured applets and libraries of JIT-compiled stuff running behind the markup....another nightmare for cross-browser people.
 
Nice to have reports on this stuff. It would be great if there was a blog we could subscribe to in order to guarantee we get each update
 
you wrote "If the Javascript isn’t loaded yet, any action in the page will block until the necessary Javascript is loaded" .. how does this work in more detail? for example, the user clicks on something where the needed javascript is not loaded yet. is that click lost? or do you store it somewhere and handle it later when the javascript arrives? if yes, how?

awesome info btw. :-)
 
+Gábor Farkas The clicks are queued while the modules load. The infrastructure built on Closure provides this service.
 
+Elliott Sprehn cool... i really have to look at Closure more (i'm mostly a jquery-person).. this click-queuing service.. it's not open-source i assume?
 
+Mark Knichel Can you list which plug-ins you use to support Closure (library, compiler, and templates) in Eclipse?
 
+Gábor Farkas Not to my knowledge. It's also not really queuing the clicks so much as it's queuing the user's intent to perform an action.
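
Roughly, as a sketch with made-up names: the queue stores what the user asked for, and replays it once the module arrives.

  var pendingIntents = [];

  function performIntent(intent) {
    if (isModuleLoaded(intent.moduleId)) {  // made-up check
      intent.run();
      return;
    }
    // Record what the user meant to do, not the raw DOM event.
    pendingIntents.push(intent);
    loadModule(intent.moduleId, function() {  // made-up async loader
      // Replay queued intents for this module, in order.
      var remaining = [];
      for (var i = 0; i < pendingIntents.length; i++) {
        var p = pendingIntents[i];
        if (p.moduleId === intent.moduleId) {
          p.run();
        } else {
          remaining.push(p);
        }
      }
      pendingIntents = remaining;
    });
  }

  // E.g. a +1 click becomes something like:
  // performIntent({moduleId: 'plusone', run: function() { plusOne(postId); }});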
 
+Elliott Sprehn i see.. interesting... it requires another level of abstraction i assume, but still.. thanks for the info
 
Hummm, regarding 3) I wish it were easier to copy an image URL.
 
Hey +Joel Webber, after sleeping on it I reckon another way of boiling down what I'm complaining about is that google+ should have started as a public web api and then had this implementation of the client (that I'm using right now) tacked upon it. Instead what's happened is that a public api is being retrofitted onto/into a single compiled application bolus which is spread between client and server. This is aligned somewhat with Steve Yegge's platform rant from a while ago.

I don't have a problem with dynamic documents, I have a problem with inscrutable dynamic documents. Nor do I have a problem with web apps, just a problem with web apps where the separation between client and server is not clear enough to allow other clients to easily operate in the same data domain.

In conversation with a friend about this, one of the metaphors we used while talking was that while Unix systems may come with complex guis for managing configuration data, the good ones are just manipulating text files in the background, and you, the user, are welcome to edit those text files by hand if that is what you like.
 
+Chris Dent : An API-first design is something worth discussing (again, let me be clear that I don't speak for the G+ team, though). I definitely agree that there's real value in doing so, though there is a non-trivial cost associated with doing so first, as opposed to exposing a public API after the initial product launch.

The upside of API-first is that it keeps you honest to some extent, because all clients are created equal. The downside (aside from increased schedule risk) is that most users of an API have significantly different needs from that of an interactive frontend -- so in trying to serve both with the same API you can find yourself with impossible tradeoffs (e.g., API simplicity vs. client-side performance). It can be a really tough call, and I think the right answer depends upon the application in question. I'm quite certain the G+ team has put a lot of thought into this, and I'm pretty sure they're working hard on a solid API.

Also let me offer my apologies if I reacted a bit strongly to the initial suggestion that static documents were the solution to the "API-first" design problem. I see this sentiment echoed in several other comments on this thread (e.g., +Mitch Stokely) as well. I've had to argue this point on and off for nearly a decade now, and I keep seeing it pop up over and over again, usually from people who aren't on the hook for delivering highly efficient web apps. I'm all for open data and protocols, but I strongly believe that forcing all apps to use static documents as their UI delivery mechanism is a very poor way to achieve that goal in practice.
 
:-) idea: Improve your css and html structure. If you hit the limit of IE, then you are definitely doing something wrong, very wrong.
 
Joel, I don't necessarily think an API-first approach means that you have to use the API directly in the UI; an API to the underlying service could be used by your UI service + caching layer. After all, the operations must be exposed in some fashion for you to use. If that API were developed and exposed, then everyone could build their own independent UI and backend systems. Is that not workable?
 
+Todd Hoff : What you're saying makes perfect sense, and is exactly how I think most successful systems are built. The API that frontend servers speak to backends (presuming that the backend is application-specific, and not just a database) tends to be much more general than the API that clients speak to servers. One reason generality works better here is simply that the machines, latency, and bandwidth involved are much better and/or more predictable.

For most application APIs, I believe that talking more or less directly to the backend services (usually via a simple API frontend server) works pretty well -- it's usually not the end of the world if it's a bit on the chatty side, and/or you can't easily make requests specifically tailored to the client application's needs (because you can always request more data than you need and cache it locally).

This can still get somewhat tricky, though, if you want to build an equally-performant version of an app's UI. Most frontend servers do some of the heavy lifting for web clients, speaking to the backends' chatty APIs, caching data, and narrowing the scope of what's sent to the web client so that it's less latency- and bandwidth-constrained. Reproducing this in a third-party application sometimes requires either a lot of caching (with the consistency problems you'd expect) or a high-bandwidth-low-latency connection to the backend (which is often not feasible).
 
Good information guys. I would just add that my point in emphasizing fewer API-based front- and back-end website systems and ECMAScript-based web apps, and more reliance on markup and native client caching, is not to demean the need for more efficient web-app design or to demand static, non-dynamic content, but the opposite......to get back to the original concept of content delivery to multiple unknown devices and clients, letting them repurpose that content faster and more efficiently using markup systems like XML and XSLT transforms across clients. You are building back the "desktop app" model again using these APIs and layering that on TOP of a simpler HTML caching system. You need to get back to a more efficient data-delivery model. All browsers today support the concept of much smaller footprints of XML delivery with cached XSLT in the client. Again, I'm not against the API delivery model, just saying the emphasis should be on data content and less on API. Example: in the future your refrigerator will have a servlet and want to tell your phone you need more beer.....why can't it send a small XML packet that your phone transforms into a small text message it can use, while a web page online in Google consumes the same packet and spits it out in G+ as a grocery list? Why should your phone and fridge have to support a gigantic library of compiled apps and APIs to hook into to get that data? That API model places too much demand on the lite clients of the future.
 
"4. Flushing Chunks" sounds very similar to Facebook's BigPipe method for delivering HTML areas to the client. Are there any similarities there?
 
+Mark Knichel Dear Mark, I would like to bring to your notice that, as mentioned in your third point, once the page is loaded, navigating between pages is almost instant since it is done without a postback to the server. For instance, from home I click on the profile button and then the About tab in my profile, and it is all almost instant. But there is a flaw in this. Almost every time I share something from my home page, it shows up in the stream; but when I then click on my profile button, the new post does not always appear in the Posts tab. Many times we need to refresh manually to get the updated version of the posts. Even new +1's and comments on the posts are not visible instantly. The average has improved from the past, when it would never show; now it at least sometimes manages to show the updated content, but not most of the time.

Moreover, this problem is more critical for the +1 tab, which always needs to be refreshed to get the updated list of all +1'd items on the web. For instance, if I am on my home page or profile page (in the Posts tab) and in another browser tab I +1 two or three interesting things found on the web, then when I go back to the tab in which G+ is open and click the +1 tab, the list is not updated. Only when we refresh manually does it get updated.

So the thing is, you have got the speed, but the content is loosely synchronized. It should update instantly, in real time. It is possible via partial postbacks (I don't know whether that exists in Java, but I used it in .NET), where only the updated component is posted back and updated, keeping the speed.

Try sharing a post on the home page and +1'ing it on the home page itself. Then go to your profile and, in the Posts tab, click +1 on the same post again to remove that +1 (if your shared post is visible in the Posts tab). Then go back to the home page and you will find that it still shows the +1 even though you removed it in the Posts tab.
 
+Mark Knichel Also, I would like to request a feature for the +1 tab. As the list of +1'd items gets long, it becomes difficult to manage and to find the item you want to revisit. So please introduce something like circles to categorize the different +1'd items in a systematic manner, making them easy to find whenever we want. It would be like a cloud bookmarking tool. A search feature would also be more than appreciated. I requested this via feedback too, but I think it was not heard. I hope to see it soon. It would make +1'ing meaningful to us, rather than just a recommendation to others; recommending to others only makes sense for your advertisers. But if you make +1'ing more useful to us as a cloud bookmarker, we would +1 more often than just to recommend to others. That would serve both users, as a cloud bookmarking tool, and your advertisers, as a recommendation.
 
Everything is super, yes, but the new-update notification counter (at the top right), on all computers at my disposal (workstations without performance problems) and on two different 20Mbps Internet connections, takes no less than 6 seconds to appear after everything else is loaded.
IMHO, with regard to that counter: big fail. I'm sorry, but notifications in Chrome crash every 5 minutes; you need to manage these things, and the competition, better.
 
very interesting. Can't wait for write-ups like this related to APIs.
 
I was on a very slow (satellite) connexion recently and Google+ was the opposite of fast--it took minutes to load while Facebook took seconds. Maybe there are too many separate components being downloaded, or too little caching? It seemed to me like Plus really wants a low-latency connexion.
 
+Bob Uhl From the post you see that downloading is mostly event driven, so latency makes things very bad. Maybe they should detect it and produce predigested html, like facebook (always) does.
 
GWT is being evangelized a lot more than Closure, yet Closure is used inside Google a lot more than GWT. Just one book on Closure versus gazillions about GWT.
 
I reread the engineering strategy for caching and flushing using client-side scripts and iframes, and I question whether your teams have actually looked at the natural caching and rendering engines of most modern browsers. When your end users visit their G+ homepage for the very first time, within a few seconds they have the ability to completely cache all CSS, basic images, site structure and DOM, and all script libraries without the need for a single client-side API. Future renderings, with or without JavaScript, are then unbelievably fast, and some request/response calls to static content simulate script events simply because the browser has cached 90% of the page DOM. I'm not trying to be critical, but I've seen very, very poor performance the past 2 years from many of these social media sites simply because these script kiddies keep piling more and more complex event-driven client-side garbage into the memory of the browser rather than creatively using the browser's natural content-caching abilities. The browsers are much more efficient at this than these giant object-oriented libraries that spit out micro calls to front-end servers and rewrite HTML. I think it would benefit the developer community to go back to fast HTML rendering calls based on static document rendering, and layer a lite client side on top of that ONLY when needed to avoid page refresh. But you will find that most web pages in modern browsers don't refresh like they used to if you focus on very clean, consistent page HTML structures on postback. Try an experiment doing what I describe and you will see that in many cases you don't need those clunky iframes and client-side events.
 
+Mitch Stokely : No offense, but many of the assertions you're making need evidence to back them up. I can assure you that the team working on G+ (and Gmail, and many other projects that use heavy client-side scripting) have thought about these issues very carefully. I know plenty of people who've worked both on Chrome and in Apps, and are quite familiar with the inner workings of the browser. In most cases the script-driven applications you're railing against arose in response to hitting a brick wall when it comes to server-rendered HTML performance.

You also seem to be suggesting that scripted applications for some reason don't get to take advantage of the browser's "natural caching", as you describe it. If anything, they can (and do) take better advantage of browser caching, because the memory-resident scripts can do a better job taking advantage of coherence across state transitions than server-rendered HTML, which has to re-download and re-render the entire page even for small changes.
 
+Joel Webber Hi Joel, thanks for the debate. I think it's very important! I definitely agree with you on some things here. Yes, Google Plus is great, and I am not in any way saying that what's being built by the really knowledgeable engineers at Google is not valid, well designed, and a great solution. I just want to say that up front. To be honest, I have not explored the API fully. I am also not against client-side APIs and building superior caching systems that take advantage of what are very diverse latency, connection, browser, application, and server issues. There is a lot there we can't even begin to debate. So I understand how engineers solve complex issues, and I am sure they have done that here.
But here is my point again....The HTTP/HTML model that came about in the early 1990s is a stateless model that relies on a simple request/response delivery of markup and other frameworks using desktop user-agents, etc. We have now moved forward into a cleaner and more robust era in all that. The problem is the same one we debated in 1999....are you delivering content and markup and lite scripting handled by the browser and cached and managed between the server and the user agent, or are you trying to recreate a new client-side application? At some point you move from that model into the realm of "apps" and Flash and plugins, if you push enough compiled and scripted crap to the user agent. Why not create your own interactive plugin if you are going to be so focused on client-side performance? At some point, I might ask, if you are choosing to use deprecated hidden iframes and your API to cache scripts and build AJAX calls for markup and HTML and XML, why not consider just building a custom browser or app to handle all that? If your API size in kilobytes exceeds the size of the markup and content combined, you have gone WAY past the HTML stateless model and need to consider building a Google Plus app or plugin. These are the decisions we used to make when Flash was popular....if you need all this jQuery interaction, then just push it into Flash or Silverlight or Java applets. That's my point, Joel.....you should use Ajax calls to create rich browser interactivity without browser refreshes...but at some point there is a point of diminishing returns.
You can prove my point on this.......build a dummy set of consistently structured HTML pages that all share the EXACT same div and content structures, formatted the same with a cached CSS stylesheet. Place several megabytes of content into that structure and change it between the pages, but using the same div structure and CSS sheets. Let the browser download the first page and cache the DOM structure and CSS. Now link to one of the other pages. In today's browsers, even though they refresh completely new pages, if all the pages share a non-scripted, cached div and CSS structure, it will appear as though the page did not refresh....only the content inside the div that changed. This is browser caching at its best. If you load the HTML structure with each scripting call, and the cached CSS and your DOM change page to page, you lose that feature, and yes, you do need a complex API to simulate it.
Think of a website as a giant cake of HTML, and the icing is JavaScript, APIs, and plugins. CSS is the candy sprinkles. That's the right way to go! Now imagine a cake made of nothing but gooey JavaScript icing and a few crumbs of HTML dropped on top. What a mess! No browser can consume that well. And at that point you might as well go to the Silverlight/Flash/Java bakery and get a boxed cake!
This is a very old argument....the client-side problem with the Web. We had this same problem in 1999 and after Web standards came about in 2002, etc. Until we move more of this into the markup standard (HTML5), ECMAScript is just that....script. It's a sandbox. It's not designed to replace the client-server model of HTML and delivery of content. The day it does, you won't need browsers and user-agents; everyone will build their own browsers and just deliver them via compiled scripts. For security reasons that isn't going to happen. So I say, let's lighten up on the APIs a bit.....
 
Does G+ run on Google App Engine?