I've been looking forward to CSS shaders (https://dvcs.w3.org/hg/FXTF/raw-file/tip/custom/index.html) for quite some time now, so when WebKit builds with working CSS shaders finally became available, I started playing around with them. One of the interesting things about CSS shaders is that they face many of the security issues my html2canvas project (http://html2canvas.hertzen.com/) does, in that they can't, or shouldn't, grant access to sensitive information: cross-origin content, user agent data such as visited links, user agent stylesheets, dictionary entries and so on.

As html2canvas doesn't actually have access to the visual content on the page, but instead builds a representation of it from values read through the DOM, it cannot leak any data that browsers do not already expose through the DOM (https://twitter.com/Niklasvh/statuses/168339210502275072). CSS shaders, on the other hand, work directly with the rendered content, so they are exposed to all of the security issues mentioned above.
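As a rough illustration of that approach (this is a sketch, not html2canvas's actual code), the values such a library reads have already been sanitized by the browser; getComputedStyle(), for instance, reports the unvisited style even for :visited links:

// Sketch of DOM-based "rendering": read computed values, never pixels.
// The browser already censors these, e.g. the color reported below is
// the unvisited color regardless of the link's actual visited state.
function readElementStyle(element: Element) {
  const computed = window.getComputedStyle(element);
  return {
    color: computed.color,
    backgroundColor: computed.backgroundColor,
    fontFamily: computed.fontFamily,
  };
}

document.querySelectorAll("a").forEach((link) => {
  console.log(link.href, readElementStyle(link));
});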

There were a number of proposals to address this issue (http://www.w3.org/Graphics/fx/wiki/CSS_Shaders_Security), and the currently proposed method is to disallow access to rendered content within the vertex shader completely and to only allow blending within the fragment shader. Personally, I found this very disappointing, not only because it does not address the real issue, but also because it significantly reduces the types of shaders you can produce when there is no access to the rendered content.

Anonymous rendering, i.e. rendering without user agent information such as visited links, form inputs that differ based on dictionaries or file paths, and general platform/OS information, was one of the proposed methods, but it was rejected for the following two reasons:

"First, it is a 'whack a mole' exercise. The number of things that would need to be changed to make an anonymous rendering is not clearly identified, and it is unclear if a finite and definite set can be identified. Then, it makes of an odd user experience. For example, from an end-user perspective, the color of visited links would change depending on whether or not a shader is applied. This is not a good user experience."

The first point is valid in that there are a lot of different aspects to take into account. They may not all be clearly identified yet, but if some effort were put into it, I'm sure it would get done. It is worth pointing out that CSS shaders aren't the only feature that would benefit from a clear definition of how to do anonymous rendering, as SVG with inline HTML through foreignObject faces this very issue (http://robert.ocallahan.org/2011/09/risks-of-exposing-web-page-pixel-data.html). To avoid having SVG images leak this data through a canvas, Chrome and Firefox both made drawing an SVG image onto a canvas taint it, making the canvas unreadable.

However, from Firefox 11 onwards (https://developer.mozilla.org/en/Firefox_11_for_developers#DOM), SVG images can be drawn onto a canvas without tainting it. This was accomplished with a very restrictive implementation of SVG images, disallowing any external resources and any user agent information (https://developer.mozilla.org/en/HTML/Canvas/Drawing_DOM_objects_into_a_canvas#Security). This still hasn't been addressed in WebKit (compare http://jsfiddle.net/Q48Me/ in Firefox 11+ and in Chrome).
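The fiddle above boils down to a check along these lines (a sketch, not the fiddle's exact code): draw an SVG image onto a canvas and see whether pixel access is still allowed afterwards. In Firefox 11+ the getImageData() call succeeds; in WebKit-based browsers at the time it throws, because the canvas has been tainted.

// Draw an inline SVG image onto a canvas, then test whether the canvas
// was tainted by attempting to read pixels back.
const svg = '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">' +
            '<rect width="100" height="100" fill="green"/></svg>';
const img = new Image();
img.onload = () => {
  const canvas = document.createElement("canvas");
  canvas.width = canvas.height = 100;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(img, 0, 0);
  try {
    ctx.getImageData(0, 0, 1, 1); // throws a SecurityError if the canvas is tainted
    console.log("SVG image did not taint the canvas");
  } catch (e) {
    console.log("canvas tainted, pixel access blocked", e);
  }
};
img.src = "data:image/svg+xml;charset=utf-8," + encodeURIComponent(svg);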

Sooner or later, "anonymous rendering" needs to be clearly defined. Firefox has managed to draw a fairly clear line around what to exclude in its implementation of SVG images. Instead of simply rejecting the anonymous rendering proposal, perhaps an attempt should have been made to define exactly what anonymous rendering means. I'd much rather wait six months longer for CSS shaders with access to rendered content than go with a solution that simply disallows access completely. To quote +Vincent Hardy (http://lists.w3.org/Archives/Public/public-fx/2012AprJun/0102.html):

"We think this demonstrates that CSS shaders are still very useful even with the proposed security restrictions and that it is possible to have multiple ways of combining the result of the fragment shader with the original texture (the prototype implements two, a multiply blend and a matrix multiply, but more options are possible)."

I agree that they are still very useful, but far from as useful as they could have been. The second argument for rejecting anonymous rendering was the "odd user experience" that differing visited-link colors and the like would create. That, of course, assumes that you rely on user agent defaults instead of providing your own styles for form inputs, that you have visited-link colors, that you use images and other content not served with CORS headers, and so on.

If this truly were a major concern, why not provide an alternative: either anonymous rendering with access to the rendered content, or full rendering with no access to the rendered content within the shaders? It would give developers the option to go with whichever approach suits their content better, at the cost of some added complexity. However, considering that CSS shaders are already far more complex than ordinary CSS rules, adding an extra parameter to toggle which rendering method to use wouldn't, in my opinion, make much of a difference in the grand scheme of things.
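Purely as a hypothetical illustration of how small that authoring-side toggle could be (neither the trailing keyword below nor any such parameter exists in the CSS shaders proposal or in any implementation), it might amount to one extra token in the custom() filter value:

// Hypothetical only: the "anonymous" keyword is invented here to sketch
// what an opt-in to anonymous rendering could look like; it is not part
// of the proposed custom() filter syntax.
document.body.style.setProperty(
  "-webkit-filter",
  "custom(url(warp.vs) url(warp.fs), anonymous)"
);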

Although I hope CSS shaders won't end up being a fraction of what they could have been, I am still excitedly looking forward to them regardless of which method they use to render content. I really think they are the biggest thing to come to CSS.