Anne van Kesteren
Flying Dutchman
Anne van Kesteren's posts

So what is the best way to define an HTTP header these days? http://tools.ietf.org/html/rfc6454#section-7.1 uses OWS and such, but the latest draft defining HTTP Content-Type http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics does not contain either leading or trailing OWS. Input appreciated!
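For readers not steeped in HTTP ABNF, here is a hedged sketch of what the difference amounts to in practice. Nothing below is taken from RFC 6454 or the httpbis draft; the function names and the trim/reject behaviour are my own illustration of a field grammar that includes OWS versus one that leaves surrounding whitespace to the generic header-field parser.

// Hypothetical TypeScript illustration of the two definition styles.

// Style 1: the field's own grammar includes OWS, so a value may carry
// leading/trailing spaces or tabs and the field parser strips them itself.
function parseValueWithOWS(value: string): string {
  return value.replace(/^[ \t]+/, "").replace(/[ \t]+$/, "");
}

// Style 2: the field grammar has no OWS; whitespace around the value is
// assumed to have been consumed by the generic header parsing layer, so a
// value that still carries it does not match the field's grammar.
function parseValueWithoutOWS(value: string): string {
  if (/^[ \t]/.test(value) || /[ \t]$/.test(value)) {
    throw new Error("field value does not match the grammar");
  }
  return value;
}

parseValueWithOWS("  text/html; charset=utf-8 "); // "text/html; charset=utf-8"
parseValueWithoutOWS("text/html; charset=utf-8"); // unchanged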

Post has attachment
So I blogged on IDNA yet again since I want this to become a solved problem for the URL Standard. Input appreciated.

Post has shared content
If you follow me here you probably follow +Ian Hickson too, but just in case.
In a recent private discussion spawned from one on a W3C mailing list, I was defending the "living standard" model we use at the WHATWG for developing the HTML standard (where we just have a standard that we update more or less every day to make it better and better) as opposed to the "snapshot" model the W3C traditionally uses where one has a "draft" that nobody is supposed to implement, and when it's "ready", that draft is carved in stone and placed on a pedestal. The usual argument is something like "engineering depends on static definitions, because otherwise communication becomes unreliable". I wrote a lengthy reply. I include it below, in case anyone is interested.

With a living standard, one cannot change things arbitrarily. The only possible changes are new features, changes to features that aren't implemented yet or that have only had experimental implementations, and changes to bring the specification even more in line with what is needed for reliable communication, as the argument above puts it (or "interoperability", as I would put it), i.e. fixing bugs in the spec.

With a snapshot-based system, like the W3C TR/ page process, adding new features is done by creating a new version, so we'll ignore that, and experimental stuff is taken out before taking the snapshot, so we'll ignore that too. That leaves fixing bugs.

Now either one doesn't fix the bugs, or one does fix the bugs. If one fixes the bugs, then that means the spec is no more "static" than a living standard. And if one doesn't fix the bugs, then communication becomes unreliable.

So IMHO, it's the snapshot-based system that's the one that leads to unreliable communications. Engineering doesn't depend on static definitions, it depends on accurate definitions. With a platform as complicated as the Web, we only get accurate definitions by fixing bugs when they are found, which inherently means that the definitions are not static.

Note that we know this system works. HTML has been developed in a "living standard" model (called "working draft" and "editor's draft" by the W3C) for seven and a half years now. Interoperability on the Web has in that time become dramatically better than it ever was under the old "snapshot" system.

Also, note that the snapshot system without applying fixes, which is the system that the W3C actually practices on the TR/ page (very few specs get errata applied, even fewer get substantial in-place corrections in any sort of timely manner), has demonstrably resulted in incorrect specs. HTML4 is the canonical example of this, where in practice that spec is woefully inaccurate and just poorly written, yet nothing was ever done about it, and the result was that if you wanted to write a browser or other implementation that "communicated reliably" with other HTML UAs, you had to explicitly violate the spec in numerous ways that were shared via the grapevine (the default value of the "media" attribute is the example I usually give of this).

And also, note that having a static specification doesn't ensure that nothing will ever change. HTML is a stark example of this, where from HTML4 to the contemporary specification many things have changed radically. But you might say that's my fault, so look instead to XML: the specification has changed such that implementations of the original standard are no longer conforming to the current standard. In fact, XML is arguably developed in a kind of "living standard" approach now, despite officially using the snapshot model. Things change, especially in software. And that's ok!

But it's incompatible with the snapshot model. Or at least, interoperability is made harder with the snapshot model.

</soapbox>

Before commenting on this, please check to see if your argument is already refuted on the WHATWG FAQ: http://wiki.whatwg.org/wiki/FAQ

Post has shared content
Just before the working world goes quiet to eat and give, an update on encodings and the Shadow DOM.

By the way, if you want to help manage +WHATWG, let +Anne van Kesteren know!

Post has attachment
At least it is not out of band, but this still strikes me as a slippery slope towards the mess Internet Explorer is dealing with (and Word has in the past). Not very webby, ECMAScript.

Post has shared content
Look, WHATWG is on Google+ now!
WHATWG now on Google+. Where we find out Google+ cannot handle our pun blog title.

Post has attachment
Trying to tell the IETF yet again that browsers are indeed sniffing fonts and that it would indeed be good if that were defined. Getting font/* registered was nigh-on impossible, so everyone shipped implementations that sniff fonts rather than look at MIME types. Now it seems that defining that sniffing is pretty hard too.

As an aside, if the IETF wants MIME types to work, they should really start having a wiki for the registry instead of the current heavyweight registration process.
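To make concrete what "sniffing fonts rather than looking at MIME types" means, here is an illustrative sketch of signature-based detection. The magic numbers are the well-known font file signatures; the function itself is my own assumption of the shape such a check takes, not code from any browser or spec.

// Illustrative only: inspect the first four bytes of the resource instead of
// trusting the declared Content-Type. Signatures: 0x00010000 (TrueType),
// "OTTO" (CFF OpenType), "ttcf" (TrueType Collection), "wOFF" (WOFF).
function sniffFont(bytes: Uint8Array): string | null {
  if (bytes.length < 4) return null;
  const tag = Array.from(bytes.subarray(0, 4), b => String.fromCharCode(b)).join("");
  if (tag === "\u0000\u0001\u0000\u0000") return "TrueType";
  if (tag === "OTTO") return "OpenType with CFF outlines";
  if (tag === "ttcf") return "TrueType Collection";
  if (tag === "wOFF") return "WOFF";
  return null; // unknown; fall back to the declared MIME type, if any
}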

Post has attachment
I wrote a validator for WebVTT. I am especially interested in feedback on the JavaScript I wrote.