In a recent private discussion spawned from one on a W3C mailing list, I was defending the "living standard" model we use at the WHATWG for developing the HTML standard (where we just have a standard that we update more or less every day to make it better and better) as opposed to the "snapshot" model the W3C traditionally uses where one has a "draft" that nobody is supposed to implement, and when it's "ready", that draft is carved in stone and placed on a pedestal. The usual argument is something like "engineering depends on static definitions, because otherwise communication becomes unreliable". I wrote a lengthy reply. I include it below, in case anyone is interested.

With a living standard, one cannot change things arbitrarily. The only possible changes are new features, changes to features that aren't implemented yet or that have only had experimental implementations, and changes to bring the specification even more in line with what is needed for reliable communication, as the argument above puts it (or "interoperability", as I would put it), i.e. fixing bugs in the spec.

With a snapshot-based system, like the W3C TR/ page process, adding new features is done by creating a new version, so we'll ignore that, and experimental stuff is taken out before taking the snapshot, so we'll ignore that too. That leaves fixing bugs.

Now either one doesn't fix the bugs, or one does fix the bugs. If one fixes the bugs, then that means the spec is no more "static" than a living standard. And if one doesn't fix the bugs, then communication becomes unreliable.

So IMHO, it's the snapshot-based system that's the one that leads to unreliable communications. Engineering doesn't depend on static definitions, it depends on accurate definitions. With a platform as complicated as the Web, we only get accurate definitions by fixing bugs when they are found, which inherently means that the definitions are not static.

Note that we know this system works. HTML has been developed in a "living standard" model (called "working draft" and "editor's draft" by the W3C) for seven and a half years now. Interoperability on the Web has in that time become dramatically better than it ever was under the old "snapshot" system.

Also, note that the snapshot system without applying fixes, which is the system that the W3C actually practices on the TR/ page (very few specs get errata applied, even fewer get substantial in-place corrections in any sort of timely manner), has demonstrably resulted in incorrect specs. HTML4 is the canonical example of this, where in practice that spec is woefully inaccurate and just poorly written, yet nothing was ever done about it, and the result was that if you wanted to write a browser or other implementation that "communicated reliably" with other HTML UAs, you had to explicitly violate the spec in numerous ways that were shared via the grapevine (the default value of the "media" attribute is the example I usually give of this).

And also, note that having a static specification doesn't ensure that nothing will ever change. HTML is a stark example of this, where from HTML4 to the contemporary specification many things have changed radically. But you might say that's my fault, so look instead to XML: the specification has changed such that implementations of the original standard are no longer conforming to the current standard. In fact, XML is arguably developed in a kind of "living standard" approach now, despite officially using the snapshot model. Things change, especially in software. And that's ok!

But it's incompatible with the snapshot model. Or at least, interoperability is made harder with the snapshot model.


Before commenting on this, please check to see if your argument is already refuted on the WHATWG FAQ.
I can't help but agree with +Ian Hickson - and in fact I wish for a 'living standard' solution to be implemented W3C-wide. As someone who has worked in W3C activities since 2006 (currently co-chairing RDB2RDF WG and serving as an Editor in the Government Linked Data WG) I can only add to what Ian said: seen it over and over again; be it @rel values or datatypes for RDBMS mappings - the static model makes our lives harder and does not, in any way, provide advantages.

Disclaimer: The above stated is my personal opinion, with my hat off re my role as W3C Advisory Committee representative for DERI.
Great post. I agree that the living standard is the right approach.
+Daniel Glazman The issues enumerated at cause the number of IETF standards to be very, very small. They only really have RFCs now. RFC is the new STD.
Standards don't work well with versioning. If you do have versioning, people don't know whether you're talking about HTTP/1.0 or 1.1 (soon 2.0), although the latest is always better. The idea that you need huge changes to make a new version is also flawed, since flaws in the standard must not go unfixed, even if the fix is small.
How, then, is the snapshot model different to the DVCS-ed standard model?
Commits are the new snapshots.
IMO, increasing the speed at which a standard changes does not automatically improve it.
@Thaddee Tyl "Commits are the new snapshots" - I agree.

You know what's better than a versioned spec to work against? A well maintained, current test suite. User agents should be implementing against a good shared test suite.

The reality is that nothing is ever perfectly stable. Pretending that it's safe to work against some arbitrary version is silly.

Working against stale versions of a spec is only going to introduce bugs in the user agent which are already fixed in the current spec, and delay your implementation of new features.

Yes, the spec is a moving target. That means you'll always have new features to implement and old bugs to fix, but if we could all work against a common test suite (and contribute updates and fixes to that), I think that a lot of our problems would go away.

This debate is effectively over. Agile and CI have won. Now let's make that process better.

Still want a version? Use a commit hash.
+Daniel Glazman The IETF is even worse at this than the W3C. But that's a topic for another post.

+Danny Ayers Versioning in a standard that's implemented incompletely over time by user agents that are released on schedules that don't match the schedule that the standard uses (as in the case of browsers and Web standards) just doesn't work. Many others have discussed this to death already, so I won't go into the arguments for that here. But given that: let's say a legacy Web tech specification somewhere, call it SU, defines 1 rad = 180/π, and that that's considered a bug. Either enough software has shipped with this that there's Web pages that depend on it, or it's still in the experimental stage and few if any pages depend on it. In the latter case, we can change it, no problem. In the former case, we simply can't change it. It doesn't matter how many people think it's wrong. Reality has forced the mistake to be the way it is. End of story. There are countless examples of such mistakes on the Web platform; my favourite, because it's entirely my fault, is that the new "pushState()" API has three arguments and the second one is completely pointless and has no effect. But we can't remove it, because many sites on the Web depend on the third argument being the third argument and not the second.
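The pushState() point above can be made concrete with a small sketch. This is not browser internals, just a hypothetical mock illustrating the constraint: a conforming implementation must accept the second argument (the never-used "title") and then ignore it, because deployed pages pass the URL in the third position and rely on it staying there.

```javascript
// Hypothetical mock of the session history, mirroring the shape of
// history.pushState(state, unusedTitle, url). The second parameter
// is accepted and then ignored -- it can never be removed without
// breaking pages that put the URL third.
function makeHistory() {
  const entries = [{ state: null, url: "/" }];
  return {
    entries,
    pushState(state, _unusedTitle, url) {
      // _unusedTitle is deliberately never read.
      entries.push({ state, url });
    },
    get currentUrl() {
      return entries[entries.length - 1].url;
    },
  };
}

const h = makeHistory();
// Existing pages call it like this, so the argument order is frozen:
h.pushState({ page: 2 }, "ignored title", "/page/2");
console.log(h.currentUrl); // → "/page/2"
```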

It's quite possible that this model is not appropriate for nuts and bolts. I'm only arguing it in the context of Web software.

Commits aren't like snapshots. With commits, you don't go through an LC/CR/REC cycle like with snapshots. That's the key difference. (The HTML Living Standard goes through many commits a week. We're averaging about 3.2 commits a day.)

+Mike Amundsen Changing a standard does not automatically improve it, regardless of the speed. But I don't see any reason why, once we have established that something does need changing, we would ever want to delay making the change. All that having known bugs can do is cause implementors who aren't paying close attention to the process to end up implementing it incorrectly. Case in point, the <q> element: we had changed it in the HTML5 spec to be more compatible with IE, but Microsoft were still paying attention to the HTML4 spec, since this was before the W3C changed their mind about whether HTML was dead or not. So Microsoft implemented <q> the old way, rather than the new way. And so we ended up having to change <q> back again. Had we been able to make the change to the HTML4 spec in place, like we can now with the contemporary HTML standard, we wouldn't have had this problem.
(Test suites are absolutely key here, couldn't agree more.)
+Ian Hickson "But I don't see any reason why, once we have established that something does need changing, we would ever want to delay making the change. All that having known bugs can do is cause implementors who aren't paying close attention to the process to end up implementing it incorrectly."

first, thanks for the response.

second, these two sentences assume that "something does need changing" and "known bugs" are equivalent statements. surely you do not think this is true in all cases, right? IOW, do you think the only things that need changing are "known bugs"?

third, "I don't see any reason why...we would ever want to delay making the change." really? possibly making the change now would disadvantage a partner/competitor. would undo hard work in another related area, cause other changes to become invalid or unstable, etc. etc. etc. i can rattle off quite a few reasons to wait on making changes. i suspect others can, too.

finally, i will observe that it is possible that speeding up the rate at which change happens can introduce a greater number of "mistakes" along the way; things that seemed right at the time, but now (after some experience and contemplation) should be done differently. this can make it seem like the ability to "go back" and "fix bugs" is quite compelling. however, additional "known bugs" is likely a result of the increased rate of change and not necessarily an argument in favor of maintaining an increased rate of change.
+Mike Amundsen As I noted in some of the earlier comments and in the OP, there are three kinds of (normative text) changes that apply to a Web specification: new features, changing the design of a feature that is still under development and not shipped, and fixing the spec to match reality. For the third kind, which is the only kind that matters in a discussion of "snapshot model vs living standard model", the changes, once we know we have to make them, are by definition "known bugs".

I don't see how delaying a spec change of this nature could ever be good for the Web. Can you give a concrete example?

I agree that the first two kinds of changes I mention, especially the first one (adding new features), are not things that we should do too quickly. Feature design and integrating implementation experience are things that we should do in a very careful and considered manner. The original context of this conversation was actually my saying that we shouldn't be making a particular change now, but should instead wait for more implementation experience, and +Danny Ayers asking what the timetable for making such a change would be.

In practice, the speed at which changes of the first two types happen is more a function of the browsers competing with each other and with other platforms and thus pushing the envelope at the rate they can implement, than anything driven from the spec side, at least for HTML.
+Ian Hickson " there are three kinds of (normative text) changes that apply to a Web specification: new features, changing the design of a feature that is still under development and not shipped, and fixing the spec to match reality."

"match reality" only applies to the last of these?

maybe a less value-laden list would be:
- adding new features to the working set w/o declaring them as part of the standard
- modifying existing features that are already part of the working set but not yet declared part of the standard
- modifying features after they are declared part of the standard.

"For the third kind, which is the only kind that matters in a discussion of "snapshot model vs living standard model", the changes, once we know we have to make them, are by definition "known bugs"."

It is your assertion that any change made to features that are already part of the standard are by definition "known bugs.", correct?

Possibly you mean to say, "We should only change features that are part of the standard when it is determined that they are, in fact, 'known bugs'."
The snapshot model is appropriate when there are a large number of vendors to implement the standard, or a pyramid where implementations move slowly through a chain from a smaller number of base libraries to a large number of endpoints.

HTML was originally something that was implemented multiple times, and so having consensus built and acted upon slowly was appropriate.

Now with the size of HTML5 (and its inclusion of everything from IE5 bug compatibility to SQL), and with the serious set of performance and implementation expectations of all the ancillary technologies, the reality is that there is a very small number of HTML implementations, especially HTML5, and there are unlikely ever to be any that aren't from the incumbents or built on OSS versions of their technology.

The W3C was originally formed to give both vendors and users of web technology a voice, but in recent years, the economic imperative has driven the majority stake over to vendors, and users are expected to make their peace with what they can get, and argue occasionally for changes in the details.

In this new scenario, changing the behavior of HTML5 depends only on the coordination of a small number of players. Whether the W3C or WHATWG (or maybe just IRC among a few friends at different companies) doesn't really much matter. Economically, big players in a multi-billion dollar emerging mobile and device market, which is where the growth is, and big players in established web desktop (and increasingly video media) markets are going to act in their own economic self interest -- no surprise here -- and if that means not breaking most things and adding features incrementally as they get +1 votes on IRC from 3 people, that's what will happen. Call it incrementalism, or call it cronyism, or call it realpolitik, but it's what we have.
+Mike Amundsen well there is no "reality" until there's shipped implementations, so yes. Until then it's all science fiction.

In practice there's no "declare part of the standard" step that you (as a spec author) have any control over. It's entirely in the hands of the implementors and authors. Once a feature has shipped and is used, it's de facto part of the platform, and you're constrained. Until then, it's nothing, and you can do what you want. This is one of the things that the W3C (and IETF) processes get completely wrong. It doesn't matter how long you keep something in "WD", if it gets implemented, you have a standard, end of story. Similarly, if you go to REC but nobody comes to implement, you can still completely change it (and likely will have to, to make it something people do want to implement).

The CSS working group is currently going through a debate related to this: they have this model where until something is Officially Ready, the implementors are asked to use prefixes on the draft property names if they implement them. The result has been that many sites on the Web now use CSS properties with "-webkit-", "-moz-", and "-ms-" in their names, and that there are even browsers implementing properties with the prefixes of other vendors. It's quite the mess.
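The prefix mess described above forces authors into a probing pattern: try a property under each vendor's prefix until one sticks. The sketch below is hypothetical (real code would probe camel-cased properties on an element's `style` object; here a plain list of supported names stands in for the engine), but it shows why prefixed names, once probed for by deployed pages, become part of the platform themselves.

```javascript
// Hypothetical sketch of prefix probing. `supportedProps` stands in
// for the set of property names a given engine recognises.
const PREFIXES = ["", "-webkit-", "-moz-", "-ms-", "-o-"];

function findSupportedProperty(supportedProps, property) {
  // Prefer the unprefixed name (the empty prefix comes first),
  // then fall back through each vendor prefix in turn.
  for (const prefix of PREFIXES) {
    const candidate = prefix + property;
    if (supportedProps.includes(candidate)) return candidate;
  }
  return null; // not supported under any known name
}

// Simulated engine that only shipped the -webkit- draft name:
const engineProps = ["color", "-webkit-transform"];
console.log(findSupportedProperty(engineProps, "transform"));
// → "-webkit-transform"
```

Once enough pages contain probes like this, an engine that never shipped `-webkit-transform` is pressured to implement a competitor's prefix anyway, which is exactly the mess described above.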

Once something is part of the platform, we can only change the spec so as to make the spec match what the implementations do.
+Leigh Klotz, Jr. We've moved more slowly in the last five years than we did in the first five years of the Web. The HTML spec doesn't contain SQL. IE5 compatibility doesn't make the spec any bigger than it would otherwise (everything has to be specced, whether it is specced to do A or B doesn't change its size). Vendors have always been the main voice. Implementors have the ultimate power because they are the ones who, at the end of the day, decide what ships and what doesn't. This hasn't changed since the W3C was created (just look at HTML3 vs HTML3.2). There are literally two orders of magnitude more people involved in HTML's development today than there ever were in the 90s — compare the size of the acknowledgements lists for HTML4 and the contemporary spec.
Sorry for not noticing the IE5 and SQL issues, but I was pretty sure the SQL API was still in HTML5. But in any case, those were just illustrative points of its size, which is something you agree with, that it's a big spec. I'm not attempting to argue with you on whether incrementalism or snapshotting is better; I'm just pointing out that the versioned-spec approach was designed for a different environment than the one that we operate in today.
+Ian Hickson

Your reply suggests the following is a better rendering of your list:
Fictional Phase:
- adding new features that are marked as "not shipped" to one or more implementations
- modifying features marked as "not shipped" in one or more implementations
Reality Phase:
- modifying features that are marked as "shipped" in one or more implementations

"if it gets implemented, you have a standard, end of story."
Basically, what appears 'in the wild' is, without question, the "standard" itself, correct?

If this is the case, I am not clear on what it is the WHATWG is providing. Certainly it is not the work of creating a standard; as you've already pointed out that work is happening elsewhere.

Possibly you mean to say that WHATWG's only role is in the "reality phase"? IOW, the WHATWG exists to deal with modifying features that are marked as "shipped" in existing implementations.
+Leigh Klotz, Jr. I'm willing to concede that the snapshotting and versioning systems made sense when they were started; they certainly make sense in other contexts (e.g. nuts and bolts, as mentioned earlier, and indeed in almost any hardware situation where reality doesn't tend to evolve over time). In retrospect, though, I would say that by the time I got involved in the Web, namely about 1999/2000, the versioning/snapshotting models were already an ill fit to the market. It took me another half decade or more to actually recognise this, though.

+Mike Amundsen You need a spec even for stuff that's shipped, to help implementations converge. (The alternative, which is what we had in the HTML4 days, is that every vendor reverse-engineers every other vendor, which (a) is significantly more expensive for everyone involved, (b) raises the bar for entering the market, and (c) can sometimes result in a kind of oscillation where two vendors both imperfectly reverse-engineer each other each time they ship a version, with each cycle introducing yet more quirky behaviour to the platform. Some of the platform's worst warts are the result of (c).)

The WHATWG (and the W3C) work does all three stages, from coming up with new features to evolving them based on experimental implementation experience, to describing shipped features. New features that are specced (as "fiction") become standard (shipped) features when the browsers implement and ship them and authors use them.
+Ian Hickson "You need a spec even for stuff that's shipped, to help implementations converge." so you've slipped into 'spec' and not 'standard' here; is that significant? do you think these words are interchangeable in this discussion?

"The WHATWG (and the W3C) work does all three stages, from coming up with new features to evolving them based on experimental implementation experience, to describing shipped features. New features that are specced (as "fiction") become standard (shipped) features when the browsers implement and ship them and authors use them."

again, if i understand you here, the WHATWG is not really involved in setting standards. it may
1 - "come up w/ new features" (presumably for implementors to adopt; i assume the WHATWG is not the sole provider of these new features, right? Implementors come up w/ new features, too. In fact, it's not clear to me what it means for the WHATWG [independent of implementors] to "come up w/ new features").
This is the "adding new features that are marked as "not shipped" to one or more implementations" part, right?

2 - "evolv[e] [features] based on experimental implementation experience" (not clear how this works, who is doing the evolving here? not WHATWG, right?)
this is the "modifying features marked as "not shipped" in one or more implementations", correct?

3 - " describing shipped features" (so the WHATWG is in the 'recording secretary role' here; you document what others have already done, correct?)

I didn't see where "modifying features that are marked as "shipped" in one or more implementations" fit in your latest list. what am i missing here?
Ian, smaller, more modular specs can be more stable. They tend to stay that way until they are obsoleted by events, though often even in those cases they live on. Many RFCs are now obsolete, or are superseded by other RFCs, or are downright useless, but if you have to interact with a system that's old (i.e., if you work for a non-web company), then those fixed points are invaluable. Web software and content (which is of course software) moves faster, and I certainly believe it moved more slowly before the Y2K timeframe you cite, and I have no quibble with you on that point.

However, while I think you have a valuable perspective and an important role in HTML5, your viewpoint doesn't have universal applicability even to standards and specs for things-that-interact-with-the-web.

A case in point: when I observed in 2007 or so that XML would not go away, even if JSON became the way that data worked for web browsers, a web-startup guy said to me, "What else is there?" When I asked him about oil drilling data (Energistics) or financial data, I got a classic eye-roll.
By "specification" I mean a document that describes how to implement a contrivance (e.g. software program, hardware device, data file) that interoperates with others in order to perform a particular task (e.g. a document that describes how to write Web browsers and Web pages such that any browser can display any page in an equivalent fashion).

By "standard" I mean a specification that a group of implementors all purport to follow.

By "reality" I mean the constraints that implementors operate under, typically, in the case of the Web, due to existing content, and the resulting decisions they are forced to make that might disagree with the standard they are attempting to follow.

Spec authors such as myself, +Anne van Kesteren, etc, come up with new features based on input from implementors, authors, users, etc, and describe them in specifications. (This is #1 above.) If we do a good job, implementors look at these specs and at the feedback they're getting from authors and users, and decide to implement them. They then usually report back to the spec author (via the mailing lists, usually) to describe their experience implementing the spec, and then the spec author will take that feedback and update the specification accordingly. (This is #2 above.) Finally, the implementors will update their software based on those changes, and ship it. Later, it may be found that exactly following the spec is not possible while being compatible with deployed content; implementors report such problems to the spec author, and the spec author updates the spec to match this feedback, so that other implementors, when they try to implement the spec, don't have to find that they can't implement the spec. (This is #3 above.)

The input based on which spec authors come up with stuff to spec (#1) might be based on all manner of things, in some cases it's just use cases, in some cases it's straw man proposals, and in other cases it's experimental work that browser vendors have already implemented without any specification.

Note that I don't think any of what I've been saying here applies any differently to the W3C, the WHATWG, or indeed the IETF, ECMA, or many other groups involved in Web technologies. It happens that the WHATWG is already explicitly following the model I describe, whereas the other groups are either following other models or (in practice) doing hybrids where they claim to have "draft specs" and "final standards" but where implementors are really only looking at the drafts and acting as if they were living standards.

I hope this clears things up. Let me know if anything is still unclear or if you disagree. :-)
+Danny Ayers I disagree with the premise of your last comment. Browser vendors (implementors) have always had ultimate power over the spec's UA conformance criteria (again, see HTML 3.2 for an example of this from before I was around, or indeed look at Tim's original spec). Content creators are significantly better represented today in the relevant working groups than they ever have been, both in proportional terms and in absolute terms.

+Leigh Klotz, Jr. I don't see any interesting difference between a stable subsection of a large document that has sections that are still being polished and new sections that are still being developed, and a stable document amongst a multitude of documents of which some are still being polished and some still being developed.

In fact the difference is IMHO completely mechanical. As an example of this, consider the WHATWG spec, which is one monolithic document at the WHATWG, yet consists of half a dozen or more specs at the W3C. The text is exactly the same. The stability of each section is exactly the same. Personally I prefer having fewer larger documents, but it's just a personal preference, there's no real difference between them.

If you write a spec as HTML is being written, then the most modern spec should be all you have to look at to get your decade-old content to work. The current HTML spec, and its dependencies and sibling specs like CSS, are sufficient to make a browser that renders a document from 1992 with full fidelity. No need to refer to "stable" specs from the 90s. In fact, the specs from the 90s would be much worse, since they are less precise and less accurate.

XML is effectively developed using the living standard model (the spec is in-place replaced with technically incompatible revisions to bring the spec more in line with reality). I don't think it's a good argument in favour of snapshots.
+Ian Hickson +Michael Hausenblas the different models have different strengths and weaknesses, and I don't think a single W3C-wide policy of one over the other would be helpful.

In the XML Activity we really have both kinds of development going on, and I think for good reason : XML itself is like nuts and bolts and sock sizes, and not a place for innovation or new features. There are literally tens of thousands of implementers, many of whom have never even heard of W3C and/or work from secondary sources such as printed books. Languages like XQuery, on the other hand, are living, with new features being implemented rapidly, and probably fewer than 100 primary implementations. New features proposed in XQuery often make their way into implementations very quickly.

You're right that both W3C and the IETF processes grew out of the long-discredited Waterfall model of software engineering, but to the extent that people want standards at all, there are still needs for knowing "I conform to version X", even if "X" is a date-stamp or a git/cvs/hg release tag.
+Ian Hickson I appreciate your willingness to answer my questions; it has been a learning experience for me.

"By "specification" I mean a document that describes how to implement a contrivance (e.g. software program, hardware device, data file) that interoperates with others in order to perform a particular task (e.g. a document that describes how to write Web browsers and Web pages such that any browser can display any page in an equivalent fashion)."
With this I understand that the WHATWG is in the business of authoring specifications (not, as I assumed earlier, in "setting standards").

"By "standard" I mean a specification that a group of implementors all purport to follow."
This statement seems at odds w/ your earlier assertion that "if it gets implemented, you have a standard, end of story." I'm not sure how to reconcile the two. IOW, i get the impression from your earlier comment that standards are (in your view) de facto. But this latest statement sounds to me as if you are saying that standards are by agreement. Frankly, I have no idea how to parse "purport to follow" in your context<g>. Again, despite whatever confusions I may have here, I get from your statement that the WHATWG is not "setting standards" here.

"By "reality" I mean the constraints that implementors operate under, typically, in the case of the Web, due to existing content, and the resulting decisions they are forced to make that might disagree with the standard they are attempting to follow."
Here, I can relate your definition back to your comment earlier: "fixing the spec to match reality."

So, I understand this all to mean that, from your POV, the WHATWG
1 - creates specifications
2 - waits to see if they become standards and, after they 'become standards' (I am still vague on how that happens),
3 - changes some of the existing specs in cases where the implementors did not follow the spec.

I also understand that it is your assertion that what is described here is the same process used by other organizations (you mention IETF, W3C, and ECMA), even if they make statements to the contrary. I refer to statements such as "where they claim...but where implementors are really..." and (earlier) "This is one of the things that the W3C (and IETF) processes get completely wrong."
+Ian Hickson I like your view on combining modular specs into one document. It's something we're doing as well in another WG. I like your view that adding a new module is evolutionary, and I like your view that you should try not to break existing modules when you add new ones, and preserve the meaning of work that was done prior. I like your commitment to getting behaviors and interfaces that constitute "web browsers" nailed down. I don't, however, come to the conclusion that a document that conveniently rolls up the adopted pieces into a single file is a "living specification," but I can see how that's the viewpoint you have. I would quibble that the rolled-up document is the "Web Browser" spec and not the "HTML" spec, but I can see that you probably consider that distinction to be at best irrelevant.
+Mike Amundsen By "if it gets implemented, you have a standard" I mean that specifying something that doesn't match what is implemented is futile. That is, if you write a spec, and then someone implements it (and it gets shipped and used) then you are now in mode #3 and can no longer make the kind of changes one can do in mode #2, regardless of whether you'd like to or not. For example, it would be futile for us to specify that document.lastModified return an ISO8601 datetime stamp; reality dictates that it's a US-style date, whether we like it or not. Pages depend on it, browsers implement it, that's what it is.
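The document.lastModified example can be illustrated with a sketch. Anything that wants an ISO 8601 value has to convert the US-style "MM/DD/YYYY hh:mm:ss" string after the fact; the function below is a hypothetical illustration (the input string is made up, and time zone handling is ignored for simplicity), not part of any spec or library.

```javascript
// document.lastModified returns a US-style "MM/DD/YYYY hh:mm:ss"
// string, and (as argued above) that can never change now. A sketch
// of converting such a string to an ISO 8601-style local timestamp:
function usDateToIso(lastModified) {
  const m = lastModified.match(
    /^(\d{2})\/(\d{2})\/(\d{4}) (\d{2}):(\d{2}):(\d{2})$/
  );
  if (m === null) return null; // not in the expected format
  const [, month, day, year, hh, mm, ss] = m;
  return `${year}-${month}-${day}T${hh}:${mm}:${ss}`;
}

console.log(usDateToIso("01/09/2012 14:30:05"));
// → "2012-01-09T14:30:05"
```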

By "a specification that they purport to follow" I mean that it's the specification that they agree describes, in principle, what they think they should all do. An example of something I would say is a standard would be the DOM4 Core specification. It's still being tuned, and features get added occasionally, but by and large browser vendors agree that it's the best description of what browsers do, should, and will implement. An example of a specification that isn't a standard by this definition would be ISO/IEC 15445:2000(E), also known as ISO HTML. No browser vendor that I'm aware of is in any way attempting to, or even claiming to, implement it. Instead, they look at the contemporary HTML spec (either in the form of the WHATWG HTML "living standard" or the W3C HTML5 "editor's draft").

We don't wait for specs to become standards. Whether a spec is a standard or not is a function of how good a job the editor is doing, in practice. When HTML4 was the last word on HTML, it was the standard; eventually the new HTML spec became the standard. If I start doing crazy stuff with that spec and someone else does a better job, the browser vendors will say "screw Hixie, he's nuts" and move to the other guy's spec, and then that'll become the standard.

So if a spec writer is doing a good job, he'll get buy-in and he'll be setting the standard. If a spec writer is not doing a good job, he'll just be writing science fiction. (XHTML2 is an example of that. It was a spec, but never got buy-in and so never became a standard. XForms is an example of something that was both for different communities: for certain implementors, it's their standard; for Web browser vendors, it's not relevant, not a standard for them, even though its writers would have liked it to be.)
You can have standards that are developed using the snapshot model (often known as the waterfall model, as +Liam Quin correctly points out), and you can have standards developed using the "living" model. It's just a question of which is the spec that has the most buy-in from the relevant implementors. My thesis here is just that, for Web technologies, explicitly adopting the "living" model serves the Web better.
+Liam Quin any claims about conformance to a particular spec can't seriously be made in relation to the spec's version or commit revision or similar. Such claims can only seriously be made in terms of the results that a particular implementation gets when tested against a particular test suite revision. Sadly, on the Web, our test suites are still quite inadequate for these purposes.

(In the "living standard" model, a claim that a browser released at the time that revision r1234 was current implements revision r1234 may in fact be less true than the claim that the browser implements r1235, if the delta from r1234 to r1235 was to fix the spec to better match what that browser did after it was released, which is often what happens. In practice, therefore, it's better to just claim that one implements particular features, and not bother relating it to a specific specification revision.)
+Ian Hickson I am happy not to have XForms in HTML5. In today's world, XForms is just another MVC framework, and HTML5 is a rich playground for all of us. There are and will be other MVC frameworks with other syntaxes, some decorative javascript, some made-up attribute names. They can all coexist on the web. JavaScript is fast enough to do what we need now (nod to Fabrice Bellard who ported his x86 qemu to it so I can run Linux in my browser) and webapis does a fine job specifying IDL and events, on which XForms interoperates as well: so thank you, we are happy.

A case in point: I added local:// as a submission scheme and now we can use HTML5 local storage transparently! Again, thanks for the toys, we like them.
I'll join +Michael Hausenblas in wishing that the W3C had a Living Standard model. It's very clear that even small, simple specifications change over time. Fixed documents only become out of date and confusing to implementors and users. Not saying that it should be the only model, perhaps, but it should exist.

These opinions are my own with my Turtle/TriG editor hat off but still in hand
+Ian Hickson well...

"By "if it gets implemented, you have a standard" I mean that specifying something that doesn't match what is implemented is futile." so, when you offered this up earlier when I asked about a definition of standards, you were really just tossing off a comment about futility?

"By "a specification that they purport to follow" I mean that it's the specification that they agree describes, in principle, what they think they should all do." I really don't know what to do w/ this statement; not sure I can find a single measurable thing here.

" An example of a specification that isn't a standard by this definition would be ISO/IEC 15445:2000(E), also known as ISO HTML." since the words "International Standard" are in the ISO HTML document, I can only assume you think the writers of that document are mis-using the term in a fundamental way.

"Whether a spec is a standard or not is a function of how good a job the editor is doing, in practice." huh.

While I have no doubt you know what these assertions mean, I'm not able to follow them. Thanks for your time, tho.
+Mike Amundsen The word "standard" is used by various people to mean various things. Some people use it to mean "a document whose authors hope people will obey". I'm using it to mean something stronger, namely "a document that people use as their guide where possible". ISO seem to use it in the former sense (in some cases, though not in the case of their ISO HTML spec, they then back up their hope by the force of law). To be honest, any time anyone self-describes their work as a standard, it's always a description of what they hope will happen. It's not a misuse, just a different definition. Heck, even the WHATWG spec says "Standard" on it, following the same "hopeful" definition.

I don't think the definitions of the terms "standard" and "specification" have any bearing on the point of the OP, though. The point is just that it's better to keep the documentation updated in place rather than having periodic releases.
+Ian Hickson a product can claim conformance to a particular version of a spec for sure. A conformance test suite might help substantiate that claim. W3C is, of course, (as you know but others reading the comment might not) explicitly not in the business of conformance testing; last time I checked (some time last year) standards organizations that did do conformance testing typically charged anywhere from $10K to a few hundred thousand dollars to test and certify a product. It's a good revenue source and helps to eliminate that pesky open source stuff ☺ for which reason I've always fought it. Instead, W3C test suites are to test whether the spec can be implemented.

For a "waterfall spec", if there's an error, errata are issued, and the vendor is supposed to document which errata they have applied. For a "living" spec it becomes, as you point out, less clear, but the information is still useful for people actually using the product, especially if the product does not have continuous development/release, but, say, a five-yearly release cycle, like some of the large commercial database products that implement XML Query.
In theory, waterfall+errata, if actually applied consistently, is just the same as a living standard, except without an easy-to-read spec. Having an easy-to-read spec is probably the most important thing one can do to get interop after having a test suite. So I think that's a distinct win for the in-place updates model.

In practice, for Web tech, specs developed with the waterfall model don't have errata applied consistently, nobody reads the errata that are applied, and nobody makes conformance claims that mention the errata. This is true of RFCs, HTML4, CSS2, the DOM specs, etc.
"nobody" is too strong a word - it very much varies by technology. I've been surprised at how much the various XML activity errata do get read, for instance. Agree it's a spectrum with fuzzy cat-fur (and sometimes loud cat-fights) in the middle.
By "web tech" I'm referring more to the front-end stuff that's sent over HTTP. CSS, DOM, HTML, HTTP, XML itself, JSON, JS, that kind of thing. For those, they really don't get read. Even I, who wrote a number of the errata items for CSS and even maintained the list for a while, IIRC, used to forget about it (it's why we did CSS2.1, and why even before 2.1 was a REC I would point implementers to the 2.1 "draft" rather than CSS2). I don't have any experience with the back-end stuff, so I'll defer to you on that. :-)
I very much enjoyed the conversation so far and learned a lot, thanks +Ian Hickson, +Liam Quin, +Mike Amundsen, +Leigh Klotz, Jr. etc.

Now, I'd like to formulate my take-home message: on the Web (aka in the cloud or whatever your preferred expression for the thing that's built around HTTP, URLs and HTML is) we can identify an emerging pattern which I call cycle contraction.

It essentially means that the life-cycle of X - where X can be any of 'app development and deployment' [1], 'development of a standard' [discussion here], 'figuring the market fit in a start-up' [2], or 'on-line discussions and news broadcasting' (nowadays mainly on Twitter, G+, etc. - RSS is dead), and so on and so forth - contracts in the sense of: from long turn-around times and feedback loops to quick ones and the ability (not obligation) to 'ship 24/7'.

The Web as an ecosystem is both partially the reason and also the enabler of the cycle contraction. Not sure where it leads us, but, as the Chinese say: exciting times ahead!

+Ian Hickson ☺ I can't easily explain why it's different, except that "back end" isn't "cool" these days, and is often more in the "civil engineering" mindset than the "graphic design/UX/performance" mindset. Dunno.
+Danny Ayers The snapshot model implies a kind of cycle where the spec text is considered to go through various maturity levels, which is different to how the "living" model works. But more importantly, I think the key point is that claiming conformance to one revision doesn't make a meaningful statement about what the software does, in practice. Microsoft claimed to be "100% compliant to CSS1" back in the late 90s, and they meant it, but if you were to write software to the CSS1 spec, it would not interoperate with that version of IE as well as if you wrote software to the CSS2.1 spec, where we fixed the specification to take into account mistakes in the spec and in the implementations. Similarly, if you wanted to be interoperable with software that claimed to implement HTML4, you'd actually be better off implementing the contemporary HTML specification than the HTML4 specification, because all the browsers that claimed to implement HTML4 did it in slightly incorrect ways that have since been taken into account in the contemporary HTML specification and have never been fixed in HTML4.
By "snapshot", in this thread, as explained in the OP, I'm specifically referring to the baggage that usually comes with it. I'm not aware of anyone (short of the WHATWG, and even then only in Subversion) providing copies of the spec at every point. I don't deny that doing so is useful, especially for the people developing the spec. (It's not all rosy. There's the very real risk of implementors accidentally looking at old copies instead of the latest one. We see this all the time in browser vendor land, where new hires often end up referring to old obsolete copies of the HTML spec on the W3C TR/ page.)

If someone wants to maximise interoperability, then they should just do their best. That's what all the browsers do. None of them religiously follow every last edit as soon as they are made. They just fix problems as they get around to them. Referring to an old copy of the spec with known bugs is definitely not what I'd recommend. Referring to the last REC, not even the last WD, would be even worse.
+Ian Hickson people refer to old versions all the time; I came across an HTML 5 "platform" page linking to XML 4th edition the other day instead of 5th (and XPath 1, but that's another story). I wish the W3C's /TR page only had the "latest version" link and you had to go through the "previous version" links to get the others, with the default link being undated... it'd really help encourage decoupling of specs.
We've actually got a whole bunch of tooling around the WHATWG spec — you can subscribe to specific topics to get notified when relevant sections change, each checkin is annotated with what conformance class(es) it affects, there's a Web interface to go through recent checkins filtering out editorial changes, there's the semi-automatic coupling with Bugzilla both for filing bugs and for tracking which checkin relates to which bug, there's per-section stability markers, there's the in-spec definition back-references, there's the update notification UI when you have the page open, etc. I'm always open to more ideas, especially any that might help relevant implementors get better interop, and especially any that come with volunteers. :-)
+Liam Quin "I came across an HTML 5 "platform" page linking to XML 4th edition the other day instead of 5th (and XPath 1, but that's another story)."
I think that's deliberate, because it's what browsers implement.
@Daniel Glazman The snapshot model makes more sense for most IETF RFCs as they are lower level. How often does TCP change or add new features? TLS? HTTP?
+Yuhong Bao Actually HTTP is a great example of the snapshot model failing. For over a decade there's been this specification out there that's woefully inaccurate, and now that people are working on fixing it, their work is hidden off on the side instead of replacing the existing stuff in place.
On the other hand, once it becomes an RFC, how often does it have to change afterwards? HTML, on the other hand, is constantly evolving.
I don't know why RFCs would be any more free of problems and lacking in evolution than any other specs. On the contrary, I think there's plenty of evidence that this isn't the case. URLs and HTTP, the main technologies of the Web that are in RFCs, have been in desperate need of work for years, and I see no reason to believe we'll magically reach perfection next time we publish either.
HTTP is implemented in web server software and no one handcrafts HTTP requests/responses, for one thing. That is what I mean by "lower layer". HTTP and URLs are simple enough that once a spec is created that matches implementations, errors can be dealt with in errata, while HTML/CSS is much more complex.
I'm seeing a couple of problems with this. One is the assumption that the "living standard" model is inherently better than the "snapshot" model based (from what I can see here) on people who have chosen to use the "living standard" model observing a lack of progress in specific processes that use the "snapshot" model.

I won't deny that the "living standard" model has major advantages for the people who curate and develop specifications, but it comes at a significant cost to those who implement those specifications, and people who rely on those implementations. Specifications exist to serve implementors, not those who maintain them, and any decision that places the convenience of those who maintain the spec above those who implement it is the wrong one.
To see why you need versions for the HTML standard, just look at the compatibility legend on a page describing a feature. In the real world, developers currently track where they can use a particular feature by looking at a little grid of icons, ostensibly representing all the browsers they might ever want to support. Do PS3 and Wii browsers support this feature? Who cares! Probably there aren't a lot of those.

Also note that despite the attempt to re-brand as simply "HTML", there is still a hunger for version numbers among working content authors. They want to know what's changed in a sensible summary that doesn't involve being glued to their RSS feeds all day long, and w3schools reflects this with their very consistent usage of "HTML5", along with helpful break-out boxes everywhere that say "Differences between HTML 4.01 to HTML 5".

What this interop legend should look like is a box that says "HTML Version: 6.2", with the clear implication that any browser supporting 6.2 or later will have this feature, and it's up to you to find out what version your particular vendor claims to support. This is really what makes it possible to author a standard entirely by liveblogging it: the incredibly small number of credible browser vendors.
Browsers don't implement features by specification version, and never have. They pick and choose, implementing the most popular stuff first, even if it's in the latest draft, and implementing the less popular stuff later, even if it's been in versioned standards for over a decade.

So at least on the Web, I don't buy that implementors pay a "significant cost" from not having versions. On the contrary, it's the spec authors who do. But it's a cost we should pay, to better serve the implementors.
It used to be possible to look at the doctype declaration of a document and compare it with version X of browser Y to determine whether support could be relied on. As of now, all we know from the doctype is that the document claims to be HTML 5+. Might not be an issue at the moment, but in ten years' time when the "living standard" has grown and changed beyond all recognition? There'll be no way for a browser to look at a legacy document and determine that at an arbitrary point in time (which could be any time between now and then) it was conformant. If the information in an HTML document has not changed between now and then, there's no reason that the markup should be forced to change - rather, browsers should be able to sniff the doctype and parse it as HTML4, 5, 6 or whatever rather than play a guessing game with ten years' worth of a "standard" that's never formally defined.
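For readers unfamiliar with the contrast being drawn here: HTML 4.01 doctypes named a specific DTD, while the HTML5 doctype carries no version at all. The sketch below is my own hypothetical illustration (the function name and return strings are invented) of how little version information the modern doctype exposes:

```javascript
// Hypothetical illustration: what version information a doctype string carries.
// An HTML 4.01 doctype names a specific DTD; the HTML5 doctype is versionless.
function doctypeVersion(doctype) {
  if (/-\/\/W3C\/\/DTD HTML 4\.01/.test(doctype)) return "4.01";
  if (/^<!DOCTYPE html>$/i.test(doctype)) return "5+ (unversioned living standard)";
  return "unknown";
}

doctypeVersion('<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">');
// → "4.01"
doctypeVersion('<!DOCTYPE html>');
// → "5+ (unversioned living standard)"
```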

As to your assertion that "with a living standard, one cannot change things arbitrarily" I need only refer you to the <time> debacle.

There's a significant parallel with politics here: the primary role of a democratic apparatus of government is to make it as difficult as possible to pass new legislation, so that only what is rational, just and not significantly opposed will enter the body of law. A standards body should fulfil the same role, which your model is failing to do - if your "living standard" mentality had already been entrenched in the W3C we would all now be writing XHTML2. You may feel like your methods are the only right and appropriate way to do things, but the ease of passing amendments is a little akin to a benevolent dictatorship - you have to account for the fact that the next dictator will have the same powers and may not be benevolent. So, assume the next person to oversee the spec is an idiot, or malicious, or both. Would you be happy for them to be able to change it so easily?
+Chris Cox There are features in every version of HTML that have never been supported. The DOCTYPE has never been a guide as to what can be relied upon.

The contemporary HTML specification has been in development for 9 years now, and it still very much resembles what we had 15, 20 years ago. I see no reason why it would change "beyond recognition" in another 10 years.

There's no value in knowing if a document was conformant at some past time.

Browsers do not act differently based on the version of the document. They treat documents from 1991 the same as those from 2012. There's no reason to use a guessing game — the entire point of the specification that we have now is to make that unnecessary, by defining exactly what browsers have to do to support all documents.

I don't understand your comment about XHTML2. Browsers don't magically follow a spec just because it has no version number. They follow a spec because they agree that it's what they want to implement.

This is one of the most important and key differences between government and standards development. In government, the elected officials, or the dictators (benevolent or not), have the power of the military, the law, and the police to back them up. They win any battle short of all-out violent rebellion or civil war, because they can imprison you or kill you if you disagree. This is radically different than in standards, where the spec writer has no meaningful power. If implementors disagree with what I put in the spec, then they just ignore me and do something else. (This happens all the time with Web specs. It's why XHTML2 failed, and would have done so regardless of whether it was done using the "living standard" model, or the snapshot model. Indeed, since it never got to REC, it was effectively a living standard the whole time it was being developed.)

So I have no problem whatsoever with the idea that the next person to oversee the spec will be an idiot, or malicious, or both. If they are, they'll just be ignored. (This, incidentally, is why it's so important that specs be easy to fork.)

Regarding the <time> issue (which was hardly a debacle; we got feedback, we processed it, we got more feedback, we processed that — it was the system working exactly as intended), it is a perfect example of when we can change things: when they are not yet implemented. It wasn't an arbitrary change, it was a careful set of changes done while we still could. If I tried to change how, say, the <p> element worked — now that would just be dumb, and it would result in me being ignored. Hence, I can't do it and remain relevant. Which is what I mean by "with a living standard, one cannot change things arbitrarily". (One also can't change things arbitrarily with a snapshot-model spec; my point is that it's no different here.)
I'd like to know that too...