Oh, please no. JSON is a data format. If you want a hypertext format there's HTML for that.
Linking in JSON. To be a full-fledged format on the Web, you need to support links -- something sorely missing in JSON, which many have noticed lately. In fact, too many; everybody seems to be piling ...
It's a data format being used in a linking way, but without the linky bits.
Some bits of the data might contain links, true. The same might be said of C structs, yet I see no such movement to make them into hypertext formats.

I'm obviously being hyperbolic, but to make the point that not every format exchanged over HTTP has to be a hypertext format ala HTML.
JSON is a typeless data format. MNOT is trying to define a "link" type for use in that data format. There is a disconnect here, and it's not having links in JSON....
+Joe Gregorio "not every format exchanged over HTTP has to be a hypertext format ala HTML." or Atom/AtomPub or VoiceXML or SVG....

oh, i see. hypermedia design work is to be restricted to only one base format.

see, i can be hyperbolic, too.
There's nothing wrong with using URLs in JSON; they are perfect as IDs for other retrievable entities. But I see absolutely no need to standardize their format: just use a string.
Adding links to JSON wouldn't make it hypermedia. Getting twitter's JSON API to contain links between tweets doesn't require (or benefit from) a common link construct.
HTML is also a data format, isn't it? So if there are links there, why couldn't there be links in JSON as well?
+Joe Gregorio Pointers in C are links that you can resolve to find other data structures. They exist because of all the advantages they bring. You can dereference them because you are in the local memory space. On the web, you need global pointers (addresses) to dereference in the global web space; thus URIs. In a sense, C (structs) is a "hypertext" format for the local program space, and URIs are data pointers in the global web space. Not using URIs in web data structures is like not using pointers in C structs, which can be OK, but should be possible.
+Joe Gregorio +Jörn Horstmann The issue with using just a string is that a client can't know that it is a URI rather than just a string; or you need a schema/documentation of your format. I agree that you might need such a thing anyway, but it eases dealing with an unknown/changed/evolving "schema" if the link can be identified as such. Moreover, you might want to add "attributes" to the link to express its context: rel (which can be represented by the JSON key), but also the content-type of the target or its language, e.g. (I admit that these variations should ideally be dealt with via conneg), and any information that can help the UA decide which link to dereference and whether it is reasonable/acceptable to dereference it. Having a "standard" or consensual way to represent such things can ease UA development.
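[Ed.: a minimal sketch of the self-describing link objects Yannick describes. The `href`/`type`/`hreflang` keys and the URLs are a hypothetical convention for illustration, not any published standard.]

```python
import json

# Hypothetical convention: a link is any object carrying an "href" key,
# optionally with metadata the client can inspect before dereferencing.
doc = json.loads("""
{
  "title": "An article",
  "author": {"href": "http://example.org/people/1",
             "type": "application/json", "hreflang": "en"},
  "comments": {"href": "http://example.org/articles/1/comments"}
}
""")

def find_links(obj):
    """Collect (rel, link-object) pairs; the JSON key acts as the rel."""
    links = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            if isinstance(value, dict) and "href" in value:
                links.append((key, value))
            else:
                links.extend(find_links(value))
    elif isinstance(obj, list):
        for item in obj:
            links.extend(find_links(item))
    return links

links = dict(find_links(doc))
print(links["author"]["href"])  # the client now *knows* this string is a URI
```

Note that "title" is never mistaken for a link: only values declared with the convention are collected, which is exactly the point being argued.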
I think I'd be happier with this proposal if it were motivated by a concrete problem. Instead of starting with a concrete problem: "we have a JSON blob which is hard to work with because it's too hard to work out which strings are links" we start with pure speculation: "to be a full-fledged format on the Web, you need to support links."

This is the kind of thinking that leads to hard to use data formats.
I've been thinking about this a lot lately, and I'm increasingly of the opinion that hyperlinks don't actually cut it for APIs. I think the JSON-Schema folks are the closest to having the right idea here. URI templates in a 'hyper' schema often make more sense. Allow URIs to be constructed from a context. More efficient wire formats, and potentially fewer HTTP requests being sent over the wire.
Re: the above, pagination and 'next' links strike me as being the most problematic example. Works OK for APIs based on continuation tokens, but terrible for seeking to page 3 or 4 directly. Still fails on continuation-token based APIs if any of the other variables that make up the URI have to change from one page to the next.

Perhaps most importantly though, it's just not how developers write code. Developers construct URIs whether you want them to or not.
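[Ed.: Bob's point about constructing URIs from a context can be sketched with a toy level-1 expander in the spirit of RFC 6570 URI Templates; the template and parameter names below are illustrative. A real implementation would also percent-encode the substituted values.]

```python
import re

def expand(template, context):
    """Minimal RFC-6570-style level-1 expansion: replace {var} with context values.
    (No percent-encoding; a toy for illustration only.)"""
    return re.sub(r"\{(\w+)\}", lambda m: str(context[m.group(1)]), template)

# Seeking directly to page 3, rather than walking "next" links one at a time:
url = expand("http://example.org/items?page={page}&per_page={per_page}",
             {"page": 3, "per_page": 50})
print(url)  # http://example.org/items?page=3&per_page=50
```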
Hmm... Atom is also a data format and we have links there... not sure saying "JSON is a data format" is a good argument against what Mark is suggesting... also, I don't see how indicating that a particular field is a particular kind of link is a bad thing. For example: http://code.google.com/apis/discovery/v1/reference.html#resource_discovery ... here you have documentationLink, discoveryLink, icons/x16, icons/x32, basePath, and.. gasp! "$ref" is there too... since we already have these things coming up in real-world applications, and since we're already seeing issues with people going off and handling it in inconsistent ways, what's the harm in trying to encourage consistency through standardization of a common approach that works?
Bob... I'm definitely starting to agree on the URI construction bit... more and more I find myself wishing that documents would hand me a URI Template that I can expand in context rather than an opaque URI that I need to follow, especially when it comes to pagination of a collection.
+Jörn Horstmann ... just using a string works for many cases, yes, but that's not really the point... going back to +Mark Nottingham's original post, he states, "My immediate use case is being able to generically pull links out of JSON documents so that I can "walk" an HTTP API" ... what is really needed for this is not necessarily a generic way of encoding links in a JSON document, but a generic way of marking links in a JSON document as I describe here https://plus.google.com/u/0/113701541741306361978/posts/QY5pmwHKtDM... The representation of those links in the document is really far less important to the stated use case than being able to recognize which fields in the document should be interpreted as links.
+James Snell, yeah, I've been working a ton on JSON Schema stuff lately outside the context of validation. The Google APIs client for Ruby uses my Autoparse library to generate data structures using the JSON schema objects found in our discovery documents. If we published URI templates akin to the JSON schema 'links' field, you'd be able to easily use the current object as a context to automatically assemble the hyperlink, while also maintaining link metadata. You'd just be moving the link metadata out of the data response and into the schema.
Also, if, as +James Snell is indicating, there's actually a use case for "walking" an API, it's dramatically easier to do this via a hyperschema, since you don't have to contend with any of the various JSON formats that might be sent over the wire with hyperlinks embedded within them.
+Bob Aman, as part of the Abdera2 update, I included the ability to use JSON objects to provide the context for URI Template expansion.. as well as the ability to translate a request URI into a template and merge it with additional context data to construct a new URI. It's definitely a much better model, I think, than trying to deal with opaque, static URIs all the time.
+James Snell +Bob Aman +1 for URITemplates. they differ from HTML.FORM@method="get", HTML.IMG@usemap, etc. only in implementation detail. they are all mutable links (parameterized links if you like); affordances returned in the response representation that give client apps (M2M or H2M) details on how to construct additional requests. in these cases the URI construction rules are generalized enough to be applicable to a wide range of use cases which means client coding is simplified along the lines of "uniform interface" rather than a specific "one-off" coding of custom URI construction rules common in many current Web API designs.
RE: "in these cases the URI construction rules are generalized enough to be applicable to a wide range of use cases which means client coding is simplified along the lines of "uniform interface" rather than a specific "one-off" coding of custom URI construction rules common in many current Web API designs."

I see very few people building these generic HTTP clients. Instead I see people writing code against a specific API and a data format that they know. With Google+ we encourage people to use our client libraries so that they can easily achieve their goals rather than having everyone write their own client libraries.

I suppose on some level there's a schism between people who feel that Atom succeeded and people, like me, who believe that building extremely generic data formats doesn't actually help specific people get specific work done. In that respect Atom, the Atom Publishing Protocol and GData failed.

As someone maintaining a feed parser I much prefer Atom but as someone optimising the developer experience of using Google+ I find that the lesson from GData is that you should give people a data format that matches the domain they're working in rather than something generic and extensible.

Simple beats generic and extensible if you want to get an API or data format adopted. I wish it wasn't true but it's true.

/cc +Michael Mahemoff +Paul Kinlan +Will Norris
+Ade Oshineye the point about developers building to specific apis and formats is certainly quite valid... that's been my experience as well. Personally, for me it's not so much about trying to come up with a generic data format that's good for everything as much as it is doing things in a consistent way to make it easier for developers to either transition from one to the other or combine different parts in new and creative ways. For instance, if we have two JSON formats that each represent links in a different way but both label their links as "myFooLink", then developers are restricted in their ability to use those things together and must rely on two completely separate code paths when dealing with each. That sucks for them, especially when there's no real reason for the JSON formats to be doing things in two different ways. Those of us whose job it is to design data formats should be striving to maintain as much consistency as we possibly can, and avoid needless redefinition of the same data structures over and over and over. Simple may beat generic, but Consistent beats everything, as it allows a developer to reuse code and knowledge they already have, even when doing new things.
Hey Joe,

Yes, we all know that URIs don't belong in data formats. Dirty me for even thinking of it.

Anyway, just putting in strings doesn't cut it. I want to create follow-your-nose APIs using JSON, and that means typed links. From what I've seen, people have been doing this (or want to do it) enough to pool resources and come up with a common way to do it. We won't hunt you down if you decide not to, don't worry.
+Ade Oshineye interesting comments. in general i have little disagreement w/ your sentiment but there are some details in your remarks with which i will disagree...

"I see very few people building these generic HTTP clients. Instead I see people writing code against a specific API and a data format that they know." while the reasons this perception is shared are quite varied, i suspect at least one of them is that there are very few opportunities for client developers to leverage generic client code. this tracks along w/ comments by +James Snell regarding consistency of execution. also, i am not sure that the lack of generic clients is a positive development, nor that it should be encouraged over the long term; in short: the existence of this situation is not, IMO, proof of its long-term value.

"building extremely generic data formats doesn't actually help specific people get specific work done." while this may be true, advocating for consistency and for increased uniformity is not a call for "extremely generic data formats." IOW, increasing the use of affordances in responses does not require needlessly obtuse implementations. the assumption that there are only two alternatives (highly specific responses that require pre-built client libraries vs. extremely generic data formats that don't help) is not a healthy POV, IMO.

"...you should give people a data format that matches the domain they're working in rather than something generic and extensible." again, this is a false choice; you can do both w/o much trouble at all. +Joe Gregorio has shown this path to be viable through the python library that relies on dynamic information on state transitions to provide additional flexibility and extensibility. there are other ways to go about this work, including the "HAL approach" by +Mike Kelly (document state transition details and hard-code them into clients), the use of "generic inline transition descriptions" i employ in Collection+JSON, and the use of domain-specific designs i use in Maze+XML. these are design choices, not true/false issues.

"Simple beats generic and extensible if you want to get an API or data format adopted. I wish it wasn't true but it's true." in my experience this "simple beats <fill-in-the-blank>" line is a common meme that is handy, but ultimately an unhealthy view of the world. also, my personal experience is that there is an arc to adoption that usually starts quite specific, trends toward general, and eventually ends at no-longer-relevant. this applies to most all engineering efforts i can think of, way back through time. some arcs run quite long (printing press), some rather short (the US water-canal system). software efforts are no different, IMO. i also find it unwise to point to some place on the arc and claim "this is where we stop, any other effort is not worth the time." IOW, attempts to freeze motion along the arc at the "early-specific-simple" phase are rarely a good idea.
By the way, from a language POV, JSON and XML are equivalent (excepting XML PIs and entities). JSON is just a concrete syntax that is easier to parse for JavaScript clients, and more concise. The ASTs for both formats are just the same. Why should links be natural to one and anathema in the other? The structures are the same; it's just different serializations.
+Yannick Loiseau I think that's only true if there are no text nodes in the XML. i.e., if you're not using it as a markup language.
+Bob Aman The equivalence also holds in that case, but I indeed referred to XML used in a structured-data representation fashion (S-expressions), not a semi-structured document representation à la HTML (i.e. the same use case as JSON here); it's still a markup language in both cases.
I think there are several subjects in the discussion here:
- is the hypermedia constraint valuable in a resource-oriented architecture? IOW, is it desirable to use a hypermedia representation for these resources
- is JSON a suitable serialization for such a hypermedia representation

My pov is:
- yes: if we are using the web, resources should be identifiable (linkable) and linked (see the REST arguments, e.g. loose coupling, discoverability, interconnection...). Otherwise, it's ORB/RPC (which can be fine in some situations)
- yes, no less than XML

Granted this POV (which is +Mark Nottingham's, if I understand him correctly), the next issue is whether, and how, we can represent these (typed) links in JSON consistently and in a discoverable manner. Two POVs again: schema-based (this attribute value encoded as a string is actually a URI), or convention-based (values encoded this way are URIs). I'd favor the convention one, since making schema understanding necessary to parse/understand a format is often a high entry barrier.

+Ade Oshineye I think we don't see many hypermedia clients because most APIs out there are not providing typed linked resources. There are some, however; see e.g. OPDS. It's a chicken/egg problem...
+Yannick Loiseau Well, there's more than one possibility w/ schemas.

You can denote, via the schema, that this attribute over here is a URI, not just a string. But that tells you nothing about what the URI is for. Is that a URI to another API endpoint? Is that an HTML page? Perhaps an image representation of something? Knowing that a string is a URI is useful, but not generally sufficient. In fact, if you're just using URIs as strings and not doing much else, convention is probably still necessary with this technique.

But there's another schema-based option. You can use a hyperschema that describes the relationships between resources rather than just the structure of the resources themselves. This is the approach I'm advocating.

Taking the Google+ API as an example, a hyperschema would allow you to trivially navigate from a post to the set of comments on that post, or the list of people who've +1'd it or shared it. This would be done by giving you URI templates with variable names that correspond to the JSON object attributes necessary to assemble the URI for the related resource. All metadata about the URI hyperlink would be stored in the hyperschema rather than in the data.
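[Ed.: Bob's hyperschema idea might look roughly like this. The schema shape loosely follows JSON hyper-schema's "links" keyword, but the field names, URIs, and the toy `{var}` expansion are illustrative assumptions, not the actual Google+ API.]

```python
import re

# Link metadata lives in the schema; the data object supplies the
# template variables. Nothing link-shaped appears in the response itself.
schema = {
    "links": [
        {"rel": "comments",  "href": "http://example.org/posts/{id}/comments"},
        {"rel": "plusoners", "href": "http://example.org/posts/{id}/plusoners"},
    ]
}

post = {"id": "abc123", "content": "Hello, world"}

def resolve(schema, instance, rel):
    """Expand the schema link for `rel`, using the instance as template context."""
    for link in schema["links"]:
        if link["rel"] == rel:
            return re.sub(r"\{(\w+)\}",
                          lambda m: str(instance[m.group(1)]), link["href"])
    raise KeyError(rel)

print(resolve(schema, post, "comments"))  # http://example.org/posts/abc123/comments
```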
+Bob Aman exactly, I didn't want to go into much detail, but saying that an attr value is a URI does not suffice. You have to qualify this link (rel, type, etc.; see my previous comments); that is why the "just put a string here" approach is not enough imho.
The hyperschema is interesting, but that's more an argument /for/ links in JSON (whatever the approach), with which I agree :)
However, I prefer a direct-link approach since it removes the overhead of schema parsing/matching and link generation. This would be a bit as if every hyperlink in HTML were a GET form. But both approaches have pros and cons, and link generation /is/ useful in some cases (e.g. to deal with user input; see forms). I just think that all the schema stuff is overhead; I may be wrong on that point
+Mike Kelly link relations suffice to qualify a full link, but not to allow a user to generate the link from user input (like HTML forms). A form-like object, or a URI template, allows this generation (they don't have to be in a schema, though). Parametrization of the method used and of how to generate the content (w.r.t. a given content-type) can also be useful for a "complete" hypermedia representation (see +Mike Amundsen's hypermedia factors http://amundsen.com/hypermedia/hfactor/ for example)
+Yannick Loiseau if you are talking 'user-input' (as in, human input) then forms are of course useful and you should probably follow the advice of 'just use HTML', since that is what HTML is for. This dialogue should be about machine consumed applications; in that context link relations are a viable, simple, and practical way of indicating to clients what inputs are required for a given link w/ a URI template or what request methods and representations should be delivered to the target resource of a given link.
+Mike Kelly if by "link relation" you also mean URI template, I agree with your point. However, it can be useful to allow the representation to give more information about how to build the query besides the URI (method, content-type and so on). HTML forms are not always applicable, e.g. if you want to build a dedicated client, be it an RIA or a more weighty one (desktop app). For example, we can imagine a revised twitter-like hypermedia-based API (using JSON as serialization), in which the representation of the feed gives you the URI where (and what) to POST to create a new post, the representation of a post tells you where and what to POST a comment, and so on (Atom-like). However, this information can also be obtained using HTTP and a characterization (a container, ...). But that introduces a little more coupling and more (cachable) requests into the loop.
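[Ed.: a sketch of the form-like control Yannick imagines: the feed representation itself tells a client where, with what method, and with what media type to create a new post. The "create-form" shape and URLs are invented for illustration.]

```python
import json

# Hypothetical form-like affordance embedded in a JSON feed representation.
feed = json.loads("""
{
  "title": "Example feed",
  "create-form": {
    "href": "http://example.org/feed/posts",
    "method": "POST",
    "accepts": "application/json"
  }
}
""")

form = feed["create-form"]
# A client would now issue:
#   POST form["href"]  with  Content-Type: form["accepts"]
print(form["method"], form["href"])
```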
+Yannick Loiseau I think if you are walking away from html (to 'RIA' or desktop apps) only to reinvent it again, you might be missing a trick.

wrt building request method/content with in-band form-like control - it seems from previous threads on the topic that the benefits of that approach are still not clear when the consumers of the application are automated (i.e. m2m interactions), and that the associated additional complexity may not be worth it. All of this can be handled by taking it 'out-of-band' and simply documenting the relevant behaviour against a given link relation.
+Mike Kelly html is (at least initially) a document representation format, not a data/GUI format, and the browser platform can't (yet) provide all the integration and features that a desktop app has. But I took an extreme POV on purpose, to illustrate all the use cases/issues/possibilities that hypermedia can offer. I'm not sure this (form-like JSON) would be my choice. A good understanding of the HTTP uniform interface (at the app level) and typed resources (container...) would suffice imho.
+Yannick Loiseau indeed - sounds like we're on the same page there, really. In terms of typing resources - how would you go about this? My approach to this is that resources cannot type themselves, and instead 'typing' is a client-side concern, determined by the context provided by the link relation which led the client to the resource in question. i.e. a client following the link { "order" : { "href": "..." } } should interpret the subsequently fetched resource as an order.

(this is getting very off topic, though, happy to continue discussion but perhaps on another thread?)
+Ade Oshineye "believe that building extremely generic data formats doesn't actually help specific people get specific work done" How can you say that when HTML has been used by so many people to "get specific work done"? What makes HTML + web browser so special that the same cannot be done with a different generic media type and a different user agent?
+Mike Kelly several paths to explore. One is that the semantics of the rel indicate which kind of resource you access, and thus whether it is sound to DELETE/PUT/POST on it. Another is a combination of OPTIONS + Link header; e.g. with a req/rep like the one below, the client "knows" that the resource is a container it can POST to. The type of the content to post is harder to define without a full schema analysis though, but we slide into semantic web territory here :)

OPTIONS /article1/comments HTTP/1.1

HTTP/1.1 200 OK
TCN: list
Alternates: {"comments.json" 1.0 {type application/json}}, {"comments.atom" 1.0 {type application/atom+xml}}
Allow: GET, POST
Link: <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>; rel="http://www.w3.org/2000/01/rdf-schema#Container"
+Mike Kelly ... a link relation alone certainly does not provide enough context in most cases by itself. Take the instance of "self":"http://example.org/foo" ... given this, I can somewhat guess that "self" is likely a link relation and, accordingly, that the given URL points to a representation of this resource, but it gives me no additional context as to what I can do with that URL. I know I could likely safely do a GET on it, and could possibly do an OPTIONS to get more detail, but that's the most I get. There are cases (as is evidenced by all the schema stuff in Google's Discovery API, just for example) where much more information is required than just a simple link relation.
+James Snell in the case of HAL there is no question that self is a link relation, since it has conventions for that exact reason e.g. "_links": { "self": { "href": "http://example.org/foo" } }.

The self relation isn't intended to give an indication of what can be done with a resource. This is the job of whatever relation actually led your client to the resource in question.. the only resources where this is not feasible are entry points.
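[Ed.: the HAL convention Mike describes reserves a "_links" object keyed by relation, which is what makes generic link extraction possible; the document below is a made-up example in that shape.]

```python
import json

# A HAL-style document: links live under the reserved "_links" key, keyed
# by relation, so a generic client never has to guess which strings are URIs.
hal = json.loads("""
{
  "_links": {
    "self": {"href": "http://example.org/orders/523"},
    "next": {"href": "http://example.org/orders/524"}
  },
  "total": 30.0,
  "currency": "USD"
}
""")

def hal_link(doc, rel):
    """Look up the target URI for a given link relation."""
    return doc["_links"][rel]["href"]

print(hal_link(hal, "next"))  # http://example.org/orders/524
```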
+Tim Bray imo, a common interface for expressing links promotes re-usable tooling server/client side. A media type for linking can enforce positive constraints which benefit the web, like guiding publishers to follow the web linking spec, and it can help publishers lift focus away from unnecessary considerations (i.e. how to link to or embed another resource) and increase the focus on the application itself (i.e. the application's link relations and traversals between resources).
+Tim Bray does a DSL (something like selenium) for writing automated clients count?
+Tim Bray ... one abstraction free use case... coming from a real world scenario... I have an Activity Stream with activities from multiple sources, I want to perform a rollup of those so I can make statements such as "Joe, Tim and James all posted comments about the link http://example.org", regardless of how specifically those comments appear in the Activity Stream. Basically, that entails scanning through the individual activities, inspecting the objects in a variety of ways to identify the links, then spinning it around to collect which actors have performed relevant activities with regards to those links. With Activity Streams, we fortunately have a consistent model for links using the "url" field name that makes it easier for us to perform that kind of analysis but the code for doing so ends up being very specific to the Activity Streams model.... which, of course is fine, but then we end up being required to write a completely separate code path for dealing with other potential non-abstract, non-hypothetical scenarios.. such as scanning a service discovery document for resource links (ala Google Discovery Service)... a concrete use case of which is the development of automated API testing infrastructure (I just had a discussion this morning with developers looking to do just that).
+Yannick Loiseau Well, no doubt, schemas are overhead, but if implemented correctly, they're fixed costs instead of variable costs.
Hey +Tim Bray ,
Sure. I'm looking at the long term for the OpenStack APIs. These are arguably more complex than your typical google / twitter / etc. Web APIs, in that there can be several deployments, several implementations, and arbitrary extensions for each deployment and/or implementation.

As discussed on my blog <http://www.mnot.net/blog/2011/10/25/web_api_versioning_smackdown>, it appears that this case would really benefit from a linked, follow-your-nose (as Roy says) style Web API, so that we're not baking URIs into clients.

Part of doing that is coming up with a way to serialise links.

While it's true that we could come up with a "local" convention for linking (and indeed, there is already one, as there is in many other places, including Google's APIs, AIUI), it seems to me that we have many, many people banging away at the same problem, and making some of the same mistakes (as well as some unique ones).

This seems like a great opportunity to get it right by doing it once, well. One of the biggest complaints about RESTful APIs is that the tooling is sucky-to-non-existent, and I'd like to change that with code as well as specs, so that people don't have to depend upon a single implementation.
(joining this late, apologies) - the comparison of json data links to C pointers seems fitting, though I fear they're not enough of a solution. Many are implied already rather than being links (e.g. an ID, which can be used to generate the link) so some context is required. Moreover though, having plain pointers won't work too well unless the problem of reducing request counts can be solved.
Essentially, you don't want a browser JS client to do one request per pointer dereference, it's just too costly (for now). Either it should be made cheaper clientside (which has a limit) or some server protocol is needed for knowing which pointers to dereference within the result?
I don't know if my point was lost or was just glaringly obvious to everyone.... if you want richer data types in JSON, that's not going to be type-safe just by establishing a usage pattern (which I think Mark's proposal is). Without type safety, follow-your-nose APIs are eminently spoofable (by accident or by attack).
Adding first-class user-defined typing to JSON would be a MAJOR change. A usage pattern may be a good first step, though.
+Harald Tveit Alvestrand in that someone can potentially insert links into JSON documents, a la XSS?

I suppose that's true, but frankly if you're letting people write whole objects into your JSON without checking the contents, you've got much bigger problems...
+Mark Nottingham I was thinking of code along the lines of "for each element in this list, if it is a link, download it to my cache and replace the pointer". Then someone puts a link in where a text line was expected, resulting in strangeness.
If I can say "If this was supposed to be a link, let's follow it", I think the number of security bugs due to quick scripts may be slightly smaller.
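[Ed.: Harald's "if this was supposed to be a link, follow it" check might be sketched like this; the link-object shape and the scheme/host allowlist are an illustrative policy, not part of any proposal in the thread.]

```python
from urllib.parse import urlparse

# Illustrative policy: only dereference values the format has *declared*
# to be links, and validate the target before fetching.
ALLOWED_SCHEMES = {"http", "https"}
ALLOWED_HOSTS = {"api.example.org"}

def safe_to_follow(value):
    """True only for declared link objects whose target passes policy checks."""
    if not (isinstance(value, dict) and isinstance(value.get("href"), str)):
        return False  # a bare string in a text field never gets dereferenced
    parts = urlparse(value["href"])
    return parts.scheme in ALLOWED_SCHEMES and parts.hostname in ALLOWED_HOSTS

assert safe_to_follow({"href": "https://api.example.org/items/1"})
assert not safe_to_follow("https://api.example.org/items/1")  # untyped string
assert not safe_to_follow({"href": "javascript:alert(1)"})    # bad scheme
```

The quick script that blindly replaces every URI-looking string with fetched content is exactly the bug class this kind of gate is meant to close.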
Yes, XSS is a classic example of how one can maliciously exploit other people's lack of checking.
+Bob Aman Why fixed? Every time a client wants to "dereference" an object linked via the schema, it has to refer to the schema, extract the template, and build the URI. The first two steps can be one-shot via code generation, but you'll always need to construct your URI; whereas with embedded links, you just follow them.

Depending on the use of the schema / embedded URI template / embedded full URI, I see several costs:
- bandwidth: transferring the schema / embedded link representation
- computing: parsing the schema, "validating" the data, matching attributes against the schema to identify potential links, interpreting the template, building the links
- entry: human understanding, debugging, readability; the need to build a schema parser/validator or template macro expander.

I see two uses of the schema: code generation and dynamic interpretation.
- code generation can remove some bandwidth cost if the code is bundled in the app (e.g. a mobile app), but you lose some evolvability and increase coupling. If it's code on demand (COD), you have to download it, just as you have to download the schema in the case of dynamic interpretation. Caching can help, though. Embedded links don't have to be heavy (with well-defined URIs).
- with code generation, where the schema is "compiled" into a parser/validator/data interpreter, most of the computing cost is paid at compile time (i.e. fixed; was that the one you referred to?).

So if you expect your customers to use your client API code (which you have compiled from the schema for them), that's OK. But you isolate the service, i.e. less interoperability and integration; evolution is harder if not using COD; generic/dynamic clients are harder to implement because the data can't be interpreted without the schema, so they have to include schema parsers/matchers. All this leads to silo applications imho (which can be the aim). IOW, you fall back to ORB/RPC: no directly linked data, a description schema that compiles into stub code... looks like WSDL and the whole WS-* stack to me, which again can be OK for some use cases, while my preference goes to really REST-compliant services.
(This is a real question, I haven't deeply dived into JSON Schema yet)
btw, seems we already had this discussion :) https://plus.google.com/118148240205592032989/posts/Vxu7xgaBnnc
+Patrick Coleman caching already reduces the request count. However, not every related object has to be linked: some can be embedded (or partially embedded with a link to the full representation, like Atom/OPDS e.g.), and some can be anonymous objects (without their own URI); see BNodes in RDF, or composition vs. aggregation in data-structure modeling
(Cross posting this from +James Snell's share where +Mark Nottingham commented that he wanted: " ... just a convention that people can use if they want").

This is a bit like how dates can be (and often are) written down using ISO-8601, or in accordance with the W3C note. And a bit like how very many XML media types use an unwritten convention of <link rel="" href=""> elements, or even the <atom:link rel="" href=""> element.

Neither of these is governed by a media type, but if this (JSON linking) is to succeed, I need a document to reference when I, in my media type documentation, need to say "and links are in the mnot standard format."

There will be different ways of doing things, which will have varying benefits, and a media type or server designer would be well off if a few standard ways of linking emerge.

I doubt, however, that this will make people write media-type-agnostic clients that go looking for links...
All this snarky 'pragmatism' is very trendy, but it doesn't actually make a lot of logical sense. If we're establishing a set of conventions which present a certain type of capability, this is exactly what new media-type identifiers are for. I'm sorry for everyone so fond of the string 'application/json', but that's the way the web works: application/json doesn't do links. If you want links from a JSON-based format, you need another media type e.g. application/hal+json or application/collection+json.
+Mike Kelly Absolutely. I agree that if you serve application/json you're relying on out-of-band knowledge for further processing. Just like an application/xml document with a <link rel="foo" href="bar"/>. It's completely pointless. But in the context of a real media type (e.g. application/vnd.twitter.tweet+{xml,json}, perhaps) some convention is useful. It's also a bit like using the CURIE syntax, or a URI Template. It's a convention which media type designers may use if they so wish.
+Erik Mogensen you're assuming that it's necessary to mint a media type for a given application. It's not. HTML is a good example of that in practice and demonstrates the macro and micro benefits of a generic hypertext interface on the web.
+Mike Kelly... The introduction of a new capability by extending a base format does not necessitate the minting of a new media type... Case in point: XML and XML Namespaces. It's only when that format and extension are put to a specific use (e.g. Atom) that a new media type is required. What is being discussed here is not a new type of document but a convention for how to use an existing document type. It's analogous to the use of microdata within HTML... It's a new capability that goes beyond what HTML natively provides, but it doesn't require a new HTML media type.
+James Snell the analogy isn't entirely correct; microdata and RDFa use new attributes, thus an extension point designed for this use case. There is no such extension point in JSON. A better analogy might be Microformats, which re-use the @class attribute.
In either case, though, +Julian Reschke, a new media type wasn't required... Developers just need to be aware of which conventions/extensions are in use.
+James the big difference is that in one case you extend the format using a legitimate extension point, while in the other case you have to hope that your extension never ever conflicts with somebody else's use case.
+James Snell "It's only when that format and extension are put to a specific use" < 1. Isn't linking to and embedding resources a specific use, relative to plain JSON? 2. What's the downside of minting a new media type?
+Mike Kelly - the downside is that embedding/identifying linking information should be orthogonal to the use of JSON; don't expect that all future uses of JSON will use application/json as the media type. (why? because that's not restful)
+Julian Reschke why/how can that ever be completely orthogonal? any representation has to have a base data format, right? I may have missed your point, sorry.
+Mike Kelly adding link information (and processing it with generic libraries) should be independent of the payload. If I have a use case for a JSON-based format with its own media type (application/vnd.julian.foobar+json), I still want to be able to use the link extensions.
isn't that just extension? e.g. vnd.julian.foobar+json < hal+json < json
+Julian Reschke also, when I write a web app I don't need to extend HTML.. Do you not consider that an important property of HTML?
+Mike Kelly this works only until you need multiple inheritance, such as Mark's link extensions + somebody else extensions for a different purpose. And re your webapp comparison: HTML comes with markup for links on board, and also has tons of extension points (@rel, @class, @data in HTML5, ns-based extensibility in XHTML); JSON doesn't have anything like that because it's all syntax.
+Julian Reschke could you not spec vnd.julian.foobar to inherit from multiple different sources?

sort of the point of hal is to provide (and encourage the use of) an extension point with an equivalent of @rel
+Mike Kelly media types do not really have inheritance, except for some things you can deduce from the top-level type, and some things you may deduce from the "+json" suffix in the future. As such, I don't think that assigning media types to JSON conventions will be helpful. Even if the link processor knows how to deal with "linkextensions+json" it will still be clueless about "vnd.julian.foobar+json".
+Julian Reschke huh? don't media types have whatever type of inheritance you can describe with a specification? media type identifiers are just reserved tokens that correlate to some agreed specification which tells everyone how to interpret the content body, right?
+Mike Kelly so do you expect software that understands the link extensions we discuss to have a hard-wired list of all media types that use this type of extension?
+Julian Reschke No, I expect software that understands hal+json to be able to interface with the parts of vnd.julian.foobar+json that are inherited from hal+json.
+Mike Kelly How is the software supposed to know that one format extends the other? The only syntactic conventions supported by media types are top-level type and format suffixes (right now, only "+xml").
+Julian Reschke it doesn't know that, developers know that and they can reuse and deploy it as appropriate.
+Mike Kelly my understanding was that the whole point of the proposal was that link-processing software can discover the links without needing to be aware of the semantics of the remainder of the document.
right, which is why extending the media type is not a very good idea when it is not necessary. Sorry to keep referring to it but.. HAL's intended use is that applications don't extend it at all and instead focus on establishing link relations that can direct clients through their application, and establish meaning of various resources.
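A minimal sketch of what that intended use looks like in practice, assuming HAL's draft convention of a reserved "_links" object keyed by link relation (the rels, URLs, and field values below are invented):

```python
# Hypothetical HAL-style representation: a reserved "_links" object
# keyed by link relation, each value holding at least an "href".
# Clients navigate by rel, not by knowing the format's other fields.
order = {
    "_links": {
        "self":     {"href": "/orders/523"},
        "invoice":  {"href": "/orders/523/invoice"},
        "customer": {"href": "/customers/42"}
    },
    "total": 30.00,
    "currency": "USD"
}

def follow(doc, rel):
    """Return the href for a given link relation, or None if absent."""
    link = doc.get("_links", {}).get(rel)
    return link["href"] if link else None

invoice_url = follow(order, "invoice")
```

The application-specific part is then just the set of link relations ("invoice", "customer") and what they mean, not the envelope itself.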
+Julian Reschke My thinking was that the _meta convention would be flagged by a new media type, or by a parameter on application/json; this is why it's necessary.

(boy, I'm finding discussion on g+ painful...)
+Julian Reschke ... agreed, a new media type is not going to achieve the desired result on this... there does need to be a way of signaling which set of conventions are being used within a particular document, but minting a new media type every time we come up with some new set of conventions is pointless. Optional parameters on the media type (like the type parameter that RFC5023 added to the Atom media type defined by RFC4287) would be a viable option, as would the link header notion I mentioned previously, as would some bit of metadata properties at the top of the JSON document.
+James Snell Referring to linking as just 'some new set of conventions' in the context of the web is very disingenuous.
+James Snell +Julian Reschke when i want to apply semantics (convention, domain-specific details, etc.) to an existing data format (XML, JSON, HTML, etc.), i use a profile. i document the semantic details in the profile (as deeply or vaguely as needed), and add a link to response representations (body, header, or both). client devs now must deal w/ two issues, of course (processing the media type and processing the profile), but it works well, is consistent, and doesn't require minting/registering new media types.
+James Snell when you say "a new media type is not going to achieve the desired result" - what exactly is the 'desired result' from your pov?
+Mike Kelly. In this case the desired result is simplicity and usefulness. Minting a new media type for every new set of conventions is pointless. If you're creating a new data model like Activity Streams, or even Google's Discovery API or OpenSocial embedded experiences... then yes, a new media type is useful and desirable. HAL may very well fit into that category. But if we're only talking about naming conventions or conventions for certain types of values (links, dates, ids, etc.) that can be used in any application of JSON, then creating a new media type isn't going to be useful in any way. Again, look at XML Namespaces as an example of a significant new capability added to a base data format that didn't require the minting of a new media type.
+Mark Nottingham... The why is simple enough... Suppose that Activity Streams, for example, gets its own media type... Say, application/activity-json, and suppose I choose to use the _meta convention inside an Activity Stream, say, perhaps in a custom objectType... What would the media type be? Parameter on the media type works fine. New media type does not.
To be clear, the proposal is that people who wish to use conventions can either:
a) use a single new media type (or parameter on application/json) that flags the use of _meta in their format, or
b) define their own new media type that specifies what conventions are in use (including the use of _meta to determine this).

Their choice. It would be NICE if you could unambiguously tell what conventions are in use just by inspecting the media type / parameter, but that's not a hard requirement to me; people are going to want to define domain-specific media types anyway.

To me, flagging things like +xml and +json in media types is a nice-to-have, but I've never really seen the overriding value in doing so. The important thing here is the unambiguous use of conventions without risk of collision.
+Mark Nottingham "The important thing here is the unambiguous use of conventions without risk of collision." seems to me that the most effective way to achieve (and ensure) your stated goal is to define a new media type, no?

" It would be NICE if you could unambiguously tell what conventions are in use just by inspecting the media type / parameter..." this sounds like the classic problem of the MIME spec as applied to HTTP - the inability to easily advertise/negotiate/determine any conventions, sub-types, etc. in a response. IMO, parameters or (as i stated above) a profile convention (which could be baked into the media type def.) seem the best ways to handle this, no?
+Mark Nottingham ... ok, now we're getting somewhere... of course, anyone who creates a new type of document based on JSON (or any basic data format) -- like we did with Activity Streams, like Google has done with the Discovery service, etc -- can create a new media type identifier for that media type. That document type can obviously use whatever conventions make sense. That's not really the issue here. The issue is when those same conventions are used within existing document types (e.g. an "ordinary" JSON document, or an Activity Stream, etc.). In such cases, we really do need another signaling mechanism... either an optional profile parameter on the media type or the content-profile web link like I suggested before. It definitely appears that we agree on those points. I'm perfectly fine with using the optional media type parameter so long as it remains possible to identify multiple profiles in use within a document (can be as simple as a quoted string containing a comma-separated list of profile URIs... which likely more than 90% of the time would contain only a single URI).

Am I correct in assuming that we are in agreement on that particular point?
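A sketch of how a client might read such a parameter, assuming the syntax suggested above (a "profile" parameter carrying a quoted, comma-separated list of URIs; both the parameter name and the syntax are this thread's proposal, not a registered standard):

```python
def parse_profiles(content_type):
    """Extract profile URIs from a media type's parameters, if any.

    Assumes the hypothetical convention discussed above:
        application/json; profile="uri1, uri2"
    """
    parts = [p.strip() for p in content_type.split(";")]
    for param in parts[1:]:
        name, _, value = param.partition("=")
        if name.strip().lower() == "profile":
            value = value.strip().strip('"')
            return [uri for uri in (u.strip() for u in value.split(",")) if uri]
    return []

profiles = parse_profiles(
    'application/json; profile="http://example.org/meta, http://example.org/links"'
)
```

A generic client could then dispatch on the profile URIs it recognises while ignoring the ones it doesn't.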
Human beings will invariably read the JSON your resources provide in order to construct useful clients. To me this consistent linking format is as much a matter of usability as anything else.

Links belong under "_links" the same way the primary navigation of a website belongs across the top.
Using rels for the keys of the "_links" object is common sense and totally correct.

Placing links consistently in JSON representations makes the world a better place, even if only aesthetically.
That's good design.
+James Snell "But if we're only talking about naming conventions or conventions for certain types of values (links, dates, ids, etc) that can be used in any application of JSON" < well, we weren't until (for whatever reason) this discussion veered off from being about how to do linking with JSON into how to handle metadata in a generalised and composable way that needs to be arbitrarily mixed into other media types.
This all-things-to-all-people, composable, meta-metadata fixation is precisely what is responsible for the character of XML that many people try to avoid when they opt for JSON.

JSON is 'simple' because of what you can't do with it, not what you can.

The issue of representing resources (and their links) on the web is not 'any old problem' - it's a general, fundamental problem for everyone on the web exposing their API with JSON (read: many people), and it can be solved in a general and simple way, with a new media type that introduces some additional constraint on top of JSON.

I am going to continue working on application/hal+json, bringing it into line with RFC5988 and possibly introducing Mark/Tim's _meta idea. Perhaps you could generalise the bits you want from that into a separate profile'able spec - which would give us the best of both worlds?
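One way to read "bringing it into line with RFC 5988" is to reuse that spec's target attributes as properties on the link object, with the rel staying in the key position. A purely hypothetical sketch, not a settled syntax:

```python
# Hypothetical: a HAL-style link object carrying RFC 5988 target
# attributes (type, title, hreflang) alongside href. Illustrative only.
doc = {
    "_links": {
        "alternate": {
            "href": "/orders/523.xml",
            "type": "application/xml",
            "title": "XML version of this order",
            "hreflang": "en"
        }
    }
}

alternate = doc["_links"]["alternate"]
```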
+Mike Kelly ... I'm honestly seriously at a loss about what you're arguing against here because there seems to be a mismatch between what I've been suggesting and what you think I'm suggesting. The original discussion was about coming up with a convention for walking a document to harvest links, and about what new conventions needed to be put in place to do precisely that. Part of that original conversation -- in Mark's original post -- touched on the area of generalized metadata (the _meta block, the json-ld @context, the question over a profile param in the media type, etc). What I have suggested is nothing more than a consistent naming convention for this stuff, and I've discussed how that naming convention can help address the problem in a way that is easily extended out to addressing other similar types of issues -- that is, we can address multiple similar issues with a single consistent approach that fits well within the parameters of the existing JSON model without requiring the invention of new media types, data models, extensions, or whatnot.

I honestly don't see how making statements like "JSON is simple because of what you can't do with it..." adds anything to the overall discussion. No one, as far as I can tell, has suggested anything that would even remotely alter the way JSON is used, or that would add even the slightest bit of additional complexity into the typical JSON processing model. People are already embedding links and metadata into JSON documents... why is it a problem to attempt to get people to do so in a consistent way?

On a technical level, what specifically about my proposal is throwing you off here? How does the naming convention fail to uphold the Spirit of JSON?
+James Snell it's a question of practicality.

I'm proposing a standard media type that establishes a set of linking conventions for 'follow your nose' APIs. Whereas you seem (afaict) to be proposing a profile specification that establishes a set of conventions.. for establishing sets of conventions.

Do we really need coordination at that level, given that media type (or profile) specifications are actually consumed by people? What's the value proposition?
I'm kinda with +Mike Kelly here -- let's not make this more complex than it needs to be. Having a convention for links at all is mostly for social benefit (e.g., showing people how to do it well, leveraging implementation, ease of use, less mental footprint) rather than technical.

The _meta thing is really just to help avoid collisions, and give tooling something to grab onto.

E.g., if _meta catches on, I can look for it with REDbot and start doing interesting things with JSON, without having to know the details of the format.
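The kind of format-agnostic tooling described here could be as simple as a recursive walk that collects anything link-shaped under a reserved key, without understanding the rest of the document. A sketch, assuming the "_links"-keyed-by-rel convention (the key name and shape are exactly the conventions still under discussion):

```python
def harvest_links(node, found=None):
    """Recursively collect (rel, href) pairs from any "_links" objects,
    knowing nothing else about the document's format."""
    if found is None:
        found = []
    if isinstance(node, dict):
        links = node.get("_links", {})
        if isinstance(links, dict):
            for rel, link in links.items():
                if isinstance(link, dict) and "href" in link:
                    found.append((rel, link["href"]))
        for key, value in node.items():
            if key != "_links":
                harvest_links(value, found)
    elif isinstance(node, list):
        for item in node:
            harvest_links(item, found)
    return found

# Made-up document: a feed with one item, each carrying its own links.
doc = {
    "_links": {"self": {"href": "/feed"}},
    "items": [
        {"_links": {"self": {"href": "/feed/1"}}, "body": "hello"}
    ]
}
links = harvest_links(doc)
```

A tool like the one described above could run exactly this kind of walk over any JSON response and surface the links it finds.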