Reposted in such a way that it can be shared, because G+ hates me.

Standing on the shoulders of giants

We've recently gone public (yay!) with the Mir project that we've been working on for some months now.

It's been a bit rockier than I'd hoped (boo!). Particularly, we offended people with incorrect information on the wiki page we wanted to direct the inevitable questions to -

I had proof-read this, and didn't notice it - I'm familiar with Wayland, so even with “X's input has poor security” and “Wayland's input protocol may duplicate some of the problems of X” juxtaposed I didn't make the connection. After all, one of the nice things about Wayland is that it solves the X security problems! It was totally reasonable to read what was written as “Wayland's input protocol will be insecure, like X's”, which is totally wrong; sorry to all concerned for not picking that up, most especially to +Kristian Høgsberg and +Daniel Stone.

Now that the mea-culpa's out of the way…

Although we've got a section on the wiki page “why not Wayland/Weston”, there's a bunch of speculation around about why we really created Mir, ranging from the sensible (we want to write our own display server so that we can control it) - to the not-so-sensible (we're actually a front company of Microsoft to infiltrate and destroy Linux). I don't think the rationale on the page is inaccurate, but perhaps it's not clear.

Note: I was not involved in the original decision to create Mir rather than bend Wayland to our will. While I've had discussions with those who were, this is filtered through my own understanding, so treat this as my interpretation of the thought-processes involved. Opinions expressed do not necessarily reflect the opinions of my employer, etc.

1) We wanted to integrate the shell with a display server - there are all sorts of frustrations involved in writing a desktop shell in X. See any number of Wayland videos for details :). We therefore want Wayland, or something like it.

2) We didn't want to use Weston. Weston, the reference Wayland compositor, is a test-bed. It's for the development of the Wayland protocol, not for being an actual desktop shell. We could have forked Weston and bent it to our will, but we're on a bit of an automated-testing run at the moment, and it's generally hard to retro-fit tests onto an existing codebase. Weston has some tests, but we want super-awesome-tested code. We don't want Weston, but maybe we want Wayland?

3) At the time Mir was started, Wayland's input handling was basically non-existent. +Daniel Stone's done a lot of work on it since then, but at the time it would have looked like we needed to write an input stack. Maybe we want Wayland, but we'll need to write the input stack.

4) We need server-side buffer allocation for ARM hardware; for various reasons we want server-side buffer allocation everywhere. Weston uses client-side allocation, and the Wayland EGL platform in Mesa does likewise. Although it's possible to do server-side allocation in a Wayland protocol, it's swimming against the tide. Maybe we want Wayland, but we'll need to write an input stack and patch the Mesa EGL platform.

5) We want the minimum possible complexity; we ideally want something tailored exactly to our requirements, with no surplus code. We want different WM semantics to the existing wl_shell and wl_shell_surface, so we ideally want to throw them away and replace them with something new. Maybe we want Wayland, but we'll need to write an input stack, patch the Mesa EGL platform, and redo the WM handling in all the toolkits.

At this point, it looks like we want something like Wayland, but different in almost all the details. It's not clear that starting with Wayland will save us all that much effort, so the upsides of doing our own thing - we can do exactly and only what we want, we can build an easily-testable codebase, we can use our own infrastructure, we don't have an additional layer of upstream review - look like they'll outweigh the costs of having to duplicate effort. Therefore, Mir.
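Point 4 is perhaps the easiest to miss, so here's a toy sketch of the distinction (this is not real Mir or Wayland code; every name in it is invented for illustration): in the client-side model the application owns the buffer and hands the server a reference, while in the server-side model the server owns the allocator and the client only ever holds a handle.

```python
# Toy sketch of the two buffer-allocation models discussed in point 4.
# This is NOT real Mir or Wayland code; all class and method names here
# are invented purely for illustration.

class ClientAllocated:
    """Wayland/Mesa-EGL style: the client allocates its own buffer and
    merely tells the server about it (e.g. by passing a handle)."""

    def create_surface(self, width, height):
        # The client owns the memory outright (4 bytes per RGBA pixel).
        return bytearray(width * height * 4)


class ServerAllocated:
    """Mir style: the client asks the server for a buffer; the server
    allocates it (useful when only a privileged process may talk to the
    hardware allocator, as on some ARM platforms) and returns a handle."""

    def __init__(self):
        self._buffers = {}
        self._next_id = 0

    def request_buffer(self, width, height):
        self._next_id += 1
        self._buffers[self._next_id] = bytearray(width * height * 4)
        return self._next_id  # the client only ever sees an opaque handle

    def lookup(self, handle):
        return self._buffers[handle]


server = ServerAllocated()
handle = server.request_buffer(64, 64)
print(handle, len(server.lookup(handle)))  # → 1 16384
```

The interesting consequence of the second model is that the privileged process controls all graphics memory, which is why it suits platforms where the buffer allocator isn't safely available to unprivileged clients.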

This is only possible because of all the ancillary work done by Wayland developers, particularly Kristian. Mir is a Wayland-alike; we're piggybacking on a lot of good work done for Wayland. Hopefully we'll contribute back not just an awesome display server in the form of Mir and an awesome desktop environment in the form of Unity, but also low-level improvements that can be used by Wayland compositors. I'm particularly excited about our engagements with NVIDIA and AMD; although it's early days, I'm hopeful we can get a solution for “but what about proprietary drivers?” not just for Mir, but for everyone.
I'm happy to read that. As an Ubuntu evangelist, I'll now have some material to bring to conversations about Canonical's supposedly evil choice to create another display server :)
So let me get this straight. When you decided on Mir, there was no input stack in Wayland. That implies that you then looked at the input stack in Wayland when writing the MirSpec page on the wiki. Which means you got it so utterly wrong that you either didn't actually look but simply wrote what you figured would be convenient, or you didn't understand the code. I'm not sure which possibility is worse.

Looking at a random input-related commit in Wayland, work was ongoing in early 2012. The first commit to Mir according to the bzr logs was in late June.

The issues with buffer handling, shell specifics, etc .. have all already been noted by Wayland developers as things that can be handled.

Let me make a suggestion:

Stop talking about Wayland because every time you do you dig your credibility hole a bit deeper. Canonical screwed up on this one, fair enough. Now you have Mir, and for whatever reasons you're committed to it.

Instead of going on about Wayland and causing yet more misinformation to float about the internets (I'm watching it echo around and engaging with people who are being actively misled by postings like this one) just get on with making Mir awesome. That is the only result anyone cares about.
+Aaron Seigo No doubt, Mir will be amazing. But please stop inferring and announcing malice, incompetence and manipulation where you only have a different conclusion reached by equally smart people.

We have on many occasions found both the means to collaborate with you, and the motivation to do so. Your recent posturing and vitriol towards Ubuntu seem determined to undermine the latter much more than Mir would create barriers to the former.
Aaron Seigo
+Mark Shuttleworth Had Canonical not spread misinformation about other projects, had Canonical not driven an unnecessary (technology-wise) wedge in the Free software graphics stack, there would be no issue.

You failed on both accounts.

What you call posturing, I call accurately recounting the facts. What you call vitriol, I call accountability.

Do you believe in accountability?

Can we address the facts on the ground, or are we set to discuss the meta-concepts of who is being how nice to whom and who collaborated with whom when? .. all, I might add, while apparently ignoring what matters? As seems to be the usual case, you did not address any of the issues I raised.

As a different-but-related topic, if you feel I've been ungracious or similarly offended your sensibilities, I'm happy to be pointed in the direction of what I did so I may address it. Vague implications get us nowhere, though, I hope you'll agree.
So wait a minute: we have to do all these things we have to do anyway, and fragment toolkits, and then maintain our own toolkit fragmentation patches because upstream will rightly tell us to get fucked? Because we want to avoid writing some tests for Wayland? I'm with Aaron: stop digging. At this point Wayland is ahead of you and you could jump now without losing face; it's going to be so much messier when you guys burn out and the poor guy left holding the can is trying to please Mark. Just use Wayland and join the community.
+Dave Airlie, am struggling to keep up. Yesterday it was 'they could have landed any changes they needed in Wayland' and today it's 'Wayland would rightly have told them to get fucked'.

And there lies the problem. The very people protesting their super-collaborative credentials have a long history of being super-antagonistic to Ubuntu in practice.

So, thanks for the true colours. I think we'll focus on making something amazing, and leave the bitching to you.

+Mark Shuttleworth You wrote:

"today it's 'Wayland would rightly have told them to get fucked'"

Can you point me (and the rest who are reading) to where that might be? We'd appreciate reading source material rather than hearsay. :)
I just hope that the work that Canonical is undertaking with proprietary graphics vendors will result in drivers that will work with Wayland as well. I understand that to be the "goal", but I hope that they also are released "early and often" for the community to test and give feedback for and especially so that the Wayland devs may have a chance to test, work with, etc. Not pointing fingers about back-room development, but it would be a shame to see the Mir drivers hidden away until Canonical is ready to make a big splash and then have the Wayland devs have to scramble to interoperate with them.
+Aaron Seigo I don't think there's anything in this post which misrepresents Wayland or Weston; if there is, could you please be specific so I can amend it?

I specifically addressed the “we could have handled the buffer management, window management etc” bit in the post. Yes, we could have. But the more things we need to handle in Wayland, the less useful it is to base what we do on Wayland. At some point it's just less effort for all concerned for us to do our own thing.
I mean, in this case, X.Org, GTK, and Qt get Mir-specific patches; not if you joined Wayland. I must have missed where you approached Wayland.
+Ioannis Vranos and as a result of these actions Ubuntu has made more progress in the market by far than any other Linux desktop solution. To get to that point, you do have to develop some of your own software, not just wait for the right solution to appear out of thin air. And of course, if you look at the whole of Ubuntu there is an enormous amount of software "not invented here".
+Christopher Halse Rogers Thanks for the attempted clarification. Just go out and make the best display server possible. In the FOSS tradition, it will compete with X.Org and Weston, just as X.Org competed with XFree86 before it. I trust that the best project will be adopted by the majority.
+Cole Mickens A challenge I see is that people say things like that, but then when it comes to pass .. do we do something about it?

(That said .. there won't be "Mir drivers" per se .. there will be drivers that provide OpenGL(ES) calls that are utilized by Mir, Wayland, etc. just as they do now for e.g. SurfaceFlinger ... it's a minor difference, but significant, as it reveals that this is not about getting GPU vendors to write whole new stacks, but simply about exposing functionality in an API that can be used .. and that's a rather simpler matter of coordination between hardware and software vendors. Simpler than adopting whole new driver stacks .. but still significant enough that we ought to be asking for one set of interfaces, not N different ones because we have N different driver stacks.)
+Aaron Seigo ‘…and that's a rather simpler matter of coordination between hardware and software vendors.’ Certainly simpler than getting vendors to write whole new stacks; not actually simple, or it'd already have been done!
John Weir
As one of the bubbling masses waiting for open source to not mean spending the rest of my life learning how to write code, I applaud everyone who has worked so hard to give me hope and choices.

It has been really saddening lately to watch this tirade of self-righteousness and finger pointing. I tried Red Hat in the nineties and gave up after a couple of miserable attempts. That I now have so many choices that, with my limited abilities, I can install and get working is amazing.
If you don't like what someone is doing take your bucket and shovel and find another sandbox.

Using hostility will only result in hostility. No matter the strength of your argument or the truth in your details, you have already lost the argument.
+Ioannis Vranos nonsense. Unity existed before Gnome Shell. And the design of Unity was clear up front, it's Red Hat's team that wandered all over the place before shifting to a design that bears a startling resemblance to Unity.

And yes, we've supported multiple implementations in different toolkits, thereby working on a broader range of hardware.
I really don't understand the Wayland issue. I was very excited back in 2010 when Mark announced that Ubuntu would use Wayland. Since then, Wayland has only released 1.0, at the end of 2012, and announced they will not support Android devices.

What I understand is that when Canonical began to think about how they could integrate Wayland (I think just before or after 12.04 LTS), the brainstorming showed it would be easier to create code they could control within their own quality process, with a server-side vision.

Since it seems that Android is no longer supported by Wayland, that's a big problem if you want Ubuntu Core to be ported to TVs and Android devices.

So for me the creation of Mir makes sense. And it is free software too, so where is the problem? We already have different package systems and different desktop environments. Now we will have different display servers, with different ways of thinking about how a display server has to work...

+Dave Airlie I can't be with you when you say that we MUST use Wayland. Where is the freedom here?

I really think that Canonical can use their freedom to create another display server if they want. Maybe the MirSpecs wiki page hurt you in the way it was written, but that's not a reason to take away Canonical's freedom.
+Robert Ancell oh sorry I think we all missed the bit where you try to collaborate. so how would you know if it was simple?
+Dave Airlie sure. I turned up to the community yesterday. I've never collaborated on any project in the 12 odd years I've been here. </sarcasm>
+Mark Shuttleworth okay I wrote my first comment on a bus, and maybe the logic wasn't clear enough.

The way RAOF's post reads to me is: we realised we had a shitload of work to do to make Wayland the way we wanted it, so instead of doing that work we added a shitload more work to the list and did Mir.

The hidden benefits of using Wayland, like getting upstream toolkit support, maybe don't matter now, but they will eventually. I know you guys will post upstream patches to GTK, Qt, and Mesa; the question is why those communities should accept patches that probably won't have much ongoing maintenance. Like, we worked with Chase on the multi-touch stuff, but now all we see is that nobody in Canonical knows how this stuff works, and (Red Hat) developers are left to fix the problems. Ongoing community maintenance of code is important, and just throwing patches over the wall to toolkits etc isn't considered best practice for sustainable development.
+Robert Ancell okay so you don't consider open source development practical then because collaborating is too hard? I'm just trying to get why you think the fact that collaboration is difficult means you shouldn't bother engaging in it.
+Vincent JOBARD  going after Android devices may not be a good idea. Ubuntu Touch is using a lot of Android code already.

If they decide to build Mir to run on Android devices, they may end up like Aliyun OS.
+Dave Airlie no, the point I was making is you can't just wave your hands and say "collaboration is the easiest path". It's only one factor to weigh into choosing how to develop software.
I agree. If Wayland won't support Android, and Qt-based Ubuntu Touch needs Android, then Mir is the way forward.
+Dave Airlie as for the costs of collaboration, how's flink_to and a console revoke ioctl going? ☺

Those are things which (in the bizarre world where Xorg controlled the kernel) would have made X better, but which were blocked due to the need for collaboration.

As for the hidden benefits - toolkit support, etc - what we wanted already invalidated the existing Wayland toolkit support. We wouldn't have got that either way.
+Robert Ancell okay so I thought Canonical was into being a committed open-source company, so collaboration should be a larger factor than if you were say Google or Microsoft. Like I can understand factors, but they should have weightings applied.
+Sean Fell <citation needed> Wayland has had proof-of-concept demos on Android; there just wasn't the community interest in finishing them off at the time.
Android devices are everywhere, on TVs and on phones. Manufacturers want guarantees that they can sell their products.

No Wayland support for Android devices is definitely a reason to use Mir, for total convergence.
I'm not sure where this “Wayland doesn't support Android” meme has come from; Wayland's been running on Android since (from memory) late last year.

Maybe us saying “we want to support running on Android devices and we're not using Wayland” has morphed into “we didn't use Wayland because it can't support running on Android devices”?
Canonical is dedicated to open source; it's the first company to have an OS that spans phones, tablets, TVs and PCs with open source software.
+Christopher Halse Rogers flink_to was written by krh, and he stopped pushing it as a solution; he never even wrote X support for it.
Now flink_to became dma-buf fd passing, so really the crappy solution was never right and we have a proper solution now, so in that case collaboration proved the better answer.

Console revoke ioctl, did you guys actually write that? because I never saw the upstream discussion on what was required. We know revoke() is hard, not collaborating on it isn't going to make it easier. The thing is doing things right is harder, who knew? The kernel has proved this for years, that we get better because we do things right, not just adopt the first solution someone throws over a wall.
Android support removed from Wayland? Are they in the dark ages?
+Christopher Halse Rogers I don't think Google will like the idea of you running Mir on Android devices.

Ubuntu Touch will be described as a "non-compatible" build of Android if it uses Mir. That's if it's not already. Ubuntu Touch is using a lot of Android code.
+Dave Airlie I was basing the revoke ioctl example on my memory of XDC in Toulouse where it was discussed.

Getting things right is great, but getting things right after a couple of years of not doing them at all has the obvious cost of not being able to do them for a couple of years. Getting something working now doesn't have to prevent you from cleaning it up later, either.
+Antoni Norman except we're not running Mir on Android? We're doing something totally different to the Aliyun example. There's (virtually) no Android userspace on Ubuntu Touch - we're using nothing but the hardware support.

Ubuntu Touch is a ‘non-compatible’ build of Android in roughly the same way that Android is a ‘non-compatible’ build of Ubuntu.
+Sean Fell I don't think it is overly relevant to this discussion, but others are already there. We can point to the obvious case of Android, or the perhaps less obvious case of Plasma.

In general, it'd be really great if we could keep to the topics at hand rather than inject new wild distractions, creating ever growing dust bunnies of distraction (Aliyun, Wayland on Android, who has collaborated with whom in the past on other projects, etc.) This thread started with a fairly clear point, and when conflicting viewpoints have been presented the response has seemingly been to jump to other topics. That is not how useful discourse is held.

If you agree that the goal should be to come to a clear understanding of the issues (which does not imply agreement, btw :) so that we can all move forward with a clear understanding of the landscape, then let's do that.

Cheers ..
+Vincent JOBARD I'm sad that Mir exists; I don't say Canonical can't do that, or that they MUST use Wayland. So maybe I missed your point.

The thing is, the costs of doing Mir are going to be an increase in the maintenance burden of the Linux desktop stack, from graphics drivers to toolkit maintainers. I worry about that because in the past that cost has fallen elsewhere when Canonical have decided to chase the new shiny object over there.

So, since I'm one of the few people left writing code for the X server in all of this, what is the difference between merging Xwayland and Xmir? It pretty much comes down to the benefits of the code, considering we already have Xquartz and Xwin backends for Mac OS X and Windows which are very well maintained. The thing is, do Canonical or the individual contributors of this code look like they are going to maintain it for as long as it's in the X tree, or will it just be developed internally and thrown over the wall? We spent years getting Apple into the right place, and I think we learned a lot; now I think before accepting Xmir into the codebase we'd want to see a demonstrated commitment to maintain the code, something that has been lacking in previous contributions.
Android support for Wayland was developed by Pekka Paalanen from Collabora.  His approach at the time was to use the Android user space (broken libc and all), which made for a very painful port, but Pekka did a very good job.  Collabora abandoned the effort and we removed the backend.  The approach going forward is to use libhybris just like Mir is planning.
+Joe Mitchell thanks, your comment was both insightful and worthy of a place on the best-comments-ever list. Thanks; nobody ever considered the contribution such a comment could make to our lives.
+Dave Airlie so, when Canonical contributes it's throwing code over a wall, and somehow inferior to, say, Red Hat's contributions?

Do you ever wonder if the attitude portrayed here might be toxic to people's reasonable expectations of collaboration?

Have you ever seen Canonical folks calling Red Hatters idiots, tragi-comic, crap or other names? Have you ever seen a Canonical upstream think it's OK to tell a competitor to get fucked with their patches? I doubt it. But that's normal behaviour towards Canonical by some of your more vocal colleagues, now talking loudly about Mir.

Methinks she doth protest too much.
+Joe Mitchell How wildly unproductive; thank you.

+Aaron Seigo From what it seems to me, and this is a relative outsider's perspective, any slight against Wayland/Weston was an accidental oversight; there was no truly malicious intent behind it. It might have been a massive oversight, and a somewhat insulting one, but at this point the effort on Canonical's behalf to clarify their position and apologize for the mistake should at least be acknowledged.

Could they have theoretically done everything they wanted to do in Wayland/Weston? Yes, but they decided it would take less work to make their own. With that out of the way, let's leave it to the respective camps to make their code as good as possible and keep the lines of communication open for sharing ideas. If no one ever decided to start from scratch instead of contributing to an existing project, the Linux kernel wouldn't exist, and there would only be one distro to choose from.
+Dave Airlie ah. So, before accepting Canonical's patches, you'd like to see a higher threshold of commitment than other patches you've already accepted.

Doesn't sound like a level playing field to me.
+Christopher Halse Rogers Thanks for engaging the community. As a happy end user, I really want Ubuntu to continue to succeed.

I have concerns about Mir: partly just a fear of the unknown, partly (me being paranoid) worry about compatibility of tools I have come to rely on, and partly about what Mir could mean for the maintenance cost of various projects in the Linux universe.

But, I should keep an open mind, and I wish you and team the best, and I hope that I'll continue to be a happy Ubuntu user for a long time to come.
The decision has been made and there's nothing that's going to change. This has been done before. This discussion seems largely due to fear of what Canonical is turning into and I think it's unwarranted.
+Mark Shuttleworth I've seen Red Hatters call Red Hatters all those things in public, in private, and all over the Internet. I'm not saying you should read fedora-devel, but there is plenty of evidence that Red Hat folks rarely feel constrained in their name-calling of anyone, especially their colleagues.

Also you don't need to say those things, you just post some "facts" then a bunch of random people spread them, then you take down the "facts" and the bunch of random people keep spreading them as some sort of gospel.

But yes, I do call "throwing code over a wall" inferior. Taking the fastest-moving software project ever, the Linux kernel, as an example: it would not have been what it is today if it accepted contributions in the over-the-wall way, and I think all projects and companies could learn from that. Can you point me at any non-Canonical-controlled (CLA) project that has ongoing maintenance and development from Canonical in a non-over-the-wall manner? Because I can point you at a complete distribution full of them from Red Hat.
+Mark Shuttleworth Because we've learned that contributions of that type in the past were not good for the ongoing maintenance of the project. As I said, we worked for a long time with Apple to make Xquartz not be insanely developed internally and out of tree. So it's not a Canonical thing; it's for everyone. I personally am not the main maintainer of any part of the userspace graphics stack; I'm just a contributor, and my contributions go through the same process. I can't see Digia being too excited about maintaining more Qt backends, because it makes them less agile when it comes to making changes to Qt, and the same goes for lots of other projects. You've made the choice to run a different path, and it has long-term consequences for other projects that will get dragged into the sphere of influence; they need to decide whether they think it's a good use of their time taking Mir-specific code on a one-off basis, or whether you guys are going to do ongoing maintenance of the codebases in the upstream projects.
Thanks, Michael Kwong, you expressed my sentiments quite well. I may not have a good overview of the events, but I have come to thoroughly enjoy Ubuntu - the software and the community - and to rely on it.
+Christopher Halse Rogers Thank you for this post. A few questions:

Did you evaluate the compositors in QtWayland? Digia ran nice demos during the Qt Developer Days: QtDD12 - Creating Window Compositors with the QtWayland module - Andy Nichols.

Also, what do you think of the input method improvements Openismus brought to Wayland starting in June 2012? Are those among the things your team had in mind when starting Mir, or are there other things missing?
Threads like this are an example of why Open Source struggles in places; technological cat-fights that do nothing more than stir up bad blood.

Canonical has every right to invest in Mir to solve problems pertinent to the product it invests in, Ubuntu. Wayland folks have every right to question why Wayland was not considered a suitable solution to that same problem. Those reasons have been explained, and both sides see the rationale differently. This he-said-she-said bickering is not helping anyone, and anyone unrelated to those projects who feels the need to throw angry rhetoric at the thread is not helping either, irrespective of whether they are supportive or critical of Mir/Wayland.

Let's just get on with creating Free Software, even if we take different routes and different approaches - ultimately, the best solution will prevail and it will help deliver something that we all agree on - bringing Free Software to more people.
+Dave Airlie of course over-the-wall code drops are sub optimal. Like, say, OpenShift ;)

Your characterisation of ALL Canonical contributions as over-the-wall is just... typical generalised BS.
+Jono Bacon this sort of thing isn't open source specific. It happens inside companies as well; you just can't see it. So please: trying to equate technological cat-fights with why open source doesn't succeed in certain areas is either a bit naive or a bit condescending.

Though I agree with the rest of the post. At this point it doesn't look like anyone is changing their minds, and it's a pity; the amount of code in Mir is quite small compared to the amount of code it needs to get where it's going. But I guess we get to find out in a year's time if you burned a pile of money for no good reason! Until then I suppose we should eat the popcorn, and dream of what would have happened if your 6-7 developers had worked on something bigger than Canonical dreamt.
+Mark Shuttleworth
Okay, so maybe I just work in an area where I don't see any of your ongoing contribution work. Multitouch development was the last experience, and granted, Chase leaving didn't help anyone. Also, parts of OpenShift, if memory serves, were acquired, and acquisitions always pose a problem; I don't think there existed any open source equivalent of what OpenShift actually is, or did I miss something?

OpenStack being a different story entirely: RH had originally done some work in a different direction and was roundly criticised, and when upstream went elsewhere, what did we do? It looks like we realised upstream was more important than us, and moved our direction to it!
+Mark Shuttleworth Also I do know you are getting better at upstream contribution, maybe I'm hoping too far ahead, the work Maarten has done on dma-buf fencing is a perfect example of how hard it is to do things right in an upstream fashion. I'm hoping he can stay on the project long enough to help maintain it, once he does succeed in getting it merged, something I can't wait for.
So, as end users, what happens to us if Mir succeeds or fails? Right now I prefer Ubuntu with GNOME 3; what will my options be once Mir is implemented?
I personally am very glad to see this post because it explains the reasoning behind the decision to develop Mir. I think it's very helpful to understand why one contributes to a particular existing project or chooses a different path. Developers don't join a project out of a sense of obligation to donate their efforts; they do so because it helps them achieve their goals. If collaborating on an existing project is the best way to do that (and it often is), then the new developer will join in. If the most efficient way forward is to build something new, then a new project will be formed. Often, the choice isn't even between "duplicating effort" and "contributing to an existing project" but between creating something new and not building anything. Plus, by seeing new approaches, we can often learn more from the radical departures than anything else.

If Mir is a resounding success, perhaps others will adopt it. Even if Mir struggles, perhaps its existence will inspire Wayland developers to step up their efforts and even adopt new approaches based on lessons learned from Mir. We should welcome experiments in the Open Source world, collaborate where it benefits us, learn from each other's experiments, and encourage the diversity and flexibility that is the true source of freedom.
+Joe Mitchell if Mir is wildly successful, you might see a GNOME Shell built with it. :)

Most likely, you'll never need to care - X isn't going away and we'll need to support X applications for the foreseeable future. Given that, I expect that you'll be able to run GNOME Shell in an X session on Mir for a good long while.

Only if GNOME decides to reimplement Shell as a Wayland compositor would this be at all problematic; at that point you might not be able to run Shell on Ubuntu, at least until someone submits a Mir backend for their compositor.

Some of this confusion is due to the somewhat nebulous entity which is Wayland/Weston/Mir.

Wayland is a display-server-shell-compositor protocol; if you implement it, you get a client library, a spec-to-code generator, and (mostly) an EGL platform for free, but have to do all the rest yourself.
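
To make the "spec-to-code generator" bit concrete: Wayland protocols are described in XML, and the wayland-scanner tool turns that description into C headers and marshalling glue for both sides. The interface and request names below are made up purely for this sketch (the real definitions live in wayland.xml and the protocol extension files):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<protocol name="example">
  <!-- A hypothetical interface, invented for illustration;
       it is not part of the real Wayland protocol. -->
  <interface name="example_greeter" version="1">
    <!-- Requests flow client -> server... -->
    <request name="greet">
      <arg name="text" type="string"/>
    </request>
    <!-- ...events flow server -> client. -->
    <event name="greeted"/>
  </interface>
</protocol>
```

Feeding a file like this through wayland-scanner (roughly `wayland-scanner client-header < example.xml > example-client.h`, and similarly for the server header and glue code) is what gives you the typed stubs "for free"; what each request actually does is the "all the rest yourself" part.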

Weston is the reference implementation of a Wayland compositor. As mentioned in the post, it's a test-bed, not intended for production use. You might fork it to form the basis of your display-server-shell-compositor, but you wouldn't use it as-is - nor do the Wayland developers expect anyone to use it as-is (as far as I'm aware).

Mir is closer to Weston than Wayland, but a Weston that's actually intended for production use. We're not really deliberately defining a protocol; it's more like we're building a helper-library for creating display-server-shell-compositors. We're building it specifically to be a Unity-display-server-shell-compositor helper library, but that's likely to result in something that's more generally useful.
It's important to give credit where it's due and it is nice to see that +Dave Airlie has done that along with some criticism too - makes for a nice balance.

I'd also like to praise the excellent upstream work of Canonical employee David Henningsson both in +PulseAudio and in the ALSA stack.

I just wish all my fears about the "skunkworks" approach with the "ta-da" reveal from last year hadn't been borne out by this recent announcement. It's truly sad to see technical discussions about Wayland happening now rather than six or nine months ago. The "big reveal" approach seems to have seriously backfired here.

But hey, if nothing else, it is bringing both GNOME and KDE people together in their universal dislike of the way the Mir project was born behind closed doors, in secret! In all seriousness, though, I do appreciate that it is often hard to announce something before you have something to show. But when the general community was so clearly moving towards Wayland, not being open about the plans here does seem, at the very least, inconsiderate!
Mir is available under a Free software license, but copyright in the code is held entirely by Canonical. This gives Canonical specific advantages when working with other vendors that are particularly likely to come into play when dealing with graphics chip vendors.

I don't really see people talking about that, but that is the part that stands out the most to me. Canonical was able to get a foothold as a business because of the generosity of licensing and copyright terms that the Free software ecosystem provides. In their drive to make an almighty buck, they are having to drive wedges into the communities that gave them a start. They share, but they don't share like others. Their software is Free software, but it's not like the Free software that others have provided. It's all because of copyright, and the advantages that keeping ownership of copyright provides.

Sure, they can do it. It's legal, and other companies do worse stuff, but people who give a crap about Free software can see that it's pretty shitty.
+Dave Airlie I agree that these catfights happen internally too, I have been working in software for long enough to see plenty of that, but my point is that the public he-said-she-said cat-fights are often demoralizing for the onlookers, and often don't actually move the topic forward.

Of course, Open Source has been tremendously successful, but what saddens me is that bickering and fighting have become a tolerable norm across many Open Source communities - I am not pointing fingers at any community in particular; it's just a general observation from my experience in our industry.
Incidentally, “Mir” isn't an acronym. Although if enough people use it as such, I'm going to claim it stands for “Mir Isn't Recursive”.
+Jim Campbell So it's still free and nothing illegal is happening here. Is the problem that they have gone away from what everyone else was doing to do their own thing (for whatever reason) or is it the amount of success they are seeing?
+Joe Mitchell From my understanding it's partially that they are branching off and doing their own thing, but more importantly the initial explanation of what they were doing was handled in an inadequate way. No matter what, there's always going to be some critics, but had the announcement been better planned and implemented I highly doubt the criticism would be anywhere near as vicious as it has been.
+Christopher Halse Rogers Mutter performance increased massively not too long after, though; if Canonical had just contributed to Mutter, things would have gotten better even faster. Mutter today in GNOME Shell runs super smoothly. And I think you'd be kidding yourself if you don't think Compiz had/has its own share of performance issues...

I think it's a similar situation: instead of working with upstream and improving Mutter, Canonical went and duplicated a bunch of work and used a totally different window manager.
It isn't just that Canonical would make everything closed source. It's that owning the copyright allows them to privately sell exceptions to the GPL when they find it convenient.

I think the latter is more likely than the former, especially when working with graphics chip makers.
So I don't know a whole lot about the reasons to be (or not to be) upset about Mir. Here's what I do know: we now have proprietary driver releases on Linux from AMD and Intel, as well as Steam. These are huge steps for Linux, and if I'm not mistaken Ubuntu was a large catalyst for this. I hope I'm right in saying that Ubuntu and Canonical will not do anything to dissuade people from using Linux, and will keep it free and open source as it has been.
+Eric Frederich The creator of Linux Mint has said he intends to keep using X for now, namely X in combination with Ubuntu (as it is highly unlikely that Ubuntu will drop all support for X right away).
In case anyone's noticing posts go missing here, I'm trimming out contentless comments as I find them. There have been surprisingly few, actually!
+Mark Shuttleworth Just like to say thank you for helping create such a great product. I know there is a lot of negativity flying around but you still have a lot of supporters. Watching you wade through all this BS and still push for what is necessary is inspiring.
It's a damn shame no resolution is going to happen. Looks like we are going to have two display servers (which is a huge waste of resources).

Unless Gnome supports Mir, it looks like the Ubuntu #Gnome Remix is going to get killed off in the crossfire.

We have already seen Mir affect Ubuntu TV.
Who knows what else Mir is going to affect.

Wayland already has Samsung and Intel on board.

If only Canonical Ltd. had joined in as well, Wayland could have been almost ready to release. Instead, both projects are a few years away now.
You guys talk a lot of tech, but in short: will Mir solely run Unity, or will it be like X, which you can run other desktops on? The reason I ask is because I found Unity horrid, and after 2 months I switched to GNOME 3, which I find actually usable and productive. If the answer to the first question is yes, then I'm not going to bother following further discussion on Mir.
Looks like I get to write another post about ‘What does this mean for $PROJECT’
"Unity existed before Gnome Shell"? +Mark Shuttleworth, I suggest you refrain from statements like that if you want to be taken seriously here.

Unity's commits start in October 2009, and, as we've seen with other projects from Canonical since, Unity wasn't revealed to the public until May 2010.

In contrast, GNOME Shell's code base goes back to October 2008. In addition to the code back then, it was already publicly documented on various wiki pages. And that's only counting the concrete stuff, not the initial brainstorming phase for GNOME 3 that goes back even further in time.

But let's not let facts get in the way of a good story, shall we?
+Matthew Thompson Can you imagine working on, let's say, a guitar solo for months, and when you're done someone uses the term "horrid" to describe it? I think there is too much of a disconnect with the end user sometimes, and we don't realize how long someone might have worked on, or how proud they might be of, some of the things we discuss here.
+Christopher Halse Rogers I think that would be a good idea, because as I see it now, if GNOME Shell decides to support only Wayland (and let's face it, if Mir is only for Unity I doubt they will support it), then Shell is not going to work on Ubuntu. So this will also break the #cinnamon and #pantheon DEs on Ubuntu. If this was your end goal, then you have succeeded: no one will be able to fork Ubuntu or run any other DE apart from Unity, or DEs that only use X.Org.
+Mark Shuttleworth 'But please stop inferring and announcing malice, incompetence and manipulation' is hilarious to read, when in another post I have you spouting absolute horseshit about how I'm behind a conspiracy to sabotage Mir by keeping Wayland development proprietary.  Which seems to be based on a total misunderstanding of how Wayland works (common theme developing ...), a total misunderstanding of who my employer is, and a total misunderstanding of what my employer does (which is not starting proprietary forks of viable open source projects, since that's stupid and harmful to everyone involved).
I imagine a lot of Ubuntu-derived distributions will rebase onto Debian. Not necessarily a bad thing. Really excited to follow the development of Mir.
Fuck you all. Wait and see for the May or October release of Mir, and then start a flame war. I want the technical discussions going on here, now - not pointing to random things like Aliyun OS or Android.

One thing I understood is that Canonical relies heavily on testing and quality. I know the pains of the Wayland developers (due to the misinformation in the initial Mir spec), but it is not necessary to keep venting frustration at the Canonical guys even after they corrected the spec. Wayland and Mir will be awesome. Competition is good, and we have GNOME, KDE, XFCE, LXDE, Enlightenment, awesome WM, ratpoison, and so many other WMs out there. Can't the same be said about display servers? X, Wayland, and now Mir. That's good. If you think that is too much duplication - I accept that, but C++ and Java and C# and Objective-C? Ruby and Python? Qt and GTK+? Cool. No one wants a single solution to a single problem, and that problem continues to evolve.

If Canonical wants to control it, that's fine, given the current focus on testing and QA. If Canonical were what it looked like 4 years back and had developed Mir, then I'd say it's stupid. When you are trying to market something with huge stakes, you'll want a lot of control.
There will be another troll subject for Fridays: Mir vs Wayland (after Emacs/Vi, KDE/GNOME, Qt/GTK3+...)
I'm just here for the drama.  I'll use whatever works.
I fail to understand how Canonical creating Mir for Ubuntu turned into "Mir vs Wayland".
It is the direction Canonical wants to take Ubuntu. It is their project... The code is out there, and they are not stopping anyone from forking it and implementing Wayland or contributing in that direction.
People bashing that decision: why don't you guys form a community and contribute Wayland support for Ubuntu rather than wasting your efforts in bashing Canonical? I have read all the posts regarding why Ubuntu picked Mir over Wayland. In no way did they disregard the project and the efforts put into it. It was just a decision they took based on the inferences they drew - it is as simple as that. I am not a contributor to Ubuntu, but this weird behavior from people makes me wonder if Canonical should just take Ubuntu under their total control and let people like you bash them like toddlers.
Take for instance: Elementary OS. They have done a beautiful job of creating an OS the way they wanted to. That is a mature decision by sensible people.
As Mark said, don't poison the well behind you. If you want to move on, just do.
+Dave Airlie How does that make a difference to you? And for that matter, how does that stop you from using or contributing to Ubuntu? Please do not get me wrong; I am just trying to understand the reason for such an outburst. I will suggest: just have a cup of coffee, let your mind settle down, come back, and start using Ubuntu. Nothing has changed, really... Just that Ubuntu is now not a small project; it is becoming a big project, and certain decisions have to be made for it. Nothing that demeans community efforts in any way. If you think Wayland is more suited in this regard, prove it - but sensibly. I am sure Mark would love that. Throw it in his face: see, Wayland is better, and I along with other community members have made it that way - now include it. He would be more than happy to do that... But then again, you saying "it is better, so spend your money and resources doing what I want" doesn't solve anything. Please try to understand it from their perspective.
I know it is not just their money but also donations, but I am sure you get my point.
+Udit Mahajan We aren't Ubuntu developers, and we're not trying to push Wayland on Ubuntu.  If Mark decides that he should go off and write his own display server, that's his choice.  That's not what any of this is about.
+Jeff Fortin 

From wikipedia:

Gnome Shell: Initial release April 6, 2011; 22 months ago
Unity: Initial release June 9, 2010; 2 years ago
+Jono Bacon I think your perception of what causes Open Source projects to struggle is a bit misguided. "Cat-fights" can cause some tension and headlines, but the real reason projects struggle is fragmentation: different specs, small changes for the sake of being different, and countless hours of duplicate work done to achieve similar goals, creating a moving target for software developers. Rather than having an intelligent conversation with the community and providing actual answers to questions, Canonical seems to be avoiding basic questions whose answers could be detrimental to the rest of the community.

I agree everyone should just focus on their projects, but "bringing Free Software to more people" is not justification for hurting the rest of the Linux community and causing possible issues with driver support (an area where Linux already suffers). If you guys were a bit clearer in your responses and ensured Mir would work hard to keep a consistent standard so that both Mir and Wayland could share drivers, nobody would be fighting. But instead of being professional, addressing community questions, and working towards a solution that is good for everyone in the community, Ubuntu's responses have done nothing but add more cats to this fight.
+Daniel Stone Sorry, I'm not a dev, but I have a question (not a troll, don't worry). I understood that Wayland dropped Android support (maybe that's a misunderstanding), so I'd like to know if it's possible to have Ubuntu running on a phone and get a full desktop experience when plugging it in, with Wayland?
Someone please get Jono Bacon in here and take internet rights away from Shuttleworth..

Edit: Right, Bacon's here. Damage control, everybody!
+Daniel Stone I think the work you guys have put in to Wayland is amazing and I have been following the development for quite some time now. I don't have the required skill set or I would indeed even contribute to the project. But could you elaborate for the community, if possible, as to where the project stands? I could not find much recent information about it on your posts..
Anyone failing to notice the FREEDOM to use ANY software?
From my perspective, a future release of Ubuntu will be like this: Linux kernel + Mir display server + Unity. Ubuntu derivatives, for example Kubuntu, will be like this: Linux kernel + Wayland/X + KDE. That's good. Mir support for the KDE/XFCE distros (openSUSE/Kubuntu) will come later than Wayland support.

And about choice: there will come a point when the installer (not sure whether for an Ubuntu derivative or another distro) will offer the user a choice of three options - X server, Wayland, and Mir - similar to the choice of text editors and desktop environments (GNOME/KDE/XFCE). I admit that X will not be phased out much earlier than 5 years from now. I hope someone dares to run Unity on Wayland.

So I want the Wayland and Mir developers to work hard and get better, and leave the choice to the users. Then we get user feedback and bug-fix requests, and fix them as best we can.
+Daniel Stone Then what IS the problem? You are complaining about Mir. You are complaining about Canonical's approach to "open source" (your definition of it, that is).
if you think that Mark has a choice, and is exercising it, then why are you upset? 

You say he has a choice, then you complain about exercising that choice?  makes sense that...not!

The way I see it, open source is filled with glory-hunting "next Linus Torvalds" wannabes nowadays.

Can you folks just give this up and go write some viruses or something? quit moaning all over the internets
AFAIK all this stems from wanting to be the one whose name is in lights for writing software for Ubuntu (the popular Linux distro).

Because it seems to me that even if Canonical was to employ 100 developers to work on wayland and make it all good and nice, they still have the choice not to use it.
and when they don't use it, WE WILL STILL GET THIS MOAN_FEST.
pettiness all round.

and FWIW +Mark Shuttleworth should NOT get into these type of discussions at all. LET THEM MOAN.
+Roshan P Koshy That is precisely what I was thinking. Every distro is looking for an X alternative, and that is the vision with which Wayland started: as an X alternative, not as a default Ubuntu display server. I can understand +Daniel Stone's frustration, though - when you have put so much effort into a project, you do want to see it adopted everywhere. But Wayland would still be the de facto display server for many of the Ubuntu alternatives. And who knows, the competition between Mir and Wayland might just be good for both projects.
+Mark Shuttleworth As the Head of Product Strategy at Canonical, you should be ashamed of the glaring unprofessionalism you are portraying in your responses. You seem to be taking many of these comments far too personally. Under all the hate there is a concern here that should be addressed in a formal manner, and that is whether or not Canonical will do what it can to keep a fairly consistent code base where drivers could work seamlessly across both Mir and Wayland - that's it. There is no need to get into these long-drawn-out arguments when you yourself could easily answer the question with a yes or no.

Getting into these arguments, telling people they are "bitching" (even though you could be accused of doing the same), making snarky remarks at other projects such as OpenShift, and telling everyone how "amazing" Mir will be rather than proving it when the time comes shows weakness and doubt. If Mir is going to be as amazing as you claim, show us - but you don't have to cause a rift in the entire Linux community in order to make something great. Maybe if Mir is as successful as Ubuntu TV is (...oh wait...) then you could make that claim, but as of now your claims are nothing more than marketing and predictions, and should be presented as such when making a sales pitch, not when addressing the concerns of the community. If you wish to be perceived as a visionary and great leader, then be one: respond to us like one, do what you can to get people rallied around your cause, and address the concerns of the community rather than talking down to us (though some of us do deserve it...).
+Fola Dawodu we don't write software for Ubuntu, got any more interesting crap insights you need to share?
+Daniel Stone Some code existing != the project existing; the designs for the product are the first stages.

1. "We were part of the GNOME shell design discussion, we put forward our views and they were not embraced by designers," 

2. Unity Released

3. Gnome 3 Released

4. Wayland Released

5. Ubuntu Next will feature Qt/QML for better convergence with Ubuntu on other devices.

6. Mir announced.

Where exactly is the problem?
+Mark Shuttleworth, I have to agree with mark waters here. Though I would not go that far calling you the beautiful adjectives he did..
These storms are nothing but concerns as to where Ubuntu is heading. I believe the Ubuntu team should start reassuring people, but you replying the way you presently are is causing even more of a stir among the members.
You are the face of Ubuntu (even though not CEO), so your actions directly reflect the mentality of Canonical as far as the community is concerned. You need to be careful as to how you present yourself. We all appreciate the amazing work you have been putting into this project and would hate to see it all going the wrong way just because of small misunderstandings like this...
+Udit Mahajan I'm planning to revive my blog and do that soon - it's too long a story to cover on G+.  But it is shipping in mass-market products. +Vincent JOBARD Pekka wrote an Android backend before libhybris (which Canonical use heavily, including having forked and licensed under an incompatible license, before being persuaded to contribute their changes back under the original license).  Unfortunately, due to a lack of interest all around, this was subsequently removed.  There's nothing preventing anyone from picking it back up and continuing development, including pushing it further with libhybris, and nothing in the Wayland protocol that hindered its progress at any point.  +Fola Dawodu I'm mainly complaining about Mark accusing me of conspiracies based on totally false grounds, if you actually read anything I've said here.  Also, if you want to see my name in lights, you can grep for it in Ubuntu changelogs and see how I was working on Ubuntu back before it was called Ubuntu.
+mark waters I don't think Mir is hurting the rest of the Linux community. Does it fragment the display server world, yes, but does it hurt the community, I don't think so. By that same notion GNOME, KDE, Unity, XFCE, and other desktops also hurt the community. Also, +Mark Shuttleworth is not the CEO of Canonical, but head of Product Strategy.
This has nothing to do with my opinion of Mir in any way (which I'm happy to discuss in person, over beer), but rather the silliness of people taking issue with the "the current solution doesn't quite work how we want, so we'll write a new one" approach:

Isn't that exactly the systemd versus upstart argument? With exactly the same flame wars, but in reverse? And that wasn't the big, evil Canonical forking paths from the bigger, innocent Red Hat, but exactly the opposite. Again, actual opinions of upstart and systemd will be provided over beer. This is merely about the pointlessness of the arguments.

A little competition is healthy. Maybe both survive, maybe one withers and dies, maybe one or the other is forked, or a whole new option appears and becomes the obviously better solution for all. We all agree X sucks. We all agree it needs replacing. Some folks disagree on how. This is precisely the right time for divergent ideas, specs, and implementations.

Edit: I am about as far from a corporate shill as one can get.  Despite my employer, I suspect many (other than +Daniel Stone) would be shocked to discover I actually have opinions about our company policies, and some of them aren't favourable.  The above was purely addressing the crazy in the thread, not the technical arguments.  Let the tech flow.  Bring it.  Stop with the attacks, they're demoralizing to all involved.
+Adam Conrad Because systemd started with a clearly defined list of reasons why it was different from upstart, and the core problem was that the core of upstart wasn't compatible with the core of systemd - so yes, systemd could have evolved upstart, but it wouldn't have been upstart anymore. The thing is, there hasn't been a single technical reason given why Mir is required beyond Wayland that anyone is sticking by, including the Mir developers. Lennart has always consistently messaged the "why not upstart" case from a very technical point of view, and never demonstrated that he didn't understand the upstart technology. You might remember Red Hat shipped upstart in a major release of RHEL, so it wasn't like we weren't part of the community; we had developers working on it as well. So if you want to compare this to systemd/upstart or frame it as a Red Hat/Canonical battle, then you seem to have missed the point.

The thing is, this is forking Linux at a level we haven't forked before, and it isn't for any valid technical reason that anyone is willing to stand behind. So to most of us who work in the area, it's kinda hard to see what the point of forking the Linux graphics platform is, especially with respect to binary drivers and upstream toolkits.
+David James +Mark Shuttleworth You really must get better at checking facts:
date: 2008-10-31 04:22:44 GMT (GNOME Shell's first commit)
date: 2009-10-15 10:40:35 UTC (Unity's first commit)

Pointing that out, despite loving Unity. Also I believe to remember that Canonical was actively involved with the initial UI mock-ups of GNOME Shell, let me check GNOME mailing list archives.
+Jono Bacon The big problem with Open Source: Facts are so much easier to check, than with Closed Source.
+Dave Airlie, like I said, some of that is beer fodder.  I'm merely asking people (on all "sides") to calm the &^!% down and move the &!$% on.  Technical arguments can be weak and still be valid, if they're what led you to go the route you went.  Past tense.

Absolutely, if you discuss your technical arguments up front with a community of differently-minded people, you may find their weaknesses, and you may find middle ground.  Sometimes that happens, sometimes it doesn't.  It didn't.  Past tense.

In the end, Mir will be a very different beast than Weston and that might be a good thing, it might prove to be awful.  It will certainly scratch the itch Canonical needs scratched in the very short term.  If it ends up later needing to converge in a Wayland-compatible way, or have its different ideas end up in someone else's implementation, so be it.

(Note: I have no emotional attachment to either code base, I've worked on neither, but I do have emotional attachment to some of the people on both sides of this "debate", and while I'm a big fan of bitterness and arguing, I'll remind you all that I'm far more bitter than any of you, so stop forking my emotional state and learn to contribute to mine instead)
+Mark Shuttleworth Don't let them get to you, just continue what you are doing there are millions of us who appreciate the work you do and what you have provided to us the community.
+Mark Shuttleworth Sigh... Let me put this straight and simple. 

There is a reason why there is ONE X.Org spec, and ONE OpenGL spec. There is a reason why there is ONE Linux kernel and ONE Mesa project. And there is a reason why we, the software developers, want ONE EGL display server protocol to replace X11.

The reason is that we actually, legitimately care about the entire scope of Linux distributions as a whole, rather than just ONE operating system. We want something that will make developers' lives easier and benefit everyone the same. You claimed in your blog post that you chose to deviate away from the "unorganized" and "non-collaborating" Linux community because you want Ubuntu to lead the way towards a world where free software dominates...

Perhaps you should wake up? Companies like Google and Github have already made that world a reality. I mean we now live in a world where Microsoft commits to the Linux Kernel.

We live in a world where some of the most renowned university professors and scientists give away their knowledge for free.

We live in a world where nvidia contributes support to an open source GPU.

We live in a world where Android/Linux, the ACTUAL largest open-source project in the world, has 40% of the mobile device market share. That is nearly 10 times more than Windows Phone - which is amazing considering that desktop Linux has only ever been in the single-digit percentage range for almost 2 decades.

Android is only a success because it is the by-product of all the labor and love that went into Linux - hard work contributed by the countless hackers that make up the open source community you are ignorant and egotistical enough to insult.

My point is that we don't NEED Canonical to take on any sort of "Linux messiah" role, to lead us "lost and arrogant sheep" towards some "FOSS promised land" - not if it comes at the cost of HARMFUL FRAGMENTATION OF THE LINUX ECOSYSTEM, which is exactly what Mir is, as RAOF's post has made absolutely and perfectly clear. Your entire motivation for Mir can be summarized as:

"We want Ubuntu to be the most badass Linux disr... ahem, badass operating system in the world. So were going to take all the hard work that Kristian Høgsberg worked on over the LAST 7-8 YEARS contributing to Xorg, AIGLX, DRI2, KMS, EGL, and then deny him the honor of letting his protocol become the defacto Linux standard in place of X11.

This is of course AFTER we assured everyone and led them on that we would use and help contribute to Wayland, but NOPE, we like Mirland... I mean Mir. What's wrong with Wayland, you ask? We don't like Wayland's variable names."

Unfortunately, you may still be wondering: why does it matter if Ubuntu does its own thing and implements Mir? I mean, it's free and open source software, right? Why can't Ubuntu do what they want with Linux, like Google? Nobody bitches at them.

The difference is that they didn't try to take over Linux on the desktop after Android on mobile became successful. They didn't roll their own in-house distro, and instead continued to support Ubuntu, Fedora, Arch Linux, Mint, etc. And when they did start making Chromebooks, they went and did their own thing. They didn't try to swoop in and steal Ubuntu's spotlight just to benefit financially from Linux's rising popularity - which is fortunate for Canonical, because Google's influence and brand would pretty much make Ubuntu redundant.

Which is what Mir will do to Wayland, just because Ubuntu decides to be a Linux OS that focuses on user experience and marketing. Despite all the work that went into Debian, and despite all the work that went into Mesa, Canonical has all the influence because Ubuntu has the user base to encourage investment towards making Ubuntu a profitable market. Despite all the work we do, you and your company can just swoop in and rally support behind your harmful, selfish agendas.

If after reading all this you still don't understand why people are pissed off at Canonical, I'll make it very black and white.

We are pissed off because Canonical doesn't have the moral right to take this away from Kristian Høgsberg after benefiting so much from his contributions. If Ubuntu didn't exist, we hackers would all still be here hacking away, making Linux the best operating system in the world for us. But if Kristian didn't exist, we wouldn't be having this discussion, because Mir would not have a reason to exist.

Until you understand this, I promise Ubuntu and Canonical will never be anything more than a mediocre success.
+Adam Conrad Nobody would have been seriously upset if Canonical hadn't backed their project with badly researched arguments that discredit the Wayland community. Personally, I decided to remain calm after I was asked not to call Canonical liars. Fair enough. But the water is boiling again after seeing that Canonical still tries to manipulate public opinion with bluntly wrong statements.

Everyone: PLEASE just comply with the standards you request from others; maybe this will calm minds down.
Go go Wayland. Prove that Mir was the wrong decision by bringing Wayland to phones, tablets, TVs and whatever else. Ubuntu will surely come back to you if they get stuck with Mir. And if they succeed, no problem either; then it's a Linux success anyway.
Despite all the hype about Wayland, and Canonical's plan to achieve "full convergence across the form factors" by 2014 with Mir, neither of these "next generation" display servers is going to replace X on the desktop anytime soon. By the way, the Canonical team said back in 2010 that they would move to Wayland by the end of the year. Now it is 2013 and you are talking about Mir. Please make a choice.
+Sirus Laia Actually, if Mir helps to finally resolve the legally questionable and technically unbearable situation we have with NVIDIA's graphics drivers, this will all be a giant step forward.
I think this whole mess, and so-called cat fight, would have been avoided if the developers of Mir had engaged with the surrounding communities before announcing it out of the blue. For something that can have such a far-reaching effect on the free desktop world in general, I think it is callous to suddenly appear out of left field with something, instead of engaging openly throughout. Maybe some changes to Wayland could have satisfied Ubuntu's needs? How do you know if you don't ask?

My fear is that things will once again break randomly, like they did in the days of the PulseAudio transition.

This argument should have happened much much earlier in the process, because from at least some perspectives it seems that Ubuntu ignores the efforts of the wider community and just makes their own stuff. Not Invented Here syndrome. If the folks at Ubuntu could have made a good case for Mir, or changes to Wayland early in the process it may have been more likely that other communities would have got on board and there would not be this unnecessary antagonism arising.

Sadly the Ubuntu philosophy is supposed to be "a southern African ethic or humanist philosophy focusing on people's allegiances and relations with each other." Regardless of who is right or wrong I don't think that the Ubuntu people are living their own namesake.
Wow. G+ appears to be a bit trigger-happy on the spam detection. Just saved a legitimate post from the spam queue. It wasn't a particularly interesting wall of text, but it didn't deserve to be classed as spam.
Hi +Christopher Halse Rogers did you not say in the wayland irc-channel: "I'm unfamiliar with wayland's input handling"? And now you are familiar?


Liar, liar, pants on fire, hanging from a telephone wire? ;)
+Ioannis Vranos You are just repeating the anti-Ubuntu narrative, like an echo-chamber. It's not nice. 

+Fabian Di Milia "the unity performance is a joke" If you have claimed x% slower, it might have made sense. But saying the performance is a "joke", you lose all credibility.
+Dave Airlie :
Can you point me at any non-Canonical controlled (CLA) project that has ongoing maintenance and development from Canonical in a non over the wall manner?

I don't have an overall picture of which projects Canonical contributes to, but FWIW I'm paid to work on this:
As I understand it, the conclusion should be "Mir, the Wayland display server created for Ubuntu", because otherwise you can't outweigh the cost of maintaining duplicated compatibility. It's overkill.

So why not Wayland?

Well, maybe it's smart to have 2 display protocols, in order to have the better one win. Let's make sure this is going to be Wayland.
+Simos Xenitellis It's no secret that Unity's (Compiz) OpenGL performance sucks... check Launchpad? Check Phoronix? Install a game? Read the news? Read dev blogs? E.g. install Heroes of Newerth and run it under Unity: it will lag like hell and it's not playable! Replace Unity with e.g. Metacity and it runs without any lag!!! They released some fixes and enabled unredirected fullscreen windows by default in Compiz, but it still sucks! And yes, Unity is a joke if you can't even play a game like Heroes of Newerth on an AMD 8-core CPU with 16 GB of RAM, an OCZ Vertex 3 and an EVGA GTX 560! Btw, yes, I use the NVIDIA blobs and not Nouveau, and yes, I've been a Linux user since Red Hat 5.1 (1998) and a Gentoo user since Gentoo 1.2 (2002).
I fear that Canonical/Ubuntu is going the M$ way. Right now it's only a fear, but this "incident" isn't good, and Canonical's way of handling things has become cumbersome. I've been using Ubuntu for at least five years and I am grateful, so I have nothing to ask of Canonical. But if I get the feeling that actual users are becoming a drag in their eyes, as I did with Unity, then this doesn't look good either.

I might go back to Debian. What a horrible thought! ;-)
+Khaled Blah that seems like a bit of an exaggeration. I am sure there is an amicable solution to all of this and a logical way forward.
One thing that baffles me, is the lack of follow-up argumentation in this thread. Now, +Jay Luek already asked, but nobody seems to be interested in this, although it seems to me that it's a fairly central question... Please, what exactly are your "various reasons" to want server-side buffer allocation? Is there a sensible and detailed explanation as to the technical motivation behind this apparent design necessity?
+Tjaart Blignaut That's why I said "I fear" and not "I am certain". And yes, you are of course right, there is a way forward. This is about communication though and not the best technological solution so I am not certain this way forward will be found or used if it is found.
+Pierre Vorhagen Well, if I understand the wiki correctly, with server-side input you can feed the keyboard input stream directly into Unity's search engine while it is displayed in the client application. They also talk about shell-level gestures. So I imagine the interaction can be read more quickly than with a client-side approach.
+Fabian Di Milia One issue you have there is that you talk about "they" and "they", as if the developers were a different breed. I use Ubuntu and I feel part of the community. I understand that the GUI stack carries 20+ years of legacy, that Compiz and X.Org are not optimal, and I welcome change that will simplify the whole situation.
My graphics card is much weaker compared to yours, however it works reasonably well with CS:S that I have been trying, along with Unity.
I am not antagonising the developers; I might put up with some inconvenience, however for my case it works.

This democracy in free software brings all sort of voices, and sadly many of them are negative and stop-power voices. As if our enemy is just ourselves.
For me the real problem is that we can't compare systemd vs Upstart, or even GNOME Shell vs Unity, or apt-get vs yum, with Mir vs Wayland, for one simple reason: the protocol used to create GUI applications is fundamental to desktop application compatibility; the choice of desktop, init system or package manager isn't.
All in all I wouldn't be worried about Mir at all if it didn't pose such immense compatibility risks, I can totally live with incompatibility when it comes to creating desktop environments. I really don't care if anyone can use a different desktop environment on Ubuntu because there are more than enough other distros.
What I do care about is Ubuntu breaking compatibility for everyone, potentially excluding other distros from running popular closed-source Linux software like Skype, Google Talk, etc. that would then only be ported to Ubuntu. It's not about telling Canonical what they may or may not do (that's entirely their decision); it's about making clear that we as the greater Linux community can't stay quiet when Ubuntu risks breaking everyone else's distros.
Breaking compatibility at the GUI layer would make porting between Ubuntu and the rest of the Linux world only slightly easier than porting from Mac OS X to Linux, and we all know how many companies have shied away from that effort!
+Niklas Schnelle It's OK that you're worried, but for now we can't say whether it will break compatibility or not. It's just FUD. We have to wait and see.
Mir must work with the GNOME and KDE desktops because of the official flavors; it's a prerequisite. And if the GNOME and KDE environments work with Mir, then applications will work with it too.
I think this drama proves the point that there was no chance of cooperation or contribution between Canonical and the Wayland team in the first place. This thread brought all the stuff from the past back into the discussion, which makes it clear that the call for contributions to Wayland is just not the truth.

The wiki page did have some text that could be interpreted the wrong way, but it was corrected and apologies were made several times. My opinion: get over it!

Ubuntu said back then that they wanted to use and contribute to Wayland. But they now think another project will suit them better. That is not uncommon in technical environments. Just think of Red Hat first using Upstart and then switching to systemd.

Of course the Wayland team can now shout: "we just need a patch here and a bit of work there, and we could easily have done this that way and that this way". So do it all that way and prove your point that Wayland is better, instead of only claiming it theoretically. And again: would you have accepted patches from Canonical that served not the goals of Red Hat or KDE (e.g.) but purely the convenience of Ubuntu?

And in the end it is all about the fear that proprietary driver support will pick the side that "wins". Then my question is: what did the other big commercial companies like Red Hat or Novell do in such cases?

In the end, I think what the Linux community lacks most is teamwork! That doesn't mean everyone has to do the same thing in the same amount all the time, but that there are also specialists who can do what others can't. Pushing those specialists out of the team doesn't exactly make the rest look like team players either!
One of the issues is that if Mir works, then because of Canonical's partnerships with manufacturers, Mir could become a "standard" and Wayland an alternative to Mir...

You're never happy when you arrive in second place ^^
+Achim Behrens Actually, I don't care about the driver side either; all proprietary drivers have been of such shitty quality (whether on Windows, Linux or Mac OS X) that I'd rather live with 1/10 the fps but get proper support than use the pieces of crap NVIDIA and AMD provide.
Thank Turing there are great Open drivers for all desktop/laptop graphics cards out there and even some for the mobile ones.
+Jono Bacon I respectfully disagree with your statement about Mir not hurting the community; it has already started a flame war between us. The only question people have is whether Ubuntu will ensure that vendors won't need to make Mir-specific drivers, and whether those drivers will work with Wayland in the future. That is all people want to know. I don't think Mir fragmentation is on the same level as GNOME or KDE or Xfce: if I write a KDE app it will work just fine in GNOME and vice versa, so there is no concern about whether or not it will work.

I apologize if some of the things I said previously sounded somewhat harsh towards both yourself and +Mark Shuttleworth (I will correct his position title in my previous post). In the heat of the moment I sometimes don't watch my wording as much as I should.

For me at least, I guess I just feel let down. Canonical and Ubuntu are supposed to be the heroes of desktop Linux, and if Mir can keep driver compatibility, people would drop all this fighting and we could all go back to loving you guys like we should.
I think I'm going to have to hold my comments and opinion-forming until I know whether this means I will be able to use Mir with an environment that is usable to me, i.e. not Unity.
+Dave Airlie Grub has significant worthwhile contribution from Canonical, but it's the only desktop-relevant case that immediately springs to mind. +Colin Watson has done great work there, and it's depressing that the fact that he can cooperate on a difficult project with an awkward upstream is completely ignored by other Canonical developers.
+Niklas Schnelle I agree that AMD/Intel drivers are pretty buggy. I do like playing Steam on Ubuntu though and for that I need them. Or at least I thought I did. How are the open drivers for the Linux games on Steam?
People pointing out that GS codebase was older than Unity's should note that GS was almost completely different before Unity came out. GS borrowed a lot from Unity. Not that it's a bad thing, just saying.
+Joe Mitchell on modern laptops with Intel graphics most games will run fine with the integrated graphics that has great vendor supported open drivers. The open radeon and nouveau drivers will probably work for many games too but your mileage may vary especially when using hybrid graphics solutions.
+Pierre Vorhagen No need.  The goal behind it is to enable an optimisation for full-screen clients where you can display the client's buffers directly, rather than having the compositor copy it every frame.  You don't want to be doing server-side allocation if you don't have to, but in this case it's a totally legitimate thing to do.  And Wayland supports it.
+Christoph Anton Mitterer Update your fact book: Canonical employs quite a number of Debian maintainers. Above 20 was the last number I've seen floating around. Also, Ubuntu is pretty strict about Debian-first these days: there is almost zero chance of getting a package into Ubuntu today unless you've tried Debian first.
And they gave plenty of good reasons why ubuntu should be separate from debian, many of them good for the community: it kept debian a separate entity with its own release schedule. Debian was working well for some use cases, there was no need to change this, just to improve other cases.
+Aurélien Naldi May I point you to this important note: "Note that testing does not get the timely security updates from the security team." So effectively plain Debian only gives the choice between pretty old software versions on the one hand and no guarantees about security fixes on the other. For me, Ubuntu fixed quite a major issue.
Where's the best place to keep up-to-date on the Mir project and to possibly get our hands on Beta releases? 
+Vincent JOBARD, the problem is that Canonical's contribution to the free desktop graphics stack doesn't entitle Canonical to just swoop in, fork Wayland, rewrite it in C++ because they don't like Wayland's variable names, and take all the proprietary support. If Mir were an obvious technological improvement on Wayland, then we wouldn't be upset. But it's not.

All I see is Canonical attempting to vendor lock closed source 3rd party apps to their platform when Ubuntu goes mobile.

I mean that's all dalvik is. That's all cocoa is. That's all Mir is. The fact that some of you people are so naïve to this astonishes me.

Mir is harmful. We don't need a desktop Android. 
+Roger Roach Canonical is so revolutionary that they will do vendor lock-in with free software! The first case in history.
+Roger Roach You say that Canonical wants to lock down the display server, but can you prove it? What I see from my side is that Canonical has another approach: it will use some of the work done on Wayland, but will also deal with manufacturers. And because Ubuntu is also Kubuntu, Lubuntu, Xubuntu and Ubuntu GNOME, they need to make a display server that works with GNOME, KDE, LXDE and Xfce.
Having two display servers will be terrible. It would be like having two brands of cola, two text editors, two desktop environments, or, horror of horrors, two open-source kernels. Duplication of effort is the worst of all evils. I mean, just think what a utopia we could have created by now without things like "competition," "money" and "freedom of choice." Freedom is slavery, man. What we need is one display server to rule them all, cause the situation with X11 has been so great that nobody was dissatisfied at all.

I realize that the main problem here was actually Canonical's misrepresentation of Wayland, which is a considerable grievance, but this post exists as part of the effort to clear that up. It may be too little, too late, but harping on it is only going to escalate the situation.
For those (+Jeff Fortin, +Mathias Hasselmann, etc) surprised by +Mark Shuttleworth's revisionist history of GNOME Shell and Unity, +Jeff Waugh wrote a well researched series in 2011 that included a bit on this -

+Ubuntu's silence at that GUADEC and the following Boston Summit was deafening. I recall at the time they were more interested in getting GNOME to switch to bzr instead of git.

I'm sure there is absolutely not a single, no not even one, no not even a tiny similarity to their behavior in relation to mir...
+Mathias Hasselmann : indeed this as well. Ubuntu made me feel comfortable to start switching my mom's computer to free software, which would have been less comfortable with plain Debian. I'm still confused, both hopeful and scared about mir but up to now Ubuntu offered a lot to the community at large from my point of view...
+Dave Airlie I'm not sure why what I outlined in the post doesn't count as “technical reasons”. We identified our technical desires, estimated how much effort would be involved in making Wayland support those desires, estimated how much effort we'd save by using Wayland instead of doing our own thing, and decided that it'd be technically easier and less effort to build our own thing rather than build a Wayland compositor.

It's reasonable to dispute that estimation (I certainly did, initially); it's (kinda) reasonable to be annoyed that other concerns also factored in - I'm sure the fact that we control it weighed in favour of Mir over Wayland; I don't think it's reasonable to claim that there are no technical reasons why we didn't go with Wayland.

It sucks that this means there'll be more maintenance work in projects that you're active in; it's reasonable to be annoyed there.

But even in the parallel universe where we start with Wayland and do our thing, we'd still be imposing that burden.

If we did server-side allocation in a Wayland protocol, we'd need to patch the EGL platform and XWayland in almost exactly the way that the Mir EGL platform and XMir look today; the primary difference would be calling wl_mir_ functions rather than mir_ functions.

If we changed the wl_shell interfaces, we'd need to redo the toolkit integration patches.

The only way we don't add maintenance is if we never make any visible design decisions differently to Weston. That seems like a pretty restrictive requirement.
As the owner of I hope this all works out for the better of the USERS, you know, the people who actually use all the crap you guys bitch and moan at each other about?

Get a grip people, this is just a stupid pissing contest now.

After all, I don't want to end up having to split my website into "gaming for Ubuntu" and "gaming for the rest of Linux", because developers having to do all kinds of hacks for the different Linuxes already isn't enough? Sigh.
+Liam Dawe The split ecosystem has already been reality for the last 3 or 4 years.  It is what it is.
+Jim Campbell Hear, hear. Bradley M. Kuhn's article nicely describes the issue: "Not All Copyright Assignment is Created Equal".

I would be much more comfortable with Canonical's copyright assignment regime if copyright were assigned to a sister non-profit organization setup expressly for that purpose, and if that non-profit organization issued a FSF-like promise to never make their software proprietary. Dual-licensing could still be possible for compatibility with proprietary software where necessary (e.g. display drivers) but only under the strictest terms & conditions and only with the promise that no non-free fork would ever be created.
+Christopher Halse Rogers You guys show so little understanding of the input side. The main toolkit-interaction fun isn't buffer allocation; it's going to be input, input methods, xkb crap, etc. Granted, you might have no choice but to rip that code out of Wayland and rewrite it in C++, but that wouldn't be test-driven design then, would it?
Christopher Halse Rogers: "4) We need server-side buffer allocation for ARM hardware; for various reasons we want server-side buffer allocation everywhere."

This is where you enter the world of fail.

The reason why Wayland does not do server-side buffer allocation by default is fourfold:

1. It removes the huge memory-leak problems X11 suffers from due to not knowing when a buffer is no longer required.
2. If the kernel controls allocations, then when an application dies (by out-of-memory or anything else) its memory is freed immediately, and it can disappear off screen immediately.
3. It is very clear which application is bleeding memory when allocations don't all end up stacked in the server.
4. Linux Security Modules control application-to-application interactions. Inside the display server you have hidden the creator's memory from the LSMs, so you need some super-hack like X11's X Access Control Extension (XACE) to work around that. You don't need that hack when you give memory management to the kernel and have applications place their allocation requests directly with the kernel.

From what Christopher has said, Mir is going to bleed memory exactly like X11 does, be hard to secure like X11 is, and be hard to debug when an application causes the server to bleed memory, like X11 is. Worse, it's going to be insecure like X11. Server-side buffer allocation is basically not workable; it's where the security of X11 starts falling apart.

Christopher Halse Rogers, why should we be happy to see Mir? Drop the allocate-in-the-server idea, please; it just doesn't work. It is never going to work; we have tried for 30+ years to make it work. Windows and OS X don't do display-server memory management, and they don't bleed memory in the display server. Display-server memory allocation is a huge error of X11. We need that feature gone.

Memory management is the kernel's job. When it's not in the kernel, you are asking for trouble. So if Ubuntu wants more memory control, the answer is to add features to the kernel.

Weston is never going into production. Mutter for GNOME and KWin for KDE are currently lining up to be their Wayland compositors. This is the problem: KDE and GNOME will not want a standalone display server back at all.

The Wayland change means security work must now move to the kernel. Cgroups and other functionality that systemd has implemented solve a lot of the problems you have most likely done this for.

I'm basically fine with everything else Christopher Halse Rogers said. Just number 4 is complete foolishness.

The Intel developers of Wayland spent years trying to fix up server-side allocation. Unless you are willing to commit 10+ years to it, and still accept the risk of failure, don't do server-side allocation. Client-side allocation is far simpler to manage and to get right.

I could see a need for a software buffer-allocation device. Note I said device: a form of usermode driver. This is if you want userspace buffer allocation. The advantage is that on platforms that don't need it, you can drop it. A usermode driver like this could be used by KWin, Mutter and others implementing Wayland.

As a usermode driver, it is something the kernel can track.

Basically, Mir is implementing in the display server something that should be a driver: something the kernel can track access to. That functionality does not belong in the display server.

Wayland's design is correct. You want a DRI3 usermode driver doing client-side buffer allocation in userspace. Instead you have built a display server with server-side buffer allocation.

Both achieve storage in userspace. Both can interact with userspace stacks. But only one can the kernel naturally track: DRI3's use of file handles means the driver side knows when a buffer is no longer connected to a client.
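The kernel-tracking argument above can be illustrated with a toy model. This is not real DRI3 or kernel code; the names (`ToyKernel`, `alloc_buffer`, `leaking_clients`) are invented for illustration. The point is simply that when buffer ownership is recorded outside the display server, a dead client's buffers can be reclaimed immediately and leaks can be attributed to a specific client:

```python
# Toy model of kernel-tracked buffer lifetime, in the spirit of DRI3's
# file-descriptor-based buffer passing. All names are hypothetical.

class ToyKernel:
    def __init__(self):
        self.buffers = {}          # buffer id -> owning client pid

    def alloc_buffer(self, pid, buf_id):
        # The "kernel" records which client owns each buffer.
        self.buffers[buf_id] = pid

    def client_exited(self, pid):
        # Ownership is tracked here, so a dead client's buffers are
        # reclaimed at once; nothing piles up in a display server
        # that has lost track of them.
        self.buffers = {b: p for b, p in self.buffers.items() if p != pid}

    def leaking_clients(self, threshold):
        # It is also trivial to see which client is bleeding memory.
        counts = {}
        for pid in self.buffers.values():
            counts[pid] = counts.get(pid, 0) + 1
        return [pid for pid, n in counts.items() if n > threshold]

kernel = ToyKernel()
for i in range(5):
    kernel.alloc_buffer(pid=100, buf_id=i)   # client 100 allocates a lot
kernel.alloc_buffer(pid=200, buf_id=99)

print(kernel.leaking_clients(threshold=3))   # [100]
kernel.client_exited(100)
print(len(kernel.buffers))                   # 1
```

A server-side scheme, by contrast, would have all buffers owned by the display server's own process, so the "kernel" here could only ever blame the server itself.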

Something you have not considered is that at times Mir should not be looking at the contents of the buffers at all. Only drivers and the output should be able to see every buffer's contents. With Wayland it is possible to pass a buffer through without the compositor looking at it, and to have the LSM prevent the compositor from looking at it.

As a DRI3 usermode driver, it can be swapped on different hardware. This is a case of having one thing do one thing very well.

The roles are now basically defined. Memory allocation is a kernel thing; compositing and the shell are userspace things. OS X and Windows also define it this way. It is the only way that works.
+Peter Dolding Server-side buffer allocation isn't an issue: there are extremely valid cases for when you want to do it, particularly on embedded/mobile/media chipsets.  The allocation is still done by the kernel, but it's an issue of who arbitrates the resources.  Wayland supports server-side buffer allocation 100%: it's just a matter of when the underlying EGL implementation (the details of which Wayland is unaware) decides to do it.  And, as has been noted, I'm doing this right now on Wayland.  You definitely don't want to do it all the time - you want to do client-led allocation when you can, and server-led allocation when you have to.  But in the context of Mir and Wayland, it's (almost) a non-issue.
Dear Mark, dear Christopher, I have followed Linux and Ubuntu development intensively almost since kernel version 1. I think Canonical has made a major mistake in its communication with regard to Wayland and Mir. I think the only way to get its credibility back is to invite the core Wayland programmers to Canonical for a face-to-face meeting. Such a meeting would help to get the future of the Linux graphics stack right again. I am not saying that there must be agreement over future developments, but I am saying that there must be a mutual understanding of each other's projects, independently of whether there is cooperation. If such a meeting does not take place, it will be absolutely detrimental to Ubuntu AND Linux. Please do it, and also announce it in a press release.
Daniel Stone, I would like to see a case where it is really valid: where it is something a usermode kernel driver cannot do.

"The underlying EGL implementation decides to do it." You note that Wayland is unaware.

This leaves it in the hands of the drivers. This is the problem. Of course, some EGL implementations are not secure (those just happen to be closed source). They run their own userspace allocation management that is basically as bad as X11's, and yes, some leak like X11. So the result is that we have swapped one thing that would eat us out of RAM for another that will eat us out of RAM. You might end up deciding that supporting some of these stacks is not worth the trouble; this happens when you are making embedded devices and give up on an entire SoC because its EGL is leaking memory like mad.

Yes, for a broken EGL (single-threaded requests, or one that cannot track when an application terminates) you should have a wrapper over the top. But both of these fixes belong in driver space, as a wrapper driver. Don't fix driver issues in server space, other than blacklisting a driver and using a different one that happens to be a wrapper with the fix. Yes, doing this makes the chip slower. But fixing it in the display server is also going to be slower than getting the maker to fix the driver.

You have four spaces:
1) Kernel-mode drivers.
2) Usermode kernel drivers.
3) The display server / compositor / some EGL makers' horrid hack of linking to whoever opens the display.
4) Applications, client side.
Allocations from 3 and 4 should appear client side. If that is broken, space 2 is used to fix it. Fixing it in space 3 means you can no longer see issues inside space 3, so then what is leaking? In space 2, the allocation can still be directly associated with the application that created it, so when that application terminates, cleanup is performed, and you can find applications or compositors that are leaking. On small devices, a little slower with no memory leaks is better than a little faster while leaking madly and failing with out-of-memory errors.

Memory is a kernel job. Some closed-source EGL drivers don't get this yet. You cannot do a correct out-of-memory cleanup if the thing doing the cleanup does not know what belongs to whom. You cannot do correct application termination if you don't know what belongs to whom. You cannot do correct display if the display does not know that the application has gone away, because you risk running out of video RAM.

It does not matter whether it is a userspace EGL non-driver or a userspace display server doing buffer allocation server-style: this is broken behaviour. This is copying X11's broken behaviour. We need to draw a line in the sand: if you do this borked behaviour in your EGL drivers, we will wrap you, and you will run slower than your competitors who do it correctly.
+Peter Dolding The memory allocation is still done by the kernel.  If an application is fullscreen, you want the GPU to scan out from the application's buffers directly, rather than placing a copy in the middle.  Ditto for displaying video.  Doing this requires physically contiguous memory.  The clients can't be the ones to arbitrate allocation of the contiguous memory as they don't know when to request it.  So the server arbitrates it, and the kernel still does the allocation.  Your comment doesn't actually apply to this situation at all.

And to be clear - even though I've said this before and I think repeating it won't help - the allocation is done client-side (through the kernel) by default.  It's only when it's possible for the surface to be a scanout/overlay target, that the client is told it should instead request buffers through the server.  So in the non-fullscreen case, there is zero performance penalty, and in the fullscreen case it's a lot faster (and uses a lot less power) since you save yourself a fullscreen blit per frame.
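Daniel's two-path policy can be sketched as a toy decision model. The names here (`Surface`, `choose_allocator`, `CONTIG_POOL`) are invented for illustration; the real logic lives inside the EGL implementation and kernel driver, not in any public API. The sketch just captures the rule he describes: client-led allocation by default, server-led only when the surface can be a scanout target:

```python
# Toy sketch of the client-led vs server-led allocation policy described
# above. All names are hypothetical; this is a model, not real Wayland,
# Mir or EGL code.

CONTIG_POOL = 2   # pretend only 2 contiguous scanout buffers exist

class Surface:
    def __init__(self, fullscreen):
        self.fullscreen = fullscreen

def choose_allocator(surface, contig_in_use):
    """Which side arbitrates the buffer. Client-led by default; the
    server arbitrates only when the surface can be scanned out directly
    and the scarce contiguous pool has room. Either way, the kernel
    performs the actual allocation."""
    if surface.fullscreen and contig_in_use < CONTIG_POOL:
        return "server"
    # Ordinary window (or exhausted pool): client-side allocation,
    # with the compositor blitting the result as usual.
    return "client"

assert choose_allocator(Surface(fullscreen=False), contig_in_use=0) == "client"
assert choose_allocator(Surface(fullscreen=True), contig_in_use=0) == "server"
assert choose_allocator(Surface(fullscreen=True), contig_in_use=2) == "client"
print("ok")
```

The point of the model is that the non-fullscreen path pays no penalty at all, while the fullscreen path saves a per-frame fullscreen blit by letting the GPU scan out from the client's buffer directly.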
There is a problem here.
"Doing this requires physically contiguous memory."
The only thing assured to pull this off is a kernel-mode driver. Even a usermode driver cannot be sure: from userspace you see virtual memory behind the memory management unit. You don't see what is physically contiguous.

Daniel Stone, this is the problem. X11 running on the vesa driver had to access /dev/mem or /dev/kmem to be truly sure that what it was interfacing with was physically contiguous memory. And no, we do not want to be doing that; it brings nice big security holes.

It's possible for the GPU to pick up anything that is physically contiguous and copy it into the video card as an overlay to be drawn, exploiting GPU texture overlays.

This is the problem, Daniel Stone: Mir cannot provide physically contiguous memory dependably without putting the request to the kernel anyhow. Effectively, to have the GPU place a scanout/overlay target (many times more efficient than the CPU doing it), you have to call into the kernel to create the buffer in the first place.

"The clients can't be the ones to arbitrate allocation of the contiguous memory as they don't know when to request it."
And how does Mir know? You are presuming the application will not change its mind between requesting a buffer to fill and wanting it displayed. We cannot be certain of this. So you go to all the effort of creating a contiguous memory block, and the application throws it away and requests a new one; what then? Or worse, it waits ages to release it, so you run out of contiguous memory because the user switched applications, leaving a large amount of contiguous memory allocated.

How Mir knows is important. If the criterion is being the active window, why could this not be a cgroup flag around the application, or a flag on a buffer, or a /proc/pid flag, informing the kernel? That removes Mir from the middle. There is no valid reason at all for the display server to play man-in-the-middle; you are simply not telling the drivers or the kernel enough information. You want to save power and blits, which means you don't want to run any extra code. The client application does not need to know; the driver in kernel space is the one that needs to know. There is more than one way to tell the driver: /proc, cgroups and flags on buffers can all achieve it. Mir's solution is to ignore those methods and take the server solution instead, yet you still avoid blitting either way.

Yes I will give you server solution is faster to implement.  It does not require improving the driver standards.  No matter how you look at it a wrapper driver implementing the correct functionality that encourages hardware makers to ship with that functionality will be the best long term.

Also just to be horible.  Some video cards fragmented to hell in memory is perfect fine.  Different implementation give me a map of data I am ment to copy.  These are ones that share ram with the cpu and also share the mmu and are able todo virtual addressing to make non contiguous physical memory appear contiguous to them with no performance difference or power usage difference between fragmented and non fragnement.  On this type of hardware will be wasting cpu cycles particularly when you run out of contigous memory and went to the effect of creating contigous memory and the gpu really did not give a stuff if it was contigous or not.  So all those cycles creating contigous was a waste of time.  Larger than blitting in fact.  Out of contigous memory problem will also still happen.

The advantage is hardware particular and its a disadvantage on the other hardware.  The hardware its a disadvantage on is the hardware that can have higher ram allocation levels without issues.  This is also why this stuff owns in drivers.  Not the display server or compistor because what you are refering to is hardware dependant.  Most people you would say it to would not know it.
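Peter's proposal - letting the kernel allocator consult a per-process hint instead of routing allocation through the display server - could be sketched as a toy model like this. This is pure illustration in Python: the `/proc`-style hint, the class, and the method names are all hypothetical; no real kernel exposes this interface.

```python
# Toy model of the proposal above: the allocator consults a per-process
# hint (standing in for a hypothetical /proc/<pid>/ flag or cgroup
# setting) instead of asking the display server. No real kernel API is
# used; this only illustrates the control flow being argued for.

class ToyKernelAllocator:
    def __init__(self):
        self.hints = {}  # pid -> "contiguous" | "fragmented"

    def set_hint(self, pid, hint):
        # e.g. the shell marks the focused app "contiguous"
        self.hints[pid] = hint

    def alloc_buffer(self, pid, size):
        # The app calls straight into the "kernel"; no server round trip.
        kind = self.hints.get(pid, "fragmented")
        return {"pid": pid, "size": size, "contiguous": kind == "contiguous"}

kernel = ToyKernelAllocator()
kernel.set_hint(1001, "contiguous")       # focused app, wants scanout
front = kernel.alloc_buffer(1001, 4096)   # contiguous, no server involved
back = kernel.alloc_buffer(2002, 4096)    # background app, fragmented ok
```

Whether such a hint could actually be expressed through /proc, cgroups, or per-buffer flags is exactly the point in dispute; the sketch only shows that in this design the display server sits outside the allocation path.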
+Christopher Halse Rogers Logic suggests that reinventing the wheel takes more effort than using the community standard. And as +Daniel Stone just alluded to, the only valid technological issue is of course a non-issue, like everything else.

What follows are not technological reasons for forking community-accepted standards:

A. Not liking API variable names.
B. Not liking C.
C. Not understanding Wayland.
D. Rejected commits.

So let's get real and be honest. The pros of Mir for Canonical:

A. Non-peer-reviewed commits.
B. Control.
C. De facto vendor lock-in.
D. Making up for the fact that your engineers have to build it to understand it.

Just know that I was actually looking forward to developing on Ubuntu. Now I have every incentive to support anything else.

The thing is that I've been highly sympathetic towards canonical about Ubuntu controversies.

Unity: You want to stand apart and make a recognizable trademark? Go for it.

Amazon ads: Aye, daddy gotta get paid right?

Then you guys do something like this??? If you guys keep pissing developers off your only competition in regards to an app market will be PlayStation Mobile. But hey, at least PlayStation Mobile runs on Vita. 
+Peter Dolding Dude, I don't think you're reading what I've been writing.  The allocation is done by a kernel driver.  Again: the allocation is done by a kernel driver.  The kernel driver ensures that the allocation is physically contiguous, and gives userspace a handle back to that memory.  Speed of implementation has nothing to do with it: without giving clients direct access to manipulate the display (a security risk), you still have to have the compositor involved.  But we eliminate a copy.
Also, platforms which don't require physically-contiguous scanout buffers, won't implement this in their EGL stack for the obvious reason that they don't need to.  Platforms which do require it - of which there are many - will.  And it's an implementation detail.
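Daniel's point - the kernel driver performs the contiguous allocation, the compositor only brokers the handle back to the client, and the hardware can then scan out of the client's buffer directly instead of blitting - can be illustrated with a toy model. All class and method names here are invented; real implementations do this inside the EGL stack and kernel drivers.

```python
# Toy model of server-brokered allocation as described above: the
# "kernel driver" does the contiguous allocation, the compositor merely
# passes the handle along, and a contiguous client buffer can be
# flipped straight to scanout with no copy. Names are invented.

class ToyKernelDriver:
    def __init__(self):
        self.next_handle = 1

    def create_scanout_buffer(self, size):
        # The kernel guarantees physical contiguity and returns a handle.
        handle = self.next_handle
        self.next_handle += 1
        return {"handle": handle, "size": size, "contiguous": True}

class ToyCompositor:
    def __init__(self, driver):
        self.driver = driver
        self.copies = 0

    def request_buffer(self, size):
        # The compositor is involved, but only to hand back a handle;
        # the client never gets direct control of the display.
        return self.driver.create_scanout_buffer(size)

    def present(self, buf):
        # Contiguous client buffer: flip straight to scanout, no blit.
        if not buf["contiguous"]:
            self.copies += 1  # would need a blit into a scanout buffer

comp = ToyCompositor(ToyKernelDriver())
buf = comp.request_buffer(1920 * 1080 * 4)
comp.present(buf)
# comp.copies stays 0: the copy is eliminated, as claimed.
```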
Daniel Stone you have also forgotten that for server-side buffer management you have to context switch - from the process you are running, past other processes, until the server comes to the top again.  This is the absolute last thing you want to be doing.  It is vitally important to find a way for an application to request a buffer, contiguous or not, without having to go to the server, because going into the kernel happens on the application's own time slice.  All applications making requests of a server, on Mir or X11, have to compete for limited time slices.  This is why X11 at times stalls: it runs out of time slices to process the requests.  Daniel Stone, you are duplicating X11.

Yes, while you are testing with only a small number of applications asking the server to do stuff, it appears fine.  It's not like Mir is going to be able to fork a thread per client process the way database servers do.

So it is very important to work out how you can inform the driver in the kernel that you want contiguous memory from this program, or that you don't care, or that the program is running in the background and fragmented should be preferred at this stage.  These could be done as flags of some form.  It must not require a server for the allocation to be able to go ahead.
/proc/{pid}/, cgroups around the application, or flags on buffers might in some cases work.

These all need to be explored.  Only if you cannot make any of those do it should you return to the server solution.  The server solution is going to suck, because it will freeze when you run out of time.

Mir's design contains a central point of failure.  Mir has enough work sending the instructions needed for compositing to happen without managing buffers as well.

It is very, very important not to overload the central point, and to offload as much as possible.  Creating contiguous buffers is a random-time event - it could be slow, it could take many seconds.  A single stalled application is annoying; a completely stalled interface means pissed-off users.
+Roger Roach reinventing the wheel only takes more effort if the wheel you want is already substantially invented. We wanted a different wheel, and estimated that turning Wayland into that different wheel would take more effort than inventing a new one.
+Peter Dolding compositing implicitly requires handling buffers - you need to take the contents of the client's buffer, composite it with all the other clients' buffers, and display the result. Every client frame requires a context switch to the compositor anyway. It won't add any more context switches to have the compositor hand the client back a buffer.

I don't think that you know what you're talking about.
+Ioannis Vranos I don't believe Compiz was an abandoned project when Unity started up, really. IIRC, the C++ port had just been completed when Unity began. And considering that GNOME Shell still lacks decent replacements for some of Compiz's usability features til today (e.g. ezoom's super+scroll for zooming), I believe it was a good move.

That Unity completely and utterly blew Compiz's lean footprint out of the water (RSS=~20MB → RSS=~90MB) is a completely different matter which I believe reflects the way the initial Unity developers handled things (libindicate and accompanying libraries also used to be a mess of memory leaks that manifested with an uptime > 4 days).
+Christopher Halse Rogers
-Every client frame requires a context switch to the compositor anyway. It won't add any more context switches to have the compositor hand the client back a buffer.-

Christopher, did you ever flow-chart the functions of Mir compared to Wayland?  Yes, it does add more context switches when the compositor is under load - unlimited more context switches, with the application unable to do anything until they complete.

1) I need a buffer.
2) I have to request a buffer.
3) Wait for Mir.
4) Mir runs and hopefully processes the request and sends me a buffer.
5) Wait until I run again.
6) Check if I have a buffer; if not, wait and return to 3.
7) Fill the buffer and send it to the Mir queue.
8) Wait for Mir.
9) Mir runs.
10) Mir displays content.
11) Done; return to 6 - but if I need a different-size buffer, it's back to Mir.

Wayland/Weston:
1) I need a buffer.
2) I create a buffer.
3) I fill the buffer.
4) I send it to the Weston queue.
5) Wait for Weston to run.
6) Weston displays content.
7) Done; the application can move on to something else, and content is displayed to the user.

Only one lot of waiting, compared to three if I am lucky with Mir - infinity if I am unlucky.  That short path applies to all Wayland compositors.  Both are the same length only if I can reuse the buffer.

The big problem in Mir and X11 is that you can get trapped between 6 and 3.  Mir looks almost exactly like the X11 processing flow chart.  That is something you do not want to look like.

Christopher, the best of both worlds is to fix Wayland so the application's allocator has a flag to work out whether an allocation should be contiguous or not, and to throw away server-side buffer allocation.  Any attempt at server-side buffer allocation risks a stall.

If under Wayland you have to blit a few times before the application receives the message to do contiguous allocations, or a few allocations come out contiguous after it has been told no more, at least the system has not stopped.

Even under Mir you are not going to get contiguous allocation right all the time.  So it is not really worth a possible never-ending loop.
+Peter Dolding you've got the Mir request loop wrong; it goes:
1) I draw to my buffer
2) I tell Mir to display this buffer
3) I do something while waiting for Mir; either blocking or some other work
4) Mir responds with a new buffer
5) I draw to my buffer

With Weston, it's:
1) I draw to my buffer
2) I tell Weston to display this buffer
3) I do something while waiting for Weston; either allocate a new buffer and draw to it, or block, or some other work
4) Weston calls back to say it's done
5) I draw to my buffer.

You'll notice that there are the same number of context switches in either case. It's possible to do more work in the Wayland case; between where the client has submitted the buffer and when it gets a reply.
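Christopher's claim - that per frame both schemes cost the same number of client↔compositor switches - can be checked with a toy count. This is a simplification I am adding: it models only the handoffs listed in the two loops above, not real scheduler behaviour, and the names are mine.

```python
# Toy count of client<->compositor context switches per frame for the
# two loops described above. Models only the protocol handoffs, not
# real scheduling.

def simulate(protocol, frames):
    assert protocol in ("mir", "weston")
    switches = 0
    for _ in range(frames):
        # The client draws (no switch), then submits the finished buffer.
        switches += 1  # client -> compositor
        # Mir: the reply carries a fresh buffer to draw into.
        # Weston: the reply is a release/frame-done event.
        # Either way it is the same single return switch.
        switches += 1  # compositor -> client
    return switches

# Same per-frame cost in both cases, as Christopher argues.
assert simulate("mir", 60) == simulate("weston", 60) == 120
```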
+Peter Dolding No, it doesn't require an extra context switch per frame.  That only holds true if you exactly replicate DRI2's model of doing one roundtrip buffer request per frame.  In the model I have implemented, we hand the client two stable buffers.  So there's an upfront cost of one roundtrip, and an ongoing cost of absolutely nothing whatsoever.

Also - and this is in caps since you don't seem to be reading at all - I AM IMPLEMENTING THIS BASED ON WAYLAND.  I'm a Wayland developer, and have nothing to do with Mir.  I also worked on X for nearly 10 years, so I would know if I was reimplementing X.  I gave a talk at LCA in January about why no-one should ever reimplement X: you should look it up.

Congratulations though, for getting Mir and Wayland developers to agree on something.
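The difference Daniel describes - one upfront roundtrip for two stable buffers, versus a DRI2-style buffer request every frame - shows up clearly in a toy roundtrip count. This is my own simplification of the argument, not the real protocol; the function names are invented.

```python
# Toy roundtrip count: DRI2-style (request a buffer from the server
# every frame) versus the model described above (two stable buffers
# handed to the client up front, then swapped locally).

def dri2_style(frames):
    roundtrips = 0
    for _ in range(frames):
        roundtrips += 1          # ask the server for this frame's buffer
    return roundtrips

def two_stable_buffers(frames):
    roundtrips = 1               # one upfront request for both buffers
    current = 0
    for _ in range(frames):
        current = 1 - current    # swap between the two buffers locally
    return roundtrips

assert dri2_style(600) == 600        # cost grows with frame count
assert two_stable_buffers(600) == 1  # upfront cost only
```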
+Daniel Stone but of course you don't know what you are talking about right? This debate is pointless. If canonical wants to reinvent the wheel into a square wheel. What can we do about it? Let's just hope that plenty of people have common sense and don't buy a ford that won't be going anywhere anytime soon. Game, set, match. I'm out. 
The reality is that it doesn't matter what anyone outside of Ubuntu says, they are going with Mir. Wayland is one more outside community that they don't need to depend on.  Eventually everything in their OS will be built by them so all dependencies exist in their world and they have control over them.

They will be more than happy to tell you to choose any other Linux distro if you want something different than what they offer.  What the outside community doesn't realize is that Ubuntu has a grand plan, that is most likely 3-4 years out, maybe even farther. In that plan is all the issues with Linux and how to resolve them so that you can create the next generation OS that rivals both Windows and Mac.

I complained along the way about changes that were occurring, but as time goes on, I am starting to see why these changes are happening. They are all part of the grand plan and the pieces are starting to fall into place to provide an amazing OS. Unity is a great example of people trashing what they don't understand.

With each release of Ubuntu, Unity moves one step closer to the complete vision of what Ubuntu had in mind. For example when Unity first came out it only supported one monitor, now it supports multiple. When I first tried it I was so locked into how I have done things, I switched to Mint Linux. Now after a year I am starting to see the logic of Unity with wide screen monitors and how the side bar works great and gives more space to the screen.

So I think the problem with Mir is the lack of understanding the long range vision. Ubuntu is focused on working across multiple devices, so I imagine the needs change from X or Wayland. You have DRM, you have to live in corporate environments, you want to control development speed, you want seamless integration that includes a stripped down code base. I don't know, but these are just some of the ideas I thought up. I can only imagine what a room of 25 people can come up with.

Most users just want it to work. They just want to do what they want to do and the computer/tablet/cell phone shouldn't get in the way and I think Ubuntu is focused on exactly that.
As an impartial observer outside of the politics and baggage, what I read here is that Canonical decided to create MIR to avoid all of the politics and baggage, and with good reason. It's hard to collaborate with people who might refuse to implement features that are critical to your project. This leads to fragmenting and duplication of effort. At that point, you might as well risk bruising a few egos and roll your own.
1) You don't want X11.  Good.
2) You don't want Weston.  Ok.
3) The input stack wasn't written, but it is now.
4) Everyone except you already knows that Wayland supports this.
5) You want minimal complexity so instead of re-writing Weston you are re-writing every graphics toolkit.

This is why no one is taking your technical reasons seriously.

The main reason that you have is social, in that you don't get along with upstream.  Those guys are mostly bastards and slow to apply patches.  I can buy that.  If you had said that, then it would have made more sense.
Let's take the Mir-Wayland debate. Leaving all the technical reasons aside, Shuttleworth is alienating a very large section of the community to go after his vision, which is becoming a media giant. He's banking on gaining the support of the 'masses', and in his eyes the neckbeards don't hold much value. And there's Canonical's biggest problem: Everyone is jumping into the fray, even the people that have no clue what's going on. Reminds me of the classic Icarus tale. Shuttleworth is trying too hard to become a new Steve Jobs. Not saying he couldn't pull it off, but it must be getting awfully hot over there...
+Dan Carpenter you've clearly read enough of my post to see that I've got five points in it; one wonders how you didn't get the rest.

We knew that you can write a server-allocated buffer model in Wayland. That you can, however, is not actually the most interesting fact; you can write a webserver in postscript if you really want to.

What is interesting is how much work it is. Which, you may notice, is the metric my post is applying. If you go a server-side buffer allocation model, it turns out that you get to make significant changes to the EGL platform and XWayland.

Also, deviating in any client-visible way from Weston means you need to rewrite the toolkit; the choice isn't "rewrite Weston or rewrite the toolkits", it's "rewrite Weston and rewrite the toolkits, or just rewrite the toolkits".
+Daniel Stone I don't know.  But Dave and Mark are talking about how they tell each other to "get fucked" when Canonical submits patches.  Also the Christopher's post mentions that they don't like upstream review.

The technical reasons for not using Wayland are nonsensical "we can't test code if it already exists."  eye roll  It's really that Canonical doesn't get along with upstream for whatever reason.
+Dan Carpenter Dave said, with fairly colourful wording, that toolkit upstreams would potentially reject Mir patches because it would be too much work for them.  Mark decided to deliberately misinterpret that as part of his mission to convince everyone that Canonical secretly creating their own private window system which requires copyright assignment, is everyone ganging up on Canonical.  And, with the greatest possible respect to everyone involved: Dave isn't a toolkit developer and so can't necessarily speak for them; Mark isn't involved with anything technical so can't (and, going on current form, shouldn't) speak for anyone; and hey, not even I'm involved with either toolkits or Mir.  Hey ho.

+Christopher Halse Rogers Just talking about XWayland here, I don't think server-side buffer allocation would really involve many changes at all? Certainly far fewer than XMir - unless your input model in particular looks very similar to XWayland's, you're going to be rewriting a hell of a lot more code than the few lines which deal with buffer allocation.
+Christopher Halse Rogers +Aaron Seigo is right when he says stop digging yourself deeper and hurting your credibility by going on and on about buffer handling.  It's just not the issue you are making it out to be.
+Daniel Stone for XWayland you need to deal with the rendering-model mismatch - at least the way we do things. You don't have the previous rendering in the buffer you're submitting, so it's a bit awkward
+Dan Carpenter yes, buffer handling was a surmountable problem. As was window management. As was (at the time) the lack of input handling. And, given all those changes, the lack of upside of using Wayland.

If it were just buffer handling, then maybe we would have chosen differently; I don't know, I wasn't making that choice.
+Stéphane Raimbault he left one thing out. Mandrakesoft were the first with free-shipping CDs. And all that is said of Ubuntu about bringing new users to Linux can be said of Mandrake from 1998 to 2004, and of OpenSuse before that.
+Christopher Halse Rogers Where is the discussion where you brought these concerns up with the Wayland team?

Nobody particularly cares that you've done your own thing, but we do care that you decided you couldn't work with the community yet again, and we wonder what that means for the future.

Is every critical change to the Linux ecosystem going to be subverted nine months later when Canonical announces its disagreements?
+Christopher Halse Rogers
Funny, that - neither Moblin nor Gnome Shell seemed to have run into this show-stopping, abysmal performance issue; we did a fair amount of benchmarking of different compositors when working on Moblin and MeeGo, and Mutter at that time outperformed them all, Compiz included. :)
The key quote is this "we don't have an additional layer of upstream review". And the bottom line is that RedHat and Ubuntu just can't play nice together. So the Ubuntu guys would rather write their own thing than have to fight with RedHat guys to get their changes committed. Sigh.
Can we please stop the nonsense and focus on the important thing: which one is better for the user and the developer, Mir or Wayland? Regardless of which one arrived earlier, which one was developed by who, which one Qt/Gtk+/etc already support, etc

My only complaint about Mir is Canonical developed it behind the scenes instead of with the community. Other than that, I want the best performance and stability on every device (desktop, mobile, wearable, microwave!).
"we can use our own infrastructure, we don't have an additional layer of upstream review". And right there Ubuntu missed an important opportunity.
+Mark Shuttleworth BTW, what about Mir for non-Linux? NetBSD and other BSDs are increasingly complaining Red Hat is driving the open source Unices to "the Linux way or the high-way"
Christopher Halse Rogers
"1) I draw to my buffer
2) I tell Mir to display this buffer
3) I do something while waiting for Mir; either blocking or some other work
4) Mir responds with a new buffer
5) I draw to my buffer"

This presumes my application can move on and do something else.  Blocking prevents the application from prepping the next update ahead of Mir being ready and sending it to Mir.  The ability to prep buffers without waiting for the display server is key.  Say my application has an important notice to display as soon as possible, and the buffer I previously sent up the queue is wrong, so it should not be displayed for long at all.  Under your scheme I am locked: I cannot tell Mir that the buffer I have already sent is dead.  If you say I can, you are not allowing for the race condition where I am running on one core and Mir happens to be running on another when I kill it.  Or let's hope the locks are right.

How long before Mir responds with a new buffer?  You cannot place a fixed time on this, Christopher Halse Rogers.

The other thing: under Wayland you don't have to wait for Wayland to tell you it is done displaying a buffer before you can tell it to display the next one.

Christopher, your Weston flow is off.
"With Weston, it's:
1) I draw to my buffer
2) I tell Weston to display this buffer
3) I do something while waiting for Weston; either allocate a new buffer and draw to it, or block, or some other work
* 3.1.1 If the sent buffer is out of date, do everything under 3.1.x as many times as required.
* 3.1.2 Allocate a new buffer, or reuse a buffer returned from the compositor.
* 3.1.3 Draw to this buffer.
* 3.1.4 Tell Weston to display this buffer.
4) Weston calls back to say it's done
5) I draw to my buffer."

So effectively the first buffer sent to Weston might never see the light of day.  Christopher Halse Rogers, you are working on the presumption that everything sent to the compositor has to see the light of day.  You are forgetting about dropped frames.  Applications need the means to have the compositor drop data if the compositor lagged too long for the data to still be valid to display.

Mir is lacking the key ability to drop frames effectively.

Daniel Stone I have watched your LCA video.
"In the model I have implemented, we hand the client two stable buffers.  So there's an upfront cost of one roundtrip, and an ongoing cost of absolutely nothing whatsoever."
You are right to claim this is better than the Mir solution.  But there are some very big "buts" against what you have done as well.

What about systems with, say, 8 or more cores?  How can I be sure that my application can update the second buffer while the compositor is not touching that buffer at the same time?  Yes, the fun of locking.  It is simpler if I can send a buffer to the compositor, then send another buffer to replace it, and so on as things update - not wait for the compositor to return or swap something.  That is a fairly lockless solution, where your two-buffer limit forces me to lock.

Next: "costs absolutely nothing" is not true.  The application may only ever need to send one frame - what have you done, Daniel Stone?  You have doubled the allocation.  The compositor doesn't know exactly what the application will or will not do.  One cause of memory bloat inside the X11 server is buffer allocation in the server, since it does not know when a buffer will or will not be required.

The means to tag a buffer so that any new buffer created off it must be created a particular way would allow the application to do the allocations and the compositor to tell applications how it wants them.  Advantage: the compositor does not have to guess whether the application needs a single buffer, double buffer, triple buffer, quad buffer or more, if the application is doing the allocation.

Now, if a buffer is produced wrongly, the compositor can already cope - this is proven.  It is a one-off blit and reallocation to correct an incorrectly sent buffer.  The reallocation can be transparent to the application if the kernel does it using the MMU.  Again, this is the non-blocking method.

Notice that a lot of games use triple buffering, and you only supported double, Daniel Stone.  You are trying to mind-read applications, and mind-reading what applications will do is not possible.  Triple: displayed, sent to be displayed, and being worked on.

Also, if you go triple or quad or any other larger default, you are wasting memory.  And when an application comes to the last frame it needs to display, it can drop the back buffers if it is in full control, Daniel Stone - freeing up memory for other things.

Daniel Stone, same question here: how are you going to allow the application to drop frames?  Remember, you cannot be sure that the compositor and application are not running in separate threads on different CPUs at the same time.

This is why I say to both of you: spend some time flow-charting what you are doing.  Be very careful to consider how what you are doing is going to work.

Daniel Stone, yours does not work well; Mir does not work well.  Try mine, where you inform the application's buffer allocation system what you want.  Effective applications will recycle buffers.

Even so, sloppy applications will still do stupid things, like using your double buffer to send one frame, then creating a new double buffer and sending one frame.  This gets highly expensive if you are doing a contiguous allocation of physical memory every time.

Christopher Halse Rogers, the same applies to Mir: how do you know the buffer you have spent time allocating in Mir, which you send back to the application, will ever be used?  If the application never uses it and never frees it, you have just leaked memory.  You can bet developers will forget about the compositor allocating stuff.

It is very hard to statically analyse for memory allocation errors when the error happens due to the interaction of two different code bases.

Let's make it simpler for coders: do one frame, and require an extra command to do more.

So far, Daniel Stone and Christopher Halse Rogers, your solutions are broken.  You are not providing what applications like games need.  Even for non-games you are not providing the best solutions.

Even yours, Daniel Stone: how do I drop a buffer that has already been sent to the compositor from being displayed, and have a different buffer displayed instead, if the compositor has not displayed it yet and I don't know whether the compositor has just started?

Multi-threaded programming - it's a brain-bender.  On modern hardware, the compositor/display server has to cope with this.

Both of you are not asking yourselves the right questions.  You are both looking at "OK, I want to make this power-efficient".  You are not looking at "hang on, the application needs to do stuff".  The application will need to dispose of some of the data it sends to the compositor, because it is out of date before the compositor can render it.

Dropped frames are normal.  If your solution cannot fail to display a few buffers because they are out-of-date copies superseded by a newer version sent to the compositor, your solution is broken.  There is no point telling the GPU to pick up an out-of-date buffer.  Basically, tell the application "yes, the compositor has seen this buffer, you can free or reuse it now".

We want a quality experience.  Quality is not displaying out-of-date information when current information could have been sent.

Power is only one half of the problem.  Not blocking the application is another part.
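The "drop stale frames" behaviour Peter keeps demanding is essentially a mailbox queue: the client may overwrite a pending frame that hasn't been picked up yet, and only the newest one is ever displayed. A toy sketch of that idea, added as my own illustration - real compositors implement this with buffer release events and atomic swaps, not a Python class:

```python
# Toy "mailbox" handoff: the client can replace a pending frame that
# the compositor hasn't consumed yet, so stale frames are dropped
# instead of blocking the client. Illustration only.

class Mailbox:
    def __init__(self):
        self.pending = None
        self.dropped = 0

    def submit(self, frame):
        if self.pending is not None:
            self.dropped += 1    # stale frame never shown; client not blocked
        self.pending = frame

    def display(self):
        frame, self.pending = self.pending, None
        return frame             # always the newest submitted frame

mb = Mailbox()
mb.submit("frame-1")
mb.submit("frame-2")             # compositor was slow; frame-1 is stale
shown = mb.display()

assert shown == "frame-2" and mb.dropped == 1
```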
Dear Mark and Christopher:

Please don't think the entire linux-using world is against your goals, methods, Mir, or Ubuntu.  We're not.  Some of us, especially people like me who've been on the Linux train for quite some time, see what you're doing and have only this to say:

It's about time.

I jumped to Ubuntu from Mepis when the Warty Warthog came out.  Finally, a Gnome-desktop using Debian with no "will it work on this hardware?" guesswork and an installer that wasn't hell.  It was a big thing at the time, and has since become a standard for distros everywhere.  I've been with Ubuntu ever since -- helping with bugs near the end of each release, troubleshooting, I even helped work on the smaller icons in the Humanity set once upon a time.  

And I can say, having watched the history of everything unfold to this point: I really, really get why you're going this direction.  And it's a damned shame other people don't.

I've seen Ubuntu try to contribute code to Debian and have a lot of it rejected.  Working code, that solved real problems today, in practice, instead of tomorrow in theory... but not in the way Upstream would want, so no dice.  Same thing with Gnome.  Same thing with Pidgin, and countless others.  And it galls me to no end to see those patches get rejected from my Ubuntu install (where that particular bug is actually fixed, or at least worked around), and see people have the nerve to call Canonical a poor collaborator.  

On the other hand, Unity has gone from "barely workable mess with no features or configuration options" to "fully fledged, working desktop" in, what, a year or two?  With no bickering, or code rejection, or mailing list arguing?  

Heh.  Small wonder you want to house your own projects.  I don't see how anyone can blame you.  

I've seen Gnome flounder with Gnome 3, not knowing what it was going to be, and taking way too long to become it, until you guys said "fuck it" and created Unity.  And not a moment too soon.  Ubuntu needed a next-gen desktop, and it needed it yesterday, and waiting on external projects was no longer helpful.  In fact, it was hurting the distro and brand.

And despite all the guff Unity took, isn't it interesting that Gnome-Shell completely redesigned itself to practically *be* Unity shortly after the first screenshots came out.

The thing you guys get, that others seem to completely fail to understand, is that it isn't all about the one holy elegant code design architecture.  There are other factors in the software world that are just as important, that have nothing to do with code... like visual design, and especially TIMELINESS.  Wayland threatens to pull an Enlightenment 17: it'll be a near-perfect, efficient, beautiful, well-thought-out and elegant framework released about ten years too late.   Waiting wasn't a good answer when Gnome 3 was figuring out what it should be, and it isn't a good answer now (after five years of dev and no 1.0 out yet).

I don't envy you.  It's a lot of work, and the Linux community seemingly just loves to shit on anyone who wants to change the status quo.  But if anyone can fix the X-mess (which has been ongoing throughout all my years of using linux) in a reasonable time-frame and with backward compatible support, I think it's you guys, and I can't wait.  

Thank you for all you do.  At least one of us appreciates it.
Having read most of this thread, to me the entirety boils down to the fact that it would have been really courteous for the Ubuntu devs to talk to the Wayland devs and discuss their problems and differences prior to starting a new project with the same end-user goal.

The Linux [distribution] community is way too small to effectively maintain multiple implementations of so many vague concepts.
+Aaron Seigo +Dave Airlie I just have 2 questions.

1. Didn't you see that coming (2-3 years ago)? Not about Mir, but about Canonical's step away from communities. Why are you acting surprised?

2. How did Ubuntu, with ~0.1% contribution (at least in the beginning), manage to capture ~99.9% of the Linux desktop mindshare? Communities also have responsibilities. 

+Mark Shuttleworth You certainly pick nice names, Unity and Ubuntu. Sometimes I am feeling you did it just to make fun..

Ooh, and there is no reason for using bad language. Absolutely no reason..
Are patches accepted into Wayland / Weston without tests?  If so, I think that code acceptance policy is a valid concern for Canonical to have and one worth changing if at all possible.
+Mark Shuttleworth is using GNU/Linux and the FOSS community to bootstrap his own Ubuntu ecosystem to compete with iOS, Android, and Tizen. That environment ultimately may or may not be based on GNU/Linux.  FOSS licenses give him the freedom to do this. Developers and users equally have the freedom to decide if they want to participate or not.
+Peter Dolding You're wrong about how Wayland works (look at buffer release events - we already have that locking and it's 100% irrelevant as to who initiates the allocation).  You're also wrong about how my solution works (it doesn't enforce double buffering, and the client is still in 100% control of the number of buffers it uses).  It doesn't change any client-visible behaviour.  Any.
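The buffer release events Daniel points to work roughly like this: a buffer the client has committed stays busy until the compositor releases it, and the client only redraws into released buffers - which is the locking Peter was worried about. A toy model of that ownership handoff, simplified by me (in the real Wayland protocol this is `wl_surface.attach`/`wl_surface.commit` and the `wl_buffer.release` event; the class names here are invented):

```python
# Toy model of Wayland-style buffer release: a submitted buffer is
# "busy" until the compositor releases it, and the client only draws
# into buffers it currently owns. Simplified illustration.

class Buffer:
    def __init__(self, name):
        self.name = name
        self.busy = False

class Client:
    def __init__(self, count=2):
        self.buffers = [Buffer("buf%d" % i) for i in range(count)]

    def acquire(self):
        # Only ever draw into a buffer the compositor isn't reading.
        for b in self.buffers:
            if not b.busy:
                return b
        return None              # all busy: wait for a release event

    def commit(self, buf):
        buf.busy = True          # compositor may now read it

    def on_release(self, buf):
        buf.busy = False         # safe to reuse

c = Client()
a = c.acquire(); c.commit(a)     # frame 1 in flight
b = c.acquire(); c.commit(b)     # frame 2 in flight
assert c.acquire() is None       # nothing free until a release arrives
c.on_release(a)
assert c.acquire() is a          # released buffer is reusable
```

Note that who performed the original allocation is irrelevant to this handoff, which is Daniel's point.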
+GonzO Rodrigue Wayland 1.0 was released 5 months ago.  And a large part of those five years of development were spent working on the lower levels of the graphics stack (e.g. Mesa, DRM/KMS) in order to make both Wayland and Mir possible.  But hey, why let facts get in the way of a good flame ...
This is basically why I am abandoning Ubuntu. I am sick and tired of people who don't understand X condemning X. Why not improve X? X has decades of incredible innovation.  BTW: "from the sensible (we want to write our own display server so that we can control it)" is not sensible unless "control" is your ultimate objective, and control almost always comes at the expense of quality or usability. 
My doubt is... commitment. Let's see how well Canonical commits to this in the long term. Android came from nothing and is the most popular Linux desktop today, so in theory it is possible for Mir to win the "race".

Personally, I have been advocating that the whole Linux desktop migrates to Android. Mir reusing something from Android is great, but X is only one of the many problems in Linux desktop. IMHO the biggest problem is the lack of a single and stable API.
The reason why I dislike this approach is that it does not improve the environment. If there are bugs in X (and there are bugs in every large project), you fix the bugs you need to fix.  So with Mir, after the investment of time and money into open source, we will have two display systems with many bugs instead of one display system with fewer bugs. Everyone who has said that X can't do something has been proved wrong.
+Daniel Stone *What* "flame"?  Nothing I said was a "flame".  It doesn't matter why Wayland took five years to make, just that it did, and that no major distributions are using it yet, and that it doesn't (to my knowledge, maybe I'm wrong on this too) support proprietary drivers (which are necessary because Nouveau still hard-locks any machines I try to use it on), and...

...and, well, it's been a long time.  Possibly too long.  That's my point.  
+Elvis Pfützenreuter Google has many more devs, much more know-how, much more weight, and much more money than Canonical ever had, and Google still needed years to get where they are today! 
+Pau Garcia i Quiles +Łukasz Gruner - we've got no plans to make Mir run on *BSD, although we're currently using parts of the graphics stack that - as far as I'm aware - aren't currently supported on the *BSDs.

Once we nail down a driver model with NVIDIA, AMD, Intel, et al this might be clearer.
+GonzO Rodrigue I'm using Wayland on a proprietary driver right now.  It's that proprietary vendors (like NVIDIA and AMD) need to add support for Wayland.  And for all Mark's posturing, neither of them have publicly said anything about Mir, let alone committed to porting their drivers.
Daniel Stone I presumed that yours was closer to Mir's.   Let's just say that when you need multiple buffers on Mir, because what you have already sent to Mir to display is now out of date and has not been returned yet, life gets problematic.

Even so, it does matter who allocates.  Running Valgrind or similar is always cleaner if the allocations an application triggers directly are all inside the application.  Shared memory has always been a very good way to lose track of what you are doing if you say that everyone sharing the memory can allocate it.

As soon as you say server-side allocation, you are going to make my debugging harder.  I do not particularly like any idea that makes debugging harder.

There is another downside to server-side allocation: applications running headless.  There is the possibility that the display server or compositor completely dies and has to be restored by a heartbeat event.  What are you going to do to all the applications the user had running: kill them, or jam them while the compositor/display server is down?

This is why I say you don't know how long I am going to be locked for.  The ideal is that when the display server/compositor is restored, applications can reconnect, having remained functional the whole time the display server was gone.

As soon as you do server-side allocation, a program can get to a point where it is waiting for the compositor to respond before it can continue, even though it has other events to process.  The applications are no longer running 100 percent isolated from the compositor.  So you bring back the whole-system crash.

This is the problem: you are saving a little bit of power, but you are giving up stability.

Daniel Stone, you are trying to say you have not altered the operations, but you have, and not in a good way.  Is your saving worth giving up the means to restart the compositor while the applications are running?

The idea of server-side allocation is always a problem.  You are creating a single point of failure.

Daniel Stone, test yours when you intentionally freeze the display server.  There is going to be a point where the application is jammed, because it does not have a buffer to write its output to for the events it is receiving from the network or other sources.

In theory, with client-side allocation you can start all the desktop applications and the compositor at exactly the same time.  When you have a 16-core system, that is 1 compositor and 15 applications running side by side, not having to wait for each other.

This is the new problem of multi-threading.  If I have to wait for the compositor to return me something, I can have 15 cores sitting idle out of 16 when there is a lot to do.

Power efficiency and performance is a balancing game.

Daniel Stone, it is very simple to think in single-core event loops.  Multi-core event loops are a lot more tricky.  What works on a single-core processor will stall a multi-core one.

Try finding a new Android phone without at least a dual core.  Fairly soon it will be hard to find one without a quad core.

The whole idea of server-side allocation becomes more and more inefficient the more cores you add.

On multi-core, where possible, a message you have sent that has not arrived yet must be non-blocking to the application.

Telling the application that all buffers must be created a particular way: if that message has not arrived at the application yet, it is not blocking.

The application placing a request to the server for a buffer it needs: this is blocking.  The application can only move forwards once it has the buffer.

"Client-visible behaviour": lag and non-responsiveness are still client-visible behaviour.  Worse, they become end-user-visible behaviour.

Yes, the "why is this application not performing well when I have 16 cores or more?" complaint.

We are going into a multi-core processor world.  The odds of seeing single core again are getting very low.  Most of your time working on X11, Daniel Stone, was in the single-core era.

If it is going to cost a little extra blitting, you may in fact still be saving power by doing it in a non-locking way.  The question is how many cores it takes before you do, thanks to the allocated timeslices being used effectively.

Daniel Stone, there are a lot of questions to be answered.  Server-side allocation will appear to give some short-term gains, but it has a price.  The problem is that as hardware gets bigger, the price gets worse.

Client-side allocation scales even across NUMA.  Yes, it would be insane to run Wayland on a 4048-core system.
All I can do is wait and see what happens--which, I hope, is that there will be high-quality graphics for all Linux distributions that graphics hardware makers will support.

I have to admit, though, that Canonical's development and announcement of Mir after having for years said they'd move to Wayland reminds me of OS/2 and Windows 95.
And yes, Mark's comments about Unity existing before gnome-shell are even more ridiculous when you consider that Mark is, at the same time, telling people that Unity was started because gnome-shell doesn't serve Ubuntu's needs. And anyone who was there at the genesis of these projects knows that Shuttleworth was plenty aware of gnome-shell at the time he announced Unity would be making its own shell.
So, please, dear Canonical devs, people and Ubuntu fanboys, before making such statements, please agree with yourselves first. Such behaviour can, and in the end will, only harm free and open source software.
+Daniel Stone said: "It's that proprietary vendors (like NVIDIA and AMD) need to add support for Wayland."

Well, if that still has not happened, will it ever happen?

What about the graphic cards in SoCs? Still no Wayland support, nor X.Org support.

Wayland has the chicken-and-egg problem: the companies are not motivated to add Wayland support because there are no products ($$$), and there are no products because there are no drivers with Wayland support.

What would Sun Tzu do in this case?
Unity = NextStep improved. Let no one come to sell the idea that Unity is something unique and pure in design. Unity is an enhanced copy of NextStep...
+Simos Xenitellis Quite a number of SoCs support Wayland - including OMAP (you can find videos of Rob Clark demoing this), and more are adding support all the time.
+Menporul Prabhu  And how is that bad? Would it have made any difference had the code been released under MIT or any other permissive OS license? I'm not criticizing, just trying to understand how that is bad when most open source projects do the same thing?

What happens to stuff that is not supported on Wayland? If you're a hacker then by all means go ahead and code but what about the people who, like me, want a system that just works?
I'm a retired old fart--and an end user. My gosh, such polarization in the above blog is ridiculous.

Nothing is forever. In the end some new genius is going to merge Mir functionalities and Wayland's, calling it MirLand.  MirLand will have the best(or worst) of both Mir and Wayland.

As an end-user, my frustration is with Unity and Gnome equally. It is with the number of mouse clicks to start an application that is not listed in favourites.

Mint 14 -- Cinnamon allows me to do it in 1 mouse click.  Thank God for that, because the tendons controlling my forefinger have healed.  With Unity and Gnome, both were causing me extreme pain because of the click-click-click to traverse the presentation screens to where I could start my application.

The next interface application should allow me to introduce my own tags for a program.  I want to say "My Editor" and have it start Vim. Or I want to put in some keywords such as "Below Zero" (as an example), and have the weather application pop up.  Eventually, for my desktop, I would like voice activation. I much prefer that to sore tendons from so much left mouse button clicking or sliding on a screen or expanding an icon with two fingers.

Guys, come out shaking hands. Life is too short to take stubborn positions.
+Martín Cigorraga not necessarily - you can do what Weston has done, and have the desktop shell out of process with a privileged protocol. Or you can do what I think we'll be doing, and have the shell in-process but have clients handle the display server going away cleanly.

There's no inherent reason why the display server temporarily disappearing means that all my clients crash.

If we do it right there'll be a visible pause, possibly the screen will fade-to-gray, and then everything will come back as it was.
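The "visible pause, then everything comes back" behaviour Christopher describes amounts to clients retrying their display-server connection instead of crashing. A minimal, hedged sketch of that reconnect loop, with the `connect` callable standing in for whatever hypothetical API actually re-establishes the session:

```python
import time

def reconnect(connect, attempts=5, base_delay=0.01):
    """Retry a display-server connection with exponential backoff.

    `connect` is whatever re-establishes the session (hypothetical here);
    while it keeps failing, the user sees a brief pause, not a crash.
    """
    delay = base_delay
    for _ in range(attempts):
        try:
            return connect()
        except ConnectionError:
            time.sleep(delay)
            delay *= 2
    raise ConnectionError("display server did not come back")

# Demo: a server that is "restarting" and only accepts the third attempt.
def flaky_connect(state={"n": 0}):
    state["n"] += 1
    if state["n"] < 3:
        raise ConnectionError("server still restarting")
    return "new session"

print(reconnect(flaky_connect))  # succeeds on the third try
```

The fade-to-gray and state restoration are extra work on top of this, but the core idea is simply that a lost connection is a retryable condition, not a fatal one.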
+Mark Woodward I'm not a developer, just an IT admin and user, and I really like the "software libre" world. I would like to respond to what you said: "why not have just one X server with few bugs". It seems to be something like "why have GNOME, KDE, Xfce, Fluxbox, Window Maker, Unity, etc, etc". I think this is part of the open source world. You can have tons of forks, and they can benefit from one another if they keep the GPL license and try to collaborate with each other, instead of trying to understand why we don't all work on only one project. 

For me, it sounds something like "why have BSD, FreeBSD, Linux, Barrelfish, Hurd, etc, etc, if we could have just Linux?" I think this is why open source/software libre exists.

I really appreciate what is happening in this world, with lots of these smart people. It would always be better if they focused their time on what they are doing, tried to collaborate with others, and used the rest of their free time to have a beer, spend time with family, enjoy life, etc. And not use free time to try to kill others; but it seems that maybe people need wars, or they don't, but for some reason when others think differently they can't live with that? :( 

Maybe I didn't use the right words; I only know what words to use in the Spanish language, and I tried to write this with the few words I know in English, sorry for that. 

I hope Mir, X, Linux, and all SL projects continue focusing on getting better, like they always have! People are really smart and can make amazing things. I see every day things that I was thinking about for years come true; that's the most amazing thing in the SL world. I'm always trying to collaborate with what I can: translating, documenting usage, donating, investigating, reporting issues, testing, etc. 

Now I have spent more than one hour reading this post and the comments, but I couldn't read all of them. 

+Mark Shuttleworth Thanks for all the work Ubuntu's team has done and will do. Try to show the world that your work can benefit other projects. And make Landscape open source? :) haha. (Joke. Not really necessary; I think you are fine trying to find ways to collaborate with SL projects and get money to continue doing it; that can be the hardest part for everyone, even harder if you need to show you really collaborate.) Co-operation between different projects can be the hardest thing to find too, but keep doing things to find ways to benefit from this kind of collaboration. 

Now I definitely go home, will give lots of kisses to my 5-month-old daughter, and later might share some beer with my lady. 

Christopher Halse Rogers
"There's no inherent reason why the display server temporarily disappearing means that all my clients crash."

There is if you are not very careful with server side allocation.

The risk with server side allocation is three fold.

1) The kernel makes a mistake in freeing the server's memory and frees blocks shared with other applications, resulting in a crash of said applications.
2) Someone decides to implement secure clearing of memory on termination of the display server/applications.  The result is all memory allocated by the server being zeroed or filled with junk, so applications fail.
3) Applications jammed because they are waiting on the server to respond.

Client side allocation is the least likely to take a wrong turn.
+Peter Dolding the graphics buffers are all refcounted in the kernel, which should make (1) and (2) a non-issue.

And (3) seems like a feature rather than a bug to me, at least for the vast majority of applications. The server has gone away; anything in the client that wants to render or receive input should block.
+Daniel Stone
Bad theory.
"Also, platforms which don't require physically-contiguous scanout buffers, won't implement this in their EGL stack for the obvious reason that they don't need to."
Go read current OpenGL implementations.  You will find NVIDIA, ATI, even Intel implement features the hardware doesn't need, for software compatibility.  Software compatibility modes are slow.  This is why the server must be able to turn this off.  Yes, the EGL stack might report that it supports physically-contiguous scanout, but this might be emulation for software compatibility.

The history of OpenGL warns you of this.  Any feature that does not exist in hardware might end up existing in software in an OpenGL stack.  This is why it must be optional.
+Christopher Halse Rogers

 "the graphics buffers are all refcounted in the kernel, which should make (1) and (2) a non-issue."  Read my 1 and 2 again.

1) is handled by reference counting in kernel correct or should be in an fictional world.  Reality throws up a few spanners to this idea.  Error can come about due to an allocation not being not being detected as in use by application yet.    Its all timing.
Mir allocates a buffer.
Mir returns the buffer to the client.
Mir dies.
The application gets the buffer message.
The kernel cleans up the buffer, because Mir was the only process associated with it at that point.
The application spits chips, because it just attempted to use a buffer that does not exist any more.
Yes, a race condition: a complete pain to debug, and it will only turn up if the display server crashes at exactly the right time and the kernel does not see that the buffer being sent back belongs to the application.  I don't want to have to be looking for race conditions.

Sorry, refcounting does not become good until the application holds a reference to the buffer.    Before the application is referenced to the buffer, it can go bye-bye on the application.  This is why I hate server-side allocation so much.  Kernel refcounting works badly for client-side allocation as well, but with client-side allocation you then only have to worry about the server code handling dereferenced handles correctly.  Yes, a client-side crash could still cause Mir to go splat trying to access an allocation that the kernel has nuked, if Mir does not check whether the item it has been told to display still exists.
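The race being argued about can be made concrete with a toy model. This is not kernel code; it is just a sketch where a handle only keeps a buffer alive while some process holds a reference, and the "kernel", buffer names, and event ordering are all invented for illustration:

```python
# Toy model of the refcount race: a buffer survives only while someone
# holds a reference to it in the kernel's table.

class Kernel:
    def __init__(self):
        self.buffers = {}          # handle -> refcount

    def alloc(self, handle):
        self.buffers[handle] = 1   # the allocator holds the first reference

    def ref(self, handle):
        if handle not in self.buffers:
            raise LookupError("buffer %r already freed" % handle)
        self.buffers[handle] += 1

    def unref(self, handle):
        self.buffers[handle] -= 1
        if self.buffers[handle] == 0:
            del self.buffers[handle]

kernel = Kernel()

# Server-side allocation: the server allocates, sends the handle, and dies
# before the client has taken its own reference.
kernel.alloc("srv-buf")        # server's reference
kernel.unref("srv-buf")        # server crashes; kernel drops the only ref
try:
    kernel.ref("srv-buf")      # client finally processes the message...
except LookupError:
    print("client lost the race: buffer gone")

# Client-side allocation: the client's own reference outlives the server.
kernel.alloc("cli-buf")        # client's reference
kernel.ref("cli-buf")          # server maps the buffer
kernel.unref("cli-buf")        # server crashes, drops its reference
assert "cli-buf" in kernel.buffers   # still alive: client never lost its ref
```

In the real kernel the window is narrower and there are mechanisms to hand references across processes atomically, but the model shows why the ordering of "who holds a reference when" is the crux of the disagreement.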

Refcounting handles without disaster requires you to check whether the handle still exists by the time you get it.  I don't trust application developers that much.

The next issue here: when an object has zero reference holders yet, but the display server has said "hey, this is going to application X", should the kernel free it or not?  Bearing in mind the application might never use that buffer you have sent it.

Server allocations being sent to client applications create impossible situations for the kernel to work out what to do in case of a display server crash.   Reference counting does not magically fix this.   Reference counting stops an allocation from being freed while it has known users.    When you get into the case of maybe-users, you are in trouble without code to handle the buffer being cleaned up incorrectly.

Client side is a lot less vague.  The client allocated it, so the client wants it.  So as long as the client remains, don't attempt to clean it up.  Yes, a client sending a buffer to the server and dying at exactly the same time, before the reference is assigned to the server, can cause some pain.

Display server coders can be judged skilled and careful enough to build tests into their test suite to detect client-side issues.  Judging client-side coders as writing safe code is highly risky to the user experience.

2) The issue I detailed is not handled by the kernel.  2 is someone deciding to secure something like Mir by adding a library that tracks all allocations and then destroys them on termination.  This is to prevent data leaks in case of application termination.  The result, if there is not a clear split in allocation locations, is that things like this fall over big time, producing nice big failures.  Basically: if you don't allocate it, don't trust it to remain.

"And (3) seems like a feature rather than a bug to me, at least for the vast majority of applications. The server has gone away; anything in the client that wants to render or receive input should block."

It's a feature in some cases, a bug in others.  Currently, with input, an application can check whether anything is waiting, and if not, keep going.  So input is already non-blocking, if the application is coded to be non-blocking. 

Take the case of something like Blender rendering a video, or someone downloading a file, or many other operations.  These might trigger the application to want to render something.  Your response is to stall.  Now, if the client-side application is not coded well, you might have broken a user's download or something else.

You are presuming the client-side application is quality.  In reality you can fairly much bet against client-side applications being quality all the time.

Next, think VNC/RDP with network lag.  The ability to disconnect an application and let it run is a highly useful feature on a render farm; it is also useful in other cases, like thin terminals.   As a user, I can disconnect a thin terminal intentionally, and the applications I have running get to run until they hit the point of truly needing user intervention.  So work time is not lost if a person has to move from one terminal to another.

Headless operation has some advantages: it can work effectively over links too slow for VNC and RDP, precisely because it can go headless.

OK, there is a fourth issue, Christopher Halse Rogers and Daniel Stone: using cgroups to limit an application's memory allocations.   How can I pull this off perfectly if you go and allocate stuff server-side?   The server is in one cgroup, applications in another, so they are measured separately.

The 2.8+ Linux kernel will allow userspace-started cgroups limiting memory.  So there is no reason why a user cannot choose to run one of their graphical applications in a cgroup, to kill it if it goes memory-nuts.

Yes, server-side reallocation of some form is valid:
1) Client allocates.
2) Server receives the allocation.
3) Server calls into the kernel to safely reallocate this buffer to a more performant location.

Where is the push-back to the client here?  There is none, beyond marking the existing buffer processed and up for reuse.  Also, reallocating at exactly the same size means, for the cgroup maths, that the allocation does not have to be added to the server's memory usage.

This is why who allocates is important.  Whose quota of allowed memory should the buffer come out of, basically?  If the server allocates, the kernel will believe it comes out of the server's quota.

The "it does not matter who allocates" claim is bogus, unless you can explain to me how you will make sure the allocation appears in the correct cgroup memory-usage accounting.  Heck, in the correct /proc/(pid)/status usage figures.

This is where server-side allocation normally comes apart.  You look in Mir's /proc/(pid)/status and see it has grown; you look at the application and it appears small.  Why?  The application has tricked Mir into doing the allocations for it.

Reporting goes wrong when you start using server-side allocation.

The fifth issue is the Out Of Memory killer.  Allocating memory like mad could push the display server to the top of the OOM killer's most-wanted list.  Ask the PostgreSQL developers about this; they fall foul of it all the time.  Fairly much, they refer to it as "kill PostgreSQL, then kill everything else".  This comes down to how much PostgreSQL has to allocate under load.  Number 5 again refers to the status information in the application's /proc/(pid)/status being wrong -- something you cannot afford to be wrong.
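The accounting point above -- that memory shows up against whichever process allocates it -- is easy to observe directly on Linux. A minimal sketch (Linux-only, since it reads `/proc/self/status`; the 64 MiB size is arbitrary):

```python
# Linux-only sketch: memory a process allocates is charged to that
# process's own /proc/<pid>/status accounting.

def vm_rss_kb():
    """Return this process's resident set size in kB from /proc."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])   # value is reported in kB
    raise RuntimeError("no VmRSS line found")

before = vm_rss_kb()
buf = bytearray(64 * 1024 * 1024)   # 64 MiB, zero-filled so it is resident
after = vm_rss_kb()
print(after - before)   # roughly 64 MiB more, charged to *this* process
```

This is exactly why it matters whether the client or the server performs the allocation: cgroup limits, `/proc` reporting, and the OOM killer's badness score all key off the allocating process.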
+Peter Dolding Look, sorry, I've really tried to be polite and explain this to you, but you just don't know what you're talking about.
+Christopher Halse Rogers  You said:
"We need server-side buffer allocation for ARM hardware; for various reasons we want server-side buffer allocation everywhere."

From the Wayland IRC Discussion:
00:35 <daniels> RAOF: fwiw, i've got a wayland backend for arm hardware which does server-side allocation right now.  didn't require one single change to any of the clients, or even compositors.  it's all internal to the egl stack.

Again, I have yet to see a single technical reason for not using Wayland (we're not talking about Weston here).
+Patrick Goetz and, you might note, directly under my quote I say “Although it's possible to do server-side allocation in a Wayland protocol, it's swimming against the tide... we'll need to patch Mesa's EGL (and XWayland)”. That is: we'll need to make the changes to the EGL stack that Daniel alludes to!

I don't believe I've said or implied that any of these things are impossible in Wayland; indeed, I'm pretty sure I've explicitly said that we could have done all of these things in Wayland.

The argument is, and has always been, that we estimate it would be more effort to do a Wayland compositor than to do something new.

If you don't see ‘this technology will not make our lives easier’ as a valid technical reason, then I guess we don't have any valid technical reasons. But, to be consistent, you'd also have to claim that there's no technical reason to choose Wayland over X; after all, everything we want to do is technically possible in X.
+Christopher Halse Rogers  Thanks.  I only noticed that after posting my comment.  I'm now almost through reading all your blog posts about this and very much appreciate the time you've put into explaining things to all the little people out there.  I will admit that the Mir spec gave me a headache (I'm not a graphics developer and have no idea how you/krh/Daniel Stone keep all this complicated shit in your head), but hopefully the final outcome will be a definitively better graphics stack for linux, something that has been an absolute necessity for at least a decade already.
+Christopher Halse Rogers BTW. after reading through the comments on your blog posts I continue to be unconvinced that Mir is necessary or that the technical reasons which inspired Mir are the best possible solution.  I have to agree with Lennart Poettering on this one:  you must use the full technical capabilities of the linux kernel (e.g. cgroups) in order to have the best possible system.  Taking business considerations into account (i.e. note well that Microsoft is failing to penetrate this market), anything done for Arm should be treated as an afterthought, not a primary design impetus.
+Mark Shuttleworth We love Ubuntu and want to see it have continued success. What we really need is more diplomacy. Please be diplomatic. Tame disputes, do not flame disputes.