Technology: What ails the Linux desktop? Part I.

The basic failure of the free Linux desktop is that it's, perversely, not free enough.

There's been a string of Linux desktop quality problems - specific incidents reported by +Linas Vepstas, +Jon Masters, +Linus Torvalds and others - and reading the related G+ discussions made me aware that many OSS developers don't realize what a deep hole we are in.

The desktop Linux suckage we are seeing today - on basically all the major Linux distributions - is the final symptom of mistakes made 10-20 years ago - the death cries of a platform.

Desktop Linux distributions are trying to "own" 20 thousand application packages consisting of over a billion lines of code and have created parallel, mostly closed ecosystems around them. The typical update latency for an app is weeks for security fixes (sometimes months) and months (sometimes years) for major features. They are centrally planned, hierarchical organizations instead of distributed, democratic free societies.

What did the (mostly closed source) competition do? It went into the exact opposite direction: Apple/iOS and Google/Android consist of around a hundred tightly integrated core packages only, managed as a single well-focused project. Those are developed and QA-ed with 10 times the intensity of the 10,000 packages that Linux distributions control. It is a lot easier to QA 10 million lines of code than to QA 1000 million lines of code.

To provide variety and utility to users they instead opened up the platform to third-party apps and made sure this outsourcing process works smoothly: most new packages are added with a few days of latency (at most a few weeks), and app updates are pushed with hours of latency (at most a few days) - basically it goes as fast as the application project wishes to push it. There are very few practical restrictions on the apps - they can enter the marketplace almost automatically.

In contrast, for a new package to enter one of the major Linux desktop projects takes many months of needless bureaucracy and often politics.

As a result, the iOS and Android platforms were able to grow to hundreds of thousands of applications and will probably scale fine beyond a million apps.

(Yes, we all heard of the cases where Apple or Google banned an app. Don't listen to what they say but watch what they do: there are literally hundreds of thousands of apps on both of the big app markets - they are, for all practical purposes, free marketplaces from the user's point of view.)

The Linux package management system works reasonably well in the enterprise (which is a hierarchical, centrally planned organization in most cases), but desktop Linux, on the other hand, stopped scaling 10 years ago, at the 1000-package limit...

Desktop Linux users are, naturally, voting with their feet: they prefer an open marketplace over (from their perspective) micro-managed, closed and low quality Linux desktop distributions.

This is Part I of a larger post. You can find the second post here, which talks about what I think the solution to these problems would be:
> Desktop Linux distributions are trying to "own" 20 thousand application packages consisting of over a billion lines of code and have created parallel, mostly closed ecosystems around them.

Is this true for Debian also? Or is it not a "major Linux distribution"?

But if I get your drift correctly, are you suggesting a git supermodule repo for packages - free to fork as and when one pleases? Can git scale to billions of lines of code?
Why did maemo fail? And please stop blaming Nokia for it. Can a completely collaborative development compete against product minded development?
It is BS to compare mobile applications - which are a bunch of screens fronting a service otherwise provided by a web site - to the applications in a Linux distro. The author is making a terrible mistake here. Mobile apps are small and simple by comparison - and who said they are great, anyway?
Very true. And this problem has become more important after major Linux distros started offering LTS releases--so because I'm still using Ubuntu 10.04 (which is still OK for me in the core OS), I have to put up with two years old versions of everything else; e.g. I use Apache Maven, but I can't just run apt-get and upgrade to Maven 3.0.4, it's stuck in some jurassic 2.x version so I need to go through some inconvenience to get/install/use the new one. (And as a Java dev, I still get it easy as most things I care about only depend on openjdk, which contains a very strong ABI that shields apps from 99% of the random brownian motion of the native OS.)
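(For the record, the inconvenient manual route is roughly the following sketch - the download URL, version and install path are illustrative only; check maven.apache.org for the current ones:)

```
$ wget https://archive.apache.org/dist/maven/maven-3/3.0.4/binaries/apache-maven-3.0.4-bin.tar.gz
$ sudo tar xzf apache-maven-3.0.4-bin.tar.gz -C /opt
$ export PATH=/opt/apache-maven-3.0.4/bin:$PATH   # add to ~/.profile to persist
$ mvn -version                                    # should now report the hand-installed version
```

It works, but note that nothing tracks security updates for a tarball dropped into /opt - which is exactly the trade-off being discussed here.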
+Tim Davies most people don't need/want a large number of 'extensions' to their desktop - this creates a base user experience which is different from machine to machine. most people want/need different userland (custom?) applications running on top of a standard base OS. the fact that gnome 3 has a bunch of extensions for it tells me that there's a bunch of people writing software that for all intents and purposes is rather closed - it'll only work on certain distros with certain supporting libraries, further highlighting the fragmentation and packaging mess Ingo is talking about.

FWIW, a big part of the linux package management problem is that most (all?) tools mix in userland app code with system-level requirements. i'm using the same tool to update /bin/ls as I am to install a python app - seriously? that is by and for geeks-only, and until a separation is made there, the problems it creates will continue to be usable/manageable only by geeks.
I'm 100% with Alper Akgün - that was my first thought straight away. I think most people are looking in horror at the way Windows 8 is going; why would we even want to go a similar way? If we want quicker updates we compile, it's not hard. Linux should remain all about choice: yes, have the dumbed-down tablet-style distros if you want, but don't lecture that it's the ideal. Most users probably want to deviate quite a lot from a standard distro once they know their way around. (Debian sid user here, btw.)
Empowering developers to publish their work in their own release cycles has nothing to do with the "dumbing down" of applications to "apps". It appears Alper and others here are confusing release cycles and QA processes with actual contents of packages being published.
I see the point, but the article used a number of bad comparisons. Android is nice but really a toy compared to Linux; with the multitude of desktops and library versions across Linux, how would something as simplistic as rapid market-style releases ever work?
This is an interesting post, but I don't agree with the main premise. For me, Gnome 3 on Ubuntu is enough. Everything is very well integrated. A case in point is Rhythmbox. When it's open, a roll of the mouse down to the lower right-hand corner reveals an icon representing Rhythmbox. Click on it and see what's now playing. Click on that and you can start and stop the music, advance to the next track or go to the main Rhythmbox interface.

On Gnome 3, I have virtual desktops I actually use. I can pin my favorite apps to a dock and can say goodbye to the multi-level menu. And there are many tweaks and configuration adjustments that can be made to Gnome 3. For me, using Gnome 3 on Ubuntu is far, far more interesting than any version of Windows or MacOS out there. I can write, browse, listen, watch, teach myself programming and learn more about how computers really work than I ever did with Windows or Mac. The Linux desktop is not dead, and it is not perfect in every respect - but neither are the others. It is a diamond in the rough. It is, ultimately, the Swiss Army knife of operating systems.
@scott, can you consistently run multiple apps that all use sound resources at the same time? for years with 'linux on the desktop (and laptop)' I had nothing but trouble with apps that would clobber each other if they both wanted to play sound. Running amarok then loading gaim would often mean gaim would silently not work (or not start up - I forget the specifics). Skype and anything else with sound? Good luck. Record your mic and listen to it while listening to something else? Problems.

I'll have someone say "oh, that works for me!" or "works when you do XYZ", but those miss the point. For every "works for me", there's dozens or hundreds that things don't work for. For every "works on distro X, just upgrade!" you have dozens or hundreds of people who can't switch distros. If everything's going to work only on selected combinations of blessed hardware with only specific versions of a system... most people will opt to go back to mac or windows.

You may be able to teach yourself how computers work with Linux, but when you know that something should work, but doesn't - because of politics or laziness or simply too few resources - in some ways it's worse than not knowing at all.

I'm glad Gnome 3 on some version of Ubuntu is good enough for you, but it's not for a lot of people. The reactions of "works for me" in these articles perhaps miss the point of the original articles' authors: "desktop linux is a mess" with subtext usually being "we'd like more people to be using it, but it's a mess, so they don't". Why get more people using it? Primarily it would make it easier to get consumer-level hardware support for common devices/chips/etc, meaning an upward spiral of adoption. But just because you've adapted to the "mess" that is desktop linux doesn't actually mean that it's good, it just means you've adapted and learned (and probably have put in far more time to get to the point you're at than you should have).
Nick, from the Hacker News discussion, the way this would work with Open Build Service and Maemo-style community QA is that " You still package and build an application separately for each distro version. And then the community tests it before it goes to the stable apps repo. So broken packages (either by packaging, or by app not functioning properly) are unlikely to pass QA.
So if your package conflicts with something, either fix that issue or don't release for that distro version."
@Michael Kimsal, both pulseaudio and jack work flawlessly on my system. What you describe I have experienced, and it was frustrating, but that was quite a number of years ago; the only reason I could see it happening now is if the hardware was quite non-standard. This past week I have used Debian, Fedora and Zenwalk and no issues arose on either of my laptops. Of course not everyone will be so lucky, and a frustrating few hours of googling may be needed, but the point is that a lot of us feel passionate about the way Linux works now and don't want it to change so radically that choice is reduced (as seems to be happening everywhere else).
Michael, the setup I have actually took less time to build than previous versions of Ubuntu. I have played sound from multiple sources simultaneously with no hiccups. I haven't done much recording though. For that, I'd recommend Ubuntu Studio as the kernel is designed for that purpose. I've used Skype with no problems, too. In sum, this is perhaps the most reliable Linux desktop I've used so far.

I'm not so sure that this is a "mess" as you put it, either. But then my requirements aren't that high. I'm not exactly a power user and when I explore, I tend to do so in the terminal to learn Bash and Vim. I have found that everything works as I would expect 99% of the time. Most problems that I have encountered can be solved with a few minutes searching.

I've had such a good experience with Linux that I just can't imagine myself going back to Windows at home. I still have to support it at work, but I suspect that in time, that will change, too.
A very good article. What about the economic part of it? How do we sustain the developers and the effort required to maintain such an infrastructure? Not everything can be done in one's spare time.
I love you for saying this. Unfortunately, no one, absolutely no one, is listening :( Desktop Linux users are pariahs in this enterprise-dominated world. For the corporations that fund Linux developers, the "Desktop" is not important. There is a desktop in the enterprise distribution only because the Redmond competitor's product has one.
scott and nick - glad it works for you, but you're in a minority. i spent almost 7 years on various desktops and laptops with linux as my primary OS, and it was never an easy experience. Each update got easier, but I wouldn't ever say "easy" for all basic OS multimedia stuff. I'm talking 2001-2008. I gave up after that, and went to a mac. is it perfect? nope. i've ended up paying for some utilities that might or should have been 'free', and I really miss fish from kde.

re: "mess" - the point of this thread is that it's a mess, but it's also a point which you started off by saying you didn't agree with, so again, I'm glad it works for you and your needs/wants are met. I use linux a lot for server stuff.

nick - if "choice was reduced" and there had been fewer of those audio problems 5 years ago, would that have been a bad thing?
I think that closed-source OSes encourage vendor lock-in. The app-market model means even less control by users over their data. The data gets stored by third parties in "the cloud"... It is a very scary model, actually.
Yes there are problems with the linux desktop.

Unfortunately, comparing it to iOS or Android (which is Linux) is comparing apples and oranges.

I don't really care about linux on the desktop anymore, but it is on my notebook ;)

And unless you make your living with a computer the desktop is dead. It will be phones and tablets for most users in a few years.
@Michael Kimsal, I'm not even in a minority - I've had problems in the past with it too, as I've already said (wifi and sound issues a few years ago). I actively sought a distro that worked for me and tailored it with the best packages I could find for what I do; I didn't just boot up the first one or two and give up. It's amazing how well it does work when you consider the chaotic processes that produce this OS. It's all about perseverance; in the end I much prefer it to Windows. But as always it's down to individual choice, and I do think, as with many things, you get back what you put in.
Seems like Ubuntu actually made a start on this long ago. They support only their main repository of packages themselves; universe is the rest of Debian. Universe is community supported, so anyone can submit updated packages to it with security fixes.

And their Launchpad system, with its personal package archives, is a step in the right direction for allowing anybody to independently provide their own applications to Ubuntu users. It's not as tightly integrated as the Apple App Store, of course, but it's a start.

I think their new software centre is trying to address this situation even more (offering paid-for apps too). I don't know much of the details here, though.

Tbh, I didn't even notice what Ubuntu were up to with all this until I read this post. Quite clever!
I read part 1 + 2 of this post and disagree that this is the basic failure of the linux desktop.
Personally my main distro is Arch Linux and I really love the pacman + AUR package management, but I must say that the Ubuntu Software Center is the right direction for mainstream users who just want to install/update apps. It even supports easy payment/donation for software (something that I absolutely miss in other distros)!

Back to the basic failure of Linux on the desktop (in my opinion, of course). For non-Linux users there is not a single officially promoted main distro! Don't get me wrong, the problem is not that we have dozens of distros around - the problem is that we don't have THE LINUX (DISTRO), the one big thing which represents the official face of Linux to the world. Such a thing must be done very well - no excuses, no experiments; it just has to work, and it must have solved everything that sucks in the competition. Things like USB WiFi and Internet must work out of the box - no excuses, no experiments. I think many parts of this are already done in a lot of distros. The question of which desktop environment to choose is open here, but the distro has to choose one and only one.

Linux desktop solved so many things that really really suck in Windows/OSX so I think it's possible to go widespread.
If linux wants to be a real competitor on the desktop market we need:

1) Focus (alias one official uber-distro). If we can't say "this is Linux, it looks like this and works this way", we will never hit the masses. Never. Never. Never.
2) Marketing. (The distro has to be promoted as much as possible on the web in a modern, artistic style, not the GNU HTML-1990 look. You know what OSX looks like, and what Windows looks like, but nobody knows what Linux looks like!)

And all the other distros? They could coexist as ever, but I must admit it would be a good direction if very similar distros merged at some point.
+Nishanth Menon I don't see how one can avoid holding Nokia responsible for Maemo's decline. For one, there was the switch to MeeGo. Nokia made it clear that Maemo was no longer the way forward at that time. Then they decided MeeGo wasn't, either.

I don't want to cry about the past any longer, though, and I get the idea that +Ingo Molnar is looking forward, not backwards. Thanks +Henri Bergius for pointing me to this interesting thread.
Really great post!
Distros are really good at being distros, i.e. a collection of parts, and that's the only way they know how to roll, so they force that model upon app developers as well. Come and join our collection of parts!
I'm getting more and more convinced that it's time to break away from The People's Central Packaging Committee and actually trust app developers with this thing, because I for sure trust them (I currently have Firefox 11 on all but my Linux system, because, well...).
+Nick SB agreed re opportunity presented by Windows 8.

I'm both a Windows and Linux user (use determined by need), and while I am impressed with Windows 7, Windows 8 is losing me (see my take) - at least for anything other than smartphone or maybe tablet use.

User disenchantment with Microsoft's increasing moves toward a closed-yet-disjointed ecosystem should open up opportunities for a truly user-friendly Linux desktop and even portable solution. I used to think that the Linux Foundation was the right driver, but with MeeGo cast aside and the unknowns of Tizen, I think the Mozilla Foundation is the place to look now...

EDIT: I can't +1 the +Manuel Pietschmann post above me enough.
IMHO, current Linux distributions use good package management systems. When I execute `apt-get install`, I'm sure that the package will work fine on my system and that its code was checked. I trust the official repositories. What's more, beyond the program itself the package maintainer creates scripts and configs which have the package working shortly after installation.
If I want fresh software, I think Ubuntu PPAs are a good scheme. The other way is to download the source code from the program's official site and build it, but in most situations you will also have to download the sources of some other dependency packages. I think the current situation is good.
There is only one exception I can remember: an important security bugfix. I think there should be a faster way to deliver those to the end user.

I agree that a single developer with an inconspicuous small program has huge trouble bringing it upstream into any distribution. Yes, it's a problem.
I think Ubuntu almost solved this problem with their PPA program. Ubuntu itself provides basic repositories with general-purpose stuff in them, and you can add PPAs to get newer programs at your own risk (which is basically what an app market is). The only thing that's missing is making those PPAs easily searchable through their software center, while giving the power user the ability to use the CLI (or lower-level programs like Synaptic) to do what they want (or compile their own).
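The day-to-day workflow is already close to an app-market install. A sketch, with a hypothetical PPA name standing in for a real one:

```
$ sudo add-apt-repository ppa:some-team/some-app   # hypothetical PPA
$ sudo apt-get update
$ sudo apt-get install some-app
```

From then on the app updates through the normal update manager, on the PPA owner's schedule rather than the distro's.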
Again, I have previously made all of these same points. But people do listen to you, so maybe this will help :-)
Maemo didn't get it completely right either, using the Debian packaging system (which is better suited to OS creation) for normal apps. Deploying software needs to be made straightforward; it shouldn't be a 'skill set' to be acquired.
+Afief Halumi and others. In Ubuntu we actually have most of the application submission infrastructure in place for application developers, it's just difficult to get people to use it and we're still only beginning:

This still doesn't solve the problem for large complex applications but it's a start.
I've often thought that it's bizarre that distros make little distinction, package-wise, between fundamental OS components and the user-facing apps that depend on them.

It does seem unsustainable that every distro has to duplicate the effort of maintaining a build and patchset for every piece of software that any user could conceivably run on them. That's not to say that it doesn't work; but it's awkward to explain to a new desktop Linux user why they have to wait several months after the release of a new version of some app they use to actually get the update.

PPAs are great, but they're too fussy and often riddled with dependency issues that trip up the non-technical user once the number of PPAs on a system reaches a certain critical mass.
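One partial mitigation, for what it's worth, is apt pinning, which limits a PPA to the one package you actually want from it. A sketch of a fragment for /etc/apt/preferences.d/ - the package and origin names here are hypothetical (Launchpad PPA origins follow an LP-PPA-owner-archive pattern; check with `apt-cache policy`):

```
# /etc/apt/preferences.d/some-app-ppa
# Prefer the PPA only for this one package...
Package: some-app
Pin: release o=LP-PPA-some-team-some-app
Pin-Priority: 600

# ...and keep everything else on the distro's own versions.
Package: *
Pin: release o=LP-PPA-some-team-some-app
Pin-Priority: 100
```

Of course, expecting a non-technical user to write pinning rules rather proves the original point.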
The problem is really how Ubuntu gained a lot of momentum, and yet sucks. If Linux Mint, or SolusOS or Netrunner or something good were to start making a real numbers impact, and had support for the latest wii emulator (at similar framerates to windows), and the like, we'd be getting somewhere. Meanwhile devs are screwing around with unity, gnome 3, all this SHIT that users HATE. Not fixing all the stuff that is broken, and removing the junk, they are adding junk and broken stuff stays broken. I agree, the lack of a reasonable "App Market" for Linux is hurting the movement, but it's not the main problem. The disconnect between devs and users is the main problem, and until they quit the Unity horseshit altogether.... The whole thing is a nerdy waste of time, really.
I'm with +rhy thornton and +Michael Kimsal on this. From my perspective, Ubuntu hit a peak around 2006-2008, and then started sliding downhill. A big part of the slide is the adoption and roll-out of base technologies that were half-baked and incompletely debugged. Shipping these as part of the base system just caused huge pains. Way back in the day, it was udev, what with lost events, unexpected sequences, stale config files: when udev didn't work, you pretty much had to be a kernel programmer to figure it out and hack around it. It's gotten stable now, but still gives me the willies. Today's problem child: plymouth. Back in December, I spent 3 days trying to work around the crashes and hangs there. It still crashes, but at least I can now boot. How did plymouth ever get out of acceptance testing in Ubuntu? How could they possibly ship such a raw, buggy piece of software as part of their core system?

The plymouth experience was so bad, it feels like whining to mention a non-bootable X11, a failing lightdm, never mind unity, etc. These core systems need to be debugged far more thoroughly before being unleashed on the world. At least the audio mostly works these days, most of the time.
I gave Kubuntu another shot late last year, and it was almost as painful as using Windows Vista. There's a lot of potential in Ubuntu distributions, but many posters here are right: the project needs to redefine its focus.
I'm not sure this is an accurate description of what's happening. While I might be tempted to give Android a try on my desktop, if it's available, to suggest that it's substantial competition to any desktop Linux strikes me as fantasy. How many actual installs are there?

I run Android on my phone and love it, but when I need to do a substantial amount of work I need a workstation, and as workstations go you can't beat GNU/Linux for providing a stable environment with a huge range of tools. Android isn't even in the picture yet.
For anybody suffering pain from Unity and whatnot, I recommend . See? You are not at the mercy of whatever nonsense your distribution managers throw at you. It's Linux, run whatever you like!
I like what I wrote in the other thread so much that I will cross-post here. Send me hate mail if this was wrong.

Let me take a step back: what are the core problems, really? What's holding back widespread Linux adoption? The difficulty of using LibreOffice? No. A non-intuitive version of firefox? Uh, no. Is the Gnome2 panel so strange and bizarre that we had to invent Unity and Gnome3 to "fix" it? Uh, no. The problem is that newbies install Linux, and simple things don't work, and they can't fix them. Or maybe it works great for 6 months, and then one day it stops working. Who knows why... but they can't fix it.

I call these "sysadmin problems". It's what made Windows horrible to use: simple shit suddenly stopped working, for unknown reasons, and it was impossible to fix. The plight of the newbie was driven home for me by my recent experiences: broken sound (after a recent reboot, sound stopped working in chrome; it worked great in firefox. Days later, I discovered that in the sound server, the slider bar for the chrome master volume was set to almost zero. How did that happen? I was lucky to find the answer so quickly; it might have stumped me for months. So WTF?). Network problems (I have two ethernet cards. After a reboot, the system confused which was which. Networking was mysteriously broken till I swapped the cables. But not before ifup/down mysteriously stopped working for 15 minutes!?!? WTF?). Graphics issues (It seems that Unity/Gnome3/lightdm hate my 3-year-old nvidia graphics card. Gnome2 works great. WTF?). Boot issues (I run raid1, and one of my filesystems is on lvm. This is completely untested/unsupported in Ubuntu: there's initrd breakage. Buggy udev scripts. Hanging/crashing plymouth. WTF!)
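(For anyone hitting the same muted-browser mystery: PulseAudio keeps a separate volume per playback stream, and you can at least inspect and reset it from a terminal. A sketch - the stream index 42 is whatever `pactl list sink-inputs` reports on your system, and the percent syntax needs a reasonably recent pactl:)

```
$ pactl list sink-inputs                # find the stream's index and its "Volume:" line
$ pactl set-sink-input-volume 42 100%   # older pactl versions want a raw value such as 65536
```

Which just illustrates the point: the fix is two commands, but a newbie has no way to know those commands exist.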

Am I alone on this? I don't think so. Prowling the RedHat or Ubuntu help/support/ask-a-question sites/chats/wikis seems to show zillions of newbies who can't get basic things to work. Am I right? I dunno. Before we start arguing solutions, let's get a better idea of what the problems are. And let's be scientific: can we trawl the help/support sites and count up what the issues are?
+Tony Sidaway I've been using a Linux desktop for 17 years. Recently, I've started having to troll through newbie forums to solve problems that I shouldn't be having. I take this to be a warning sign: either I've got early onset dementia or brain cancer, but for now, I'm in denial and blame it all on the distro.
+Linas Vepstas

I think many of the problems you are seeing are symptoms of three root causes compounding:

- desktop Linux has started losing market share, relative to the rest of the market.

- unlike 10 years ago, today there are better alternatives even for technical users - who, instead of complaining loudly and pushing back against bad changes, will simply leave.

- Linux desktop developers have not noticed the increasing silence of users leaving. It's hard to notice - in fact it's easy to mistake silence for approval ...

Thus Linux desktop projects don't have the capacity anymore to push the kind of changes they used to be able to push.

This is what I meant when I said "the death cries of a platform". It is silence.

And I think it's ultimately fixable, by promising and legislating stability and learning to "let loose" the hands of application packages - my thoughts about that are outlined in part II.

Really, good software distribution is a natural strength of FOSS - instead we let it become our main weakness. So it's fixable IMO, and we can do much better than the closed-source alternatives.
I'll admit I have not read all the comments above, this might have been said already.

I know that at least Ubuntu has a strict focus from the parent organisation (Canonical) on their primary packages. That would be their package repository called 'main'. It's very small relative to the 'universe' repository.

What's outside of the 'main' repository is often taken care of either by community contributors, or by Canonical staff with time left over from other chores - if I'm not mistaken.
Is there a tracker from which we can tell whether desktop Linux is declining?
1. Whenever someone discovers a problem with a Linux distro we get a new Linux distro.

2. Software projects are not prepared for release as part of the/a distro by the developers but by distro maintainers or people associated with the project who do this preparation work. It is not the defined end goal of releases, as it is for Android apps.

As Ingo points out, a complete Linux distro is really really large. Also, if you keep changing things, people get upset and migrate away because they want to be left alone. They are not interested in dicking around with the OS. They want to get work done. I suspect this is the reason I see so many developers switch to Apple: they need UNIX but are tired of dicking around.

The abundance of distros (and OSes) means there is no single target for a given piece of software.
They (Google, Apple, M$) also provide unified development tools and an SDK. On Linux we have a variety of tools available. Maybe one of the selling points could be "do not be a slave to those companies"... especially when we know M$ tends to simply shut down platforms as it likes. Why not select, for example, Perl as the recommended tool for application development? Who wants to be dependent on whatever Apple/MS/Oracle decides?
@Vinit kumar - the writer seems very biased about the Apple/Google ecosystems and their products.

Do you even know who the writer is?
I must be living in a dream. In a mere 2 decades, in my experience, computer software has been revolutionized by fast update cycles, open source, and accessibility. We take these for granted. And now people are mourning the imminent demise of various large distributions as if that threatened the whole ecology. There will always be workstations, though I wouldn't want to forecast what they will look like in ten years' time. Whatever they look like, somebody will write a system that runs on them, because that's what we do. All the rest is, if you'll excuse the pun, window dressing.

So I suspect that the problems perceived here are mostly from lack of perspective.
+Michael Kimsal I don't agree at all. I consider desktop Linux to be a superior experience in almost every way to OS X or Windows 7. I own both a Mac and a Windows 7 license and I don't use either of them because I'm less productive with them and they "feel" uncomfortable. Have I put in time learning to use desktop Linux effectively? Definitely. But I spent years learning to use Windows effectively, so what's your point? In fact, I spent well over a year using OS X full time at work and I actually switched to Windows 7 when I got the chance because I didn't like it.

Just because user X can't get feature Z to run properly on distro K doesn't mean there is a fundamental problem with desktop Linux itself. If anything, the problem is one of communication on the part of distros and desktop Linux "evangelists".

For example, people should know going in that their hardware may not be fully supported. Ubuntu maintains a list of supported hardware, but how many people check before installing? If someone isn't using compatible hardware to run whatever distro they're trying to use, that isn't the fault of desktop Linux generally, and "go buy a new computer or use the OS that came with yours" is a perfectly reasonable response in my opinion. You can't run OS X on a random Dell, but Apple makes that very clear from the start (and it's against the license agreement) so no one can reasonably be upset by that fact.

People should also know in advance that just because there's a package in the repo, doesn't mean the software will work properly in their particular environment (even MS and Apple have problems with this, there's no way to test every possible combination of HW and SW). This is especially true if you've installed something from source, or through some other unsupported channel. Does Apple test all their updates against software installed with Homebrew or Macports?

Finally, they should know that desktop Linux isn't a magical paradise where everything works all the time no matter what (no OS is). But too often this is the kind of message you see (or used to see, maybe not so much recently) people communicating. Grandma's computer has a virus? Install Ubuntu, she'll never have trouble again! More likely, she'll try to install some game her friends are playing and get upset when it doesn't run... Can't run "Ultimate Crystal Cave Smashout 3D" on Ubuntu? Try WINE! It runs Windows programs on Linux! Except when it doesn't...

Stop by the Ubuntu forums and take a look at some of the newbie questions. Very often, the problem boils down to "Windows (or OS X) worked that way, Ubuntu works this way, how do I make Ubuntu work that way?". I'm not saying desktop Linux doesn't have problems (Unity, barf), but the fact that desktop Linux distros only support limited hardware and have problems from time to time doesn't make desktop Linux fundamentally different from OS X or Windows 7.
I think most of the people in this discussion use Linux because they like it more and are much more productive in it compared to other OSes. I believe, though, that it would be nice to cross a "critical mass" of desktop users, after which usage would go up dramatically on its own. More users = more money invested = better experience for everybody. Maybe.
I don't think that's plausible, frankly. It makes more sense to aim the service revenue model at a cloud platform, which leaves the user to choose the operating system that suits them best. Google's strategy.
There is also one other thing that for a long time I've thought is a problem with Linux: API stability. With Windows you can take an application binary from 1995 and most likely run it on a computer you bought today. If you can't, it's mostly because it was designed wrong. With Linux, you sometimes can't even take binaries from last year and make them run without problems, let alone from a decade ago. And even the popular "just compile it from the sources" won't work, since the APIs have changed for libraries etc. You'd have to make changes, run chroots or something else. A huge problem sometimes. Applications aren't supported forever, not even OSS. Libraries are developed separately, distros use different versions etc. And some people would like to use commercial applications, which are usually available only for some distributions.

For me, it seems absurd that someone would suggest changing distros to get something working. Since my work machines use Windows, it would be quite strange if I had to install one Windows to work on music and then another to work on network security, etc. (yes, I know, people can do everything on a single distro if you know what to do, but we're talking about users without knowledge of compiling kernels or applications). Still, this is an option I often see in the Linux community: "I can't get X to work, I use distro Y", "Oh, install distro Z, it's geared for that kind of thing." So strange and inconvenient.
A million +1 for this man! I hate this waiting for fixes, with false promises of security as an excuse; the user chooses what he wants and is the king. Decentralisation is an important thing, but use the right method for the right job. One or two distros should be enough, but they should let you easily get new versions of software without breaking everything every time. The base system should be stable, that's what is important, and the developers should judge whether their software is stable.
For that to happen, ABI (even more than API) compatibility over time would be required, and that is kind of a no-go for the maintainers of most core packages and tools...
I don't see a distribution like Ubuntu as a limiting box. If there's an application for which I want to be state of the art, I can usually find a PPA with more up-to-date versions. If I'm more adventurous regarding a particular app, I can go the source-and-compile route. I look at it like layers of risk. Least risk is the distribution repositories. As for 3rd party PPAs, some are relatively safe because they are designed to work with stable releases of, in my case, Ubuntu. Other PPAs focus more on the release in development, and in that case there is more risk. This is certainly the case for GIMP PPAs. I just don't see where I'm boxed in with old releases.
Ingo Molnar is absolutely right!

And this idea of freeing software developers from begging maintainers to include their programs in the repo, only to then be forced to follow strict release cycles, isn't so new -- Canonical's Launchpad is a perfect place for end users to install fresh apps directly from the authors instead of waiting for the distro's repository managers to decide that this exact app is "stable enough" to maintain.

So I think Ingo's text is not a "new distro manifesto"; I would rather see it as a call for major desktop distros to consider switching to a ROLLING-RELEASE strategy!

When I first came to the Linux world from Windows, it was a lot of fun to re-install my whole system every 6 months, but after a few years passed, I realized what a pain in the neck it is! :(
I don't want to reinstall and re-configure everything again and again; that's why right now I'm still on Linux Mint 10, which is based on Ubuntu 10.10 (I think it's also related to the fact that GNOME 3 and Unity came out and I wasn't ready to use them).

But 10.10 is getting older and I feel a lack of PPAs for it, so now I am totally convinced that a rolling-release strategy is a huge win for desktops. Right now I'm thinking about Arch or Linux Mint Debian Edition (also a rolling-release distro).
Yep. Just start with a distribution that approximates what you need and add anything else you want to /usr/local . Don't like the window manager? Install something else instead. This isn't rocket science.
I think "organization" is as much of a problem as inadequate freedom. Once you cross the line where there are more lines of code than a single person can read, there need to be other ways to share the knowledge. Has anyone analyzed how many lines of code in a typical Linux distro are duplicated?

I once tried making this case for fewer lines of code (20,000 packages, millions of lines of code) to some Red Hat folks I met - but "software people" don't necessarily see this as a problem. I propose that a new method for packaging objects (where code can run in the "kernel environment" or "userspace environment") needs to happen if we expect things to improve.

I point out that Apple regularly implodes its software ecosystem every 10 years or so (MacOS 1984-1997, Mac OS X (Mach/Unix) 1997-2012, and iOS 2007-<whenever>).
The problems described by the author are problems common to all desktop OS platforms, and are actually at nearly cancerous proportions in the proprietary OS world that lacks any sort of objective engineer peer review or widespread field testing. In my opinion the worst thing that ever happened to the proprietary OS world is that the whole development of the OS was turned into a "product" oriented iteration instead of being seen as a platform in a constant state of adaption and flux. That means that arrogant leaders like Bill Gates and Steve Jobs as well as product managers had all the say in how the project would go without actually doing real world testing or having intense engineering peer review from people outside of the organization who used different/competing social and OS management systems.

This led to shallow OSes with purely cosmetic functionality that lacked high performance computing characteristics. I used Mac OS X for a single day and gave up out of disbelief that Apple could deliver such a half-baked system. I have been using Windows for over 10 years because I had no choice until recently. I use Linux because I want to, and because the average distro makes both Windows and Mac OS X look like they were designed back in 1995 to be the centerpiece in some retro "home of the future". Microsoft and Apple wasted hundreds of billions of dollars on projects that would have led to better results if run by open source consortiums (competing systems and approaches) using objective peer review.

With Windows and Apple, the public and developers were herded into single-use-case platforms that were both archaic and non-standard with respect to the core enterprise ecosystem. Linux dominates over 90% of the server, enterprise (Pentagon, NASA, IBM, etc...) and high performance computing markets because it can be adapted to so many different use cases (ecosystems) and still deliver high performance and security. Desktop computing innovation was held back by nearly a decade and has yet to recover; it is only now getting a boost, ironically, because of open source like Linux, the tablet/mobile universe, and the reappearance of a diversity of use cases.

The enterprise - largely Linux and Unix driven - using the Unix-derived idea that one could maintain one's own ecosystem and at the same time contribute to and peer review other ecosystems, whizzed past the desktop people into the 21st century, and desktops have been catching up ever since. The fact that Google is merging Android into the Linux kernel is a sign that Google understands where the "whole game" is going: eventually the Linux desktop and Unix-style mobile devices ([Linux Desktop] + [Android Mobile]) will be intertwined and will have to communicate and share common standards in terms of system operations at the kernel level, without handcuffing everyone in user space. This cooperation with ecosystem independence is unique and ideal for engineering futuristic innovations while keeping a single kernel standard.

Only the Linux world has this capability, and it is the only thing that leads to a truly diverse universe of ecosystems and true choice. The experimental nature of the Linux desktop means that developers can actually go ahead and test different things in the real world, get user feedback in the field and refine; they are able to test combinations and strange radical features without imposing them on other people's ecosystems that may be configured for totally different purposes. Because Linux is still largely a free operation - not counting the enterprise versions like Red Hat - this activity is very cheap and should be taken advantage of. Also, this process may be totally chaotic, but in the end it will deliver results that will not be matched in the Windows or Apple worlds for some time. This is why we have Linux on embedded systems, mobile, servers, desktops, supercomputers, mainframes, etc... and Apple and Microsoft only have their stupid desktops and dumb-box mobile gadgets.

The question is whether you think "natural evolution in the wild" is better than "intelligent design in a small gray box". If you look at nature you can easily see that evolution is clearly the most powerful design and development process. Let the Linux desktop evolve in the wild, and when it eventually matures in 5-10 years it will be the desktop component of a larger Linux/Unix-driven mobile ecosystem that won't handcuff developers and users at the user space layer but will keep a common kernel core.
+Ventak HR, the point that there are no other OSes with the software distribution characteristics described and demanded by Ingo is practically and historically not true. On a theoretical level, the classical PC concept (having a standardized base system which can be simply extended and administered with (hardware and) software components by the user himself) overlaps with Ingo's vision. The PC exists. (It can also be noted that the PC use case is somewhat in contrast to the centrally managed workstation vision, which Unix comes from and which has more strictly separated roles in software life-cycle management.)

The (Wintel) PC or Apple Mac, with "just-click" installers, stable ABIs and one-file self-contained portable applications, has provided and enabled the requested software bazaar, to desktop users and third-party software providers alike, for a long time.
(e.g. Jon Udell describing the better installation characteristics, even for developers, with Windows here)

Your points about development speeds and scales (evolution vs revolution) are interesting... my interpretation of Ingo's post here is that Ingo also identified the evolutionary, chaotic powers of Linux development (the 'bazaar') as too restricted by some characteristics of the Linux software distribution infrastructure. One limiting point for the development powers is the fixed, centralized structure ('cathedral') of the distribution-focused software development process. The other is the (too) strong binding/integration between the operating system (system libraries, directory structure etc.) and the applications themselves.
Both aspects are weaker or non-existent in the Apple or WinPC world (noted by Ian Murdock, or Mozilla developer smedberg).

Ironically, both of these Linux infrastructure characteristics are conceptually the opposite of the bazaar powers, which should and could be the strength of Linux.
@mandrit Hoffman, the problem with that analysis is that Apple follows an even more restrictive policy than any other proprietary system. Software written for Lion - that's 10.7 - cannot be run under Leopard - 10.5 - creating all sorts of problems. If users want to auto-update they cannot do so without paying first, so Mac OS X at any given version is frozen in a state of deep freeze unless the user is at the latest version. Apple will also begin imposing the App Store on all Mac users, so developers will be further restricted and will have to be approved and so forth.

Windows is locked down to only do things the Windows way, which has barely evolved in years and is archaic. People were forced to use the Windows way because it was the only way for the PC: the installer system that relied on the absurd registry, and installers that could stop functioning for too many reasons. Remember those installers that "could not find" something or would corrupt contents and shut down? The point being that "it just works" is not an ideal but a restriction that hides either the deliberately totalitarian nature of Apple or the bumbling incompetence of Windows.

Ingo is wrong when he says that current Linux repo systems "are centrally planned, hierarchical organizations instead of distributed, democratic free societies." In order to become "distributed, democratic free societies", they would have to give up the freedom to maintain different delivery and code approval systems. This implies that everyone, while socially free to organize, must use the same software tools for package management and give up all freedom.

Those who maintain those tools will hold back Linux software development, because people will no longer be free to use their own delivery system or to experiment with alternative mechanisms on the social spectrum. An enterprise Linux distro company like Red Hat has the right and the need to maintain its own ecosystem for its own engineering reasons (ensuring high performance computing standards and security); why should Ubuntu (consumer focused) and Red Hat (enterprise focused) have to standardize when their purposes are so different?

Ingo is also wrong when he praises Android and Apple iOS:

"What did the (mostly closed source) competition do? It went into the exact opposite direction: Apple/iOS and Google/Android consist of around a hundred tightly integrated core packages only, managed as a single well-focused project. Those are developed and QA-ed with 10 times the intensity of the 10,000 packages that Linux distributions control. It is a lot easier to QA 10 million lines of code than to QA 1000 million lines of code."

This means that for the sake of "it just works" Apple and Google take total totalitarian control over their App Market delivery system and can pull or disapprove apps and control and limit the evolution of those apps forever. Thus, the App Market will never have any revolutionary software, just mundane use cases, and developers will either never bother to create competing ecosystems or will sit in handcuffs, never innovating, for the sake of approval. Some developers spend millions of dollars to create apps only to be locked out for some arcane reason and covered in miles of red tape. At least in the current Linux system, if Red Hat or Ubuntu is slow to include an application, there are at least 10+ other repository maintainers among the ecosystem of distros that will be able to include that app in their own package systems. Developers have plans A, B, C, D,... etc. for all disapproval situations.

"Just Works" is not the right way, its a small grey box with a padlock...
It's the diversity of its software ecology that makes Linux-based distributions so unruly. Why are we still talking about the "Linux desktop" in a world where the most common personal computer in use today is a handheld communications device? Over 700,000 Android devices, each with a Linux kernel, are activated every day.
+Ventak HR I completely agree that Apple has the most restrictive platform policies. E.g. the Harvard paper "Opening Platforms: How, When and Why?", page 2, gives this order of openness (from open to closed): Linux > Windows > Macintosh > iPhone.

On the fuzzy arguing about "installers not working in Windows": that effectively doesn't happen anymore nowadays; "dependency hell", installer problems etc. were fixed years ago (around 2000). The desktop usage experience, it must be said, is plainly great with Windows.

"This means that for the sake of "it just works" Apple and Google take total totalitarian control over their App Market delivery system and can pull or disapprove apps and control and limit the evolution of those apps forever."
This argument supports Ingo. Ingo asks why the Linux ecosystem is not providing an uncontrolled market platform ('the bazaar'), completely free of political control, unlike the control these markets now suffer from Apple or Google.

Ingo thinks "just works" and free as in "free speech" are not in conflict, and I completely agree. The model we have now (or "suffer", according to this point of view) with the centralized distro approach is far less free than his vision. The control the distributions have over the repositories is maybe not commercially motivated, but it IS a form of control and limitation, and also a practical hindrance to the direct distribution of software to users. This was criticised multiple times already, e.g. by Mike Hearn (autopackage) years ago: [...] Hearn has critiqued native packaging systems as both out-dated and anti-democratic. "The whole idea of packaging/installation is bogus and left over from the times when software was distributed on floppy disks," Hearn claims. "The web 'instant activation' model is better [...]"

"An enterprise Linux distro company like Red Hat has the right and the need to maintain its own ecosystem for its own engineering reasons" Here I could not disagree more. The prioritization that says it is completely OK for distributions to break compatibility and platform experience for minor technical advantages is bogus, misfocused, and one of the reasons for the problems the Linux desktop is suffering today.
@mandrit Hoffman, you make several really good points, but...

...In reality it's all about "use cases"; that's what Linux is really all about. Each distro adapts itself to a series of use cases which are different and not always compatible. The problem is that each use case changes rapidly or becomes obsolete, and new ones appear all the time. If you watch the interview with Linus Torvalds himself, you will see that the real problem is that there are simply too many desktop configurations, hardware-wise and use-case-wise, and this is the difficulty of the desktop.

It's a matter of combinatorics. Let's examine that in depth. How many hardware devices does the desktop need to support: printers, input devices, hard disks, motherboards, chipsets, RAM, etc.? How many different types of users would it have to support: novice, media editor, application/system developer, web developer, office worker, student, web surfer, tablet user, etc.? How many different systems will it run on: server, mobile, tablet, netbook, laptop, desktop, towers, thin clients, embedded, cars, planes, trains, tanks, helicopter subsystems, NASA spaceships, etc.? How many different output devices will there be: vision-impaired aids, braille keyboards, large LCD monitors, CRT monitors, touch screens, televisions, etc.?

Scenarios = X * Y * Z * (...) = HUGE NUMBER!

One can take anything above the kernel level - the items that deal with userspace and use-case-specific elements - throw it away, refresh and start over again to adapt to each specific scenario or group of scenarios, and this is one of the core brilliant aspects of the Linux system. This way Linux desktops can accommodate huge ranges of deployment and use scenarios and adapt in a purely objective, independent fashion. This deployment characteristic is why Linux is in reality used on probably over 70% of the world's computing devices, including mobile, desktop, embedded and servers.

At the developer level, it's part of our job description and basic application design that we have to adapt and make as many deployable combinations of our applications as possible for these different scenarios. We developers tend to be lazy people, and so the unified approach seems great, but it is out of step with the wider reality that faces us in terms of raw numbers. This is like a developer complaining that all computer hardware manufacturers should only use Intel i386s because it's too annoying to develop for AMD and ARM processors and it duplicates effort. It's not practical or even possible for developers to tell system makers or operating system distributors/developers to adjust themselves to our needs because we want to be lazy.

We have to take the tools available, be creative and ensure that anyone using any system or operating system can use our software, and that its operation is uniform across all devices/scenarios. If we can't do that we have failed as application developers; it's that simple, no need to blame distros, manufacturers or anyone else but ourselves. For example, if someone wanted to deploy an office productivity software package over a network at a large corporation and enable the workers to access it over the internet, the deployment, packaging and dependencies would be very different from someone who just wants a word processor to do their homework or write a letter on their laptop. This is a reality, not a hypothetical.

This is why we have .rpms and .debs, yum, apt-get, and even a simple make from source: these are all necessary tools, each great for a particular scenario, and that's why they exist. The utter failure of Microsoft and Apple with their respective desktops is that they both failed to understand this wider ecosystem diversity and forced all devices to use the same systems in a centrally unified approach. Sure, this allowed for a big, single, wide-open application market ("app store"), but one that could only deploy applications for a limited set of scenarios: desktops and laptops. This held back the evolution of different systems and diverse deployments. It would take an entire NT kernel rewrite and upgrade to facilitate such deployments and adjust for major scenarios, and this is why Microsoft and Apple were never able to create any meaningful solutions for the server, embedded or mobile market.

Apple is only just beginning to get into the mobile/tablet game, but their systems - aside from the brilliant artistic endeavours of their terrific user interface design team - are technologically inferior and less capable than comparable Linux-derived mobile systems, and not open enough to operate in the wild efficiently. Also, Microsoft is now facing its own failures head on with the Windows 8 development fiasco, because they realize that their unified approach failed: their kernel is specialized for desktops, and applying that to mobile touch-screen tablets is difficult and requires too much hacking. If they had developed parallel "ecosystem"-based systems that enabled a modular approach, as the major Linux distro communities (Canonical, Debian, Fedora, Suse, etc...) and companies have with their different approaches to different or similar problems, they would have deployed tablet-based OS distributions years ago and be in a good position going forward.

Likewise, the failure of the wider Linux distro community to adopt Autopackage or to develop any meaningful alternative speaks volumes about the core issue here. In reality, the only way to deliver an "uncontrolled market platform" is via a single unified and centralized system - to use a word from the BitTorrent world, a "tracker" system. With BitTorrent, sure, you can use many different clients and feel that you have a decentralized market, but in reality it is still centralized in one core protocol. This would impose centralized requirements on the distro and would deprive the distro of its freedom to adapt itself to its own particular use case, which is impractical and impossible given the high number of unique use-case scenarios and deployment systems in use.
@Ventak HR: OK, your concern seems to be the variety of use cases an operating system faces. Agreed, this is a demanding task... but I don't agree with your conclusion that this must be achieved at the distribution level. You also argue that the variety of distros is required (technically!) to fulfill the needs of all users... I would argue this approach, as a solution for adapting to all potential use cases, is doomed to fail for two reasons. First, complexity and burden: the primary requirement of an operating system, adapting to all potential hardware, is task enough (and undoubtedly not a solved problem) for the existing distributions. They should not be burdened with the additional Sisyphean task of trying to "foresee" the use cases of their user group. I take your combinatoric argument and cite Ingo: "the cathedral can't scale beyond 100 packages" - this has exploded... even with your assumed specialized, reduced user group, the task is too enormous. Second, distributions will always provide only an approximation of the real use case of a user or user group, because individual preferences exist, change, etc.; in the end, for perfect adaptation to the use cases we would need as many distributions as users.

But wait! ...Apple and Microsoft found a way to achieve exactly that, a distribution for every user, in a non-painful way for both sides. How did they achieve that? ...They didn't try to assume the use cases of the users beforehand (that's wasted effort!); they just gave them the freedom to adapt their OS to their very specific use cases with "modules" called "applications"! They allow the users to build their own "user distribution" with the maximum flexibility possible... so every user has his own distribution, all use cases are fulfilled, great! Additionally, the burden of work on the distribution would be (roughly approximated) halved by taking away the ugly decision-making and packaging of applications; maybe we would then have better, even perfect, hardware support? Better polished desktops, better usability etc.... just by freeing up development resources for the distros! Would that not be great?

Therefore, I agree with Ingo totally; this is the only way to go. Let the users make their own software decisions, and take that decision away from the central place called the "distribution", because even the ten-thousand-package Ubuntu cathedral is orders of magnitude too small to satisfy the variety of needs for applications and funky use cases users like to have (noted also, with some examples, by Benjamin Smedberg). Distributions should focus again on their core concern: being an operating system, being an as-adaptive-as-possible layer between hardware and software. That's already task enough.
Sorry, but did somebody claim that Microsoft and/or Apple had produced satisfactory operating systems? Don't think so.

Install a Linux kernel by whatever method you like. Add whatever userland you like. Add some apps. Get on with your life.
+Tony Sidaway in the sense Ingo was describing, the "degree of freedom given to the user for software decisions", yes, both are more successful and satisfactory than the Linux distro approach (not to mention the voting with "feet"... the market share numbers).