Technology: What ails the Linux desktop? Part II.

And yes, I hear you say "but desktop Linux is free software!". The fact is, free software matters primarily to developers and organizations; on the user side, the free code behind Linux desktops is immaterial if free software does not deliver benefits such as actual freedom of use.

So, to fix desktop Linux we need a radically different software distribution model: less of a cathedral, more of a bazaar. The technology for that is arguably non-trivial:

- it would require a safe sandbox enforced both on the kernel and on the user-space side. Today installing a package is an all-or-nothing security proposition on most desktop Linux distributions. Users want to be free to install and run untrusted code.

- totally flat package dependencies (i.e. a package update does not forcibly pull in other package updates; content duplication can be eliminated at a different level, for example in the file system - see the sketch after this list).

- a guaranteed ABI platform going forward (once a package is installed it will never break or require forced updates again). Users want to be free of update pressure from the rest of the system, if they choose to.

- a mesh network for distribution bandwidth. Users want to be free of central dependencies.

- a graph of cryptographically strong application reputation and review/trust status, so that different security needs can be served: a corporate server requires different package credentials than someone trying out a new game on a smartphone. This kind of reputation system allows people to go with the masses (and thus seek protection in numbers), or go with authority (and thus seek protection by delegated expertise) - or a mix of these concepts.
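To make the flat-dependency item above concrete: a minimal sketch of how content duplication from self-contained packages could be collapsed at the file-system level. The store path and function names are hypothetical, not an existing tool - the point is only that identical files shipped by different flat packages can become a single on-disk copy.

```python
import hashlib
import os

# Hypothetical content-addressed store; path and names are made up.
STORE = "/var/lib/flatpkg/store"

def store_file(path):
    """Hash a file's contents and collapse duplicates into one on-disk copy."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    target = os.path.join(STORE, digest)
    os.makedirs(STORE, exist_ok=True)
    if not os.path.exists(target):
        os.link(path, target)      # first occurrence becomes the canonical copy
    else:
        os.remove(path)            # later occurrences become hard links to it
        os.link(target, path)
    return digest

def install_package(pkg_root):
    """Walk a self-contained ('flat') package tree and deduplicate every file."""
    for dirpath, _dirs, files in os.walk(pkg_root):
        for name in files:
            store_file(os.path.join(dirpath, name))
```

Hard links only work within a single file system; in practice this would more likely live in the file system itself (block-level dedup, reflinks) or in the packaging layer, but the principle is the same.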

The Android Market comes close functionally, I think, except for the truly distributed mesh network and the structured reputation architecture - and it's not FOSS either, of course.

I see elements of this thinking in the Gnome3 extensions 'market' - but it does not really handle security nor does it guarantee a stable platform.

Free software stupidly followed the closed-source practices of 10-15 years ago, and we never seriously challenged those flawed closed-source software distribution and platform assumptions. Today closed software has taken a leap, and FOSS will have to react or go extinct. I think FOSS will eventually react - I think free software is ultimately in the position to deliver such software distribution technology.

[ This is part two of the article, the first part can be found at: https://plus.google.com/u/0/109922199462633401279/posts/HgdeFDfRzNe ]
 
It would also sound the death knell for the humongous freedesktop and KDE/GNOME ecosystems.

It's easier to do so if the APIs use the style of Plan 9 instead of crazy, batshit-insane, D-Bus-using monstrosities that are hard to track (for accountability and security).
Ingo Molnar
 
+Paweł Lasek

There's another underlying problem here that I did not want to go into, to keep the post smaller: even the core desktop Linux bits are too fine-grained, too modular - their "cross section surface" towards each other is larger than necessary.

This makes two things hard: keeping the project focused on the goal and giving a good external protocol (ABI) for others to rely upon.

So in a sense they are 'too free' in the wrong places, and not free enough in the places that really matter ...

It's basically freedom not used as a productive tool but applied indiscriminately and randomly.

In that sense it's no surprise that historically monolithic projects like the Linux kernel - 15 million lines of code but only a few hundred ABIs - work a lot better (and find it a lot easier to provide the ABI) than many user-space Linux desktop projects, where often a library is nothing but thin functionality and a wide ABI.

It's not fun to hack on an ABI-mostly project that is only there to limit you!

Firefox came close to providing the right method via their extension market, but then they messed up by providing neither security nor a real platform.

Larger, more integrated projects work better in that sense, as long as they make use of their size, provide a good ABI/platform externally, and actually nurture it as a prime-time member of their ecosystem.
 
+Ingo Molnar IMHO the problem is not the cross-section, but the tightly-coupled nature caused by relying on shared libraries and customized protocols that break OS semantics (D-Bus, CORBA, etc.)

Also, thanks to shared libs and laziness of developers, there are no "optional" dependencies, only "on" or "off".

Meanwhile, they are happily destroying the sensible "common API" of Unix (files, extended nicely in Plan9) with unconstrained pulling of ideas from other systems without adaptation (those ideas might work in a different environment, but don't work properly in *nix, etc.)

If services are exported as filesystems, then with modern security extensions (TOMOYO in any of its incarnations is a nice fit) you can easily lock or unlock parts of the system, as well as provide a common API that can be called by anything.
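To make the "services exported as filesystems" idea concrete, a minimal sketch; the /run/services/netconfig path and the file layout are invented for illustration, not an existing convention.

```python
import os

# Invented mount point for a "network configuration" service that exposes
# its API as plain files instead of a bus protocol.
SERVICE_ROOT = "/run/services/netconfig"

def export_service(proxy_url):
    """Service side: publish current settings as ordinary files."""
    os.makedirs(SERVICE_ROOT, exist_ok=True)
    proxy_file = os.path.join(SERVICE_ROOT, "proxy")
    with open(proxy_file, "w") as f:
        f.write(proxy_url + "\n")
    # "Locking" part of the API is just file permissions (or a MAC rule,
    # e.g. a TOMOYO path policy) -- no client library required.
    os.chmod(proxy_file, 0o644)

def read_proxy():
    """Client side: any language, or a plain shell script, can do this."""
    with open(os.path.join(SERVICE_ROOT, "proxy")) as f:
        return f.read().strip()
```

Anything that can open a file - a C program, a shell one-liner, a script - can talk to such a service, which is the attraction over a custom bus protocol.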
 
+Paweł Lasek

Good points - I tried to make more of a meta argument: if you are trying to provide a really good ABI, you have to actively work on reducing its cross section. No ifs or buts about that. Today's Linux desktop does not even try: we have literally tens of thousands of unmanageable ABIs, and if we break user-land we fix it up and recompile - we have all the code, or so we think. As a result we create constant friction that grinds away most of the third-party ecosystem.

So if you are arguing about how the core could be "more beautiful", you are probably right (although I do argue that there's a need for good IPC and RPC: Unix/Plan9 is historically weak there, and dbus/corba/binder/etc. fill that need).

My argument is that we need to isolate away most of the 'extra' applications, because it's very expensive to make them 'beautiful'. Instead we should concentrate on a much smaller core and give a good, stable, reliable ABI to the rest. Then, almost as a side effect, the core will become more 'beautiful'.

One effect of doing that is that it isolates bad third parties - while still keeping the door open towards incorporating good third-party contributions. It's a quality firewall, in essence.

By trying to make desktop Linux be everything, by trying to control uncontrollable entropy, we have let down our defenses - and despite having an order of magnitude more coding manpower than Apple or Google, we deliver only a fraction of the quality. #justsaying
 
Ubuntu goes in the right direction with their PPAs, as far as distribution is concerned. The true problem is that APIs and ABIs are not stable enough. Too many projects struggled with Qt, GTK and other upgrades instead of focusing on true functionality. Bah, yesterday my colleague wanted to draw some diagram. I suggested Kivio. Pity it lagged behind some API upgrade and is not packaged anymore.

Still, the Linux desktop ecosystem is fairly rich and deserves more respect than it gets...
 
Actually, Plan9 got IPC quite right... as well as being a good base for putting a binder system on top. In fact, the intents in Android (which are the thing I love most about it, both as a developer and as a user) have a direct parallel to a certain program in Plan9. Now replace its config file with a filesystem that an app could use to register itself as a handler, pointing to its own exported filesystem, and suddenly you've got a lot of the "interlocking" done away with, in a "beautiful" way (a small sketch of this registration idea follows this comment).

The thing for me is that most of the important, iffy desktop support APIs like "current network config" (that would include proxy) and others could use just that, with some helper libs on top.

And for ABIs that have to be close, we could have more time to make them work well (X11/whatever, ALSA, etc.)
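A minimal sketch of the registration idea above, assuming a hypothetical /run/handlers directory: apps claim a media type by dropping a file there, and the desktop routes resources by reading it back (very loosely in the spirit of Plan 9's plumbing rules and Android intents).

```python
import os

# Invented registry directory: each app drops one file per media type it
# handles, pointing at its own "open" endpoint.
REGISTRY = "/run/handlers"

def register(app, media_type, endpoint):
    """App side: claim a media type by writing a tiny registration file."""
    type_dir = os.path.join(REGISTRY, media_type)
    os.makedirs(type_dir, exist_ok=True)
    with open(os.path.join(type_dir, app), "w") as f:
        f.write(endpoint + "\n")

def dispatch(media_type, resource):
    """Desktop side: hand a resource to whoever registered for its type."""
    type_dir = os.path.join(REGISTRY, media_type)
    if not os.path.isdir(type_dir):
        raise LookupError("no handler registered for " + media_type)
    for app in sorted(os.listdir(type_dir)):
        with open(os.path.join(type_dir, app)) as f:
            endpoint = f.read().strip()
        with open(endpoint, "w") as f:    # e.g. a FIFO the handler listens on
            f.write(resource + "\n")
        return app
    raise LookupError("no handler registered for " + media_type)
```

Usage would look like register("photoview", "image/jpeg", "/run/apps/photoview/open") on install, and dispatch("image/jpeg", "/home/alice/cat.jpg") when the user opens a picture - no shared library or bus daemon involved.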
 
Which desktop user are you actually targeting? I don't really see the connection between your list of requirements and what the average college kid wants from an operating system. Users want to be able to easily install an app and would not care how that happens internally... tarballs or rpms, why would they care?

Maybe I am stuck in some old paradigm and don't get your vision... but to me it's pretty obvious that the biggest issue preventing GNU/Linux from penetrating into the desktop market is the complete lack of polish and consistency of quality across even the most basic user-level applications.
 
+Stou Sandalski

The first part of the article lists some (but not all) of the problems:

- application availability
- application update delays
- security
- functionality
- polish

The desktop user does not care why a system is better; the college kid picks the one that offers good value for these attributes.

Desktop Linux has obvious deficiencies here, and I argue that they are not caused by a lack of manpower but mainly by wasting manpower on the wrong design.

I.e. I think it's ultimately fixable, just in a different, architecturally deeper spot.

Many people will disagree.
 
+Paweł Lasek

Plan9 never really took off, which makes it hard to judge many of its technologies and how they would affect today's systems.

I also think it does not matter much, because loss of quality is not irreversible: well-managed projects can increase in quality monotonically, so if done right a project will eventually reach Plan9 quality (in its internal design).
 
+Stou Sandalski What we are talking about are issues that make it inherently hard to deliver a polished out-of-the-box application.
 
+Ingo Molnar Frankly speaking, I wouldn't suggest implementing Plan9 directly. However, if we agree to require an extended security model - like TOMOYO, which is much less invasive to the user than SELinux while being more 'featureful' than AppArmor (afaik) - we can enable features that Linux has supported for a long time already, some of which are already partially in use (gvfs mounts a per-user FUSE filesystem).

Use the filesystem as the simple IPC backend (with some libraries to make it easier), and leave complex ABIs to the stuff that really needs them. All that, while shedding the ugly bikeshed of freedesktop.org ("Let's destroy the few good parts of X11 and leave the bad!") and KDE/GNOME ("let's create FUCKHUEG dependency chains that are 90% separate from each other, but one application will pull it all in!"). Make the interop simple and fun, and suddenly it might be much easier to deliver a cleaner, nicer system. And I mean nicer for the end user. Who the f*ck wants to download a complete Desktop Environment they don't use just because they wanted to add Japanese language input? That happened to me - I, fortunately, knew how to recompile the package. A normal user would go "Dafuq, I need a 3GB download for that one program, and what are those cryptic names?"
 
+Paweł Lasek

Agreed - and there's many similar stories.

Step one would be to remove pressure/churn from desktop developers: by opening up towards a truly free package system that necessarily allows compartmentalized third-party apps beyond the trusted core packages.

That is a non-trivial step both technically and socially - and until it's taken, desktop developers are caught in the treadmill of doing too much and being spread too thin: spread thin among the many variations of desktop Linux, and spread thin among the many applications that are owned collectively but not so much individually, which deprives them of much-needed attention and polish. It also causes the core to get a lot less attention than it should.
 
+Ingo Molnar That's one of the reasons I'm arguing for making such a simple but open IPC (that is, one I can use without much training, even from a script) - it will reduce the dependency hell which drives the need for complex packaging, because the current approach is "link with this library - oh, and since other methods are hard, let's make it a hard dependency".
 
Flat package dependencies do not work because they do not allow relying on new features, so new features can never be used.

The real problem here, I think, is lack of manpower in the open source community. Or parts of it, anyway.
 
Nobody in their right mind would "want to be free to install and run untrusted code." I personally look at the source of anything I install, and if it is not open source, I virus-scan it. However, not everyone is a developer, and that is where the security issue comes in: average, inexperienced users.
 
A microkernel might reduce dependencies, but then system performance would suffer...
 
+Frank Kusel A microkernel would actually increase dependency hell - right now, the kernel is self-contained enough that it's nearly completely unaffected by whatever idiocy happens in userland.
 
+Paweł Lasek Hmm. Standardising all the interfaces the Linux kernel exposes sounds like an interesting proposition - just not sure it would be easy. If I think about the hassles with sound system standards, graphics standards, etc., it would take a huge amount of energy to accomplish.
 
Genode (http://genode.org/) uses a microkernel and virtualization to deal with security.

NT, Mac OS and iOS use microkernels. Virtualization is used on NT - e.g. running XP applications in a virtual machine on Win7.

VMWare ThinApp and others use virtualization at the application level. An application uses more disk space because it doesn't share libraries with other applications but you don't have to worry about dependency hell.

http://tunes.org/~unios/oskernels.html#nt

http://technet.microsoft.com/en-us/library/cc750820.aspx

http://www.roughlydrafted.com/2007/07/13/iphone-os-x-architecture-the-mach-kernel-and-ram/
 
+Rene Sugar NT doesn't use a microkernel, it's a hybrid like Linux and OSX (in practice).

btw, OSX (which encompasses both desktop and iOS) is actually based on the single kernel that nearly killed microkernels - Mach. In the oh-so-common move of putting the unix server back into ring0, they called it XNU ;)

+Frank Kusel I didn't mention exporting kernel interfaces - those are actually quite tightly controlled and kept rather stable for anything that doesn't interact with devices directly.

What I am advocating is getting rid of cancerous shims that put important parts of the system as completely "black box" elements, as well as killing off the use of shared libraries as method for "general" APIs, as it encourages the spread of multiple incompatible versions that require tight binary compatibility.
 
Ingo, you talk out of your s*****r when it comes to mainstream users and desktop requirements, and I see no connection whatsoever between this article and what an office lady or even an internet power user wants from his or her PC! They need applications that work and an OS that is fast, stable and secure... your bits and bytes can be kept to yourself.
 
+Ingo Molnar Tangentially, doesn't your argument - with which I concur - imply that the non-core part of the Linux Kernel (drivers) is headed for the same kind of quality problem? Since the lack of stable driver API/ABI from the kernel encourages all drivers to be in the mainline? In fact, top kernel developers frequently urge vendors to mainline their drivers. Isn't the kernel trying to own 20,000 drivers? Will that scale eventually? Doesn't your prescription for the desktop also apply to the kernel?
 
+nash oudha Once again... We are talking about "How do we make the application environment go this way?". The "normal user" won't be able to describe it like that, but would instead go "I wanted to install this simple app to make a photo album of my granddaughter's photos, and it wanted me to download 3GB and ate up my poor mobile internet!"
 
+Paweł Lasek But moving towards the Mac/Windows model of "simple packages" means larger downloads, not smaller ones.
 
I don't believe standardization is possible or even wanted in the open source community. To standardize is to limit options for development. I'm for leaving it wide open and letting the best rise to the top. I do not think Linux fails to provide applications; in fact many of the most popular Windows apps were Linux apps ported to Windows. The main problem with Linux continues to be lack of support for popular hardware, Magic Jack, etc.
 
The fact is that the community doesn't or can't compete with the advertising and marketing approach of commercial software. People have been brainwashed into either the MS camp or the Apple camp; it's one or the other, nothing else.
 
+Stan Warren I'm thinking less of "standardization" in the sense of "careful specification", and more of making a very simple way (think "slightly more advanced than pipes, sometimes less") for software to interact with the things that are critical (IMHO) to a modern desktop.

The current methods lead to a lot of fragmentation that doesn't cross-pollinate.

As for hardware... I really haven't had a lot of problems since a looong time ago...
 
+Chris Lichowicz That's because for non-programmers, who need to use desktop applications, Linux often isn't very good. Like I say, lack of manpower. Even Google Docs can't import MS Word documents correctly, with all the resources at Google's disposal - therefore it's hardly surprising that LibreOffice can't either. And LibreOffice is among the better applications available for Linux.
 
+Robin Green I use Open Office to translate different versions of MS Word documents for the Windows users that have different versions of MS Office.
 
And I'm pretty sure that would be because MS deliberately changes the format to make things awkward (just as Windows 7 complains about GRUB when installing service pack updates).
 
+Paweł Lasek I think BSD was an attempt to answer the problems you listed. I agree with you to a point, but I would prefer to see both approaches used: the wild, anything-goes, wide-open approach, and several branches that try different specifications - and may the best one win.
 
Microsoft holds the monopoly. Linux wasn't aggressive enough to defeat such a giant within this industry. Look at Apple - can you see the difference? Defeated only once, it has proven itself more than worthy of sustaining this market. And yes, to dust with Microsoft and its "old" way of creativity. Apple is King!
 
What I like about Linux is that you can take someone's work and use it in something else or "improve" it without a team of lawyers on your side.
 
+Robin Green - Linux desktop apps no good? Son, what are you smoking? I only have one Windows machine; I run 4 other computers with Linux. I use the Linux computers 95% of the time and the applications work. I think the bigger problem is that most people don't even know how to use the OS, whether Windows, Linux or Apple - much less do they take the time to learn how to use the apps. Windows people just want to use, not think.
 
+Jeanine Hokaday - while expounding the virtues of Apple, it's pertinent to know that Apple runs BSD, which is a unixy version of Linux. Classic (OS 9) was the last true Apple OS. In essence, an Apple computer is just a big over-priced silver or white Linux computer.
 
+Eric Faccer Great project, Eric, with good goals. Wish I weren't spread so thin right now.
 
+Chris Lichowicz The Apple desktop is just GNOME with Mac icons. I agree about the overpricing; a person is much better off with a "hackintosh" instead of a "macintosh".
 
+Chris Lichowicz - BSD and Linux have no historical technical relationship. While there's been a lot of cross-pollination in current development, and both are unix-like OSes (Linux is a reimplementation, the BSDs are an evolution of the original codebase), saying BSD is a type of Linux is like saying a Ford is a type of Chevy.
 
A couple of comments above, Ingo listed application availability, application update delays, security, functionality, and polish. Let me say that on the first three, Linux does better than Windows. My kids use a dual-boot Win7/Ubuntu computer. While they have no problem installing Windows games from CD, they have no clue how to find anything else, yet they do reasonably well with finding and installing apps from the Ubuntu repository. And surely agreeing to updates on Ubuntu is safer and easier than finding patches on the internet and installing them.
 
+Christoph Martel I never really understood why plan 9 died out. Perhaps it was too outdated hardware support-wise when it was finally open sourced?
 
no, REALLY? (what a revelation)
 
+Paweł Lasek I installed XUbuntu the other day. If you search for "choppy video ubuntu" on your favorite search engine, you will find a lot of people having hardware problems. If you select the 3rd party software option, XUbuntu fails to install. Some of the 3rd party software it is trying to install has become paid-use only. The NVIDIA drivers wouldn't download using the additional drivers application. If you go to the update application, it says the NVIDIA drivers aren't available. The latest release didn't come with a control to select the sound source so sound didn't work until I downloaded a control and selected a sound source. With the choppy video playback, it isn't usable for a lot of what an end-user wants. An end-user doesn't want to deal with that. They'll go to the Apple store and pay too much for memory to avoid dealing with that hassle.
 
If you need funding for a distro around this concept, please initiate a Kickstarter - I for one will support you. Ubuntu and Red Hat need a walled garden as part of their business strategy. What you are talking about can only happen as a separate distro - maybe the Cinnamon desktop guys will team up ;)
 
I agree with the first three points without reservation. But the fact on the ground today is that copying /bin/true from Fedora 16 to Ubuntu 11.10 results in: ./true: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found. The responsibility for that design decision can't be put on the current Linux desktop / package management developers. +Ingo Molnar, are there enough Linux plumbers around willing to do the work so that GLIBC can provide a way to target older ABIs, like TARGET_ABI on Android?

The success of Dropbox, Skype and Google Apps also makes me think that most users are really quite happy with big centrally hosted services. It's just that free software providers can't offer these services without a way to cover the operating costs.
 
Pawel - I think your heart is in the right place, but I think your solutions will do more harm than good, and can be mostly countered with this XKCD comic: http://xkcd.com/927

"I wanted to install this simple app to make a photo album of my granddaughter photos, and it wanted me to download 3GB and eaten my poor mobile internet!". It's not 3 gigabytes, it's hundreds of megabytes, and that's what happens on Windows & OS X: simple photo apps are hundreds of megabytes. And it only happens a couple of times. Once you've pulled in all of KDE and all of Gnome you never have that problem again, unlike on Windows or OS X where it happens every time you install an app. And then you slam freedesktop.org and d-bus and propose "better" solutions which will very likely have different problems already solved on freedesktop.org. Face it: freedesktop and 'd-bus' have "won" on Linux; the only way to make things better rather than worse is to work with them rather than against them.

Ingo has it nailed. Flat dependencies on a small but stable core are the solution. This means that applications include all of their dependencies outside of that small stable core. Yes, your trivial photo app becomes many hundreds of megabytes. But deduplication can happen on both the network and filesystem side to reduce that footprint where possible. It means that if you change something else on your system, none of the dependencies for that photo app are touched. Yes, it doesn't get automatic security updates for its libraries, but that's a canard: a significant fraction of vulnerabilities are at the application level, so you need a good application update channel regardless. The important thing is that the knowledge of which libraries applications are using is widely available, and that application distributors get properly notified & chastised if they don't update.
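A minimal sketch of that last point, assuming a hypothetical per-app manifest of bundled library versions and a toy advisory feed (both invented here): the point is only that flat packages make the bundled-library inventory machine-checkable.

```python
# Hypothetical manifests of bundled library versions and a toy advisory feed;
# neither is an existing format.
BUNDLED = {
    "photoalbum-1.2": {"libpng": "1.5.7", "zlib": "1.2.5"},
    "supergame-0.9":  {"libpng": "1.5.4", "zlib": "1.2.5"},
}

ADVISORIES = {
    "libpng": {"1.5.4"},   # versions with known vulnerabilities
}

def audit():
    """Flag applications that bundle a library version with an open advisory."""
    for app, libs in sorted(BUNDLED.items()):
        for lib, version in libs.items():
            if version in ADVISORIES.get(lib, set()):
                print(f"{app}: bundled {lib} {version} has a known vulnerability")

audit()   # -> supergame-0.9: bundled libpng 1.5.4 has a known vulnerability
```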
 
"Flat dependencies" work well for Android/ iOS, but completely getting rid of it may create more problems than it solve. What would people do with KDE apps, ship the entire (huge) QT/KDE runtime with each app, and then waste resources to deduplicate it? It can be done, but it seems too extreme. Coordinating a common QT/KDE runtime dependency for all these apps (without falling in a dependency hell, and with good support of versioning) wouldn't be impossible. Surely a true open source community can do some things different.
 
Diego: deduplication is easy, cheap & a solved problem. Git is the most obvious example: if the same file exists in two different places in the same repository it's only downloaded once and the .git directory only stores a single copy of it.
 
I am wondering how Wayland might tie into this?
 
+Ingo Molnar I have my doubts that having a stable and agreed-on ABI is doable on Linux. There have been some attempts to achieve this in the past, with very limited success.

The ideal IMHO would be something like Nix ( http://nixos.org/nix/ ) - dependencies are automatically and tightly defined, and you can install multiple versions of the same app or library on the same machine, without any conflict at all (!). Use this on top of a (semi-?) P2P distribution network, and it would solve pretty much everything you mentioned as blocking Linux today.
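A rough sketch of the property being described - not Nix's actual store layout or hashing scheme - just to show why multiple versions can coexist when packages live in uniquely named directories and are referenced by exact path rather than by a global name.

```python
import hashlib
import os

STORE = "/store"   # stand-in only; not Nix's real layout or hashing scheme

def store_path(name, version, depends_on=()):
    """Derive a unique directory per package variant.

    Builds that differ in version or in any dependency path land in
    different directories, so any number of versions coexist."""
    key = f"{name}-{version}:" + ":".join(sorted(depends_on))
    digest = hashlib.sha256(key.encode()).hexdigest()[:12]
    return os.path.join(STORE, f"{digest}-{name}-{version}")

# Applications refer to dependencies by exact store path, never by a global
# name such as /usr/lib/libfoo.so, so two Qt versions cannot conflict.
qt47 = store_path("qt", "4.7")
qt48 = store_path("qt", "4.8")
app  = store_path("kphotoalbum", "4.1", [qt47])
print(qt47, qt48, app, sep="\n")
```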
 
Maybe it's time to look again at how FreeBSD has done it for the last decade?
 
You know I agree with you. I have made many of the same points on many previous occasions. We are done as a desktop proposition entirely if we don't sort this stuff out 5 years ago...
 
I gave up on using just a FOSS desktop. I run a Mac at home for consumer stuff, and a growing number of Android devices. They understand a stable platform and independent distribution, and they are not so "holier than thou" about being Free to the point of uselessness. I want Flash. I want stuff other people have. It is not acceptable to say "yeah, but we're more free so just suck it up and live without". Consumers won't do that either - that's why we're at 1%. It has nothing to do with the UI and everything to do with "but I can just install binaries on a Mac or PC that I download and they work".
 
+Mourad De Clerck That Nix looks very interesting, and I look forward to experimenting!

Has anyone had experience with GoboLinux (http://www.gobolinux.org/)? It looks to be abandonware at this point, but its goal had been to reorganize the Linux file system to store packages in individual directories, allowing for multiple versions of libraries as well as apps in a sensible hierarchy. Last I paid attention, their challenge had been the creative use of symlinks to maintain their filesystem without breaking Linux... but it seemed to have created a rather Apple-esque environment in Linux, allowing package installation by mere copying, with only some clever hacks.
 
What Ingo is proposing is a great idea, but it will cause much pain and inconvenience for the existing ecosystem of applications, because most if not all would need to be rewritten at least partially to accommodate the ABIs/APIs.

There have been attempts to deal with this, most notably Jolicloud and Chrome OS. Although these are not really where you are going, they point to the same principles: all the local applications attach to the cloud, a common structure of web apps that always work correctly and do not need configuration (from the user's perspective) or special libraries or dependencies. All these problems are solved by the mere fact that the web apps are maintained by their respective teams and only require a browser, or a thin client.

Can we do something like this but target the local desktop rather than remake the world?

A possible name for this project would be something like "Universal Linux Compliance".

Define what a desktop environment requires with respect to supporting the largest number of applications the user is likely to use. This would include which libraries and library variants are required. Define the kernel requirements and the modules that are needed. Choose applications that use shared objects and try to solve the problem of dependencies on various libraries in the manner described here. The presence or absence of a shared library used by an application can be found by a program like "ldd". Could a kernel module be developed that does something like this: the application starts, the kernel does an ldd-like scan, and when it finds a missing library it calls an outside module that automatically installs that version of the shared object? This way the user is only inconvenienced the first time the application is run, because it will be slow to start, but after that the application will run normally. The kernel can keep track of the shared objects needed by this version of the program; using a one-way hash and the file size it can determine if the application has changed and deal with the libraries again. When an application is changed or removed, the shared objects can be assessed as to whether they still need to be on the system, based on some reference-tracking algorithm.
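A user-space approximation of the mechanism proposed above (leaving the kernel-side hook aside): this sketch only shows the detection and fingerprinting parts, and the package-manager call-out is left as a comment.

```python
import hashlib
import os
import subprocess

def missing_libraries(binary):
    """Shared objects the dynamic linker cannot resolve for `binary`.

    `ldd` prints lines such as "libfoo.so.1 => not found" for unresolved
    dependencies; this is the user-space stand-in for the proposed
    kernel-side scan."""
    out = subprocess.run(["ldd", binary], capture_output=True, text=True).stdout
    return [line.split()[0] for line in out.splitlines() if "not found" in line]

def fingerprint(binary):
    """One-way hash plus size, to notice when the application itself changed
    and its dependency list needs to be re-scanned."""
    with open(binary, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest, os.path.getsize(binary)

# A demand-loading wrapper would call missing_libraries() on first run, hand
# the result to the package manager to install, cache it keyed by
# fingerprint(), and re-scan only when the fingerprint changes.
```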

Although this is a small portion of what you are proposing, it may be easier to achieve while we are moving the ecosystem towards the main goal. We could use similar approaches to other problems like this that cause dependency loading.

Each distribution that "complies" with this approach must prepare, for each version of the distribution, a package list that contains all the listed objects that are "deemed" required or demand-loaded. Each application that is "compliant" must reduce its dependencies on "non-compliant" objects or find other ways of dealing with them. The "compliancy" project would continuously maintain the "lists" of required and demand-loaded objects, and each distribution and application would be responsible for compliance. An application or distribution could lose compliance if it does not keep up. A problem with this approach is the need for a "central" command that determines what is required, demand-loaded or not present.

Just a suggestion.

Thanx

Julian
 
I don't think it's an issue of the Free Software desktops following a closed source model. Rather, the "everything compiled at once by a single distributor, now wait 2 years for new versions of everything" status quo was a necessary response to the RPM Hell / DLL Hell of the late 90s. With the state of the code at the time, you couldn't install stuff yourself at all unless you knew how to debug gcc error messages. So the solution was to provide a middle-man that could debug gcc error messages and did that for you, and called the result of their work a "distribution".

That, of course, was a completely appropriate hacker response, but not a consumer-friendly response. "Stop breaking shit so that I have to care about gcc in the first place" is the better response, but hackers don't like doing that because "breaking shit" is fun, and is how innovation happens. Freezing APIs and ABIs for 8 years so that people can write applications that don't need to be re-targeted at every version of every distro causes stagnation, and that's the best way to scare off volunteer developers. (But, a great way to get commercial developers. Double-edged sword.)

I'd love to be able to update KDevelop or Amarok on a different schedule than my KDE-based desktop (Kubuntu), or KDE itself. But unless I want to get friendly with gcc, I can't. Pro tip: If you need gcc installed to do something, and you are not a C/C++ developer, then you fail and the system is broken. Sorry, it's true.

Similarly, I'm a PHP developer so having MySQL and Apache on my laptop is normal. For anyone else? They shouldn't even know what those things are, much less have them installed.

That's really what Ingo is getting at, I think. Divorce the operating system distribution channel from the application distribution channel. They should not be the same thing. They should be able to move freely of each other, not be mashed into a single blob of packages, 98% of which only about 8 people care exist because they're just dependencies for other stuff.
 
Following up, this is an area that I think Android does get it right. Each application is sandboxed to its own user account. It does not have access to screw with other applications' data. That makes it reasonably safe to install stuff. (Reasonably, not completely.) Also, there's no adding of shared dependency libraries. If you have 5 apps that all use libxml internally... suck it up, link them statically, and move on. Memory is no longer $200/MB, and the trade off of eliminating DLL hell (and the epicycles of distributions on top of them) in favor of letting applications be truly stand-alone (something OS X figured out a decade ago) is worth it in most cases today.
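A minimal sketch of the per-application user account idea: a launcher (which itself must run as root) drops to a dedicated account before exec'ing the app. The account name is assumed to exist already; real sandboxing of the Android kind also involves filesystem permissions, policy enforcement and more.

```python
import os
import pwd

def launch_sandboxed(app_path, app_user):
    """Run an application under its own dedicated user account so it cannot
    touch other applications' data.

    `app_user` is assumed to be a pre-created system account such as
    "app_photoalbum"; creating it is the installer's job."""
    entry = pwd.getpwnam(app_user)
    pid = os.fork()
    if pid == 0:                        # child: drop privileges, then exec
        os.setgid(entry.pw_gid)
        os.setuid(entry.pw_uid)
        os.environ["HOME"] = entry.pw_dir
        os.execv(app_path, [app_path])
    return pid                          # parent: child's pid for bookkeeping
```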
 
Not upgrading libraries with a security vulnerability for all apps is an epic fail. I would have thought that someone working for Red Hat would understand this, and realise why the current model used by Red Hat and almost all other distros is superior to this piecemeal process. I only know of one BSD, DragonFlyBSD, that stated an aim to one day support installing multiple versions of any package, and I don't know if that was ever implemented. As I said, it's a bad idea from a security perspective.

Also, Linux sometimes ships with unannounced security fixes, and who knows how many other packages do the same (perhaps sometimes they fix security holes without even noticing that they were security holes), so it's best to just run the latest packaged version of everything.
 
Many parts of this discussion remind me of the problems I face in my job as application architect, doing application and enterprise architectures. So I might as well throw in a different angle of looking at the problem.

Having many applications in an enterprise often results in many dependencies between them. Like "We have to upgrade our application X because the new fancy stuff Y we bought only runs with the new version". "We can't update application X because it's mission critical and costs Z amount of money (and application V depends on the old version)" ... and so on.

The solution that has been proposed is service-oriented architecture. That actually can solve this problem when done right (but most don't do it right unfortunately). The difficult part is defining the services. Or in terms of an operating system think of libraries. Some best-practices (which I think are most relevant here) are:

- Implementation hiding: a service (or library) should have an interface that does not depend on its internals. A good example of this is the POSIX file interface. There are many, many different implementations, but you can use all of them through a single interface. A bad example would be the /etc/passwd file. Early *nixes just used the file. Now its format is cast in stone, even its permissions, and the actual passwords have moved to /etc/shadow. (PAM modules help here - but are they cross-platform across *nixes?)

- Separation of concerns: Make a service do one thing - and do that right. Mixing up concerns couples the update cycles of all applications that depend on even one of the concerns. For example, a multi-language support library does not need to be part of a GUI framework - if the GUI framework just uses this library. This way around, it can also be used by other frameworks, browsers, applications, and so on. (Bear with me, I have no idea how that actually currently works on Linux - I just made that up. But I hope you get the idea.) libc is another such example. It has many different parts but you get them all in one piece - fixing a bug in, say, date handling requires you to update all the other parts as well.

- Supporting different versions at the same time: Either via adapters or separate deployments, support multiple versions of a service, so that updates of other applications are not coupled to updates of the service. Libraries can be loaded in different versions - but it gets messy when both access - and change! - the same stored data, which then should also have a version number. Even for something as trivial as an emulator snapshot format (when going into FOSS) there can be compatible and incompatible updates: compatible updates can be read/used by (some) older versions, but incompatible updates require new versions. The POSIX file system is a bad example here. It is horribly outdated for new requirements, but where is an (official) new standard you can rely on? It's so cast in stone because everyone uses it.

- Resilience to failure: An application should not just quit when a service is not available. There should be a fall-back solution. That could be using a different version of the service, or for example queuing outgoing messages, etc. A good example these days is browser plugins: if the browser doesn't have a codec or plugin installed, it asks the user whether it should install it - and then shows the media.

- Dynamic service lookup: to enable the advantages given above, applications should look up the service implementation for just the services they need (separation of concerns) by requesting a service interface (implementation hiding!) in a specific version (different version support!), and support getting - or not getting - a result (resilience to failure). This implies some kind of service registry.
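To illustrate the last two points (dynamic lookup plus resilience), a minimal sketch with an invented in-memory registry; a real registry would be a small system service with persistent registrations.

```python
# Invented in-memory registry mapping (interface, version) to an
# implementation; the entries are purely illustrative.
REGISTRY = {
    ("spellcheck", 1): "hunspell-backend",
    ("spellcheck", 2): "hunspell-backend",
    ("notifications", 1): "desktop-notifyd",
}

def lookup(interface, min_version, max_version):
    """Return the newest compatible implementation, or None so the caller
    can degrade gracefully instead of refusing to start."""
    for version in range(max_version, min_version - 1, -1):
        impl = REGISTRY.get((interface, version))
        if impl is not None:
            return impl, version
    return None

svc = lookup("spellcheck", 1, 3)
if svc is None:
    print("spell checking disabled")        # resilience: fall-back path
else:
    print("using", svc[0], "version", svc[1])
```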

Of course once you have such services, that does not mean your dependency hell goes away. But it decouples services from each other in a ways that makes it more manageable. In an enterprise you usually then have some kind of governance about which service may use which other service. And this is of course a difference to the bazaar approach.

I think a flat dependency list, as proposed, would make those applications some kind of monolith again. The system is supposed to de-duplicate the dependencies with other programs. But that really only works with really well-defined ABIs. For example, how are you going to ensure that library X.1.2.3 delivered by one such application Y is the same as library X.1.2.3 requested by application Z, so you can de-duplicate it (a namespace problem - imagine I develop a private "libc" that my own application relies on)? If they are the same, is the library compiled with the same set of switches, optional functionalities and so on? So you de-duplicate only identical files - but that really only works for applications from the same source (or you need to make sure the compile process creates exactly the same binary - or you need to ignore e.g. build-time timestamps, debug info and whatnot). So I think de-duplication won't work. Also, where will you put the boundary? Will the X server be included in the dependency list and delivered with the application? I guess not. But what about KDE, or Gnome, or any of the other GUI frameworks? Yes, those programs would be HUGE, and as de-duplication won't work (in my opinion), it would suck resources...

What would be an alternative? The application should request each service it needs (in a separation-of-concerns manner, i.e. with smaller granularity) in its specific version, from a globally controlled namespace - here comes the app market again... It would have mandatory and optional dependencies. Dependencies would be defined as an interface, and optionally a preferred implementation. The installer could just download the missing mandatory deps and ask before downloading the missing optional dependency implementations. Using the information about what is already installed and what is available in the marketplace, the installer could probably select the "best" implementation.

A global namespace (e.g. including the name of the respective marketplace for each dep.) would give you the uniqueness to ensure you have the right dependency. "Service" ABIs/APIs in the above sense would allow to reduce the resource usage.

On the other hand (putting on my hat as an application architect again), I don't think an application developer would use such a scheme if he does not have control over the implementations of the services being used. It's probably a problem of boundaries. No one (or not many) wants to explicitly control the version of the X server. For KDE or Gnome, it is probably already the major version number and probably a minimum minor version number. But for e.g. an encryption library it would be the full version number, and it would probably be delivered with the application. And that's not even talking about runtime containers for each application within its "boundary"...

Ok, I don't have a real solution either, but hope I added some useful info to the discussion.

André
 
+Larry Garfield The Android model does still have some problems. An app has to list permissions it could use and is then granted access to all of them. The problem is that it has to list everything even if a particular user is not going to use a particular permission. The alternative is dialog boxes on first use of permissions which has many problems too.
 
+André Fachat Yes. You make a good point about multiple versions of a library, but in an enterprise you can just use VMs to separate apps, or as you say, a SOA. Individual home users would still be better served by the existing model of compatible library upgrades for all packages.

Debian has had optional dependencies for as long as I can remember, and now has a program (aptitude) that offers you a choice of solution to dependency resolution problems. It seems that Red Hat has never seen optional dependencies as a priority. I guess it's better for them to be driven by apps, because the app can advise the user or admin if and when they are likely to benefit from installing an optional dependency.
 
Roger Binns: I totally agree that the Android model is far from perfect. I'd also love to see it offer a finer degree of detail, as any app that touches the network uses the "access any network resource" permission; so even if I know something is ad supported, I have to give it access to the entire Internet, not just an ad server, which is a security hole.

It's still a more flexible and modular system than desktop Linux distributions.
 
> totally flat package dependencies (i.e. a package update does not forcibly pull in other package updates - content duplication can be eliminated at a different [for example file system] level).

Maybe another way to approach this goal of simple, non-breaking install is chrooting each app with its dependencies, like NixOS does?
 
How about ChromiumOS + Native Client? It provides:

- application availability
- no application update delays
- great security and sandboxing
- functionality
- polish
- the power of the web in your native apps
 
Linux for the Desktop!!! but which kind of desktop?

I believe you have two kinds of users. Two opposite sides.

On one side you have a bunch of hackers and wanna-be-hackers who like to read Makefiles and want to compile stuff. They like to go to the terminal and create scripts. They like to customize stuff. They like to debug their operating systems. They like to understand how their systems work. They don't like to have to reboot the machine on every update. They like to sit at their computers for hours and after a couple of hours know more about their computer. They like to feel secure because their operating system is the most secure and crackers have trouble creating worms for it. They like the freedom of using any code they want on their OS. They don't like controlling technologies and such (DRM, etc). They don't want to pay $2000 for a $500 computer. They don't like to change hardware if they believe the improvement is not worthwhile. Their experience with technology is enriching. They like to be in control.

On the other side you have the majority of the population, let's say 90%. People who use computers regularly. They play on them, they work on them at an application level. If something fails, they want to be given instructions on how to fix it, not explanations of how the system works. If they have to reboot a machine 5 times, wait 15 minutes for their machine or server to boot up, reinstall their operating system or even buy a new computer, they will. They just don't want to be bothered. They want their computer experience to be simple. Their understanding of computer security is to have a good antivirus, and pay regularly for it. They don't care about controlling technologies (DRM). They go to the supermarket and buy a new technology toy once in a while just because it looked fancy. They believe computer technology is hard, expensive and meant only for a few. They want someone else to control and fix their operating system.

Some people change sides throughout their lives. Some other people work on one side and go home and relax on the other side. Some of us are in between, getting the best of each.

Android, MacOS and Windows target the second kind of user and neglect the first kind. So which desktop user community is Linux targeting? At first it was the first one, and now it is the second one.
Can you seriously believe that you can target 90% of the desktop users and not affect the other kind? Don't you realize this is the main reason most of us stopped using Windows in the first place?

Thank God, Linux is free enough that any major change affecting the main user base won't go mainstream.
 
As a philosopher of mind and language I am entitled to make the abstract meta-argument here. Long-term solution (could be 50 years, if the right decisions were made in educational theory and practice): everybody is, as a matter of course, a top-notch programmer, because programming was taught from the age of two. Then free could be free, with no need to dumb down for the "average user". This fantasy means programming is not a specialized activity but the fabric of life, or as Wittgenstein said, "a form of life". So, as in any good abstract thought experiment, how do we back this down to something doable right now? WHY CAN'T WE PUT MORE RESOURCES INTO EDUCATING THE COMMON FOLK IN CS AND IN PROGRAMMING?
 
I DO_NOT_WANT every app to run its own version of every library. That would turn Linux into a resource-hungry monster. Slickness is a big part of Linux distros.
 
+Larry Garfield While you might have the patience to demand fidelity from the permissions, others will not. Are you going to count DNS traffic against an app? Ad servers are not clearly separate from other servers, nor are they politely and correctly labelled. What about an app that gets its ads from the same server it gets the leaderboards from? In reality the problem is not which servers data is sent to/from, but what that data is. A gif of a cat doesn't matter. A token representing your bank account credentials matters a lot. There is no easy way of nicely encapsulating this in an API or user interface.
 
+Chris Lichowicz Case in point. I opened my email today and all my emails were gone. All. I fixed that problem, and then I found that search in KMail is broken, with bugs like this: https://bugs.kde.org/show_bug.cgi?id=281227 unfixed for 6 months.

I think I'm going to go back to notmuch. But an ordinary user would switch operating system and go back to Windows or Mac.
Spuk O.
 
There are possible solutions right now to some of the problems pointed out, like isolating apps in chroots or VMs or user accounts (Android-like), or a combination of those... but I think the root problem is the failure to set and actually follow a few standards (FHS, FSB, ...); at some point every distro tries to be a laboratory for some group's idea... I also agree with +Paweł Lasek about the API disease: some recent stuff was sucked in from non-*nix systems and got too important without enough care to adapt it to *nix.
 
+Paweł Lasek, there are still lingering hardware problems. I'm the proud owner of a brand-new Matrox G550 graphics card, dual DVI out. Driver has been open-source for nearly a decade; just not the docs. Some trivial register on that card needs to be set to say "video out to DVI connector not VGA connector" but no one knows what it is and no manufacturer is telling. So it doesn't work :-(
 
A big problem is that neither system nor application developers understand or entertain the idea of atomic system functions or application elements (just drop in "a container," everything runs inside that container; call it a directory if it's easier to visualize). Linux in particular has destroyed many UNIX concepts and working systems of 20+ years ago, adopting the Windows-style disaster of "dump an application into system directories and toss in a few libraries" without regard to the end-user experience. Slack development guidelines to prevent the need to simply update your .enviro-script with a new path to a new container.....
 
The infighting I see here is a perfect example of what the OP was talking about. Linux is and will remain in a distant 5th or 6th place until there's some cohesiveness and direction in the community.

This conversation is happening about 10 years too late. Which is an improvement; finally the Linux community is advancing faster than real time - they just started with a 30-year disadvantage.
U Das
 
I just want to speak about what I find good about the present Linux desktop:
1. A full suite of software on first install. No need to download your own for each bit of functionality.
2. For many distributions, we can get a bunch of options for a given functionality (e.g. VLC / MPlayer / xine), but one that is directly maintained by the distribution. A mix of choice & "recommendation". The "recommendation" has presumably included integration testing with the other "recommended" packages.
3. A single updater, meaning no per-app logic for updates. Restart the app - or, for kernel updates, the OS - once per impacting update cycle. Fixes for the same issue come in together for most packages.
4. I rather like the dependency concept. The download + apply of a patch for a single library can fix multiple apps! No need to wait for each app's fix to come in at a different time.
5. Some idea of when to expect major upgrades. Easier to plan the desktop life cycle.
 
The conversation above is getting derailed because people are jumping to solutions before fully understanding the problems. This leads to a fractured and argumentative tone for this all, with attackers and defenders. Needlessly polarizing.

The problems, as I see it, are not about app distribution, or ABIs, or libraries. The problem is "sysadmin". It's what made Windows horrible to use: simple shit suddenly stopped working, for unknown reasons, and it was impossible to fix. So, for me, for example: broken sound (after a recent reboot, sound stopped working in Chrome; it worked great in Firefox. Days later, I discovered that in the sound server, the slider bar for the Chrome master volume was set to almost zero. How did that happen? I was lucky to find the answer so quickly; it might have stumped me for months. So WTF?). Network problems (I have two ethernet cards. After a reboot, the system got confused about which was which. Networking was mysteriously broken till I swapped the cables. WTF?). Graphics issues (it seems that Unity/Gnome3/lightdm hate my 3-year-old nvidia graphics card. Gnome2 works great. WTF?). Boot issues (I run raid1, and one of my filesystems is on lvm. This is completely untested/unsupported in Ubuntu: there's initrd breakage. Buggy udev scripts. Hanging/crashing plymouth. WTF!)

Am I alone on this? I don't think so. Prowling the RedHat or Ubuntu help/support/ask-a-question sites/chats/wikis seems to show zillions of newbies who can't get basic things to work. Things that I loosely categorize as "sysadmin problems". What's holding back widespread Linux adoption? The difficulty of using LibreOffice? No. A non-intuitive version of firefox? Uh, no. Is the Gnome2 panel so strange and bizarre that we had to invent Unity and Gnome3 to "fix" it? Uh, no. The problem is that people install Linux, and simple things don't work, and they can't fix them. Or maybe it works great for 6 months, and one day it stops working. Who knows why... but they can't fix it.

So before we start arguing solutions, let's get a better idea of what the problems are. And let's be scientific: can we trawl the help/support sites, and count up what the issues are?

I mean, I agree with a lot of the comments above, and there are many great ideas there. But I just don't see how much of anything above solves any of the actual problems that I experience.
 
Disk space is probably the single least important feature of shared libs.


They allow a library update to magically improve performance of every installed app.
They allow a library update to magically improve the robustness of every installed app. More importantly, they allow a library update to magically improve the security of every installed app.
They free developers from wasting time reinventing every wheel, poorly, not to mention inconsistently with other apps, instead of developing the high-level app that actually does something useful for a person. Admittedly static libs do the same, just a bit less well, since static apps are stale already by the time they are installed.

I don't care about disk space either but I think all the rest is pretty important.
The security facet alone might be enough to justify the practice even if there were no other benefit or even an overall cost/overhead.

Then again, maybe those benefits are only of any value in the context of the platform the way it currently is, in that they primarily address what could be called gaps in the platform. Fill those gaps and those same features lose most of their value.

If the platform provided reliable sandboxing and the ecosystem provided reliable reputation, peer review, & reporting, then maybe the security value goes away. An insecure or misbehaving app will either be harmless or at least exposed, and probably even prevented from reaching most people's view once enough people report its faults.

If the platform provided very streamlined updating for apps, then maybe much of the magic value of updating a shared lib goes away since it won't be too horrible for most app developers to update their static apps whenever a library they rely on updates.

Derelict apps will lose possible benefits they might have had if they used shared libs, but then it just comes down to numbers: how often does that actually happen, and matter? (A lib update materially improving a derelict app.)
 
Brian White, this is true only if the application is not buggy. And people do write crap that works only by chance and only with a specific version of the library.

OTOH, my own experience with games included in various Humble Bundles says that one has to delete the bundled libraries when possible. E.g., the bundled libSDL might be compiled with support for OSS or ALSA while I use PulseAudio. Or libSDL-image can be compiled against libjpeg.so.62, and both of them are bundled, but the program also pulls in libjpeg.so.8 via some system library, leading to clashes. IOW, right now, bundling does not really work. One has to wait until a more stable platform emerges (i.e. until hell freezes over) before that.

As for the security issue being fixed by forcing applications to use system versions of the libraries - I'd say that it is an extra. On all other platforms, the only parties concerned with security are the user himself and the software authors. On Linux, the distribution makers appear as a third party that is not needed in the other worlds, and, for some reason, they also care about security. But without them, the status quo would not be worse than in the other worlds!
 
sounds so familiar.... some history

2005: Mike Hearn (Autopackage) "Improving Linux# What's a desktop Linux platform? Why do we need one?" (http://web.archive.org/web/20050924203640/http://autopackage.org/faq.html#5_1, http://www.internetnews.com/dev-news/article.php/3493316)
2006: Benjamin Smedberg (Mozilla) "Users must be able to make their own software installation decisions. [...] Ubuntu has created a software cathedral with “more than 16,000 pieces of software”. [...] But the Linux desktop must also provide a method for users to install software from the bazaar." (http://benjamin.smedbergs.us/blog/2006-10-04/is-ubuntu-an-operating-system/)
2006: Ian Murdock (Linux foundation/LSB, Debian) "Unless an application is included with your Linux distribution of choice, installing that application on Linux is a nightmare compared to Windows.[...]Remember that one of the key tenets of open source is decentralization, so if the only solution is to centralize everything, there’s something fundamentally wrong with this picture." (http://ianmurdock.com/linux/software-installation-on-linux-today-it-sucks-part-1/)
2009: Tony Mobily (Free Software Magazine) "Every GNU/Linux distribution at the moment (including Ubuntu) confuses system software with end user software, whereas they are two very different beasts which should be treated very, very differently." (http://www.freesoftwaremagazine.com/columns/2009_software_installation_linux_broken_and_path_fixing_it)

In short: Linux (the ecosystem, not the kernel) is not a real platform, because the differentiation between system parts and application parts is non-existent in the commonly used distro-integrated distribution model.

But sadly, the "community" is very conservative about user-centric and distro-independent solution approaches:
2007: Autopackage struggling to gain acceptance (http://web.archive.org/web/20080331092730/http://www.linux.com/articles/60124) -> end of the Autopackage project in 2010
2010: "Dear Linux community, we need to talk" (http://www.gaslampgames.com/2010/11/13/dear-linux-community-we-need-to-talk/); CDE in trouble?

Sigh... I think a major chance to change this flawed distribution ecosystem was the Windows Vista disaster of 2006/2007, when desktop users were motivated to switch. But this chance was lost when the visionary Autopackage solution was ignored and not supported by the Linux Foundation/LSB or Mark Shuttleworth (Ubuntu) at the Free Standards Group's Packaging Summit 2006 (https://wiki.linuxfoundation.org/en/LSB_face-to-face_%28December_2006%29, http://web.archive.org/web/20090305035002/http://plan99.net/~mike/xmlpres/presentation.xml). Instead they tried the conventional model... the result is known -> still a weak, fragmented platform, still a 1% desktop user base.
 
+Ingo Molnar I have always wondered why the Linux Foundation doesn't promote such a project; Linux deserves a common platform.
 
@ Andy Burnette - "windows style disaster "dump an application into system directories and toss in a few libraries"" - that has not really been the common case under Windows since approx. 2000. Since then, Windows has favored the "private DLL approach" (yeah I know, a horrible waste of space & a potential security flaw). The general prioritization of local libraries (private libraries, http://web.archive.org/web/20010605023737/http://msdn.microsoft.com/library/techart/dlldanger1.htm) over system libraries makes it simpler to create "virtualized" applications that are (relatively) independent of the specific system libraries installed with the OS. The success of this approach for the desktop user can also be seen in the existence of a class of applications with sandbox/container properties which are much easier to realize under Windows (or Mac) than under Linux - portable software / stick-ware (in the sense of hardware-portable, OS-version-portable and directory-position-independent, http://en.wikipedia.org/wiki/Portable_application ).
 
I find it hard to take kernel developers seriously when they talk about desktop "usability" when the Linux audio situation is still such a mess.
 
@ Leo Comitale - the audio mess is mostly in userspace, and the underlying problem is that applications cannot agree on how to output sound. So please don't blame the kernel devs for that. Here are two working solutions:

1. Install PulseAudio. Compile the kernel without any form of OSS support (even emulation). Compile SDL with support for PulseAudio only (no ALSA, no OSS). Set the default gstreamer output backend to pulsesink. Configure libao to use PulseAudio. Configure Phonon if you use KDE. Install a file that directs the default ALSA device to PulseAudio. Uninstall all applications that still use OSS, or wrap them via padsp. Uninstall all ALSA applications (except PulseAudio and JACK) that open anything other than the "default" ALSA device or expect mmap to work. Result: all remaining applications have working sound.
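For reference, the "file that directs the default ALSA device to PulseAudio" usually boils down to something like the following in /etc/asound.conf or ~/.asoundrc (assuming the ALSA-to-PulseAudio plugin from alsa-plugins is installed):

pcm.!default {
    type pulse    # anything opening the ALSA "default" device is routed through PulseAudio
}
ctl.!default {
    type pulse    # same for the mixer controls
}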

2. Uninstall PulseAudio. If your distribution sets the "default" device to pulseaudio, find that piece of config and remove it. Compile the kernel without any form of OSS support (even emulation). Configure gstreamer, Phonon and libao to use ALSA. Uninstall or wrap in aoss all applications that still use OSS. Uninstall all applications that insist on using PulseAudio and cannot be reconfigured. Uninstall all applications that want to open ALSA devices other than "default" and "default:X" where X is the name of your sound card. Add lines similar to the following to /etc/modprobe.d/soundcards.conf:


# pin the ALSA card indices so that card 0 (and thus "default") is always the same device across reboots
options snd_hda_intel index=0
options saa7134_alsa index=1
options snd_pcsp index=2
options snd_usb_audio index=-8

Add lines similar to the following to .asoundrc if you need some non-default subdevice (e.g. S/PDIF on Intel):


defaults.pcm.!card Intel
defaults.ctl.!card Intel
defaults.pcm.!device 1
defaults.ctl.!device 1

Result: sound works in all remaining applications (stereo only, and without such fancy things as realtime DTS encoding on S/PDIF output).
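A quick sanity check after rebooting (not part of the recipe above, just standard ALSA tooling):

aplay -l    # lists the physical cards together with the index numbers forced by the modprobe options
aplay -L    # lists the logical ALSA device names that applications will actually open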
 
@mandrit Hoffmann, yes, I stand partly corrected on the Windows dump disaster ;-) But the very poor programming and systems-design lessons ingrained from those bad habits infest Linux to this day. /usr/bin has everything under the sun in it, whereas programs, applications, or functional suites should exist somewhat separate and apart from one another, if only to enable disaster recovery, or a proper upgrade of item A without obliterating item B's dependencies. Just food for thought. In the "old days" one could readily deposit a program or functional "area" (I lack a better term...) - still not an atomic container - in a separate directory, and there was little hassle to untangle a problematic addition to a system. Today, far too much effort is spent on package managers which must hunt to and fro to install, and conversely unwind, a problematic program.

Yeah, I sure would have liked, at some point in the last 20 years, a single focused and stable audio subsystem to interact with :-)

I really do appreciate that installing many a distribution brings up a 90% functional desktop with applications and so on, while many commercial (closed?) system developers or vendors would logically be dragged into court for such polite behavior. Thus with many a simple Linux install, subsequent time is spent customizing the look and feel to personal taste, not installing patchquilts of programs to solve decade-old [design] flaws and problems. While some do a better job there, their solution is to sell you outdated hardware at a premium price...
 
Am reading this and everything screams Arch Linux. Well, at least up to a point. They have the AUR, which is a community-managed software source (decentralized). Updates are not dependent on the distributor. Hm, maybe that's a good model to start from. FOSS programmers have always had good ideas and approaches to problems. I am not sure how this one slipped past us.
 
+Alexander Patrakov But the problem is that solution 1 is basically what is already done in Fedora, and it has never worked reliably. And it is getting worse, not better! In Fedora 16, pulseaudio almost always takes about 1 second to start up, and frequently fails to start when used by mplayer. In KDE 4.8.1, doing almost anything with sound on my desktop (quitting mplayer, changing the sound device, restarting pulseaudio) makes KMix crash. And recent Fedora 16 update(s) somehow reset my default sound device and/or removed my audio permissions, so I was scratching my head for hours wondering why I couldn't hear anything, except sounds played using sudo.
 
@Andy Burnette (and also @Brian White) I'm not sure how effective the unholy influence of Windows really was in poisoning OSS developers' minds. Maybe another aspect deserves some credit too, one that is common in the minds of OSS people and is surely one reason for the structural problems we have at the moment: Linux never completely understood and absorbed the "Personal Computer" vision and use case; I think the Unix roots, which are older than the IBM PC, are to blame here. Ulrich Drepper's rant "Static Linking Considered Harmful" illustrates this thinking nicely (maybe this specific post, which is old, was even policy-defining): "[...] why dynamic linking is superior: fixes (either security or only bug) have to be applied to only one place: the new DSO(s). If various applications are linked statically, all of them would have to be relinked. By the time the problem is discovered the sysadmin usually forgot which apps are built with the problematic library. I consider this alone (together with the next one) to be the killer arguments." (http://web.archive.org/web/20100527213559/http://people.redhat.com/drepper/no_static_linking.html)
If someone follows this thinking/recommendation and treats security as the primary goal of an OS, we end up with the currently common architecture as the perfect solution! That means a pretty tight binding of application and system, because shared libraries are organized strictly OS-wide and managed centrally by the distro for the (perfect) sake of security. Maybe some imperfection, some sane compromise, is required to achieve a desktop platform that also serves users and not only (security-minded) system administrators. Or, in the words of Ingo: "Users want to be free to install and run untrusted code."
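To make Drepper's argument concrete, a minimal sketch (with hypothetical foo.c, app1.c and app2.c) of what the two models mean for shipping a fix:

# dynamic linking: fix the bug once in libfoo.so, every app picks it up on its next start
gcc -shared -fPIC -o libfoo.so foo.c
gcc -o app1 app1.c -L. -lfoo        # needs LD_LIBRARY_PATH=. or an rpath at run time
gcc -o app2 app2.c -L. -lfoo

# static linking: the same fix means relinking (and redistributing) every affected binary
gcc -c foo.c && ar rcs libfoo.a foo.o
gcc -o app1-static app1.c libfoo.a
gcc -o app2-static app2.c libfoo.a

The distro-centric model optimizes hard for the first case; the price is exactly the tight application/system coupling described above.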

@Robin Green & Alexander Patrakov: the Linux audio system is over-engineered and under-standardized (e.g. http://blogs.adobe.com/penguinswf/2007/05/welcome_to_the_jungle.html); even PulseAudio's Lennart Pöttering agrees on that. 2008: "At the Audio MC at the Linux Plumbers Conference one thing became very clear: it is very difficult for programmers to figure out which audio API to use for which purpose and which API not to use when doing audio programming on Linux." (http://0pointer.de/blog/projects/guide-to-sound-apis.html)
 
Like the proposed solution. But to get there, in practice, you need a 'smooth' path from the current situation to the new one you describe. It won't happen otherwise. Maybe a Pt III with a description of this path would be a good idea ;-)

Currently distro peeps will block most efforts to let, say, developers package their software themselves (e.g. like http://www.qupzilla.com/download does). Instead of 'taking' the package from the developers, it would make more sense for them to start building that proper sandboxed security infrastructure...
 
And I do agree with +mandrit Hoffmann that autopackage could've been a solution. Still can be, I suppose... the openbuildservice needs autopackage support :D
 
Yes, Autopackage, go, go, go! Mike Hearn, where are you? You should be around somewhere here at Google... please rescue the Linux desktop! ;)

some "backup" approaches -> listaller (merged some of the autopackage code base, e.g. binreloc http://web.archive.org/web/20090125113506/http://autopackage.org/docs/binreloc/) (http://listaller.tenstral.net/), Simon Peter's klik-successor project -> portable linux apps (http://portablelinuxapps.org/) or Philip Guo's CDE (http://linux.slashdot.org/story/10/11/13/0029203/cde-making-linux-portability-easy)
 
Don't forget Zero Install (http://0install.net/) which could solve a lot of the described problems. I guess it needs some more (recent) packages to gain traction (Firefox, LibreOffice) but this looks like a chicken-and-egg problem to me. ...No recent popular packages -> nobody cares -> no recent popular packages... and so on.