@mandrit Hoffman, you make several really good points, but...
...In reality it's all about "use cases"; that's what Linux is really about. Each distro adapts itself to a series of use cases, which differ and are not always compatible with one another. The problem is that each use case changes rapidly or becomes obsolete, and new ones appear all the time. If you watch this interview with Linus Torvalds himself: http://youtu.be/ZPUk1yNVeEI
, you will see that the real problem is that there are simply too many desktop configurations, both hardware-wise and use-case-wise, and this is the difficulty of the desktop.
It's a matter of combinatorics. Let's examine that in depth. How many hardware devices does the desktop need to support: printers, input devices, hard disks, motherboards, chipsets, RAM, etc.? How many different types of users would it have to support: novice, media editor, application/system developer, web developer, office worker, student, web surfer, tablet user, etc.? How many different systems will it run on: server, mobile, tablet, netbook, laptop, desktop, tower, thin client, embedded, cars, planes, trains, tanks, helicopter subsystems, NASA spacecraft, etc.? How many different output devices will there be: accessibility aids for the vision impaired, braille keyboards, large LCD monitors, CRT monitors, touch screens, televisions, etc.?
Scenarios = (hardware devices) × (user types) × (system classes) × (output devices) × (...) = a HUGE NUMBER!
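To make that multiplication concrete, here is a minimal sketch; the counts per dimension are invented placeholders purely for illustration, not real figures:

```python
from math import prod

# Hypothetical counts per dimension -- invented for illustration only.
dimensions = {
    "hardware devices": 500,  # printers, disks, chipsets, ...
    "user types": 10,         # novice, developer, office worker, ...
    "system classes": 12,     # server, mobile, embedded, tank, ...
    "output devices": 8,      # braille, LCD, CRT, touch screen, ...
}

# Every combination of choices is a distinct deployment scenario,
# so the total is the product of the per-dimension counts.
scenarios = prod(dimensions.values())
print(scenarios)  # 500 * 10 * 12 * 8 = 480000
```

Even with these tiny made-up numbers the scenario space is nearly half a million combinations, and each extra dimension multiplies it again.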
One can take anything above the Kernel level, that is, the parts dealing with userspace and use-case-specific elements, throw it away, refresh, and start over to adapt to each specific scenario or group of scenarios. This is one of the core brilliant aspects of the Linux system. This way Linux desktops can accommodate huge ranges of deployment and use scenarios and adapt in a purely independent fashion. This deployment characteristic is why Linux probably runs on over 70% of the world's computing devices, including mobile, desktop, embedded and servers.
On the developer level, it's part of our job description and basic application design that we adapt and produce as many deployable combinations of our applications as possible for these different scenarios. We developers tend to be lazy people, so the unified approach seems great, but it is out of step with the wider reality that faces us in terms of raw numbers. It's like a developer complaining that all computer hardware manufacturers should only use the Intel i386 because it's too annoying to develop for AMD and ARM processors and it duplicates effort. It's not practical, or even possible, for developers to tell system makers or operating system distributors/developers to adjust themselves to our needs because we want to be lazy.
We have to take the tools available, be creative, and ensure that anyone using any system or operating system can use our software, and that its operation is uniform across all devices/scenarios. If we can't do that, we have failed as application developers; it's that simple, and there's no need to blame distros, manufacturers or anyone else but ourselves. For example, deploying an office productivity package over a network at a large corporation, with the workers accessing it over the internet, involves very different deployment, packaging and dependencies than someone who just wants a word processor to do their homework or write a letter on their laptop. This is a reality, not a hypothetical.
This is why .rpms and .debs, yum, apt-get, and even a simple make from source all exist: each is a necessary tool that is great for a particular scenario. The utter failure of Microsoft and Apple with their respective desktops is that they both failed to understand this wider ecosystem diversity and forced all devices onto the same system in a centrally unified approach. Sure, this allowed for a big, single, wide-open application market (the "app store"), but one that could only deploy applications for a limited set of scenarios: desktops and laptops. This held back the evolution of different systems and diverse deployments. It would take an entire NT Kernel rewrite and upgrade to facilitate such deployments and adjust for major new scenarios, and this is why Microsoft and Apple were never able to create any meaningful solutions for the server, embedded or mobile markets.
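As a rough sketch of that per-scenario tooling, the mapping below pairs a few distro families with their native install commands; the family names and commands are simplified for illustration, and a real deployment script would need far more cases:

```python
# Simplified, illustrative mapping of distro family -> native install command.
# Real systems have many more families, front ends, and options.
NATIVE_INSTALL = {
    "debian": "apt-get install",   # .deb world (Debian, Ubuntu, ...)
    "fedora": "yum install",       # .rpm world (Fedora, RHEL, ...)
    "suse":   "zypper install",    # .rpm packages, different front end
}

def install_command(family: str) -> str:
    """Return the install command for a distro family, falling back to source builds."""
    return NATIVE_INSTALL.get(family, "./configure && make && make install")

print(install_command("debian"))   # apt-get install
print(install_command("unknown"))  # ./configure && make && make install
```

The point of the sketch is that no single command serves every scenario; each packaging system earns its place by fitting one.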
Apple is only just beginning to get into the mobile/tablet game, but their systems, aside from the brilliant artistic endeavours of their terrific user interface design team, are technologically inferior and less capable than comparable Linux-derived mobile systems, and not open enough to operate in the wild efficiently. Microsoft, meanwhile, is now facing its own failures head on with the Windows 8 development fiasco, because its unified approach failed: its Kernel is specialized for desktops, and applying it to mobile touch-screen tablets is difficult and requires too much hacking. If they had developed parallel "ecosystem"-based systems enabling a modular approach, as the major Linux distro communities (Canonical, Debian, Fedora, SUSE, etc.) and companies have done with their different approaches to different or similar problems, they would have deployed tablet-based OS distributions years ago and be in a good position going forward.
Likewise, the failure of the wider Linux distro community to adopt "Autopackage", or to develop any meaningful alternative, speaks volumes about the core issue here. In reality, the only way to deliver your "uncontrolled market platform" is via a single, unified, centralized system, or, to borrow a word from the BitTorrent world, a "tracker" system. With BitTorrent, sure, you can use many different clients and feel that you have a decentralized market, but in reality it is still centralized around one core protocol. Such a system would impose centralized requirements on the distro and deprive it of its freedom to adapt itself to its own particular use case, which is impractical and impossible given the high number of unique use-case scenarios and deployment systems in use.