Here's the summary that people seem to be missing these days:

"There are a number of folk in the Linux ecosystem pushing for a small core of tightly coupled components to make the core of a modern linux distro. The idea is that this “core distro” can evolve in sync with the kernel, and generally move fast. This is both good for the overall platform and very hard to implement for the “universal” distros."

I touched on this a week or so ago when I did the FoodFight podcast interview, but I don't think that people really understand what is happening here, and why it is happening.

Given the recent flames on the Gentoo mailing lists about how "horrible" it is that someone could even consider using an initrd to boot a system that has a separate /usr partition, and the weird movements by some Gentoo developers to deny that there really is a problem at all that is being solved by this type of work, I seriously wonder how much longer a "general" distribution such as Gentoo or Debian can keep up the charade of trying to provide all options for all users.

I just don't think it can be done well, sorry, which is why I strongly recommend tightly-coupled distros for desktops for anyone (like Fedora or openSUSE or Ubuntu), and Debian or Gentoo only for servers or embedded systems where you know exactly what you are putting together, and why you are doing it that way.

And yes, I too buy into this tightly-coupled-components idea, and have for years based on the work I have done in this area. I think you will find that anyone who has worked in this area agrees with it; it's only those who are either higher up or lower down the stack that seem to object.

Many thanks to the ever-thoughtful Hack-the-Planet blog for the link:
For desktop Linux there's two viable roads forward: the tightly integrated, rapidly iterated model you are talking about (hopefully providing a strong ABI towards an external ecosystem of apps), or the way of the Dodo bird.
I don't mind a tightly coupled core, so long as this core stays small and focused on essentials. What I'm seeing now is a "core" piece by piece assimilating the entire system. How long before the shell becomes part of this as well? The editor (others having been declared obsolete)? Web browser? Where will it end?
Great push in the right direction! The pain of running upstream kernels in full-blown distros removed me from the loop of testing things like linux-next and contributing to kernel development in general for years. Maybe one day I can get back to the crowd again :-)
is this stuff usable on other unices or is it just for Linux?
the feeling here is that we follow the road that huge Linux vendors want us to go, and the hell with the rest...
+Måns Rullgård

Please, please make one decent editor and one decent xterm part of the core...

Ideally we want "The Core" to be self hosting: you can write, debug and post new modifications to The Core using itself only and no external app ...
hmm. well first, i have a problem with a tightly coupled core where someone else dictates all my options. there's a big difference between a reliable ABI and a specific set of software. sort of like if someone decided that "nano" was the only acceptable editor for this core and somehow made it difficult for other editors to integrate. quite a stretch, but replace nano with something like sysvinit.

i think the focus for the core should be on the ABI, not on the "nano"

there's too many ways to do everything and everybody keeps inventing yet another way to do something. i care less how you view something in your "nano", i care much how the data interfaces work. those interfaces should be coerced into using standard ABIs with deviations for experimenting folded back in should they prove superior.

i don't want a core where only one editor is approved. i don't like vi and i don't like xterm. i have numerous reasons, technical and aesthetic. if you force people [me] to use them, we'll be back at square one because the community [me] will fork in a heartbeat and not support you
but don't get me wrong. i strongly support a core set of components which makes it easier to spearhead progress. i'm a gentoo user but i don't pay attention to a number of gentoo devs. i run dozens of systems using ~x86/64 software and my kernels are usually within a few points of current. amongst these systems, i often am playing with new things found in the kernel.

i don't have a lot of difficulty doing these things, but i'm probably not the typical person.
Don't give up on Gentoo. After all, it is relatively easy to set up a Gentoo machine with the latest udev/systemd, and without OpenRC or SysV: I maintain such an overlay:

(I need to update the documentation, but it works on all the machines I administer).

What Gentoo needs is only the ability to use this "core distro" without interference from the legacy stack. Greg, a step in the right direction (and one you can help with as a Gentoo dev) would be for baselayout to stop depending unnecessarily on OpenRC: I opened a bug for it a month ago, and it is being ignored:

Let the council fight the windmills if they so desire; we just need the ability to run Gentoo as a modern distro. If they want to waste their time with legacy crap, it's their own time being wasted.
+Måns Rullgård don't be silly, comments like that make no sense. I'm guessing you have never worked in this area to be saying things that make no sense.
+Andrea Cascio so you have had an initrd for years with openSUSE, what's the big problem? Switch to Gentoo, that's fine, but be aware of the issues involved please.
Coming from someone who has a Gentoo and a Fedora desktop box, both running a full-blown Gnome desktop: On Gentoo, if you know what you're doing it basically works, with some hiccups here and there. It is also very light on resources (my 4GB RAM laptop never swaps with Gentoo, while the same box running Fedora fills its RAM with stuff I don't need).
+David Ford the people doing the work get to dictate the API, so get involved if you don't like the APIs that are being created.
But I really like the idea of a base system. Sounds just like FreeBSD. Perhaps a bit smaller. Go for it!
+Canek Peláez I'm not giving up on Gentoo, I'm used to the constant complaining by people who don't do the work, that doesn't bother me. It's the "we must support all options" idea that doesn't work in the end, sorry.

As for your bug, I'll go comment on it, where that belongs.
i am involved in things. it becomes more burdensome when too many people get involved as it will become harder to reach consensus
Heh, strange to find my words echoed back to me over the social network :-)

Highly modular systems will continue to exist, but we all have to keep in mind that the modules have to cater to the lowest common denominator -- that is a ton of interoperability work for a small number of geeks slowing you down. That work will surely continue in niche distros where learning about the distro by tweaking it is the goal.

At the same time, Linux is becoming very competitive on tablets, phones and a rainbow of related embedded products. To go from "very competitive" to "winning" it has to move faster and use all the new features of the stack, without slowing down to write, test and maintain compatibility for every old and broken module.

That second world is where systemd is taking us. It is a winner for new and interesting consumer hardware; and also for tightly configured servers. There is a ton to like there.

And we'll have to get used to the fact that to change the core distro, you have to patch it instead of swapping components in and out. It's a different model... it worked for the BSDs :-)
We have inconsistent demands on the distros. We want it all to work together like the pieces were not developed by raving lunatics pretending none of the other projects existed, yet from time to time we want to be raving lunatics pretending other projects don't exist.

If all the major distros are agreeing on something like this, it is often for a good reason.
My problem with Kay is that he has a very strong idea of what he wants, and it is fundamentally about a desktop distro in a very specific mold. It's not at all clear how well it will work in an environment with lots of Fibre Channel attached disks, for example. Which is fine, except that companies like Red Hat and SuSE have tied their community distro to folks who fundamentally only care about the desktop, but their customers are folks who need to worry about things like full init.d backwards compatibility (including System V init levels!!!) and things like huge numbers of FC-attached disks, where it may not be scalable at all to enumerate them all at system startup.

What this is going to mean for future enterprise customers at Red Hat and SuSE is going to be entertaining to watch, but I for one will be switching to Debian Testing because systemd looks like a slow motion disaster from where I sit, and even on my laptop, I don't think I want it. GNOMEos? When GNOME is actively hostile to power users? I don't think so....
+Andrea Cascio openSUSE will not work with no initrd if you have a separate /usr partition, that's the issue here that Gentoo and all distros are having right now, it's not Gentoo and openSUSE specific at all.
+Greg Kroah-Hartman

Exactly - from a software engineering perspective, a 10,000-package distro is a space of 2^10000 combinations.

Even if it was 120 packages that's still madness: there's about 2^120 atoms in the known universe...

Do one core thing and do it well - the rest is decoupled by a good ABI.

This model worked well for the Linux kernel, for iOS and for Android.
+Theodore Ts'o if I remember right, Kay's work was one of the reasons suse booted at all on the big s390s with thousands of devices (this was ~2005, so I might be making it up)
As a #gentoo user I've been holding my head in my hands at this. I've moved across to the tightly coupled core, and funnily enough the sky hasn't fallen in. In fact, strangely enough, things work much better.
+Theodore Ts'o are you kidding? Those changes make hundreds of thousands of disks work better, and were one of the main reasons they were made years ago, based on the work done on the "enterprise" distros due to their customers' complaints.
If you don't like systemd, fine, but please evaluate the technical issues involved, don't get stuck on personalities, the disk issue is separate from systemd. All of the systemd features are being added because people want and need these things, and they provide a real solution to a problem we have had for years.
To ignore these users and their problems would be foolish of us if we wish to continue to succeed.
+Andrea Cascio Then I recommend using Debian or Gentoo (for now), as you have those options there. Otherwise you are at the mercy of those that put the distro together, and you need to either trust that they know what they are doing (which they usually do), or take your usage elsewhere.
+Theodore Ts'o , +Greg Kroah-Hartman it's also worth pointing out that I shaved one hour off the boot time on a 16000 device system by running dmesg -n1. Sometimes the console isn't your friend.
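For anyone curious, the trick Chris mentions just lowers the console loglevel so the (slow) console stops being a bottleneck during boot; a rough sketch, assuming the util-linux dmesg:

```shell
# Drop the console loglevel so only KERN_EMERG messages reach the console;
# everything still lands in the kernel ring buffer / syslog.
# (Needs CAP_SYSLOG; the fallback message fires when unprivileged.)
dmesg -n 1 2>/dev/null || echo "dmesg -n needs privileges"

# A persistent equivalent: the first kernel.printk field is console_loglevel.
# e.g. a file like /etc/sysctl.d/10-quiet-console.conf could contain:
#   kernel.printk = 1 4 1 7
```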
+Martin Langhoff good point about the BSDs. There are a number of changes I wish to make to the kernel that I just cannot make, due to a few userspace issues that the BSDs overcame years ago because they have everything much more tightly coupled together.
Oh well, I'll just have to plan things out over the much longer-time-frame than I would have wanted, but that's nothing new, and is just something that we are well used to by now.
+Chris Mason I think you may be remembering your history wrong. The blkid library (which I wrote) was very careful to avoid scanning all possible attached disks unless it was absolutely necessary, and the cache file was designed to allow for exactly this. If it was already in the blkid cache, it wouldn't do a full scan. Kay's vol_id system, in contrast, populated the by-uuid and by-label directories at boot time, which fundamentally required enumerating all possible disks at boot time.

+Greg Kroah-Hartman The information that systemd (or one of its tightly bound co-conspirators) was going to require enumerating all disks at boot time was something that agk (Alasdair Kergon) told me at the Collab Summit. I haven't been following it closely enough to know of my own personal knowledge, but I tend to trust agk's observations on these matters, since he's one of the people who is responsible for making RHEL work well for enterprise customers with large amounts of fiber-attached storage.
+Chris Mason wow, that's sad. I know our console isn't good in some areas, that really shows where it is a problem :(
+Chris Mason Heh. I recently made changes to e2fsck so that we could limit the amount of messages it would print out on a per problem type basis, and then arranged to have a separate system to store the full logs on disk. This came up because we had a configuration bug that caused e2fsck to spew so much stuff that calculations indicated that if it weren't for the watchdog kicking in, the machine would have spent over two hours printing e2fsck messages to the console, even at 115.2 kbps.... Something you might want to think about for btrfs's fsck. :-)
+Greg Kroah-Hartman re-phrasing your own words back to you:

If you don't like Gentoo, fine, but please evaluate the technical issues involved, don't get stuck on personalities

I.e. if you disagree with few Gentoo developers, it's no reason to claim Gentoo is broken and evil.

Gentoo is a viable distro and the only one (besides LFS) that gives the user complete control over all the knobs in the system. Yes, supporting and testing it is a nightmare. But the flexibility is sometimes worth it.
+Ingo Molnar "a 10,000-package distro is a space of 2^10000 combinations." - how is it different from kernel modules? There are lots of dependencies between them; some can be built in and others can be compiled as modules. Sounds to me that the next step would be to have all the drivers built as modules all the time and eliminate the option for users to compile them statically in or even disable some completely...

From the old days I've learnt to disable as many unused drivers/modules in the system as possible when configuring the kernel. Most modern distros come with all of them enabled as modules, though. But that's no reason for it to be the only way... Hence, testing is required for both ways.
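To make the comparison concrete, the three states being discussed look like this in a kernel .config (the driver symbols below are arbitrary examples, not a recommendation):

```
# Built in statically (always present, cannot be unloaded):
CONFIG_E1000E=y
# Built as a loadable module (the "everything as modules" distro default):
CONFIG_SND_HDA_INTEL=m
# Disabled entirely:
# CONFIG_DRM_NOUVEAU is not set
```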
+Greg Kroah-Hartman Yes, that was deliberately somewhat silly, and I am relieved you realised as much. Now why can you not realise that to some people, bundling sysinit, udev, inetd, etc all together appears just as outlandish?
The bundlers of systemd, udev and journal together are coming for inotify and D-Bus next. The changes will make Linux more complex, but perhaps it will also be faster and more maintainable?
+Alison Chaiken Only for those who need those components. Some of us would rather not have to bother with them at all.
I just love that part:
1. People like to flame about anything Lennart has written.
2. People like to flame about NetworkManager.
3. Therefore Lennart wrote NetworkManager.
Hm. Chill-out time. Is there a "thread-lockdown, go argue it over a beer" button in this G+ thing?
It's not politics folks, it's an Open Source project that you can choose not to use. It's also not teaching your kids weird shit at school either.

Meanwhile I'm replacing XDG autostart .. brb.
And still you seem to miss the point that for some people having the system unbootable because dbus, glib or whatever you "tightly couple" breaks IS an issue.
+Luca Barbato that's a calculated risk. Everyone using systemd needs to decide whether the risk is acceptable, or pass on it. Or help fix it.
+Auke Kok again, the problem is that if you push it as THE WAY because YOU KNOW BETTER, you will get resistance even if it is the best idea. Then if you end up with fun tactics like merging udev (which most people use and is hard to replace) with systemd (which fewer people use, not least because it is a young project) and then claiming in 3 weeks that standalone udev is unsupported, you sort of undermine any confidence dubious people might have. That said, I guess I might start looking at #udev and play with it. Or do something radical like move to mdev completely, and not just for early boot.
The sad part in this is that it's always about taking away options, even if they just bit-rot, and so freedoms. Yes, no one likes fiddling with options, but you are thrilled when you need them and they are there.

Example: My old soundcard used to lock up during use. It happened sporadically (every few weeks), mostly with Flash; good luck debugging that.
But no problem, I don't have a Windows system. ALSA built as modules, an init script unloading and reloading all ALSA modules, and after "/etc/init.d/alsasound restart" my sound was back.
Until the ALSA devs said that unloading ALSA was not supported. The support for it vanished from the init script.
Thanks for nothing...

"separate /usr is broken" is in the same category. How did it work all these years on my PCs, without an initrd? (Yes, I know that complicated setups need it, but that "every start is complicated" is a cancer we inherited from the everything-for-everyone distros like RedHat/SuSE/Debian.)
We are removing features and are proud of it...
And no, "It's legacy" is not an argument: spinning rust is faster on the outer sectors, so you want a small root partition, followed by the swap partition, and then the rest of the system. Save me the hymn of SSDs; mine broke down months ago.
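The reload trick described above can be sketched roughly like this; the module name snd-hda-intel is an assumption (it varies per card), and the commands are guarded so the sketch degrades gracefully on machines without ALSA or root:

```shell
# Save mixer state, unload the card driver (modprobe -r also removes
# dependent snd-* modules), re-probe it, and restore the mixer state.
restart_alsa() {
    alsactl store 2>/dev/null
    modprobe -r snd-hda-intel 2>/dev/null
    modprobe snd-hda-intel 2>/dev/null
    alsactl restore 2>/dev/null
}
restart_alsa || echo "ALSA reload needs root and a loaded snd-hda-intel"
```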
+Luca Barbato yeah, people working in this space are pushy and brash. Comes with the territory -- you just can't get changes through any other way.

So politeness and gradualism, as much as I like them, aren't going to happen. A tightly integrated core is coming through, and it is a good thing. You can change it, but it will become a larger project -- maybe the current "dozen inits" world is left behind and people join efforts in a single non-systemd option (upstart).

A Linux with a dozen slowly evolving inits and loggers is not useful today, except for exploration and learning. We had 20 years to fool around with them. As a platform, it's time to grow up and focus on one integrated core stack. Some components may have alternative implementations, but it'll be one, consolidated.
Well, I think it can help a lot in those cases where components or improvements (libraries, daemons, etc.) are decided not to belong in the kernel and could be done better in user space, and then are dropped and forgotten, or end up with a bad or fragmented user-space 'solution' because there is no real core development group or discussion platform where those things are picked up.
+Matteo Bernardini Of course this is only for Linux, why would we (honestly) care about any other Unix system? Linux surpassed the Unixes years ago on many levels, they are free to try to catch up with us if they want to, all of our code is open, and they are welcome to participate if they wish to. But there is no requirement that we need to do extra work for them, as remember, they didn't do extra work to work with us for many years for the same reasons.
+Martin Langhoff did you do any sort of measurement? I'm afraid not.

So it all ends up with whoever has the bigger mouth and the larger group of people following it because IT IS THE WAY, and who makes the largest unchecked claims, and so on.

Here is mine: "OpenRC+Busybox is way faster, smaller and more feature-packed than systemd. Everybody should use it". Oh, I used "should"; I should have said MUST and bound it to some other component everybody is using for one reason or another.
Politicians have used this bundling tactic for a very long time to pass controversial bills. Just slap an "against child abuse" section on it and any PIPA/SOPA/ACTA gets passed w/o much resistance! As nobody will be brave or stupid enough to go against that... :)

I'm wondering if bundling udev into systemd was "inspired" by above? :)
+Denys Dmytriyenko don't be snide, udev and systemd merged code bases because the developers doing the work on it wanted to. If you wish to do work on a stand-alone version of udev, you are free to break it apart again and do so.

+Luca Barbato If OpenRC + busybox is faster and has more features, then great, people will use it. But obviously, systemd solves problems that people have, hence they use it. There's no "marketing" going on here; it is people who do the work and know the problems, solving them.
People will use it IF they are aware of it.

People flocking to something new most of the time do not check the alternatives, simply because they aren't aware of them.

Hence my reference to whoever is more vocal or makes the most outlandish claims that run unchecked, like systemd being the fastest, the leanest, embedded-ready and the solution to ALL the problems.

That said, I'll try to actively make more people aware of OpenRC, try to get some fair comparisons, and hopefully help fix bugs on its side, since I'm well aware that OpenRC IS NOT perfect.
+Ingo Molnar Can't resist pointing out that if there is ever a "The Core", we can finally have Fedora Core back again ;)
I agree on this core concept as long as it is flexible and allows room to build in capabilities which are specific to certain scenarios.

From the comments I have read, I think that most of the problems are logistical and ego issues rather than technical. I do feel that better communication and project management would solve a lot of the ego issues. I would also point out that having multiple distros implementing different startup strategies puts more strain on administrators.

In my humble opinion, the best way forward would be to incorporate the best features of the current init systems into systemd and move forward with it, as long as it really delivers an improvement on the old system.

As a developer of 2XOS, I will have to really look into the systemd project and implement it if it delivers the stated improvements. We should never be afraid to evolve and look forward.
+Geoffrey Said Which other init systems did you evaluate similarly? And why are you interested in systemd and not upstart or OpenRC?
What we are potentially seeing are not just code portability problems between rpm- and deb-centered distros, which would be no big deal, but a Fork between ARM- and x86-centered distros. Debian-derived distros would dominate gadgets, and Fedora-derived distros will dominate enterprise. With Canonical's taking a pass on the new IBM PPC and Fedora relegating ARM to "secondary," we are already well down this path. Unlike the Android Schism, which Torvalds was able to head off, this new one is occurring mostly in userspace, where he has less power to prevent it.

Like +Theodore Ts'o , I switched from Fedora to Debian Testing recently, not because I think +Lennart Poettering 's changes are a bad idea, but because I want to be on the +ARM side of the rapidly deepening Rift Zone.
+Theodore Ts'o I'm actually close to switching my Debian Testing laptop over to systemd; the major reason I don't is that I won't be able to run all my systems with it (and that it's still a little beta for my servers). I have one, and soon probably more, Debian kFreeBSD ZFS servers, and the full Debian userland is the reason I installed them in the first place instead of trying some of the other hacks.
+Luca Barbato Our init system is loosely based on the SystemV model, but every script has been written from scratch by us. To tell the truth, I still have to look at the merits of all the systems you mentioned, but this is one of the points I mentioned in the post.

Why do we need 4 or 5 systems to start a Linux distro? An administrator would get a headache maintaining such systems. What I would like to see is a general consultation and more co-operation between distros so that we can move forward with one or at most two init systems. This would streamline things and be much more maintainable.
Guys, spare your ammo, the real fight will be over whether Linux Core aka Core Linux(TM) will run GTK or QT...
+Jóhann B. Guðmundsson The proprietary blobs from TI for OMAP GPUs are only available as debs. That's not my doing: talk to Rob Clark or Nipuna Gunasekera! Trying to use those blobs with RPM-based distros was truly painful. I persisted for a fruitless day, then switched to Ubuntu (which I dislike) + chroot.
+Alison Chaiken Sorry Alison, I don't get the whole .RPM vs .DEB issue. Both are just "packaging" and can be unpacked, eventual binary blobs inside are still just binary blobs...
-1 on this one:
Funnily enough, I've updated today five PCs with Gentoo, a SquashFS root filesystem where all of /*bin and /lib* are symlinks, and dracut as the initrd, which bind-mounts /var, /home, /etc (and a couple of others).

Now, what many people really hate is to lose flexibility and be forced to accept some kind of added complexity.
Hardly difficult to understand; arguably a fair enough price for real advancement in the desktop area (so it has been argued, indeed).
+Vladimir Pantelic The TI .debs not only had binary blobs inside them, but also scripts to install and configure the blobs. I managed to get that procedure working exactly once by using alien, but it cost me an entire day to patch the scripts to work on MeeGo rather than the Ubuntu they were intended for. When I saw what a waste of time using even an automated alien and patched blob-install scripts was going to be while tracking MeeGo and Ubuntu, I simply gave up. Embedded developers cannot afford to waste time on peripheral problems: the main ones are time-consuming enough to solve!
+Luca Barbato That would be fine for me. But a standard interface needs a standard implementation, which in most cases is a single implementation that sits in the background and allows you the flexibility you want. As +Vladimir Pantelic has said, ALSA provides an API, but it is a single implementation that has been agreed upon and adopted. We need something like this.
On a side note, is there documentation to point me to on how to integrate systemd and how to migrate our init scripts?
+Geoffrey Said Have you read the systemd documentation yet? If you have questions, please ask them on the systemd mailing list, the developers will be glad to help you out.
It seems to me that those opposed to systemd do so because it appears to fly in the face of the "Write programs that do one thing and do it well" philosophy, while those in favor of it do so because it solves perceived problems.

I understand the anxiety of those who fear that choice will be taken away from them with the conglomeration of many of the major OS internals (choice being one of the main reasons many of us chose Linux in the first place). And, yes, people are always free to choose to use another solution/init system, but this may not be a practical or realistic choice (For example, individuals often lack the time/resources to fork and maintain udev themselves).

With that said, I understand that many of the system designers/developers who work on this daily have encountered problems which are solved or at least made easier by more tightly coupling the relevant systems.

+Greg Kroah-Hartman Could you give a few concrete examples of the types of problems that cannot be solved with a more "Unixy" system (i.e. that necessitate the conglomeration of core utilities)? Perhaps this would lead to a more substantive discussion rather than a religious war.
+Alison Chaiken er...we were not "relegated" (that implies demotion). ARM has never been a PA in Fedora. It will be a PA, it will just take work. Don't worry about architectural fragmentation - a lot of work is underway to avoid that. For the record, I don't hate the "core" idea (disagree over some of the components). We need a tiny core Linux platform. That should include libraries, it should not include Firefox or LibreOffice!
+Greg Kroah-Hartman I have not read it yet but I have been to the project's website. When we finish our 7.1 release I will have to do some R&D on systemd and a new kernel.
+Geoffrey Said In Fedora we have migrated around 400 legacy sysv initscripts (around 100-200 left to go), which is about the total number of what's in Opensuse.

Opensuse has migrated a whole lot as well (combined their work with ours, Opensuse should be covered, I would think).

The only distribution I'm aware of that contains more init scripts than we did in Fedora is Debian, so smaller distributions like Gentoo, Arch and Mandrake should be covered at this point.

So if you and your distribution are thinking about migrating to systemd you should...

Drop by the systemd channel and ask before migrating legacy sysv init scripts, so you won't waste your time migrating something that might already be migrated but not yet upstreamed.
(Systemd units are usable across distributions.)

One of the lessons I have learned handling the migration phase in Fedora is that if I had to do it all over again, I would make the switch in one release cycle, so I recommend that you and any other distribution thinking about making the switch do it that way.
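For Geoffrey's migration question, a typical sysv-to-systemd conversion ends up as a small unit file; this one is purely illustrative (the daemon name and paths are made up, not taken from any distro):

```ini
# /etc/systemd/system/mydaemon.service - hypothetical conversion of a
# sysv initscript with start/stop/reload actions.
[Unit]
Description=Example daemon converted from a sysv initscript
After=network.target

[Service]
Type=forking
PIDFile=/run/mydaemon.pid
ExecStart=/usr/sbin/mydaemon --daemonize
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
```

Enabling it with `systemctl enable mydaemon.service` then replaces the old chkconfig/rc-update step.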
+Geoffrey Said , +Vladimir Pantelic is perfectly aware of tinyalsa, and we all know that other implementations of udev do exist (mdev at least); socket activation happens to be something xinetd has done for a long time, and I could keep going through all the features borrowed by systemd. My problem with the current approach about PROBLEMS (read it as if you were jumping and screaming) is that the problems are not defined and the whole process happens to be forceful.
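On the xinetd point: systemd has an inetd-compatible mode expressing the same per-connection socket activation idea, just in a different format. A hedged sketch, with the port and service names invented for illustration:

```ini
# A .socket unit plus a templated service started once per connection
# (Accept=yes), with the socket passed on stdin/stdout like inetd/xinetd.

# mycat.socket
[Socket]
ListenStream=10007
Accept=yes

[Install]
WantedBy=sockets.target

# mycat@.service
[Unit]
Description=Per-connection echo-style service

[Service]
ExecStart=/bin/cat
StandardInput=socket
```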
+Jon Masters I'm aware that the current status of Fedora-ARM is temporary, but nonetheless there's no doubt that Fedora is drifting towards enterprise-only and Canonical has thrown in its lot with consumer and gadgets. Going back on those choices is not going to happen. Politics and personalities aside, different choices for userspace are an inevitable if regrettable consequence.
+Alison Chaiken Hehe. I wish Fedora was "drifting toward enterprise-only" (well, not quite, more nuanced than that), but actually I would say the opposite is happening :)
You're free to interpret what I said however you choose :)
+Greg Kroah-Hartman +Jon Masters this discussion about a core distro makes me think the term 'fedora core' could be revived... After all, fedora is a bleeding-edge distro. fedora core could be the minimal set of packages to actually boot and interact with the core set of packages for a modern linux kernel. fedora core could move faster than even rawhide.
I already suggested that. Partly tongue-in-cheek. But partly serious.
+Ingo Molnar sorry, but systemd is rather being forced down our throats, and that's not what many are comfortable with (especially those of us who happen to know what a pilot project usually is, and how things tend to go when rushing forward and skipping it).

Separate /usr hysteria seems rather due to the braindamaged RH package base, with crap like stuff in /lib/udev depending on libs in /usr in various ways, and due to RHEV management pressure it seems. +Lennart Poettering and +Kay Sievers is this the case?

(my problem is that Red Hat tries to sell that to the world while pretending the reasons are completely different, that is, by lies -- that's just how Microsoft pushes its very, very needed products, BTW; and both of them do solve some real problems indeed, the question is the quantity and quality of the problems introduced instead)

+Martin Langhoff hey, we've seen a hostile GTK3 takeover and the ruination of the GNOME name by so-called mobile folks already -- all along with good ol' FUD and misrepresentation. With friends like those, foes can just sit back and rejoice.
I read through the thread and find this discussion interesting, but I don't understand how the Core is philosophically any different than what the LSB tried to do less directly (which I think was fairly well received) or ... I think it was ESR who started up the "core" Linux distro which SUSE and some others based themselves upon, which wasn't well-received and didn't last long, IIRC. I've searched, but my memory of that project is so hazy that I can't even find a lead about it. Does anyone remember the name?

It's open source. No one can shove anything down your throat. I say go ahead. If it works well, people will use it even if it doesn't match their philosophies, just as Linux eclipsed HURD. If it doesn't work well and isn't liked, it will die like the core distro mentioned above or a million other attempted forks.
Why all the hate for +Alison Chaiken for switching to Debian due to its better ARM support? So what if she's not "fixing the problem" by suffering in Fedora-land until they catch up. It's not like she's contributing to the problem either. Perhaps, with the benefit of mature ARM support, she will be enabled to contribute MORE to the community than by staying put?
Sorry, I don't know what cormierism is meant to be…
+Greg Kroah-Hartman Not sure I agree that Gentoo shouldn't support all options.  Sure, it can only support the options that people are willing to put the work in for, but as long as people are willing to do it that's what Gentoo is all about.

If Gentoo isn't about choice, then what good is it?  

By your own argument, there are a bunch of vertically-integrated distros that do one or two things really well.  I think that niche is already well-covered.  Why try to out-Ubuntu Ubuntu?

The whole point of a distro like Gentoo (and to a somewhat lesser degree Debian) is to be all things to all people.  For those who need a distro that is all things to all people, they're the best choice.  THAT is their niche that provides the polished user experience (if you can call a lack of polish, polish).

I agree that the lists can get a bit curmudgeonish at times, and I filter that out.  What is important is to support a reasonable set of options.

Personally I like the generic distro.  It allows me to have a moderately consistent experience across everything from a desktop to a server to a diskless set-top box.  Sure, there are niche solutions that might be closer out-of-the-box for each of those, but then I have to keep on top of multiple distros all moving in different directions, and heaven forbid the client on my desktop or set-top box is not compatible with the server component on my server.

My biggest concern with the vertical-integration trend is fragmentation.  Hopefully with everybody working on their own init+X11+DE and who knows what else we don't just all end up with suboptimal experiences, since nothing is shared but maybe glibc (but not gtk vs qt!).