This is just very cool.

I finally got around to playing with the "AppImage" version of +Subsurface, and it really does seem to "just work".

Not only does it allow a project to create a complex Linux application (in the case of Subsurface, one that uses very recent library versions that many distributions don't even have available yet) that works on multiple distributions; you don't even really need to install it. Download a file, mark it executable, and run it.

It comes with its own little embedded compressed ISO filesystem that gets mounted and contains all the required libraries.
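
The "download, mark executable, run" workflow can be sketched end-to-end. Since no real AppImage is at hand here, a tiny stand-in script plays the role of the downloaded file; all names are illustrative, not from the Subsurface project:

```shell
# Stand-in for a downloaded AppImage (a real one would be something
# like Subsurface-x86_64.AppImage fetched from the project's site):
printf '#!/bin/sh\necho hello from the app\n' > demo.AppImage

chmod +x demo.AppImage   # step 2: mark it executable
./demo.AppImage          # step 3: run it (prints "hello from the app")
```

With a real AppImage the same two commands apply unchanged; the embedded runtime mounts the compressed ISO filesystem and launches the application from it.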

Sure, it means that the end result is much bigger than a distro-native binary would be, but if you want a way to build applications for your users without limiting them to a particular distribution, or having to build fifteen different images, it really looks like it works very well.
 
And now it's your responsibility to keep up with security issues in every library you bundle. Sound like fun?
 
Yay, somebody rediscovered NeXTStep's application (and library) Bundle format only 20 years later! ;)
 
This could be very useful, especially for applications that are a pain to install and configure (looking at you, Skype), or that users only need a couple times. 
 
This is the holy grail when it comes to distributing your software across distros, but sadly, as +Måns Rullgård pointed out, who is going to maintain QA on that?
 
Sounds familiar, like the old static vs. dynamic linking discussion?

 
+Måns Rullgård When the alternative is "we can't support distro X", who cares?

The whole "omg, the sky is falling" security argument is bogus. It's particularly bogus for non-core user-centric applications (ie the stuff that doesn't come with the distribution, and that the distribution makes actively impossible to build).
 
+Andreas Beder So it goes a bit further than just libraries. You often need other files too - things like icons, theme files, translation files, etc etc.

And not having to install it means that you don't end up having to have root, but it also means that you don't have to worry about all the usual packaging format issues or cleanup issues etc.

And no, it's not a new problem, and yes, others have done similar things before (and are doing similar things right now). But it's still nice to see a project like this that fixes a problem that is very real.
 
Interesting.

+Matt Della easiest way to deal with Skype was creating a VM for VDI.
 
+Matt Della exactly. I don't think the "AppImage" approach is for core - or even very common - applications. Those you should get from your distribution anyway.

But the things that are a bit odd? Like logging your scuba dives? First off, there are too few users for your distro to care, and it's too much effort for the developer to have to bend over backwards about distro differences (trust me, +Subsurface tried that for years). The library version skew just makes it impractical, unless the library is old and very stable. The effort to test (or even to care, quite frankly) is often too high.

Now, if you don't use any odd libraries, you may never hit the problem. Maybe building the app natively "just works" on every distribution, and you may be able to use a build-server (like the OpenSuSE one) that generates different kinds of packages. But with the GUI people still switching things around all the time, that usually doesn't work all that well for complex GUI apps.
 
The AppImage approach is really really useful. It allows us to do on Linux what we can do on the other OSs we support: use the system libraries that are there and bundle the additional pieces that we need.

Subsurface certainly is a somewhat extreme case as we want very very recent versions of a few libraries (libgit2, Qt5 for example) and we even have our own forks of a couple of libraries because we want to be somewhat "ahead" of upstream.

But what is not just a Subsurface issue is the other underlying problem: it is a major pain to get an application to be included (and kept current) in dozens of Linux distributions. And from an application perspective what I care about is that my users can use the latest version. Regardless of the release cycle of the distribution, regardless of what the distribution thinks of the libraries that I use in my application.

So yes, I'm a huge fan of this and think that Simon (probono) has done a really cool job here. I'm sure there's room for improvement (and the collaboration with Simon has been just wonderful so far, so I'm certain things will continue to get better), but the fact that we now can tell users "oh, just use the AppImage and it will work" is such an improvement.

Just yesterday a user wasn't able to get things to work on Fedora 23 (which should soon get a semi-official package of Subsurface 4.5.2, as they are more willing to work with our needs to get our own versions of libdivecomputer and libmarblewidget bundled). But instead of telling them how to work around the issues they ran into, the simple solution was "use the AppImage".

Yay.

 
Whilst I love the Linux way(s) of distributing software, it does pose a barrier to entry for some people. The bundle approach is the winner for broad adoption.
 
I was a huge proponent of this years ago. The original author had some good examples back on the old linux portable apps forum.

That being said, the bundled libraries can overwrite system libraries and vice versa, so security isn't such an issue.

Another thing to keep in mind is user accessibility. Adopting this approach would make it a lot easier for users to delete and manage applications on Linux, and would also enable an app store. This alone should justify its need to exist: the ability to reduce user friction and dramatically decrease the difficulty of user onboarding.
 
Portability between different distros and distro versions is such a huge unaddressed problem in Linux it isn't even funny.

As a developer of cross-platform applications I'm sick and tired of being stuck having to support random stone-age compilers and random stone-age dependency versions just because of Linux.

As a user I'm sick and tired of being stuck on old versions of applications. That is if said application is even available on this specific distribution in this specific version in the first place.

The time wasted due to this problem being unaddressed and ignored is staggering. Even worse are the tons of different workarounds people try to employ to get around these issues which - due to the problem at its core remaining ignored - all end up brittle and restricting. Building on old compilers with prehistoric distributions, messing with LD_PRELOAD trying to force in specific dependencies, chroot, attempting to static linking the world, special wrappers trying to redirect all external dependency references during runtime and various combinations thereof are just the tip of the iceberg.
 
Eventually Linux will reinvent the Translucent File System and the useful parts of Multics. This might not be a bad thing, but the process could be short-cut by revisiting how these problems were handled when memory was a seriously limited resource and rebooting a system was a very infrequent (and expensive) operation.
 
+Linus Torvalds​​​​ that's the wrong approach to security: you're definitely straddling the fence with this one, to say the least. Don't get me wrong, the end result is fantastic (given its opacity, I assume): but, folks get confused about the whole "there are no absolutes" thing at times, hence the "oh noes sky is falling (has fallen)" security mantra.
 
+Nathan Schulte the most secure system is the one that nobody uses.

But if you don't see that that is a problem, then I can't help you. 
 
+Linus Torvalds​​​​​, I agree, it's a tough problem. Your app sounds like a great candidate where none of the security argument much matters. Though, also, you know what security is and more importantly how it's applied. I think the mantra's concern is that lots of developers have little understanding, and thus it's not apparent when/where it counts/hurts.

Reading about dealing with the various distributions sounds like a nightmare indeed. Having tools for this side of the line should only help both ends: it vitalizes the project (better experience, more users), which can in turn vitalize the distributions (more competition, easier to satisfy distribution packaging demands with momentum (active users, demand/competition)).
 
AppImage appears to be the way to go... Even if a bundled library is compromised, doesn't it limit the breach to the bundled app? (Considering the app was isolated/packaged intelligently)

Bundled apps with libraries bring the security focus of the application to the developer where it should be rather than on the system administrator. The end result is a better designed, security-minded, and well-coded program.

I am tired of hearing lazy devs baulk at the sound of packaged apps because they do not want to manage anything other than the color in their favorite ide whereas containerized deployments are proving to be very secure and are becoming the gold standard (SELinux, chroot jails, VM/docker per app, etc.)

It is time we have devs think about all aspects of their program lest we have another java-version-dependency-fiasco.

/rant
 
That graphic had me thinking you had taken up playing Adventure on a 2600.
 
+Paul Swanson , +Isaac Leonard​ I was going to mention that this seems like Docker for desktops, and as with Docker I don't know if it'll be good or bad for Linux, but it solves a problem people have.
 
+Andrew Wippler you're assuming that the application developer will do a better job of patching library versions than the distribution's package manager. At this point, many aspects of shared libraries don't matter on PCs with ample everything; being able to patch one library and fix a herd of apps is probably one of the only "big" pluses left.

Reality isn't black or white, so much as grey smacks in the head all over. So who cares as long as something gets done.
 
This is just a way to make closed source applications easier to install on Linux. It is not really a problem for open source applications except on Fedora. All other distros have a strong packager community. People who are experts at making packages.

A bundle is also many times bigger... there are security issues...

I really hope it will not catch on.
 
I'd rather have something like the steam runtime. It still allows apps to share runtime libraries, but it's user maintainable...
 
+Daniel Sandman you really don't know what you are talking about.

+Subsurface​ is very much an open source app, and it has been very painful to make packages. Fedora had by no means been the biggest problem either.

The security issues are way overblown.

The size increase is unfortunate, but unavoidable. Until different distributions have the same base libraries, and until base libraries don't change ABIs. Which would be lovely, but isn't likely to happen.
 
+Linus Torvalds The thing with closed source apps is that they have blobs... which often require a specific version of dependencies. So here the bundle solution is solving a problem.

The only thing you have to be concerned when it comes to open source.. is to release the source. Then packagers at different distros should handle the rest. Not an issue for you the developer. Fedora has historically been bad at this.. why I mentioned it. Was a while since I last used Fedora so could very likely be wrong about how it is now. It's maybe a subjective perception but somehow I feel the embrace of these kinds of solutions have mainly come from Fedorians.
 
I have not followed the +Sagemath and Ntop projects closely enough in recent years, but I do know they were both building specific libraries as part of the project to ensure "agnostic deployment". Packaging for +Sagemath in particular has been a curly issue.
 
+Daniel Sandman open source software faces the same ABI pickles as everyone else when distributing binaries. Which is to say, usually great until you want to run on an older version of glibc or a totally different libc; or any other libraries with such ABIs. Package managers don't always handle that very well without backporting things yourself or changing distributions.

Building from source does not magically fix the problem of building one set of binaries for my entire household instead of compiling on EVERY machine.
 
+Terry Poulin Because that is so frequent </irony>

Any distro with a good packager community does not experience such issues.

...Packaging on Linux works really well as long as the source is open. Sure you can hit an occasional issue while compiling but those are pretty rare and fixable. Also.. OBS is a nice solution.

Have you ever had to compile a package for every machine in your household? Didn't think so... some package managers do handle it somewhat gracefully, in fact. I mean, with Yaourt (a Pacman wrapper) I can download the source from GitHub and it compiles and installs from whichever commit I like. The same thing can be accomplished with OBS.
 
If this fixes a problem and works why so much hate?

The important thing is that the individual user has the opportunity to choose what to use, with a platform and ecosystem like Linux.
You are not forced into a specific way of doing things like on some other platforms.

The choice of Qubes, Fedora, Ubuntu or whatever desktop distribution is still up to personal preference and liking.
And the same goes with the different applications and ways of deploying them.

Then of course security shouldn't be totally ignored, imho.
 
If stuff is open source, we don't really need this.
Packagers create the packages and ship any needed libs.

Even works on closed-source apps if there's a tad of documentation on it.
 
This tool is a solution to a non-problem. The only problem it genuinely solves is "move the installed app somewhere else", which is irrelevant for you. Everything else (one installer file with bundled libs) was solved by proprietary apps like games long ago. They all build on old distros and bundle libs too (so no, you can't support Debian if you can't build on Debian because it is too old). A real problem is, for example, "what to do with graphics drivers, which are libraries?", and it is just ignored by this tool.
 
+Terry Poulin 1) you can compile on one machine for EVERY machine. 2) you can use same distro on EVERY machine
 
Hmm. My immediate worry (having only read the readme) is that AppImage apps seem to lose knowledge of the working directory they were started from. (I guess it might be preserved in an environment variable.)

I tend to cd into the directory with my files and launch apps from there (yes, even GUI ones) and like them to save back to that directory. Not that all normal applications manage to make that easy. (Open/LibreOffice, I'm looking at you.)

+Serge Pavlovsky​ I'm not convinced it is a non-problem. I've used an OS in the dim past which worked like this, with the filer registering apps automatically when it saw them, and it did have properties I occasionally miss. Take this weekend, when my partner left her laptop PSU at home but needed to run an app I wrote for her which she keeps on a memory stick. My distro only had newer, incompatible library versions. Having the app self-contained would have helped there.
 
I'd like to mention that this problem is being addressed by xdg-app, which is (I think) pushed by GNOME.

You can, now, see their progress as they are more or less able to run gimp or inkscape.

More info on the project:
https://wiki.gnome.org/Projects/SandboxedApps

You can follow +AlexanderLarsson, who is a main developer of it, and Christian Hergert (gnome-builder https://plus.google.com/104653208253044284320), who will work to make gnome-builder able to easily integrate xdg-app.
 
+Roger Gammans maybe it is your problem, but it is not linus' problem, at least he didn't mention it being his problem
 
As +Cédric Briner​ said: xdg-apps may be a better option when it gets a bit more mature: "apps" can depend on "runtimes" which provide a shared, stable set of libs. This reduces the size and facilitates proper security. Of course apps can still bundle additional or patched libs. Of course this will only work if a small number of runtimes take over...
 
+Daniel Sandman
Your viewpoint is one of the main reasons why the year of the Linux desktop will start on the first day after time has ended. Users will never fiddle with header files, compilers, make, gmake, cmake, dependencies (sometimes even circular ones, as happened to me) and so on. If confronted with such inconveniences they will give Linux the boot, and continue to use M$ Windblows.
 
+Daniel Sandman The "release the source and let the distros worry about packaging"-approach only works for popular software (i.e. software package maintainers care about). The amount of small and niche software is way too large to expect distros to contain all of them. This is a solution for those that don't end up in the distros.
 
If only everyone used sane distributions that don't use ancient software for "stability" (often causing unstable behaviour on newer hardware).
 
My concern with the approach of shipping prebuilt images is that you end up with fifty unpatched versions of openssl, fifty out of date ca-certificate packages allowing revoked certs. Out of date timezone data, and all the things that distributions currently solve.

Maybe we need a simpler way of saying "build me a contained installation of this package from Debian Unstable and keep it up to date". This would solve the portability problem without introducing all the security issues of freezing versions and never upgrading them.
 
Bah, the whole "can't support distro X" is pretty much a creation of package managers not handling multiple minor versions of a lib installed side by side, even though ld can tell them apart easily via soname.
 
Hm, alas AppImage doesn't seem to be a solution for our needs: running the Subsurface download on Debian 7 complained about 'fuse: failed to open /dev/fuse: Permission denied'. And running on CentOS 5 (kind of a hard-core test but alas RHEL5 is still somewhat popular in the Enterprise world) fails because libfuse.so.2 was not found.

Maybe libfuse could be linked statically, and failure to open /dev/fuse could be signalled more gracefully.

For the time being, I'll have to stick to a 'statically link everything and build stuff on ancient Linux systems' approach, I guess.
 
Storage is free, why not use it?
 
AppImage looks really good! Just one question: how does it handle updates? Is it easy to put together an AppImage application that can automatically download updates and replace itself with a new version?
 
+Jukka Suomela AppImage does not do that AFAICR, you need something like Qt Installer Framework (which is really nice, btw) in addition. That gives you multiplatform support as a bonus.
 
+Carl Michael Giblhauser I never said users should. A user is not a packager. Normal users do not have to fiddle with those things today and will not need to tomorrow either. A user always has the ability to, though...
 
+Florian Tischner Except on Arch and possibly Debian based... On Arch making a package is so damn easy almost everyone can do it. Which means pretty much everything released can be found through the package manager. If you aren't able to, the community usually helps to solve it for you.

Also as I mentioned earlier.. SUSE's Open Build Service is pretty damn good. 
 
Does this also solve anything in regards to upgrading the package once it's on a users machine?
 
+Daniel Sandman just shut up. You have no idea what you are talking about. None. ZERO.
People like you are the problem. People who trash-talk solutions without understanding the problem. Who spout bullshit about what works and doesn't work with distributions.
No, distributions do not just package something that is open source. They have their own weird ideas of how things should be. Debian is an especially terrible example; there those ideas are just braindead. "this library doesn't compile on SPARC32, it therefore must not be in Debian" - "oh we packaged a two year old version of this, that's good enough" - "oh, you, the app developer must follow our random, arbitrary, onerous rules in order for us to package your software".
Funnily, Fedora is actually one of the easier ones to work with. But either way, the point is that I, as the app maintainer, don't want my app bundled in a distribution anymore. Way too much pain for absolutely zero gain. Whenever I get a bug report my first question is "oh, which version of which distribution? which version of which library? What set of insane patches were applied to those libraries?". No, Windows and Mac get this right. I control the libraries my app runs against. Period.
End users don't give a flying hoot about any of the baloney the distro maintainers keep raving about. End users don't care about anything but the one computer in front of them and the software they want to run.
With an AppImage I can give them just that. Something that runs on their computer. As much as idiots like you are trying to prevent Linux from being useful on the desktop, I can make it work for my users despite you.
 
+Dirk Hohndel Mind your language please... You are a grown man.

Use OBS if the distro does not like to work with you, then, if you necessarily feel you have to take responsibility for building packages yourself. You could make it build packages automatically after every commit to git if you so wished. And I said "possibly Debian based" as I know it is not always straightforward with Debian. They do have a crazy big catalog of binaries, though, which means people have gotten packages into them. I am more of a rolling guy and agree with the complaints you made against them.
 
+Linus Torvalds​ early Christmas for you. 3 col google+ on 'very wide screens' and a binary distribution solution.
 
+Frerich Raabe  If fuse is not available on your system, then you can loop-mount or extract the AppImage (it is also a valid ISO file) and execute the AppRun file contained therein.
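
That fallback can be spelled out concretely. Since an AppImage is also a valid ISO 9660 image, a system without FUSE can loop-mount it (root required) and run the AppRun entry point directly; the file name and mount point below are illustrative:

```shell
# Loop-mount the AppImage read-only (it doubles as an ISO filesystem):
mkdir -p /tmp/appimage
sudo mount -o loop,ro ./Subsurface.AppImage /tmp/appimage

# Run the entry point contained in the image, then clean up:
/tmp/appimage/AppRun
sudo umount /tmp/appimage
```

Extracting the ISO contents to a directory and running AppRun from there works the same way, without needing root at runtime if an ISO extraction tool is available.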
 
+probono  Hey man, Long time no talk. Nice to see you around. I'm glad you saw this! :)
 
As cool as it might be, apps distributed like this are a security nightmare.

 
I assume this is probably cleaner than building the package myself from source, or using something like FPM (fucking package manager) to force the package build if I don't have enough info for the SPEC file?
 
+Måns Rullgård +Nathan Schulte +Linus Torvalds To be honest, from working on a Linux application that supports multiple Linux distros, it would be much easier to handle the security updates of libraries yourself than it is to build for all these distros. The number of times an integration/unit test build breaks on an older version of RHEL/CentOS because the developer was using a newer version of a library is quite high, I find. I don't see how security is an issue either, to be honest; it's as easy as putting yum update at the start of a build script and bundling that new library with the app.
 
+Linus Torvalds Did you check Google's Go? It already supports cross compilation and binary distribution but of course not all programs can be written in Go ;)
 
+Eric Curtin Handling security means you have to track every library you use and make a new release of your app whenever one of them fixes an issue.
 
+Måns Rullgård
"And now it's your responsibility to keep up with security issues in every library you bundle. Sound like fun?"

Any project that has a Windows or Mac version (and the majority of the popular OSS projects have) already has to keep up with bundled libraries' versions anyway.

Also -- and way more importantly -- the "one copy of a shared library" policy is a double edged sword. Just to give an example, a distro's patch introduces a bug in OpenSSL and suddenly all your apps are critically vulnerable -- http://practical-tech.com/operating-system/linux/open-source-security-idiots/243/

So much for security with package management...

How many more bugs have been and will be introduced by distros and compromise each installed app's security? Bundled apps are not affected by this kind of problem. So which model is actually more secure?
 
+Måns Rullgård Yeah, you do have to track your dependencies security updates. So what? You have to do so anyways if you take security seriously. Not every security issue can be fixed in the dependency.

The flipside is that I can actually cut off old, buggy and potentially insecure dependency versions instead of ending up in an ifdef legacy hell to support them. I don't have to implement crude workarounds for security issues with older versions. I don't have to keep supporting inferior/unsafe protocols just because some of my users are stuck to some LTS dependency on some old LTS Linux distro. I can even use modern tooling and compilers.

Even better: Users can actually update to newer, more secure versions of an application. Versions actually tested, signed and supported by the original developers instead of some random package maintainer.
 
+Todor Imreorov Making cross-distribution packages has always been a goal. It's also really hard. We've tried klik, autopackage and others. But we'll check out this option.
 
+Måns Rullgård To be honest, tracking the security updates of some applications is far easier than building for all the platforms, but maybe that depends on the application you are working on.
 
+probono
Would it be possible to provide (optionally, if needed) a GUI for users to type their root password (explaining that FUSE is missing and this is why the program asks for a password) and have the launcher loop-mount to a temp folder and run the program from there? It'd be somewhat better than relying on the user to know how to do that manually.
 
The idea of making bundles for Linux apps is not new, but AFAIK embedding a tiny runtime in the ISO header, turning an AppImage into an ELF executable, is THE catch of AppImage project.
 