Linus Torvalds

Post has attachment
Working gadgets: Atomic Aquatics scuba regulator.

I started buying my own scuba equipment after a trip to Belize many years ago, where the rental equipment was "sketch", as my daughter recently put it.

And I've replaced just about every piece of equipment since, but the Atomic Aquatics first and second stage regulators remain from my original batch.

.. and that's despite the fact that I bought the regulator used, which should probably tell you something about the price of their titanium regulators (ignore the plastic B2 face plate on the second stage, that has been replaced during normal service).

The SS1 is a later addition, as are those blue miflex hoses.

Post has attachment
Working gadgets: Ubiquiti UniFi collection.

I don't think gadgets that look like space pods are necessarily automatically good, but it does seem to be the theme today. First the Vostok I cat litter box, now the wireless networking UFO you attach to your ceiling (or wall).

I've used the UniFi stuff for several years now. It used to be how I bathed my house in the warm life-giving glow of WiFi radiation, but as I posted late last year, I actually use Google WiFi at home these days.

That didn't make the UniFi gadgets go away, though. It just meant that they are now used in more challenging areas that need a bit more flexibility than the regular home mesh routers offer. Unlike the regular home mesh networking, the Ubiquiti stuff comes as a smörgåsbord of options, so you can get the stuff that suits your needs.

When I originally started using UniFi, you had to run the UniFi controller on one of your machines (not all the time, but for setup), and I found that part somewhat annoying, especially since I tend to upgrade my machines more often than I want to upgrade my wireless network (and then I'd lose my configuration and have to redo it all over again).

These days, you can still do that if you want to, but I actually just use the small cloud controller that looks like a pack of gum and that you just plug into your router. It is just a small standalone embedded computer doing all the same things, in a form factor that you can then ignore. You can use it as a local controller or with cloud access, as you want.

The UniFi gadgets are definitely not as simple to set up as your modern average home mesh network routers, but you can add outdoor units and in general cover more than just a single home with them.

Purely hypothetically, if your buddy had a cabin by a river, and you wanted to make sure there is WiFi coverage while fishing (because, let's face it, fishing is boring), this is what you'd use.

And the PoE setup means that you only need one cable to the access points (and the mesh units mean that you can easily make some hops wireless).

Post has attachment
Working gadgets: Astronaut Cat Home.

Yes, it looks odd. Like a Russian space capsule for your pet. Yes, it takes a lot of room. Yes, it's expensive. But it actually does work.

The official name is "Litter-Robot III Open Air", which is awesome in a really cheesy way. It's arguably a horrible eye-sore, and sane people would just scoop their cat litter by hand from any number of perfectly good litter boxes that you can get for a small fistful of dollars.

But I've tried several different versions of automatic litter boxes, because if there is one defining word for me, it would be "lazy". The original littermaid worked fairly well for us (many many years ago - "lazy" is not some mid-life crisis, as much as a defining part of my life), but stuff would get stuck, if you know what I mean. And the version with metal tines took that to a whole new level. And the other random version by another manufacturer we tried would do the same.

For a couple of years we just gave up, and did the manual thing. The cat reacted to our inevitable failings by mostly going outside instead, which worked, but wasn't optimal. And a few months ago, I just said "F--k it, better living through technology", and decided to go for the Russian Cosmonaut Cat look, even if it seemed ridiculous.

And it really does work, so far. The cat is happy, I am happy, and we haven't had a single "things stuck" experience in three months so far.

Dammit, if I can go out in public wearing white socks and sandals (and if my wife married me despite that), our family can definitely live with an oversized cat litter box that looks like the Vostok I capsule.

Edit, since it is relevant: one very big reason for automation was that the dogs seem to be fascinated by the "organic almond roca", if you know what I mean. Enough said about that.

I was cleaning up my office over the last two days, looking for a piece of equipment that I'd lost (trust me, not that hard in what used to be a really messy office), and throwing out a lot of old gadgets that I no longer use.

Because I love crazy gadgets, and not all of them are great or stay useful. It's not always even computer stuff: my wife can attest to the number of crazy kitchen gadgets I have tried.

But while waiting for my current build to finish, I decided to write a note about some of the gadgets I got that turned out to work, rather than all the crazy crap that didn't. Because while 90% of the cool toys I buy aren't all that great, there's still the ones that actually do live up to expectations.

So the rule is: no rants. Just good stuff. I will also ruthlessly delete negative comments, in addition to the obvious spammy ones. Because this is about happy gadgets.

I suspect it's going to be a very short list.

Edit: trying out the "collections" feature on G+. Maybe it works, and maybe it just makes all these posts go into some black hole. We'll see.

Post has attachment
So +Dirk Hohndel just made the first public beta of the new Android +Subsurface release.

We've had a mobile app for a while now, but realistically it used to be more of a read-only experience: useful for having your dive log with you to check things like "how much weight did I have last time with this equipment" etc, but you realistically needed a real computer to actually enter the dives (and then just syncing over the cloud service to get that data to the mobile device).

The new 2.0 version has a ton of other improvements, but the big feature is that it's now approaching being useful as a tool to sync with your dive computer. The BLE downloading in particular is something that a lot of modern dive computers support, and that fits the mobile world really well.

So you can really let your inner geek shine in between dives by taking out your cellphone on the dive boat, and syncing your very latest dive immediately.

Note that it really is just a beta release right now, so in order to get it you have to sign up for that. And not all dive computers are supported, although an increasing number are (I bought two dive computers in the last few months just to work on that BLE thing - any excuse for new toys).

I also like the new hot pink theme. I think it started more as a joke to try out different colors when we had some bike-shedding discussion about the look of the app, but the pink theme really is fabulous.

Dirk is a wuss for not making it the default.

Post has shared content
I was traveling for LinuxCon China, and as usual in order to make long travels bearable, did a side trip for diving.

And again, as usual, I didn't do any photography, because +Dirk Hohndel just makes my photos look bad, bad, bad. So I'm sharing his pictures instead.

I like to think that I do bring a camera on my dive trips, it's just that I also bring along somebody competent to operate it (and take it through security - Dirk gets stopped way more than I do due to his camera equipment).

FedEx seems to have enabled DKIM.

Good for them.

Or rather, I guess it would be good for them, except their delivery manager mailer seems broken, so the emails all have

ARC-Authentication-Results: i=1;;
dkim=neutral (body hash did not verify);

and gmail considers them spam.
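For the curious, "body hash did not verify" means the receiving side recomputed the hash over the message body and it didn't match the bh= value in the DKIM-Signature header - typically because something rewrote the body after signing. A rough Python sketch of that recomputation, assuming "simple" body canonicalization and SHA-256 (real verifiers implement RFC 6376 in full):

```python
import base64
import hashlib

def simple_body_hash(body: bytes) -> str:
    # DKIM "simple" body canonicalization, approximated:
    # strip all trailing empty lines, then end with a single CRLF.
    body = body.rstrip(b"\r\n") + b"\r\n"
    # The bh= tag is the base64 of the hash over the canonicalized body.
    return base64.b64encode(hashlib.sha256(body).digest()).decode()

# If a mailer appends a footer (or otherwise rewrites the body) after
# signing, the recomputed hash no longer matches the signed bh= value:
signed = simple_body_hash(b"Your package is out for delivery.\r\n")
mangled = simple_body_hash(b"Your package is out for delivery.\r\nAd footer\r\n")
print(signed == mangled)  # False -> "body hash did not verify"
```

The example body text is made up; the point is only that any post-signing modification breaks the hash comparison.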

Insert "Annoyed Picard" meme picture here.

Post has shared content
Mainly a maintenance release from +Subsurface: no big new features, but lots of small fixes.
We are happy to announce the release of our latest update, Subsurface 4.6.4.

In the two months since our last release, we added a feature that a lot of users asked us about: the ability to quickly enter new dives manually with just depth and duration, without using the very nice, but sometimes a bit too time-consuming, graphical profile editor. We heard you - let us know what you think.

We also fixed quite a few bugs, improved the dive planner, improved import both from dive computers and from other dive log formats, and dealt with minor issues here and there.
For all the details, please take a look at the full announcement below.

As always, binaries for Windows, Mac, generic Linux and a number of specific Linux distributions are available from

Congrats to +SpaceX​ for the successful re-use and re-landing of the first stage.

Following the live feed is really quite amazing, especially when the SpaceX crowd ends up cheering on success. 

I thought I'd write an update on git and SHA1, since the SHA1 collision attack was so prominently in the news.

Quick overview first, with more in-depth explanation below:

(1) First off - the sky isn't falling. There's a big difference between using a cryptographic hash for things like security signing, and using one for generating a "content identifier" for a content-addressable system like git.

(2) Secondly, the nature of this particular SHA1 attack means that it's actually pretty easy to mitigate against, and there's already been two sets of patches posted for that mitigation.

(3) And finally, there's actually a reasonably straightforward transition to some other hash that won't break the world - or even old git repositories.

Anyway, that's the high-level overview, you can stop there unless you are interested in some more details (keyword: "some". If you want more, you should participate in the git mailing list discussions - I'm posting this for the casual git users that might just want to see some random comments).

Anyway, on to the "details":

(1) What's the difference between using a hash for security vs using a hash for object identifiers in source control management?

Both want to use cryptographic hashes, but they want to use them for different reasons.

A hash that is used for security is basically a statement of trust: and if you can fool somebody, you can make them trust you when they really shouldn't. The point of a cryptographic hash there is to basically be the source of trust, so in many ways the hash is supposed to fundamentally protect against people you cannot trust other ways. When such a hash is broken, the whole point of the hash basically goes away.

In contrast, in a project like git, the hash isn't used for "trust". I don't pull on people's trees because they have a hash of a4d442663580. Our trust is in people, and then we end up having lots of technology measures in place to secure the actual data.

The reason for using a cryptographic hash in a project like git is because it pretty much guarantees that there are no accidental clashes, and it's also a really really good error detection thing. Think of it like "parity on steroids": it's not able to correct for errors, but it's really really good at detecting corrupt data.

Other SCM's have used things like CRC's for error detection, although honestly the most common error handling method in most SCM's tends to be "tough luck, maybe your data is there, maybe it isn't, I don't care".

So in git, the hash is used for de-duplication and error detection, and the "cryptographic" nature is mainly because a cryptographic hash is really good at those things.
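As a concrete illustration of that content-identifier role: a git object name is the SHA1 of a small type-and-size header plus the content itself, so a single corrupted bit anywhere produces a completely different name. A minimal sketch for blob objects:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    # Git hashes "<type> <size>\0" + content, not the raw content alone.
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

good = git_blob_id(b"hello world\n")
flipped = git_blob_id(b"hello worle\n")  # one character off
print(good)             # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
print(good == flipped)  # False: corruption is immediately detectable
```

The first print matches what `echo "hello world" | git hash-object --stdin` produces, which is the "error detection" and "de-duplication" property in action: same content, same name; different content, different name.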

I say "mainly", because yes, in git we also end up using the SHA1 when we use "real" cryptography for signing the resulting trees, so the hash does end up being part of a certain chain of trust. So we do take advantage of some of the actual security features of a good cryptographic hash, and so breaking SHA1 does have real downsides for us.

Which gets us to ...

(2) Why is this particular attack fairly easy to mitigate against, at least within the context of using SHA1 in git?

There's two parts to this one: one is simply that the attack is not a pre-image attack, but an identical-prefix collision attack. That, in turn, has two big effects on mitigation:

(a) the attacker can't just generate any random collision, but needs to be able to control and generate both the "good" (not really) and the "bad" object.

(b) you can actually detect the signs of the attack in both sides of the collision.

In particular, (a) means that it's really hard to hide the attack in data that is transparent. What do I mean by "transparent"? I mean that you actually see and react to all of the data, rather than having some "blob" of data that acts like a black box, and you only see the end results.

In the pdf examples, the pdf format acted as the "black box", and what you see is the printout which has only a very indirect relationship to the pdf encoding.

But if you use git for source control like in the kernel, the stuff you really care about is source code, which is very much a transparent medium. If somebody inserts random odd generated crud in the middle of your source code, you will absolutely notice.

Similarly, the git internal data structures are actually very transparent too, even if most users might not consider them so. There are places you could try to hide things in (in particular, things like commits that have a NUL character that ends printout in "git log"), but "git fsck" already warns about those kinds of shenanigans.
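To make the NUL trick concrete (this is a toy check, not what `git fsck` literally runs): a loose git object decompresses to "<type> <size>\0<payload>", and a NUL byte smuggled into a commit's payload truncates what `git log` displays, which makes it a hiding place worth flagging:

```python
import zlib

def commit_payload_has_nul(loose_object: bytes) -> bool:
    # A loose object decompresses to "<type> <size>\0<payload>".
    raw = zlib.decompress(loose_object)
    header, sep, payload = raw.partition(b"\x00")
    if not sep or not header.startswith(b"commit "):
        return False  # not a commit object, nothing to flag here
    # A NUL inside the payload ends the printout in `git log` early,
    # so anything after it is effectively invisible to casual review.
    return b"\x00" in payload

# Hypothetical commit payload with hidden data after a NUL byte:
payload = b"tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\n\nmsg\x00hidden"
obj = zlib.compress(b"commit %d\x00%s" % (len(payload), payload))
print(commit_payload_has_nul(obj))  # True
```

The payload here is fabricated for illustration (the tree hash is git's well-known empty tree); the real fsck checks are more thorough, but this is the flavor of shenanigan they warn about.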

So fundamentally, if the data you primarily care about is that kind of transparent source code, the attack is pretty limited to begin with. You'll see the attack. It's not silently switching your data out from under you.

"But I track pdf files in git, and I might not notice them being replaced under me?"

That's a very valid concern, and you'd want your SCM to help you even with that kind of opaque data where you might not see how people are doing odd things to it behind your back. Which is why the second part of mitigation is that (b): it's fairly trivial to detect the fingerprints of using this attack.

So we already have patches on the git mailing list which will detect when somebody has used this attack to bring down the cost of generating SHA1 collisions. They haven't been merged yet, but the good thing about those mitigation measures is that not everybody needs to even run them: if you host your project on something like or, it's already sufficient if the hosting place runs the checks every once in a while - you'll get notified if somebody poisoned your well.

And finally, the "yes, git will eventually transition away from SHA1". There's a plan, it doesn't look all that nasty, and you don't even have to convert your repository. There's a lot of details to this, and it will take time, but because of the issues above, it's not like this is a critical "it has to happen now" thing.