Konstantin Ryabitsev
Linux Foundation IT guy
851 followers

Konstantin's posts

Post has shared content
I thought I'd write an update on git and SHA1, since the SHA1 collision attack was so prominently in the news.

Quick overview first, with more in-depth explanation below:

(1) First off - the sky isn't falling. There's a big difference between using a cryptographic hash for things like security signing, and using one for generating a "content identifier" for a content-addressable system like git.

(2) Secondly, the nature of this particular SHA1 attack means that it's actually pretty easy to mitigate against, and there have already been two sets of patches posted for that mitigation.

(3) And finally, there's actually a reasonably straightforward transition to some other hash that won't break the world - or even old git repositories.

Anyway, that's the high-level overview; you can stop there unless you are interested in some more details (keyword: "some". If you want more, you should participate in the git mailing list discussions - I'm posting this for the casual git users that might just want to see some random comments).

Anyway, on to the "details":

(1) What's the difference between using a hash for security vs using a hash for object identifiers in source control management?

Both want to use cryptographic hashes, but they want to use them for different reasons.

A hash that is used for security is basically a statement of trust: if you can fool somebody, you can make them trust you when they really shouldn't. The point of a cryptographic hash there is to basically be the source of trust, so in many ways the hash is supposed to fundamentally protect against people you cannot trust in other ways. When such a hash is broken, the whole point of the hash basically goes away.

In contrast, in a project like git, the hash isn't used for "trust". I don't pull on people's trees because they have a hash of a4d442663580. Our trust is in people, and then we end up having lots of technology measures in place to secure the actual data.

The reason for using a cryptographic hash in a project like git is because it pretty much guarantees that there are no accidental clashes, and it's also a really, really good error detection tool. Think of it like "parity on steroids": it's not able to correct for errors, but it's really, really good at detecting corrupt data.
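
(As a concrete illustration of that "content identifier" idea, here is a small Python sketch of how git names a blob object: the ID is just the SHA1 of a short "blob <size>" header, a NUL byte, and the raw contents. The helper name is made up for the example, but the scheme itself is what git actually uses for blob objects - which is why identical content de-duplicates to a single object and why any corruption shows up as a hash mismatch.)

    # Sketch: compute a git blob object ID from raw content.
    # The ID is SHA1("blob <size>\0" + data), nothing more.
    import hashlib

    def git_blob_id(data: bytes) -> str:
        header = b"blob %d\x00" % len(data)
        return hashlib.sha1(header + data).hexdigest()

    # Should print the same ID that `echo 'hello world' | git hash-object --stdin` reports.
    print(git_blob_id(b"hello world\n"))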

Other SCMs have used things like CRCs for error detection, although honestly the most common error handling method in most SCMs tends to be "tough luck, maybe your data is there, maybe it isn't, I don't care".

So in git, the hash is used for de-duplication and error detection, and the "cryptographic" nature is mainly because a cryptographic hash is really good at those things.

I say "mainly", because yes, in git we also end up using the SHA1 when we use "real" cryptography for signing the resulting trees, so the hash does end up being part of a certain chain of trust. So we do take advantage of some of the actual security features of a good cryptographic hash, and so breaking SHA1 does have real downsides for us.

Which gets us to ...

(2) Why is this particular attack fairly easy to mitigate against, at least within the context of using SHA1 in git?

There are two parts to this one: one is simply that the attack is not a pre-image attack, but an identical-prefix collision attack. That, in turn, has two big effects on mitigation:

(a) the attacker can't just generate any random collision, but needs to be able to control and generate both the "good" (not really) and the "bad" object.

(b) you can actually detect the signs of the attack in both sides of the collision.

In particular, (a) means that it's really hard to hide the attack in data that is transparent. What do I mean by "transparent"? I mean that you actually see and react to all of the data, rather than having some "blob" of data that acts like a black box, and you only see the end results.

In the pdf examples, the pdf format acted as the "black box", and what you see is the printout which has only a very indirect relationship to the pdf encoding.
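
(To make that concrete: assuming you have downloaded the two published colliding PDFs - named shattered-1.pdf and shattered-2.pdf here - a few lines of Python show what the collision does and does not buy the attacker. The SHA1 digests are identical, but the bytes differ, so any other hash, or any inspection of the content itself, tells the two files apart immediately.)

    # Sketch: hash the two demo PDFs with SHA1 and SHA256.
    # File names are assumptions - point them at wherever the demo files live.
    import hashlib

    def digests(path):
        with open(path, "rb") as f:
            data = f.read()
        return hashlib.sha1(data).hexdigest(), hashlib.sha256(data).hexdigest()

    sha1_a, sha256_a = digests("shattered-1.pdf")
    sha1_b, sha256_b = digests("shattered-2.pdf")

    print(sha1_a == sha1_b)      # True: the SHA1 collision
    print(sha256_a == sha256_b)  # False: the underlying data differs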

But if you use git for source control like in the kernel, the stuff you really care about is source code, which is very much a transparent medium. If somebody inserts random odd generated crud in the middle of your source code, you will absolutely notice.

Similarly, the git internal data structures are actually very transparent too, even if most users might not consider them so. There are places you could try to hide things in (in particular, things like commits that contain a NUL character, which cuts off the printout in "git log"), but "git fsck" already warns about those kinds of shenanigans.
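
(A toy illustration of that particular hiding place - the commit text below is made up - shows why a NUL byte is suspicious: everything after it is still part of the object and still feeds the SHA1, but a C-string-style printout stops at the NUL, so a casual reader of "git log" output would never see the hidden tail.)

    # Sketch: a NUL byte hides trailing bytes from NUL-terminated printing,
    # but those bytes are still hashed as part of the object.
    message = b"Fix typo in README\x00hidden junk that still affects the SHA1"

    print(message.split(b"\x00", 1)[0].decode())  # what a NUL-terminated print shows
    print(len(message))                           # the full, hashed length is longer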

So fundamentally, if the data you primarily care about is that kind of transparent source code, the attack is pretty limited to begin with. You'll see the attack. It's not silently switching your data out from under you.

"But I track pdf files in git, and I might not notice them being replaced under me?"

That's a very valid concern, and you'd want your SCM to help you even with that kind of opaque data where you might not see how people are doing odd things to it behind your back. Which is why the second part of mitigation is that (b): it's fairly trivial to detect the fingerprints of using this attack.

So we already have patches on the git mailing list which will detect when somebody has used this attack to bring down the cost of generating SHA1 collisions. They haven't been merged yet, but the good thing about those mitigation measures is that not everybody needs to even run them: if you host your project on something like http://github.com or kernel.org, it's already sufficient if the hosting place runs the checks every once in a while - you'll get notified if somebody poisoned your well.

And finally, the "yes, git will eventually transition away from SHA1". There's a plan, it doesn't look all that nasty, and you don't even have to convert your repository. There are a lot of details to this, and it will take time, but because of the issues above, it's not like this is a critical "it has to happen now" thing.

Short of dumping money into a Luddite movement, nothing is bringing back blue collar jobs lost to automation. Just wait till millions of truckers start losing their jobs to automated EV fleets.

http://flip.it/LzsNcN

Did you say "alt.right"? If I've learned anything in my life, it's not to touch anything under the "alt." hierarchy if you know what's good for you. #nntp #newsgroups #getoffmylawn

Ok, last one, I promise. :)

Poll (43 votes, visible to Public):
  28%  ^]
  60%  ~.
  12%  +++

Poll (75 votes, visible to Public):
   7%  more
  89%  less
   1%  most
   3%  view

Poll (85 votes, visible to Public):
   6%  less +F
  91%  tail -f
   4%  tailf [*]

Poll (125 votes, visible to Public):
  52%  find ... -exec foo {} \;
  48%  find ... | xargs foo

Why don't we surface our roads with pykrete in Canada for the winter months -- instead of salting them? It seems it would be a heck of a lot cheaper in the long run, since we'd no longer be dealing with damage to the road surface (freeze/thaw is murder on asphalt), vehicle corrosion, and the impact on the drainage system and ecology from all the salt runoff.

Pykrete (which is sawdust mixed with water and then frozen) is known to be extremely durable and resistant to melting due to low heat conductivity, offers excellent grip for tires and should adhere well to the road surface (see https://en.wikipedia.org/wiki/Pykrete).

I envision fleets of automated zambonis that operate in four stages:

1. Collect fallen snow from roads
2. Drive to an electrical charge port where the collected snow is melted to a temperature just above freezing (inside the same zamboni that collected it in the first place)
3. Mix in sawdust to create the necessary pykrete consistency
4. Go out and "pave" the roads

Apart from the zamboni technology, this should be cheap for municipalities as well, as most of them generate plenty of sawdust on their own from branch cutting and tree removal services. When the roads melt, use the same zambonis to vacuum up the sawdust and either recycle it next year or compost it.

Poll (31 votes, visible to Public):
  16%  Boxers
   6%  Briefs
  77%  Commandline