
**Looking for comonoids**

In these waning days of Google+, I'm looking for nice examples of comonoids. Do you have any? If so, please share.

The context is, probably unsurprisingly, Linear Haskell: one of the interesting features of linear types is that they can support non-trivial comonoids (non-trivial, in this context, can be defined as not arising from a !-coalgebra).

But I only have very operational examples of such comonoids (my canonical example is a reference-counted reference, where you increase the reference counter when you duplicate). I don't feel these convey a good intuition, so I'm looking for better examples, which we can think of more extensionally.
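For concreteness, here is a minimal sketch of what such a class could look like, written in plain (non-linear) Haskell; the class name and methods are hypothetical. Without linearity every type is a comonoid in the trivial, copying way, which is precisely why the linear setting is the interesting one.

```haskell
-- A hypothetical Comonoid class, sketched in ordinary (non-linear) Haskell.
-- With -XLinearTypes the arrows would be linear ('%1 ->'), and 'dup' would
-- have to consume its argument exactly once -- which is what makes instances
-- like reference-counted references non-trivial.
class Comonoid a where
  discard :: a -> ()       -- counit: throw the value away
  dup     :: a -> (a, a)   -- comultiplication: duplicate the value

-- Without linearity, every type is trivially a comonoid by copying:
instance Comonoid Int where
  discard _ = ()
  dup x     = (x, x)

main :: IO ()
main = print (dup (42 :: Int))
```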


It's traveling season for me.

I'll be at

- the Haskell Implementors Workshop and the main ICFP conference (23–26 Oct)

- the EUTypes meeting in Aarhus (08–09 Oct)

- Haskell Exchange in London (11–12 Oct)

If you are going to be at any of these events, I'm looking forward to meeting you there.



**Of crypto and logic**

Let's consider git. One thing that git does is hash each commit with a cryptographic hashing function (SHA-1, I believe). The hash then serves as the name of the commit (for example, each commit refers to the commit before it by its hash). This is a common practice called content addressing; you can find it, for instance, in Flatpak and Snap, as well as, in a slightly different form, in the Nix package manager.

To reason about this, it is convenient to assume that the hashing function is injective: no two blocks have the same name. This seems completely fine, because it will be the case in practice.

But this assumption is, strictly speaking, inconsistent. Indeed, the space of names (hashes) is finite, while there are infinitely many possible commits. So the hashing function is far from injective: in fact, the preimage of every hash is infinite!

In reality, hashing functions approximate injectivity with the following properties:

- Collision resistance: it is vanishingly unlikely for a computer with limited computational power to create two commits with the same hash

- Second-preimage resistance: given a commit, it is vanishingly unlikely for a computer with limited computational power to create another commit with the same hash

(See these links for more precise definitions: https://en.wikipedia.org/wiki/Collision_resistance , http://web.cs.ucdavis.edu/~rogaway/papers/relates.pdf)
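To make the non-injectivity concrete, here is a toy hash in Haskell with only sixteen possible outputs (`toyHash` is made up for this example, and is nothing like SHA-1); with such a tiny range, the brute-force search that collision resistance forbids for real hashes succeeds instantly.

```haskell
import Data.Char (ord)
import Data.List (find)

-- A toy "hash" with only 16 possible outputs. By the pigeonhole principle,
-- any 17 distinct inputs must contain a collision; a real hash like SHA-1
-- only makes collisions expensive to find, not impossible.
toyHash :: String -> Int
toyHash s = sum (map ord s) `mod` 16

-- Brute-force a collision among a handful of inputs. This search is exactly
-- what collision resistance rules out at realistic hash sizes.
collision :: Maybe (String, String)
collision =
  find (\(a, b) -> a /= b && toyHash a == toyHash b)
       [ (x, y) | let ws = words "a b c d p q r s", x <- ws, y <- ws ]

main :: IO ()
main = print collision
```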

Now, of course, it is extremely inconvenient to work with such definitions. So, whenever possible, we will revert to assuming that our hashing function is injective.

My question is: is there a way to make reasoning in terms of injectivity precise and consistent? Maybe there is a topos where collision-resistant hashing functions are actually injective, but I doubt it. I rather imagine a modal logic in which hashing functions would be modally injective. That is not as convenient as direct injectivity, but it would be worth it if it were correct.

I've looked around quite a bit and haven't found anything that helped along these lines. So, instead, I'm presenting it as a challenge to the world.



**On weather and probability**

Among the many things I've put off writing about is the treatment of probability in weather-forecast apps. But fortunately, my hand was forced by this recent xkcd [ https://xkcd.com/1985/ ].

I'm that guy in the first two panels. I'm constantly infuriated by how probabilities of precipitation are displayed. If the app tells me that there is a 20% chance of rain between 12 and 2, 20% between 2 and 4, and 20% between 4 and 6, then what does that tell me about the chances of there being rain at all?

Nothing! Without some knowledge of how the probabilities correlate, all I can tell is that it's at least 20%. And that isn't a particularly useful thing to know: if it rains for one minute, it's pretty much as if it hadn't rained at all.

What we really want to know is the distribution of the rain duration. Or of the millimetres of rain. Or something. This could be approximated by giving quartiles, for instance. Not this rubbish.
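For instance (taking the 20% figures above, and assuming, hypothetically, either extreme of correlation), the chance of any rain at all can land anywhere between 20% and nearly 50%:

```haskell
-- Three time slots, each with a 20% chance of rain. The chance of any rain
-- at all depends entirely on how the slots correlate.
pSlot :: Double
pSlot = 0.2

-- If the slots were independent: 1 - (1 - p)^3, about 0.488.
pIndependent :: Double
pIndependent = 1 - (1 - pSlot) ^ (3 :: Int)

-- If they were perfectly correlated (rain in all slots or in none): p = 0.2.
pCorrelated :: Double
pCorrelated = pSlot

main :: IO ()
main = putStrLn (show pCorrelated ++ " <= P(any rain) <= " ++ show pIndependent)
```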

So why are we treated with these meaningless metrics, instead?


**It is easier to break a lot of keys than only one**

Factoring an integer into its prime factors is really hard. Cryptography relies on that to create public keys: take two prime numbers *p* and *q* (this is your private key) and compute their product *pq* (this is your public key).

Cryptographers came up with functions *f* which are easy to compute using *p* and *q*, and whose inverse *f⁻¹* is easy to compute using *pq*. But it is *really really hard* to compute *f* using *pq*: you need to know *p* and *q*.

So you can share *pq* freely: you are still the only one who knows *p* and *q*, hence the only one who can compute *f*. There are a number of applications of this idea [1, 2].
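As a toy illustration of such an f (textbook RSA signing, with deliberately tiny, made-up primes, no padding, and nothing you should ever use in practice): `sign` needs the private exponent derived from p and q, while `verify` needs only the public product.

```haskell
-- Textbook-RSA toy: all numbers are laughably small, for illustration only.
p, q, n, e, d :: Integer
p = 61; q = 53        -- the private primes
n = p * q             -- = 3233, the public key
e = 17                -- public exponent
d = 2753              -- private exponent: e * d ≡ 1 (mod (p-1)*(q-1))

-- Square-and-multiply modular exponentiation.
powMod :: Integer -> Integer -> Integer -> Integer
powMod _ 0 m = 1 `mod` m
powMod b k m
  | even k    = (half * half) `mod` m
  | otherwise = (b * half * half) `mod` m
  where half = powMod b (k `div` 2) m

sign :: Integer -> Integer     -- the post's f: needs p and q (via d)
sign msg = powMod msg d n

verify :: Integer -> Integer   -- the post's f⁻¹: needs only the public n
verify sig = powMod sig e n

main :: IO ()
main = print (verify (sign 65) == 65)
```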

However, here is something which is easy: computing a common divisor of two numbers *n* and *p* [3]. It's so easy that, given a million numbers, you can find a common divisor for every *pair* of them in a fraction of the time it takes to factor a single one of these numbers into primes. And here is the deal: if you find a common divisor of two public keys, you have found the private keys corresponding to both of them!

So as counter-intuitive as it sounds: the more keys you have, the faster you will break them.

Disclaimer: this is not the only way to produce public-private key pairs, and not all schemes are vulnerable to such an attack (at least that I know of). Also, of course, cryptographers are aware of this vulnerability, and work hard at generating private keys whose primes are very likely to be unique in the entire world.
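Here is the attack in essentially one line of Haskell (toy-sized primes, hypothetically shared between two keys by, say, a bad random-number generator):

```haskell
-- Two public keys that happen to share a prime factor.
key1, key2 :: Integer
key1 = 101 * 103   -- = 10403
key2 = 101 * 107   -- = 10807

-- Euclid's algorithm ('gcd') stays fast even at real, 2048-bit key sizes,
-- whereas factoring either key alone would be infeasible at that size.
sharedPrime :: Integer
sharedPrime = gcd key1 key2

-- One prime of each product reveals the other: both private keys fall.
main :: IO ()
main = print (sharedPrime, key1 `div` sharedPrime, key2 `div` sharedPrime)
```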

*See also the comments below for further resources, including what I believe to be the original source for this attack*

[1]: https://en.wikipedia.org/wiki/Public-key_cryptography

[2]: https://en.wikipedia.org/wiki/Digital_signature

[3]: https://en.wikipedia.org/wiki/Euclidean_algorithm


Here's a fun fact that I failed to notice before:

An image factorisation (aka epi-mono factorisation: a morphism f is written as the composite of an epimorphism followed by a monomorphism, the intermediate object being the image of f) is also an image factorisation in the opposite category.

So if a category has all image factorisations, so does its dual.

I've got about zero use for this fact at the moment (if you do, please share!) but I kind of like it.
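Spelled out (a minimal check, using only that composition reverses, and that epis and monos swap, in the opposite category):

```latex
% f : A -> C factors through its image:
%   f = m \circ e, with
%   e : A \twoheadrightarrow \operatorname{Im} f  an epimorphism, and
%   m : \operatorname{Im} f \rightarrowtail C    a monomorphism.
% In C^{op}, composition reverses and epis/monos swap roles:
f^{\mathrm{op}} \;=\; e^{\mathrm{op}} \circ m^{\mathrm{op}},
\qquad m^{\mathrm{op}}\ \text{epi},\quad e^{\mathrm{op}}\ \text{mono},
% so f^{op} is again an epi followed by a mono, through the same image object.
```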



I'm flying to POPL next week. I'll be speaking about Linear Haskell on Wednesday at 10:30 PT (18:30 UTC, I believe) in the Watercourt session.

Looking forward to seeing you there!




Ha! The UK Foreign Service estimates that romance languages are easier to learn for English speakers than German.

As I always say, English is basically a dialect of French.

(Please don't look at these few northern countries. They are just a distraction, believe me)




I've just published a tutorial on type-class reflection in Haskell: a mechanism to create type-class instances dynamically.

Instances are, by design, statically known, so reflection can have very powerful consequences. It's fun to think about, and it can be super useful sometimes, so it's good to know about.
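To see why this is powerful, here is the idea that reflection automates, sketched with an explicit dictionary (this is not the reflection library's API, just a hand-rolled version of what it does: at run time, an instance is a record of functions, and one that you can build from a run-time value):

```haskell
import Data.List (sortBy)

-- An explicit dictionary standing in for a hypothetical Ord-style instance.
newtype OrdDict a = OrdDict { cmp :: a -> a -> Ordering }

-- A function parameterised by a "dynamically created instance":
-- the same code runs with whichever dictionary you hand it.
sortWith :: OrdDict a -> [a] -> [a]
sortWith d = sortBy (cmp d)

ascending, descending :: OrdDict Int
ascending  = OrdDict compare
descending = OrdDict (flip compare)

main :: IO ()
main = do
  print (sortWith ascending  [3, 1, 2])
  print (sortWith descending [3, 1, 2])
```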




Earlier this month I gave a keynote talk at the FPConf in Moscow [ http://fpconf.org/ ].

FPconf has a fairly wide audience with functional programmers of every ilk (remember that Russia is home to JetBrains, where Coq programmers can be found!). The most commonly practiced functional programming language was Scala. And Haskell was second.

The subject of my talk was how Haskell's insistence on purity is most important when doing I/O-heavy programming (such as systems programming).

The rest of the conference is in Russian. Talks are progressively being uploaded, for the Russian speakers among you.


