How service providers of messengers with end-to-end encryption could be compelled to decrypt messages


The current situation reminds me a little of the beginning of the “Snowden affair”. Basically all of the so-called “technical experts” either remained silent or even supported the lies and technical absurdities of Snowden and his supporters. That was the main reason why I started blogging.

One of the effects of the Snowden hysteria was that companies started to implement “strong encryption” — encryption they claimed even they themselves could not break when faced with a lawful request. The “tech experts” loved it. And they did it again, they lied about technical facts. They claimed that it would make all users less secure if manufacturers and service providers were forced to introduce “backdoors” in order to respond to legal requests.

But this is simply not true. Here[1] I already explained what a secure smartphone “key escrow” could look like, one that would let only the smartphone manufacturer unlock the phone, only on its premises, and only with physical access to the device. Now let’s have a look at the infamous end-to-end encryption of messengers like WhatsApp or iMessage.

First let me highlight a fact that the “tech experts” don’t tell you: Most clients who want to use these messengers have dynamic IP addresses, sit behind firewalls, proxies or NAT gateways, and are often offline. This means other clients cannot connect directly to them, which in turn means any messenger that aims to be usable by ordinary users necessarily needs a central gateway. All users connect to the central gateway, which then connects the clients to each other (and stores messages while a user is offline).

The central gateway sits in the middle — and the middle is where, you guessed it, the so-called “man-in-the-middle” attack is always possible. Always.

The only thing that can prevent the man-in-the-middle attack is the client detecting that the other client’s public key has changed and alerting its user, who then refrains from sending the message (or sends a fake one). Let’s walk through an example: Bob sends Alice a WhatsApp message. It’s their first exchange, so Bob sends Alice his public key, and Alice sends Bob hers. Both store each other’s public keys (and because WhatsApp wants to be user-friendly, all of this happens in the background; neither Bob nor Alice will notice).
Now Eve, the eavesdropper, enters the game. She has access to the central gateway, and she wants to intercept the messages between Bob and Alice. She has no access to Bob’s or Alice’s secret keys, so if she wants to be able to decrypt the messages, she has to replace Bob’s and Alice’s public keys with her own.

This is technically no problem for Eve, because she sits in the middle. But both Bob and Alice could check whether their partner’s public key changed, and of course they could become suspicious that someone is eavesdropping on them. Several practical problems are hidden here, though:
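Eve’s substitution can be sketched with a toy Diffie-Hellman exchange. The parameters below are illustrative only — real messengers use elliptic curves and far more elaborate protocols — but the attack itself is identical: Eve hands each side her own public key, so each side unknowingly establishes a shared secret with her instead of with the other party.

```python
import secrets

# Toy Diffie-Hellman parameters (illustration only, not real cryptography):
# a Mersenne prime modulus and a small generator.
P = 2**127 - 1
G = 5

def keypair():
    """Return a (private, public) Diffie-Hellman key pair."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

bob_priv, bob_pub = keypair()
alice_priv, alice_pub = keypair()
eve_priv, eve_pub = keypair()        # Eve, at the gateway

# Eve forwards her own public key to each side instead of the real one.
# Bob computes a "shared" secret -- actually shared with Eve:
bob_secret = pow(eve_pub, bob_priv, P)
eve_bob_secret = pow(bob_pub, eve_priv, P)
assert bob_secret == eve_bob_secret

# The same happens on Alice's side:
alice_secret = pow(eve_pub, alice_priv, P)
eve_alice_secret = pow(alice_pub, eve_priv, P)
assert alice_secret == eve_alice_secret

# Eve can now decrypt Bob's messages with one secret and
# re-encrypt them for Alice with the other, and vice versa.
```

Note that neither Bob nor Alice sees anything wrong at this level: each of them completed a perfectly valid key exchange — just with the wrong counterpart.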

First, Bob’s and Alice’s client software would have to alert them. As far as I know, neither WhatsApp nor iMessage does this, at least not by default (remember, popular messengers aim to be user-friendly, and most users would likely be bothered by such alerts appearing every time a communication partner gets a new phone or resets an existing one).

The second problem is that even if users are alerted, most will likely ignore the warnings and assume that the partner simply got a new phone (this is of course much less likely if the user is a criminal or terrorist who anticipates being monitored).
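The check the clients would need here is a simple “trust on first use” comparison: remember the key seen at first contact, and warn whenever it changes later. A minimal sketch (function and variable names are hypothetical, not taken from any real messenger):

```python
import hashlib

# Fingerprints seen so far, keyed by contact name (hypothetical store).
known_fingerprints = {}

def fingerprint(public_key_bytes):
    """Derive a short stable identifier from a public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()

def check_key(contact, public_key_bytes):
    """Trust-on-first-use: remember the first key, warn on any change."""
    fp = fingerprint(public_key_bytes)
    stored = known_fingerprints.get(contact)
    if stored is None:
        known_fingerprints[contact] = fp   # first contact: just record it
        return "trusted (first use)"
    if stored != fp:
        return "WARNING: key changed -- new phone, reset, or interception?"
    return "trusted"

print(check_key("alice", b"key-v1"))   # first contact
print(check_key("alice", b"key-v1"))   # same key again
print(check_key("alice", b"key-v2"))   # key changed
```

The inherent weakness is visible right in the sketch: the warning cannot tell a harmless key change (new phone) from an attack, which is exactly why most users learn to click it away.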

But it gets even better for law enforcement: The service provider of a popular messenger like WhatsApp is also the one who writes the client software, and it is easy for them to ensure that the client never alerts when their (“Eve’s”) public key is presented.

Of course it is possible for users to run an alternate or patched client that still alerts, or to install additional software that monitors the key exchanges and warns them if something suspicious is going on. But only technically very skilled people are likely to do this.

So to summarize: service providers can implement a surveillance interface even if they use end-to-end encryption. And this interface wouldn’t make ordinary users less secure, because it can only be used by the service provider itself — which, as I explained, is technically able to do this anyway. So all users should at least anticipate the possibility in any case.

What is not possible is to do it in a way that no user could ever notice being monitored: technically skilled and paranoid users could detect it. But it would then be up to law enforcement to judge whether the surveillance target (and his communication partners) is such a technically skilled person, and to decide whether or not to risk the monitoring.

As a final remark, all virus-scanning proxies with “SSL interception” enabled do basically the same thing. HTTPS is also end-to-end encryption. To be able to see the content and scan it for viruses, the proxy needs to perform exactly the same man-in-the-middle attack. Where there is a will, there is a way.
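What such an intercepting proxy does can be reproduced with OpenSSL: it keeps its own CA (which is installed as trusted on the clients it protects) and mints a certificate in the name of each intercepted site on the fly. A minimal sketch, with hypothetical file and CA names:

```shell
# Create the proxy's own CA (this is what gets installed on the clients).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=Corporate Proxy CA" \
  -keyout proxy-ca.key -out proxy-ca.crt

# For each intercepted site, mint a fresh certificate in its name,
# signed by the proxy CA instead of the site's real CA.
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=www.example.com" \
  -keyout site.key -out site.csr
openssl x509 -req -in site.csr -CA proxy-ca.crt -CAkey proxy-ca.key \
  -CAcreateserial -days 7 -out site.crt

# A client that inspects the chain can spot the interception:
# the issuer is the proxy's CA, not a public CA.
openssl x509 -in site.crt -noout -issuer -subject
```

This is also exactly how a skilled user would detect the interception: inspect the certificate chain and notice that the issuer is not the CA that really signed the site’s certificate.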


[1] https://plus.google.com/+RolfWeber/posts/fPK3DyfYdNG 