The WhatsApp Scandal

Since adding the feature in April 2016, the WhatsApp app (or really its parent, Facebook) has paraded its “end-to-end encryption” as the reason to use it above all other smartphone messaging applications. It can handle calls, messages, video, files and just about everything any computer can and, because it’s encrypted end to end, nobody can read, see or hear any of it unless you want them to.

The pitch has worked; over a billion people now use the app and it is particularly prominent among people who need encryption, the technique that makes your messages unreadable to anyone but the person you’re sending them to.

Activists, particularly, use WhatsApp to communicate everything from the locations of emergency demonstrations to important announcements to the latest news about their personal lives. WhatsApp is, in effect, a universe of communications for a billion people. It does everything and everything it does is encrypted. With WhatsApp, they’ve been saying, you are safe from intrusion and spying.

The problem is, you’re not safe at all; the encryption can be circumvented. That news, first made public in the Guardian [1], has provoked a public gasp and a joust between developers and activists, covered by journalists who, anxious to present both “sides,” cloud the issue more than clarify it.

Unlike many other debates, there aren’t two sides to this story. WhatsApp is not safe because its encryption scheme has a huge vulnerability (or weakness): a product of what the company says is an attempt to make life a lot simpler for its users. Basically, the app replaces the keys used for encryption without telling you, and that means a third party (like the government) can read what you’ve written.

This takes a bit of explanation. First, the basics…

Encryption uses keys: long, random strings of numbers, symbols and letters that make no sense and cannot be guessed. You get two: a public key and a private key. When you send me an encrypted message, the encryption program garbles it beyond comprehension using my public key, which your messaging program downloaded (and saved) before sending me your first message.

When I get the message, I use my private key to decrypt it. Without the private key, your message is unreadable: the garble the program turned it into. I apply my key and your message is magically transformed back into human language. Unlike my public key, which is all over the place, my private key is on my computer (or phone) and nowhere else.

That’s the security and that’s how the keys work in encryption.
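
To make the two key roles concrete, here is a minimal sketch in Python using the PyNaCl library (my choice for illustration; WhatsApp actually builds on the more elaborate Signal protocol, but the roles of the public and private keys are the same):

    from nacl.public import PrivateKey, SealedBox

    # I generate a key pair. The private key never leaves my device.
    my_private = PrivateKey.generate()
    my_public = my_private.public_key  # this one is shared with the world

    # You garble your message beyond comprehension using MY public key...
    ciphertext = SealedBox(my_public).encrypt(b"See you at the demo at noon.")

    # ...and only MY private key can turn the garble back into language.
    plaintext = SealedBox(my_private).decrypt(ciphertext)
    assert plaintext == b"See you at the demo at noon."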

WhatsApp works the same way except for one thing. With an encryption program (like Signal) on my phone, when I change my keys, I know they’ve been changed. And when you change yours, I get a warning the moment I try to send you a new message, because the program detects the key change.
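
What that warning amounts to, roughly, is the app refusing to send anything until you acknowledge that a contact’s key no longer matches the one it remembered. A sketch (the fingerprint scheme here is my simplification, not Signal’s actual safety-number math):

    import hashlib

    # The app remembers the key fingerprint it last saw for each contact.
    known_fingerprints = {}

    def fingerprint(public_key_bytes):
        # A short digest of the key, standing in for Signal's safety number.
        return hashlib.sha256(public_key_bytes).hexdigest()[:16]

    def check_before_sending(contact, current_key_bytes):
        # Refuse to send, and warn, the moment a contact's key changes.
        current = fingerprint(current_key_bytes)
        previous = known_fingerprints.get(contact)
        if previous is not None and previous != current:
            raise RuntimeError(contact + "'s key changed: verify it first!")
        known_fingerprints[contact] = current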

With WhatsApp, if you turn off or break your phone, Facebook holds any messages sent to you. Then, if your phone comes back with a new key, Facebook signals the app of anyone who messaged you to re-encrypt the undelivered messages to the new key, which it does automatically.

But here’s the problem. Say I sent you a bunch of messages, encrypted to your old key, while your phone was turned off. Those messages are stored by WhatsApp and not delivered until you turn the phone on. When you do, and the new key is generated, the messages are re-encrypted to that new key and delivered. In other words, the messages I sent to your original key (which I knew was yours) are now picked up and read with a key I don’t know and haven’t verified.
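
Here is a sketch of that delivery behavior as the Guardian report describes it (the function names are mine, not WhatsApp’s code; send_to_server stands in for the real transport):

    from nacl.public import PrivateKey, SealedBox

    def send_to_server(contact, ciphertext):
        # Stand-in for the real transport; just shows a delivery happening.
        print("delivering", len(ciphertext), "bytes to", contact)

    def deliver_queued(queue, contact, new_public_key, warn_after=False):
        # Re-encrypt every undelivered message to the NEW key and send it.
        # Note what is missing: no check that the sender ever verified this
        # key, and no chance to stop delivery before it happens.
        for plaintext in queue:
            send_to_server(contact, SealedBox(new_public_key).encrypt(plaintext))
        if warn_after:
            # Off by default; even when on, the warning arrives only AFTER
            # the messages have gone out under the new key.
            print("Warning:", contact, "has a new security code.")

    # Your phone comes back online with a freshly generated key...
    new_key = PrivateKey.generate().public_key
    # ...and my queued messages are silently re-encrypted to it and delivered.
    deliver_queued([b"old message 1", b"old message 2"], "you", new_key)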

What’s more, WhatsApp doesn’t tell you it did this unless you’ve turned on a notification setting (which people rarely do) and, even then, it tells you after the new key has been generated and the old messages sent with it. You learn you’ve been hacked after you’ve been hacked. Privacy advocates are crying bloody murder: WhatsApp has touted its end-to-end encryption and now we find that it has a “backdoor” (a way around the encryption that doesn’t require breaking it).

Why is this important? Because it’s not secure encryption.

The federal government and its spying agencies, like the National Security Agency and the FBI, have a history of demanding that companies decrypt the encrypted data they store for users. This is what happened with Apple in February 2016 [2]. The government wanted it to decrypt the cell phone of a suspect in the San Bernardino terrorist attack and Apple said it couldn’t break the encryption. The government eventually found a way to do it but, up to then, it had been pressuring Apple to have its developers build a decryption method.

That dispute went to court. This time, were a demand made on Facebook for WhatsApp information, there would be no such defense. Facebook has a way of decrypting these messages. All it has to do is wait until a phone goes offline, generate a new key for it and share that key with a government spy. In fact, cellphones can be disrupted and forced off remotely. The data isn’t safe.

Would such a thing happen? That’s been one of the two issues being hotly debated on the Internet by the app’s developers and just about everyone else.

The debate’s been clouded by the developers’ assertion that this isn’t a backdoor at all. They knew exactly what they were building into the app and did so to make encryption easier: a worthy goal, given how complicated encryption can be for the average user.

WhatsApp itself issued a statement to the Guardian: “WhatsApp does not give governments a ‘backdoor’ into its systems and would fight any government request to create a backdoor.”

The problem, says my colleague and comrade Jamie McClelland [3] in his superb blog “Current Working Directory,” is that the government doesn’t have to ask. The backdoor’s already there. “…using the default installation, your end-to-end encrypted message could be intercepted and decrypted without you or the party you are communicating with knowing it,” he explains. “How is this not a back door?”

But McClelland and many others point out something even more disturbing: the complete lack of warning when keys are changed. “Why in the world would you distribute a client that not only has the ability to suppress such warnings, but has it enabled by default?”

That point also addresses the developers’ second argument. The issue, they say, isn’t what “could” happen but what “would” happen. Facebook insists that, were the government to demand its data, it would refuse.

It’s a laughable contention because Facebook’s traffic is among the most intercepted and captured data in the world. The government captures Facebook data regularly and Facebook admits as much. It doesn’t protest, claiming that its social media application is public and so protecting it makes no sense. So why in the world would it take a different position here, when the circumstances are basically the same, and, as Jamie points out, why would you enable the suppression of those warnings by default in the first place? Who, exactly, are you keeping in the dark?

What’s more, the government may not need cooperation from the company. Government hackers and criminal data thieves are notorious for successfully breaking into systems that have vulnerabilities, no permission needed. And WhatsApp, by all accounts, now has a big one.

Given what we already know about the blanket, constitution-dismissing surveillance under the Obama administration and what we can expect from the presidency of a rights-dismissive, paranoid crypto-fascist like Donald Trump, do you really want to use this app on your phone?

While not as robust in features, an app like Signal can encrypt messages reliably and should be in the toolbox of every activist (or anyone else, for that matter) using a cellphone. WhatsApp should not.

Alfredo Lopez writes about technology issues for This Can’t Be Happening!