(It’s been a while since I’ve written a full blog post. I’ve been putting most of my efforts into my weekly newsletter – be sure to subscribe to get weekly tips and news on cyber security and online privacy.)
Headline Hyperbole
This week, we saw the following headline from The Guardian: “WhatsApp vulnerability allows snooping on encrypted messages”. This story was immediately picked up by just about every other major tech news web site, with headlines that were even more dire:
- A critical flaw (possibly a deliberate backdoor) allows for decryption of Whatsapp messages (BoingBoing)
- WhatsApp Apparently Has a Dangerous Backdoor (Fortune)
- WhatsApp encrypted messages can reportedly be intercepted through a security backdoor (Business Insider)
I swear there were others from big-name sites, but I can’t find them – I think they’ve been deleted or updated. Why? Because this story (like so many others) was completely overblown.
Which brings us to the point of this article: our online news is broken. It’s broken for much the same reasons that the media is broken in the US in general – it’s all driven by advertising dollars, and ad dollars are driven by clicks and eyeballs. (See also: On the Ethics of Ad-Blocking). But the problem is even more insidious when applied to the news because all the hyperbolic headlines and dire warnings are making it very hard to figure out which problems are real – and over time, like the boy who cried wolf, it desensitizes us all.
WhatsUp?
Let’s take this WhatsApp story as an example. The vague headline from The Guardian implies that WhatsApp is fatally flawed. And the other headlines above are even worse, trotting out the dreaded and highly-loaded term “backdoor”. Backdoor implies that someone at WhatsApp or Facebook (who bought WhatsApp) has deliberately created a vulnerability with the express purpose of allowing a third party to bypass message encryption whenever they wish and read your private communications.
The first few paragraphs from the article seem to confirm this. Some excerpts:
- “A security vulnerability that can be used to allow Facebook and others to intercept and read encrypted messages has been found within its WhatsApp messaging service.”
- “Privacy campaigners said the vulnerability is a ‘huge threat to freedom of speech’ and warned it could be used by government agencies as a backdoor to snoop on users who believe their messages to be secure.”
- “If WhatsApp is asked by a government agency to disclose its messaging records, it can effectively grant access”
Now let’s talk about what’s really going on here. It’s a little technical, so bear with me.
The Devil In The Details
Modern digital communications use what’s called public key encryption. Unlike symmetric systems (which use a single, shared key to both encrypt and decrypt data), public key systems use two keys:
- Public key: Freely given to everyone, allows a sender to encrypt a message
- Private key: Fiercely protected and never shared, used to decrypt received messages that were encrypted with the public key
If you had a single, shared key, then you would have to find some secure way to get a copy of that key to your intended message recipient. You can’t just email or text it, or even speak it over the phone – that could be intercepted. The public key system allows you to broadcast your public key to the world, allowing anyone to send you an encrypted message that only you can decrypt, using your closely-guarded private key. In this same fashion, you use the other person’s public key to respond. This is insanely clever and it’s the basis for our secure web.
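To make that concrete, here’s a toy Python example using the PyNaCl library (my choice purely for illustration – WhatsApp itself uses the Signal protocol, which builds a great deal more machinery on top of this basic idea):

```python
from nacl.public import PrivateKey, SealedBox

# Bob generates a key pair. The private key never leaves his device;
# the public key can be handed out to anyone.
bob_private = PrivateKey.generate()
bob_public = bob_private.public_key

# Alice needs only Bob's public key to encrypt a message to him...
ciphertext = SealedBox(bob_public).encrypt(b"Meet me at noon")

# ...and only Bob's private key can turn that ciphertext back into plaintext.
plaintext = SealedBox(bob_private).decrypt(ciphertext)
print(plaintext)  # b'Meet me at noon'
```

Anyone in the world can run the encryption half of that with Bob’s public key; only Bob can run the decryption half.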
As is usually the case, the devil is in the details when it comes to crypto systems. The underlying math is solid and the algorithms have been rigorously tested. Where these systems break down is in the implementation. You can have an unbreakable deadbolt on your front door, but if you leave the key under your door mat or there’s a window right next to the lock on the door that can be broken… you get the idea.
Here’s the problem with how WhatsApp implemented their encryption. The app will generate public and private keys for you on the fly, and exchange public keys with the person you’re communicating with – all in the background, without bothering you. That’s fine – so far, so good. But let’s say Alice sends some messages to Bob while Bob is offline. WhatsApp on Alice’s phone has used Bob’s last known public key to encrypt them, and they’re waiting (either on Alice’s phone or perhaps on WhatsApp’s servers) to be delivered when Bob comes back online. In the meantime, Bob has dropped his phone in the toilet and has to buy a new one. He reinstalls WhatsApp on the new phone, and WhatsApp is forced to generate a new public/private key pair. When Bob comes online, Alice’s copy of WhatsApp figures out that the public key it has for him is no longer valid. And here’s where things fall apart: WhatsApp simply fetches Bob’s new public key, re-encrypts the pending messages with it, and re-sends them.
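To see that design decision in code form, here’s a deliberately simplified sketch. The names and the Conversation class are invented for illustration – this is not WhatsApp’s actual code, and the real Signal protocol is far more involved:

```python
def encrypt(plaintext: str, public_key: str) -> str:
    # Stand-in for real public key encryption (see the PyNaCl example above).
    return f"[{plaintext} locked with {public_key}]"

def send(ciphertext: str) -> None:
    print("sending:", ciphertext)

class Conversation:
    def __init__(self, recipient_key: str):
        self.recipient_key = recipient_key
        self.pending = []  # plaintexts queued while the recipient is offline

    def queue_message(self, plaintext: str) -> None:
        self.pending.append(plaintext)

    def recipient_key_changed(self, new_key: str) -> None:
        # The behavior in question: silently adopt the new key, re-encrypt
        # anything still queued, and send it -- no warning, no approval step.
        self.recipient_key = new_key
        for plaintext in self.pending:
            send(encrypt(plaintext, self.recipient_key))
        self.pending.clear()

# Bob drops his phone in the toilet...
chat = Conversation(recipient_key="bobs-old-key")
chat.queue_message("Meet me at noon")       # Bob is offline; the message waits
chat.recipient_key_changed("bobs-new-key")  # new phone shows up; message re-sent silently
```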
Bug or Feature?
That’s it. That was the fatal flaw. The “backdoor”. Did you catch it?
If you missed it, don’t feel bad. This stuff is complicated and hard to get right. The problem is that Alice was not warned of the key change and (crucially) was not given the opportunity to validate Bob’s new key. So, theoretically, some third party – let’s call her Mallory – could somehow force Bob offline for a period of time and then pretend to be Bob with a new device. This would trick Alice’s copy of WhatsApp into re-encrypting the pending messages with Mallory’s key and sending them to Mallory. So, if you’re following along, that means Mallory could potentially receive Bob’s pending messages. Not past messages. Just the pending ones, and potentially ones in the near future – at least until Bob comes back online.
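Continuing the toy sketch from above, Mallory’s entire “attack” is simply getting her own key accepted as Bob’s at the right moment; the silent re-send does the rest:

```python
chat = Conversation(recipient_key="bobs-old-key")
chat.queue_message("Meet me at noon")  # queued while Bob is (perhaps forcibly) offline

# Mallory shows up claiming to be Bob-with-a-new-phone. Alice is never asked,
# so the pending message is re-encrypted with Mallory's key and sent to her.
chat.recipient_key_changed("mallorys-key")
```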
This key change is part and parcel of how modern public key crypto messaging works. The only fault you can really find with WhatsApp here is that it doesn’t (currently) enable changed-key warnings by default, and it doesn’t hold the re-sending of pending messages until the user (in this case Alice) reviews the new key and approves the update (i.e., satisfies herself that it’s really Bob who is sending the new key).
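Building on the same toy Conversation class, that stricter policy would look something like this (again, hypothetical code for illustration only): warn about the key change and hold the queued messages until Alice explicitly approves the new key.

```python
class SaferConversation(Conversation):
    def recipient_key_changed(self, new_key: str) -> None:
        # Warn, and do NOT re-send anything yet.
        print("Security code changed for this contact. Verify before messages are re-sent.")
        self.unverified_key = new_key

    def approve_new_key(self) -> None:
        # Alice has checked the new key with Bob out of band (in person,
        # over the phone, etc.) and is satisfied it's really him.
        self.recipient_key = self.unverified_key
        for plaintext in self.pending:
            send(encrypt(plaintext, self.recipient_key))
        self.pending.clear()
```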
Is that a “backdoor”? No. Not even close. It was not maliciously and secretly implemented to allow surreptitious access by a third party. Furthermore, if Alice turns on the key change warning (a setting in WhatsApp), she will see whenever this happens – which defeats the whole point of covert surveillance. Is it a vulnerability or bug? No, not really. It’s a design decision that favors convenience (just going ahead and re-sending the messages) over security (forcing Alice to re-authenticate a recipient every time they get a new device, reinstall WhatsApp, or whatever). You can argue about that decision, but you can’t really argue that it’s a bug – it’s a feature.
UPDATE: The EFF has an excellent article on this with a very similar description. However, it also mentions a new effort called Key Transparency by Google which looks promising.
Remove Profit from the Press
So now let’s return to the big picture. Online news sites produce free web content that we consume. But producing that content costs money. In today’s web economy, people just expect to get something for nothing, which makes it almost impossible for sites to rely on a subscription model for revenue – if you ask people to pay, they’ll just go to some other site that’s free. So they turn to the de facto web revenue model: advertising. The more people who view the ads on your web site, the more money you get. And therefore you do whatever you can to get people to CLICK THAT LINK – NOW!! (This is called clickbait.) It’s the same influence that corrupted our TV news (“if it bleeds, it leads”).
Some things should just not be profit-driven. News – in particular, investigative journalism – is one of those things. The conflict of interest corrupts the enterprise. TV news used to be a loss leader for networks: you lost money on news with the hopes of building loyalty and keeping the viewers around for the shows that followed.
Maybe that ship has sailed and it’s naive to believe we can return to the days of Walter Cronkite or Edward R. Murrow. So what are we to do? Here are some ideas (some of which came from this excellent article):
- Subscribe to local and national newspapers that are doing good work. If you don’t care to receive a physical paper, you can usually get a digital-only subscription.
- Give money to organizations that produce or support non-profit investigative journalism. You might look at ProPublica, Institute for Non-Profit News, The Investigative Fund, NPR, and PBS. This article also has some good ideas.
- Share news responsibly. Do not post sensationalistic news stories on your social media or forward hyper-partisan emails to everyone you know. Don’t spread fake news, and when you see someone else doing this, (respectfully) call them out. Not sure if a story is real? Try checking Snopes.com, Politifact, or FactCheck.org. This article also has some great general advice for spotting fake or exaggerated news.
- When you do share news stories, be sure to share the original source whenever possible. This gives credit where credit is due (including ad revenue). If you found a derivative story, you may have to search it for the link to the original source.
- Use ad-blockers. This may seem contrary to the above advice, but as I mentioned in this blog, right now the ad networks are being overly aggressive on tracking you around the web and are not policing their ads sufficiently to prevent malware. It’s not safe to blindly accept all ads. You can disable the ad-blocker on individual web sites that you wish to support – just be aware of the risk.