Give Thanks and Donate

Thanksgiving is almost upon us here in the US. I felt it was a good opportunity to say thanks to some of the wonderful organizations out there working very hard to improve our security, protect our privacy, and defend our rights. If you believe in a cause but don’t have the time to get directly involved, then donating money to groups with the skills, time and talent to truly make a difference is an excellent way to go. You might even get a break on your taxes, too. (Note that some companies have donation matching programs, as well – so you might ask your employer about matching your contribution.)

Many of these organizations will send you something for donating – a shirt, hat, sticker, magnet, etc. Display it proudly for others to see. Perhaps it will cause them to look it up or ask you about it, offering another opportunity to spread the word or spark some much-needed debate on these issues.

Electronic Frontier Foundation

If I was going to pick one organization that just does it all (and does it well), I would have to pick the Electronic Frontier Foundation. (This won’t come as a surprise to anyone who follows my podcast.) Staffed with top-notch technologists, lawyers and policy wonks, EFF is at the forefront of privacy, transparency, security, and free speech issues. They have been involved in hundreds of important legal cases, including an impressive string of legal victories. The EFF web site hosts some wonderful security guides, including tutorials and materials for people willing to teach others. They have created two of my most recommended browser plugins: HTTPS Everywhere and Privacy Badger. And that’s just the tip of the iceberg.

Saving Democracy, Fighting for Your Rights

Of course there are many other superb organizations that are fighting for your rights, holding governments and corporations accountable, and trying to improve our democratic institutions. Here are just a few that you might consider supporting:

You can find these and other great security and privacy links on my Resources page.

Browser Safety: Choose Your Weapon

Your web browser is your primary portal to the wild and woolly world wide web. For many people, the web browser effectively is the Internet. As such, it’s one of the most vulnerable areas of our attack surface (i.e., the sum of all the places where we might be susceptible to attack by digital bad guys). Therefore, it behooves us to choose the most formidable browser we can, bolting on whatever extra ‘armor’ and ‘stealth’ technologies we can find.

How Do We Define a “Safe” Browser?

There are at least two primary aspects to ‘safety’ when it comes to web browsing: security and privacy. A secure browser will do whatever it can to prevent you from visiting bad web sites, warn you against entering sensitive information on insecure pages, identify sites that aren’t encrypted, and strictly enforce policies that prevent malvertising and other malicious web exploits.

However, while security is something that all browsers claim to seek, privacy is another matter entirely. Because much of the web is “free”, most web sites have turned to advertising for revenue. And unlike traditional newspaper and billboard ads from days of yore, web advertising is built on hordes and gobs of personal data. Companies like Google and Facebook collect intimate details on you in order to serve you highly targeted (and much more lucrative) ads. Data, as they say, is the new oil. In their lust for data, online advertisers have gone seriously overboard with their tracking technology, prompting many to use ad blockers. So a good web browser will help protect your privacy by severely limiting the ability of web sites and marketers to track you.

The Big Four

The four most popular browsers today are Chrome (60%), Internet Explorer/Edge (20%), Firefox (13%), and Safari (4%). It wasn’t long ago that Microsoft had a near monopoly on web browser use, but Google’s Chrome browser has come on strong and clearly holds the lead today. Internet Explorer and Edge are the default browsers on Windows PCs and Safari is the default browser on Apple Macintosh computers. Firefox (which rose from the ashes of Netscape Navigator) is the only browser in the top four that is open source (meaning the source code is freely available for review). Firefox is made by the non-profit Mozilla Foundation, which is funded primarily by search royalties. Despite very different aesthetics, at the end of the day, all four of these browsers do basically the same thing: they show you web pages. So how do you know which is safest?

Choose Your Weapon: Security

Let’s just get this out of the way now: it’s almost impossible to know which browser is the most secure. This is largely because all of these browsers are constantly rolling out new security-related features, fixing security-related bugs, and generally trying to claim the title of ‘most secure’. That’s a good thing – they’re competing to be the best, and so we all win. There are dedicated hacking contests to reveal bugs in browsers, but it’s hard to say whether the number of bugs found in these contests really reflects the security of the browser. How likely were bad guys to find these bugs? How severe are the bugs? What about the bugs they didn’t find? These hack-a-thons also don’t address factors like how quickly the browser maker fixes their bugs and whether the browser is smart enough to self-update (because if you don’t have the latest version, you don’t have the bug fixes). It’s really hard to compare the relative security of web browsers (see this article to understand what I mean).

However, if I had to pick a winner here, I’d probably have to choose Chrome. Google is doing some fantastic work in the realm of computer and web security. Furthermore, they’re using Chrome’s dominance to prod web sites to be more secure, as well. That said, I think Firefox and Safari are also fairly secure browsers. And you could argue that because Firefox is open-source, it can actually be audited by cybersecurity experts – unlike the other three major browsers. Ideally, this vetting leads to fewer bugs.

Choose Your Weapon: Privacy

Unlike security, there are significant and important differences between the four major browsers when it comes to privacy. And this (to me) is the real differentiating factor.

While Google has been a true leader in terms of security, they’re pretty much the worst in terms of privacy. Their whole business model revolves around advertising (Google makes about 90% of its money from ads). And that leads to an enormous conflict of interest when it comes to protecting your personal data and web surfing habits. Apple has gone out of their way to basically be the anti-Google, making it a point of pride to collect as little data on their users as possible (and causing a collective freak-out by advertisers). But Firefox is also doing some great work in this area. In the coming months, Firefox will enable some wonderful anti-tracking technologies of their own.

So who’s the winner in terms of privacy? Today, I’d say it’s a toss-up between Firefox and Safari, with Chrome being dead last. Internet Explorer and Edge are somewhere in between, but with Microsoft’s recent penchant for collecting user data, I would put them closer to Chrome.

And the Winner Is…

Based on everything I’ve read, I personally choose Firefox as my main browser. No browser is 100% secure and it’s very hard for even the most valiant browser to completely protect your privacy. But I think Firefox, on balance, is the best of the bunch. Browsers are constantly adding new features, so I will have to revisit this periodically (and I will update this article accordingly).

That said, there is at least one reason to also have Chrome installed on your system. And we’ll talk about that below.

Beyond the Big Four

There are actually several other web browsers you might want to consider. This article covers some of them, but I’ll just mention three.

The fifth most popular browser is Opera, and many people enjoy using it. If you’re not satisfied with any of the browsers on my list, you might give it a try. Opera is fast and works on both Mac and PC.

The Brave browser is an open-source browser built for privacy, with built-in ad blocking and tracking protection. However, in a move to try to acknowledge the need for ad-based revenue, it also has a mechanism to insert its own ads, which opens up a lot of issues. I would wait and see on this one.

Lastly, the Tor browser is all about privacy – in fact, it tries to achieve true anonymity (though that is extremely difficult in practice). It’s based on Firefox and builds in several kick-butt privacy tools that are too technical to sum up here. But if you really need to surf privately, you should give Tor a serious look.

Less Is More

Modern browsers all have the ability to add more functionality through plugins or add-ons. These extensions can both significantly raise and lower your level of security and privacy. So no discussion of browser security would be complete without discussing them. Let’s start with the plugins you should remove.

First and foremost, delete Adobe Flash. Flash was created years ago to enable all sorts of fun things – animations, video or audio, and online games. But Flash is horrendously buggy and mostly obsolete. So just remove it. (Note that the Chrome browser actually has Flash built-in and Google ensures that it’s up to date – so if you find a web site that requires Flash, you can use Chrome for that site… and then go back to Firefox!)

In the same vein, I would delete both Java and Silverlight plugins, if you have them. They’re buggy and mostly unnecessary.

Finally, go through all your browser plugins and just remove (or disable) any that you don’t need. Every one of those add-ons is a potential security or privacy risk.

If you later find that you do need any of these plugins, you can always just reinstall them… with the following major caveat…

DANGER! Beware Plugin Requests!

If you ever get a pop-up from a web site saying that you need some plugin in order to do something, never ever follow their link to install it!! This is an extremely common and effective way to install malware. When you see a pop-up like this, close it and then go directly to the official site for that plugin and install it from the source. A Google search should take you to the right place, if you don’t know where to go.

Plugins for Better Privacy and Security

The one plugin you should add to your browser to increase your security is a password manager like LastPass. Not only will a password manager help you to create strong and unique passwords for every web site, it also won’t be fooled by fake (“phishing”) web sites.
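
If you’re curious how that phishing protection works, here’s a minimal Python sketch of the idea (the vault contents and URLs are made up for illustration): a password manager only offers a saved password when the page’s host actually belongs to the registered domain it was saved under, a check that a rushed human eyeball often fails.

```python
from urllib.parse import urlsplit

# Hypothetical vault entry: the password is tied to the exact registered domain.
vault = {"paypal.com": "correct-horse-battery-staple"}

def autofill(page_url):
    """Offer a saved password only if the page's host belongs to the saved domain."""
    host = urlsplit(page_url).hostname or ""
    for domain, password in vault.items():
        if host == domain or host.endswith("." + domain):
            return password
    return None  # look-alike domains get nothing

print(autofill("https://www.paypal.com/signin"))          # filled
print(autofill("https://paypal.com.evil-login.example"))  # None: phishing look-alike
```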

In terms of enhancing your privacy, Firefox and Safari already have a lot of built-in features to prevent tracking. However, there are a handful of add-ons I strongly recommend you install. It’s safe to add them all; they play nicely with each other.

  • uBlock Origin. This is a very good ad blocker, which protects you from tracking and malvertising. (Don’t get “uBlock” – you want “uBlock Origin”.)
  • Privacy Badger. From the wonderful folks at the EFF, this plugin watches for suspicious tracking behavior and blocks it – it even learns over time to get better.
  • HTTPS Everywhere. Also from EFF, this plugin ensures that any site you visit that can support encrypted communication will do it by default.
  • Decentraleyes. Kinda hard to explain briefly, but this plugin serves common web page resources (shared scripts and libraries) from a local copy, so the third-party servers that normally host them never see your requests and can’t use them to track you.

To install a plugin, find your browser’s menu option for plugins, add-ons or extensions. You can search for the above plugins and install them directly into your browser.

Privacy First: Apple Strikes Another Blow

Full disclosure: I’m an Apple user and have been for decades. But one of the reasons I’ve been such an ardent Apple fan is that I’ve always felt like they had my back. I’m not sure how much of this is altruism – you could argue the real reason is more cynical but also more compelling: it’s their business model. Apple is a hardware company. They make computers, phones, tablets, and other devices. The software that comes with those devices is almost entirely free and is used to increase the value of the hardware. Most people still think of Google as a search engine company. If you happen to know that Google makes the Android smartphone operating system, then you might think of them as a hardware and software maker, too. And they are. But Google makes about 90% of their revenue from advertising.

Why does that matter? Because this means that Google’s primary product is you. They want to know all about you (and I mean all about you) so they can sell highly-targeted ads. Google may be extremely keen to protect your privacy… from everyone but Google. While Apple certainly has access to your personal information, and they even have a small ad business, they appear to be taking great pains to avoid abusing their position, drawing a stark contrast with Google and others. They actually appear to care about protecting your privacy and see this as a key marketing differentiator.

Apple Fires Another Shot Across the Bow

Apple was the first browser-maker to block third-party cookies by default about 6 years ago, which caused a huge fuss. Google was even caught circumventing this and ended up paying a $22M fine (which is, of course, nothing to Google).

And now Apple is at it again: daring to protect its users’ privacy using a new technology called Intelligent Tracking Prevention (ITP). This feature, built into Apple’s Safari browser, adds some common sense limits on the scope of web tracking. The details are rather arcane (if you want to give it a shot, try this article), but the upshot is that Apple is actually proactively trying to protect its users’ privacy without breaking the way the web works (at least not the parts that users care about). It’s not preventing you from seeing ads. It’s not even preventing you from being tracked. It’s just putting some strict time limits on how long you can be tracked, depending on the user’s apparent actual interest in the product or web site. Sounds reasonable, doesn’t it?

Let Me Get My Tiny Violin

Not to web advertisers. They’re collectively freaking out, calling it “sabotage”. But let’s just be clear here that people never asked to be tracked. Advertisers love to claim that their targeted ads are so amazingly beneficial that removing them is actually harming the people they’re tracking. From an open letter to Apple from several ad agencies:

Apple’s unilateral and heavy-handed approach is bad for consumer choice and bad for the ad-supported online content and services consumers love. Blocking cookies in this manner will drive a wedge between brands and their customers, and it will make advertising more generic and less timely and useful. Put simply, machine-driven cookie choices do not represent user choice; they represent browser-manufacturer choice.

There are several problems with this statement. First, ITP doesn’t block ads and it doesn’t even prevent tracking – it just puts a time limit on tracking. Second, making ads more generic just takes things back to the way ads were before tracking (ie, less creepy) – which is how advertising worked for decades or even centuries. Finally, users rarely bother tweaking any settings – even if they know and understand how tracking works, many people simply can’t be motivated to change their default browser preferences. It’s the Tyranny of the Default. People don’t actively say “I want to be tracked! Where is the setting that allows that? I want to make sure it’s enabled!” But sadly they also don’t do anything to stop being tracked.

Time for a Change

So kudos to Apple for trying to strike a balance and sticking up for their users. But I’m honestly more pleased that this has once again raised the issue of privacy and tracking. Most people just aren’t aware of the degree to which they’re being tracked, nor have they probably considered the consequences for themselves and for society in general. It’s going on constantly, right under our noses, and the results have so far been kept largely secret. (If you want to get just a taste of what these marketers know about you, check out aboutthedata.com from Acxiom or My Account from Google).

We got here because people don’t want to pay for web content – which led us to the ad-based web. We can debate the ethics of ad-blocking, but we really just need a new revenue model for the web that doesn’t incur horrendous privacy issues (for example, the new Brave web browser and micropayments).

[NOTE: Check out this week’s podcast where I go more in-depth on how and why we’re tracked, and what you can do to protect your privacy.]

Beware Hype and Click-Bait

(It’s been a while since I’ve written a full blog post. I’ve been putting most of my efforts into my weekly newsletter – be sure to subscribe to get weekly tips and news on cyber security and online privacy.)

Headline Hyperbole

This week, we saw the following headline from The Guardian: “WhatsApp vulnerability allows snooping on encrypted messages”. This story was immediately picked up by just about every other major tech news web site, with headlines that were even more dire:

  • A critical flaw (possibly a deliberate backdoor) allows for decryption of Whatsapp messages (BoingBoing)
  • WhatsApp Apparently Has a Dangerous Backdoor (Fortune)
  • WhatsApp encrypted messages can reportedly be intercepted through a security backdoor (Business Insider)

I swear there were others from big-name sites, but I can’t find them – I think they’ve been deleted or updated. Why? Because this story (like so many others) was completely overblown.

Which brings us to the point of this article: our online news is broken. It’s broken for much the same reasons that the media is broken in the US in general – it’s all driven by advertising dollars, and ad dollars are driven by clicks and eyeballs. (See also: On the Ethics of Ad-Blocking). But the problem is even more insidious when applied to the news because all the hyperbolic headlines and dire warnings are making it very hard to figure out which problems are real – and over time, like the boy who cried wolf, it desensitizes us all.

WhatsUp?

Let’s take this WhatsApp story as an example. The vague headline from The Guardian implies that WhatsApp is fatally flawed. And the other headlines above are even worse, trotting out the dreaded and highly-loaded term “backdoor”. Backdoor implies that someone at WhatsApp or Facebook (who bought WhatsApp) has deliberately created a vulnerability with the express purpose of allowing a third party to bypass message encryption whenever they wish and read your private communications.

The first few paragraphs from the article seem to confirm this. Some excerpts:

  • “A security vulnerability that can be used to allow Facebook and others to intercept and read encrypted messages has been found within its WhatsApp messaging service.”
  • “Privacy campaigners said the vulnerability is a ‘huge threat to freedom of speech’ and warned it could be used by government agencies as a backdoor to snoop on users who believe their messages to be secure.”
  • “If WhatsApp is asked by a government agency to disclose its messaging records, it can effectively grant access”

Now let’s talk about what’s really going on here. It’s a little technical, so bear with me.

The Devil In The Details

Modern digital communications use what’s called public key encryption. Unlike private key systems (which have a single, shared key to both encrypt and decrypt data), public key systems use two keys:

  1. Public key: Freely given to everyone, allows a sender to encrypt a message
  2. Private key: Fiercely protected and never shared, used to decrypt received messages that were encrypted with the public key

If you had a single, shared key, then you would have to find some secure way to get a copy of that key to your intended message recipient. You can’t just email or text it, or even speak it over the phone – that could be intercepted. The public key system allows you to broadcast your public key to the world, allowing anyone to send you an encrypted message that only you can decrypt, using your closely-guarded private key. In this same fashion, you use the other person’s public key to respond. This is insanely clever and it’s the basis for our secure web.
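
To make that concrete, here’s a tiny sketch using the third-party Python `cryptography` package. It’s purely illustrative: it’s not how WhatsApp or your browser actually implements this, and real systems layer a lot more on top.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Bob creates a key pair: he publishes the public key and guards the private key.
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice encrypts with Bob's PUBLIC key...
ciphertext = bob_public.encrypt(b"Meet me at noon", oaep)

# ...and only Bob's PRIVATE key can turn it back into the original message.
print(bob_private.decrypt(ciphertext, oaep))  # b'Meet me at noon'
```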

As is usually the case, the devil is in the details when it comes to crypto systems. The underlying math is solid and the algorithms have been rigorously tested. Where these systems break down is in the implementation. You can have an unbreakable deadbolt on your front door, but if you leave the key under your door mat or there’s a window right next to the lock on the door that can be broken… you get the idea.

Here’s the problem with how WhatsApp implemented their encryption. The app will generate public and private keys for you on the fly, and exchange public keys with the person you’re communicating with – all in the background, without bothering you. That’s fine – so far, so good. But let’s say Alice sends a message to Bob while Bob is offline. WhatsApp on Alice’s phone has used Bob’s last known public key to encrypt these messages, and they’re waiting (either on Alice’s phone or maybe on WhatsApp’s servers) for Bob to come online to be sent. In the meantime, Bob has dropped his phone in the toilet and must get a new one. He buys a new phone, reinstalls WhatsApp, and WhatsApp is forced to generate a new public/private key pair. When he comes online, Alice’s copy of WhatsApp figures out that the public key it has for Bob is no longer valid. And here’s where things fall apart. WhatsApp will then get Bob’s new public key and re-encrypt the pending messages, and then re-send them.

Bug or Feature?

That’s it. That was the fatal flaw. The “backdoor”. Did you catch it?

If you missed it, don’t feel bad. This stuff is complicated and hard to get right. The problem is that Alice was not warned of the key change and (crucially) was not given the opportunity to validate Bob’s new key. So, theoretically, some third party – let’s call her Mallory – could somehow force Bob to go offline for a period of time and then pretend to be Bob with a new device. This would trick Alice’s copy of WhatsApp into re-encrypting the pending messages using Mallory’s key and sending them to Mallory. So, if you’re following along, what that means is that Mallory could potentially receive the pending messages for Bob. Not past messages. Just the pending ones, and potentially ones in the near future – at least until Bob comes back online.

This key change is part and parcel of how modern public key crypto messaging works. The only possible fault you can find here with WhatsApp is that they don’t (currently) enable changed key warnings by default and they don’t block re-sending of pending messages until the user (in this case Alice) reviews the new keys and approves the update (ie, satisfies herself that it’s really Bob who is sending the new key).
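
To see that trade-off in miniature, here’s a toy model of the two possible policies. The function names and data are mine, invented for illustration, not anything from WhatsApp’s actual code.

```python
known_keys = {"bob": "key-v1"}       # keys we've already seen (trust on first use)
pending = {"bob": ["msg1", "msg2"]}  # messages queued while Bob was offline

def send_encrypted(messages, key):
    print(f"re-encrypting {len(messages)} message(s) with {key} and sending")

def on_key_change(contact, new_key, block_until_verified=False):
    """Decide what to do with queued messages when a contact's key changes."""
    queued = pending.get(contact, [])
    if known_keys.get(contact) != new_key:
        if block_until_verified:
            # Security-first: warn the sender and hold the messages until she
            # verifies the new key out of band (e.g., by comparing safety numbers).
            print(f"WARNING: {contact}'s key changed; holding {len(queued)} message(s)")
            return
        known_keys[contact] = new_key  # convenience-first: accept the new key silently
    send_encrypted(pending.pop(contact, []), new_key)

on_key_change("bob", "key-v2", block_until_verified=True)  # warns, nothing sent yet
on_key_change("bob", "key-v2")                             # the default: just resends
```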

Is that a “backdoor”? No. Not even close. It was not maliciously and secretly implemented to allow surreptitious access by a third party. Furthermore, if Alice turns on the key change warning (a setting in WhatsApp), it would allow her to see when this happens – a big no-no when it comes to surveillance. Is it a vulnerability or bug? No, not really. It’s a design decision that favors convenience (just going ahead and re-sending the messages) over security (forcing Alice to re-authenticate a recipient every time they get a new device, reinstall WhatsApp, or whatever). You can argue about that decision, but you can’t really argue that it’s a bug – it’s a feature.

UPDATE: The EFF has an excellent article on this with a very similar description. However, it also mentions a new effort called Key Transparency by Google which looks promising.

Remove Profit from the Press

So now let’s return to the big picture. Online news sites produce free web content that we consume. But producing that content costs money. In today’s web economy, people just expect to get something for nothing, which makes it almost impossible for sites to rely on a subscription model for revenue – if you ask people to pay, they’ll just go to some other site that’s free. So they turn to the de facto web revenue model: advertising. The more people who view the ads on your web site, the more money you get. And therefore you do whatever you can to get people to CLICK THAT LINK – NOW!! (This is called click bait.) It’s the same influence that corrupted our TV news (“if it bleeds, it leads”).

Some things should just not be profit-driven. News – in particular, investigative journalism – is one of those things. The conflict of interest corrupts the enterprise. TV news used to be a loss leader for networks: you lost money on news with the hopes of building loyalty and keeping the viewers around for the shows that followed.

Maybe that ship has sailed and it’s naive to believe we can return to the days of Walter Cronkite or Edward R Murrow. So what are we to do? Here are some ideas (some of which came from this excellent article):

  1. Subscribe to local and national newspapers that are doing good work. If you don’t care to receive a physical paper, you can usually get an on-line or digital subscription.
  2. Give money to organizations that produce or support non-profit investigative journalism. You might look at ProPublica, Institute for Non-Profit News, The Investigative Fund, NPR, and PBS. This article also has some good ideas.
  3. Share news responsibly. Do not post sensationalistic news stories on your social media or forward hyper-partisan emails to everyone you know. Don’t spread fake news, and when you see someone else doing this, (respectfully) call them out. Not sure if a story is real? Try checking Snopes.com, Politifact, or FactCheck.org. This article also has some great general advice for spotting fake or exaggerated news.
  4. When you do share news stories, be sure to share the original source whenever possible. This gives credit where credit is due (including ad revenue). If you found a derivative story, you may have to search it for the link to the original source.
  5. Use ad-blockers. This may seem contrary to the above advice, but as I mentioned in this blog, right now the ad networks are being overly aggressive on tracking you around the web and are not policing their ads sufficiently to prevent malware. It’s not safe to blindly accept all ads. You can disable the ad-blocker on individual web sites that you wish to support – just be aware of the risk.


Ditch Yahoo. Use ProtonMail. [updated]

I’ve been a Yahoo Mail user for 19 years. My Yahoo user ID has only 4 characters in it. It’s been my public (read: spam) email address since 1997. I’m sure it’s the longest actively-used email account I’ve ever had. But now it’s time for me to move on. You should, too. Here’s why, and how…

How NOT To Handle Security

Yahoo announced recently that there was a massive breach in 2014 of many of its users’ accounts. While initial reports estimated 500 million users were compromised, it could actually be much worse. (If you haven’t changed your Yahoo password in the last two years, you should do so now.)

Password database breaches are going to happen. Security is hard and nothing is ever 100% secure. But we can and should judge a company by how seriously they take their users’ security and how they react when bad things happen.

While we’re pretty sure the breach occurred two years ago, it’s not clear yet that Yahoo knew about it before July of this year. However, Yahoo didn’t tell anyone about it until after the story broke elsewhere, two months later. It’s also been reported that Yahoo execs had a policy of not forcing users to reset passwords after a data breach because they didn’t want to lose customers. It’s also obvious that Yahoo prioritized shiny new features over security and privacy.

The Last Straw

That’s all pretty bad, but it gets worse. In a separate report shortly after this breach was announced, it was revealed that Yahoo allowed and perhaps helped the NSA or FBI to build a real-time email search program for all its customers, enabling mass surveillance in a way that was previously unprecedented.

Either of these scandals alone would be unacceptable, and should give any Yahoo user a valid reason to abandon their services – but taken together, it almost mandates it. This is a clear case where we, as consumers, need to show Yahoo that this is not acceptable, and do it in a way they will understand: close your Yahoo account and move to another service.

Ditch Yahoo

I’m not going to lie… if you actually use your Yahoo account (like I do), this is not going to be fun or easy. But if you really care about your security, and security in general, you need to let Yahoo (and the other service providers) know that you take these horrendous security failures seriously. To do that, you have to hit them where it hurts: money. In your case, that means abandoning their services. Ditching Yahoo will not only make you safer, it will hopefully drive other service providers to improve their own security – which helps everyone.

I would say that you have at least three levels of options here, in increasing order of effectiveness (in terms of protesting Yahoo’s behavior):

  1. Stop using Yahoo email and all its other services
  2. Archive your Yahoo email locally and delete everything from their servers
  3. Delete your Yahoo account entirely

To stop using your Yahoo email, you will need to change everywhere you used your Yahoo email account and migrate to a new email service. LifeHacker has some tips that will help, but read through the rest of this article before choosing your new email provider.

To really rid yourself of Yahoo completely, you also need to abandon all their services: Flickr, Tumblr, fantasy sports, Yahoo groups, Yahoo messenger, and any of the dozens of other services.

Your next step is to archive all your old Yahoo email. These emails may contain valuable info that you’ll some day need to find: important correspondence, account setup/recovery info for other web sites, records of purchases, etc. If you’ve used an email application on your computer to access Yahoo (like Outlook or the Mail app on Mac OS), you should already have all your emails downloaded to your computer. But you might also want to consider an email archiving application: Windows users should look at MailStore Home (free); Mac users might look at MailSteward (ranges from free to $99).

Once you’ve safely archived everything, you should delete all your emails from Yahoo’s servers. Why? Well, if nothing else, it should prevent successful hackers from perusing your emails for info they could use against you (identity theft, for example). Assuming Yahoo actually deletes these emails, it may also keep Yahoo (or the government) from digging through that info.

You should reset your Yahoo password to a really strong password (use a password manager like LastPass). I would highly recommend setting up two-factor authentication, as well.
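
If you’re wondering what a “really strong” password looks like, it’s basically a long random string, which is exactly what a password manager will generate for you. Here’s a rough Python sketch of the idea (the character set and length are just reasonable examples):

```python
import secrets
import string

def strong_password(length=20):
    """Pick each character at random from a large alphabet, like a password manager does."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(strong_password())  # different (and effectively unguessable) every run
```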

As a final step, you can completely close your Yahoo account. Note that this may not actually delete all your data. Yahoo probably retains the right to save it all. But this is the best you can do.

If you find that you are just too invested in Yahoo to completely abandon your email account (and I’ll admit I may be in that camp), you can set up email forwarding. This will send all of your incoming Yahoo email to a different account. (It’s worth mentioning that it looks like Yahoo tried to disable this feature recently, probably in an effort to prevent the loss of users.)

Use ProtonMail

While GMail and Outlook are two popular and free email providers, you should take a hard look at newer, more security- and privacy-conscious services. I would personally recommend ProtonMail. They have a nice free tier of service that includes web access and smartphone apps for iPhone and Android. If nothing else, grab your free account now to lock in a good user name before all the good ones are taken. Tell your friends to do the same. Just adding new free users will help the cause, even if the accounts aren’t used much.

But I’d like to ask you to go one step further: I encourage you strongly to sign up for one of their paid tiers of service, even if you don’t need the added features. The only way we’re going to force other service providers to take notice and to drive change is to put our money where our mouths are. Until it becomes clear that people are willing to pay for privacy and security, we’ll be stuck with all the ‘free’ services that are paid for with our personal info and where security is an afterthought.

Update Dec 14 2016:

Yahoo has just announced another breach, this time over 1 billion accounts hacked (maybe more). DITCH YAHOO!!


(This article is adapted from a few of my previous weekly security newsletter articles.)

The Pros & Cons of Anti-Virus Software

When most people think of protecting their computers, they think of anti-virus (AV) software. Viruses are a real problem, of course, but how well do AV apps protect you? And are there any downsides to using AV software?

In older times, AV software was essential and generally did a good job at finding malware on your computer. Generally speaking, the core function of AV software is to recognize known malware and automatically quarantine the offending software. Some AV software is smart enough to use heuristic algorithms to recognize malware that is similar to the stuff it already knows is bad, or recognize suspicious behavior in general and flag it as potentially harmful. A popular new feature for a lot of AV software is to monitor your web traffic directly, trying to prevent you from going to malicious web sites or from downloading harmful software.
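
At its simplest, the “recognize known malware” part boils down to comparing file fingerprints against a database of known-bad ones. Here’s a bare-bones illustration in Python; the hash list and folder are placeholders, and real AV engines do far more than this.

```python
import hashlib
from pathlib import Path

# Placeholder "signature database": SHA-256 fingerprints of known-bad files.
# A real engine ships millions of these, updated constantly.
KNOWN_BAD_HASHES = {
    "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
}

def looks_malicious(path):
    """Flag a file if its fingerprint matches a known-malware signature."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_HASHES

downloads = Path.home() / "Downloads"
if downloads.exists():
    for f in downloads.iterdir():
        if f.is_file() and looks_malicious(f):
            print(f"quarantine candidate: {f}")
```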

That all sounds good, but the devil (as always) is in the details. Firstly, in the ever-connected world of the Internet, malicious software is produced so frequently and is modified so quickly that it’s really hard for AV software to keep a relevant list of known viruses. Also, the bad guys have moved to other techniques like phishing and fake or hacked web sites to get your information – attacking the true weakest link: you. AV software just isn’t as effective as it used to be.

But the problem is much worse than that. In many cases, the AV software itself is providing bugs for hackers to exploit. Recently, Symantec/Norton products were found to have horrendous security flaws (which they claim to have since fixed). Increasingly, AV products are offering to monitor your web traffic directly, but this means inserting themselves into all of your encrypted (HTTPS) communications, which has all sorts of ugly security and privacy implications (see Superfish and PrivDog as examples).

So… what are we to do? My recommendation (Tip #23 from my book) is to install basic, free anti-virus software. There are still plenty of old exploits out there that hackers will always try, and AV software will help defend you against these. But I don’t believe that the for-pay AV software is really worth it – and many of them may do more harm than good.

For PC users, I highly recommend Microsoft’s Windows Defender (or Security Essentials for older PCs). For Mac, I would go with Avira or Sophos Home. Be sure to completely uninstall any other AV software you might have before trying to install new AV software. I don’t believe any of these programs will offer to monitor live web traffic, but if they do, I would NOT enable this feature. The security implications of doing this incorrectly are horrendous.

At the end of the day, your best protection is to follow basic safe-surfing practices:

  1. Don’t click on links or attachments in emails unless you specifically requested them.
  2. Be wary of anything that sounds too good (or too bad) to be true. If you get a scary email about one of your accounts, log into your account by manually typing the web address or use a favorite/bookmark (do NOT use any links provided!) and look for alerts there. You can also search snopes.com to check for known hoaxes and scams.
  3. Use unique, strong passwords for each of your web accounts. Use a password manager like LastPass to generate and manage those passwords.
  4. Keep your operating system and apps up to date. This includes smartphones and tablets.
  5. Back up all your files.
  6. Use an ad-blocker. Unfortunately, bad guys are slipping malware into ad networks. I use both uBlock Origin and Privacy Badger.

Our Insecure Democracy

I happen to be a rather political person, but I try to keep my politics out of my work in the security and privacy arena because these issues must transcend politics. Our democracy in many ways depends on some basic level of computer security and personal privacy. In no place is this more obvious than the security and privacy of the voting booth.

With the 2016 US election fast approaching, it’s important to call attention to the sorry state of affairs that is the US voting infrastructure. There are plenty of other problems with the US election system, but there’s hardly anything more fundamental to our democracy than the method by which we vote. (I’ll be focusing on the US election system, but these principles should apply to any democratic voting system.)

At the end of the day, the basic requirements are as follows (adapted from this paper):

  1. Every eligible voter must be able to vote.
  2. A voter may vote (at most) one time.
  3. Each vote is completely secret.
  4. All voting results must be verifiable.

The first requirement may seem obvious, but in this country it’s far from guaranteed. For many reasons, many eligible and willing voters either cannot vote or face serious obstacles to voting: inability to get registered, lack of proper ID, lack of nearby voting sites, lack of transportation, hours-long waits at polling places, inability to get out of work, and so on. Voting should be as effortless as possible. Why do we vote on a Tuesday? We should vote on the weekend (Saturday and Sunday). People who work weekends should be given as much paid time off as they need to vote. We should also have early voting and support absentee voting.

The second requirement has become a hot-button political issue in this country, though in reality, in-person voter fraud has been proven again and again to be effectively non-existent. We’ve got this covered, folks. We don’t need voter ID laws and other restrictions – they’re fixes for a problem that doesn’t exist, and they end up preventing way more valid voters from voting than allowing invalid voters to vote (see requirement #1).

Now we get to the meat of the matter, at least in terms of security and privacy. The third requirement is that every vote is completely secret. Most people believe this is about protecting your privacy – and to some extent, this is true. You should always be able to vote your conscience without worrying how your boss, your friends, or your spouse would react. You should be able to tell them or not, lie or tell the truth – there should be no way for them to know. However, the real reason for a secret ballot is to prevent people from selling their vote and to prevent voter intimidation. If there is no way to prove to someone how you voted, then that vote can’t be verifiably bought or coerced. I think we had this pretty well figured out until smartphones came along. What’s to prevent you from taking a picture of your ballot? Depending on what state you live in, it may be a crime – but as a practical matter, it would be difficult to catch people doing this. However, I’m guessing this isn’t a big problem in our country – at least not yet.

Which brings us to the fourth and final requirement: verifiability. This is really where the current US voting system falls flat. In many states, we have voting systems that are extremely easy to hack and/or impossible to verify. We live in the era of constantly connected smartphones and tablets – a touchscreen voting system just seems like a no-brainer. But many electronic voting systems leave no paper trail – no hard copy of your vote that you can see, touch, and verify, let alone one that the people actually counting and reporting the vote tallies can check. The electronic records could be compromised, either due to a glitch or malicious tampering, and you probably wouldn’t even know that it happened.

But regardless of how you enter your vote, every single vote placed by a voter must generate a physical, verifiable record. That may seem wasteful in this digital age, but it’s the only way. There must be some sort of hard copy receipt that the voter can verify and turn in before leaving the polling place. Those hard copy records must be kept 100% safe from tampering – no thefts, no ballot box stuffing, no alterations. And every single election result should include a statistical integrity audit – that is, a sampling of the paper ballots must be manually counted to make sure the paper results match the electronic ones. If there is any reason to doubt the electronic results, you must be able to do a complete manual recount. That’s the key.
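
To illustrate what a statistical integrity audit might look like, here’s a toy simulation in Python. The ballot counts and sample size are arbitrary; real post-election audits (such as risk-limiting audits) choose sample sizes with proper statistics.

```python
import random

# Toy election: 10,000 ballots, each recorded electronically and on paper.
electronic = {ballot_id: random.choice(["A", "B"]) for ballot_id in range(10_000)}
paper = dict(electronic)                        # in a clean election the records match
paper[42] = "A" if paper[42] == "B" else "B"    # simulate one tampered/glitched record

# Hand-count a random sample of paper ballots and compare to the electronic record.
sample = random.sample(sorted(paper), k=300)
mismatches = [b for b in sample if paper[b] != electronic[b]]

if mismatches:
    print(f"{len(mismatches)} discrepancies in the sample: escalate to a full hand recount")
else:
    print("sample matches the electronic tally: results corroborated")
```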

Unfortunately, according to that same MIT paper, we have a hodge-podge of voting systems across the country, many of which have at least some areas where they use electronic voting systems (Direct-Recording Electronic, or DRE) without a paper trail (Voter Verified Paper Audit Trail, or VVPAT).

This map pretty much says it all to me. It’s time that we adopt national standards for our voting infrastructure. You can leave it up to each state to implement, if you’re a real “states’ rights” type, but honestly I think we should just hand this over to the Federal Election Commission and have a single, rock solid, professionally-vetted, completely transparent, not-for-profit, non-partisan voting system. Of course, we’d need to revamp the current FEC – give it the budget, independence and expertise it needs to do its job effectively. It should be staffed with non-political commissioners (never elected to office and no direct party affiliation) and they should be completely free from political and financial influence. This is much easier said than done, but if we can just agree that our democracy is more important than any party or ideology, just long enough to do this, then maybe we can make it happen. Of course, there’s no way any of this will happen before this year’s elections, but we should be able to get this in place for 2018 if we start now.

What can YOU do? As always, get educated and get involved. Write your congressperson and vote for people who have vowed to reform our election and voting systems. If nothing else, give money to organizations that are doing the right things, and ask your friends and family to do the same. I’ve given some examples below for you to consider. Note that it’s very hard to find completely unbiased organizations because these issues have been so politicized and our country right now is very polarized. But whatever your political leanings, you can’t have a true democracy if you can’t have fair, open, and verifiable elections.

If you’re interested, here are a couple more good articles to check out.

UPDATE: Another interesting story on the security of our voting system.

Apple vs the FBI

I’ve been waiting to comment on this because more information seems to be coming out every day. Also, there has been so much written about this already that I wasn’t sure what I would have to add. But I’m not being hyperbolic when I say that this is a pivotal moment in our democracy, so I couldn’t just ignore it. One thing I haven’t seen is a good summary of what’s really going on here, so let’s start with that.

Just the Facts: What’s Really Going On Here?

First, let’s establish what’s really going on here, because it’s been very muddy. The FBI recovered an iPhone 5c that was used by one of the shooters in the San Bernardino attacks last year. This phone was issued to the shooter by his employer, and therefore was not his private cell phone – meaning that the data on that phone was technically not private. Nevertheless, that data was encrypted by default because that’s how Apple sets up every modern iPhone. The FBI believes there may be information on that iPhone that could help them find other co-conspirators or uncover clues to some future plot.

This phone was backed up using Apple’s iCloud service, and it’s worth noting that Apple was willing and able to provide the FBI with the backed up data. However, for some reason, the backups to iCloud stopped about 6 weeks before the shooting – so the FBI wants to get to the data on the device itself to see what’s missing. Due to some sort of screw-up, the FBI instructed the local law enforcement to change the user’s iCloud password, which prevented it from doing another backup. If they had taken the device to a Wi-Fi network known to that device, the device might have backed up on its own, and then the FBI would have had the 6 weeks’ worth of data that was missing. But because the password was changed, we’ll never know.

The FBI is not asking Apple to break the encryption on the phone. That’s actually not possible. Encryption works. When done right, it can’t be broken. However, if you can unlock the device, then you can get to all the data on it. Unlocking the device means entering a PIN or password on the home screen – it could be as simple as a 4-digit number, meaning there are only 10,000 possible codes. With a computer-assisted “guesser”, it would be trivial to go through all 10,000 options till you found the right one to unlock the phone.

To combat this “brute force” attack, Apple added some roadblocks. First, it restricted how often you could try a new number – taking progressively longer between guesses, from minutes up to a full hour. That would make guessing even 10,000 options take a very long time. Second, Apple gave the user the option to completely erase the device if someone entered an incorrect password ten times in a row. This feature is not enabled by default, but it is easy to turn on (and I highly recommend that everyone do this).

The FBI is basically asking Apple to create a new, custom version of its iPhone operating system (iOS) that disables these two features and allows a connected computer to input its guesses electronically (so that a human wouldn’t have to try them all by hand). This would allow the FBI to quickly and automatically guess all possible PIN codes until the phone was unlocked. It’s not breaking the encryption, it’s allowing the FBI to “brute force” the password/PIN guessing. It’s not cracking the safe, it’s allowing a robot to quickly try every possible safe combination till it opens.
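
To put that in perspective, here’s a trivial sketch of the “robot” in Python, with a stand-in `try_pin` function since nothing here talks to a real iPhone:

```python
import time

def try_pin(pin):
    """Hypothetical stand-in for submitting a guess to the locked device."""
    return pin == "7294"  # the 'secret' PIN in this toy example

def brute_force(delay_seconds=0.0):
    for n in range(10_000):            # every code from 0000 to 9999
        guess = f"{n:04d}"
        if try_pin(guess):
            return guess
        time.sleep(delay_seconds)      # Apple's escalating delays would slot in here

print(brute_force())  # with no delay, this finishes almost instantly
# With delays that grow toward an hour per guess, the same search takes months,
# and the optional erase-after-10-failures setting ends the game entirely.
```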

That’s just a thumbnail sketch, but I felt it was necessary background. This article from the EFF goes into a lot more depth and answers some excellent questions. If you’d like to know more, I encourage you to read it.

Why Is This Case So Important?

Both the FBI and Apple are putting heavy spin on this issue. The FBI has always disliked Apple’s stance on customer privacy (encrypting iPhone data by default) and picked this terrorism case to invoke maximum public sympathy. Apple is using this opportunity to extol its commitment to protecting its customers’ private data, particularly as compared to Android (which does not encrypt data by default). Despite what the FBI claims, this is not about a single iPhone and a single case; despite what Apple claims, creating this software is not really comparable to creating “software cancer”. We have to try to set all of that aside and look at the bigger picture. This is not a black and white situation – these situations rarely are. However, the implications are enormous and the precedent set here will have far-reaching effects on our society.

In this country, we have the Fourth Amendment, which prevents unreasonable search and seizure and basically says that you need a warrant from a judge if you want to breach our right to privacy. In this case, the FBI has done its job in this regard. And it’s technically feasible for Apple to create a special, one-time version of its iOS that would allow the FBI to unlock this one iPhone – and this special software would not run on any other iPhone. This is due to a process called “signing”, which is another wonderful application of cryptographic techniques. So in this sense, it’s not a cancer – this special software load can’t be used on other devices. However, if Apple does this once, it can do it again, and there are already many other iPhones waiting at the FBI and in New York that will be next in line for this treatment. There is no doubt that this will set a precedent and will open the floodgates for more such requests – not just from US law enforcement, but from repressive regimes around the globe. Furthermore, the very existence of such a tool, even though guarded heavily within Apple’s walls, will be a massive target for spy agencies and hackers around the globe.

So the issue is much deeper than simply satisfying a valid warrant (even without all the arcane All Writs Act stuff from the late 1700’s that the FBI claims should compel Apple – a third party – to help them satisfy this warrant.) The outcome of this case will have severe implications for privacy in general – and that’s why Apple is fighting back.

My Two Cents

I’ve read a lot of good articles on this issue, and I’ll point you to a couple of them shortly. But the bottom line is that we, as a society, need to figure out how we handle privacy in the age of digital communications and ubiquitous monitoring. Like it or not, you are surrounded by cameras and microphones, and it’s getting worse rapidly. You carry with you a single device that can simultaneously record video and audio, track your position anywhere on the planet, track many of your friends and family, record your physical movement, and store your personal health and financial data, as well as untold amounts of other personal information. That device is your smartphone. That one device probably has more information about you than any other single thing you own. Beyond that, all of our communications are now digital and can therefore be perfectly preserved forever. And in the grand scheme of things, any person or group of people that can gain surreptitious access to this information – regardless of their intentions – will have unimaginable power over us. This was not envisioned by the Founding Fathers – we’re in new territory here.

It’s long since time that we have an informed, open and frank discussion – as a nation – about how we balance the need for basic human privacy versus the need for discovery in the pursuit of safety. It’s also about targeted surveillance versus mass surveillance, and creating an open, transparent system of checks and balances to govern both. If nothing else, I hope this case leads to a more informed public and some rational, thoughtful debate that thinks about the broader issues here – not just this one, highly-emotional case.

As promised, here are some links with some excellent info and perspectives around these topics:

Here are some links to more general but related topics:


On the Ethics of Ad-Blocking

As the saying goes, if you’re not paying for the product, then you are the product. The business model for most of the Internet revolves around advertising – which in and of itself is not a bad thing. It may be an annoying thing, but passive advertising isn’t actually harmful. Passive advertising is placing ads where people can see them. And savvy marketers will place their ads in places where their target audiences tend to spend their time. If you’re targeting middle-aged men, you might buy ad space on fantasy football or NASCAR web sites, for example. If you’re targeting tween girls, you might buy ad space on any site that might feature something about Taylor Swift or Justin Bieber. And if it stopped there, I don’t think many of us would object – or at least have solid grounds for objection. After all, this advertising is paying for the content we’re consuming. Producing the content costs money – so someone has to pay for it or the content goes away.

Unfortunately, online marketing didn’t stop there. On the web, competition for your limited attention has gotten fierce – with multiple ads on a single page, marketers need you to somehow focus on their ad over the others. And being on the Internet (and not a printed page), advertisers are able to do a lot more to grab your attention. Instead of simple pictures, ads can pop up, pop under, flash, move around, or float over the articles you’re trying to read. Worse yet, ad companies want to be able to prove to their customers that they were reaching the right people and that those people were buying their product – because this makes their ad services far more valuable, meaning they can charge more for the ads.

Enter the era of “active advertising”. It has now become very hard to avoid or ignore web page and mobile ads. Worse yet, the code that displays those ads is tracking where you go and what you buy, building up profiles on you and selling those profiles to marketers without your consent (and without most people even realizing it). Furthermore, those ads use precious data on cell phones and take a lot of extra time to download regardless of what type of device you use. And if that weren’t bad enough, ad software has become so powerful, and ad networks so ubiquitous and so commoditized, that bad guys are now using ad networks to distribute “malware” (bad software, like viruses). It’s even spawned a new term: malvertising.

Over the years, browsers have given users the tools they need to tame some of these abuses, either directly in the browser or via add-ons. It’s been a cat-and-mouse game: when users find a way to avoid one tactic, advertisers switch to a new one. The most recent tool in this toolbox is the ad-blocker. These plugins allow the user to completely block most web ads. Unfortunately, there’s really no way for ad blockers to sort out “good” advertising from “bad” advertising. AdBlock Plus (one of the most popular ad-blockers) has attempted to address this with their acceptable ads policy, but it’s still not perfect.
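
Mechanically, an ad-blocker is mostly a request filter: before the browser fetches a resource, the plugin checks it against its filter lists and simply drops the ones that match. Here’s a heavily simplified Python sketch of that matching step; the two-entry blocklist is just for illustration, while real filter lists like EasyList contain tens of thousands of rules.

```python
from urllib.parse import urlsplit

BLOCKED_DOMAINS = {"doubleclick.net", "ads.example"}  # tiny illustrative blocklist

def should_block(request_url):
    """Block the request if its host is (or is inside) a blocklisted domain."""
    host = urlsplit(request_url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(should_block("https://ad.doubleclick.net/banner.js"))  # True: never even fetched
print(should_block("https://www.wired.com/latest-article"))  # False: loads normally
```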

But many web content providers need that ad revenue to stay afloat. Last week, Wired Magazine announced that they will begin to block people that use ad-blockers on their web site. You will either need to add Wired.com to your “whitelist” (allowing them to show you ads) or pay them $1 per week. They state clearly that they need that ad revenue to provide their content, and so they need to make sure that if you’re going to consume that content that you are paying for it – either directly ($1/week) or indirectly (via ad revenue).

So… what’s the answer here? As always, it’s not black and white. Below is my personal opinion, as things stand right now.

I fully understand that web sites need revenue to pay their bills. However, the business model they have chosen is ad-supported content, and unfortunately the ad industry has gotten over-zealous in the competition for eyeballs. In the process of seeking to make more money and differentiate their services, they’re killing the golden goose. Given the abusive and annoying advertising practices, the relentless and surreptitious tracking of our web habits, the buying and selling of our profiles without our consent, and the lax policing that allows malware into ads, I believe that the ad industry only has itself to blame here. We have every reason to mistrust them and every right to protect ourselves. Therefore, I think that people are fully justified in the use of ad-blockers.

That said, Wired (and other web sites) also have the right to refuse to let us see their content if we refuse to either view their ads or pay them money. However, I think in the end they will find that people will just stop coming to their web sites if they do this. (It’s worth noting that some sites do well with voluntary donations, like Wikipedia.) Therefore, something has to change here. Ideally, the ad industry will realize that they’ve gone too far, that they must stop tracking our online pursuits and stop trafficking in highly personal information without our consent.

The bottom line is that the ad industry has itself to blame here. They’ve alienated users and they’re going to kill the business model for most of the Internet. They must earn back our trust, and that won’t be easy. Until they do, I think it’s perfectly ethical (and frankly safer) to use ad-blocking and anti-tracking tools.

Below are some of my favorite plugins. Each browser has a different method for finding and installing add-ons. You can find help here: Firefox, Safari, Internet Explorer, Chrome.

  • uBlock Origin – ad-blocker
  • Privacy Badger – anti-tracking plugin
  • HTTPS Everywhere – forces secure connections whenever possible
  • Better Privacy – another privacy plugin, slightly different from Privacy Badger

If you would like to get more involved, you might consider contributing to the Electronic Frontier Foundation.