Beware Hype and Click-Bait

(It’s been a while since I’ve written a full blog post. I’ve been putting most of my efforts into my weekly newsletter – be sure to subscribe to get weekly tips and news on cyber security and online privacy.)

Headline Hyperbole

This week, we saw the following headline from The Guardian: “WhatsApp vulnerability allows snooping on encrypted messages”. This story was immediately picked up by just about every other major tech news web site, with headlines that were even more dire:

  • A critical flaw (possibly a deliberate backdoor) allows for decryption of Whatsapp messages (BoingBoing)
  • WhatsApp Apparently Has a Dangerous Backdoor (Fortune)
  • WhatsApp encrypted messages can reportedly be intercepted through a security backdoor (Business Insider)

I swear there were others from big-name sites, but I can’t find them – I think they’ve been deleted or updated. Why? Because this story (like so many others) was completely overblown.

Which brings us to the point of this article: our online news is broken. It’s broken for much the same reasons that the media is broken in the US in general – it’s all driven by advertising dollars, and ad dollars are driven by clicks and eyeballs. (See also: On the Ethics of Ad-Blocking). But the problem is even more insidious when applied to the news because all the hyperbolic headlines and dire warnings are making it very hard to figure out which problems are real – and over time, like the boy who cried wolf, it desensitizes us all.

WhatsUp?

Let’s take this WhatsApp story as an example. The vague headline from The Guardian implies that WhatsApp is fatally flawed. And the other headlines above are even worse, trotting out the dreaded and highly-loaded term “backdoor”. Backdoor implies that someone at WhatsApp or Facebook (who bought WhatsApp) has deliberately created a vulnerability with the express purpose of allowing a third party to bypass message encryption whenever they wish and read your private communications.

The first few paragraphs from the article seem to confirm this. Some excerpts:

  • “A security vulnerability that can be used to allow Facebook and others to intercept and read encrypted messages has been found within its WhatsApp messaging service.”
  • “Privacy campaigners said the vulnerability is a ‘huge threat to freedom of speech’ and warned it could be used by government agencies as a backdoor to snoop on users who believe their messages to be secure.”
  • “If WhatsApp is asked by a government agency to disclose its messaging records, it can effectively grant access”

Now let’s talk about what’s really going on here. It’s a little technical, so bear with me.

The Devil In The Details

Modern digital communications use what’s called public key encryption. Unlike private key systems (which have a single, shared key to both encrypt and decrypt data), public key systems use two keys:

  1. Public key: Freely given to everyone, allows a sender to encrypt a message
  2. Private key: Fiercely protected and never shared, used to decrypt received messages that were encrypted with the public key

If you had a single, shared key, then you would have to find some secure way to get a copy of that key to your intended message recipient. You can’t just email or text it, or even speak it over the phone – that could be intercepted. The public key system allows you to broadcast your public key to the world, allowing anyone to send you an encrypted message that only you can decrypt, using your closely-guarded private key. In this same fashion, you use the other person’s public key to respond. This is insanely clever and it’s the basis for our secure web.
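
Here’s a minimal sketch of that idea in Python, using the third-party “cryptography” package. This is just the textbook concept – not WhatsApp’s actual protocol, which is built on the Signal protocol – but it shows the key point: anyone can encrypt with the public key, and only the holder of the private key can decrypt.

```python
# A minimal, conceptual demo of public key encryption using the third-party
# "cryptography" package (pip install cryptography). This is the textbook idea
# only -- not how WhatsApp/Signal actually implement messaging.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Bob generates a key pair: he publishes the public key and guards the private key.
bob_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public_key = bob_private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Alice encrypts with Bob's *public* key...
ciphertext = bob_public_key.encrypt(b"Lunch at noon?", oaep)

# ...and only Bob's *private* key can turn it back into the original message.
print(bob_private_key.decrypt(ciphertext, oaep))  # b'Lunch at noon?'
```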

As is usually the case, the devil is in the details when it comes to crypto systems. The underlying math is solid and the algorithms have been rigorously tested. Where these systems break down is in the implementation. You can have an unbreakable deadbolt on your front door, but if you leave the key under your door mat or there’s a window right next to the lock on the door that can be broken… you get the idea.

Here’s the problem with how WhatsApp implemented its encryption. The app generates public and private keys for you on the fly and exchanges public keys with the person you’re communicating with – all in the background, without bothering you. That’s fine – so far, so good. But let’s say Alice sends a message to Bob while Bob is offline. WhatsApp on Alice’s phone has used Bob’s last known public key to encrypt those messages, and they sit waiting (either on Alice’s phone or perhaps on WhatsApp’s servers) to be delivered when Bob comes back online. In the meantime, Bob drops his phone in the toilet. He buys a new phone, reinstalls WhatsApp, and WhatsApp is forced to generate a new public/private key pair. When Bob comes online, Alice’s copy of WhatsApp discovers that the public key it has for him is no longer valid. And here’s where things fall apart: WhatsApp simply fetches Bob’s new public key, re-encrypts the pending messages with it, and re-sends them.

Bug or Feature?

That’s it. That was the fatal flaw. The “backdoor”. Did you catch it?

If you missed it, don’t feel bad. This stuff is complicated and hard to get right. The problem is that Alice was not warned of the key change and (crucially) was not given the opportunity to validate Bob’s new key. So, theoretically, some third party – let’s call her Mallory – could somehow force Bob offline for a period of time and then pretend to be Bob with a new device. This would trick Alice’s copy of WhatsApp into re-encrypting the pending messages with Mallory’s key and sending them to Mallory. In other words, Mallory could potentially receive Bob’s pending messages. Not past messages – just the pending ones, and potentially ones in the near future, at least until Bob comes back online.

This key change is part and parcel of how modern public key crypto messaging works. The only real fault you can find with WhatsApp here is that it doesn’t (currently) enable key change warnings by default, and it doesn’t block re-sending of the pending messages until the user (in this case Alice) reviews the new key and approves the change (i.e., satisfies herself that it’s really Bob on the other end).
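
To make that design trade-off concrete, here’s a toy sketch in Python. Everything here is invented for illustration – the function names, the string “keys”, the fake encryption – so it’s nothing like WhatsApp’s real code, but it shows the two possible behaviors when a recipient’s key changes while messages are queued:

```python
# Toy model of the design decision described above. Every name and "key" here
# is invented; real messaging apps are far more involved than this.

pending_messages = ["Running late", "See you at 8"]   # queued while Bob was offline
verified_key_for_bob = "BOB-KEY-1"                    # the key Alice's app last saw

def encrypt(message: str, key: str) -> str:
    return f"<{message!r} encrypted for {key}>"       # stand-in for real encryption

def deliver_pending(bobs_current_key: str, warn_on_key_change: bool = False) -> list:
    """Send queued messages once the recipient comes back online."""
    if bobs_current_key != verified_key_for_bob and warn_on_key_change:
        # Security-first behavior: warn Alice and hold the messages until she
        # verifies that the new key really belongs to Bob.
        print(f"WARNING: key changed ({verified_key_for_bob} -> {bobs_current_key}); holding messages")
        return []
    # Convenience-first behavior (the default described above): silently
    # re-encrypt for whatever key is presented and send.
    return [encrypt(m, bobs_current_key) for m in pending_messages]

print(deliver_pending("BOB-KEY-2"))                           # silently re-sent
print(deliver_pending("BOB-KEY-2", warn_on_key_change=True))  # held for verification
```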

Is that a “backdoor”? No. Not even close. It was not maliciously and secretly implemented to allow surreptitious access by a third party. Furthermore, if Alice turns on the key change warning (a setting in WhatsApp), she will see whenever this happens – a big no-no if the goal were covert surveillance. Is it a vulnerability or bug? No, not really. It’s a design decision that favors convenience (just going ahead and re-sending the messages) over security (forcing Alice to re-verify a recipient every time they get a new device, reinstall WhatsApp, or whatever). You can argue with that decision, but you can’t really argue that it’s a bug – it’s a feature.

UPDATE: The EFF has an excellent article on this with a very similar description. However, it also mentions a new effort called Key Transparency by Google which looks promising.

Remove Profit from the Press

So now let’s return to the big picture. Online news sites produce free web content that we consume. But producing that content costs money. In today’s web economy, people just expect to get something for nothing, which makes it almost impossible for sites to rely on a subscription model for revenue – if you ask people to pay, they’ll just go to some other site that’s free. So they turn to the de facto web revenue model: advertising. The more people who view the ads on your web site, the more money you get. And therefore you do whatever you can to get people to CLICK THAT LINK – NOW!! (This is called click bait.) It’s the same influence that corrupted our TV news (“if it bleeds, it leads”).

Some things should just not be profit-driven. News – in particular, investigative journalism – is one of those things. The conflict of interest corrupts the enterprise. TV news used to be a loss leader for networks: you lost money on news with the hopes of building loyalty and keeping the viewers around for the shows that followed.

Maybe that ship has sailed and it’s naive to believe we can return to the days of Walter Cronkite or Edward R Murrow. So what are we to do? Here are some ideas (some of which came from this excellent article):

  1. Subscribe to local and national newspapers that are doing good work. If you don’t care to receive a physical paper, you can usually get an online or digital-only subscription.
  2. Give money to organizations that produce or support non-profit investigative journalism. You might look at ProPublica, Institute for Non-Profit News, The Investigative Fund, NPR, and PBS. This article also has some good ideas.
  3. Share news responsibly. Do not post sensationalistic news stories on your social media or forward hyper-partisan emails to everyone you know. Don’t spread fake news, and when you see someone else doing this, (respectfully) call them out. Not sure if a story is real? Try checking Snopes.com, Politifact, or FactCheck.org. This article also has some great general advice for spotting fake or exaggerated news.
  4. When you do share news stories, be sure to share the original source whenever possible. This gives credit where credit is due (including ad revenue). If you found a derivative story, you may have to search it for the link to the original source.
  5. Use ad-blockers. This may seem contrary to the above advice, but as I mentioned in this blog, right now the ad networks are being overly aggressive on tracking you around the web and are not policing their ads sufficiently to prevent malware. It’s not safe to blindly accept all ads. You can disable the ad-blocker on individual web sites that you wish to support – just be aware of the risk.

 

Ditch Yahoo. Use ProtonMail. [updated]

I’ve been a Yahoo Mail user for 19 years. My Yahoo user ID has only 4 characters in it. It’s been my public (read spam) email address since 1997. I’m sure it’s the longest actively-used email account I’ve ever had. But now it’s time for me to move on. You should, too. Here’s why, and how…

How NOT To Handle Security

Yahoo announced recently that there was a massive breach in 2014 of many of its users’ accounts. While initial reports estimated 500 million users were compromised, it could actually be much worse. (If you haven’t changed your Yahoo password in the last two years, you should do so now.)

Password database breaches are going to happen. Security is hard and nothing is ever 100% secure. But we can and should judge a company by how seriously they take their users’ security and how they react when bad things happen.

While we’re pretty sure the breach occurred two years ago, it’s not clear whether Yahoo knew about it before July of this year. Either way, Yahoo didn’t tell anyone until after the story broke elsewhere, two months later. It’s also been reported that Yahoo execs had a policy of not forcing users to reset passwords after a data breach because they didn’t want to lose customers. And it’s obvious that Yahoo prioritized shiny new features over security and privacy.

The Last Straw

That’s all pretty bad, but it gets worse. In a separate report shortly after this breach was announced, it was revealed that Yahoo allowed and perhaps helped the NSA or FBI to build a real-time email search program for all its customers, enabling mass surveillance in a way that was previously unprecedented.

Either of these scandals alone would be unacceptable, and should give any Yahoo user a valid reason to abandon their services – but taken together, it almost mandates it. This is a clear case where we, as consumers, need to show Yahoo that this is not acceptable, and do it in a way they will understand: close your Yahoo account and move to another service.

Ditch Yahoo

I’m not going to lie: if you actually use your Yahoo account (like I do), this is not going to be fun or easy. But if you really care about your security, and security in general, you need to let Yahoo (and other service providers) know that you take these horrendous security failures seriously. To do that, you have to hit them where it hurts: money. In this case, that means abandoning their services. Ditching Yahoo will not only make you safer, it will hopefully push other service providers to improve their own security – which helps everyone.

I would say that you have at least three levels of options here, in increasing order of effectiveness (in terms of protesting Yahoo’s behavior):

  1. Stop using Yahoo email and all its other services
  2. Archive your Yahoo email locally and delete everything from their servers
  3. Delete your Yahoo account entirely

To stop using your Yahoo email, you will need to change everywhere you used your Yahoo email account and migrate to a new email service. LifeHacker has some tips that will help, but read through the rest of this article before choosing your new email provider.

To really rid yourself of Yahoo completely, you also need to abandon all of its other services: Flickr, Tumblr, fantasy sports, Yahoo Groups, Yahoo Messenger, and any of the dozens of others.

Your next step is to archive all your old Yahoo email. These emails may contain valuable info that you’ll some day need to find: important correspondence, account setup/recovery info for other web sites, records of purchases, etc. If you’ve used an email application on your computer to access Yahoo (like Outlook or the Mail app on Mac OS), you should already have all your emails downloaded to your computer. But you might also want to consider an email archiving application: Windows users should look at MailStore Home (free); Mac users might look at MailSteward (ranges from free to $99).
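
If you’re comfortable with a little scripting, you can also pull your mail down yourself over IMAP. Here’s a rough sketch using Python’s built-in imaplib; the server name, folder, and credential handling are assumptions on my part – check Yahoo’s current IMAP settings (and whether you need an app password) before relying on it:

```python
# Rough sketch of a do-it-yourself mail archive using Python's standard imaplib.
# The host name, folder, and credentials below are placeholders -- check your
# provider's current IMAP settings (and whether an app password is required)
# before using something like this.
import imaplib
from pathlib import Path

HOST = "imap.mail.yahoo.com"    # assumed Yahoo IMAP host; verify before use
USER = "you@yahoo.com"          # placeholder
APP_PASSWORD = "app-password"   # placeholder: use an app password, never your main one

archive = Path("yahoo-archive")
archive.mkdir(exist_ok=True)

with imaplib.IMAP4_SSL(HOST) as imap:
    imap.login(USER, APP_PASSWORD)
    imap.select("INBOX", readonly=True)        # readonly so nothing gets marked or deleted
    _, data = imap.search(None, "ALL")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        raw_message = msg_data[0][1]           # the full raw message, headers and all
        (archive / f"{num.decode()}.eml").write_bytes(raw_message)
```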

Once you’ve safely archived everything, you should delete all your emails from Yahoo’s servers. Why? Well, if nothing else, it should prevent successful hackers from perusing your emails for info they could use against you (identity theft, for example). Assuming Yahoo actually deletes these emails, it may also keep Yahoo (or the government) from digging through that info.

You should reset your Yahoo password to a really strong password (use a password manager like LastPass). I would highly recommend setting up two-factor authentication, as well.

As a final step, you can completely close your Yahoo account. Note that this may not actually delete all your data. Yahoo probably retains the right to save it all. But this is the best you can do.

If you find that you are just too invested in Yahoo to completely abandon your email account (and I’ll admit I may be in that camp), you can set up email forwarding. This will send all of your incoming Yahoo email to a different account. (It’s worth mentioning that it looks like Yahoo tried to disable this feature recently, probably in an effort to prevent the loss of users.)

Use ProtonMail

While Gmail and Outlook are two popular and free email providers, you should take a hard look at newer, more security- and privacy-conscious services. I would personally recommend ProtonMail. They have a nice free tier of service that includes web access and smartphone apps for iPhone and Android. If nothing else, grab your free account now to lock in a good user name before all the good ones are taken. Tell your friends to do the same. Just adding new free users will help the cause, even if the accounts aren’t used much.

But I’d like to ask you to go one step further: I encourage you strongly to sign up for one of their paid tiers of service, even if you don’t need the added features. The only way we’re going to force other service providers to take notice and to drive change is to put our money where our mouths are. Until it becomes clear that people are willing to pay for privacy and security, we’ll be stuck with all the ‘free’ services that are paid for with our personal info and where security is an afterthought.

Update Dec 14 2016:

Yahoo has just announced another breach, this time over 1 billion accounts hacked (maybe more). DITCH YAHOO!!


(This article is adapted from a few of my previous weekly security newsletter articles.)

The Pros & Cons of Anti-Virus Software

When most people think of protecting their computers, they think of anti-virus (AV) software. Viruses are a real problem, of course, but how well do AV apps protect you? And are there any downsides to using AV software?

In older times, AV software was essential and generally did a good job at finding malware on your computer. Generally speaking, the core function of AV software is to recognize known malware and automatically quarantine the offending software. Some AV software is smart enough to use heuristic algorithms to recognize malware that is similar to the stuff it already knows is bad, or recognize suspicious behavior in general and flag it as potentially harmful. A popular new feature for a lot of AV software is to monitor your web traffic directly, trying to prevent you from going to malicious web sites or from downloading harmful software.
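
To make “recognize known malware” concrete, here’s a toy sketch of signature-based scanning in Python – hash every file and compare against a list of known-bad digests. The “signature” list is invented for the example and real AV engines are vastly more sophisticated, but it also hints at the weakness discussed next: change a single byte of the malware and the hash no longer matches.

```python
# Toy illustration of signature-based scanning: hash each file and compare the
# digest against a list of known-bad digests. The "signatures" here are invented;
# real AV engines ship millions of them and update constantly.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "aa" * 32,   # placeholder digest standing in for a real malware signature
}

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan(directory: str) -> list:
    """Return files whose hash matches a known-bad signature."""
    return [p for p in Path(directory).rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256]

for hit in scan("."):
    print("quarantine candidate:", hit)
```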

That all sounds good, but the devil (as always) is in the details. Firstly, in the ever-connected world of the Internet, malicious software is produced so frequently and is modified so quickly that it’s really hard for AV software to keep a relevant list of known viruses. Also, the bad guys have moved to other techniques like phishing and fake or hacked web sites to get your information – attacking the true weakest link: you. AV software just isn’t as effective as it used to be.

But the problem is much worse than that. In many cases, the AV software itself is providing bugs for hackers to exploit. Recently, Symantec/Norton products were found to have horrendous security flaws (which they claim to have since fixed). Increasingly, AV products are offering to monitor your web traffic directly, but this means inserting themselves into all of your encrypted (HTTPS) communications, which has all sorts of ugly security and privacy implications (see Superfish and PrivDog as examples).

So… what are we to do? My recommendation (Tip #23 from my book) is to install basic, free anti-virus software. There are still plenty of old exploits out there that hackers will always try, and AV software will help defend you against these. But I don’t believe that the for-pay AV software is really worth it – and many of them may do more harm than good.

For PC users, I highly recommend Microsoft’s Windows Defender (or Security Essentials for older PCs). For Mac, I would go with Avira or Sophos Home. Be sure to completely uninstall any other AV software you might have before trying to install new AV software. I don’t believe any of these programs will offer to monitor live web traffic, but if they do, I would NOT enable this feature. The security implications of doing this incorrectly are horrendous.

At the end of the day, your best protection is to follow basic safe-surfing practices:

  1. Don’t click on links or attachments in emails unless you specifically requested them.
  2. Be wary of anything that sounds too good (or too bad) to be true. If you get a scary email about one of your accounts, log into your account by manually typing the web address or use a favorite/bookmark (do NOT use any links provided!) and look for alerts there. You can also search snopes.com to check for known hoaxes and scams.
  3. Use unique, strong passwords for each of your web accounts. Use a password manager like LastPass to generate and manage those passwords (see the short sketch after this list).
  4. Keep your operating system and apps up to date. This includes smartphones and tablets.
  5. Back up all your files.
  6. Use an ad-blocker. Unfortunately, bad guys are slipping malware into ad networks. I use both uBlock Origin and Privacy Badger.
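
As promised in tip #3, here’s a tiny example of what “strong and unique” means in practice, using Python’s standard secrets module – this is essentially the job a password manager does for you, so you never have to remember the result:

```python
# A strong, random password via Python's standard "secrets" module -- the same
# job a password manager does for you, so you never have to remember the result.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_=+"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every time, and never reused across sites
```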

Our Insecure Democracy

I happen to be a rather political person, but I try to keep my politics out of my work in the security and privacy arena because these issues must transcend politics. Our democracy in many ways depends on some basic level of computer security and personal privacy. In no place is this more obvious than the security and privacy of the voting booth.

With the 2016 US election fast approaching, it’s important to call attention to the sorry state of affairs that is the US voting infrastructure. There are plenty of other problems with the US election system, but there’s hardly anything more fundamental to our democracy than the method by which we vote. (I’ll be focusing on the US election system, but these principles should apply to any democratic voting system.)

At the end of the day, the basic requirements are as follows (adapted from this paper):

  1. Every eligible voter must be able to vote.
  2. A voter may vote (at most) one time.
  3. Each vote is completely secret.
  4. All voting results must be verifiable.

The first requirement may seem obvious, but in this country it’s far from guaranteed. For many reasons, many eligible and willing voters either cannot vote or face serious obstacles to voting: inability to get registered, lack of proper ID, lack of nearby voting sites, lack of transportation, hours-long waits at polling places, inability to get out of work, and so on. Voting should be as effortless as possible. Why do we vote on a Tuesday? We should vote on the weekend (Saturday and Sunday). People who work weekends should be given as much paid time off as they need to vote. We should also have early voting and support absentee voting.

The second requirement has become a hot-button political issue in this country, though in reality, in-person voter fraud has been proven again and again to be effectively non-existent. We’ve got this covered, folks. We don’t need voter ID laws and other restrictions – they’re fixes for a problem that doesn’t exist, and they end up preventing way more valid voters from voting than allowing invalid voters to vote (see requirement #1).

Now we get to the meat of the matter, at least in terms of security and privacy. The third requirement is that every vote is completely secret. Most people believe this is about protecting your privacy – and to some extent, this is true. You should always be able to vote your conscience without worrying how your boss, your friends, or your spouse would react. You should be able to tell them or not, lie or tell the truth – there should be no way for them to know. However, the real reason for a secret ballot is to prevent people from selling their vote and to prevent voter intimidation. If there is no way to prove to someone how you voted, then that vote can’t be verifiably bought or coerced. I think we had this pretty well figured out until smartphones came along. What’s to prevent you from taking a picture of your ballot? Depending on what state you live in, it may be a crime – but as a practical matter, it would be difficult to catch people doing this. However, I’m guessing this isn’t a big problem in our country – at least not yet.

Which brings us to the fourth and final requirement: verifiability. This is where the current US voting system really falls flat. In many states, we have voting systems that are extremely easy to hack and/or impossible to verify. We live in the era of constantly connected smartphones and tablets – a touchscreen voting system seems like a no-brainer. But many electronic voting systems leave no paper trail: no hard copy of your vote that you can see, touch, and verify – and that the people actually counting and reporting the tallies can verify, too. The electronic records could be compromised, whether by a glitch or by malicious tampering, and you probably wouldn’t even know it happened. Regardless of how you enter your vote, every vote cast must generate a physical, verifiable record. That may seem wasteful in this digital age, but it’s the only way. There must be some sort of hard copy record that the voter can verify and turn in before leaving the polling place. Those hard copy records must be kept 100% safe from tampering – no thefts, no ballot box stuffing, no alterations. And every election should include a statistical integrity audit – that is, a sample of the paper ballots must be manually counted to make sure the paper results match the electronic ones. If there is any reason to doubt the electronic results, you must be able to do a complete manual recount. That’s the key.
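
Here’s a toy sketch of that audit idea in Python: randomly sample paper ballots and compare each one against the corresponding electronic record. Real post-election audits (risk-limiting audits, for example) choose the sample size statistically; the numbers and data below are invented purely for illustration.

```python
# Toy sketch of a post-election spot check: randomly sample paper ballots and
# compare each one to the corresponding electronic record. Real audits choose
# the sample size statistically; all data here is invented for illustration.
import random

electronic_records = {i: random.choice(["Candidate A", "Candidate B"]) for i in range(10_000)}
paper_ballots = dict(electronic_records)   # in a clean election, the paper matches
# paper_ballots[42] = "Candidate C"        # a tampered record (flagged only if it lands in the sample)

sample_ids = random.sample(sorted(paper_ballots), k=500)
mismatches = [i for i in sample_ids if paper_ballots[i] != electronic_records[i]]

if mismatches:
    print(f"{len(mismatches)} discrepancies in the sample -- escalate to a full manual recount")
else:
    print("Sampled paper ballots match the electronic tally")
```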

Unfortunately, according to that same MIT paper, we have a hodge-podge of voting systems across the country, many of which have at least some areas using electronic voting machines (Direct Recording Electronic, or DRE) without a paper trail (Voter Verified Paper Audit Trail, or VVPAT).

(Image: voting systems map.) This map pretty much says it all to me. It’s time that we adopt national standards for our voting infrastructure. You can leave implementation up to each state, if you’re a real “states’ rights” type, but honestly I think we should just hand this over to the Federal Election Commission and have a single, rock-solid, professionally-vetted, completely transparent, not-for-profit, non-partisan voting system. Of course, we’d need to revamp the current FEC – give it the budget, independence and expertise it needs to do its job effectively. It should be staffed with non-political commissioners (never elected to office and with no direct party affiliation) who are completely free from political and financial influence. This is much easier said than done, but if we can agree that our democracy is more important than any party or ideology – just long enough to do this – then maybe we can make it happen. Of course, there’s no way any of this will happen before this year’s elections, but we should be able to get it in place for 2018 if we start now.

What can YOU do? As always, get educated and get involved. Write your congress person and vote for people that have vowed to reform our election and voting systems. If nothing else, give money to organizations that are doing the right things, and ask your friends and family to do the same. I’ve given some examples below for you to consider. Note that it’s very hard to find completely unbiased organizations because these issues have been so politicized and our country right now is very polarized. But whatever your political leanings, you can’t have a true democracy if you can’t have fair, open, and verifiable elections.

If you’re interested, here are a couple more good articles to check out.

UPDATE: Another interesting story on the security of our voting system.

Apple vs the FBI

I’ve been waiting to comment on this because more information seems to be coming out every day. Also, there has been so much written about this already that I wasn’t sure what I would have to add. But I’m not being hyperbolic when I say that this is a pivotal moment in our democracy, so I couldn’t just ignore it. One thing I haven’t seen is a good summary of what’s really going on here, so let’s start with that.

Just the Facts: What’s Really Going On Here?

First, let’s establish what’s really going on here, because it’s been very muddy. The FBI recovered an iPhone 5c that was used by one of the shooters in the San Bernardino attacks last year. This phone was issued to the shooter by his employer, and therefore was not his private cell phone – meaning that the data on that phone was technically not private. Nevertheless, that data was encrypted by default because that’s how Apple sets up every modern iPhone. The FBI believes there may be information on that iPhone that could help them perhaps find other co-conspirators or maybe uncover clues to some future plot.

This phone was backed up using Apple’s iCloud service, and it’s worth noting that Apple was willing and able to provide the FBI with that backed-up data. However, for some reason, the backups to iCloud stopped about six weeks before the shooting – so the FBI wants to get at the data on the device itself to see what’s missing. Due to some sort of screw-up, the FBI instructed local law enforcement to change the user’s iCloud password, which prevented the phone from performing another backup. If they had taken the device to a Wi-Fi network it already knew, the phone might have backed up on its own, and the FBI would have had the six weeks’ worth of missing data. But because the password was changed, we’ll never know.

The FBI is not asking Apple to break the encryption on the phone. That’s actually not possible. Encryption works. When done right, it can’t be broken. However, if you can unlock the device, then you can get to all the data on it. Unlocking the device means entering a PIN or password on the lock screen – it could be as simple as a 4-digit number, meaning there are only 10,000 possible codes. With a computer-assisted “guesser”, it would be trivial to go through all 10,000 options until you found the right one to unlock the phone.

To combat this “brute force” attack, Apple added some roadblocks. First, it restricted how often you can try a new code – taking progressively longer between guesses, from minutes up to a full hour. That makes guessing even 10,000 options take a very long time. Second, Apple gave the user the option to completely erase the device if someone enters an incorrect passcode ten times in a row. This feature is not enabled by default, but it’s easy to turn on (and I highly recommend that everyone do this).

The FBI is basically asking Apple to create a new, custom version of its iPhone operating system (iOS) that disables these two features and allows a connected computer to submit guesses electronically (so that a human wouldn’t have to try them all by hand). This would allow the FBI to quickly and automatically try every possible PIN until the phone was unlocked. It’s not breaking the encryption; it’s allowing the FBI to “brute force” the password/PIN guessing. It’s not cracking the safe; it’s letting a robot quickly try every possible combination till it opens.
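
To see why those two roadblocks matter, here’s a toy simulation in Python of brute-forcing a 4-digit PIN against a lockout policy like the one described above. The delay schedule and the ten-guess wipe threshold are illustrative assumptions, not Apple’s actual parameters:

```python
# Toy simulation of brute-forcing a 4-digit PIN (10,000 possibilities) against
# a lockout policy like the one described above. The delay schedule and the
# ten-guess wipe are illustrative assumptions, not Apple's actual parameters.

SECRET_PIN = "7294"   # what the attacker is trying to guess
# Forced wait (in seconds) after each failed attempt: none at first, then escalating.
DELAYS = [0, 0, 0, 0, 0, 60, 300, 900, 3600, 3600]

def brute_force(erase_after_ten: bool) -> str:
    wasted_seconds = 0
    for attempt, guess in enumerate((f"{n:04d}" for n in range(10_000)), start=1):
        if guess == SECRET_PIN:
            return f"PIN found after {attempt} guesses (~{wasted_seconds / 3600:.0f} hours of forced delays)"
        if erase_after_ten and attempt >= 10:
            return "Device erased itself after 10 wrong guesses -- data gone"
        wasted_seconds += DELAYS[min(attempt, len(DELAYS)) - 1]
    return "exhausted every code without finding the PIN"

print(brute_force(erase_after_ten=False))  # slow, but eventually succeeds
print(brute_force(erase_after_ten=True))   # almost certainly wipes the phone first
```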

That’s just a thumbnail sketch, but I felt it was necessary background. This article from the EFF goes into a lot more depth and answers some excellent questions. If you’d like to know more, I encourage you to read it.

Why Is This Case So Important?

Both the FBI and Apple are putting heavy spin on this issue. The FBI has always disliked Apple’s stance on customer privacy (encrypting iPhone data by default) and picked this terrorism case to invoke maximum public sympathy. Apple is using this opportunity to extol its commitment to protecting its customers’ private data, particularly as compared to Android (which does not encrypt data by default). Despite what the FBI claims, this is not about a single iPhone and a single case; despite what Apple claims, creating this software is not really comparable to creating “software cancer”. We have to try to set all of that aside and look at the bigger picture. This is not a black and white situation – these situations rarely are. However, the implications are enormous and the precedent set here will have far-reaching effects on our society.

In this country, we have the Fourth Amendment, which prevents unreasonable search and seizure and basically says that you need a warrant from a judge if you want to breach our right to privacy. In this case, the FBI has done its job in this regard. And it’s technically feasible for Apple to create a special, one-time version of its iOS that would allow the FBI to unlock this one iPhone – and this special software would not run on any other iPhone. This is due to a process called “signing”, which is another wonderful application of cryptographic techniques. So in this sense, it’s not a cancer – this special software load can’t be used on other devices. However, if Apple does this once, it can do it again, and there are already many other iPhones waiting at the FBI and in New York that would be next in line for this treatment. There is no doubt that this would set a precedent and open the floodgates for more such requests in other cases – not just from US law enforcement, but from repressive regimes around the globe. Furthermore, the very existence of such a tool, even though guarded heavily within Apple’s walls, would be a massive target for spy agencies and hackers around the globe.
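
Here’s a minimal sketch of the signing idea, using an Ed25519 key from the Python “cryptography” package. The notion of binding the signed build to a single device ID is a simplification I’ve invented to illustrate why software signed for one device wouldn’t validate on another – Apple’s real scheme is considerably more involved:

```python
# Minimal sketch of code signing using the "cryptography" package. Tying the
# signature to a device ID is a made-up simplification for this example;
# Apple's real signing scheme is considerably more complex.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()   # held only by the OS vendor
verify_key = vendor_key.public_key()        # baked into every device

def sign_build(firmware: bytes, device_id: str) -> bytes:
    """Vendor signs the firmware together with the one device it's meant for."""
    return vendor_key.sign(firmware + device_id.encode())

def device_accepts(firmware: bytes, signature: bytes, device_id: str) -> bool:
    """A device only runs software whose signature verifies for *its* ID."""
    try:
        verify_key.verify(signature, firmware + device_id.encode())
        return True
    except InvalidSignature:
        return False

firmware = b"custom unlock-friendly build"
signature = sign_build(firmware, device_id="PHONE-A")

print(device_accepts(firmware, signature, "PHONE-A"))  # True: signed for this device
print(device_accepts(firmware, signature, "PHONE-B"))  # False: won't load anywhere else
```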

So the issue is much deeper than simply satisfying a valid warrant (even setting aside the arcane All Writs Act from the late 1700s that the FBI claims should compel Apple – a third party – to help them satisfy this warrant). The outcome of this case will have severe implications for privacy in general – and that’s why Apple is fighting back.

My Two Cents

I’ve read a lot of good articles on this issue, and I’ll point you to a couple of them shortly. But the bottom line is that we, as a society, need to figure out how we handle privacy in the age of digital communications and ubiquitous monitoring. Like it or not, you are surrounded by cameras and microphones, and it’s getting worse rapidly. You carry with you a single device that can simultaneously record video and audio, track your position anywhere on the planet, track many of your friends and family, record your physical movement, and store your personal health and financial data, as well as untold amounts of other personal information. That device is your smartphone. That one device probably has more information about you than any other single thing you own. Beyond that, all of our communications are now digital and can therefore be perfectly preserved forever. And in the grand scheme of things, any person or group of people that can gain surreptitious access to this information – regardless of their intentions – will have unimaginable power over us. This was not envisioned by the Founding Fathers – we’re in new territory here.

It’s long past time that we had an informed, open and frank discussion – as a nation – about how we balance the need for basic human privacy against the need for discovery in the pursuit of safety. It’s also about targeted surveillance versus mass surveillance, and creating an open, transparent system of checks and balances to govern both. If nothing else, I hope this case leads to a more informed public and some rational, thoughtful debate about the broader issues here – not just this one, highly emotional case.

As promised, here are some links with some excellent info and perspectives around these topics:

Here are some links to more general but related topics:

 

On the Ethics of Ad-Blocking

As the saying goes, if you’re not paying for the product, then you are the product. The business model for most of the Internet revolves around advertising – which in and of itself is not a bad thing. It may be an annoying thing, but passive advertising isn’t actually harmful. Passive advertising is placing ads where people can see them. And savvy marketers will place their ads in places where their target audiences tend to spend their time. If you’re targeting middle-aged men, you might buy ad space on fantasy football or NASCAR web sites, for example. If you’re targeting tween girls, you might buy ad space on any site that might feature something about Taylor Swift or Justin Bieber. And if it stopped there, I don’t think many of us would object – or at least have solid grounds for objection. After all, this advertising is paying for the content we’re consuming. Producing the content costs money – so someone has to pay for it or the content goes away.

Unfortunately, online marketing didn’t stop there. On the web, competition for your limited attention has gotten fierce – with multiple ads on a single page, marketers need you to somehow focus on their ad over the others. And being on the Internet (and not a printed page), advertisers are able to do a lot more to grab your attention. Instead of simple pictures, ads can pop up, pop under, flash, move around, or float over the articles you’re trying to read. Worse yet, ad companies want to be able to prove to their customers that they were reaching the right people and that those people were buying their product – because this makes their ad services far more valuable, meaning they can charge more for the ads.

Enter the era of “active advertising”. It has now become very hard to avoid or ignore web page and mobile ads. Worse yet, the code that displays those ads is tracking where you go and what you buy, building up profiles on you and selling those profiles to marketers without your consent (and without most people even realizing it). Furthermore, those ads use precious data on cell phones and take a lot of extra time to download regardless of what type of device you use. And if that weren’t bad enough, ad software has become so powerful, and ad networks so ubiquitous and so commoditized, that bad guys are now using ad networks to distribute “malware” (bad software, like viruses). It’s even spawned a new term: malvertising.

Over the years, browsers have given users the tools they need to tame some of these abuses, either directly in the browser or via add-ons. It’s been a cat-and-mouse game: when users find a way to avoid one tactic, advertisers switch to a new one. The most recent tool in this toolbox is the ad-blocker. These plugins allow the user to completely block most web ads. Unfortunately, there’s really no way for ad blockers to sort out “good” advertising from “bad” advertising. AdBlock Plus (one of the most popular ad-blockers) has attempted to address this with their acceptable ads policy, but it’s still not perfect.
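
Under the hood, most ad-blockers boil down to filter lists: before the browser fetches a resource, the extension checks the request against patterns of known ad and tracker domains. Here’s a toy version in Python – the blocklist below is invented, and real blockers use much richer rule syntaxes (EasyList and friends):

```python
# Toy version of ad-blocker filtering: block a request if its host matches a
# domain on the filter list. The list below is illustrative; real blockers use
# much richer rule syntaxes and update their lists constantly.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"ads.example", "tracker.example", "annoying-popups.example"}

def should_block(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Match the listed domain itself and any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(should_block("https://cdn.ads.example/banner.js"))     # True -- request never made
print(should_block("https://www.example.com/article.html"))  # False -- loads normally
```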

But many web content providers need that ad revenue to stay afloat. Last week, Wired magazine announced that it will begin to block people who use ad-blockers on its web site. You will either need to add Wired.com to your “whitelist” (allowing them to show you ads) or pay them $1 per week. They state clearly that they need that ad revenue to provide their content, and so they need to make sure that if you’re going to consume that content, you’re paying for it – either directly ($1/week) or indirectly (via ad revenue).

So… what’s the answer here? As always, it’s not black and white. Below is my personal opinion, as things stand right now.

I fully understand that web sites need revenue to pay their bills. However, the business model they have chosen is ad-supported content, and unfortunately the ad industry has gotten overzealous in the competition for eyeballs. In the process of seeking to make more money and differentiate their services, they’re killing the golden goose. Given the abusive and annoying advertising practices, the relentless and surreptitious tracking of our web habits, the buying and selling of our profiles without our consent, and the lax policing that allows malware into ads, I believe the ad industry has only itself to blame here. We have every reason to mistrust them and every right to protect ourselves. Therefore, I think people are fully justified in using ad-blockers.

That said, Wired (and other web sites) also have the right to refuse to let us see their content if we refuse to either view their ads or pay them money. However, I think in the end they will find that people will just stop coming to their web sites if they do this. (It’s worth noting that some sites do well with voluntary donations, like Wikipedia.) Therefore, something has to change here. Ideally, the ad industry will realize that they’ve gone too far, that they must stop tracking our online pursuits and stop trafficking in highly personal information without our consent.

The bottom line is that the ad industry has itself to blame here. They’ve alienated users and they’re going to kill the business model for most of the Internet. They must earn back our trust, and that won’t be easy. Until they do, I think it’s perfectly ethical (and frankly safer) to use ad-blocking and anti-tracking tools.

Below are some of my favorite plugins. Each browser has a different method for finding and installing add-ons. You can find help here: Firefox, Safari, Internet Explorer, Chrome.

  • uBlock Origin – ad-blocker
  • Privacy Badger – anti-tracking plugin
  • HTTPS Everywhere – forces secure connections whenever possible
  • Better Privacy – another privacy plugin, slightly different from Privacy Badger

If you would like to get more involved, you might consider contributing to the Electronic Frontier Foundation.

 

 

Gone Phishin’ (LostPass)

LastPass is the password manager I recommend in my book and to anyone who asks. While there are a handful of good products like it, to me LastPass has a rock-solid security story and all the features anyone could want.

You may have heard last week about a threat to LastPass called “LostPass” on the news. Well, actually, you probably didn’t – the mainstream press doesn’t cover this stuff much. But I’m going to cover it anyway because it demonstrates one of the most troublesome security problems we have today: phishing. Unfortunately, this has nothing to do with a rod and a reel and whistling the theme to The Andy Griffith Show. Phishing is a technique used by scammers to get sensitive information from people by pretending to be someone else – usually via email or a web page (or both). Basically, they trick you into thinking you’re dealing with your bank, a popular web site (PayPal, eBay, Amazon, etc), or even the government. Sometimes they entice you with good stuff (winning a prize, free stuff, special opportunity) and sometimes scare you with bad stuff (freezing your account, reporting you to some authority, or telling you that your account has been hacked). But in all cases, they try to compel you to give up information like passwords or credit card numbers.

Unfortunately, it’s extremely easy to create exact duplicates of web pages. There’s just no real way to identify a fake by looking at it. Sometimes you can tell by looking at the web site’s address, but scammers are very good at finding plausible web site names that look very much like the real one they’re impersonating.
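
To see how easily a lookalike address can fool the eye, here’s a small Python sketch that checks whether a URL’s host really belongs to the domain you expect. The URLs are invented, with lastpass.com standing in as the expected domain – notice how plausible the last two look at a glance:

```python
# A small illustration of lookalike addresses: check whether a URL's host truly
# belongs to the domain you expect. The URLs are invented; "lastpass.com" is
# just the expected domain for this example.
from urllib.parse import urlparse

def belongs_to(url: str, expected_domain: str) -> bool:
    host = urlparse(url).hostname or ""
    return host == expected_domain or host.endswith("." + expected_domain)

for url in (
    "https://lastpass.com/login",                        # the real thing
    "https://accounts.lastpass.com/login",               # a genuine subdomain
    "https://lastpass.com.secure-login.example/login",   # fake: real domain is secure-login.example
    "https://1astpass.com/login",                        # fake: that's a digit one, not an L
):
    print(belongs_to(url, "lastpass.com"), url)
```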

In the case of “LostPass”, a researcher demonstrated that he could act as a “man in the middle” to steal your LastPass login and password – even if you use two-factor authentication (which I strongly recommend). LastPass is a browser plugin that watches what web pages you’re on, and when it detects a login form, it offers to automatically fill in your ID and password. This researcher was able to create a malicious web page that could log you out of LastPass and then pop up a dialog asking you to log back in – but not the real LastPass dialog! Instead, it was a fake. So you would enter your email address and password, and it would store this juicy info. It now had the keys to the kingdom! It had access to your entire LastPass vault – all your passwords, secure notes, credit cards!

To keep you from getting suspicious, it would then turn around and use the email and password you gave it to actually log into LastPass. This is the “man in the middle” part – to you, it pretends to be LastPass; to LastPass, it pretends to be you. If you had two-factor authentication turned on, it would still do you no good: LastPass would prompt for the two-factor auth token, and the malicious page would turn around and ask you for that token – again, placing itself in the middle.
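
The relay is easier to see as a sequence of hand-offs than as prose. Here’s a purely conceptual toy in Python – it talks to nothing real and every name is invented – showing why even a one-time two-factor code typed into a fake dialog gets passed straight through:

```python
# Purely conceptual toy of the man-in-the-middle relay described above. Nothing
# here touches a real service; it only prints the hand-offs to show why a
# relayed two-factor code doesn't protect you from real-time phishing.

def real_service_login(username: str, password: str, otp: str) -> str:
    # Stand-in for the genuine site: it sees valid credentials and a valid
    # one-time code, so it happily hands back a session.
    return f"session-token-for-{username}"

def fake_login_dialog(username: str, password: str, otp: str) -> str:
    print(f"[phisher] captured credentials and one-time code for {username}")
    print("[phisher] forwarding them to the real service right now...")
    token = real_service_login(username, password, otp)
    print("[phisher] logged in as the victim; the victim just sees a normal login")
    return token

# The victim believes they are talking to the real service:
fake_login_dialog("alice@example.com", "correct horse battery staple", otp="492113")
```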

So, should you be worried? Should you abandon LastPass? Yes and no, in that order. Phishing is a problem for ALL browser-based plugins, including those of other password managers. Phishing is a major problem for everyone who uses email or web browsers. So in that sense, you should be worried about it. In a minute, I’ll give you some tips for protecting yourself.

This researcher picked on LastPass because he perceived it to be the top dog in password managers, not because it was the only one susceptible to this attack. He also contacted LastPass well before he released his research (as any good security researcher would do), and LastPass was able to patch their software before he even announced this problem. That is – if you’re using LastPass, you’re already safe from LostPass, as long as you’re updating your browser plugin. LastPass reacted properly to this, from what I can see, and has mitigated this particular risk (and ones like it). LastPass, in my experience, has taken all security concerns very seriously and is constantly updating its software to react to even potential risks. So I feel very comfortable continuing to recommend it.

How can you protect yourself from phishing attacks? Here are some tips:

  1. Never give out sensitive info in email. This includes credit card numbers, passwords, and social security numbers. Any reputable organization will never ask for this via email.
  2. Don’t click on links or buttons in emails. If the email is fake, they will take you to a fake or malicious web site – one that may look exactly like the real one. Instead of clicking the links provided, just go to the site “manually” – that is, type in the main web site address by hand (or Google it) and then log into your account or whatever from there.
  3. Don’t fall for scare tactics. It’s common for scammers to tell you that something bad will happen or has already happened, and you MUST click here NOW to fix it. For example, your PayPal account has been frozen and you need to click this link now to log in and set things straight. Instead, go to your web browser, type in “paypal.com” and log in. If there’s really a problem, you’ll see it immediately when you log in.
  4. Help protect others by using strong, unique passwords on your email accounts. Hackers love to get into your email account and your email address book to send emails to everyone you know. These people (presumably) trust you and will be more likely to click on bad links.
  5. Use LastPass and keep your browser plugin up to date. They are adding new features to help prevent phishing attacks.

Using Credit Freeze for Self Defense

Identity theft is arguably one of the worst things that can happen to a person, financially. When someone steals your identity, they can basically do anything you can do – including obtaining loans or credit cards in your name. And when the spending spree is over, you are left holding the bag. If it’s not bad enough that they’ve taken your money and left you with a huge bill, it may also have a major negative impact on your credit report. It can be very difficult and time consuming to undo all this damage.

In order to open a new loan or credit card in your name, the criminals have to pass a credit check through one of the big three credit bureaus: Experian, TransUnion and Equifax. Therefore, if you can somehow stop the credit check from going through, you can prevent the bad guys from getting a new line of credit in your name.

The easiest way to do that is to “freeze” your credit – basically you tell the credit bureaus to put a halt on all credit checks until you tell them otherwise. This obviously only works if you yourself don’t need to have your credit checked. If you’re about to get another credit card (including store cards) or need to finance something (car, house, appliances, etc), then you’re going to need to run a credit check. Also, some other activities will trigger a credit check, such as background checks, opening a new financial account, or even signing up for a new utility (cable, for example).

Freezing your credit has absolutely no impact on your credit score. You have to do it with all three credit companies and there is a small fee involved usually (up to $10). There’s also a fee to “thaw” your credit, so you don’t want to do this often.

Basically, if you rarely if ever need to open new lines of credit, you should go ahead and put a freeze on your credit. It does no harm and can save you a ton of heartache. I recommend reading this Clark Howard article. It has all the details on how to freeze your credit with each of the three credit companies.

http://www.clarkhoward.com/credit-freeze-and-thaw-guide

If you’d like more info on credit freezes, check out this Federal Trade Commission web site:

https://www.consumer.ftc.gov/articles/0497-credit-freeze-faqs

If you’d like to stop getting “pre-screened” and “pre-qualified” credit card offers in the mail (which can sometimes be stolen and used to open credit in your name), see this FTC web site. It will tell you how to opt out. It’s a bit of a pain, but well worth it.

https://www.consumer.ftc.gov/articles/0148-prescreened-credit-and-insurance-offers

Windows 10 Privacy Issues

If you use a Windows computer at all, you’ve probably seen that annoying little pop-up message that keeps reminding you that Windows 10 is coming. Windows 10 is a free upgrade for most people, and Microsoft is clearly banking on most people accepting this Trojan horse of free software. Microsoft is also counting on most people just using the “express install” option – that is, taking all the Microsoft-chosen default settings. I’m here to tell you: DON’T DO THAT.

Microsoft has really gone overboard with privacy-threatening features in this release, and it appears that most of them are on by default. When I write the second edition of my book, I’ll have a full explanation of how to guard your privacy on Windows 10. But here are some quick recommendations.

NOTE: If you can wait to install Windows 10, then by all means wait. We will learn more things about it in the coming weeks and months, and security and privacy experts will get a chance to learn what’s really going on and hopefully figure out how to fix the problems. And if there’s enough uproar, perhaps Microsoft will even dial back on some of these privacy-invading features. But if you can’t wait, or if you’ve already installed it, here are a few key tips.

  1. Don’t use the Express Install option. Customize your install and read over every option.
  2. Don’t sign into Windows with your Microsoft account. This allows Microsoft to associate all sorts of info and activities with you, and share it with others. Just use a local account.
  3. Don’t use Cortana. Yeah, it’s really cool, but by enabling this one feature, you open yourself up to all sorts of spying by your operating system and Microsoft. Until they can address security and privacy concerns, this feature is just too scary.
  4. Don’t use Wi-Fi Sense. This is a new feature that conveniently lets you share your Wi-Fi passwords with people you know. That means syncing those passwords to the cloud, which to me invites security risks that aren’t worth the convenience.

Here are some more articles you might want to check out.