Equifax Hack: Protecting Against Identity Theft

You’ve probably already heard about the massive data breach at Equifax, one of the three major US credit bureaus. The company says that up to 143 million people may be affected, which is almost half of the entire population of the United States. The stolen data may include names, Social Security numbers, birth dates, addresses and, in some instances, driver’s license numbers. In other words, just about everything you might need to commit identity theft. Equifax has a “potential impact” web site that will supposedly tell you if you were affected, but there have been mixed results in practice. If you were affected, it will send you to enroll in their TrustedID credit monitoring service – and then tell you to come back in a few days to actually do it. They are frankly not handling this well, and the lawsuits are already coming.

Step One: Mitigating Identity Theft

So what should you do? I would go ahead and take the free monitoring service when it becomes available. It can’t hurt (and shouldn’t prevent you from participating in a class action suit). But there are two other things you should strongly consider: either a credit freeze or a fraud alert.

A credit freeze will prevent any new requests for your credit history, which should stop anyone (including yourself) from getting a new credit card or opening a loan in your name. You will have to do this by contacting each of the three major bureaus (Equifax, Experian and TransUnion), and it will cost between $5 and $10 each. However, credit histories are used for many other purposes, so a freeze might also interfere with applying for a new job, signing up for a new service (e.g., phone, cable, utilities), or even the above-mentioned credit monitoring. You can always ‘thaw’ your credit and re-freeze it, but you will have to pay again.

The simpler option is a fraud alert, which is totally free but less effective. A fraud alert will simply require credit institutions to do a little more verification before allowing credit to be opened in your name. For example, they may call you if you have a phone number on file. Unlike the credit freeze, you only need to contact one of the three agencies and they are required to tell the other two. However, it only lasts for 90 days, though you can renew it as many times as you like. (If you can prove you have actually been a victim of identity theft, you can get a 7-year fraud alert.) I would do this immediately, and then after signing up for Equifax’s free monitoring service, you can consider implementing a full credit freeze.

Step Two: Basic Security Hygiene

Your next steps should be to beef up your general security – things you should already be doing, but things that become much more important in the wake of this horrific data breach.

  1. Use strong, unique passwords for your important accounts (financial, email and social media). Do not repeat passwords! To help with this, use a password manager like 1Password, LastPass, KeePass, etc.
  2. Set up and use two-factor authentication for these same accounts. This means you’ll have to enter both a password and a one-time PIN code. (This is usually only required the first time you log in from an unknown device or location.) You can search for your service here and get quick links to help. (See the sketch just after this list for how those one-time codes are generated.)
  3. Get your free annual credit reports from each credit bureau. I would recommend spreading them out – do one every four months, rotating through each of the three services. Set a repeating annual calendar reminder for each one, maybe Experian in January, Equifax in May and TransUnion in September.
  4. Keep a close eye on your credit card, bank and other financial statements for suspicious activity.
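If you’re curious what’s happening under the hood with those one-time PIN codes, here’s a minimal sketch of the time-based variety (TOTP) using Python and the third-party pyotp package – the package and the specifics here are just for illustration; your authenticator app does the equivalent math for you.

```python
# A minimal sketch of how time-based one-time codes (TOTP) work, using the
# third-party pyotp package (assumed installed: pip install pyotp).
import pyotp

# When you enable two-factor authentication, the service gives you a shared
# secret (usually as a QR code you scan with an authenticator app).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The app computes a fresh 6-digit code from the secret plus the current time;
# the service does the same calculation on its end to verify your entry.
code = totp.now()
print("Current one-time code:", code)

# Codes are only valid for a short window (30 seconds by default), so a stolen
# code is useless to an attacker a minute later.
print("Valid right now?", totp.verify(code))
```

The important point is that the code is derived from a shared secret plus the current time, so even if someone steals your password, they still can’t log in without that second, constantly-changing factor.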

Stay tuned… I’m sure there will be more on this soon. My radio show and podcast will delve into this a bit further later this week.

UPDATE: This is another excellent article on credit freezes and fraud alerts.

UPDATE 2: Great article on the broader issues for democracy and privacy. The above is about you; this article is about everyone. The market is not able to fix these problems; it’s going to require legislation – and that means you need to be informed and lobby your representatives.

Beware Hype and Click-Bait

(It’s been a while since I’ve written a full blog post. I’ve been putting most of my efforts into my weekly newsletter – be sure to subscribe to get weekly tips and news on cyber security and online privacy.)

Headline Hyperbole

This week, we saw the following headline from The Guardian: “WhatsApp vulnerability allows snooping on encrypted messages”. This story was immediately picked up by just about every other major tech news web site, with headlines that were even more dire:

  • A critical flaw (possibly a deliberate backdoor) allows for decryption of Whatsapp messages (BoingBoing)
  • WhatsApp Apparently Has a Dangerous Backdoor (Fortune)
  • WhatsApp encrypted messages can reportedly be intercepted through a security backdoor (Business Insider)

I swear there were others from big-name sites, but I can’t find them – I think they’ve been deleted or updated. Why? Because this story (like so many others) was completely overblown.

Which brings us to the point of this article: our online news is broken. It’s broken for much the same reasons that the media is broken in the US in general – it’s all driven by advertising dollars, and ad dollars are driven by clicks and eyeballs. (See also: On the Ethics of Ad-Blocking). But the problem is even more insidious when applied to the news because all the hyperbolic headlines and dire warnings are making it very hard to figure out which problems are real – and over time, like the boy who cried wolf, it desensitizes us all.

WhatsUp?

Let’s take this WhatsApp story as an example. The vague headline from The Guardian implies that WhatsApp is fatally flawed. And the other headlines above are even worse, trotting out the dreaded and highly-loaded term “backdoor”. Backdoor implies that someone at WhatsApp or Facebook (who bought WhatsApp) has deliberately created a vulnerability with the express purpose of allowing a third party to bypass message encryption whenever they wish and read your private communications.

The first few paragraphs from the article seem to confirm this. Some excerpts:

  • “A security vulnerability that can be used to allow Facebook and others to intercept and read encrypted messages has been found within its WhatsApp messaging service.”
  • “Privacy campaigners said the vulnerability is a ‘huge threat to freedom of speech’ and warned it could be used by government agencies as a backdoor to snoop on users who believe their messages to be secure.”
  • “If WhatsApp is asked by a government agency to disclose its messaging records, it can effectively grant access”

Now let’s talk about what’s really going on here. It’s a little technical, so bear with me.

The Devil In The Details

Modern digital communications use what’s called public key encryption. Unlike private key systems (which have a single, shared key to both encrypt and decrypt data), public key systems use two keys:

  1. Public key: Freely given to everyone, allows a sender to encrypt a message
  2. Private key: Fiercely protected and never shared, used to decrypt received messages that were encrypted with the public key

If you had a single, shared key, then you would have to find some secure way to get a copy of that key to your intended message recipient. You can’t just email or text it, or even speak it over the phone – that could be intercepted. The public key system allows you to broadcast your public key to the world, allowing anyone to send you an encrypted message that only you can decrypt, using your closely-guarded private key. In this same fashion, you use the other person’s public key to respond. This is insanely clever and it’s the basis for our secure web.
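If you like to see things in code, here’s a minimal sketch of the idea using the third-party Python “cryptography” package. RSA is just one example of a public key algorithm; this is only meant to illustrate the public/private split, not any particular messaging app’s implementation.

```python
# A minimal sketch of public key encryption (assumes: pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The recipient generates a key pair; the private key never leaves their device.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # this half is shared freely with the world

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone can encrypt a message using the recipient's public key...
ciphertext = public_key.encrypt(b"meet me at noon", oaep)

# ...but only the holder of the matching private key can decrypt it.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"meet me at noon"
```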

As is usually the case, the devil is in the details when it comes to crypto systems. The underlying math is solid and the algorithms have been rigorously tested. Where these systems break down is in the implementation. You can have an unbreakable deadbolt on your front door, but if you leave the key under your door mat or there’s a window right next to the lock on the door that can be broken… you get the idea.

Here’s the problem with how WhatsApp implemented their encryption. The app will generate public and private keys for you on the fly, and exchange public keys with the person you’re communicating with – all in the background, without bothering you. That’s fine – so far, so good. But let’s say Alice sends a message to Bob while Bob is offline. WhatsApp on Alice’s phone has used Bob’s last known public key to encrypt these messages, and they’re waiting (either on Alice’s phone or maybe on WhatsApp’s servers) for Bob to come online to be sent. In the meantime, Bob has dropped his phone in the toilet and must get a new one. He buys a new phone, reinstalls WhatsApp, and WhatsApp is forced to generate a new public/private key pair. When he comes online, Alice’s copy of WhatsApp figures out that the public key it has for Bob is no longer valid. And here’s where things fall apart. WhatsApp will then get Bob’s new public key and re-encrypt the pending messages, and then re-send them.
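To make that flow concrete, here’s a toy model of the behavior – this is absolutely not WhatsApp’s real code, and encrypt() and send() below are made-up stand-ins – but it captures the design choice in question.

```python
# A toy model of "silently adopt the new key, re-encrypt, and re-send".
# NOT WhatsApp's real code; encrypt() and send() are made-up stand-ins.

def encrypt(text, key):
    # Stand-in for real public key encryption.
    return f"<{text!r} encrypted for key {key}>"

def send(ciphertext):
    # Stand-in for handing the message to the messaging servers.
    print("sending:", ciphertext)

class AliceClient:
    def __init__(self, bobs_public_key):
        self.bobs_key = bobs_public_key  # last known public key for Bob
        self.pending = []                # messages Bob hasn't received yet

    def queue_message(self, text):
        # Alice's phone keeps the plaintext locally, so it can re-encrypt later.
        self.pending.append(text)
        send(encrypt(text, self.bobs_key))  # waits on the server while Bob is offline

    def on_recipient_key_change(self, new_key):
        # Bob (or someone pretending to be Bob) reappears with a brand-new key.
        # The convenient behavior: adopt the new key and re-send everything
        # still pending, without warning Alice or asking her to verify it.
        self.bobs_key = new_key
        for text in self.pending:
            send(encrypt(text, new_key))
```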

Bug or Feature?

That’s it. That was the fatal flaw. The “backdoor”. Did you catch it?

If you missed it, don’t feel bad. This stuff is complicated and hard to get right. The problem is that Alice was not warned of the key change and (crucially) was not given the opportunity to validate Bob’s new key. So, theoretically, some third party – let’s call her Mallory – could somehow force Bob to go offline for a period of time and then pretend to be Bob with a new device. This would trick Alice’s copy of WhatsApp into re-encrypting the pending messages with Mallory’s key and sending them to Mallory. In other words, Mallory could potentially receive the messages pending for Bob. Not past messages – just the pending ones, and potentially ones in the near future, at least until Bob comes back online.

This key change is part and parcel of how modern public key crypto messaging works. The only fault you can really find with WhatsApp here is that they don’t (currently) enable changed-key warnings by default, and they don’t block re-sending of pending messages until the user (in this case Alice) reviews the new key and approves the update (i.e., satisfies herself that it’s really Bob who is sending the new key).

Is that a “backdoor”? No. Not even close. It was not maliciously and secretly implemented to allow surreptitious access by a third party. Furthermore, if Alice turns on the key change warning (a setting in WhatsApp), she can see whenever this happens – and tipping off the target is a big no-no when it comes to surveillance. Is it a vulnerability or bug? No, not really. It’s a design decision that favors convenience (just going ahead and re-sending the messages) over security (forcing Alice to re-verify a recipient every time they get a new device, reinstall WhatsApp, or whatever). You can argue about that decision, but you can’t really argue that it’s a bug – it’s a feature.
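For contrast, here’s what that stricter behavior might look like in the same toy model from above – again, just an illustration, with warn_user() as another made-up stand-in.

```python
# Continuing the toy model above: nothing is re-sent until Alice reviews and
# approves the new key. warn_user() is a made-up stand-in for a UI dialog.

def warn_user(message):
    # Stand-in for a warning dialog in the app's UI.
    print("WARNING:", message)

class SaferAliceClient(AliceClient):
    def on_recipient_key_change(self, new_key):
        # Don't silently trust the new key: hold the pending messages and ask
        # Alice to verify (e.g., compare security codes with Bob) first.
        self.unverified_key = new_key
        warn_user("Bob's security code changed - verify before re-sending")

    def approve_new_key(self):
        # Only after Alice explicitly approves does the re-send happen.
        self.bobs_key = self.unverified_key
        for text in self.pending:
            send(encrypt(text, self.bobs_key))
```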

UPDATE: The EFF has an excellent article on this with a very similar description. However, it also mentions a new effort called Key Transparency by Google which looks promising.

Remove Profit from the Press

So now let’s return to the big picture. Online news sites produce free web content that we consume. But producing that content costs money. In today’s web economy, people just expect to get something for nothing, which makes it almost impossible for sites to rely on a subscription model for revenue – if you ask people to pay, they’ll just go to some other site that’s free. So they turn to the de facto web revenue model: advertising. The more people who view the ads on your web site, the more money you get. And therefore you do whatever you can to get people to CLICK THAT LINK – NOW!! (This is called click bait.) It’s the same influence that corrupted our TV news (“if it bleeds, it leads”).

Some things should just not be profit-driven. News – in particular, investigative journalism – is one of those things. The conflict of interest corrupts the enterprise. TV news used to be a loss leader for networks: you lost money on news with the hopes of building loyalty and keeping the viewers around for the shows that followed.

Maybe that ship has sailed and it’s naive to believe we can return to the days of Walter Cronkite or Edward R. Murrow. So what are we to do? Here are some ideas (some of which came from this excellent article):

  1. Subscribe to local and national newspapers that are doing good work. If you don’t care to receive a physical paper, you can usually get an on-line or digital subscription.
  2. Give money to organizations that produce or support non-profit investigative journalism. You might look at ProPublica, Institute for Non-Profit News, The Investigative Fund, NPR, and PBS. This article also has some good ideas.
  3. Share news responsibly. Do not post sensationalistic news stories on your social media or forward hyper-partisan emails to everyone you know. Don’t spread fake news, and when you see someone else doing this, (respectfully) call them out. Not sure if a story is real? Try checking Snopes.com, Politifact, or FactCheck.org. This article also has some great general advice for spotting fake or exaggerated news.
  4. When you do share news stories, be sure to share the original source whenever possible. This gives credit where credit is due (including ad revenue). If you found a derivative story, you may have to search it for the link to the original source.
  5. Use ad-blockers. This may seem contrary to the above advice, but as I mentioned in this blog, right now the ad networks are being overly aggressive on tracking you around the web and are not policing their ads sufficiently to prevent malware. It’s not safe to blindly accept all ads. You can disable the ad-blocker on individual web sites that you wish to support – just be aware of the risk.


Second interview: IoT

My second interview has posted on the George Orwell 2084 site – this one about the Internet of Things, or IoT. As they say, the “S” in “IoT” stands for security. In this interview, we talk about the impact that these newly-connected “smart” devices are having on our lives, particularly with respect to our overall security – including some simple things we can all do to mitigate the threats. Check it out!

Interview on George Orwell 2084: Gooligan

Check out my radio/podcast interview with David Boron at George Orwell 2084. We talked about the Gooligan malware for Android, which has infected over 1 million Android phones so far and is making lots of money for the hackers.

Look for another interview in the near future about the Internet of Things and how the insecurity of these devices is a major threat.

If you are worried, you can go to Check Point’s Gooligan web site to check. (Check Point is the company that discovered this malware.)

Ditch Yahoo. Use ProtonMail. [updated]

I’ve been a Yahoo Mail user for 19 years. My Yahoo user ID has only 4 characters in it. It’s been my public (read spam) email address since 1997. I’m sure it’s the longest actively-used email account I’ve ever had. But now it’s time for me to move on. You should, too. Here’s why, and how…

How NOT To Handle Security

Yahoo announced recently that there was a massive breach in 2014 of many of its users’ accounts. While initial reports estimated 500 million users were compromised, it could actually be much worse. (If you haven’t changed your Yahoo password in the last two years, you should do so now.)

Password database breaches are going to happen. Security is hard and nothing is ever 100% secure. But we can and should judge a company by how seriously they take their users’ security and how they react when bad things happen.

While we’re pretty sure the breach occurred two years ago, it’s not clear yet that Yahoo knew about it before July of this year. However, Yahoo didn’t tell anyone about it until after the story broke elsewhere, two months later. It’s also been reported that Yahoo execs had a policy of not forcing users to reset passwords after a data breach because they didn’t want to lose customers. It’s also obvious that Yahoo prioritized shiny new features over security and privacy.

The Last Straw

That’s all pretty bad, but it gets worse. In a separate report shortly after this breach was announced, it was revealed that Yahoo allowed and perhaps helped the NSA or FBI to build a real-time email search program for all its customers, enabling mass surveillance in a way that was previously unprecedented.

Either of these scandals alone would be unacceptable, and should give any Yahoo user a valid reason to abandon their services – but taken together, it almost mandates it. This is a clear case where we, as consumers, need to show Yahoo that this is not acceptable, and do it in a way they will understand: close your Yahoo account and move to another service.

Ditch Yahoo

I’m not going to lie… if you actually use your Yahoo account (like I do), this is not going to be fun or easy. But if you really care about your security, and security in general, you need to let Yahoo (and the other service providers) know that you take these horrendous security failures seriously. To do that, you have to hit them where it hurts: money. In your case, that means abandoning their services. Ditching Yahoo will not only make you safer, it will hopefully drive other service providers to improve their own security – which helps everyone.

I would say that you have at least three levels of options here, in increasing order of effectiveness (in terms of protesting Yahoo’s behavior):

  1. Stop using Yahoo email and all its other services
  2. Archive your Yahoo email locally and delete everything from their servers
  3. Delete your Yahoo account entirely

To stop using your Yahoo email, you will need to change everywhere you used your Yahoo email account and migrate to a new email service. LifeHacker has some tips that will help, but read through the rest of this article before choosing your new email provider.

To really rid yourself of Yahoo completely, you also need to abandon all their services: Flickr, Tumblr, fantasy sports, Yahoo Groups, Yahoo Messenger, and any of the dozens of other services.

Your next step is to archive all your old Yahoo email. These emails may contain valuable info that you’ll some day need to find: important correspondence, account setup/recovery info for other web sites, records of purchases, etc. If you’ve used an email application on your computer to access Yahoo (like Outlook or the Mail app on Mac OS), you should already have all your emails downloaded to your computer. But you might also want to consider an email archiving application: Windows users should look at MailStore Home (free); Mac users might look at MailSteward (ranges from free to $99).

Once you’ve safely archived everything, you should delete all your emails from Yahoo’s servers. Why? Well, if nothing else, it should prevent successful hackers from perusing your emails for info they could use against you (identity theft, for example). Assuming Yahoo actually deletes these emails, it may also keep Yahoo (or the government) from digging through that info.

You should reset your Yahoo password to a really strong password (use a password manager like LastPass). I would highly recommend setting up two-factor authentication, as well.

As a final step, you can completely close your Yahoo account. Note that this may not actually delete all your data. Yahoo probably retains the right to save it all. But this is the best you can do.

If you find that you are just too invested in Yahoo to completely abandon your email account (and I’ll admit I may be in that camp), you can set up email forwarding. This will send all of your incoming Yahoo email to a different account. (It’s worth mentioning that it looks like Yahoo tried to disable this feature recently, probably in an effort to prevent the loss of users.)

Use ProtonMail

While Gmail and Outlook are two popular and free email providers, you should take a hard look at newer, more security- and privacy-conscious services. I would personally recommend ProtonMail. They have a nice free tier of service that includes web access and smartphone apps for iPhone and Android. If nothing else, grab your free account now to lock in a good user name before all the good ones are taken. Tell your friends to do the same. Just adding new free users will help the cause, even if the accounts aren’t used much.

But I’d like to ask you to go one step further: I encourage you strongly to sign up for one of their paid tiers of service, even if you don’t need the added features. The only way we’re going to force other service providers to take notice and to drive change is to put our money where our mouths are. Until it becomes clear that people are willing to pay for privacy and security, we’ll be stuck with all the ‘free’ services that are paid for with our personal info and where security is an afterthought.

Update Dec 14 2016:

Yahoo has just announced another breach, this time over 1 billion accounts hacked (maybe more). DITCH YAHOO!!


(This article is adapted from a few of my previous weekly security newsletter articles.)

Our Insecure Democracy

I happen to be a rather political person, but I try to keep my politics out of my work in the security and privacy arena because these issues must transcend politics. Our democracy in many ways depends on some basic level of computer security and personal privacy. In no place is this more obvious than the security and privacy of the voting booth.

With the 2016 US election fast approaching, it’s important to call attention to the sorry state of affairs that is the US voting infrastructure. There are plenty of other problems with the US election system, but there’s hardly anything more fundamental to our democracy than the method by which we vote. (I’ll be focusing on the US election system, but these principles should apply to any democratic voting system.)

At the end of the day, the basic requirements are as follows (adapted from this paper):

  1. Every eligible voter must be able to vote.
  2. A voter may vote (at most) one time.
  3. Each vote is completely secret.
  4. All voting results must be verifiable.

The first requirement may seem obvious, but in this country it’s far from guaranteed. For many reasons, many eligible and willing voters either cannot vote or face serious obstacles to voting: inability to get registered, lack of proper ID, lack of nearby voting sites, lack of transportation, hours-long waits at polling places, inability to get out of work, and so on. Voting should be as effortless as possible. Why do we vote on a Tuesday? We should vote on the weekend (Saturday and Sunday), and people who work weekends should be given as much paid time off as necessary to vote. We should also have early voting and support absentee voting.

The second requirement has become a hot-button political issue in this country, though in reality, in-person voter fraud has been proven again and again to be effectively non-existent. We’ve got this covered, folks. We don’t need voter ID laws and other restrictions – they’re fixes for a problem that doesn’t exist, and they end up preventing way more valid voters from voting than allowing invalid voters to vote (see requirement #1).

Now we get to the meat of the matter, at least in terms of security and privacy. The third requirement is that every vote is completely secret. Most people believe this is about protecting your privacy – and to some extent, this is true. You should always be able to vote your conscience without worrying how your boss, your friends, or your spouse would react. You should be free to tell them or not, to lie or tell the truth – there should be no way for them to know. However, the real reason for a secret ballot is to prevent people from selling their vote and to prevent voter intimidation. If there is no way to prove to someone how you voted, then that vote can’t be verifiably bought or coerced. I think we had this pretty well figured out until smartphones came along. What’s to prevent you from taking a picture of your ballot? Depending on what state you live in, it may be a crime – but as a practical matter, it would be difficult to catch people doing this. However, I’m guessing this isn’t a big problem in our country – at least not yet.

Which brings us to the fourth and final requirement: verifiability. This is really where the current US voting system falls flat. In many states, we have voting systems that are extremely easy to hack and/or impossible to verify. We live in the era of constantly connected smartphones and tablets – a touchscreen voting system just seems like a no-brainer. But many electronic voting systems leave no paper trail – no hard copy of your vote that you can see, touch, and verify, let alone one that the people actually counting and reporting the vote tallies can verify. The electronic records could be compromised, either due to a glitch or malicious tampering, and you probably wouldn’t even know it happened. Regardless of how you enter your vote, every single vote must generate a physical, verifiable record. That may seem wasteful in this digital age, but it’s the only way. There must be some sort of hard copy receipt that the voter can verify and turn in before leaving the polling place. Those hard copy records must be kept 100% safe from tampering – no thefts, no ballot box stuffing, no alterations. And every single election result should include a statistical integrity audit – that is, a sampling of the paper ballots must be manually counted to make sure the paper results match the electronic ones (see the sketch below). If there is any reason to doubt the electronic results, you must be able to do a complete manual recount. That’s the key.
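Here’s a toy illustration of that kind of sampling audit. Real post-election audits (for example, risk-limiting audits) use much more careful statistics to choose the sample size; this just shows the basic comparison of paper ballots against the machine’s records.

```python
# A toy sketch of a post-election sampling audit, not a real audit procedure.
import random

def audit_sample(machine_records, paper_ballots, sample_size=100):
    # machine_records and paper_ballots are hypothetical parallel lists,
    # one recorded choice per ballot.
    ballot_ids = random.sample(range(len(paper_ballots)), sample_size)
    mismatches = [i for i in ballot_ids if machine_records[i] != paper_ballots[i]]
    if mismatches:
        # Any discrepancy in the sample is grounds for a full hand recount.
        print(f"{len(mismatches)} mismatched ballots in sample - escalate to full manual recount")
    else:
        print("Sample matches the electronic results")
    return mismatches
```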

Unfortunately, according to that same MIT paper, we have a hodge-podge of voting systems across the country, many of which have at least some areas that use electronic voting machines (Direct Recording Electronic, or DRE) without a paper trail (Voter Verified Paper Audit Trail, or VVPAT).

[Map: voting systems by state]

This map pretty much says it all to me. It’s time that we adopt national standards for our voting infrastructure. You can leave it up to each state to implement, if you’re a real “states’ rights” type, but honestly I think we should just hand this over to the Federal Election Commission and have a single, rock-solid, professionally-vetted, completely transparent, not-for-profit, non-partisan voting system. Of course, we’d need to revamp the current FEC – give it the budget, independence and expertise it needs to do the job effectively. It should be staffed with non-political commissioners (never elected to office and with no direct party affiliation), and they should be completely free from political and financial influence. This is much easier said than done, but if we can just agree that our democracy is more important than any party or ideology, just long enough to do this, then maybe we can make it happen. Of course, there’s no way any of this will happen before this year’s elections, but we should be able to get this in place for 2018 if we start now.

What can YOU do? As always, get educated and get involved. Write your representatives in Congress and vote for people who have vowed to reform our election and voting systems. If nothing else, give money to organizations that are doing the right things, and ask your friends and family to do the same. I’ve given some examples below for you to consider. Note that it’s very hard to find completely unbiased organizations because these issues have been so politicized and our country right now is very polarized. But whatever your political leanings, you can’t have a true democracy if you can’t have fair, open, and verifiable elections.

If you’re interested, here are a couple more good articles to check out.

UPDATE: Another interesting story on the security of our voting system.

Don’t Reuse Passwords

I’ve been focusing most of my efforts on my new weekly newsletter. But I wanted to make sure some of this info is making it out to my blog, as well. Here’s a little taste of my newsletter. To get this yummy goodness automatically every week, sign up here!


This tip is one of the most basic and yet most important pieces of advice I can give you, and lately it has become critically important. There has been a rash of “mega-breaches” this year – hackers have compromised a large number of high-profile corporate servers to steal tens of millions of account credentials (i.e., email addresses and passwords). In some ways, this isn’t new – it’s been going on for a while now, though it’s been getting worse. It’s also not new that they are taking these credentials and trying to use them on other online accounts. However, the scale of these attacks has markedly increased. This report talks about a recent attack involving 1 million attacking computers against almost half a billion accounts at two large web sites – a “financial” company and a “media/entertainment” company. Another article notes that Carbonite accounts were attacked in a similar way.

What this tells you is that the bad guys know that most people reuse passwords – that is, people use the same password on multiple sites. Passwords are hard to remember – I get it. But if you want to be secure, you need to use unique passwords for every web site – at the very least, for the important ones. These include not just your financial accounts, but your email and social media accounts, and any site that has your credit card information.

How do you do this? YOU don’t… humans are not capable of remembering dozens or even hundreds of unique, strong, random passwords. You need a password manager, like LastPass, 1Password or KeePass. I personally recommend LastPass (see Tip #2 in the Top Five Tips pamphlet I sent you). LastPass offers a Security Challenge (which you can find at the left in your password vault) that will tell you which passwords are weak and highlight any places where you’ve reused the same password. You can also use LastPass to automatically change your password on many sites.

Regardless of how you do it, you need to set aside some time in the very near future to generate unique, secure passwords for your key online accounts. While you’re at it, turn on two-factor authentication wherever you can.
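For the curious, here’s roughly what a password manager is doing when it generates a password for you – a minimal sketch using nothing but Python’s standard library (the site names are just examples).

```python
# A minimal sketch of generating strong, unique, random passwords.
import secrets
import string

def generate_password(length=20):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site; never reuse them. (Site names are examples.)
for site in ["bank.example", "email.example", "social.example"]:
    print(site, generate_password())
```

The point isn’t that you should run this yourself; it’s that truly random 20-character passwords are trivial for software and impossible for humans – which is exactly why you want a password manager doing the remembering.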


Again, sign up here to receive helpful tips like this every week, delivered to your inbox! (And no, I will never give your info to anyone else.)

Apple vs the FBI

I’ve been waiting to comment on this because more information seems to be coming out every day. Also, there has been so much written about this already that I wasn’t sure what I would have to add. But I’m not being hyperbolic when I say that this is a pivotal moment in our democracy, so I couldn’t just ignore it. One thing I haven’t seen is a good summary of what’s really going on here, so let’s start with that.

Just the Facts: What’s Really Going On Here?

First, let’s establish what’s really going on here, because it’s been very muddy. The FBI recovered an iPhone 5c that was used by one of the shooters in the San Bernardino attacks last year. This phone was issued to the shooter by his employer, and therefore was not his private cell phone – meaning that the data on that phone was technically not private. Nevertheless, that data was encrypted by default because that’s how Apple sets up every modern iPhone. The FBI believes there may be information on that iPhone that could help them perhaps find other co-conspirators or maybe uncover clues to some future plot.

This phone was backed up using Apple’s iCloud service, and it’s worth noting that Apple was willing and able to provide the FBI with the backed up data. However, for some reason, the backups to iCloud stopped about 6 weeks before the shooting – so the FBI wants to get to the data on the device itself to see what’s missing. Due to some sort of screw-up, the FBI instructed the local law enforcement to change the user’s iCloud password, which prevented it from doing another backup. If they had taken the device to a Wi-Fi network known to that device, the device might have backed up on its own, and then the FBI would have had the 6 weeks’ worth of data that was missing. But because the password was changed, we’ll never know.

The FBI is not asking Apple to break the encryption on the phone. That’s actually not possible. Encryption works. When done right, it can’t be broken. However, if you can unlock the device, then you can get to all the data on it. Unlocking the device means entering a PIN or password on the lock screen – it could be as simple as a 4-digit number, meaning there are only 10,000 possible codes. With a computer-assisted “guesser”, it would be trivial to go through all 10,000 options until you found the right one to unlock the phone.

To combat this “brute force” attack, Apple added some roadblocks. First, it restricted how often you could try a new number – taking progressively longer between guesses, from minutes up to a full hour. That would make guessing even 10,000 options take a very long time. Second, Apple gave the user the option to completely erase the device if someone enters an incorrect password ten times in a row. This feature is not enabled by default, but it is easy to turn on (and I highly recommend that everyone do this).
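A little back-of-the-envelope arithmetic shows why those two roadblocks matter so much. The guessing rate and delay figures below are illustrative, not Apple’s exact numbers.

```python
# Back-of-the-envelope arithmetic for the 4-digit PIN scenario described above.
combinations = 10_000  # a 4-digit PIN: 0000 through 9999

# With no rate limiting, an electronic "guesser" trying ~100 PINs per second
# would exhaust every combination in under two minutes.
print(combinations / 100, "seconds with no rate limiting")               # 100.0 seconds

# With escalating delays averaging, say, 30 minutes per guess once the
# lockouts kick in, the same exhaustive search takes months.
avg_delay = 30 * 60  # seconds
print(combinations * avg_delay / 86_400, "days with escalating delays")  # ~208 days

# And with "erase after 10 wrong guesses" enabled, a blind brute-force attempt
# has only a 10-in-10,000 (0.1%) chance of hitting the PIN before the wipe.
print(10 / combinations * 100, "percent chance of success before wipe")  # 0.1
```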

The FBI is basically asking Apple to create a new, custom version of its iPhone operating system (iOS) that disables these two features and allows a connected computer to input its guesses electronically (so that a human wouldn’t have to try them all by hand). This would allow the FBI to quickly and automatically guess all possible PIN codes until the phone was unlocked. It’s not breaking the encryption; it’s allowing the FBI to “brute force” the password/PIN guessing. It’s not cracking the safe, it’s letting a robot quickly try every possible safe combination until it opens.

That’s just a thumbnail sketch, but I felt it was necessary background. This article from the EFF goes into a lot more depth and answers some excellent questions. If you’d like to know more, I encourage you to read it.

Why Is This Case So Important?

Both the FBI and Apple are putting heavy spin on this issue. The FBI has always disliked Apple’s stance on customer privacy (encrypting iPhone data by default) and picked this terrorism case to invoke maximum public sympathy. Apple is using this opportunity to extol its commitment to protecting its customers’ private data, particularly as compared to Android (which does not encrypt data by default). Despite what the FBI claims, this is not about a single iPhone and a single case; despite what Apple claims, creating this software is not really comparable to creating “software cancer”. We have to try to set all of that aside and look at the bigger picture. This is not a black and white situation – these situations rarely are. However, the implications are enormous and the precedent set here will have far-reaching effects on our society.

In this country, we have the Fourth Amendment, which prevents unreasonable search and seizure and basically says that you need a warrant from a judge if you want to breach our right to privacy. In this case, the FBI has done its job in this regard. And it’s technically feasible for Apple to create a special, one-time version of its iOS that would allow the FBI to unlock this one iPhone – and this special software would not run on any other iPhone. This is due to a process called “signing”, which is another wonderful application of cryptographic techniques. So in this sense, it’s not a cancer – this special software load can’t be used on other devices. However, if Apple does this once, it can do it again, and there are already many other iPhones waiting at the FBI and in New York that will be next in line for this treatment. There is no doubt that this will set a precedent and open the floodgates for more such requests – not just from US law enforcement, but from repressive regimes around the globe. Furthermore, the very existence of such a tool, even though guarded heavily within Apple’s walls, will be a massive target for spy agencies and hackers around the globe.

So the issue is much deeper than simply satisfying a valid warrant (even without all the arcane All Writs Act stuff from the late 1700s that the FBI claims should compel Apple – a third party – to help them satisfy this warrant). The outcome of this case will have severe implications for privacy in general – and that’s why Apple is fighting back.

My Two Cents

I’ve read a lot of good articles on this issue, and I’ll point you to a couple of them shortly. But the bottom line is that we, as a society, need to figure out how we handle privacy in the age of digital communications and ubiquitous monitoring. Like it or not, you are surrounded by cameras and microphones, and it’s getting worse rapidly. You carry with you a single device that can simultaneously record video and audio, track your position anywhere on the planet, track many of your friends and family, record your physical movement, and store your personal health and financial data, as well as untold amounts of other personal information. That device is your smartphone. That one device probably has more information about you than any other single thing you own. Beyond that, all of our communications are now digital and can therefore be perfectly preserved forever. And in the grand scheme of things, any person or group of people that can gain surreptitious access to this information – regardless of their intentions – will have unimaginable power over us. This was not envisioned by the Founding Fathers – we’re in new territory here.

It’s long past time that we have an informed, open and frank discussion – as a nation – about how we balance the need for basic human privacy against the need for discovery in the pursuit of safety. It’s also about targeted surveillance versus mass surveillance, and creating an open, transparent system of checks and balances to govern both. If nothing else, I hope this case leads to a more informed public and some rational, thoughtful debate about the broader issues here – not just this one, highly-emotional case.

As promised, here are some links with some excellent info and perspectives around these topics:

Here are some links to more general but related topics:


On the Ethics of Ad-Blocking

As the saying goes, if you’re not paying for the product, then you are the product. The business model for most of the Internet revolves around advertising – which in and of itself is not a bad thing. It may be an annoying thing, but passive advertising isn’t actually harmful. Passive advertising is placing ads where people can see them. And savvy marketers will place their ads in places where their target audiences tend to spend their time. If you’re targeting middle-aged men, you might buy ad space on fantasy football or NASCAR web sites, for example. If you’re targeting tween girls, you might buy ad space on any site that might feature something about Taylor Swift or Justin Bieber. And if it stopped there, I don’t think many of us would object – or at least have solid grounds for objection. After all, this advertising is paying for the content we’re consuming. Producing the content costs money – so someone has to pay for it or the content goes away.

Unfortunately, online marketing didn’t stop there. On the web, competition for your limited attention has gotten fierce – with multiple ads on a single page, marketers need you to somehow focus on their ad over the others. And being on the Internet (and not a printed page), advertisers are able to do a lot more to grab your attention. Instead of simple pictures, ads can pop up, pop under, flash, move around, or float over the articles you’re trying to read. Worse yet, ad companies want to be able to prove to their customers that they were reaching the right people and that those people were buying their product – because this makes their ad services far more valuable, meaning they can charge more for the ads.

Enter the era of “active advertising”. It has now become very hard to avoid or ignore web page and mobile ads. Worse yet, the code that displays those ads is tracking where you go and what you buy, building up profiles on you and selling those profiles to marketers without your consent (and without most people even realizing it). Furthermore, those ads use precious data on cell phones and take a lot of extra time to download regardless of what type of device you use. And if that weren’t bad enough, ad software has become so powerful, and ad networks so ubiquitous and so commoditized, that bad guys are now using ad networks to distribute “malware” (bad software, like viruses). It’s even spawned a new term: malvertising.

Over the years, browsers have given users the tools they need to tame some of these abuses, either directly in the browser or via add-ons. It’s been a cat-and-mouse game: when users find a way to avoid one tactic, advertisers switch to a new one. The most recent tool in this toolbox is the ad-blocker. These plugins allow the user to completely block most web ads. Unfortunately, there’s really no way for ad blockers to sort out “good” advertising from “bad” advertising. AdBlock Plus (one of the most popular ad-blockers) has attempted to address this with their acceptable ads policy, but it’s still not perfect.

But many web content providers need that ad revenue to stay afloat. Last week, Wired Magazine announced that they will begin to block people who use ad-blockers on their web site. You will either need to add Wired.com to your “whitelist” (allowing them to show you ads) or pay them $1 per week. They state clearly that they need that ad revenue to provide their content, and so they need to make sure that if you’re going to consume that content, you are paying for it – either directly ($1/week) or indirectly (via ad revenue).

So… what’s the answer here? As always, it’s not black and white. Below is my personal opinion, as things stand right now.

I fully understand that web sites need revenue to pay their bills. However, the business model they have chosen is ad-supported content, and unfortunately the ad industry has gotten over-zealous in the competition for eyeballs. In the process of seeking to make more money and differentiate their services, they’re killing the golden goose. Given the abusive and annoying advertising practices, the relentless and surreptitious tracking of our web habits, the buying and selling of our profiles without our consent, and the lax policing that allows malware into ads, I believe that the ad industry only has itself to blame here. We have every reason to mistrust them and every right to protect ourselves. Therefore, I think that people are fully justified in the use of ad-blockers.

That said, Wired (and other web sites) also have the right to refuse to let us see their content if we refuse to either view their ads or pay them money. However, I think in the end they will find that people will just stop coming to their web sites if they do this. (It’s worth noting that some sites do well with voluntary donations, like Wikipedia.) Therefore, something has to change here. Ideally, the ad industry will realize that they’ve gone too far, that they must stop tracking our online pursuits and stop trafficking in highly personal information without our consent.

The bottom line is that the ad industry has itself to blame here. They’ve alienated users and they’re going to kill the business model for most of the Internet. They must earn back our trust, and that won’t be easy. Until they do, I think it’s perfectly ethical (and frankly safer) to use ad-blocking and anti-tracking tools.

Below are some of my favorite plugins. Each browser has a different method for finding and installing add-ons. You can find help here: Firefox, Safari, Internet Explorer, Chrome.

  • uBlock Origin – ad-blocker
  • Privacy Badger – anti-tracking plugin
  • HTTPS Everywhere – forces secure connections whenever possible
  • Better Privacy – another privacy plugin, slightly different from Privacy Badger

If you would like to get more involved, you might consider contributing to the Electronic Frontier Foundation.


Gone Phishin’ (LostPass)

LastPass is the password manager I recommend in my book and to anyone who asks. While there are a handful of good products like it, to me LastPass has a rock-solid security story and all the features anyone could want.

You may have heard last week about a threat to LastPass called “LostPass” on the news. Well, actually, you probably didn’t – the mainstream press doesn’t cover this stuff much. But I’m going to cover it anyway because it demonstrates one of the most troublesome security problems we have today: phishing. Unfortunately, this has nothing to do with a rod and a reel and whistling the theme to The Andy Griffith Show. Phishing is a technique used by scammers to get sensitive information from people by pretending to be someone else – usually via email or a web page (or both). Basically, they trick you into thinking you’re dealing with your bank, a popular web site (PayPal, eBay, Amazon, etc), or even the government. Sometimes they entice you with good stuff (winning a prize, free stuff, special opportunity) and sometimes scare you with bad stuff (freezing your account, reporting you to some authority, or telling you that your account has been hacked). But in all cases, they try to compel you to give up information like passwords or credit card numbers.

Unfortunately, it’s extremely easy to create exact duplicates of web pages. There’s just no real way to identify a fake by looking at it. Sometimes you can tell by looking at the web site’s address, but scammers are very good at finding plausible web site names that look very much like the real one they’re impersonating.
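Here’s a tiny illustration of why eyeballing the address is harder than it sounds. The lookalike domains below are made up, but they show the two most common tricks: putting the real name in front of an attacker-owned domain, and swapping in similar-looking characters.

```python
# A small sketch of checking a URL's hostname against the domain you expect.
from urllib.parse import urlparse

REAL_DOMAIN = "lastpass.com"

for url in [
    "https://lastpass.com/login",                         # the real thing
    "https://lastpass.com.account-verify.example/login",  # real name, wrong domain
    "https://Iastpass.com/login",                         # capital "I", not lowercase "l"
]:
    host = urlparse(url).hostname
    legit = host == REAL_DOMAIN or host.endswith("." + REAL_DOMAIN)
    print(host, "-> looks legitimate" if legit else "-> NOT the real site")
```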

In the case of “LostPass”, a researcher demonstrated that he could act as a “man in the middle” to steal your LastPass login and password – even if you use two-factor authentication (which I strongly recommend). LastPass is a browser plugin that watches what web pages you’re on, and when it detects a login web form, it offers to automatically fill in your ID and password. This researcher was able to create a malicious web page that could log you out of LastPass and then pop up a dialog asking you to log back in – but not the real LastPass dialog! Instead, it was a fake. So you would enter your email address and password, and it would store this juicy info. It now had the keys to the kingdom: access to your entire LastPass vault! All your passwords, secure notes, credit cards!

To keep you from getting suspicious, it would actually then turn around and use the email and password you gave it to actually log into LastPass. This is the “man in the middle” part – to you, it pretends to be LastPass; to LastPass, it pretends to be you. If you had two-factor authentication turned on, it still did you no good. LastPass would tell the malware to prompt the user for the two-factor auth token, and the malware would turn around and ask you for that token – again, placing itself in the middle.

So, should you be worried? Should you abandon LastPass? Yes and no, in that order. Phishing is a problem for ALL browser-based plugins, including those of other password managers. Phishing is a major problem for everyone who uses email or web browsers. So in that sense, you should be worried about it. In a minute, I’ll give you some tips for protecting yourself.

This researcher picked on LastPass because he perceived it to be the top dog in password managers, not because it was the only one susceptible to this attack. He also contacted LastPass well before he released his research (as any good security researcher would do), and LastPass was able to patch their software before he even announced this problem. That is – if you’re using LastPass, you’re already safe from LostPass, as long as you’re updating your browser plugin. LastPass reacted properly to this, from what I can see, and has mitigated this particular risk (and ones like it). LastPass, in my experience, has taken all security concerns very seriously and is constantly updating its software to react to even potential risks. So I feel very comfortable continuing to recommend it.

How can you protect yourself from phishing attacks? Here are some tips:

  1. Never give out sensitive info in email. This includes credit card numbers, passwords, and social security numbers. Any reputable organization will never ask for this via email.
  2. Don’t click on links or buttons in emails. If the email is fake, they will take you to a fake or malicious web site – one that may look exactly like the real one. Instead of clicking the links provided, just go to the site “manually” – that is, type in the main web site address by hand (or Google it) and then log into your account or whatever from there.
  3. Don’t fall for scare tactics. It’s common for scammers to tell you that something bad will happen or has already happened, and you MUST click here NOW to fix it. For example, your PayPal account has been frozen and you need to click this link now to log in and set things straight. Instead, go to your web browser, type in “paypal.com” and log in. If there’s really a problem, you’ll see it immediately when you log in.
  4. Help protect others by using strong, unique passwords on your email accounts. Hackers love to get into your email account and your email address book to send emails to everyone you know. These people (presumably) trust you and will be more likely to click on bad links.
  5. Use LastPass and keep your browser plugin up to date. They are adding new features to help prevent phishing attacks.