It’s long past time that companies and customers adopted a much more pessimistic security and privacy stance. Data is not the new oil; it’s the new toxic waste. And while we need to get better at trying to contain it, it still finds ways to leak. So we need to assume that it will leak and prepare accordingly. We need to adopt a trust-no-one stance on security and privacy.
Leaks, Breaches & Hacks – Oh My!
In the last two weeks alone, we’ve had two massive data leaks: Facebook and LinkedIn each saw data on over half a billion users exposed. Notice that I didn’t call them “breaches”, because technically they weren’t. The data wasn’t “stolen”; it was “scraped” from public web pages. The claim from Facebook (and, I believe, LinkedIn as well) is that the information was automatically downloaded from public user profiles in a way that violated the terms of service, but wasn’t a true security hack. Note that this also means neither company felt compelled to inform the affected users of the leak.
Of course, there have also been several actual data breaches, as well as horrendous corporate hacks like SolarWinds and ProxyLogon. And sadly, there have been very few real consequences for any of these beyond damaged reputations. We just don’t have good laws in place for this stuff (yet). There are patchwork state-level notification laws, but even those have big loopholes. In the US, at least, the threat of class-action litigation and punitive damages can prod some companies to do better, but currently you have to prove actual financial harm in most cases, which is nigh impossible for a data breach.
Personal Data is Toxic Waste
Computer systems, and the personal data they hold, need security and privacy built in by default. This includes not just laptops and servers, but all “smart” devices. Any device that runs software and connects to a network must be capable of securely installing software updates, and this process must be automatic and enabled by default. I have literally dozens of IoT devices in my house right now. There’s no practical way for me to monitor and manually update each one. You may not have that many yet, but you will. (As a quick pro tip, you can at least segregate all these Internet of Things (IoT) devices on your guest network.)
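To make “securely installing updates” concrete, here’s a minimal sketch of the two checks any automatic updater has to perform: is the offered version actually newer, and does the downloaded image match what the vendor published? The manifest format and all names here are invented for illustration; no real vendor API is implied.

```python
import hashlib

# Hypothetical update check: the manifest layout and function names are
# illustrative only, not any vendor's real API.

def is_newer(installed: str, available: str) -> bool:
    """Compare dotted version strings numerically, so '1.9.2' < '1.10.0'."""
    return tuple(map(int, available.split("."))) > tuple(map(int, installed.split(".")))

def verify_image(image: bytes, expected_sha256: str) -> bool:
    """Check a downloaded firmware image against the hash published in the
    vendor's (presumably signed) manifest before installing it."""
    return hashlib.sha256(image).hexdigest() == expected_sha256

def should_install(installed: str, manifest: dict, image: bytes) -> bool:
    """Install only if the manifest offers a newer version AND the image
    matches its advertised hash."""
    return is_newer(installed, manifest["version"]) and verify_image(image, manifest["sha256"])
```

A real updater also verifies the manifest’s signature against a vendor key baked into the device; the hash check alone protects against corruption, not a malicious update server.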
Companies need to treat personal data (including inferences drawn from it, which may also be personal) as toxic waste. It’s a liability, not an asset. They should collect as little as possible: only what is absolutely necessary to perform the intended function (for you, not for them).
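As a toy illustration of that minimization principle (the field names are made up), collection code can enforce an allowlist so that anything the feature doesn’t need is dropped before it ever touches storage:

```python
# Hypothetical allowlist: only the fields this feature actually needs.
REQUIRED_FIELDS = {"email", "display_name"}

def minimize(submitted: dict) -> dict:
    """Drop every field not on the allowlist before storing the record --
    data you never collect can never leak."""
    return {k: v for k, v in submitted.items() if k in REQUIRED_FIELDS}
```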
Trust No One
But even that isn’t enough. We need to adopt a new mode of security: trust no one. That doesn’t mean that we assume that all people are evil or even morally weak. It means that we must assume that any person – and any computer – can make mistakes or be compromised. A good employee can go rogue or be blackmailed into doing something they wouldn’t normally do. Companies need to compartmentalize employee access to customer data based on data type and employee role. They should even limit access by time and location. One of the worst LinkedIn breaches occurred when a Russian hacker used a LinkedIn employee’s account to access data from Russia. The system should have flagged this as fishy.
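A compartmentalized access check might combine all of those signals: role, data type, location, and time. The sketch below is a toy policy, with every role, table, and threshold invented for illustration:

```python
from dataclasses import dataclass

# Toy policy tables -- illustrative values, not any real product's rules.
ROLE_DATA = {
    "support": {"contact_info"},
    "billing": {"contact_info", "payment"},
}
ALLOWED_COUNTRIES = {"US", "CA"}
BUSINESS_HOURS = range(8, 18)  # 8am-6pm, local office time

@dataclass
class AccessRequest:
    role: str
    data_type: str
    country: str
    hour: int

def is_suspicious(req: AccessRequest) -> bool:
    """Flag any request outside the employee's role, an approved
    location, or normal working hours."""
    return (
        req.data_type not in ROLE_DATA.get(req.role, set())
        or req.country not in ALLOWED_COUNTRIES
        or req.hour not in BUSINESS_HOURS
    )
```

An off-hours login from an unexpected country would trip multiple checks at once; a real system would route that request to review instead of silently serving the data.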
Companies also need to treat their own equipment as suspect. Computers and network-connected devices can be compromised, too. Just because a device is operated by employees and sits inside the corporate network doesn’t mean you should trust it. Target was hacked via its HVAC system, and a casino in Vegas was hacked through a fish-tank thermometer.
Steps You Can Take Now
However, unless you happen to be the CISO or IT security chief at a company, none of that is under your control. There are things you can do, though – as a consumer and as a citizen. The only data that can’t be stolen or abused is data that doesn’t exist. Minimize the data you give away, delete what you can, and obfuscate the rest. Some specific ideas…
- If you’re forced to put personal info into a form, try lying. Do they really need your actual birthday? Give them a date that’s close enough. Maybe they only really need to know that you’re not a minor, for example. Do they need your actual photo? Probably not. So use a computer-generated face instead. For security questions, give wrong answers. One note, though: record all these fake answers somewhere, like in your password manager. You may need to repeat them if you ever contact support.
- Delete your data from any accounts you don’t need or no longer use. There are many helpful tutorials for deleting your Facebook and Google data, but think about other accounts, too. (You might want to download all that data before you delete it, so you have a copy.)
- If you still need an account, consider altering your personal data or removing anything that’s not strictly needed. Hopefully this will cause the old info to eventually be deleted internally.
- Stop using “Sign in with Google” or “Sign in with Facebook”. You’re creating a data-sharing contract. Just go ahead and create a new account ID and password, using a password manager. (Note that “Sign in with Apple” is explicitly designed with privacy in mind.)
- Stop oversharing, particularly when it comes to other people’s data. Don’t tag someone in a photo or even upload a photo with someone else in it without their express permission. Be careful how you wish other people a happy birthday or happy anniversary. Don’t share travel plans of others. Don’t ever agree to share your contact list with an app or service that doesn’t absolutely need it. Be aware that images from smartphones often contain GPS coordinates embedded in the image.
- Demand security and privacy regulations from your elected representatives. The reason you can eat food, take medications, drive your car and fly on an airplane without worrying is because we have regulations and regulators to enforce them. We need the same for our data and our devices.
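On the point above about GPS data embedded in photos: those coordinates live in a JPEG’s APP1 (Exif) segment, and removing that segment removes them. Here’s a rough stdlib-only sketch of the idea; dedicated tools like exiftool are far more robust and handle formats beyond baseline JPEG.

```python
import struct

def strip_exif(jpeg: bytes) -> bytes:
    """Remove APP1 (Exif) segments -- which can embed GPS coordinates --
    from a JPEG byte stream. A simplified sketch, not a complete parser."""
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            out += jpeg[i:]  # malformed or unexpected data; copy the rest
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows, copy verbatim
            out += jpeg[i:]
            break
        (length,) = struct.unpack(">H", jpeg[i + 2 : i + 4])
        segment = jpeg[i : i + 2 + length]
        payload = segment[4:]
        # Drop APP1 segments that carry Exif metadata (including GPS tags)
        if not (marker == 0xE1 and payload.startswith(b"Exif\x00\x00")):
            out += segment
        i += 2 + length
    return bytes(out)
```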