How to Mitigate the Risks of AI

Artificial Intelligence is the buzzword of the day. Since the launch of ChatGPT in November 2022, there has been a flood of AI-based tools and services. Many tech firms are racing to build AI into their products without considering the consequences, let alone taking the time to build in guardrails for privacy and security.

AI Does Have Value

Before I go any further, I’d like to stipulate a couple of things. First, artificial intelligence technology is just another tool. Like any tool, it can and will be used for both good and bad purposes, and everything in between. AI is a very powerful tool, and we’ve barely scratched the surface of its capabilities and applications. Is it over-hyped? Of course it is – just like cryptocurrency was. But in some ways, it may be under-hyped. Look past the hype and focus on the technology itself. I think it’s going to be consequential, likely in ways we haven’t yet foreseen.

Second, we shouldn’t shun AI – in fact, I’ll argue below that we should embrace it. However, I do believe we should be careful where and how we use it. AI features should be transparent and obvious: we should know when, where and how AI is being used. We should also put guardrails on AI features where possible to prevent obvious abuses. But we’ll never be able to prevent all harm and misuse.

So, let’s dig into how these AI tools work, where the dangers lie, and what we can do to mitigate them.

AI in the Cloud

Because of their heavy computational requirements, many artificial intelligence tools do their processing in the cloud. That is, the work is done on the company’s servers, not on your device. That means the data they’re chewing on (like the prompt text you give to ChatGPT) must also be sent to the cloud. Many companies are blocking employee access to these web-based AI tools for fear of workers disclosing proprietary or other sensitive information. It’s not just a privacy concern – it potentially opens the company up to legal compliance issues.

So be aware that whenever you’re using an AI tool – even software running on your smartphone or computer – the images or text prompts you give them may be uploaded to the cloud. Given our nearly non-existent data privacy laws in the US, you should assume that this data may be saved, shared and sold.
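If you’re curious what that actually looks like under the hood, here’s a minimal sketch in Python of how an app might ship your prompt off to a cloud AI service. I’m using OpenAI’s public chat API as a stand-in here – most cloud AI services follow the same basic pattern.

# A minimal sketch of how an app sends your prompt to a cloud AI service.
# The endpoint and payload follow OpenAI's public chat completions API;
# most cloud AI services work much the same way.
# Requires: pip install requests

import requests

API_KEY = "sk-..."  # the app's secret key (placeholder)

prompt = "Summarize this contract: <something sensitive pasted here>"

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": prompt}],
    },
    timeout=30,
)

# At this point, your prompt (and anything sensitive in it) has already
# left your device and landed on the provider's servers.
print(response.json()["choices"][0]["message"]["content"])

The takeaway: everything you type into the prompt travels over the internet to the provider, where their terms of service – not your wishes – govern what happens to it next.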

New AI Features in Existing Products

While sharing personal data with new AI tools like ChatGPT is concerning, I’m much more worried about the rapid deployment of AI features into existing products. Companies are touting their fancy new AI features and people will be very tempted to enable them – some may even be enabled by default. And again, these new features can be extremely helpful, allowing us to do things we could never do before (or at least making them much easier to do).

But I’m worried that it will be unclear when these features may rely on cloud-based processing, which could potentially share data where none was shared previously. Oh, I’m sure there’s new verbiage buried in the terms of service. You probably got an email about this that you never read or a new pop-up message that you just clicked away.

Training AI on Your Existing Data

But wait, it gets worse. Companies we already have relationships with – particularly social media platforms – already have staggering amounts of our personal data. And the number one way to improve AI tools is to feed them more data. Furthermore, if you want to differentiate your AI tools from everyone else’s, you need to feed them data that only you have access to.

Facebook has already declared that they are using their treasure trove of user data to train their AI systems. Google, Dropbox, Zoom, Tumblr, WordPress and Reddit are, too. (Is Microsoft? Probably.) I would expect most companies to follow suit. AI is the next tech gold rush, and public companies are under enormous pressure to cash in however they can. Sadly, these companies probably already have all the rights they need to use your data for AI training. But to be sure, many of them recently updated their terms of service to allow for it. There’s been limited backlash, but nothing short of strong regulation will stop this.

What Can I Do?

For new AI tools like ChatGPT and Stable Diffusion, you can simply avoid using them. But I actually recommend you do the opposite. AI is not a passing fad, despite the ridiculous hype. This technology is powerful and game-changing, and therefore I think it’s important that we make some effort to understand it, even embrace it. Playing with free versions of these tools will show us what they can and can’t do. We can see the quirks and limitations, too. For example, see how chatbots can give nonsensical or even patently false answers with complete confidence (a phenomenon referred to as hallucination).

So, keeping the above privacy concerns in mind, play around with some of these AI tools – just don’t give them personal information. The free versions of ChatGPT and Stable Diffusion, mentioned above, are good places to start.

As for limiting the use of your personal data with existing products and services, that’s going to be harder – particularly outside the EU’s GDPR protections. Many of these companies already have your data and the right to use it. Some companies, like Facebook and Dropbox, have ways for you to request that they not use your data for AI training, and by all means, you should avail yourself of any opportunity to do that. But it’s unclear whether they will honor that request or are under any obligation to do so. Maybe it’s time to consider quitting social media platforms that don’t respect your privacy. When you do, ask them to delete your data, too. (You might want to ask for a copy of your data first.)

Future AI Privacy Solutions

Note that computers and smartphones are getting more powerful every year, particularly with regard to local AI processing. Computer chip designers like Apple, Intel and AMD are building powerful “neural engines” into their CPUs. This will allow at least some AI processing to occur locally, right on the device. While this won’t guarantee that your data won’t be shared or monetized, it at least allows for this possibility. Apple already touts this as a privacy feature.
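For contrast, here’s a minimal sketch of what fully local processing looks like today. It uses Hugging Face’s transformers library and a small open model – my choice for illustration, not something the chip makers above ship – to generate text entirely on your own hardware:

# A minimal sketch of fully local AI inference, using Hugging Face's
# transformers library and a small open model (chosen for illustration).
# Requires: pip install transformers torch

from transformers import pipeline

# The model weights are downloaded once; after that, everything runs
# on your own device.
generator = pipeline("text-generation", model="distilgpt2")

result = generator("Local AI processing means your prompts", max_new_tokens=30)

# Neither the prompt nor the output ever touches a cloud server.
print(result[0]["generated_text"])

A model this small runs on an ordinary laptop. As those “neural engines” get faster, more of this kind of on-device processing becomes practical – which means more of your data can stay put.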

The bottom line is that we need strong data privacy protections codified into law. In the US, it’s a big election year – contact your current representatives. Attend town halls and ask candidates their stance on privacy protections. Vote accordingly. Also, support groups that fight for your privacy.
