Picture this: It’s 2 am, you’re on your fifth coffee, and you’re up against a deadline that seemed impossible two hours ago. Suddenly, a magical AI genie (like ChatGPT) appears, ready to help you finish that report, presentation, or spreadsheet. What a lifesaver! But wait – are you sharing too much with this digital genie? (Hint: Yes.)
The Curious Case of Accidental Leaks
AI systems like ChatGPT are constantly learning from user inputs. That makes them smarter, which is good, but it also means that sensitive information – think credentials, proprietary code, business figures, trade secrets – can unintentionally be fed into the system. Once a model trained on that data is redeployed, adversaries may be able to mine it back out with carefully crafted prompts.
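To make that concrete, here's a minimal sketch of the kind of scan a security team might run over outgoing prompts. The patterns and the `looks_sensitive` helper are purely illustrative, not a real detector – proper data-loss-prevention tooling goes far beyond a handful of regexes:

```python
import re

# Illustrative patterns only -- real DLP tools go far beyond a few regexes.
SENSITIVE_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password assignment": re.compile(r"(?i)\b(?:password|passwd|pwd)\s*[:=]\s*\S+"),
}

def looks_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outgoing prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

print(looks_sensitive("Fix this config: password=hunter2, region=us-east-1"))
# -> ['password assignment']
```

Even a toy scan like this shows how much recognizable secret material a hurried 2 am prompt can carry.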
The Untrainable AI Conundrum
Now, you might think, “Why not just un-train the AI to forget this sensitive info?” Well, it’s not that simple. Un-training an AI model is like trying to unscramble an egg – nearly impossible. Worse, un-training only helps if you catch the leak and scrub the model before anyone mines the data out of it.
An Ounce of Prevention
Users are our first line of defense, and awareness is the most cost-effective tool for preventing data leakage.
We need to show them the ropes: what’s safe to share with our friendly AI genies and what’s not, ideally through examples they can relate to, e.g. drawn from the data they actually work with, or common mistakes someone in their shoes might make.
Awareness workshops (don’t forget the pizza!) and interactive trainings are steps in the right direction.
However, as much as I believe in the importance of training and awareness, I know from experience that we can’t expect our caffeine-fueled, deadline-chasing users to be on their A-game all the time.
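Which is exactly why a little automation helps. As a thought experiment, here's a minimal sketch of a redaction layer that could sit between users and the AI service, masking the most obvious secrets before a prompt ever leaves the building. Again, the patterns are illustrative stand-ins for a proper DLP tool:

```python
import re

# Illustrative redaction rules -- stand-ins for a proper DLP tool.
REDACTIONS = [
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[REDACTED_AWS_KEY]"),
    (re.compile(r"(?i)\b(password|passwd|pwd)(\s*[:=]\s*)\S+"), r"\1\2[REDACTED]"),
]

def redact(prompt: str) -> str:
    """Mask obvious secrets before the prompt is sent to an external AI service."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Deploy with password=hunter2 and key AKIAABCDEFGHIJKLMNOP"))
# -> Deploy with password=[REDACTED] and key [REDACTED_AWS_KEY]
```

The point isn't the regexes; it's that the safety net catches the slip even when the user doesn't.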
The Quest for Untrainable AI Solutions
While we’re empowering our users, let’s also work on making AI models easier to “un-train”, or at least on mitigating the risks. This might mean collaborating with AI developers and researchers on new ways of scrubbing sensitive data from AI systems. It’s a challenge, but hey, who doesn’t love a good puzzle? Least of all the folks who build systems this complex.
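One concrete direction from the research world is exact unlearning via sharding (sometimes called SISA training in the machine-unlearning literature): train separate sub-models on disjoint data shards, so that forgetting a record only means retraining the one shard that contained it. Here's a toy sketch of the bookkeeping, with `train_model` as a hypothetical stand-in for any real training routine:

```python
# Toy sketch of shard-based "exact unlearning" bookkeeping.
# train_model() is a hypothetical stand-in for any real training routine.
def train_model(records):
    return {"trained_on": list(records)}  # placeholder "model"

class ShardedEnsemble:
    def __init__(self, records, num_shards=4):
        # Split the data into disjoint shards, each with its own sub-model.
        self.shards = [records[i::num_shards] for i in range(num_shards)]
        self.models = [train_model(shard) for shard in self.shards]

    def unlearn(self, record):
        # Forgetting a record costs one shard's retraining, not the whole model.
        for i, shard in enumerate(self.shards):
            if record in shard:
                shard.remove(record)
                self.models[i] = train_model(shard)
                return True
        return False

ensemble = ShardedEnsemble(["doc1", "leaked_password", "doc2", "doc3"])
ensemble.unlearn("leaked_password")  # retrains only the affected shard
```

In the real setup, predictions from the sub-models are aggregated at inference time; the trade-off is some accuracy in exchange for a forgetting operation that doesn’t mean retraining everything from scratch.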
In the end, as with everything else, it’s all about striking a balance between using AI to make our lives easier and keeping sensitive information safe.
And to do that, we can start by training our users and by making models easier to un-train.