Is Your Business Data Safe? What You Need to Know About AI Data Privacy
A client told me she had been sitting at her laptop the previous evening, staring at her inbox, when a question occurred to her: “Why would someone pay to use Gemini inside Google Workspace when you can just use the free version online?”
My answer was three words: “AI data privacy”.
It is a fair question though, and most people have never thought to ask it. What actually happens to the information you put into these tools, and should you be mildly to moderately concerned about it?
Two products, two sets of AI data privacy rules
If you are using Gemini inside Google Workspace, which means inside Gmail, Google Docs, Google Drive and the rest of the suite, you are covered by enterprise-grade data protection. Google’s own documentation is pretty clear on this:
“Your content is not human reviewed or used for Generative AI model training outside your domain without permission.”
– Google Workspace Privacy Hub
That is what the higher subscription cost gets you. The data governance is built for business use, and it reflects that.
When you go to gemini.google.com you are operating under completely different terms. Google states that conversations can be reviewed by human trainers to improve their products and services. Your inputs can be used to improve their models. There is no enterprise data boundary, so if you paste in a client’s personal details, a contract, or financial information, you have limited protections.
I spent two years training large language models for some of the world’s biggest LLM companies, including Google, Anthropic and OpenAI. So when I say actual humans are looking at the data you share, I am not speculating. That is what AI trainers do. The likelihood of any one specific conversation being reviewed is low, but the point is that the terms permit it, and you would have no way of knowing.
Why AI data privacy matters for your business
Here is a number worth pondering. According to Security Magazine, only 10% of companies have a comprehensive, formal AI policy in place. More than one in four say no policy exists in their workplace, nor is there any plan for one. Which means the overwhelming majority of businesses are making this up as they go, including, quite possibly, decisions about what goes into these tools.
That matters because personally identifiable information, or ‘PII’ as the tech companies refer to it, has a legal definition. Under GDPR, which applies to UK businesses, you have obligations around how you handle data relating to identifiable individuals. Pasting a client’s name, address, email, or financial details into a consumer AI tool and having that data potentially used for model training or reviewed by a third party is not a straightforward situation from a compliance perspective.
This does not mean consumer AI tools are dangerous, or that you should avoid them. For many tasks they are perfectly appropriate, such as writing a social media post, brainstorming ideas, or drafting something that contains no sensitive data. The question is whether you are making a conscious choice about what goes in, or just defaulting to whatever is open in your browser.
The habit problem
Here is where AI data privacy gets practical. A lot of people who pay for Google Workspace are also in the habit of opening gemini.google.com because it is easy and familiar. They are paying for the Workspace version with its enterprise protections, but not always getting the benefit of them.
The good news is that the enterprise protection follows your account, not the interface. If you open gemini.google.com while signed in with your Workspace account, you are still covered. The problem arises when people are signed in with a personal Google account instead, which is more common than you might think, especially on personal devices or shared laptops where accounts can get muddled.
So the fix is simple. Check which account you are signed in with before you start typing anything sensitive. If it is your Workspace account, you are protected wherever you access Gemini. If it is a personal account, you are on consumer terms regardless of which interface you are using.
The wider principle
Google’s Gemini is a useful illustration precisely because it is the same product operating under two completely different sets of rules, depending on how you access it. The same principle applies across every AI tool you use.
The question is always the same: under what terms are you sharing this data, and is that appropriate for the kind of information you are putting in?
Most tools are clear about this in their terms, even if the terms are long and nobody reads them.
ChatGPT’s consumer version uses conversations to improve OpenAI’s models by default, though you can opt out in settings. The Business and Enterprise plans have different protections. Claude’s consumer plans, which include Free, Pro and Max, may use your conversations for model training unless you opt out in settings. API and enterprise deployments are excluded by default.
The habit worth building is a simple one: before you paste something sensitive into an AI tool, take a second to consider what tool you are actually using and what terms you are operating under. For anything involving client data, personal information, financial details, or anything you would not want a stranger to read, use a plan or tool with clear AI data privacy protections in place.
Need support for your business?
OnyxOps provides business and operations support for small businesses. Hands-on admin and marketing, automation and AI integrations. Whatever is eating your time or holding your business back, that is where we come in.
Get in touch at onyxops.co.uk
