Why businesses need AI usage policies to avoid data breaches

We are now well and truly in the AI era, with generative AI tools such as ChatGPT and Gemini redefining the way we work and access information.

As a result, the way businesses approach risk management is also evolving: AI tools can create data breach and security risks that many office workers may not even be aware of.

A recent study by TELUS Digital found that 57% of enterprise employees admit to entering high-risk information into publicly available generative AI assistants. As Protecht’s VP Risk and Compliance for North America, I’d like to share my insights into the security threats AI tools can pose to organisations, and the importance of training staff on AI policies to avoid the damage of a potential data breach.

Don't enter confidential business data into AI tools

The first piece of advice applies to everyone at a business, regardless of seniority: do not enter confidential data into any AI tool unless it has been approved for business use by your organisation’s risk management team. If you wouldn’t post it publicly, don’t put it into an AI tool.

AI tools don’t have perfect memories, but many of them process and retain the data you enter for training and moderation.

Enterprise versions of tools often offer stronger privacy protections, but inputting confidential data into an AI tool is like whispering secrets in a crowded room: you can’t be sure who’s listening. If an AI platform is compromised or misused, that data could become an easy target for cybercriminals.

And that’s before we get to smaller platforms that may be controlled by bad actors from the outset.

If you don’t have an AI use policy, make one now

To reduce the risks of AI data exposure, the first steps for any business should be as follows:

1) Set AI policies now, not later. Define what’s safe (or unsafe) to input.

2) Use enterprise AI solutions with clear security protections. Avoid free or unregulated tools for business use.

3) Educate employees. A simple mistake, like pasting a client’s data into ChatGPT, could trigger a major security event.

4) Monitor and audit AI use. Know which tools employees are using and what data is being shared (see the sketch after this list).
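
As a starting point for step 4, here is a minimal sketch of what that monitoring could look like: a script that scans an outbound web proxy log for traffic to well-known generative AI services. The CSV column names (user, dest_host) and the domain list are illustrative assumptions, not the schema of any particular proxy or gateway product.

```python
# Minimal sketch: flag outbound requests to well-known generative AI services
# in a web proxy log so the risk team can review who is using what.
# Assumptions: the log is a CSV with "user" and "dest_host" columns, and the
# domain list below is illustrative rather than exhaustive.
import csv
from collections import Counter

GENAI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def audit_ai_usage(log_path: str) -> Counter:
    """Count requests per (user, destination) pair for later review."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("dest_host") or "").lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[(row.get("user") or "unknown", host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in audit_ai_usage("proxy_log.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

In practice this data usually comes from a secure web gateway or similar control rather than a flat file, but the principle is the same: you can’t govern usage you can’t see.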

AI risk isn’t theoretical; it’s real. And if you don’t have an AI policy (or if your formal policy is to ban all AI usage, which amounts to the same thing), then your employees will almost certainly be using AI in an undocumented, ungoverned way.

Training on how to use AI tools safely and securely should be mandatory for all office-based workers, whether they work in banking, insurance, government, or any other industry.

AI security training isn’t optional; it’s essential. AI is becoming a daily tool for many employees, but without proper guidance, a quick query can turn into a costly data breach.

Businesses already train employees on cybersecurity, phishing, and data protection, so AI needs to be part of the same playbook. 

Employees should know:

  • What not to enter into AI tools
  • How AI-generated content can be misleading
  • When to use enterprise-approved AI solutions

You can’t let “sorry, I didn’t know” be an acceptable answer in a breach situation.

Advice for office workers when using AI tools

  1. Think before you type. If you wouldn’t post it publicly, don’t put it into an AI tool. Confidential data doesn’t belong in chatbots.
  2. Check the terms. Some AI tools store your queries. Read the privacy policy before using them.
  3. Stick to approved AI tools. If your company hasn’t vetted an AI tool, assume it’s not safe for sensitive work.
  4. Don’t trust AI blindly. AI can generate misleading, biased, or outright false information. Always verify before acting on it.
  5. Stay updated. AI risks evolve fast, so pay attention to company policies and cybersecurity alerts.

AI is a workplace tool, not a toy. Treat it like any other software that interacts with sensitive data.

AI tools used only internally can also be vulnerable

Internal AI doesn’t mean immune AI. Even closed systems can be compromised if they have:

  • Weak access controls: if employees or vendors have unrestricted access, insider threats become a major risk.
  • Insecure APIs: cybercriminals exploit weak integrations to inject malicious queries or extract sensitive data.
  • Lack of monitoring: AI tools need constant security testing to prevent vulnerabilities from being exploited.

Just because an AI tool is internal doesn’t mean it’s safe. Strong security controls are non-negotiable.
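
To make the access-control and monitoring points concrete, here is a minimal, hypothetical sketch of a gatekeeper sitting in front of an internal AI endpoint: every caller must present a known token, each query is checked against the data classifications that caller is cleared for, and every decision is logged for audit. The token store, classification labels, and run_model placeholder are assumptions for illustration, not any specific product’s API.

```python
# Hypothetical gatekeeper for an internal AI endpoint: authenticate the caller,
# enforce which data classifications they may query, and log every decision.
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("internal-ai-gateway")

# Illustrative token store: SHA-256 hashes of issued tokens mapped to the
# data classifications each caller is cleared to query.
AUTHORISED_TOKENS = {
    hashlib.sha256(b"analyst-team-token").hexdigest(): {"public", "internal"},
    hashlib.sha256(b"risk-team-token").hexdigest(): {"public", "internal", "confidential"},
}

def run_model(prompt: str) -> str:
    # Stand-in for the real inference call to the internal model.
    return f"[model response to a {len(prompt)}-character prompt]"

def handle_query(token: str, classification: str, prompt: str) -> str:
    """Reject unauthenticated or out-of-scope requests before they reach the model."""
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    allowed = AUTHORISED_TOKENS.get(token_hash)
    if allowed is None:
        log.warning("Rejected query: unknown token")
        raise PermissionError("Unknown caller")
    if classification not in allowed:
        log.warning("Rejected %s query from caller %s", classification, token_hash[:8])
        raise PermissionError("Caller not cleared for this data classification")
    log.info("Accepted %s query (%d chars) from caller %s",
             classification, len(prompt), token_hash[:8])
    return run_model(prompt)

if __name__ == "__main__":
    print(handle_query("risk-team-token", "confidential", "Summarise incident 4821"))
```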

AI isn’t just a tool for businesses; it’s also a weapon for cybercriminals. For example:

  • AI-powered phishing: Attackers use AI to craft hyper-personalised phishing emails that bypass spam filters.
  • Automated hacking: AI tools scan for vulnerabilities in networks, applications, and cloud services at scale.
  • Deepfake scams: AI-generated voice or video can impersonate executives, tricking employees into authorising fraudulent transactions.
  • AI model manipulation: Hackers attempt data poisoning, i.e. feeding corrupt information into AI models to manipulate their outputs.

All businesses should implement the following security measures when building their own AI tools, even if they are designed only for internal use:

  • Encrypt everything: Data at rest and in transit must be protected against breaches.
  • Use strict access controls: AI tools should follow zero-trust principles—only approved users can input or retrieve data.
  • Secure APIs: AI integrations need rate limiting, authentication, and anomaly detection to prevent abuse.
  • Monitor for threats: AI models should be continuously tested for data leakage, adversarial attacks, and model poisoning.
  • Apply ethical AI principles: Ensure AI outputs are auditable, explainable, and bias-tested to reduce compliance risks.
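
As one small illustration of the “secure APIs” point above, the sketch below applies per-caller rate limiting with a sliding window. The limit, window, and caller ID are placeholder values, and a control like this would sit alongside authentication and anomaly detection rather than replace them.

```python
# Minimal sketch of per-caller sliding-window rate limiting for an AI API.
# The limit, window, and caller ID below are illustrative placeholders.
import time
from collections import defaultdict, deque

class SlidingWindowRateLimiter:
    """Allow at most `limit` requests per caller within `window_seconds`."""

    def __init__(self, limit: int = 30, window_seconds: float = 60.0):
        self.limit = limit
        self.window = window_seconds
        self._calls = defaultdict(deque)  # caller_id -> recent request timestamps

    def allow(self, caller_id: str) -> bool:
        now = time.monotonic()
        calls = self._calls[caller_id]
        # Discard timestamps that have fallen outside the window.
        while calls and now - calls[0] > self.window:
            calls.popleft()
        if len(calls) >= self.limit:
            return False  # caller has exhausted its quota for this window
        calls.append(now)
        return True

if __name__ == "__main__":
    limiter = SlidingWindowRateLimiter(limit=5, window_seconds=10)
    for i in range(7):
        print(i, "allowed" if limiter.allow("vendor-integration-42") else "rate limited")
```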

 

For over 20 years, Protecht has redefined the way people think about risk. We enable smarter risk-taking by our customers to drive their resilience and sustainable success.

Our Protecht ERM SaaS platform lets you manage your risks in one place: risks, compliance, incidents, KRIs, resilience, vendors, cyber, and more. Find out more and request a demo:

Request a demo

About the author

Jared Siddle is Protecht's VP of Risk and Compliance, North America. He is a Qualified Risk Director who has been Head of Risk Management at three different companies, including two of the world's largest asset managers. Jared has a proven record of success in banking, fund management, and other financial services companies across more than 26 countries. He is passionate about governance, risk, compliance and sustainability, and is an expert at designing, developing, and executing customised enterprise-wide risk frameworks.