How to Write an AI Policy That Will Steer You Away from a Data Breach

We are in the early stages of understanding AI, and the ungoverned use of AI could lead to a serious backlash. In this article our Senior Security Advisory Consultant Laurence Tennant shares how a good AI policy provides a guardrail that keeps the use of AI on the straight and narrow and ensures you can be part of the fourth industrial revolution.

In 2024 MIT Technology Review published an article called ‘Nobody knows how AI works’. The article listed examples of odd AI behaviours and failures, from hallucinations to casual racism. Whilst an entertaining read, the article highlighted a creeping doubt about AI – that nobody really knows how AI works. Where does the data go? How did the AI model come to this conclusion? How will AI be regulated in the future? And with that in mind, might the roll out of AI within your company be worth a little governance?

AI has always been with us, but it truly exploded with the advancement in the Generative Pre-trained Transformer 3 – better known as Chat GPT-3 – in 2022. The breakthrough in Chat GPT-3 was the generation of text in a far more accurate and advanced way than previous models, and the potential for business was and is huge. Chatbots could engage with customers, presentations could be created in seconds with easily written prompts, and using AI could cut hours off previously labour-intensive tasks. So far so good.

The results were impressive, and they still are – it’s easy to see why AI is being so rapidly adopted. But issues are starting to be raised about AI models around how they are trained and how they really work. Because AI can get things wrong – very wrong.

AI Can Cause a Data Breach

Data inputted into AI can turn into a data breach. AI can be trained on the wrong data and give inaccurate results. AI security is starting to come to the fore, with vulnerabilities in models and AI leading to an increase in an organisation’s attack service.

To illustrate the risk, let’s look at a company using AI to scan prospective employees’ CVs. Previously this would be done by the recruitment team reading the CVs and triaging them, sifting candidates and handling those CVs and candidate data sensitively.

The processing of a CV is the processing of data – potentially sensitive data. The CVs of employees might contain personally identifiable information (PII), and when the CV is inputted into the AI model the data might be labelled as sensitive, but the model might still store the data in an area accessible to all employees. If not specified or hosted by the company, then the CVs might be processed and stored offsite or in a third country.

Samsung employees inputted sensitive company information into Chat GPT-3 in 2023, leading to a data breach. The data could not be retrieved.

Then there is the scanning itself. AI works using prompts. Prompts can be hidden in documents which are being scanned by AI in a process called prompt injection. The CV author might insert a prompt in white text which forces the AI to choose that CV or, more worryingly, a prompt which causes the AI to do something nefarious.

AI Model Security and Ethics

The security of AI models is only beginning to be understood. Microsoft Copilot recently had cause to celebrate its first CVE. CVE-2025-32711, dubbed “EchoLeak”, highlighted where Copilot was scanning and executing prompts hidden in emails in a CVE which echoed the macro vulnerabilities of Office products.

And let’s not forget the ethics of using AI. If an AI model rejects a CV, why? If an AI model has discriminated against a candidate based on protected characteristics, then not only is this ethically unsound but it is also illegal. Because we don’t know how some models are trained, perhaps (as has happened) they are trained on racist or defamatory information.

Of course, recruitment is just one AI use case. Whilst AI can be used for scanning CVs, it is also being used for medical analysis, engineering, and legal work. Rejecting a CV for unethical reasons is troubling, but what if AI is used to misdiagnose a serious illness?

This is why we need governance.

How an AI Policy Can Help

Having an AI policy might just steer you away from a data breach or lawsuit. A good AI policy is there to enable the use of AI, not restrict it – after all, the unrestricted use of AI may lead to a data breach or a legal case and subsequently a kneejerk reaction such as a blanket ban and the potential loss of productivity this would cause.

When writing an AI policy, you should consider the following headings:

  • Data governance that establishes how data is collected, stored, processed, and protected to ensure quality, compliance, and ethical use.
  • Model security that defines measures to protect AI models from malicious access, tampering, and misuse throughout their lifecycle.
  • Model transparency that outlines requirements for explaining how AI models work, the data they use, and the rationale behind their outputs.
  • Acceptable usage that sets clear rules on how AI systems can and cannot be used to ensure responsible, lawful, and ethical application.

These four headings are a great place to start. But, depending on your use of AI and future regulation, AI governance is likely to grow significantly.

We are in the early stages of understanding AI, and the ungoverned use of AI could lead to a serious backlash. A good AI policy provides a guardrail that keeps the use of AI on the straight and narrow and ensures you can be part of the fourth industrial revolution.

Need support drafting your AI policy? Our expert consultants will scope, draft, and deliver a bespoke AI policy tailored to your organisation. Built on best practices from ISO 42001 and the NIST AI Risk Management Framework, your policy will cover four critical areas: data governance, model security, acceptable usage, and transparency and accountability. Learn more and speak to an expert.

Subscribe to the newsletter today

Related posts

New Six Degrees research exposes dangerous cyber security disconnect across the UK’s retail sector

New Six Degrees research exposes dangerous cyber…

Retailers claim high levels of cyber security confidence,…

Six Degrees Retail Whitepaper

Six Degrees Retail Whitepaper

Six Degrees Retail Whitepaper The UK’s retail sector…

RE:geared. – How Dealership Infrastructure is Becoming the Next Competitive Advantage

RE:geared. – How Dealership Infrastructure is Becoming…

Across forecourts and showrooms in the UK, automotive…