AI is no longer a distant prospect. For many professionals, Microsoft Copilot and ChatGPT are already part of the daily toolkit. Yet their use is often still ad hoc, without clear ground rules. How do you ensure that your organisation works with AI securely, compliantly and purposefully?
How to work safely with AI
A good AI policy gives employees guidance and ensures that everyone works responsibly with AI tools. Below, we describe the basics of a good AI policy in five ground rules:
1. Use generative AI for a reason
Define when, for what and why generative AI may be used. Remember: AI is not an end in itself, but a means. Using AI is only valuable if it actually contributes to, for example, efficiency, quality or innovation.
Can employees use Copilot for internal reporting? For customer presentations? Or for analysing financial data? When answering these questions, distinguish between experimental use and production use. Also indicate whether specific AI functions (such as automatic summarising or rewriting) are restricted or encouraged. Make sure the policy aligns with the organisation's core values, strategic goals and compliance requirements.
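To make those choices tangible, some organisations also record the policy in machine-readable form, so it can be versioned and checked like any other configuration. The Python sketch below shows one possible shape; the use cases, tools and statuses are invented examples, not recommendations:

```python
# Hypothetical example: an AI usage policy recorded as data, so it can
# be versioned and reviewed like any other configuration.
AI_USAGE_POLICY = {
    "internal_reporting":     {"tool": "Copilot", "status": "encouraged", "stage": "production"},
    "customer_presentations": {"tool": "Copilot", "status": "allowed",    "stage": "production"},
    "financial_analysis":     {"tool": "Copilot", "status": "restricted", "stage": "experimental"},
    "auto_summarising":       {"tool": "any",     "status": "encouraged", "stage": "production"},
}

def is_allowed(use_case: str) -> bool:
    """Return True if the use case is permitted under the policy."""
    entry = AI_USAGE_POLICY.get(use_case)
    return entry is not None and entry["status"] != "restricted"

print(is_allowed("internal_reporting"))  # True
print(is_allowed("financial_analysis"))  # False
```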
2. Treat AI as your intern
AI tools can generate total nonsense that nevertheless sounds convincing (or do you like glue on your pizza?). So do not treat AI as an expert, but as a mediocre intern with access to Google: sometimes useful, but rarely flawless. Make the policy state clearly that AI output should never be adopted without verification - certainly not in customer communications, policy documents or financial reports. Encourage a 'four-eyes principle': AI as an assistant, not as the person with final responsibility. In addition, provide guidelines for recognising so-called AI hallucinations and misinterpretations, so employees know when to be extra critical.
3. Don't be ashamed of AI use
Using tools like ChatGPT is nothing to be ashamed of. As long as it is done responsibly, it actually shows that your organisation engages consciously with innovation. Be open, however, about the use of AI in processes that affect external parties, for example when AI is used for customer selection, risk assessment or reporting.
Explain how the tools are applied, what the added value is and how any risks are managed. Transparency about this builds trust and helps you comply with current and upcoming laws and regulations, such as the AI Act.
Make the policy concrete and understandable, so employees know what is expected of them. AI does not belong only in the IT department - it is an organisation-wide issue.
4. Don't leak personal data!
One of the biggest risks of AI tools is the inadvertent sharing of sensitive information. Therefore, be explicit about what may and may not be entered. Think about:
- no personal data (GDPR)
- no confidential customer data
- no internal strategy documents
Make employees aware that anything entered into an AI prompt - depending on the tool used - may be stored or even used to train the underlying model.
However, if you use Microsoft Copilot within an enterprise Microsoft 365 environment, you can be confident that this data will not be shared outside your organisation or used for model training, provided the settings are configured correctly.
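For external tools that offer no such guarantees, a technical safety net can back up the policy. The Python sketch below shows how input could be screened before it leaves the organisation; the patterns and the redact helper are illustrative assumptions, not a complete PII filter:

```python
import re

# Illustrative patterns only: a real PII filter needs far more than this.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d \-]{8,}\d"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely personal data with placeholders before the
    prompt is sent to an external AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

if __name__ == "__main__":
    text = "Contact Jan at jan.devries@example.com or +31 6 12345678."
    print(redact(text))
    # Contact Jan at [EMAIL] or [PHONE].
```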
5. No AI use without IT governance
Solid IT governance forms the foundation for responsible AI use. This starts with clear frameworks, roles and responsibilities. Determine who within the organisation decides which AI tools may or may not be used, who is responsible for the technical set-up and security, and who supervises their proper use. Also establish who acts as a point of contact in case of questions, incidents or ethical dilemmas.
Integrate AI use explicitly into your risk management process. Identify potential risks - such as data leaks, erroneous output or unwanted bias - and ensure appropriate control measures are in place. By making AI part of your broader risk framework, you avoid surprises and create a solid foundation for responsible innovation.
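To illustrate, such a mapping of risks to control measures can be recorded quite simply, for instance as sketched below in Python; the entries and owners are invented examples, not a complete register:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    risk: str
    control: str
    owner: str

# Hypothetical entries in an AI risk register; adapt to your own
# risk management process.
RISK_REGISTER = [
    AIRisk("data leak via prompts", "input screening and staff training", "CISO"),
    AIRisk("erroneous output", "four-eyes review before publication", "department head"),
    AIRisk("unwanted bias", "periodic audit of AI-assisted decisions", "compliance officer"),
]

for entry in RISK_REGISTER:
    print(f"{entry.risk} -> {entry.control} ({entry.owner})")
```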
Getting a better grip on AI within your organisation?
Working with AI is only truly successful if you have a clear structure of responsibilities, policies and risks in place. In our white paper IT governance: this is how companies innovate in the AI era, you can read how to do that. It offers practical tips, tools and plenty of attention to the human factor.
Click here to download the white paper