As artificial intelligence becomes part of everyday operations, organisations must ensure that innovation goes hand in hand with strong governance and trust.
For a specialised international organisation, this challenge is particularly significant. The organisation manages large volumes of sensitive information and operates in an international environment where transparency, accountability, and responsible technology use are essential.
At the same time, it does not operate under a single binding legislative framework. Instead, governance must be built on internationally recognised standards and best practices.
To support the responsible adoption of AI while strengthening data protection, the organisation set out to reinforce its privacy governance and AI risk management capabilities.
From governance frameworks to daily practice
As AI-enabled tools began to appear across projects and internal operations, the organisation needed a consistent way to identify and manage both privacy risks and AI-related risks.
Privacy practices were strengthened in line with ISO/IEC 27701, the privacy information management standard, reinforcing processes such as privacy impact assessments, handling of data subject rights requests, and monitoring of personal data breaches.
In parallel, a structured AI risk management framework was developed in alignment with the principles of ISO/IEC 42001, the AI management system standard.
The framework enables teams to assess AI-enabled use cases, identify operational and ethical risks, including those related to generative AI, and implement appropriate mitigation measures.
More than 40 AI-enabled tools and solutions have already been assessed as part of this governance process.
To support long-term oversight, the organisation also digitalised its AI risk assessment process through a governance and risk management platform, enabling consistent evaluations and clearer traceability of identified risks and their mitigation actions.