AI GOVERNANCE & AI COMPLIANCE
AI rarely fails because of models.
It fails because of unclear responsibility.
Not spectacularly, but quietly.
In poorly defined responsibilities.
In a lack of documentation.
In systems that work but belong to no one.
AI governance is therefore not a set of rules.
It is architecture for responsibility.
WHY THIS IS NO LONGER A TOPIC FOR LATER
The EU AI Act has been in force since August 2024, but it only becomes fully applicable in August 2026. Yes, I know: the hard lobbying by Big Tech has borne fruit, and there are extended grace periods (status as of 27 December 2025).
But things get very serious in August 2026. Parts of the Act already apply under the staggered transition periods, such as the bans on prohibited practices (since February 2025) and the rules for GPAI models (since August 2025). From August 2026, however, the remaining legally binding obligations take effect, and non-compliance can result in fines of up to 35 million euros or 7% of global annual turnover. This new era also brings real personal responsibility for decision-makers: executives, up to and including the CEO, will be held accountable for implementing risk management systems and adhering to compliance structures as part of the required AI governance.
Those who still treat compliance as an optional extra today are building systems that will be neither approvable nor operable tomorrow: high-risk AI applications must pass a conformity assessment procedure and obtain a CE mark before they can be placed on the market or put into service.
I have been working on real AI projects under precisely these conditions since 2023.
Not in training courses.
Not in strategy papers with no follow-up.
But in real life.
MY UNDERSTANDING OF AI GOVERNANCE
AI governance is not a control mechanism.
Nor is it a fig leaf for ethics.
It answers three simple questions that are rarely asked clearly:
Who is allowed to use AI?
For what exactly?
And who is responsible if things go wrong?
Everything else follows from this.
For me, governance means embedding regulatory requirements into processes, architectures, and decision-making paths in such a way that they actually take effect in everyday work. And all of this with as little friction as possible.
See also my white paper on this topic, “Minimum Viable Compliance”, which is also the subject of my doctoral thesis.
COMPLIANCE BY DESIGN INSTEAD OF COMPLIANCE BY PANIC
As a lawyer, I see paragraphs.
As a computer scientist, I see systems.
As an architect, I see the gaps between them.
Among other things, the EU AI Act requires:
- a comprehensive risk management system
- documented quality management
- clear operator obligations
- fundamental rights impact assessments for high-risk systems
- ongoing monitoring throughout the entire life cycle
That sounds abstract.
But it only remains so until you operationalize it.
That is exactly my job.
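What does operationalizing look like? A minimal sketch in Python, under my own illustrative naming (Artifact, UseCase, and OBLIGATIONS are not from any standard or library): every obligation becomes a named artifact with a named owner.

```python
# Every obligation becomes a named artifact with a named owner.
# All names here are my own illustration, not any standard or library.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str    # e.g. the risk register for this use case
    owner: str   # an accountable role, not a department
    done: bool = False

@dataclass
class UseCase:
    title: str
    artifacts: list[Artifact] = field(default_factory=list)

# Illustrative mapping from EU AI Act obligations to concrete deliverables.
OBLIGATIONS = {
    "Art. 9 risk management":     [("risk register", "AI risk officer")],
    "Art. 17 quality management": [("QMS section", "quality lead")],
    "operator obligations":       [("operating manual", "system owner")],
    "fundamental rights impact":  [("FRIA report", "legal counsel")],
    "life-cycle monitoring":      [("monitoring plan", "system owner")],
}

def scaffold(uc: UseCase) -> UseCase:
    """Attach every required artifact, so no obligation stays abstract."""
    for deliverables in OBLIGATIONS.values():
        for name, owner in deliverables:
            uc.artifacts.append(Artifact(name, owner))
    return uc

print(scaffold(UseCase("claims triage assistant")).artifacts[0].owner)
```

The point is not the code. The point is that “risk management” stops being a word and becomes a deliverable that a specific role owns.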
MINIMUM VIABLE COMPLIANCE
I have developed my own approach to this: Minimum Viable Compliance.
Not as an excuse.
But as an alternative to oversized compliance monsters.
The idea is simple:
As much regulation as necessary.
As little overhead as possible.
As early as makes sense.
MVC translates regulatory obligations into concrete artifacts, roles, and processes.
Ones that plug into real development and decision-making workflows.
Not perfect.
But viable.
The concept is part of my ongoing scientific work and the basis of my practical projects.
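The core MVC move, sketched with the same caveat that all names and risk classes are my own illustration, not wording from the Act: the artifact set is derived from the risk class, so low-risk use cases never inherit high-risk overhead.

```python
# Minimum Viable Compliance, sketched: the artifact set scales with risk.
BASELINE = ["use case record", "named owner", "AI literacy briefing"]
LIMITED = BASELINE + ["transparency notice"]
HIGH = LIMITED + ["risk register", "QMS section", "FRIA report",
                  "conformity assessment file", "monitoring plan"]

def required_artifacts(risk_class: str) -> list[str]:
    """As much regulation as necessary, as little overhead as possible."""
    return {"minimal": BASELINE, "limited": LIMITED, "high": HIGH}[risk_class]

# A chatbot with transparency duties gets four artifacts, not nine.
print(required_artifacts("limited"))
```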
GOVERNANCE IN PRACTICE
Among other things, I helped develop an AI strategy and governance structure for a large public institution with several thousand employees (statutory accident insurance).
The starting point was not a blank slate, but an existing chapter on governance and responsibility.
A solid foundation.
My task was, among other things, to refine this structure so that it:
- explicitly addresses the requirements of the EU AI Act
- defines clear roles throughout the entire AI lifecycle
- makes risks visible at an early stage
- and remains understandable for non-technical staff
This included, among other things:
- embedding risk management in accordance with Article 9
- operationalizing quality management in accordance with Article 17
- defining clear documentation requirements along real use cases
- distinguishing between supportive and automated decision-making (sketched below)
- building a structured use case pipeline instead of a graveyard of ideas
- and consciously promoting AI literacy within the organization
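That fourth distinction is easy to make concrete. A minimal sketch, assuming hypothetical names: whether a system merely supports a human decision or takes it automatically should be visible in the interface itself, because it changes both the applicable obligations and who signs off.

```python
# Supportive vs. automated decision-making, made explicit in the type.
# Illustrative sketch; Decision and finalize are not from any framework.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    automated: bool                  # True -> stricter obligations apply
    confirmed_by: str | None = None  # mandatory for supportive systems

def finalize(d: Decision) -> Decision:
    """Refuse to treat a recommendation as a decision without a human."""
    if not d.automated and d.confirmed_by is None:
        raise ValueError("supportive systems require a human sign-off")
    return d

finalize(Decision("approve claim", automated=False, confirmed_by="caseworker"))
```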
Governance only works when people understand what they are doing.
Not when they are afraid of doing something wrong.
USE CASES INSTEAD OF AN AI ZOO
A typical problem for large organizations is not a lack of AI, but an excess of AI.
Unstructured ideas.
Unclear priorities.
No common evaluation grid.
That’s why I establish systematic use case processes (see the sketch after this list):
- Collect ideas
- Pre-evaluate them early on from a legal and strategic perspective (high-risk or not is by far the most important decision)
- Prioritize them economically
- Pilot them
- Only then scale them
- And monitor them continuously
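As a sketch, with made-up stage names rather than any real framework: the pipeline is a simple state machine, and the legal gate sits deliberately early, before money flows into prioritization and piloting.

```python
# The use case pipeline as a state machine. The order is the point:
# the legal and risk gate comes before any money is spent on piloting.
from enum import Enum, auto

class Stage(Enum):
    IDEA = auto()
    LEGAL_CHECK = auto()   # high-risk or not: the most important decision
    PRIORITIZED = auto()
    PILOT = auto()
    SCALED = auto()
    MONITORED = auto()     # never really terminal: monitoring does not end

def advance(stage: Stage, high_risk: bool, ce_marked: bool = False) -> Stage:
    """Move one step forward; a high-risk case cannot scale without a
    completed conformity assessment (modelled here as ce_marked)."""
    if stage is Stage.PILOT and high_risk and not ce_marked:
        return Stage.PILOT  # blocked until conformity is demonstrated
    order = [Stage.IDEA, Stage.LEGAL_CHECK, Stage.PRIORITIZED,
             Stage.PILOT, Stage.SCALED, Stage.MONITORED]
    return order[min(order.index(stage) + 1, len(order) - 1)]
```

The transition worth copying is the blocked one: a high-risk pilot simply cannot reach “scaled” until its conformity assessment is done.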
This doesn’t slow down AI.
It makes it more targeted.
And being compliant can also be a huge advantage. Read this and be amazed: https://www.sgs.com/en-it/news/2025/06/xayn-is-the-first-german-company-to-receive-iso-iec-42001-certification
ISO/IEC 42001 certification is, in essence, publicly verified proof that your AI use cases do not run as a colorful jungle of ideas, but as a controlled, risk-based system: clean governance, clearly documented risks and controls, and AI Act-ready structures.
The payoff: faster conformity assessments, less friction with legal and compliance, and a highly visible signal of trust for enterprise and public sector customers. Customers who are no longer interested in “move fast and break things”, but in scalable, auditable AI landscapes.
So, what do you think: is it worth it? I think so!
WHO IT’S FOR
My work is aimed at organizations that
- want to use AI seriously
- accept the regulatory reality
- and understand that governance is not an innovation killer, but a stability factor
I help build and operate AI systems that work technically, comply legally, and remain organizationally accountable.
What am I not?
I am not a purveyor of fear.
And certainly not of buzzword programs. My website deliberately contains hardly any graphics, even though I am also a musician and design all the artwork for our song covers myself. I have worked with Adobe Photoshop and Illustrator for 20 years, as well as CorelDRAW and, more recently, my AI colleagues Midjourney and Google’s young talent “Banana”.
Plain text is not a sacrifice, but a deliberate positioning. In times of AI stock images, Midjourney wallpapers, and “hero sections” with meaningful-looking faces, plain text cuts like a scalpel.
IN A NUTSHELL
AI governance is not an add-on module.
It is part of the architecture.
Those who ignore it save time today and pay twice tomorrow.
Those who integrate it cleanly gain room for maneuver.
If you are looking for someone to tell you what is really necessary and what you can do without, then it is worth having a conversation.
