Governance
- Questions related to ISO/IEC 42001 and the EU AI Act
- AI governance structure, policy updates, and implementation plans
- AI literacy and internal enablement
AI Governance, Assurance, and Engineering
For organizations bringing AI systems to market while keeping standards, governance, and implementation moving in parallel.
What I can help with
Does the EU AI Act apply to my system, and at what risk level?
It depends on what the system does and in what context it is deployed. The Act classifies systems by risk level: prohibited, high-risk, limited-risk, and minimal-risk. General-purpose AI models sit in their own category. Classification is not always straightforward: a system that recommends medication dosages in a clinical setting is high-risk, while a system that helps hospital staff book meeting rooms is not, even though both are deployed in healthcare. If you are unsure where your system sits, a structured risk classification exercise is the right place to start.
What does implementing ISO/IEC 42001 involve?
The standard follows the same high-level structure as other ISO management system standards, with AI-specific requirements around risk, impact assessment, and the system lifecycle. In practice this means establishing an AI policy, defining roles and responsibilities, building a risk and impact assessment process, and putting controls in place across development and deployment. Most organizations understand the requirements well enough; translating them into concrete decisions for their specific systems, teams, and existing governance is where progress stalls. A gap assessment typically takes a few weeks. Full readiness takes longer, depending on where you are starting from.
What does the EU AI Act require for high-risk systems?
High-risk systems cover areas such as hiring, credit, medical devices, and critical infrastructure. Providers must go through a conformity assessment, maintain technical documentation, implement a risk management system, apply data governance practices, ensure human oversight, set up post-market monitoring, and register the system in the EU database. The requirements are clear on paper. Working out how they translate into architecture decisions, evaluation criteria, and engineering practices is where most organizations get stuck, and where getting it wrong creates audit risk downstream.
What does it take to be ready for an AI audit?
Mostly it is a documentation and evidence problem. Auditors will look for proof that your organization has identified its AI systems, assessed their risks, put controls in place, and can show all of this in a structured way. The most common gaps are incomplete AI inventories, risk assessments disconnected from actual development decisions, and missing monitoring records. The best starting point is an honest gap assessment, followed by prioritizing whatever evidence would be hardest to reconstruct after the fact.
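As one illustration of the evidence structure involved, here is a minimal sketch of what a single entry in an AI system inventory might capture. It is a hypothetical Python example: the field names, categories, and values are assumptions for illustration, not a schema prescribed by ISO/IEC 42001 or the EU AI Act.

```python
# Minimal, illustrative sketch of one AI inventory entry.
# Fields and example values are assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AISystemRecord:
    name: str                      # internal name of the system
    purpose: str                   # what it does and for whom
    owner: str                     # accountable team or role
    risk_category: str             # e.g. "high-risk", "limited-risk", "minimal-risk"
    last_risk_assessment: date     # when the risk/impact assessment was last reviewed
    controls: list[str] = field(default_factory=list)            # controls applied in development and deployment
    monitoring_evidence: list[str] = field(default_factory=list)  # references to monitoring records


# Example entry, mirroring the clinical-dosage example above.
dosage_recommender = AISystemRecord(
    name="dosage-recommender",
    purpose="Recommends medication dosages to clinicians",
    owner="Clinical ML team",
    risk_category="high-risk",
    last_risk_assessment=date(2024, 11, 1),
    controls=["human oversight by prescribing clinician", "pre-deployment evaluation"],
    monitoring_evidence=["post-market monitoring log (internal reference)"],
)
```

Whether this lives in a spreadsheet, a GRC tool, or code matters less than the fact that each system has an owner, a risk category, and evidence that can be produced on request.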
What is an AI management system (AIMS)?
It is the set of policies, processes, roles, and controls an organization uses to govern how it builds, deploys, and monitors AI responsibly. ISO/IEC 42001 defines the requirements for an AIMS in the same way ISO/IEC 27001 does for information security. Think of it as an organizational capability rather than a software tool. For organizations navigating the EU AI Act or other regulation, a functioning AIMS is what makes compliance manageable rather than a recurring scramble.
Next step
We can quickly clarify what you are building, where the blockers are, and whether you need governance, assurance, engineering, or a combination of the three.