AI Governance, Assurance, and Engineering

I help organizations design, govern, and build AI systems.

For organizations bringing AI systems to market while keeping standards, governance, and implementation moving in parallel.

What I can help with

Governance

  • ISO/IEC 42001 and EU AI Act-related questions
  • AI governance structure, policy updates, and implementation plans
  • AI literacy and internal enablement

Assurance

  • Evaluation and assessment criteria
  • Audit readiness, evidence, and documentation
  • Resolving open questions around controls and evidence in a structured way

Engineering

  • Architecture, prototyping, product development, and deployment
  • Classical ML, computer vision, LLMs, RAG, and agentic AI applications
  • Bridge work across governance, product, and engineering

How engagements usually start

Typical starting points

  • Understanding whether a use case falls under the EU AI Act
  • Implementing engineering practices for high-risk AI system requirements
  • Practical implementation of ISO/IEC 42001
  • Governance or implementation questions blocking progress

Typical outputs

  • Risk classification and feasibility work
  • Policies, inventories, and roles and responsibilities
  • Evaluations, architectures, and prototypes
  • Production-ready systems and concrete next steps

Ways of working

  • Strategic advisory sessions
  • Short sprints with a clear scope
  • Embedded support
  • Hands-on collaboration alongside internal teams

FAQ

Does my AI system fall under the EU AI Act?

It depends on what the system does and in what context it is deployed. The Act classifies systems by risk level: prohibited, high-risk, limited-risk, and minimal-risk. General-purpose AI models sit in their own category. Classification is not always straightforward. A system recommending medication dosages in a clinical setting falls under the high-risk category. A system that helps hospital staff book meeting rooms does not, even though both are deployed in healthcare. If you are unsure where your system sits, a structured risk classification exercise is the right place to start.
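To make that triage concrete, here is a deliberately simplified sketch of a first-pass classification, using the two hospital examples above. The category names and rules are illustrative assumptions only; an actual determination requires legal analysis against the Act's annexes.

```python
# Illustrative triage only: real EU AI Act classification requires legal
# analysis of the Act's annexes. The sets below are simplified assumptions.
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"medical", "hiring", "credit_scoring", "critical_infrastructure"}

def classify(use_case: str, affects_individuals: bool) -> str:
    """Rough first-pass sorting of a use case into the Act's risk tiers."""
    if use_case in PROHIBITED_PRACTICES:
        return "prohibited"
    if use_case in HIGH_RISK_DOMAINS and affects_individuals:
        return "high-risk"
    if affects_individuals:
        return "limited-risk"  # transparency obligations may still apply
    return "minimal-risk"

# The two hospital examples from the text:
print(classify("medical", affects_individuals=True))        # high-risk
print(classify("room_booking", affects_individuals=False))  # minimal-risk
```

The point of the sketch is that context, not the deployment sector, drives classification: both systems live in healthcare, but only one touches decisions about individuals.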

How do I implement ISO/IEC 42001?

The standard follows the same high-level structure as other ISO management system standards, with AI-specific requirements around risk, impact assessment, and the system lifecycle. In practice this means establishing an AI policy, defining roles and responsibilities, building a risk and impact assessment process, and putting controls in place across development and deployment. Most organizations understand the requirements well enough. Translating them into concrete decisions given specific systems, teams, and existing governance is where progress stalls. A gap assessment typically takes a few weeks. Full readiness takes longer depending on where you are starting from.

What is required for EU AI Act compliance for high-risk AI systems?

High-risk systems cover areas like hiring, credit, medical devices, and critical infrastructure. They must go through a conformity assessment, maintain technical documentation, implement a risk management system, apply data governance practices, ensure human oversight, and set up post-market monitoring. Providers also need to register in the EU database. The requirements are clear on paper. Working out how they translate into architecture decisions, evaluation criteria, and engineering practices is where most organizations get stuck, and where getting it wrong creates audit risk downstream.
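One way to start that translation is to map each obligation to the engineering artifacts that typically evidence it, then track which obligations have nothing behind them yet. The mapping below is an illustrative sketch drawn from practice, not text from the Act, and the artifact names are assumptions.

```python
# Illustrative mapping from high-risk obligations to engineering artifacts
# that commonly evidence them. Both sides are assumptions, not Act text.
OBLIGATION_TO_ARTIFACTS = {
    "risk management system": ["risk register", "model risk review notes"],
    "data governance": ["dataset datasheets", "data lineage records"],
    "human oversight": ["override/escalation UI spec", "operator runbook"],
    "post-market monitoring": ["drift dashboards", "incident log"],
}

def missing_artifacts(produced: set[str]) -> dict[str, list[str]]:
    """Obligations for which no supporting artifact has been produced yet."""
    return {
        obligation: artifacts
        for obligation, artifacts in OBLIGATION_TO_ARTIFACTS.items()
        if not any(a in produced for a in artifacts)
    }

print(missing_artifacts({"risk register", "incident log"}))
```

A team that has a risk register and an incident log, but no datasheets or oversight spec, sees immediately where the conformity-assessment gaps are.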

How do I prepare for an AI audit?

Mostly it is a documentation and evidence problem. Auditors will look for proof that your organization has identified its AI systems, assessed their risks, put controls in place, and can show this in a structured way. The most common gaps are incomplete AI inventories, risk assessments disconnected from actual development decisions, and missing monitoring records. The best starting point is an honest gap assessment, then prioritizing whatever evidence would be hardest to reconstruct after the fact.
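The gap assessment itself can be as simple as an inventory with per-system evidence flags. As a minimal sketch (the record fields here are assumptions, not a prescribed schema from any standard):

```python
from dataclasses import dataclass

# Hypothetical minimal AI inventory record; field names are illustrative,
# not a schema mandated by ISO/IEC 42001 or the EU AI Act.
@dataclass
class AISystemRecord:
    name: str
    risk_assessed: bool = False
    controls_documented: bool = False
    monitoring_records: bool = False

def audit_gaps(inventory: list[AISystemRecord]) -> dict[str, list[str]]:
    """Per system, the evidence items an auditor would find missing."""
    checks = {
        "risk assessment": lambda s: s.risk_assessed,
        "documented controls": lambda s: s.controls_documented,
        "monitoring records": lambda s: s.monitoring_records,
    }
    return {
        s.name: [item for item, ok in checks.items() if not ok(s)]
        for s in inventory
    }

inventory = [
    AISystemRecord("cv-screening", risk_assessed=True),
    AISystemRecord("support-chatbot", risk_assessed=True,
                   controls_documented=True, monitoring_records=True),
]
print(audit_gaps(inventory))
```

Running a check like this across the full inventory is a quick way to see which evidence would be hardest to reconstruct after the fact, which is exactly where prioritization should start.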

What is an AI Management System (AIMS)?

It is the set of policies, processes, roles, and controls an organization uses to govern how it builds, deploys, and monitors AI responsibly. ISO/IEC 42001 defines the requirements for an AIMS, the same way ISO 27001 does for information security. Think of it as an organizational capability rather than a software tool. For organizations navigating the EU AI Act or other regulation, a functioning AIMS is what makes compliance manageable rather than a recurring scramble.

A short conversation is enough to see whether and where I can help.

We can quickly clarify what you are building, where the blockers are, and whether governance, assurance, engineering, or a combination of the three is needed.

Let’s discuss how I can help