By the main deadline of August 2026, companies must have inventoried, assessed, and documented their AI applications. This includes in particular:

  • an AI inventory with risk classification along the AI Act's tiers (minimal, limited/transparency obligations, high, unacceptable/prohibited),
  • defined roles and responsibilities (e.g. according to the RACI model),
  • an approval process for new AI use cases,
  • policies for data access, logging, documentation and incident management,
  • proof of AI literacy (basic competence in dealing with AI).
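The inventory and approval items above can be illustrated with a minimal data-structure sketch. The class and field names below are hypothetical illustrations, not terminology prescribed by the AI Act; the risk tiers follow the classification mentioned in the list.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"        # transparency obligations apply
    HIGH = "high"
    PROHIBITED = "prohibited"  # unacceptable risk, use not allowed

@dataclass
class AIUseCase:
    name: str
    owner: str                 # responsible role, e.g. from a RACI matrix
    risk_tier: RiskTier
    approved: bool = False     # set via the internal approval process
    documentation: list[str] = field(default_factory=list)

# Hypothetical inventory entries for illustration only
inventory = [
    AIUseCase("Support chatbot", "Customer Service Lead", RiskTier.LIMITED),
    AIUseCase("CV screening", "HR Director", RiskTier.HIGH),
]

# A simple query the documentation duty implies: which use cases
# fall into the high-risk tier and need the fullest documentation?
high_risk = [u.name for u in inventory if u.risk_tier is RiskTier.HIGH]
```

Such a record per use case makes the risk classification, ownership, and approval status queryable, which is the practical point of the inventory obligation.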

The first individual obligations already apply from 2025, such as transparency requirements for general-purpose AI (e.g. GPT models).