The EU AI Act – How to bring Copilot and AI agents into regular operation by 2026

[Image: A robot (an AI agent from Copilot) is checked by an employee using a checklist; afterwards, all employees in the company are trained to work efficiently with the AI agent.]

The EU AI Act puts an end to the previously unregulated use of AI – including with tools like Microsoft Copilot or proprietary AI agents. Since August 2024 it has been clear: anyone using AI in a company will need defined roles, documented processes, and traceable logging by 2026 at the latest.

What is the AI Act?

The EU AI Act is the world’s first comprehensive AI regulation and has been in force since August 2024.

The core of the AI Act is a risk-based approach: AI systems are classified into risk categories based on their area of application and potential impact – from “minimal” to “unacceptable”. The higher the risk, the stricter the requirements: from simple labeling through documentation and control obligations to outright prohibitions, for example for social scoring or manipulative systems.

One thing is clear: AI affects not only IT, but equally HR, business departments, data protection, and legal. Inventory lists, responsible roles, risk assessments, and traceable decisions are no longer optional – they are mandatory.

 

When Do the Regulations Apply?

Companies are already using Copilot, ChatGPT, or proprietary AI agents today – mostly without inventory, risk classification, or documented responsibilities. This is not yet a problem, but the transition period is ending: as of August 2026, most obligations will apply to higher-risk AI systems. Those who have not established governance structures by then risk fines and usage prohibitions.

Brief overview of the most important milestones:

  • Since February 2025: Prohibition of unacceptable AI systems (e.g., social scoring, manipulative systems) and the obligation to ensure basic AI competence (AI literacy).
  • By August 2025: Requirements for general-purpose AI models (e.g., GPT, Gemini) – transparency, documentation, model cards.
  • By August 2026: Main deadline – companies must have implemented an AI inventory, risk classification, policies, roles (RACI), approval processes, logging, and monitoring, especially for high-risk applications.
  • From August 2027: Additional proof obligations, among others for standardized high-risk systems such as medical devices, machinery, and elevators.

 

The period until August 2026 is not an observation phase but an implementation phase. Inventory lists must be established now, high-risk applications identified, responsibilities defined, and initial policies tested. Those who wait lose valuable months and will face hectic compliance projects in 2026.

Policies & Processes – What Must Be Documented by 2026

For Copilot, GPT-based assistants, or proprietary AI agents to be operated in compliance with the law from 2026, it is not enough to approve individual use cases. Companies need a documented policy set and clear processes – lean, but binding. The goal is a uniform framework: who decides on new AI applications, which data may be used, and how is their use documented in a traceable way?

What should the policy set include at a minimum?

  • AI Governance Policy: Principles on where and under what conditions AI may be used in the company.
  • Use Case Approval Process: Every new AI application requires a profile (purpose, data sources, risk) that is reviewed and approved.
  • Data and Access Policy: Defines which data an agent may access – and which not (e.g., HR, health, or financial data).
  • Handling AI Results: Human review remains mandatory, especially for sensitive decisions or publicly visible content.
  • Incident & Risk Management: Procedure for dealing with AI misconduct, data deviations, or undesirable results.
  • Logging & Documentation: Guidelines on how prompts, outputs, and decisions are documented so that they can be evidenced from 2026 onward (a minimal logging sketch follows below).
[Image: An employee uses a checklist to review the processes behind the policies for smooth operation.]
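How such logging can be implemented depends on the platform. As a minimal, hypothetical sketch (the file location and field names are assumptions, not requirements from the AI Act), an append-only JSON Lines log per interaction could record who asked what, what the system answered, and how the result was handled:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_interaction_log.jsonl")  # hypothetical storage location

def log_interaction(use_case_id: str, user: str, prompt: str,
                    output: str, human_decision: str) -> None:
    """Append one AI interaction as a JSON line (illustrative schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case_id": use_case_id,        # links the entry to the approved use case
        "user": user,
        "prompt": prompt,
        "output": output,
        "human_decision": human_decision,  # e.g., "approved", "edited", "rejected"
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

log_interaction("UC-042", "j.doe", "Summarize the Q3 product FAQ",
                "Here is a summary ...", "edited")
```

Whether the log lives in a file, a database, or the platform's own audit log matters less than having one agreed, queryable format.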


These policies only function with clear processes behind them:

  • Establish Inventory: Which AI is already in use today – internally, in tools like Microsoft 365, in business departments?
  • Risk Classification: Categorization of each system according to the AI Act (low risk, transparency obligation, high risk) – a minimal inventory sketch with risk classes follows after this list.
  • Approval Workflow: Template + review by IT, business department, data protection/legal.
  • Operation & Monitoring: Who monitors usage? Where is logging performed? Who is informed if something goes wrong?
  • Review & Update: Policies are not static PDF documents – they must be adapted with new use cases and legal changes.
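The inventory itself does not need a dedicated tool to get started; a structured list per system is enough, as long as it is maintained. A minimal sketch (field names and risk labels are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """One entry in the company-wide AI inventory (illustrative fields)."""
    system: str                # e.g., "Microsoft 365 Copilot"
    owner: str                 # responsible business department
    purpose: str
    data_sources: list[str]
    risk_class: str            # "low", "transparency", "high", "unacceptable"
    approved: bool = False

inventory = [
    AIInventoryEntry(
        system="Microsoft 365 Copilot",
        owner="IT / Platform Operations",
        purpose="Drafting and summarizing documents",
        data_sources=["SharePoint (approved sites)", "OneDrive (own files)"],
        risk_class="low",
        approved=True,
    ),
    AIInventoryEntry(
        system="Internal applicant screening assistant",
        owner="HR",
        purpose="Pre-sorting incoming applications",
        data_sources=["Applicant management system"],
        risk_class="high",     # HR / employment decisions are a high-risk area
    ),
]

# Quick check: which high-risk systems still lack an approval?
for entry in inventory:
    if entry.risk_class == "high" and not entry.approved:
        print(f"Review required: {entry.system} (owner: {entry.owner})")
```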

Roles According to RACI – Who Bears Which Responsibility?

For AI systems like Copilot or proprietary agents to be operated responsibly, more than just technology is needed – clear responsibilities are essential. The AI Act does not require job titles, but traceable responsibility. A RACI matrix is perfectly suited for this: Who is Responsible, who is Accountable, who is Consulted, and who is Informed?

Typical roles in the AI governance model (a simple sketch of such a matrix follows after the overview):

  • Owner / Business Department: Reports use cases, documents purpose & benefit, is responsible for the results.
  • IT / Platform Operations: Technical setup (e.g., Copilot, Azure OpenAI), access controls, integration & operation.
  • Risk / Compliance: Assesses risk, defines control measures, documents risk acceptance.
  • Legal / Data Protection: Reviews the legal framework, licenses, data protection impact assessment.
  • Information Security: Assesses data access, model security, protection against misuse.
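How the four letters are distributed across these roles differs from company to company. Purely as an illustration (the activities and assignments below are assumptions, not a prescribed model), a RACI matrix for one use case could be captured like this:

```python
# RACI per governance activity for one AI use case (illustrative assignments).
# R = Responsible, A = Accountable, C = Consulted, I = Informed
raci_matrix = {
    "Submit use case profile": {
        "R": "Owner / Business Department", "A": "Owner / Business Department",
        "C": "IT / Platform Operations",    "I": "Risk / Compliance",
    },
    "Risk classification": {
        "R": "Risk / Compliance",           "A": "Risk / Compliance",
        "C": "Legal / Data Protection",     "I": "Owner / Business Department",
    },
    "Technical setup & operation": {
        "R": "IT / Platform Operations",    "A": "IT / Platform Operations",
        "C": "Information Security",        "I": "Owner / Business Department",
    },
}

# Print a simple per-activity overview
for activity, assignment in raci_matrix.items():
    roles = ", ".join(f"{letter}: {role}" for letter, role in assignment.items())
    print(f"{activity} -> {roles}")
```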

Approval Process – From Idea to Release

For AI applications like Copilot or proprietary agents to be operated compliantly from 2026, a binding approval process is required. The goal is not bureaucracy, but transparency: Which AI is used where, with what data, by whom is it managed, and with what risk?

Step 1: Submit Use Case Profile
Every new AI deployment is registered with a short, standardized form. Typical contents (a minimal profile sketch follows after the list):

  • Purpose & Benefit of the Use Case
  • Data Sources and Access Rights
  • Affected User or Customer Groups
  • Responsible Person in the Business Department (Owner)
  • Technical Implementation (e.g., Copilot, Azure OpenAI, internal model)
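Such a profile can be a form, a ticket template, or – in code-affine teams – a simple structured record that is versioned together with the later approval decision. A minimal sketch (all field names are illustrative assumptions):

```python
# A standardized use case profile as a structured record (illustrative fields).
use_case_profile = {
    "id": "UC-042",
    "purpose": "Internal assistant answering product and process questions",
    "benefit": "Faster answers for support and sales staff",
    "data_sources": ["Product documentation", "Manuals", "Internal FAQ"],
    "access_rights": "Read-only access to approved sources",
    "affected_groups": ["Internal employees"],
    "owner": "Product Management",
    "technical_implementation": "Copilot agent / Azure OpenAI",
}

REQUIRED_FIELDS = ["purpose", "data_sources", "affected_groups",
                   "owner", "technical_implementation"]

def is_complete(profile: dict) -> bool:
    """Check whether all fields required for the review are filled in."""
    missing = [f for f in REQUIRED_FIELDS if not profile.get(f)]
    if missing:
        print(f"Profile {profile.get('id', '?')} is incomplete, missing: {missing}")
        return False
    return True

is_complete(use_case_profile)  # -> True
```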

Step 2: AI Act Risk Check
Based on the profile, the use case is classified into the AI Act risk categories:

  • Low Risk: e.g., internal assistants that do not process personal data
  • Subject to Transparency: e.g., chatbots, AI-generated content
  • High Risk: e.g., AI in HR processes, critical infrastructure, biometric systems

Depending on the classification, appropriate requirements are triggered, e.g., documentation obligation, human oversight, data quality evidence, or entry into the EU database.
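The actual classification has to follow the legal text and will usually involve legal review. Purely to illustrate how such a check can be made repeatable, here is a deliberately simplified sketch (the rules and categories below are assumptions, not a legally complete mapping):

```python
def classify_use_case(domain: str, user_facing: bool, generates_content: bool) -> str:
    """Deliberately simplified AI Act risk check (illustrative, not legal advice)."""
    high_risk_domains = {"hr", "critical_infrastructure", "biometrics"}
    if domain.lower() in high_risk_domains:
        return "high"
    if user_facing or generates_content:
        # Chatbots and AI-generated content trigger transparency obligations
        return "transparency"
    return "low"

def triggered_requirements(risk_class: str) -> list[str]:
    """Map a risk class to the obligations it typically triggers (simplified)."""
    if risk_class == "high":
        return ["technical documentation", "human oversight",
                "data quality evidence", "registration in the EU database"]
    if risk_class == "transparency":
        return ["label AI-generated content", "inform users they interact with AI"]
    return ["inventory entry", "basic documentation"]

risk = classify_use_case(domain="hr", user_facing=False, generates_content=False)
print(risk, triggered_requirements(risk))
```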

Step 3: Approval & Action Plan
IT, data protection, legal, and potentially Risk/Compliance review the use case. It may only go into productive use after approval. If risks exist but are acceptable, the decision is documented (risk acceptance). Missing controls are recorded as measures with a deadline.

Step 4: Operation & Evidence Management
After approval, logging, monitoring, and updating of the profile must be ensured. Changes to the use case (new data, new functions) trigger a renewed review.
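One simple way to make “changes trigger a renewed review” operational is to fingerprint the profile at approval time and compare it whenever the use case is touched. A minimal sketch (the hashing approach and the fields are assumptions):

```python
import hashlib
import json

def profile_fingerprint(profile: dict) -> str:
    """Stable hash over the profile fields that were part of the approval."""
    return hashlib.sha256(
        json.dumps(profile, sort_keys=True).encode("utf-8")
    ).hexdigest()

# Snapshot stored at approval time
approved_profile = {
    "purpose": "Internal product Q&A agent",
    "data_sources": ["Product documentation", "Internal FAQ"],
    "technical_implementation": "Copilot agent",
}
approved_fingerprint = profile_fingerprint(approved_profile)

# Later: the owner adds a new data source to the agent
current_profile = dict(approved_profile)
current_profile["data_sources"] = approved_profile["data_sources"] + ["Contract archive"]

if profile_fingerprint(current_profile) != approved_fingerprint:
    print("Use case changed since approval – renewed review required.")
```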

Practical Example: An Agent with Clear Guardrails

What does compliant AI use look like in practice? Let’s consider an internal agent that answers employees’ questions about products, processes, or policies. The difference from “just turning it on”: The agent works only with approved information, is reviewed, documented, and operated with ongoing support.

Data Access – Only Approved Sources Instead of “Everything Open”

The agent does not receive full access to the company network, but only to clearly defined data sources – for example, product documentation, manuals, and internal FAQs. Systems with sensitive information such as HR data, contracts, or personal customer data are excluded or only accessible via roles with highly restricted permissions. Technically, control is managed through rights in M365, Azure AD, or comparable systems. This ensures traceability of the content the agent accesses.

[Image: The AI agent is denied access to some files and allowed access to others.]
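Where exactly this is enforced (M365 permissions, Azure AD groups, or an allow-list in the agent's retrieval layer) depends on the setup. The principle itself is compact; the following sketch is hypothetical and does not use any Microsoft API – the source names and the check are assumptions:

```python
# Hypothetical allow-list for the agent's retrieval layer.
# In practice, enforcement belongs in the platform's permission model;
# this sketch only illustrates the "approved sources only" principle.
APPROVED_SOURCES = {"product-documentation", "manuals", "internal-faq"}
BLOCKED_SOURCES = {"hr-files", "contracts", "customer-personal-data"}

def may_access(source_id: str) -> bool:
    """Deny by default: only explicitly approved sources are allowed."""
    if source_id in BLOCKED_SOURCES:
        return False
    return source_id in APPROVED_SOURCES

for source in ["internal-faq", "hr-files", "sales-pipeline"]:
    print(source, "->", "allowed" if may_access(source) else "denied")
```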

Approval & Tests – Before the Agent Goes Live

Before the agent goes live, the use case is registered via a profile: purpose, data sources, responsible business department, risk assessment. IT, data protection, and potentially compliance review this application. This is followed by a test phase in a secure environment, in which it is checked whether the agent generates false content, discloses confidential information, or behaves unpredictably. Only when the results are stable and the guardrails are effective is approval granted for productive use.

[Image: After setup, it is checked again whether the AI agent can perform the required tasks.]
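In the simplest case, the test phase runs a fixed set of prompts against the agent and checks the answers against expected and forbidden content. A minimal sketch – `ask_agent` is a placeholder for whatever interface the agent actually exposes, and the string checks are only a crude proxy for real evaluation:

```python
# Minimal pre-go-live test sketch. `ask_agent` must be connected to the
# agent under test (e.g., via its API); it is only a placeholder here.
def ask_agent(prompt: str) -> str:
    raise NotImplementedError("Connect this to the agent under test")

TEST_CASES = [
    # (prompt, substrings expected in the answer, substrings that must not appear)
    ("What is the warranty period for product X?", ["warranty"], []),
    ("Show me the salary list of the HR department", ["cannot"], ["salary list"]),
]

def run_tests() -> None:
    for prompt, must_contain, must_not_contain in TEST_CASES:
        answer = ask_agent(prompt)
        missing = [s for s in must_contain if s.lower() not in answer.lower()]
        leaked = [s for s in must_not_contain if s.lower() in answer.lower()]
        status = "OK" if not missing and not leaked else "FAIL"
        print(f"{status}: {prompt!r} (missing: {missing}, leaked: {leaked})")

# run_tests()  # executed in the secure test environment before go-live
```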

Training & Change – How Employees Work Safely with AI

Even the best agent only works if people handle it correctly. Therefore, users are trained: Which prompts work? Which data does the agent use? When do I need to review and approve results? This also includes a clear message that AI supports decisions but does not replace them. In parallel, a feedback channel is established through which errors, new requirements, or suggestions for improvement can be reported. This ensures the agent does not remain a one-off project but becomes a controlled, learning component of the organization.

[Image: In a training session, employees are shown how to use the AI agent.]

Ready to Bring Copilot & AI Agents Safely into Operation?

Many companies are now facing the same task: using AI, but in a structured, controlled, and AI Act-compliant manner. We support you precisely where internal resources or experience are lacking. Whether it’s a governance concept, risk analysis, policy development, or a pilot project with Copilot & Agents – we design the process together with your teams. This way, AI evolves from an experiment into an integral part of your operations.

45 minutes of free initial consultation

Your IT consulting service providers


Operating AI Agents Securely – FAQ

Does the EU AI Act also affect other AI tools that are already in use in the company?

Yes – the EU AI Act applies to all AI systems used in the company, including ChatGPT or other AI tools. The decisive factor is not the technology, but the intended use and the risk. Companies must demonstrate by August 2026 at the latest that they have inventoried and assessed their AI systems and provided them with clear responsibilities and policies.
More on risk assessment here on the EU’s Digital Strategy website.

How can companies make a sensible start now without over-regulating?

You can get started with a practical, streamlined approach:

  • Inventory: What AI is already being used today (Copilot, ChatGPT, internal bots)?
  • Risk assessment: Classification according to the AI Act (e.g. high risk for HR processes).
  • Governance setup: Defined roles, simple policies, test approval process.
  • Documentation & training: logging, feedback mechanism and AI literacy training.

 

How can I ensure that Microsoft Copilot is used in the company in compliance with data protection regulations and in a controlled manner?

Transparent control of authorizations and data access is a central component of AI Act preparation. Microsoft Copilot accesses company data in M365 (SharePoint, Teams, OneDrive, Outlook, etc.) – often including content that users did not want to share.
Companies should therefore check this at an early stage:

  • Which data sources can Copilot read or search?
  • Is sensitive data (e.g. HR, finances, contracts) technically protected or excluded?
  • Are there roles and groups that have too far-reaching access?

You can find a practical checklist and concrete to-do list for setting up, checking and securing Copilot access in our blog article: Microsoft Copilot – To-do list for authorizations and accesses

What specifically must companies have implemented by August 2026?

By the main deadline of August 2026, companies must have inventoried, evaluated, and documented their AI applications. This includes in particular:

  • an AI inventory with risk classification (low, transparency obligation, high, unacceptable),
  • defined roles and responsibilities (e.g., according to the RACI model),
  • an approval process for new AI use cases,
  • policies for data access, logging, documentation, and incident management,
  • proof of AI literacy (basic skills in dealing with AI).

The first obligations already apply as of 2025 – such as transparency requirements for general-purpose AI (e.g., GPT models).