
The EU AI Act: What Your Company Needs to Know and Do

A practical guide for business operators — not lawyers — on the EU AI Act obligations, deadlines, and how to comply.

Last updated: March 2026 · 15 min read

What Is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive law regulating artificial intelligence. Adopted on May 21, 2024, and published in the Official Journal of the EU on July 12, 2024, it establishes harmonized rules for placing AI systems on the EU market and using them within the European Union.

The regulation takes a risk-based approach: the higher the risk an AI system poses, the stricter the requirements. It applies to anyone placing AI systems on the EU market or using AI whose output is intended for use in the EU — regardless of where the developer or deployer is based.

This means that a company outside the EU offering AI-powered tools to European customers is also subject to the regulation. The main exclusions are systems used purely for military or national-security purposes, research and development activity before market placement, and purely personal, non-professional use.

For most companies, the key distinction is between a provider and a deployer. A provider develops an AI system. A deployer uses it in a business context. If your company uses AI tools — ChatGPT, HR screening tools, credit scoring, customer service chatbots — you are a deployer. And you have specific obligations.

EU AI Act Timeline & Deadlines

The AI Act doesn't apply all at once. It enters into force in phases, with different obligations becoming enforceable at different dates.

Aug 1, 2024

AI Act enters into force

Entered into force 20 days after publication in the Official Journal of the EU (July 12, 2024).

Feb 2, 2025

Prohibited Practices (Art. 5) + AI Literacy (Art. 4)

Already enforceable for over a year. These are your first two obligations.

Aug 2, 2025

GPAI obligations + AI Office operational

Rules for general-purpose AI models. Market Surveillance Authorities designated.

Aug 2, 2026

Full application: High-risk obligations

All deployer obligations (Art. 26), FRIA (Art. 27), transparency (Art. 50). This is the big deadline.

Aug 2, 2027

High-risk AI in regulated products

AI embedded in medical devices, machinery, automotive (Annex I products).

Aug 2, 2030

Legacy public sector systems

AI systems already in use by public bodies must comply.

As of March 2026, Art. 4 and Art. 5 have been enforceable for over a year. In 5 months (August 2, 2026), ALL deployer obligations for high-risk systems become enforceable. Companies should treat August 2026 as the binding deadline.

EU AI Act Risk Classification

The EU AI Act classifies AI systems into four risk levels. The classification determines which obligations apply to you.

Unacceptable

Prohibited practices (Art. 5). Banned entirely in the EU.

Up to €35M or 7% of turnover

Social scoring, manipulative AI, emotion recognition at work

High

Annex III use cases + safety components. Full compliance required.

Up to €15M or 3% of turnover

HR screening, credit scoring, biometric ID, critical infrastructure

Limited

AI interacting with people. Transparency obligations.

Up to €15M or 3% of turnover (Art. 50 transparency violations fall in the same penalty tier as high-risk breaches)

Chatbots, content recommendation, ad targeting

Minimal

Most AI systems. No mandatory obligations beyond AI Literacy.

None (except Art. 4)

Spam filters, translation tools, content generation, analytics

The tool itself is not high-risk; its use case is. The same tool, such as ChatGPT, can be minimal risk (writing product descriptions), limited risk (powering a customer chatbot), or high risk (screening CVs), depending on how your company uses it.
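
The use-case-driven classification can be sketched as a simple lookup. The catalogue below (`USE_CASE_RISK`, `classify`) is hypothetical and deliberately simplified: real classification requires a legal check of each deployment against Annex III and Art. 5, not a dictionary.

```python
# Illustrative sketch only: mapping *use cases* (not tools) to EU AI Act
# risk levels. The catalogue is a hypothetical example, not legal advice.

USE_CASE_RISK = {
    "cv_screening": "high",            # Annex III: employment
    "credit_scoring": "high",          # Annex III: essential services
    "customer_chatbot": "limited",     # Art. 50 transparency
    "product_descriptions": "minimal",
    "social_scoring": "unacceptable",  # Art. 5 prohibited practice
}

def classify(use_case: str) -> str:
    """Return the risk level for a known use case, else 'unclassified'."""
    return USE_CASE_RISK.get(use_case, "unclassified")
```

Note that the key is the use case, not the vendor or the model: the same underlying tool can legitimately appear under several keys with different risk levels.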

Art. 5 Prohibited AI Practices

Article 5 lists eight categories of AI systems that are entirely banned in the EU. These have been enforceable since February 2, 2025, and carry the highest tier of penalties (up to €35M or 7% of global turnover).

  1. Manipulative and deceptive AI: AI using subliminal, manipulative, or deceptive techniques to distort behavior, causing or likely to cause significant harm.

  2. Exploiting vulnerabilities: AI deliberately exploiting weaknesses due to age, disability, or socio-economic situation to distort behavior.

  3. Social scoring: AI evaluating or classifying people based on social behavior or personal characteristics, leading to detrimental treatment in unrelated contexts.

  4. Crime prediction from profiling: AI assessing the risk of committing a crime solely based on profiling or personality traits.

  5. Untargeted facial scraping: creating facial recognition databases by scraping images from the internet or CCTV in an untargeted manner.

  6. Emotion recognition at work/school: AI inferring emotions in workplace and educational settings (except for medical or safety reasons).

  7. Biometric categorization by sensitive traits: AI categorizing people based on biometrics to deduce race, political opinions, union membership, religion, or sexual orientation.

  8. Real-time remote biometric identification: use in public spaces by law enforcement (with narrow exceptions for missing persons, imminent threats, and serious crime suspects).

For e-commerce companies: personalized ads based on user preferences are NOT inherently manipulative. However, AI scoring customers for return risk that leads to detrimental treatment in unrelated contexts could potentially fall under social scoring.

Am I a Deployer Under the EU AI Act?

Under the AI Act, a deployer is any natural or legal person that uses an AI system under its authority — except where the AI is used in the course of a personal, non-professional activity. If your company uses AI tools in its business operations, you are almost certainly a deployer.

This includes using third-party AI services: if your HR team uses an AI screening tool, if your finance team uses AI-powered credit assessment, or if your customer service runs an AI chatbot — the company is a deployer for each of those systems.

The distinction matters because deployers have their own set of obligations under Art. 26, separate from provider (developer) obligations. You cannot simply rely on your vendor being compliant — you have independent legal responsibilities.

Important: you become a provider (and inherit provider duties) if you put your own brand on a high-risk AI system, substantially modify a system, or change its intended purpose so it becomes high-risk. Fine-tuning, RAG, or significant customization can trigger this reclassification.

EU AI Act Deployer Obligations — What You Must Do

Deployer obligations depend on the risk level of the AI systems you use. Below is what's already enforceable and what takes effect in August 2026.

Already enforceable (since February 2, 2025)

Art. 4 · All systems

AI Literacy

Ensure that staff involved in the operation and use of AI systems have a sufficient level of AI literacy. This applies to every company using any AI system, regardless of risk level. In practice, this means AI literacy training matched to each person's role, technical knowledge, and the context in which AI is used.

Art. 5 · All systems

Prohibited Practices Screening

Screen all AI systems in use to confirm none fall under the eight prohibited categories. Document the screening process and results. This applies to every company, regardless of whether you use high-risk systems.

Enforceable from August 2, 2026 (for high-risk systems)

Art. 26(1) · High-risk

Use in accordance with instructions

Use high-risk AI systems in accordance with the provider's instructions of use. Ensure the system is used only for its intended purpose and within its technical boundaries.

Art. 26(2) · High-risk

Human oversight

Assign human oversight to natural persons who have the necessary competence, training, authority, and resources. The overseer must be able to understand the system's capabilities and limitations.

Art. 26(5) · High-risk

Monitoring & periodic reviews

Monitor the operation of high-risk AI systems and report anomalies, malfunctions, and unexpected behavior.

Art. 26(6) · High-risk

Record-keeping (logging)

Keep the logs automatically generated by the high-risk AI system for at least six months, unless a longer retention period is required by applicable Union or national law.
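
As a minimal sketch of tracking that retention window, the snippet below assumes a six-month floor (approximated as 183 days; a stricter period may apply under other law, and the helper name `earliest_deletion_date` is illustrative):

```python
# Sketch: earliest date a high-risk system's log may be purged,
# assuming only the Art. 26(6) six-month minimum applies.
from datetime import date, timedelta

MIN_RETENTION = timedelta(days=183)  # ~6 months

def earliest_deletion_date(log_created: date) -> date:
    """Logs created on `log_created` must be kept at least until this date."""
    return log_created + MIN_RETENTION
```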

Art. 26(7) · High-risk

Worker notification

Inform workers and their representatives before deploying a high-risk AI system in the workplace.

Art. 26(11) · High-risk

Informing individuals

Inform individuals subject to decisions made by a high-risk AI system about the use of such a system.

Art. 27 · High-risk

Fundamental Rights Impact Assessment (FRIA)

Perform a FRIA before deploying a high-risk AI system from Annex III. Required for public bodies and private entities in specific sectors.

Art. 73 · High-risk

Serious incident reporting

Report serious incidents to the provider and Market Surveillance Authority immediately when they threaten health, safety, or fundamental rights.

GDPR Art. 35 · High-risk

Data Protection Impact Assessment (DPIA)

Perform a DPIA for personal data processing by AI systems that may pose high risk to rights and freedoms. A GDPR requirement complementing FRIA.

FRIA: Who Needs One?

A Fundamental Rights Impact Assessment (FRIA) is required under Art. 27 before deploying a high-risk AI system from Annex III.

Public bodies

All bodies governed by public law deploying high-risk AI systems from Annex III — no exceptions.

Credit scoring & BNPL

Private entities using AI to assess creditworthiness of natural persons or establish their credit scoring.

Insurance

Private entities using AI for risk assessment and pricing in life and health insurance.

Other Annex III sectors

Other high-risk use cases from Annex III when required by the supervisory authority or national legislation.

FRIA and DPIA are different documents but complement each other. DPIA (GDPR Art. 35) focuses on personal data protection risks. FRIA (Art. 27) assesses broader fundamental rights impact — discrimination, freedom of expression, access to justice. Data from one assessment can feed the other.

FRIA results must be submitted to the relevant Market Surveillance Authority using a template to be developed by the AI Office.

Transparency Notice Requirements

Art. 50 imposes transparency obligations on specific AI systems, regardless of risk level:

Chatbots and systems interacting with people: users must be informed they are interacting with AI (unless obvious from context).

Deepfakes and synthetic content: content generated or manipulated by AI must be labelled.

Emotion recognition and biometric categorization systems: affected persons must be informed about the system's operation.

High-risk AI systems making decisions about individuals: those individuals must know a decision was made with the assistance of an AI system (Art. 26(11)).

Additionally, Art. 26(7) requires worker notification BEFORE deploying high-risk AI in the workplace. This is a separate obligation from general transparency notices.

AI and Shine generates three types of transparency notices: interaction notices (chatbots), decision notices (systems making decisions about individuals), and general notices.

Human Oversight Requirements

Art. 26(2) requires deployers of high-risk AI systems to assign oversight to natural persons with the necessary competence, training, and authority.

  • Assign specific persons responsible for overseeing each high-risk AI system
  • Document their competence — understanding the system, ability to interpret outputs
  • Formal authority to intervene — ability to override, suspend, or shut down the system
  • Regular training of oversight persons on the system's operation and limitations
  • Auditable evidence of oversight — not just a name, but documented actual oversight actions

Human oversight is not a formality. The supervisory authority can demand proof that designated persons genuinely have the competence and ability to intervene — not just appear on a list.
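
To make oversight auditable, each intervention can be captured as a structured record. The schema below (`OversightAction`, `record`) is a hypothetical sketch of what "documented actual oversight actions" could look like, not a prescribed format:

```python
# Sketch: an auditable human-oversight log entry (hypothetical schema),
# recording who intervened, on which system, what was done, and why.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class OversightAction:
    system: str       # which high-risk AI system
    overseer: str     # designated natural person
    action: str       # e.g. "override", "suspend", "approve"
    rationale: str
    timestamp: datetime

def record(action: OversightAction, log: list) -> list:
    """Append an oversight action to the audit trail."""
    log.append(action)
    return log
```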

Market Surveillance Authorities

Each EU Member State must designate Market Surveillance Authorities (MSAs) responsible for enforcing the EU AI Act. Deadline: August 2, 2025.

Country | Authority | Status
Poland 🇵🇱 | designation in progress | pending
France 🇫🇷 | CNIL, DGCCRF, ARCOM | designated
Germany 🇩🇪 | BNetzA (Bundesnetzagentur) | designated
Spain 🇪🇸 | AESIA | designated
Italy 🇮🇹 | AgID / Garante | designated
Netherlands 🇳🇱 | Algoritmetoezicht (AT) | designated
Ireland 🇮🇪 | 15 bodies + AI Office | designated
Belgium 🇧🇪 | designation in progress | pending

For companies operating in Poland: while formal supervisory authorities have not yet been designated, Art. 5 and Art. 4 obligations are already enforceable. UODO already has experience enforcing GDPR in the AI context — several decisions on personal data processing by AI systems have been issued.

Penalties & Fines

The EU AI Act provides three tiers of administrative penalties. Within each tier, the applicable maximum is the higher of the fixed amount and the percentage of global annual turnover (for SMEs, the lower of the two).

Tier 1 — Prohibited practices

Up to €35M or 7% of global annual turnover

Violation of Art. 5 — engaging in prohibited AI practices

Tier 2 — High-risk systems

Up to €15M or 3% of global annual turnover

Non-compliance with high-risk AI system requirements (documentation, human oversight, FRIA, monitoring, etc.)

Tier 3 — Incorrect information

Up to €7.5M or 1% of global annual turnover

Supplying incorrect, incomplete, or misleading information to authorities

Headline figures like €35M are designed for the largest violations by the largest companies. For a 200-person company, the practical risk is proportionally smaller — but the legal obligation is the same regardless of company size.

Member States may establish lower thresholds for smaller organizations, but the regulation does not eliminate penalties for any category of company. Organization size is a mitigating factor, not an exemption.
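
The tier arithmetic can be sketched directly, assuming the "whichever is higher" rule for each tier and the "whichever is lower" rule for SMEs described above (the function name `max_fine` and the tier keys are illustrative):

```python
# Sketch: maximum administrative fine per tier, as a function of
# global annual turnover. Figures follow the three tiers above.

TIERS = {
    "prohibited":     (35_000_000, 0.07),  # Art. 5 violations
    "high_risk":      (15_000_000, 0.03),  # high-risk requirement breaches
    "incorrect_info": (7_500_000,  0.01),  # misleading info to authorities
}

def max_fine(tier: str, annual_turnover_eur: float, sme: bool = False) -> float:
    """Ceiling = higher of fixed amount and turnover share (lower for SMEs)."""
    fixed, pct = TIERS[tier]
    pick = min if sme else max
    return pick(fixed, pct * annual_turnover_eur)
```

For a company with €2B turnover, the prohibited-practices ceiling is 7% of turnover (€140M), well above the €35M fixed amount; for a small firm, the fixed amount dominates, and the SME rule caps it at the turnover share instead.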

EU AI Act and GDPR — How They Work Together

The EU AI Act does not replace GDPR. Both regulations apply simultaneously. Most AI systems process personal data, so in practice you need to comply with both.

Aspect | DPIA (GDPR Art. 35) | FRIA (Art. 27 EU AI Act)
Purpose | Assess risks to personal data protection | Assess impact on fundamental rights (broader scope)
When required | Processing likely to result in high risk to rights and freedoms | High-risk AI systems from Annex III
Who performs | Data controller | AI system deployer
Scope | Personal data, security, minimization | Fundamental rights: non-discrimination, freedom of expression, privacy, access to justice
Legal basis | GDPR Art. 35 | EU AI Act Art. 27
Cross-feeding | DPIA data can feed FRIA | FRIA data can feed DPIA

Companies already GDPR-compliant have a significant head start: they maintain records of processing (ROPA), have DPAs with vendors, conduct DPIAs, and have a DPO. These same processes and documents form the foundation for EU AI Act compliance.

How to Start — 7 Practical Steps

EU AI Act compliance is a process, not a one-time project. Here's a practical path for companies that want to start now.

1

Inventory all AI tools

Review all tools used in the company. Check which have AI features. Most companies discover 3–5× more AI tools than expected.

2

Classify each tool's risk level

For each AI tool, determine: how it's used (use case), whether it falls under Annex III, what risk level applies. Remember: the use case determines classification.

3

Screen for prohibited practices (Art. 5)

Verify that none of your use cases engage in practices prohibited under Art. 5. This is already enforceable and carries the highest penalties.

4

Conduct AI Literacy training (Art. 4)

Organize documented AI literacy training for all staff who use or operate AI systems. Certification is not required; internal training with attendance records is sufficient. Required since February 2025.

5

Collect and analyze vendor DPAs

Review contracts with AI tool vendors. Do you have DPAs? Do they contain clauses required by GDPR? Are they ready for AI Act requirements? Most vendors haven't updated their contracts yet.

6

Prepare documentation for high-risk tools

For high-risk tools: risk assessment, FRIA (if required), DPIA, transparency notices, worker notification, human oversight assignment.

7

Establish governance processes

Implement processes: new tool approval, incident reporting, periodic reviews, log retention tracking. Compliance is an ongoing process, not a one-time audit.

Disclaimer: This guide is provided for informational purposes only and does not constitute legal advice. While we strive to keep the information accurate and up to date, the EU AI Act is a complex regulation subject to ongoing interpretation by national authorities and courts. Companies should consult qualified legal counsel for advice specific to their situation.

Ready to start your compliance journey?

Book a demo and see how AI and Shine handles every step — from inventory to compliance proof.