AI Regulation and AI Act: current status

The AI Regulation: A milestone for the regulation of artificial intelligence

The world of artificial intelligence (AI) is developing rapidly. Increasingly complex AI systems are permeating our everyday lives and changing the way we live and work. In view of these developments, it is essential to create a legal framework that ensures the ethical and safe use of AI. The European Union has created such a framework with the AI Regulation, also known as the AI Act. This comprehensive piece of legislation aims to regulate the development and use of AI in Europe while promoting innovation.

In this article, we will take an in-depth look at the AI Regulation. We will explain the objectives of the regulation, what types of AI systems it affects and what obligations arise for companies and organisations. We will also look at the most important dates and deadlines that companies must observe when implementing the new regulations.

Key information on the AI Regulation

  • The correct legal term in the EU, and the official name of the German legislation, is "AI Regulation" (in German: KI-Verordnung, abbreviated KI-VO). The international English term is AI Act, which is also commonly used colloquially in German. All of these terms refer to the same EU regulation.
  • The AI Regulation has already entered into force, but its requirements become binding in stages over a multi-year transition period.
  • The AI Regulation pursues a risk-based approach and categorises AI systems into four risk categories: unacceptable risk (e.g. social scoring), high risk (e.g. AI in medical diagnostics), limited risk (e.g. chatbots) and minimal risk (e.g. AI-supported games). Depending on the category, different requirements apply to ensure security and the protection of fundamental rights.
  • The regulation has extraterritorial application: it applies not only to companies based in the EU, but also to organisations outside the EU whose AI systems are used within the EU. This ensures that EU citizens are protected from risks posed by AI systems, regardless of where those systems were developed.
  • The AI Regulation aims to strike a balance between promoting innovation and protecting fundamental rights by setting clear rules for the use of AI. It protects fundamental rights such as data protection and non-discrimination while creating a stable legal framework that encourages investment in AI technologies.

What is the EU AI Regulation?

The EU's AI Regulation, also known as the AI Act, is a comprehensive set of rules that regulates the use of AI systems within the EU. The aim is to ensure the safe and ethical use of AI while promoting innovation.

Legislative objectives and background to the AI Regulation

The AI Act is an important piece of legislation that regulates the development and use of artificial intelligence in the European Union. Its aim is to create a balanced framework that both safeguards fundamental rights and promotes innovation.

Key objectives of the AI Regulation:

  • Protection of fundamental rights: The regulation ensures that AI systems are not misused to violate fundamental rights such as data protection or non-discrimination.
  • Promoting innovation: By providing clear and standardised rules, the AI Regulation creates a secure framework for companies to develop and use new AI technologies.
  • Minimisation of risks: Particularly risky AI applications are regulated more strictly in order to minimise potential dangers for society.

Why is the AI Regulation necessary?

AI systems have long been part of our everyday lives. While simple applications such as film recommendations are unproblematic, others, such as credit decisions or social scoring, can have a significant impact on people's lives. Until now, there has been no specific legislation for AI in the EU, which is why the AI Regulation closes an important gap.

Timetable for the implementation of the AI Act: Action steps for organisations

  • 12 July 2024

    Publication of the AI Regulation in the Official Journal of the EU

    The AI Regulation is officially published. This marks the start of the transition period.

    Action steps:

    • Find out about the content and requirements of the AI Regulation.
    • Identify relevant sections of the law that could affect your organisation.
  • 1 August 2024

    Entry into force of the AI Regulation

    The AI Regulation officially comes into force. At this point there are no binding requirements yet; the rules will become applicable in stages.

    Action steps:

    • Set up a team or working group to monitor the implementation of the AI Regulation.
    • Develop a roadmap to fulfil the requirements on time.
  • 2 November 2024

    Deadline for member states

    EU Member States must designate authorities responsible for the protection of fundamental rights.

    Action steps for organisations:

    • Monitor the designation of the competent authorities in order to contact them in good time if necessary.
    • Be prepared to respond to requests for information or enquiries from the authorities.
  • 2 February 2025

    Eliminate prohibited AI systems and ensure AI expertise

    From this date, AI systems that are considered unacceptable may no longer be used. These include:

    • Systems that manipulate human behaviour.
    • Systems that exploit vulnerabilities (e.g. of children).
    • Social scoring systems for evaluating people.

    Action steps:

    • Inventory: Check all deployed and planned AI systems for prohibited practices.
    • Risk management: Document that no prohibited practices are used.
    • Check service providers: Clarify with third-party providers that their systems are also compliant.

    Organisations must also ensure the AI competence of their employees from this date; the related action steps are covered in the section on AI competence later in this article.

  • 2 August 2025

    Applicability for general-purpose AI (GPAI) models

    From this date, extended requirements apply, especially to general-purpose models such as GPT-4, the model behind ChatGPT.

    Action steps:

    • Develop guidelines: Create an internal AI policy that describes all processes and rules.
    • Clarify responsibilities: Determine who is responsible for the use and monitoring of AI systems.
    • Create transparency: Keep a register of all AI systems, documenting their purpose and potential risks (a sketch of such a register follows this timeline).
  • 2 August 2026

    General applicability of the AI Regulation

    The rules for high-risk AI systems and the transparency obligations become applicable.

    Action steps:

    • Ensure compliance: Proof of safety and legal compliance must be available for high-risk AI systems. Consult external experts if necessary.
    • Train employees: Sensitise your team to dealing with AI and potential risks.
    • Clarity for users: Clearly indicate when users are interacting with an AI, e.g. with information such as: "Decision supported by AI."
    • Check external providers: Clarify whether third-party systems fulfil the new requirements.
  • 2 August 2027

    Applicability of the AI Regulation for certain high-risk AI systems

    Transitional periods end for high-risk AI systems that were used before the regulation came into force.

    Action steps:

    • Inventory analysis: Identify older AI systems and check whether they fall into the high-risk category.
    • Plan upgrades: Adapt existing systems or replace them with new, compliant solutions.
    • Obtain certifications: Work with accredited bodies to ensure the compliance of your systems.
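
Several of the action steps in this timeline, such as inventorying deployed AI systems, keeping a register of their purpose and risks, and flagging prohibited practices, amount to maintaining structured records. The following minimal Python sketch shows one way to do this; the field names, example entries and layout are illustrative assumptions, not requirements of the AI Regulation.

    from dataclasses import dataclass, field

    @dataclass
    class AISystemEntry:
        name: str
        purpose: str
        provider: str          # in-house or third party
        risk_class: str        # "unacceptable" | "high" | "limited" | "minimal"
        known_risks: list[str] = field(default_factory=list)

    register = [
        AISystemEntry("SupportBot", "customer-service chat", "third party",
                      "limited", ["users must be told they are talking to an AI"]),
        AISystemEntry("CreditScorer", "creditworthiness checks", "in-house",
                      "high", ["possible discrimination in credit decisions"]),
    ]

    # From 2 February 2025, anything in the "unacceptable" class must be retired:
    to_retire = [e.name for e in register if e.risk_class == "unacceptable"]
    assert not to_retire, f"Prohibited systems still deployed: {to_retire}"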

Scope of application and the most important definitions of the AI Regulation

The AI Regulation defines a comprehensive legal framework for artificial intelligence. It first defines what exactly is considered an AI system and which systems fall under its provisions. It then categorises these systems into different risk categories in order to define specific regulations for different areas of application. It does not matter where the developer or user of an AI system is located: The regulation applies to all AI systems that are placed on the market or used in the European Union.

Extraterritorial application of the AI Regulation

Put simply, "extraterritorial application" means that a law also applies outside the territory of the state that enacted it. In the case of the AI Regulation, this means that it applies not only to organisations based in the EU, but also to organisations based outside the EU, provided that their AI systems are used in the EU. AI systems are developed and used worldwide; to ensure a uniform legal framework for AI in the EU, the regulation must also reach organisations outside the EU whose systems are used there. Extraterritorial application thus ensures that EU citizens are protected from the risks that may arise from the use of AI systems, even if those systems were developed by organisations outside the EU.

A concrete example:

  • A US company develops an AI system for facial recognition and sells this system to a European company. Even though the US company is based outside the EU, it is subject to the provisions of the AI Act because the system is used in the EU.

Stakeholders affected by the AI Regulation

The AI Regulation applies to all players in the AI value chain. It affects organisations of every size, from start-ups to international corporations, and applies both to organisations that develop their own AI systems (providers) and to organisations that use third-party AI systems (operators). This includes not only the direct developers and users of AI systems, but also all other parties involved, such as suppliers and service providers.

"Provider" means a natural or legal person, public authority, agency or other body that develops an AI system or an AI model for general purposes or has an AI system or an AI model developed for general purposes and places it on the market or puts it into operation under its own name or brand, regardless of whether this is done for payment or free of charge;

AI Regulation, Article 3 (3)

"Operator" means a supplier, product manufacturer, distributor, authorised representative, importer or retailer;

AI Regulation, Article 3 (8)

"Importer" means a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country;

AI Regulation, Article 3 (6)

"Distributor" means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market;

AI Regulation, Article 3 (7)

What is the difference between AI system, AI model and GPAI model?

The difference between an AI system, an AI model and a GPAI model (general-purpose AI model) lies primarily in their scope, structure and purpose. This distinction is crucial for the application of the AI Regulation, as different requirements apply to AI systems and AI models; GPAI models, for example, are subject to special rules. The distinction also helps to delineate the responsibilities of the various players (providers, operators), and whether something is categorised as an AI system or an AI model influences the risk assessment and thus the safety measures to be applied.

AI system

An AI system is an application or tool that uses artificial intelligence to fulfil specific tasks. It includes not only the AI model itself, but also the entire infrastructure and software required to apply the AI technology to a specific purpose.


"AI system": a machine-supported system, which is designed in such a way that it can be operated with varying degrees of Autonomy can be operated and after its introduction Adaptability and which, for explicit or implicit goals, infers from the inputs it receives how it can generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments;

AI Regulation, Article 3 (1)

  • Components: AI models, algorithms, databases, user interfaces and hardware.
  • Purpose: The solution of specific problems or the fulfilment of clearly defined tasks, such as chatbots, facial recognition, automated diagnoses or recommendation systems.
  • Example: An autonomous vehicle is an AI system that combines various models (e.g. image recognition, speech processing) to navigate safely through traffic.

AI model

An AI model is the technical core of an AI system and consists of algorithms that have been trained with data to recognise patterns, make decisions or generate predictions. It is the mathematical or statistical construct behind the actual "intelligence".

  • Components: Algorithms, neural networks and trained parameters.
  • Purpose: Processing data to deliver results based on patterns learnt during training.
  • Example: An AI model for speech recognition could be trained to convert human speech into text. This model is then integrated into an AI system (e.g. a voice assistant).
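
To make this distinction concrete, here is a small illustrative Python sketch based on the speech-recognition example above: the model is just the trained input-to-output mapping, while the system wraps it in the infrastructure needed for a defined purpose. All names are invented for illustration.

    from typing import Callable

    # The "AI model": just the trained input-to-output mapping.
    def speech_to_text_model(audio: bytes) -> str:
        return "<transcript>"  # stand-in for a real trained model

    # The "AI system": the model plus interface, logging and a defined purpose.
    class VoiceAssistant:
        purpose = "convert spoken commands into text and respond to them"

        def __init__(self, model: Callable[[bytes], str]):
            self.model = model
            self.log: list[str] = []

        def handle(self, audio: bytes) -> str:
            text = self.model(audio)
            self.log.append(text)  # surrounding infrastructure, not the model itself
            return f"You said: {text}"

    assistant = VoiceAssistant(speech_to_text_model)
    print(assistant.handle(b"..."))  # You said: <transcript>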

GPAI model (General Purpose AI model)

A GPAI model (general-purpose AI model) is a specific type of AI model that can be used for a variety of different applications and tasks. It is not limited to a specific purpose and can therefore be used particularly flexibly.

  • Components: Advanced, scalable AI architectures, such as large language models (e.g. GPT) or multimodal models that can process text, images and other inputs.
  • Purpose: Broad applicability in different contexts, e.g. translation, text generation, problem solving or creative tasks.
  • Example: GPT-4, the model behind ChatGPT, is a GPAI model that can be used for text generation as well as for code development, translation or data analysis.

What is the risk-based approach of the AI Regulation?

The risk-based approach of the AI Regulation creates a clear framework for the safe and responsible use of innovative AI technologies. A risk-based approach means that measures, rules or regulations do not apply across the board to all cases, but are based on the specific risk posed by a technology, activity or system. In the context of the AI Act, this means that AI systems are categorised into the following different risk classes based on their potential hazard or impact on society: unacceptable risk, high risk, limited risk and minimal risk.

Organisations should familiarise themselves with the risk categories and the respective requirements in order to ensure compliance and minimise potential liability risks. To this end, it is important to identify the risk categories of the AI systems used as a first step.
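
As a first step, the four risk classes can be captured in a simple structure. The Python sketch below maps only the example applications named in this article to their classes; it is purely illustrative, and a real categorisation requires legal analysis of the regulation itself.

    from enum import Enum

    class RiskClass(Enum):
        UNACCEPTABLE = 1  # e.g. social scoring -> prohibited
        HIGH = 2          # e.g. medical diagnostics, creditworthiness checks
        LIMITED = 3       # e.g. chatbots -> transparency duties
        MINIMAL = 4       # e.g. AI-supported games

    # Keyword mapping taken from the examples in this article only.
    EXAMPLES = {
        "social scoring": RiskClass.UNACCEPTABLE,
        "medical diagnostics": RiskClass.HIGH,
        "creditworthiness": RiskClass.HIGH,
        "chatbot": RiskClass.LIMITED,
    }

    def triage(use_case: str) -> RiskClass:
        """First-pass guess; always confirm with a legal review."""
        return EXAMPLES.get(use_case, RiskClass.MINIMAL)

    print(triage("chatbot"))  # RiskClass.LIMITED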

Risk classes for AI systems:

AI systems that violate European values or pose an unacceptable risk are prohibited. Examples include:

  • Social scoring systems for evaluating people based on their behaviour.
  • Systems that encourage manipulative behaviour or specifically exploit people's vulnerabilities.
  • Biometric real-time remote identification systems for law enforcement purposes in public spaces (with exceptions).
  • Systems for recognising emotions in the workplace or in schools.

Practical tip: Organisations should check their deployed or planned AI systems to see whether they fall into this category. Such applications are strictly prohibited and may not be placed on the market.

High-risk AI systems are those that could affect the health, safety or fundamental rights of people. These include:

  • Safety-critical applications, such as in medical products or vehicles.
  • Systems that determine access to education, job opportunities or creditworthiness.

Requirements: Providers must subject these systems to an ex-ante conformity assessment and provide comprehensive technical documentation. In addition, they must:

  • implement quality and risk management systems,
  • ensure that training data comply with the requirements of the AI Act (Art. 10),
  • ensure transparency and control options for users.

Practical tip: Companies should set up internal processes at an early stage in order to fulfil these requirements. These include, for example, employee training, the creation of a fundamental rights impact assessment and the provision of a contact person for the European market.

AI systems that are intended for interaction with humans are subject to certain transparency requirements:

  • Users must be informed that they are interacting with an AI system.
  • Synthetically generated content (e.g. deepfakes) must be labelled as such.

Practical tip: Providers of chatbots or content generation systems should clearly communicate that they are AI-based systems. Simple and visible labelling can avoid potential infringements.
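
What such labelling can look like in practice is sketched below: a chatbot that prefixes a visible AI notice to its replies. The wording and function names are our own illustrative assumptions; Article 50 does not prescribe a specific format.

    AI_DISCLOSURE = "Note: You are chatting with an AI system."

    def reply_with_disclosure(answer: str, first_message: bool) -> str:
        """Prefix a visible AI notice, at least at the start of a conversation."""
        if first_message:
            return f"{AI_DISCLOSURE}\n{answer}"
        return answer

    print(reply_with_disclosure("How can I help you today?", first_message=True))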

AI systems that cannot be assigned to any of the above categories can be used without any special restrictions.

Practical tip: Companies can use such systems without additional regulatory requirements. Nevertheless, a regular review should take place if the area of application changes.

The AI Act also addresses general-purpose AI models, such as GPT-4. These models, which can be used in numerous contexts, are subject to specific requirements depending on the area of application. In particular, models with systemic risks must fulfil additional requirements.

Practical tip: Providers of such models should clearly document the intended deployment scenarios and ensure that appropriate safeguards are applied for risky applications.

Which supervisory authority monitors compliance with the AI Regulation?

The AI Regulation stipulates that each EU Member State must designate its own authority responsible for monitoring implementation. Germany must therefore also designate such an authority. The question of the specific supervisory authority for AI in Germany has not yet been finalised.

Possible candidates for this task include:

  • Federal Network Agency: Due to its technical expertise and experience in regulating digital markets, the Federal Network Agency is often mentioned as a possible candidate.
  • Federal Office for Information Security (BSI): The BSI is responsible for IT security and could also play a role due to its expertise in this area.
  • Federal Ministry for Digital and Transport (BMDV): As a ministry with a cross-departmental remit, the BMDV could assume a coordinating function.

What impact does the AI Regulation have on organisations?

The regulation brings both challenges and opportunities for organisations:

  • Compliance requirements: Organisations must check their AI systems for compliance and adapt them if necessary.
  • Competitive advantage: Compliance with regulations can increase customer confidence.
  • AI expertise: Organisations must ensure the AI competence of their employees.
  • Pressure to innovate: Organisations are required to develop innovative and compliant solutions.
  • Labelling obligation: Organisations must label content generated by AI.

Organisations must ensure the AI competence of employees

From February 2025, the EU AI Regulation will present organisations with a new challenge: ensuring AI competence among their employees. This means that organisations must ensure that people involved in the development, deployment or maintenance of AI systems have sufficient knowledge and understanding of this technology. This new regulation aims to promote the responsible use of artificial intelligence and minimise potential risks. Organisations must therefore be prepared to train and educate their employees accordingly. This is the only way to ensure that AI systems are used ethically and legally.

What does Article 4 of the AI Regulation say?

"Providers and operators of AI-systems shall take measures to ensure, to the best of their knowledge and belief, that their personnel and other persons involved in the operation and use of AI-systems on their behalf have sufficient AI competence, taking into account their technical knowledge, experience, education and training and the context in which the AI-systems are intended to be used and the persons or groups of persons with whom the AI-systems are intended to be used."

What does AI expertise mean?

AI competence is the ability to understand artificial intelligence, use it responsibly and assess its impact. This includes both technical knowledge of how AI systems work and an awareness of the social, ethical and legal aspects of using AI.

AI expertise covers three core areas:

  1. Basic understanding of AI: A solid knowledge of how AI systems work is a prerequisite for their meaningful use.
  2. Critical categorisation of AI: The ability to weigh up the opportunities and risks of AI and take ethical aspects into account.
  3. Practical application of AI: Concrete skills in dealing with AI systems, adapted to the respective area of application.

How can AI skills be taught?

The AI Regulation does not prescribe a specific solution here. Each organisation must develop its own concept that is tailored to its specific needs. This includes training, further education, the development of internal guidelines and the promotion of dialogue between different departments.

AI expertise should be taught as practically as possible in order to facilitate its transfer into everyday working life. As a rule, all employees should be given a basic understanding of AI and, in view of the rapid development in the field, the opportunity for continuous further training. Data protection regulations must be taken into account in the development and use of AI systems. Therefore, involve all stakeholders, such as data protection officers, information security officers, the IT department, the HR department and, if available, the works council, in the design of training measures.

Proposal for a guideline for an AI training concept:

Training needs should be derived, on the one hand, from the specific use cases in the organisation and, on the other, from organisation-wide surveys of the existing level of competence. Organisations should therefore carry out a needs and competence analysis:

  • Individual needs: Determination of the specific knowledge requirements based on the AI systems used (Which AI systems are used?), the roles of the employees and the existing competences (Does the employee have previous knowledge?).
  • Risk assessment: Consideration of the risks associated with the use of AI (what risk class does the AI system have?).

Based on the needs analysis, necessary training measures can be derived in the form of online courses or classroom training. Collaboration with training service providers can be useful to speed up this process. Depending on the results of the needs analysis, different training contents are useful:

  • Basics: Teaching basic knowledge about AI, how it works and legal and ethical aspects. The primary aim here should be to familiarise employees with the technology so that they can use it effectively and safely. A basic understanding of terms such as machine learning lowers the inhibition threshold. This understanding helps to integrate ChatGPT and the like into everyday working life in a meaningful way.
  • Deepening: To specifically promote employees' AI skills, further training should be offered that is tailored to specific areas of application. A particular focus can be placed on topics such as IT security and law (e.g. in connection with the Trade Secrets Act or non-disclosure agreements), possible threat scenarios (e.g. deepfakes, fraud, damage to reputation and manipulation by AI), or the importance of AI compliance (e.g. compliance with legal regulations and ethical standards when using AI, the creation of internal guidelines and best practices in dealing with AI).

In addition to training measures, it is important to establish AI governance and to integrate the associated guidelines and standards into your organisation. The organisation should ensure that an AI system is only used by demonstrably competent employees and document this in an overarching strategy. For example, access to an AI system (login and password) could be linked to successful completion of training. The following points create a clear AI framework:

  • Guidelines and standards: Development of internal guidelines for dealing with AI.
  • AI Guideline: Creation of a guideline, including best practices, ethical principles and compliance requirements, for employees.
  • AI Officer: Depending on the size of the organisation and the extent of AI use, it may also make sense to appoint an AI officer to carry out risk assessments and risk impact assessments, to drive forward the implementation, monitoring and coordination of AI strategies and to plan and coordinate training courses. If necessary, these topics can also fall within the remit of the data protection officer or information security officer.

The AI Regulation does not require any specific form of documentation. Nevertheless, it is advisable for employers to keep (electronic) records, particularly of training measures; a minimal sketch follows the list below. Comprehensible documentation protects companies from liability risks and demonstrates fulfilment of the obligation under Art. 4 of the AI Regulation.

  • Proof: Documentation of the training courses held.
  • Liability protection: Protection against possible liability claims.
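
A minimal sketch of both ideas, documenting completed training and gating system access on it, might look like this in Python. The record layout and names are our own assumptions; the AI Regulation prescribes no particular format.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class TrainingRecord:
        employee: str
        course: str     # e.g. "AI basics", "threat scenarios: deepfakes"
        completed: date

    records = [
        TrainingRecord("j.doe", "AI basics", date(2025, 1, 20)),
    ]

    def may_use_ai_system(employee: str, required_course: str) -> bool:
        """Gate access (e.g. issuing login credentials) on completed training."""
        return any(r.employee == employee and r.course == required_course
                   for r in records)

    print(may_use_ai_system("j.doe", "AI basics"))      # True
    print(may_use_ai_system("a.miller", "AI basics"))   # False -> train first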

Consequences of inadequate implementation of Article 4 of the AI Regulation

Although Article 4 of the AI Regulation does not provide for direct sanctions, it is an important building block for the legal assessment of claims and labour-law disputes in connection with AI systems. Organisations should therefore take the requirements of Article 4 seriously and take appropriate measures to train their employees.

Key risks:

  • Liability: In the event of damage caused by faulty AI systems, an organisation can be held liable if it cannot demonstrate that it carried out appropriate training measures. Courts could consider this a breach of the general duty of care.
  • Consequences under labour law:
    • Employee entitlements: Employees could assert claims if they are harmed by the use of AI systems or if they are denied appropriate training.
    • Dismissals: In certain cases, employers may have difficulty dismissing employees for lacking AI skills, especially if sufficient training has not been provided.
  • Works council co-determination: When implementing training measures to fulfil the requirements of Article 4 of the AI Regulation, the works council must be involved in accordance with the German Works Constitution Act (BetrVG).

Online course to sharpen the AI skills of your employees

The AI Regulation will present organisations with clear requirements from February 2025: Employees who work with AI systems must have in-depth AI expertise. Whether development, deployment or maintenance - responsible handling of AI is the key to ethical and legally compliant use of this technology. With our practical AI training, you can make your team fit for the future and generate training certificates to document your measures.

Labelling obligation for AI-generated content in accordance with the AI Regulation

The AI Act stipulates that certain AI-generated content must be clearly labelled as such. The aim of this regulation is to protect consumers and ensure that they are aware when they are confronted with AI-generated content. The exact legal basis for this labelling requirement can be found in Article 50 of the AI Regulation. This article regulates the transparency requirements for AI systems that are intended for interaction with natural persons. The specific design of the labelling obligation can be complex in individual cases and requires careful legal examination.

Specifically, the labelling obligation applies to:

  • Deepfakes: Images, videos or audio recordings that have been manipulated using AI so that they appear genuine must be labelled as such.
  • AI-generated content in general: Other AI-generated content that has the potential to mislead users should also be labelled transparently.

Exemptions from the labelling requirement:

  • Artistic works: If AI-generated content is part of an artistic, fictional or analogous work, it is sufficient to disclose the use of AI in an appropriate manner that does not impair the work.
  • Legally authorised purposes: The labelling requirement does not apply if the AI-generated content is used for the purposes of law enforcement or similar legal tasks.

Sanctions for breaches of the AI Regulation

Article 99 of the AI Regulation regulates the sanctions for violations of the provisions of the Regulation and provides for high fines for violations, similar to the GDPR. The amount of the fines is based on either a fixed amount in millions of euros or a percentage of the company's global annual turnover, whichever is higher.

  • Prohibited AI practices: up to EUR 35 million or 7% of global annual turnover.
  • Other infringements: up to EUR 15 million or 3% of global annual turnover.
  • False information to authorities: up to EUR 7.5 million or 1% of global annual turnover.
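
The "whichever is higher" rule can be illustrated with a short worked example. The fine ceilings below are the ones named above; the turnover figure is invented for illustration.

    def max_fine(fixed_eur: int, pct: float, turnover_eur: int) -> float:
        """Art. 99 ceiling: the higher of a fixed amount and a turnover share."""
        return max(fixed_eur, pct * turnover_eur)

    turnover = 2_000_000_000  # hypothetical global annual turnover: EUR 2 bn

    # Prohibited AI practices: up to EUR 35 million or 7% of turnover
    print(max_fine(35_000_000, 0.07, turnover))   # 140000000.0 -> the 7% applies
    # Other infringements: up to EUR 15 million or 3% of turnover
    print(max_fine(15_000_000, 0.03, turnover))   # 60000000.0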

Conclusion: Responsibility and competitive advantages through the AI regulation

The AI Regulation is a significant step towards a responsible and ethical use of artificial intelligence in Europe. It provides organisations with a clear legal framework and at the same time creates trust among citizens. The implementation of the AI Regulation requires organisations to carefully review their existing AI systems and develop appropriate adaptation measures. It is important to consider the specific requirements of the regulation and to seek cooperation with experts.

The AI Regulation is not only a challenge, but also an opportunity. Organisations that address the new requirements at an early stage can gain a competitive advantage and establish themselves as pioneers in the field of ethical AI. Over the coming years, the regulation will set the course for the development and use of AI in Europe and will thus strengthen the EU's capacity for innovation. It is therefore essential to familiarise yourself with the content of the regulation and take the necessary measures to implement it.

Caroline Schwabe
