EU AI Act
Regulation (EU) 2024/1689 laying down harmonized rules on artificial intelligence (in short: AI Act) is the first attempt worldwide to regulate AI systems comprehensively, across sectors and applications, with regard to their development, placing on the market and operation. The regulation, which came into force in August 2024, thus establishes a legal framework with extensive obligations relating to the development, integration and application of AI technology.
What does the EU AI Act mean by “AI system”?
Article 3, para. 1 of the AI Act defines an AI system as a “machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
Who is affected by the regulation?
According to Article 2, the AI Act applies to all public and private actors who provide or use an AI system professionally – both within and outside the EU, provided that the AI technology is placed on the market in the EU or its use affects people there.
Manufacturers, providers and operators must all meet the relevant requirements. However, there are exceptions: AI projects and applications in the context of research, development or the testing of prototypes before they are placed on the market are exempt from the EU AI Act, as are AI systems used exclusively for military, defense or national security purposes.
When does the AI Act apply?
The EU AI Act came into force on August 1, 2024. Transitional periods for implementation apply from this date (pursuant to Article 113):
- From February 2, 2025 (six months after coming into force), the bans on practices posing an unacceptable risk apply (see below).
- From August 2, 2025 (twelve months after coming into force), the governance rules and the obligations for general-purpose AI models apply. A general-purpose AI system is an AI system that is "based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems" (Article 3, para. 66).
- From August 2, 2026 (two years after coming into force), the remaining provisions apply, including the requirements for AI systems with limited and minimal risk.
- From August 2, 2027 (three years after coming into force), the rules for the high-risk systems listed in Annex I of the Regulation apply.
Which products are covered by the EU AI Act?
The EU AI Act regulates AI systems and products with integrated AI technology that are placed on the market or put into operation in the EU. It divides them into four categories with regard to their intended purpose and the risks associated with their use. Risk is defined as "the combination of the probability of an occurrence of harm and the severity of that harm" (Article 3, para. 2). The higher the risk category, the higher the legal requirements.
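The Act does not prescribe a scoring method for this combination. Purely as an illustration, the definition can be read like a conventional risk matrix; the following sketch assumes ordinal 1–5 rating scales and band thresholds that are not taken from the Act:

```python
# Illustrative only: the AI Act defines risk as the combination of the
# probability of harm and its severity (Article 3, para. 2), but does not
# prescribe a scoring method. This sketch assumes a simple 1-5 ordinal
# scale for both factors, as in a conventional risk matrix.

def risk_score(probability: int, severity: int) -> int:
    """Combine probability and severity (each rated 1 = low .. 5 = high)."""
    if not (1 <= probability <= 5 and 1 <= severity <= 5):
        raise ValueError("ratings must be between 1 and 5")
    return probability * severity

def risk_band(score: int) -> str:
    """Map a combined score to a coarse band (thresholds are assumptions)."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

print(risk_band(risk_score(probability=4, severity=5)))  # -> "high"
```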
Risk categories
Unacceptable risk: All AI systems and AI-based practices that are considered a clear threat to the safety, livelihood and rights of people. This includes AI systems for social scoring, biometric categorization (e.g. to infer racial or ethnic origin), cognitive behavioral manipulation, and emotion recognition in the workplace and in educational institutions.
High risk: AI systems that may have a negative impact on human safety or fundamental rights. Annex III contains a list of such high-risk AI systems, which can be adapted on an ongoing basis. The category includes AI systems that are not unacceptable risk systems but pose a high risk to health, safety or fundamental rights (e.g. AI systems for assessing suitability for medical treatment, employment or lending).
Another important group of high-risk systems comprises AI products, or products with safety-relevant AI components, from the fields of medical technology, toys, elevators, vehicles, aircraft or critical infrastructure, provided they are subject to third-party conformity assessment under existing EU sectoral legislation (recital 50). Strict requirements apply to the manufacture and operation of such high-risk AI systems (see below).
Limited risk: AI systems that interact with humans, e.g. chatbots and AI systems for generating or modifying images, videos, audio or text. Special transparency obligations apply here; in particular, users must be informed that they are interacting with an AI system or that content has been generated by AI. Data protection requirements (e.g. the GDPR) must also be complied with.
Minimal risk: AI systems that do not fall into the above categories and do not pose any significant risks to health, safety or fundamental rights (e.g. spam filters or video games). To date, no special requirements apply to these systems. However, technical documentation and a corresponding risk assessment are required for classification as an AI system with minimal risk.
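To illustrate how the four categories relate to one another, the following sketch walks through them from strictest to weakest. The attribute names and the decision logic are simplified assumptions for illustration only; an actual classification requires a legal assessment against the Act and its annexes.

```python
# Illustrative triage sketch only: the attribute names and ordering are
# assumptions; a real classification requires legal review against the
# AI Act and its annexes.
from dataclasses import dataclass

@dataclass
class AISystem:
    social_scoring: bool = False          # unacceptable-risk practice
    manipulative: bool = False            # cognitive behavioral manipulation
    annex_iii_use_case: bool = False      # e.g. employment, lending, medical triage
    safety_component: bool = False        # AI component in products under EU sectoral law
    interacts_with_humans: bool = False   # e.g. chatbots, generative AI

def risk_category(s: AISystem) -> str:
    """Walk the categories from strictest to weakest, mirroring the Act's structure."""
    if s.social_scoring or s.manipulative:
        return "unacceptable (prohibited)"
    if s.annex_iii_use_case or s.safety_component:
        return "high"
    if s.interacts_with_humans:
        return "limited (transparency obligations)"
    return "minimal"

print(risk_category(AISystem(interacts_with_humans=True)))  # -> limited
```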
What requirements apply to high-risk systems?
Before high-risk AI systems are placed on the market or put into operation in the EU, they must undergo a strict conformity assessment in accordance with the EU AI Act. In addition, providers of high-risk AI systems must introduce a quality and risk management system and implement safety requirements throughout the entire AI lifecycle. The most important requirements are:
- Ensuring a sufficient level of AI competence among employees involved in the operation and use of AI systems (Article 4)
- Establishment and implementation of a risk management system (Article 9)
- Management of data for training, validation and testing (Article 10)
- Preparation of technical documentation (Article 11)
- Compliance with record-keeping obligations (Article 12), including the storage of automatically generated logs (Article 19); see the logging sketch after this list
- Ensuring an appropriate level of transparency and information for users (Article 13) and other transparency obligations (Chapter IV)
- Enabling human oversight (Article 14)
- Ensuring accuracy, robustness and cybersecurity (Article 15)
- Establishing and implementing a quality management system (Article 17)
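To make the record-keeping obligation more concrete, here is a minimal logging sketch: automatically generated, timestamped records for each use of a high-risk system. The field names and JSON format are assumptions; Article 12 specifies which events must be traceable, not a log schema.

```python
# Minimal logging sketch for the record-keeping obligation (Article 12).
# Field names and format are assumptions; the Act requires automatically
# generated logs that make the system's operation traceable, but does not
# mandate a specific schema.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO,
                    format="%(message)s")

def log_inference(system_id: str, input_ref: str, output_ref: str,
                  operator: str) -> None:
    """Record one use of the AI system with a UTC timestamp."""
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,    # reference to input data, not the data itself
        "output_ref": output_ref,  # reference to the generated output
        "operator": operator,      # person or service that triggered the run
    }))

log_inference("credit-scoring-v2", "applicant-4711", "decision-0815", "clerk-42")
```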
What standards can manufacturers and developers use as a guide?
To date (end of 2024), there is little guidance on the specification, design and testing of functionally safe AI systems – or on how AI technology can be used in functions with safety-related effects. Corresponding standards, or the harmonization of existing standards, are to be developed and published by the European standardization organizations by the end of April 2025 and subsequently reviewed by the European Commission.
In the meantime, the technical report ISO/IEC TR 5469:2024 can provide initial general guidance. However, it too offers hardly any detailed procedures for testing or verifying the compatibility of an AI technology with the requirements of functional safety.
What are the penalties for breaches of the EU AI Act?
In the event of violations, the member states must impose penalties that are "effective, proportionate and dissuasive" (Article 99, para. 1). Depending on the infringement, fines can reach up to EUR 35 million or 7 percent of the total worldwide turnover of the preceding financial year – whichever is higher; for SMEs and start-ups, the lower of the two amounts applies (Article 99, para. 6).
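Expressed as arithmetic, the cap rule can be sketched as follows (an illustration of the ceiling only; actual fines are set case by case, and the function name is ours):

```python
# Sketch of the fine cap for the most serious infringements
# (Article 99, para. 3): EUR 35 million or 7 % of worldwide annual
# turnover, whichever is higher - for SMEs and start-ups, whichever
# is lower (Article 99, para. 6).

def fine_cap(annual_turnover_eur: float, is_sme: bool) -> float:
    fixed_cap = 35_000_000.0
    turnover_cap = 0.07 * annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

print(fine_cap(1_000_000_000, is_sme=False))  # 70,000,000.0: 7 % exceeds 35 M
print(fine_cap(1_000_000_000, is_sme=True))   # 35,000,000.0: lower amount applies
```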
Reference sources
The legal text of the EU AI Act is available online in all official languages of the EU: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
An overview of the annexes can be found at: https://artificialintelligenceact.eu/ai-act-explorer/
Support with the implementation of AI technologies
NewTec supports manufacturers of safety-related systems with AI components in identifying the relevant safety aspects, as well as in system, error and risk analysis and compliance with regulatory requirements.
Questions? Get in touch with us via our contact page.
Or give us a call on +49 7302 9611-0.