Artificial intelligence systems: European regulations come into force
The European AI Regulation (or AI Act) came into force on August 1, 2024. It aims to provide a framework for the development, marketing and use of artificial intelligence systems. It applies to players in both the public and private sectors, inside and outside the European Union, whenever an AI system is placed on the market in the Union or its use affects people located in the Union.
It therefore applies equally to providers (e.g. a developer of a CV analysis tool) and deployers of AI systems (e.g. a bank purchasing that tool). However, research, development and prototyping activities that take place before an AI system is put on the market are not subject to the provisions of this regulation. Nor does it apply to AI systems designed exclusively for military, defense or national security purposes, regardless of the type of entity carrying out these activities.
The regulation is based on a four-level risk approach:
- Unacceptable risk: concerns a limited set of practices contrary to the values of the European Union and fundamental rights (for example, exploitation of people's vulnerability or use of subliminal techniques). AI systems presenting unacceptable risks are banned from February 2, 2025.
- High risk: concerns AI systems that may affect the safety of individuals or their fundamental rights, justifying enhanced requirements for their development (conformity assessments, technical documentation, risk management mechanisms). These systems are listed in Annex I for systems integrated into products already subject to market surveillance (medical devices, toys, vehicles, etc.) and in Annex III for systems used in eight specific areas (for example, systems that assess whether a person is eligible for a specific medical treatment, a specific job or a loan to buy an apartment). The rules for Annex III high-risk AI systems (biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, immigration and administration of justice) apply from August 2, 2026. The rules for Annex I high-risk AI systems (toys, radio equipment, in vitro diagnostic medical devices, civil aviation safety, agricultural vehicles, etc.) apply from August 2, 2027.
- Specific transparency risk: concerns AI systems where there is a risk of manifest manipulation (e.g. through the use of chatbots) or deepfakes. Users need to know that they are interacting with a machine. The regulation obliges providers of generative AI systems to mark the output produced by their systems in a machine-readable format and to ensure that it is identifiable as having been generated or manipulated by an AI. Deployers of an AI system that generates or manipulates text published to inform the public on matters of public interest must also indicate that the text has been generated or manipulated by an AI.
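The regulation does not prescribe a specific marking format, and real deployments rely on emerging standards such as C2PA content credentials or watermarking. Purely as an illustrative sketch of what "machine-readable marking" can mean in practice, the hypothetical helper below wraps generated text in a JSON envelope carrying a provenance record; the field names and the `mark_ai_output` function are assumptions for this example, not part of any standard.

```python
import json
from datetime import datetime, timezone

def mark_ai_output(text: str, model_name: str) -> str:
    """Wrap generated text with a machine-readable provenance record.

    Illustrative sketch only: the AI Act does not mandate this format.
    Production systems would typically use C2PA manifests or output
    watermarking instead of a simple JSON envelope like this one.
    """
    record = {
        "content": text,
        "provenance": {
            "ai_generated": True,           # discloses the AI origin
            "generator": model_name,        # hypothetical identifier
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record)

# Any downstream consumer can parse the envelope and detect the AI origin.
marked = mark_ai_output("Sample generated paragraph.", "demo-model")
assert json.loads(marked)["provenance"]["ai_generated"] is True
```

The point of such a scheme is that disclosure travels with the content itself, so an automated pipeline (a publishing platform, an archive, a fact-checking tool) can detect the AI origin without human inspection.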
- Minimal risk: concerns the vast majority of AI systems currently in use in the European Union.
From August 2, 2025, the provisions of the regulation apply to general-purpose AI models.
The regulation provides for financial penalties in the event of non-compliance.
If you have a question or need help, contact the Dihnamic team (contact details at the bottom of the page).
Contact
If you are interested in Dihnamic, would like to join it, or have any questions, do not hesitate to contact us!
Want to contact the project coordinator? Contact:
Véronique DESBLEDS & Maria EL JAOUDI (ADI Nouvelle-Aquitaine)
contact@dihnamic.eu
Tel. +33 (0)6 71 19 79 27
Are you a company? Contact:
Marianne CHAMI (CEA)
marianne.chami.EDIH@cea.fr
Tel. +33 (0)6 47 94 60 77
Are you a journalist? Contact:
Claire BOUCHAREISSAS (ADI Nouvelle-Aquitaine)
c.bouchareissas@adi-na.fr
Tel. +33 (0)6 82 36 76 36