1 August 2024 – Law enters into force
2 February 2025 – Ban on AI systems with unacceptable risk; rules on AI literacy come into effect
2 May 2025 – AI Office Codes of Practice required to be ready
2 August 2025 – General Purpose AI (GPAI) models must comply¹
2 August 2026 – Full EU AI Act applies to AI systems²
2 August 2027 – Rules on Annex I AI systems apply³

¹ Subject to grandfathering provisions. Rules on penalties, governance and notification come into force 2 August 2025.
² Including Annex III High Risk AI Systems.
³ e.g. where AI is a safety component in anything governed by the EU Machinery Regulations, Medical Devices Regulations, etc.

Law enters into force – 1 August 2024

The EU AI Act enters into force on 1 August 2024, 20 days after its publication in the Official Journal of the EU. Entry into force is an administrative milestone which does not create any immediate legal obligations for providers or deployers of AI systems. The first legal obligations under the Act come into effect on 2 February 2025 (six months from entry into force), banning AI systems with an unacceptable risk.

For more information on who the EU AI Act applies to, see our article here. For assistance with AI system risk categorisation, or to discuss our approach to compliance, download our guide or reach out to a member of the team.

Ban on AI systems with unacceptable risk; rules on AI literacy come into effect – 2 February 2025

AI Literacy Rules

Providers and deployers of AI systems must now implement measures to ensure that personnel and other persons dealing with the operation and use of AI systems possess adequate AI literacy, so that they can do so in an informed way. The required level of AI literacy should take into account the context in which the AI systems are to be used and the groups of people on whom they are to be used.
Unacceptable Risk Ban

The Act prohibits AI systems identified as posing an "unacceptable risk" in accordance with Article 5 – systems whose use has the consequence of undermining fundamental human rights. AI systems prohibited since 2 February 2025 include those which:

- engage in deception, manipulation, or use subliminal techniques;
- use social scoring for public or private purposes;
- exploit biometric data in real time or for categorisation purposes (i.e. to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs or sexual orientation);
- scrape the internet or CCTV footage for facial images to build up or expand databases;
- recognise emotions in the workplace and education institutions;
- conduct certain types of predictive risk profiling; and
- exploit vulnerable persons.

The maximum penalty for breaches in respect of prohibited AI systems is the higher of (a) EUR 35 million or (b) 7% of worldwide annual turnover. While the prohibitions came into effect on 2 February 2025, the penalties will not apply until 2 August 2025.

The EU Commission conducted a consultation on prohibitions and AI system definitions under the Act in late 2024 and early 2025, and published draft Guidelines on 4 February 2025 (approved by the EU Commission but not yet formally adopted).

AI Office Codes of Practice required to be ready – 2 May 2025

The final draft of the General-Purpose AI Code of Practice (the Code) was due to be presented for approval on 2 May 2025. The Code, which has undergone an iterative drafting process involving nearly 1,000 stakeholders and three previous drafts, is intended to facilitate the proper application of the EU AI Act's rules for general-purpose AI models, including transparency and copyright-related rules, risk assessment, and mitigation measures. The final version was, in the end, presented for approval in July 2025.
Details of the Code

The Code will be an important tool for general-purpose AI providers to demonstrate their compliance with the AI Act (although it does not provide a presumption of conformity). However, not all GPAI providers are prepared to confirm that they will sign the Code. Given the evolving state of AI, the drafting of the Code aims to strike a balance between clear commitments and the flexibility to adapt as AI technology evolves, but its passage to date has been controversial.

Article 56 of the EU AI Act outlines the key issues addressed by the Code. These include:

- the means to ensure that information is kept up to date in light of market and technological developments (transparency);
- the adequate level of detail for the summary about the content used for training (copyright);
- the identification of the type and nature of the systemic risks, including, where appropriate, their sources (risk assessment); and
- the measures, procedures and modalities for the assessment and management of the systemic risks identified above (mitigation measures).

We have analysed the final version of the Code here.

General Purpose AI (GPAI) models must comply (subject to grandfathering provisions); rules on penalties, governance and notification come into force – 2 August 2025

From 2 August 2025, a number of provisions in the EU AI Act begin to apply. In particular, providers of General Purpose AI (GPAI) models placed on the market in the EU on or after that date must comply with certain rules and obligations. For GPAI models placed on the EU market before 2 August 2025, however, the compliance deadline is 2 August 2027. Supervision and enforcement by the AI Office of the rules for GPAI models (including the issuing of penalties) will start as of 2 August 2026, although other penalties under the Act can be imposed from 2 August 2025.
Rules on GPAI models

These include:

- Technical documentation: Providers of GPAI models must prepare detailed technical documentation and other information about their model, which must be kept up to date and demonstrate compliance with the Act.
- Copyright compliance: Providers of GPAI models must implement policies for compliance with EU copyright law, in particular identifying and respecting reservations of rights expressed by rights holders. Summaries of the content used for training GPAI models must be made publicly available.
- Risk assessment for GPAI models with systemic risk: For GPAI models with systemic risk, providers must perform model evaluation; assess and mitigate possible systemic risks at the EU level; document and report any serious incidents to the AI Office and national authorities; and ensure an adequate level of cybersecurity protection.
- Appointment of an authorised EU representative: Providers of GPAI models established outside the EU must appoint an 'authorised representative' within the EU before placing the model on the market. The representative must cooperate with local authorities and is responsible for verifying that technical documentation has been drawn up and made available.

Materials have been published to assist GPAI model providers in complying with their obligations, including:

- A voluntary Code of Practice, which will not provide a presumption of conformity but will provide 'increased legal certainty' – discussed in our article here. The Commission has confirmed that providers who sign the Code will, in effect, benefit from a grace period to demonstrate 'good faith' adherence.
- Guidelines on the scope of obligations for GPAI model providers – discussed in our article here.
- A mandatory template for disclosing summaries of training data – discussed in our article here.
Penalties and enforcement

The AI Act's tiered approach to penalties also applies from 2 August 2025: non-compliance with the Act's provisions could result in substantial fines, for example up to EUR 35 million or 7% of global annual turnover, whichever is higher, in respect of prohibited AI practices. However, fines against providers of GPAI models (capped at EUR 15 million or 3% of global annual turnover, whichever is higher) will not start to apply until 2 August 2026.

Other provisions

This date will also see the establishment of the respective national competent authorities (Chapter VII, Article 70), which will be responsible for supervising the application of the Act at national level.

Full EU AI Act applies to AI systems (including Annex III High Risk AI Systems) – 2 August 2026

The majority of the AI Act's provisions become applicable to organisations operating within the EU.

Rules on Annex I AI systems apply (e.g. where AI is a safety component in anything governed by the EU Machinery Regulations, Medical Devices Regulations, etc.) – 2 August 2027

Final phase for AI used as a safety component in regulated product categories.
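The "higher of a fixed amount or a percentage of worldwide turnover" structure of the penalty caps described above can be illustrated with a short arithmetic sketch. The tier figures come from this article; the function name and turnover values are illustrative only, and this is not a legal calculation or advice.

```python
# Sketch of the EU AI Act's tiered fine caps: each tier is the higher of
# a fixed amount and a percentage of worldwide annual turnover.
# Figures per the article; turnover inputs below are hypothetical.

def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the ceiling of the fine for a tier: the higher of the fixed
    cap or the given percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Prohibited-practice tier: EUR 35m or 7% of turnover, whichever is higher.
# For a hypothetical EUR 1bn turnover, 7% (EUR 70m) exceeds the fixed cap.
prohibited_cap = max_fine(1_000_000_000, 35_000_000, 0.07)  # 70,000,000.0

# GPAI-provider tier: EUR 15m or 3% of turnover, whichever is higher.
# For a hypothetical EUR 100m turnover, the EUR 15m fixed cap applies.
gpai_cap = max_fine(100_000_000, 15_000_000, 0.03)          # 15,000,000.0
```

The `max()` comparison captures the "whichever is higher" wording used for both tiers; only the fixed amount and percentage differ between them.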