AI Act milestone: August 2025 introduces major rules for General-Purpose AI in Europe

The regulatory countdown on general-purpose AI has come to an end. As of 2 August 2025, key provisions of the AI Act become legally binding, marking a significant regulatory milestone for AI systems in Europe.


Just a brief recap

On February 2, 2025, the first of a long-awaited series of AI Act[1] provisions came into force. The year thus began with the general AI literacy obligation and the provision enumerating prohibited AI practices[2]. Their immediate impact was limited, however, as they were primarily meant to spread awareness of AI terminology and to restrict the uses of AI that the EU legislator identified as most worrying.

Where on the AI Act map are we now?

The second wave of AI provisions, which took effect on August 2, 2025, introduces substantial new regulatory requirements, particularly for general-purpose AI (GPAI) models. Given that GPAI models form the backbone of mainstream corporate AI systems and have become wide-ranging tools for countless users, these new provisions represent a significant development. While they promise regulatory clarity, they also introduce legitimate concerns for providers who might face potential sanctions for non-compliance.

The dual nature of this regulatory milestone, offering both guidance and enforcement mechanisms, reflects the evolution of AI governance as these technologies become increasingly central to business operations and daily life.

Although most AI Act provisions have not yet entered into force, becoming effective on August 2, 2026, the provisions on GPAI models may ultimately be the most impactful for everyday users of AI systems that incorporate such models. The new obligations imposed on providers of GPAI models appear to place human rights and fundamental freedoms at their core. By adopting such a user-focused approach, the EU legislator chooses to focus on compliance rather than on sanctioning. A notable step in this respect is the recent publication of the final version of the Code of Practice for General-Purpose AI Models[3], prepared by independent experts and endorsed by the Commission and the AI Board as "an adequate voluntary tool for providers of GPAI models to demonstrate compliance with the AI Act".

What are GPAI models?

As defined under the AI Act, a GPAI model is primarily an AI model that displays "significant generality", meaning it can effectively perform a wide variety of different tasks. The Commission's Guidelines on the scope of the obligations for general-purpose AI models established by the AI Act[4] set out an indicative test for determining whether an AI model qualifies as a GPAI model, namely, whether it meets the following criteria:

  • it was trained using more than 10²³ floating point operations (FLOP), indicating very high computational resources, and
  • it can generate language (in the form of text or audio), images from text, or video from text.

Two key indicators help identify such models:

  • Model scale: the model functions on at least one billion parameters, suggesting a high level of complexity and capability.
  • Training method: the model is trained using self-supervised learning on large volumes of unstructured data[5]. This means the model can generate its own labels from raw, uncategorized data during training, rather than relying on manually labeled datasets.
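Taken together, the Commission's indicative criteria can be sketched as a simple decision helper. This is an illustrative simplification only, not legal advice: the function name and inputs are hypothetical, and the Guidelines treat these criteria as indicative rather than as a bright-line definition.

```python
# Illustrative sketch of the Commission's indicative GPAI test.
# The threshold and modalities mirror the criteria described above;
# everything else (names, structure) is a hypothetical simplification.

FLOP_THRESHOLD = 1e23  # training compute indicatively suggesting a GPAI model

GENERATIVE_MODALITIES = {"text", "audio", "text-to-image", "text-to-video"}

def looks_like_gpai(training_flop: float, output_modalities: set[str]) -> bool:
    """Return True if a model meets both indicative GPAI criteria."""
    exceeds_compute = training_flop > FLOP_THRESHOLD
    is_generative = bool(output_modalities & GENERATIVE_MODALITIES)
    return exceeds_compute and is_generative

# Example: a large text model trained with 5 x 10^24 FLOP
print(looks_like_gpai(5e24, {"text"}))  # True under the indicative test
```

In practice, the parameter count and training method indicators described above would feed into the same assessment; the compute figure is often the most readily documented proxy.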

Who is most impacted by the latest AI Act provisions that became effective?

While the provisions of the AI Act that entered into force on 2 August mostly impact the providers of GPAI models, imposing multiple obligations on them, this second AI Act wave is relevant for many stakeholders across the AI ecosystem:

  • GPAI developers should assess the classification of their models under the AI Act and implement the related compliance measures.
  • GPAI deployers need to evaluate whether they might be deemed as AI providers as per the AI Act, particularly if a certain GPAI model is developed specifically for them. While the Act doesn’t impose direct obligations on those deploying GPAI or other actors in the supply chain, these companies should still assess what they expect from GPAI providers, what due diligence is necessary, what contractual safeguards should be in place, and how they plan to meet AI literacy requirements applicable to all AI system deployers.
  • Rightsholders (such as creators, publishers, or owners of intellectual property like text, images, music, or video) should assess the impact of the new requirements on the way their content is used in relation to AI, especially by GPAI, which often rely on large datasets that may include copyrighted material. They might also consider examining and tracking the information that GPAI developers disclose, such as data about the sources used for training their models or how the models interact with copyrighted content.

Which requirements are imposed on providers of GPAI models?

Unless GPAI models pose a systemic risk, providers of GPAI models placed on the market on or after August 2, 2025, must comply with the following obligations:

  • Draw up and keep up to date technical documentation. The Code of Practice provides a Model Documentation Form, which should be a useful compliance tool moving forward and a simple, uniform means of presenting the required information.
  • Draw up and keep up to date information and documentation meant for providers of AI systems that wish to integrate the model. This means that the model should be ready for downstream implementation, also from the documentation perspective, besides the technical aspects.
  • Ensure the existence of a copyright policy. Providers must establish a publicly accessible copyright policy addressing the opt-out clause in the text-and-data mining exception of the Digital Single Market Directive.[6] The policy should consolidate the requirements in a single document with clear internal responsibilities, exclude infringing sources, respect robot exclusion protocols, implement output safeguards to prevent copyright violations, and maintain complaint mechanisms with designated contact points.
  • Draw up and make available a summary of the training content used. The Code of Practice requires documentation of training stages, methodologies, design choices, and optimisation objectives. For training content, summaries must include data types, provenance (web crawling, private datasets, user data), scope and characteristics (domain, geography, language), curation methodologies, bias detection measures, plus computational resources and energy consumption details.

Enhanced Obligations for Systemic Risk GPAI Models

When a general-purpose AI model presents systemic risk, providers face significantly expanded regulatory requirements under Article 55 of the AI Act. Systemic risk is defined through the broad concept of "high-impact capabilities" but is, in practice, identified by a presumption that applies when training computation exceeds 10²⁵ FLOP. Usefully, providers may rely on codes of practice, such as the Code of Practice, to demonstrate their compliance. In addition to the aspects already mentioned, the Code also provides a Safety and Security Model Report, which is specifically relevant where the general-purpose AI model falls into the systemic risk category.

These enhanced obligations reflect the EU’s recognition that the most powerful AI models require proportionally stricter oversight due to their potential for widespread impact. The four primary additional obligations create a comprehensive risk management framework. Providers of GPAI models that present systemic risks must:

  • conduct thorough model evaluations, including adversarial testing to identify potential vulnerabilities and failure modes,
  • assess and actively mitigate systemic risks, specifically at the EU level, developing mitigation strategies targeted at that jurisdiction,
  • track, document and report to the AI Office, and, as appropriate, to the national competent authorities, relevant information about serious incidents and possible corrective measures to address them, and
  • implement adequate cybersecurity measures proportionate to their model’s risk profile, recognising that systemic risk models represent high-value targets for malicious actors and require correspondingly robust protection mechanisms.

It is worth noting that providers of GPAI models placed on the market before August 2, 2025, are not required to fully comply immediately. They have a compliance grace period until August 2, 2027, to fully meet the new requirements.

What are the consequences?

With respect to the sanctions applicable to providers that fail to meet the obligations highlighted above, Chapter XII is relevant: the full framework of penalties enters into force, except for Article 101, which specifically provides the sanctions for providers of general-purpose AI models. The AI Act thus grants providers a generous grace period to comply with the new obligations, lasting until August 2, 2026, when penalties for providers of GPAI models become enforceable. Providers should nevertheless already be aware of the sanctions the AI Act provides: the Commission may impose a fine not exceeding 3% of total worldwide annual turnover in the preceding financial year or EUR 15 000 000, whichever is higher, where the relevant provisions of the AI Act are infringed.
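The "whichever is higher" ceiling is simple arithmetic, and a short sketch makes the scale of exposure clear. The function name is a hypothetical illustration of the Article 101 formula, not an official calculation method.

```python
# Article 101 fine ceiling: the higher of EUR 15 million or 3% of
# total worldwide annual turnover in the preceding financial year.

def max_gpai_fine(annual_turnover_eur: float) -> float:
    """Maximum fine the Commission may impose on a GPAI provider."""
    return max(15_000_000.0, annual_turnover_eur * 3 / 100)

# A provider with EUR 2 billion turnover: 3% = EUR 60 million > EUR 15 million
print(max_gpai_fine(2_000_000_000))  # 60000000.0
```

For smaller providers, the fixed EUR 15 million floor dominates: 3% only exceeds it once turnover passes EUR 500 million.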

In conclusion, although the AI Act's scope of application is still limited, August 2, 2025, marked a crucial moment in its enforcement. For providers specifically, this date should mark the point at which accountability and responsibility become more prominent, ensuring both safer development and more cautious deployment of their AI models. While the grace period until August 2026 for full penalty enforcement provides breathing room for compliance, providers must recognize that this regulatory shift demands immediate attention to risk management and transparency measures. This milestone lays the foundation for a more mature AI industry in which innovation proceeds hand in hand with ethical responsibility.

——

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).

[2] Our article on the first provisions of the AI Act that entered into force is available here – The first wave of AI provisions is more impactful than it seems – AI literacy obligations, prohibited practices and why your organisation should already be compliant – WH Partners

[3] Available here – The General-Purpose AI Code of Practice | Shaping Europe’s digital future

[4] ANNEX to the Communication to the Commission Approval of the content of the draft Communication from the Commission – Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689 (AI Act)

[5] Bergmann D, “What Is Self-Supervised Learning?” (IBM) <https://www.ibm.com/think/topics/self-supervised-learning> accessed July 30, 2025.

[6] Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC.

About the author

Silvana Curteanu-Tihon

Silvana is a Senior Associate at WH Simion & Partners in Bucharest, specialising in consumer protection, intellectual property, data privacy, technology and media, and gambling law.

About the author

Catalin Veliscu

Catalin Veliscu is a junior lawyer specializing in Intellectual Property, Artificial Intelligence, Data Protection, Copyright, and Gambling Law.
