The second phase of the EU AI Act came into force this Saturday, targeting general-purpose artificial intelligence (GPAI) models such as those behind ChatGPT. This pivotal part of the regulation introduces far-reaching transparency and compliance requirements designed to build public trust and ensure the safe development of AI across the European Union, making it a cornerstone of the bloc's evolving regulatory strategy.
Timeline and Scope of Application
Adopted in 2024 and published in the EU's Official Journal that July, the AI Act entered into force on 1 August 2024 with a staggered implementation schedule. The second phase, which focuses specifically on GPAI models, becomes legally binding on 2 August 2025; models already on the market by that date must fully comply no later than 2 August 2027.
New Obligations for GPAI Providers
Under the second phase of the AI Act, providers of general-purpose AI models are required to:
- Publish a summary of the datasets used in training, including data origins, volumes, and licensing information.
- Maintain technical documentation that is regularly updated and meets regulatory standards.
- Implement risk-mitigation frameworks, especially for models classified as posing systemic risk (e.g. those trained using more than 10^25 floating-point operations, or FLOPs; see the sketch after this list).
- Adhere to copyright and intellectual-property law, ensuring that content used in model training is lawfully obtained and used.
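For a sense of scale: the 10^25 figure measures cumulative training compute, not hardware speed. Below is a rough, illustrative sketch using the common approximation of about six floating-point operations per parameter per training token; the rule of thumb and the example model sizes are assumptions for illustration, not figures from the Act.

```python
# Back-of-envelope check against the AI Act's systemic-risk threshold of
# 1e25 cumulative training FLOPs. Uses the widely cited ~6 * params * tokens
# approximation for dense transformer training; model sizes are hypothetical.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

for label, params, tokens in [
    ("70B params, 15T tokens", 70e9, 15e12),    # ~6.3e24 FLOPs
    ("400B params, 15T tokens", 400e9, 15e12),  # ~3.6e25 FLOPs
]:
    flops = estimated_training_flops(params, tokens)
    side = "above" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{label}: ~{flops:.1e} FLOPs ({side} the 1e25 threshold)")
```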
From August 2026, when enforcement of these provisions begins, non-compliant providers could face penalties of up to €15 million or 3% of worldwide annual turnover, whichever is higher.
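Because the cap is the higher of a fixed floor and a percentage, the turnover figure dominates once worldwide annual turnover exceeds €500 million (3% of €500 million is €15 million). A minimal sketch of that "whichever is higher" rule; the function name and turnover figures are hypothetical:

```python
# Illustrative only: the AI Act caps GPAI fines at the higher of
# EUR 15 million or 3% of total worldwide annual turnover.
def max_gpai_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of a fine for a non-compliant GPAI provider."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

print(max_gpai_fine_eur(200e6))  # 15000000.0 -> the EUR 15m floor applies
print(max_gpai_fine_eur(2e9))    # 60000000.0 -> 3% of turnover applies
```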
Voluntary Code of Practice: A Strategic Advantage
In July 2025, the European Commission introduced a voluntary GPAI Code of Practice, co-developed with 13 independent experts and industry stakeholders. It focuses on:
- Transparency protocols
- Safety and robustness
- Copyright compliance
While the Code is non-binding, signatories gain a presumption of compliance, enhanced reputational standing, and smoother regulatory engagement during the transition period.
Google and OpenAI have endorsed the Code, whereas Meta declined to sign it, citing innovation constraints and legal ambiguities.
What Investors Should Know
Risks
- High compliance costs, including auditing, documentation, legal reviews, and risk assessments.
- Potential financial penalties for delayed or inadequate adherence.
Opportunities
- Strengthened public and investor trust due to higher standards of transparency.
- Competitive edge for early adopters demonstrating ethical leadership.
- Global ripple effect as non-EU AI providers must comply when serving EU markets.
Strategic and Regulatory Outlook
The second phase of the AI Act moves the EU from principle to practice, anchoring its AI governance in operational mandates. With increasing global scrutiny, this regulation sets a precedent that may inspire similar frameworks beyond Europe. For investors, tracking compliance uptake and regulatory response will be critical as the AI sector matures under stricter oversight.
The second phase of the AI Act marks a regulatory milestone for general-purpose AI models. By enforcing transparency and accountability, it aims to create a safer digital environment while preserving innovation. Companies that embrace the transition stand to gain reputationally from early compliance, while those lagging behind may face significant financial and legal exposure.