The EU AI Act takes shape with a first draft of rules for general-purpose models
Meta, Google, OpenAI, Apple and more will need to comply

As AI advances rapidly, regulators worldwide are turning their focus to both the technology and the companies diving headfirst into it. Leading the way, the European Union was the first to approve comprehensive legislation aimed at AI regulation. Now, it’s released an initial draft of a Code of Practice specifically for general-purpose AI (GPAI) models.
The draft, set to be finalized by May next year, lays out guidelines for managing risks and offers companies a roadmap to stay compliant and dodge steep penalties. While the EU’s AI Act technically came into effect on August 1, it left the specifics of regulating GPAI for a later date.
This draft is the first real attempt to define expectations for these advanced models, giving stakeholders a chance to submit feedback and fine-tune the guidelines before they take effect.
So, what exactly are general-purpose AI (GPAI) models? They are a type of AI capable of handling a wide variety of tasks, forming the backbone of many AI systems. These models are trained on vast amounts of data and can be adapted to different tasks with minimal adjustments. Companies currently expected to fall under the EU’s guidelines (though the list could expand) include:

- OpenAI, the creator of ChatGPT
- Google with its Gemini models
- Meta, whose Llama models power Meta AI, the assistant across its apps
- Apple
- Microsoft
- Anthropic
- Mistral

“The final document will play a crucial role in guiding the future development and deployment of trustworthy and safe general-purpose AI models.”
– European Commission, November 2024
The document touches on several key areas for GPAI makers: transparency, copyright compliance, risk assessment, and technical and governance-related risk management. This 36-page draft is already packed with detailed guidelines and is likely to expand even further before it’s finalized, but a few sections stand out.
Transparency is a big focus here. The AI Act demands that AI companies disclose details about the web crawlers used to train their models – a hot topic that even led the EU to investigate how X is training its AI chatbot, Grok.
Risk assessment also takes center stage, aiming to address concerns about cybersecurity, potential discrimination, and the chance of AI systems spiraling out of control.
AI developers are also expected to implement a Safety and Security Framework (SSF), which will help outline their risk management strategies and match their mitigation efforts to the level of systemic risk involved.
The guidelines extend into technical aspects, like securing model data, establishing reliable fail-safe access controls, and continuously reevaluating the effectiveness of these measures.
Lastly, the governance section pushes for accountability within the companies, emphasizing the need for continuous risk evaluations and the involvement of external experts when necessary.
Similar to other EU tech regulations, like the DMA, which has been giving big tech companies a hard time lately, the AI Act comes with tough penalties. Companies risk fines of up to €35 million (around $36.8 million) or seven percent of their global annual turnover – whichever is higher.
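To make the “whichever is higher” mechanics concrete, here is a minimal sketch in Python of how that penalty ceiling works out; the turnover figure is purely hypothetical:

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Penalty ceiling under the AI Act's headline fine provision:
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Hypothetical example: EUR 2 billion in global annual turnover.
# 7% of 2 billion is 140 million, which exceeds the 35 million floor.
print(f"EUR {max_penalty_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```

In other words, the fixed €35 million figure only binds for smaller companies; for the tech giants listed above, the seven-percent-of-turnover prong would dwarf it.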
Stakeholders have until November 28 to provide feedback on this draft via the Futurium platform. The finalized rules are slated to take effect by May 1, 2025.
These regulations may not sit well with big tech, and some companies – like Meta and Apple – have already voiced their disagreement by declining to sign the EU’s voluntary AI Pact. Meanwhile, tech giants such as OpenAI, Google, and Microsoft have backed the pact. That said, I believe these laws are crucial for us, the users. AI development – especially in terms of training and future applications – definitely needs closer monitoring.