
EU AI Act: What the New Code of Practice Means for Tech Companies?
The European Commission has taken another big step in shaping artificial intelligence. It has finalised the AI Code of Practice EU, a voluntary guide to help companies deal with transparency, copyright, and safety for general-purpose AI models.
Since August 2025, the EU AI Act is officially in force, opening a new chapter in AI governance across Europe. Announcing the code, Henna Virkkunen, Executive Vice-President of the European Commission, said: “Today’s publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative, but also safe and transparent.”
For businesses, this moment is not just another regulation on paper. It sets the tone for how artificial intelligence will be built, shared, and trusted across Europe. This article takes a closer look at what the code means, why it matters, and how companies can prepare for AI compliance under AI regulation Europe.
A turning point for AI regulation
The European Union has spent years shaping the digital world with rules on privacy, competition, and online content. Back in 2021, it went further by putting forward the EU AI Act, the first real attempt to fully regulate artificial intelligence.
It took years of debate before the Act was finally approved in 2024. By August 2025, the law came into force, and the rules are now live. This is a major moment. Companies must begin adjusting to requirements covering transparency, copyright, and accountability.
The AI Code of Practice EU is part of this picture. It is not binding law, but it works as a bridge. It gives businesses a way to prepare for obligations that will soon become enforceable. For many firms, it feels clearer and more practical than the legal text alone.
The code focuses on general-purpose AI models EU, like chatbots, large language models, and image tools. These systems are powerful and widely used, but they are also tricky to regulate using old frameworks. By introducing the code, Europe wants to give companies clarity while still keeping space for innovation.
What the code actually covers?
The code deals with three big issues: copyright, transparency, and safety. Each area has clear guidance designed to help firms prepare.
Copyright rules
Copyright has been one of the most controversial aspects of AI. Training models often involves vast amounts of data, much of which may contain copyrighted material. The EU AI Act copyright rules require providers to respect creators’ rights and be transparent about how data is used.
The code guides companies to:
- Develop clear copyright policies.
- Identify and manage data where rights are reserved.
- Create contact points for complaints or rights issues.
- Share information on whether copyrighted works were included in training.
This gives creators more protection while offering firms a predictable compliance pathway.
Transparency obligations
The second pillar is transparency. The code introduces a Model Documentation Form, a structured template to record details about each AI system. Companies must disclose information such as:
Companies must disclose:
- The nature and size of training data.
- Energy usage during training.
- Known model limitations and risks.
- Version history and updates, kept for up to ten years.
This ensures regulators and downstream users understand how the model was built. Transparency also helps build public trust, which is crucial for adoption.
Safety and systemic risks
Finally, the code addresses safety. Some general-purpose AI models may reach a level where they pose systemic risks, such as widespread misinformation or cybersecurity threats. For those advanced models, the code recommends robust risk management, stress testing, and monitoring.
Not every model falls into this category. The focus is on cutting-edge systems with global reach. Smaller companies and startups receive simplified obligations to avoid stifling innovation.
For models with systemic risk, companies should:
- Adopt state-of-the-art risk management practices.
- Carry out stress testing and monitoring for potential harms.
- Establish safeguards against misuse and security threats.
- Apply simplified measures for startups and SMEs where possible.
Why companies should pay attention?
The code is voluntary, but ignoring it carries risks. When the EU AI Act becomes enforceable, firms that followed the code will be better prepared. They will also demonstrate good-faith compliance, which may reduce regulatory pressure.
Failure to comply with the EU AI rules, once binding, could result in fines of up to seven per cent of global turnover. That alone is reason enough for companies to take the code seriously.
For many, the code also provides practical tools. The documentation forms and copyright policies save companies from creating processes from scratch. It offers a shared standard across Europe, making compliance more consistent and less fragmented.
Which companies are signing up?
Reactions from the tech sector have been mixed. Several leading firms, including Google, Microsoft, and OpenAI, have agreed to sign the code. They view it as aligned with their own efforts on responsible AI.
Others remain sceptical. Meta declined to sign, arguing that parts of the code go beyond the EU AI Act and could limit innovation. Elon Musk’s xAI chose to sign only the safety chapter, rejecting the transparency and copyright sections.
These differences highlight the challenges of balancing innovation with regulation. Yet the overall trend suggests most large AI providers see value in adopting the code early.
What AI compliance means in practice?
So, what should companies actually do to prepare? Here are the main steps:
- Review training data policies – Ensure copyrighted material is identified and handled under clear rules.
- Create documentation systems – Adopt the model forms provided in the code to track and share information.
- Establish transparency teams – Assign staff to manage disclosures, licensing issues, and contact points.
- Plan for systemic risks – Larger firms should build risk assessment frameworks for advanced models.
- Engage with the AI Office – Early engagement will smooth compliance and reduce the chance of disputes.
For startups and SMEs, the process may look simpler. The EU has designed lighter compliance pathways for smaller players, recognising their limited resources.
The wider impact on AI regulation Europe
This code is not just about Europe. It sets a global example. Other regions, from the US to Asia, are watching closely. If successful, the AI Code of Practice EU could become a blueprint for global standards.
For tech companies, this means one thing: what happens in Europe may soon shape practices worldwide. By preparing for compliance here, firms may gain an edge in international markets.
Distilled
The EU AI Act has set the stage for a new era of artificial intelligence governance. With its AI Code of Practice EU, Europe offers companies a roadmap to meet rules on copyright, transparency, and safety.
For tech firms, the decision is simple but strategic: sign up, engage early, and prepare for AI compliance or risk higher costs and regulatory scrutiny later.
The EU has drawn a line in the sand. The future of AI in Europe will be built on transparency, respect for rights, and accountability. For companies across the world, this is the signal: compliance is no longer optional. It is the foundation of trust in the age of artificial intelligence.