EU AI Act Handbook


A comprehensive guide for understanding and navigating the EU Artificial Intelligence Act.

Why this handbook exists
Welcome to the AI Green Bytes EU AI Act Handbook. This document serves as a comprehensive guide for our team to understand and navigate the complexities of the European Union's Artificial Intelligence Act.
As a leading provider of sovereign, green AI infrastructure in Europe, our commitment to ethical practices and full regulatory compliance is paramount. This handbook ensures that we, as a company, and our clients can innovate responsibly within the legal framework set by the EU.
Our mission is to support the AI revolution with essential, sustainable infrastructure. A core part of that mission is building trust. By proactively embracing the principles of the EU AI Act, we not only de-risk our operations but also strengthen our value proposition to clients who are building the next generation of AI applications.
This handbook is a living document. As the AI Act evolves and new guidelines are published, we will update it to reflect the latest requirements and best practices. The regulation is being implemented in phases through 2027, with ongoing secondary legislation and guidance from the European Commission.
Key resources we monitor for updates include the official EU AI Act text, the artificialintelligenceact.eu portal, and the EU AI Act Newsletter by Risto Uuk at the Future of Life Institute.

The world's first comprehensive AI law
The Act categorises AI systems into four tiers: unacceptable risk, high risk, limited risk, and minimal risk. Its primary goal is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and under human oversight.
The regulation applies not only to EU-based entities but also to providers and deployers outside the EU whose AI system output is used within the Union.
As a provider of GPU-as-a-Service (GPUaaS) and AI inference platforms, AI Green Bytes operates at a critical layer of the AI value chain. While many of our clients are the "providers" or "deployers" of AI systems, our infrastructure plays a crucial role. Understanding the Act is essential for us to fulfil our role at each risk level, summarised in the table below.
| Risk Level | Description | Examples | Our Role |
|---|---|---|---|
| Unacceptable | AI practices that are a clear threat to the safety, livelihoods, and rights of people. | Social scoring, real-time remote biometric identification in public spaces, manipulative AI. | Ensure our platform is not used for prohibited applications. |
| High | AI systems with high potential to harm health, safety, fundamental rights, or the environment. | AI in critical infrastructure, medical devices, recruitment, law enforcement, education. | Provide secure, robust platform enabling clients to meet high-risk requirements. |
| Limited | AI systems that require transparency so users know they are interacting with AI. | Chatbots, deepfake generators, AI-generated content systems. | Advise clients on implementing transparency measures. |
| Minimal | The vast majority of AI systems. Largely unregulated under the Act. | AI-enabled video games, spam filters, inventory management systems. | No specific legal obligations under the Act. |
Unacceptable risk — what is banned
The AI Act explicitly bans certain AI applications that are deemed to create an "unacceptable risk." These practices are considered contrary to EU values and fundamental rights. As a responsible infrastructure provider, AI Green Bytes will not knowingly support the development or deployment of these systems on our platform.
Our Acceptable Use Policy has been updated to explicitly prohibit the use of our infrastructure for any of the banned practices listed above. During client onboarding, we will assess the intended use of AI systems to ensure they do not fall into these categories.
If we become aware that our platform is being used for prohibited purposes, we will take immediate action in accordance with our terms of service.

Strict requirements, not bans
This is the most critical category for AI Green Bytes and our clients. High-risk AI systems are not banned but are subject to strict requirements before they can be placed on the market or put into service.
An AI system is classified as high-risk if it falls under one of two categories: it is a safety component of a product covered by existing EU legislation listed in Annex I (e.g., medical devices, machinery, toys), or it is used in one of the sensitive areas listed in Annex III.
The high-risk obligations for Annex III systems take effect on 2 August 2026, while Annex I obligations follow on 2 August 2027.
The following areas are designated as high-risk under Annex III:
Biometric identification and categorisation.
Critical infrastructure (e.g., energy, transport, water).
Education and vocational training.
Employment, worker management, and access to self-employment.
Access to essential private and public services (e.g., credit scoring, insurance).
Law enforcement.
Migration, asylum, and border control management.
Administration of justice and democratic processes.
Providers of high-risk AI systems must comply with a comprehensive set of requirements:
A risk management system maintained throughout the system's lifecycle.
Data governance ensuring training, validation, and testing data are relevant and representative.
Technical documentation demonstrating compliance.
Record-keeping through automatic logging of events.
Transparency and provision of information to deployers.
Human oversight measures.
Appropriate levels of accuracy, robustness, and cybersecurity.
As an infrastructure provider, we are a key enabler for our clients' compliance. We provide:
Secure, reliable infrastructure suitable for high-risk AI workloads.
EU data residency on sovereign infrastructure.
Platform support for logging and record-keeping requirements.
Sustainable, green compute aligned with the Act's attention to environmental impact.
Rules for foundation models
The GPAI rules became enforceable on 2 August 2025, making this an area where compliance is already required.
All providers of GPAI models must:
Draw up and maintain technical documentation, including information about the training and testing process and evaluation results.
Provide information and documentation to downstream providers who intend to integrate the GPAI model into their own AI systems.
Establish a policy to comply with the EU Copyright Directive, particularly regarding the right of copyright holders to opt out of text and data mining.
Publish a sufficiently detailed summary of the training data used, following a template provided by the AI Office.
Providers may demonstrate compliance by adhering to voluntary codes of practice until European harmonised standards are published.
Many of our clients deploy or fine-tune GPAI models on our infrastructure. While the primary obligations fall on the model providers, we play a supporting role by:
Providing the compute infrastructure that enables model evaluations and adversarial testing.
Ensuring our platform supports the logging and documentation requirements.
Offering data-sovereign infrastructure that helps providers comply with EU data residency expectations.
Monitoring the GPAI models being deployed on our platform and working with providers to ensure they are aware of their obligations.

Our internal framework
AI Green Bytes is establishing a formal AI Act compliance framework to ensure we meet our obligations and support our clients effectively. This framework includes:
The AI Act establishes a multi-layered governance structure:
The EU AI Office within the European Commission, which supervises GPAI models and coordinates enforcement.
The European Artificial Intelligence Board, composed of Member State representatives.
National competent authorities (notifying authorities and market surveillance authorities) in each Member State.
A scientific panel of independent experts and an advisory forum of stakeholders.

Key dates and milestones
The EU AI Act follows a phased implementation approach. After entering into force on 1 August 2024, different provisions become applicable at different times. This gives organisations time to prepare and adapt their systems and processes.
As of February 2026, the first three milestones have already passed. The prohibitions on unacceptable risk AI, the GPAI codes of practice, and the GPAI rules are all now in effect. The next critical milestone is August 2026, when high-risk AI obligations under Annex III become enforceable.
2 February 2025: Prohibitions on unacceptable risk AI systems take effect. Social scoring, manipulative AI, and untargeted facial recognition scraping are banned.
2 May 2025: Codes of practice for General Purpose AI (GPAI) must be finalised. These cover transparency, copyright, and training data obligations.
2 August 2025: General Purpose AI rules become enforceable. Member State competent authorities must be appointed. Annual Commission review begins.
2 February 2026: Commission issues implementing acts creating a template for high-risk AI providers' post-market monitoring plan.
2 August 2026: Obligations for high-risk AI systems listed in Annex III take effect. This includes AI in biometrics, critical infrastructure, education, employment, law enforcement, and more. Penalties and regulatory sandboxes also apply.
2 August 2027: Obligations for high-risk AI systems that are safety components of products covered by existing EU legislation (e.g., medical devices, toys, vehicles).
31 December 2030: Obligations for AI systems that are components of large-scale IT systems in freedom, security, and justice (e.g., Schengen Information System).
The AI Act establishes significant penalties for non-compliance:
Up to €35 million or 7% of global annual turnover (whichever is higher) for violations of prohibited AI practices.
Up to €15 million or 3% of global annual turnover for violations of other provisions, including high-risk AI requirements.
Up to €7.5 million or 1% of global annual turnover for supplying incorrect, incomplete, or misleading information to authorities.
For SMEs and startups, the lower of the two amounts (fixed or percentage) applies, providing some proportionality.
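The fine structure above reduces to a simple rule: take the higher of the fixed cap and the turnover percentage, except for SMEs and startups, where the lower of the two applies. A minimal sketch of that arithmetic (the function name and interface are illustrative, not from the Act, and this is not legal advice):

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             global_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative maximum AI Act fine for a given violation tier.

    The Act caps fines at the *higher* of a fixed amount and a percentage
    of global annual turnover; for SMEs and startups, the *lower* of the
    two applies instead.
    """
    pct_amount = turnover_pct * global_turnover_eur
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# Prohibited-practice violation, EUR 1 billion turnover:
# higher of EUR 35M and 7% of EUR 1B, i.e. a cap of about EUR 70M.
print(f"{max_fine(35_000_000, 0.07, 1_000_000_000):,.0f}")
# Same violation for an SME: the lower of the two, i.e. EUR 35M.
print(f"{max_fine(35_000_000, 0.07, 1_000_000_000, is_sme=True):,.0f}")
```

For smaller companies the percentage term often falls below the fixed cap, in which case the fixed amount governs the non-SME case as well.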
Checklists and communication
Before onboarding a new client project, our team should consider the following questions:
1. What is the intended purpose of the AI system?
2. Does it fall into any of the "unacceptable risk" categories listed in Chapter 2?
3. Is it likely to be classified as "high-risk" under Annex III?
4. If high-risk, is the client aware of their obligations as a "provider" or "deployer"?
5. What infrastructure requirements will they have to meet their compliance needs (e.g., logging, data governance, cybersecurity)?
6. Does the client use or deploy any GPAI models? If so, are they aware of the GPAI obligations?
7. Is the client's data processing compliant with GDPR and other relevant data protection laws?
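For teams that record onboarding answers in tooling, the first questions above can be sketched as an automated first-pass triage. Everything here is an assumption for illustration (the category names, the `ProjectIntake` type, and the `triage` helper are invented); the output is a prompt for human review, never a legal determination:

```python
from dataclasses import dataclass, field

# Hypothetical category tags, loosely based on the examples in this handbook.
PROHIBITED_USES = {"social_scoring", "realtime_remote_biometric_id", "manipulative_ai"}
ANNEX_III_AREAS = {"biometrics", "critical_infrastructure", "education",
                   "employment", "law_enforcement"}

@dataclass
class ProjectIntake:
    """Minimal record of a client's answers to the onboarding questions."""
    intended_purpose: str
    use_categories: set = field(default_factory=set)
    uses_gpai: bool = False

def triage(project: ProjectIntake) -> str:
    """Return a first-pass risk note for human review (not a legal ruling)."""
    if project.use_categories & PROHIBITED_USES:
        return "unacceptable: decline onboarding"
    if project.use_categories & ANNEX_III_AREAS:
        return "high-risk: verify provider/deployer obligations and logging needs"
    return "limited/minimal: confirm transparency measures if user-facing"

print(triage(ProjectIntake("CV screening assistant", {"employment"}, uses_gpai=True)))
```

Keeping the tags as data rather than hard-coded branches makes it easy to extend the lists as Commission guidance clarifies the Annex III scope.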
We should proactively communicate our commitment to compliance. Key messages include:
"AI Green Bytes is your trusted partner for building and deploying AI in Europe."
"Our sovereign, green infrastructure is designed to help you meet the requirements of the EU AI Act."
"We provide the security, reliability, and data residency you need for even high-risk AI applications."
"Our team is trained on the EU AI Act and can advise you on the infrastructure aspects of your compliance journey."
The regulatory landscape is evolving rapidly. Recent developments include:
The EU AI Act Newsletter reports on ongoing debates about simplification versus deregulation, with industry groups pushing for the AI Omnibus proposals to go further while civil society warns against rolling back fundamental protections.
The European Commission has launched a whistleblower tool for reporting suspected breaches of the AI Act directly to the EU AI Office.
CEN and CENELEC have announced exceptional measures to speed up the development of European standards supporting the AI Act.
A first draft of the Code of Practice on transparency of AI-generated content has been released.
The European Commission is inviting feedback on draft rules for establishing AI regulatory sandboxes.
We will continue to monitor these developments and update this handbook accordingly.
Further reading and contacts
For any questions regarding the EU AI Act or this handbook, please contact our AI Compliance Officer.
This handbook is maintained by the Legal and Compliance team at AI Green Bytes. For the latest version, always refer to the digital copy at this URL.