AI Green Bytes
Version 1.0 · February 2026

AI Green Bytes
EU AI Act
Handbook

A comprehensive guide for understanding and navigating the EU Artificial Intelligence Act.

Regulation (EU) 2024/1689 · In force since 1 August 2024


Introduction

Why this handbook exists

Welcome

Welcome to the AI Green Bytes EU AI Act Handbook. This document serves as a comprehensive guide for our team to understand and navigate the complexities of the European Union's Artificial Intelligence Act.

As a leading provider of sovereign, green AI infrastructure in Europe, our commitment to ethical practices and full regulatory compliance is paramount. This handbook ensures that we, as a company, and our clients can innovate responsibly within the legal framework set by the EU.

Our mission is to support the AI revolution with essential, sustainable infrastructure. A core part of that mission is building trust. By proactively embracing the principles of the EU AI Act, we not only de-risk our operations but also strengthen our value proposition to clients who are building the next generation of AI applications.

A Living Document

This handbook is a living document. As the AI Act evolves and new guidelines are published, we will update it to reflect the latest requirements and best practices. The regulation is being implemented in phases through 2027, with ongoing secondary legislation and guidance from the European Commission.

Key resources we monitor for updates include the official EU AI Act text, the artificialintelligenceact.eu portal, and the EU AI Act Newsletter by Risto Uuk at the Future of Life Institute.

Chapter 01

Understanding the EU AI Act

The world's first comprehensive AI law

What is the EU AI Act?

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. Published in the Official Journal of the EU on 12 July 2024 and entering into force on 1 August 2024, it follows a risk-based approach — meaning that the legal requirements for an AI system are tailored to the level of risk it poses.

The Act categorises AI systems into four tiers: unacceptable risk, high risk, limited risk, and minimal risk. Its primary goal is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and under human oversight.

The regulation applies not only to EU-based entities but also to providers and deployers outside the EU whose AI system output is used within the Union.

Why It Matters for AI Green Bytes

As a provider of GPU-as-a-Service (GPUaaS) and AI inference platforms, AI Green Bytes operates at a critical layer of the AI value chain. While many of our clients are the "providers" or "deployers" of AI systems, our infrastructure plays a crucial role. Understanding the Act is essential for us to:

Support Our Clients — advise them on building and deploying compliant AI applications on our platform.
Manage Our Risk — ensure our own internal systems and services are compliant.
Strengthen Our Market Position — differentiate ourselves as a trusted, compliance-aware infrastructure partner in the European market, particularly given our focus on data sovereignty and sustainability.

Key Definitions

AI System — A machine-based system designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.
Provider — A natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.
Deployer (User) — Any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI is used in the course of a personal non-professional activity.
General Purpose AI (GPAI) — An AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market.

Risk Classification Framework

Unacceptable Risk

AI practices that are a clear threat to the safety, livelihoods, and rights of people.

Examples: Social scoring, real-time remote biometric identification in public spaces, manipulative AI.

Our role: Ensure our platform is not used for prohibited applications.

High Risk

AI systems with high potential to harm health, safety, fundamental rights, or the environment.

Examples: AI in critical infrastructure, medical devices, recruitment, law enforcement, education.

Our role: Provide secure, robust platform enabling clients to meet high-risk requirements.

Limited Risk

AI systems that require transparency so users know they are interacting with AI.

Examples: Chatbots, deepfake generators, AI-generated content systems.

Our role: Advise clients on implementing transparency measures.

Minimal Risk

The vast majority of AI systems. Largely unregulated under the Act.

Examples: AI-enabled video games, spam filters, inventory management systems.

Our role: No specific legal obligations under the Act.
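For internal tooling, the four tiers above can be captured in a small lookup used during first-pass onboarding triage. This is an illustrative sketch, not a legal classification: the example mapping, the function names, and the escalate-by-default rule are our own assumptions, and any real determination requires review against Article 5 and Annexes I and III.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative examples drawn from the four tiers above; a real determination
# requires legal review against Article 5 and Annexes I and III.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """First-pass triage; unknown use cases default to HIGH to force manual review."""
    return EXAMPLE_TIERS.get(use_case.strip().lower(), RiskTier.HIGH)
```

Defaulting unknown use cases to the high-risk tier is deliberate: it routes anything unfamiliar to a human reviewer rather than silently approving it.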

Chapter 02

Prohibited AI Practices

Unacceptable risk — what is banned

Overview

The AI Act explicitly bans certain AI applications that are deemed to create an "unacceptable risk." These practices are considered contrary to EU values and fundamental rights. As a responsible infrastructure provider, AI Green Bytes will not knowingly support the development or deployment of these systems on our platform.

These prohibitions took effect on 2 February 2025 — the first milestone in the Act's implementation timeline.

Banned Practices

Subliminal, Manipulative, or Deceptive Techniques — AI systems that use techniques beyond a person's consciousness to materially distort behaviour in a way likely to cause physical or psychological harm.
Exploitation of Vulnerabilities — AI that exploits the vulnerabilities of a specific group of persons due to their age, physical or mental disability, or socio-economic situation.
Social Scoring — AI systems used by public authorities (or on their behalf) for evaluating or classifying people based on social behaviour or personal characteristics, leading to detrimental treatment.
Untargeted Facial Recognition Scraping — Creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
Emotion Inference in Workplaces and Education — AI systems that infer emotions of individuals in the workplace or educational institutions, except for medical or safety reasons.
Biometric Categorisation of Sensitive Attributes — AI systems that categorise individuals based on biometric data to deduce race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation.
Real-time Remote Biometric Identification — Use in publicly accessible spaces for law enforcement, except in narrowly defined situations such as searching for missing children or preventing imminent terrorist threats.

What This Means for AI Green Bytes

Our Acceptable Use Policy has been updated to explicitly prohibit the use of our infrastructure for any of the banned practices listed above. During client onboarding, we will assess the intended use of AI systems to ensure they do not fall into these categories.

If we become aware that our platform is being used for prohibited purposes, we will take immediate action in accordance with our terms of service.

Chapter 03

High-Risk AI Systems

Strict requirements, not bans

Identifying High-Risk AI

This is the most critical category for AI Green Bytes and our clients. High-risk AI systems are not banned but are subject to strict requirements before they can be placed on the market or put into service.

An AI system is classified as high-risk if it falls under one of two categories: it is a safety component of a product covered by existing EU legislation listed in Annex I (e.g., medical devices, machinery, toys), or it is used in one of the sensitive areas listed in Annex III.

The high-risk obligations for Annex III systems take effect on 2 August 2026, while Annex I obligations follow on 2 August 2027.

Annex III Use Cases

The following areas are designated as high-risk under Annex III:

Biometrics — Remote biometric identification systems (non-banned uses), biometric categorisation, and emotion recognition.
Critical Infrastructure — AI used to manage and operate critical digital infrastructure, road traffic, and supply of water, gas, heating, and electricity.
Education and Vocational Training — Systems that determine access to education, evaluate students, assess appropriate levels of education, or monitor prohibited behaviour during exams.
Employment and Workers Management — AI used for recruitment, job application filtering, promotions, task allocation, monitoring, and evaluation of work performance.
Essential Public and Private Services — Systems used to evaluate creditworthiness, determine access to public benefits, evaluate insurance risk, or triage emergency calls.
Law Enforcement — AI used for risk assessments of natural persons, polygraphs, evaluation of evidence reliability, profiling in criminal investigations, and crime analytics.
Migration, Asylum, and Border Control — AI used in the context of migration and border management, including risk assessments and document authentication.
Administration of Justice — AI intended to assist judicial authorities in researching and interpreting facts and the law.

Obligations for Providers

Providers of high-risk AI systems must comply with a comprehensive set of requirements:

Risk Management System — Establish and maintain a risk management system throughout the AI system's lifecycle.
Data Governance — Implement robust data governance practices for training, validation, and testing datasets.
Technical Documentation — Maintain detailed technical documentation demonstrating compliance.
Record-Keeping — Design the system to automatically record events (logging) relevant to identifying risks.
Instructions for Use — Provide clear, comprehensive instructions for deployers.
Human Oversight — Design the system to allow effective human oversight during use.
Accuracy, Robustness, and Cybersecurity — Achieve appropriate levels of accuracy, robustness, and cybersecurity protection.
Quality Management System — Implement a quality management system covering all aspects of compliance.

AI Green Bytes' Role

As an infrastructure provider, we are a key enabler for our clients' compliance. We provide:

Secure Infrastructure — A highly secure environment with immersion-cooled GPU clusters to protect sensitive data and models.
Robustness and Reliability — High-availability infrastructure to ensure the consistent operation of high-risk systems.
Data Sovereignty — Guaranteed data residency within the EU (Iceland and Nordic region), a core tenet of our business model.
Logging and Monitoring — Infrastructure-level logging capabilities that support our clients' record-keeping obligations.
Support and Guidance — We work with our clients to help them understand the infrastructure-related aspects of their compliance obligations.
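As a minimal sketch of what infrastructure-level audit logging might look like, the snippet below emits one JSON line per event with a UTC timestamp, suitable for append-only storage. The field names and event taxonomy are illustrative assumptions, not mandated by the Act:

```python
import json
from datetime import datetime, timezone

def audit_event(client_id: str, workload_id: str, event: str, detail: dict) -> str:
    """Serialise one audit record as a JSON line with a UTC timestamp."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "client_id": client_id,
        "workload_id": workload_id,
        "event": event,
        "detail": detail,
    }
    # sort_keys keeps the line format stable, which simplifies downstream parsing
    return json.dumps(record, sort_keys=True)

# Example: record an inference request on a client workload.
line = audit_event("client-42", "wl-7", "inference.request", {"status": "ok"})
```

One structured line per event keeps the log machine-readable, which is what makes it usable as evidence for a client's own record-keeping obligations.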
Chapter 04

General Purpose AI

Rules for foundation models

What is GPAI?

General Purpose AI (GPAI) models are a major focus of the Act. These are large, flexible models that can be adapted to a wide range of tasks — most notably, large language models (LLMs) like GPT, Claude, Llama, and Mistral.

The GPAI rules became enforceable on 2 August 2025, making this an area where compliance is already required.

Obligations for All GPAI Providers

All providers of GPAI models must:

Draw up and maintain technical documentation, including information about the training and testing process and evaluation results.

Provide information and documentation to downstream providers who intend to integrate the GPAI model into their own AI systems.

Establish a policy to comply with the EU Copyright Directive, particularly regarding the right of copyright holders to opt out of text and data mining.

Publish a sufficiently detailed summary of the training data used, following a template provided by the AI Office.

Providers may demonstrate compliance by adhering to voluntary codes of practice until European harmonised standards are published.

GPAI with Systemic Risk

GPAI models trained with a very large amount of computing power — specifically, more than 10²⁵ floating point operations (FLOPs) — are presumed to have systemic risk. These models face additional obligations:

Model Evaluations — Perform model evaluations in accordance with standardised protocols and tools, including adversarial testing.
Systemic Risk Assessment — Assess and mitigate possible systemic risks, including their sources.
Incident Reporting — Track, document, and report serious incidents and possible corrective measures to the AI Office and relevant national authorities.
Cybersecurity — Ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.
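For a rough sense of scale, a widely used rule of thumb (not part of the Act itself) estimates training compute as about six FLOPs per parameter per training token. The sketch below applies that assumption to a hypothetical model; the helper name and the example figures are illustrative:

```python
def training_flops_estimate(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb training compute: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold under the Act

# Example: a hypothetical 70B-parameter model trained on 15T tokens.
flops = training_flops_estimate(70e9, 15e12)                 # ~6.3e24
presumed_systemic = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS    # just under the line
```

Even under this crude estimate, only the very largest training runs cross the threshold, which matches the Act's intent of reserving the systemic-risk regime for frontier-scale models.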

Relevance to AI Green Bytes

Many of our clients deploy or fine-tune GPAI models on our infrastructure. While the primary obligations fall on the model providers, we play a supporting role by:

Providing the compute infrastructure that enables model evaluations and adversarial testing.

Ensuring our platform supports the logging and documentation requirements.

Offering data-sovereign infrastructure that helps providers comply with EU data residency expectations.

Monitoring the GPAI models being deployed on our platform and working with providers to ensure they are aware of their obligations.

Chapter 05

Compliance & Governance

Our internal framework

Internal Compliance Framework

AI Green Bytes is establishing a formal AI Act compliance framework to ensure we meet our obligations and support our clients effectively. This framework includes:

AI Compliance Officer — A designated individual responsible for overseeing our AI Act compliance efforts, staying current with regulatory developments, and serving as the primary point of contact for compliance matters.
Client Onboarding Process — A structured process to assess the risk level of AI systems our clients intend to deploy, ensuring we understand the regulatory implications before infrastructure is provisioned.
Acceptable Use Policy — An updated policy that explicitly prohibits the use of our platform for banned AI practices and sets clear expectations for high-risk deployments.
Internal Training — Regular training for all employees on the EU AI Act and this handbook, ensuring company-wide awareness and competence.

Roles and Responsibilities

Sales and Business Development — Responsible for initial client screening and communicating our compliance stance. They are the first line of defence in ensuring we do not onboard clients with prohibited use cases.
Solutions Architects — Responsible for advising clients on designing compliant solutions on our platform. They help clients understand how our infrastructure supports their compliance needs (logging, security, data residency).
Operations — Responsible for maintaining the security, logging, and robustness of our infrastructure. They ensure our platform meets the technical standards expected of a provider supporting high-risk AI deployments.
Legal and Compliance — Responsible for maintaining this handbook, monitoring regulatory developments, and advising the company on all legal matters related to the AI Act.

EU Governance Structure

The AI Act establishes a multi-layered governance structure:

The AI Office — Established within the European Commission, it oversees the implementation of the GPAI rules and monitors compliance by GPAI model providers. Downstream providers can lodge complaints regarding upstream providers' infringements.
National Competent Authorities — Each Member State must appoint at least one national competent authority responsible for supervising the application and implementation of the AI Act.
European AI Board — Composed of representatives from each Member State, it advises and assists the Commission and Member States to facilitate consistent application of the regulation.
Scientific Panel of Independent Experts — Provides technical expertise and advice to the AI Office and national authorities, particularly regarding GPAI models with systemic risk.
Chapter 06

Implementation Timeline

Key dates and milestones

Phased Implementation

The EU AI Act follows a phased implementation approach. After entering into force on 1 August 2024, different provisions become applicable at different times. This gives organisations time to prepare and adapt their systems and processes.

As of February 2026, the first three milestones have already passed. The prohibitions on unacceptable risk AI, the GPAI codes of practice, and the GPAI rules are all now in effect. The next critical milestone is August 2026, when high-risk AI obligations under Annex III become enforceable.

Feb 2, 2025 · Completed

Prohibited AI Practices

Prohibitions on unacceptable risk AI systems take effect. Social scoring, manipulative AI, and untargeted facial recognition scraping are banned.

Impact: Enforce Acceptable Use Policy on our platform.

May 2, 2025 · Completed

GPAI Codes of Practice

Codes of practice for General Purpose AI (GPAI) must be finalised. These cover transparency, copyright, and training data obligations.

Impact: Update guidance for clients deploying GPAI models.

Aug 2, 2025 · Completed

GPAI Rules Apply

General Purpose AI rules become enforceable. Member State competent authorities must be appointed. Annual Commission review begins.

Impact: Monitor GPAI usage on our infrastructure.

Feb 2, 2026 · Current

Post-Market Monitoring Template

The Commission issues implementing acts creating a template for high-risk AI providers' post-market monitoring plans.

Impact: Prepare monitoring capabilities for high-risk clients.

Aug 2, 2026 · Upcoming

High-Risk AI (Annex III)

Obligations for high-risk AI systems listed in Annex III take effect. This includes AI in biometrics, critical infrastructure, education, employment, law enforcement, and more. Penalties and regulatory sandboxes also apply.

Impact: Ensure infrastructure and support services are fully ready for clients deploying high-risk systems.

Aug 2, 2027 · Upcoming

High-Risk AI (Annex I)

Obligations for high-risk AI systems that are safety components of products covered by existing EU legislation (e.g., medical devices, toys, vehicles).

Impact: Continued support for clients in regulated product sectors.

End of 2030 · Upcoming

Large-Scale IT Systems

Obligations for AI systems that are components of large-scale IT systems in the area of freedom, security, and justice (e.g., the Schengen Information System).

Impact: Long-term compliance planning for government and public sector clients.

Penalties

The AI Act establishes significant penalties for non-compliance:

Up to €35 million or 7% of global annual turnover (whichever is higher) for violations of prohibited AI practices.

Up to €15 million or 3% of global annual turnover for violations of other provisions, including high-risk AI requirements.

Up to €7.5 million or 1% of global annual turnover for supplying incorrect, incomplete, or misleading information to authorities.

For SMEs and startups, the lower of the two amounts (fixed or percentage) applies, providing some proportionality.
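The tiers above can be expressed as a small helper for internal what-if estimates. This is a simplified sketch of the fine caps as summarised in this chapter, not legal advice; actual fines are set case by case by the competent authorities, and the tier labels used here are our own shorthand:

```python
def max_fine(turnover_eur: float, tier: str, sme: bool = False) -> float:
    """Maximum administrative fine per the tiers summarised above.

    Standard rule: the HIGHER of the fixed cap and the percentage of
    global annual turnover; for SMEs and startups the LOWER applies.
    """
    caps = {
        "prohibited": (35_000_000, 0.07),       # banned AI practices
        "other": (15_000_000, 0.03),            # incl. high-risk requirements
        "misleading_info": (7_500_000, 0.01),   # false info to authorities
    }
    fixed, pct = caps[tier]
    candidates = (fixed, pct * turnover_eur)
    return min(candidates) if sme else max(candidates)
```

For example, a company with €1 billion global turnover faces a cap of €70 million (7%) for a prohibited-practice violation, while an SME with the same turnover would be capped at the fixed €35 million.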

Chapter 07

Practical Guidance

Checklists and communication

Project Onboarding Checklist

Before onboarding a new client project, our team should consider the following questions:

1. What is the intended purpose of the AI system?
2. Does it fall into any of the "unacceptable risk" categories listed in Chapter 2?
3. Is it likely to be classified as "high-risk" under Annex III?
4. If high-risk, is the client aware of their obligations as a "provider" or "deployer"?
5. What infrastructure requirements will they have to meet their compliance needs (e.g., logging, data governance, cybersecurity)?
6. Does the client use or deploy any GPAI models? If so, are they aware of the GPAI obligations?
7. Is the client's data processing compliant with GDPR and other relevant data protection laws?
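These questions can also be kept as structured data so onboarding tooling can track which answers are still outstanding before provisioning. A minimal sketch, with illustrative names and abbreviated question text:

```python
ONBOARDING_QUESTIONS = [
    "What is the intended purpose of the AI system?",
    "Does it fall into any 'unacceptable risk' category (Chapter 2)?",
    "Is it likely to be classified as high-risk under Annex III?",
    "If high-risk, is the client aware of their provider/deployer obligations?",
    "What infrastructure is needed for compliance (logging, data governance, cybersecurity)?",
    "Does the client use or deploy GPAI models, and are they aware of the GPAI obligations?",
    "Is the client's data processing GDPR-compliant?",
]

def unanswered(answers: dict) -> list:
    """Return the 1-based numbers of checklist questions that still lack an answer."""
    return [i for i in range(1, len(ONBOARDING_QUESTIONS) + 1) if not answers.get(i)]
```

An onboarding review would then only proceed once `unanswered(...)` returns an empty list for the client's record.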

Client Communication

We should proactively communicate our commitment to compliance. Key messages include:

"AI Green Bytes is your trusted partner for building and deploying AI in Europe."

"Our sovereign, green infrastructure is designed to help you meet the requirements of the EU AI Act."

"We provide the security, reliability, and data residency you need for even high-risk AI applications."

"Our team is trained on the EU AI Act and can advise you on the infrastructure aspects of your compliance journey."

Staying Current

The regulatory landscape is evolving rapidly. Recent developments include:

The EU AI Act Newsletter reports on ongoing debates about simplification versus deregulation, with industry groups pushing for the AI Omnibus proposals to go further while civil society warns against rolling back fundamental protections.

The European Commission has launched a whistleblower tool for reporting suspected breaches of the AI Act directly to the EU AI Office.

CEN and CENELEC have announced exceptional measures to speed up the development of European standards supporting the AI Act.

A first draft of the Code of Practice on transparency of AI-generated content has been released.

The European Commission is inviting feedback on draft rules for establishing AI regulatory sandboxes.

We will continue to monitor these developments and update this handbook accordingly.

Chapter 08

Resources & References

Further reading and contacts

Glossary

Annex I — List of existing EU harmonisation legislation whose products may contain high-risk AI components.
Annex III — List of high-risk AI use cases subject to strict requirements under the AI Act.
Conformity Assessment — The process of verifying whether a high-risk AI system meets the requirements of the Act before it can be placed on the market.
CE Marking — The marking affixed to high-risk AI systems to indicate conformity with the Act.
Codes of Practice — Voluntary frameworks developed by the AI Office to help GPAI providers demonstrate compliance.
Harmonised Standards — European standards developed by CEN/CENELEC that, when followed, create a presumption of conformity with the Act.
Regulatory Sandbox — A controlled environment established by national authorities to allow the testing of innovative AI systems under regulatory supervision.
Systemic Risk — Risk associated with GPAI models that have a significant impact due to their reach or because of actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole.

Contact

For any questions regarding the EU AI Act or this handbook, please contact our AI Compliance Officer.

This handbook is maintained by the Legal and Compliance team at AI Green Bytes. For the latest version, always refer to the digital copy at this URL.