

EU AI Act Open-Source Exemption: The Circus of Conditions Continues

The EU AI Act's open-source exemption waives only three paperwork items. Copyright policies, training-data summaries, and the 10^25 FLOP systemic-risk rules still apply. Full enforcement hits August 2, 2026.

TL;DR
  • On April 10, 2026, the EU Commission clarified the AI Act's open-source GPAI exemption: three obligations waived, the rest stays.
  • Exemption covers Articles 53(1a-b) and 54. Copyright policy (53(1c)) and training-data summary (53(1d)) still required. Models above 10^25 FLOPs get no exemption regardless of license.
  • Full enforcement begins August 2, 2026. Models placed on EU market after August 2, 2025 owe immediate training-data summaries.

Brussels has spoken. Again. On April 10, 2026, the Commission pushed out another round of clarifications around the AI Act's open-source exemption, and the message to anyone who actually builds models is: yes, you can release open weights, but only after you have read 40 pages of caveats, passed three tests, and filed a copyright policy with the AI Office. If this sounds like a lighter touch, that is because the European definition of "lighter" is carrying a filing cabinet up a spiral staircase instead of two filing cabinets.

The headline version plays well. "Open-source models are exempt from certain GPAI obligations." The fine print version, once you read the actual guidance and the Code of Practice, makes clear the exemption applies to roughly three paragraphs of the Act, never applies to anything anyone would call a frontier model, and still leaves open-source providers on the hook for the two obligations that cost the most time: copyright compliance and training-data disclosure. Congratulations, Europe. You have invented the paperwork-optional paperwork track.

What the exemption actually exempts

Three obligations. That is the full list. If your model qualifies as "free and open-source" under the Commission's definition, you get a waiver from:

  • Article 53(1a): maintaining detailed technical documentation for authorities on request.
  • Article 53(1b): providing that same technical documentation to downstream providers who integrate your model.
  • Article 54: appointing an authorized representative physically present in the EU (applies to non-EU providers).

Everything else in Article 53 still applies. You still write a copyright compliance policy. You still publish a sufficiently detailed summary of your training data using the AI Office's template. Miss either one and the fines scale to 35 million euro or 7 percent of worldwide turnover, whichever is larger. The exemption gets you out of paperwork for bureaucrats. It does not get you out of paperwork for lawyers.

The three conditions you must pass

To qualify as "truly free and open-source" per the guidance, a model must clear all three of these hurdles:

  1. License. Apache 2.0, MIT, and OpenMDW are in. Any license with research-only clauses, non-commercial restrictions, or usage carve-outs that are not "specific, proportionate, and safety-oriented" is out. Llama's acceptable-use policy? Out. Most of the corporate "community" licenses? Out.
  2. Transparency. Weights, architecture information, and usage information must be publicly available. No sneaking the good weights behind a gated form.
  3. No monetization. You cannot charge for access, bundle the model with a paid service, run ads against it, or harvest user data as a condition of access. The Commission did at least carve out that hosting weights on an open repository like Hugging Face does not count as monetization. Thank you, Brussels, truly.
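The three conditions combine as a strict AND: fail any one and the exemption is gone. As a rough decision aid, that logic can be sketched in a few lines. This is an illustrative sketch, not legal advice; the field names and the checklist structure are my assumptions, not the Commission's actual test.

```python
from dataclasses import dataclass

# Hypothetical release metadata; field names are illustrative,
# not taken from the guidance itself.
@dataclass
class Release:
    license_id: str           # SPDX-style identifier
    weights_public: bool      # weights downloadable without gating
    architecture_public: bool
    usage_info_public: bool
    monetized: bool           # paid access, bundled service, ads, data harvesting

# Licenses the guidance names as acceptable (Apache 2.0, MIT, OpenMDW).
ACCEPTED_LICENSES = {"Apache-2.0", "MIT", "OpenMDW"}

def qualifies_for_exemption(r: Release) -> bool:
    """All three conditions must hold; failing any one voids the exemption."""
    license_ok = r.license_id in ACCEPTED_LICENSES
    transparent = (r.weights_public and r.architecture_public
                   and r.usage_info_public)
    return license_ok and transparent and not r.monetized
```

Note that a Llama-style community license fails at the first check, and a gated download fails at the second, before monetization is even considered.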

The monetization clause is the quiet killer. It rules out almost every commercial "open-weights" release from the major labs. Your startup built a paid API around an open model you trained yourself? Not exempt. Your company releases a model and sells premium fine-tunes? Not exempt. The exemption is designed for researchers and hobbyists, then dressed up in press-release language that makes it sound like it applies to industry.

The systemic-risk trap

Here is where the circus gets loud. The exemption does not apply to general-purpose AI models with "systemic risk." Systemic risk is defined by a compute threshold: any model trained with more than 10 to the 25th floating-point operations of cumulative compute. For reference, that covers GPT-4o, Grok 4, Mistral 2 Large, and effectively any frontier-scale model shipping today. Cross the threshold, and you owe the Commission full Article 55 compliance regardless of your license: model evaluation, systemic risk assessment, adversarial testing, incident reporting, cybersecurity measures, and energy efficiency reporting. Open weights or not.

The Act does leave a pressure valve. Under Recital 112, a developer can submit evidence arguing that "because of its specific characteristics, a general-purpose AI model exceptionally does not present systemic risks." Good luck with that. The subtext is that the Commission gets to decide what counts as exceptional, and the Commission has not traditionally treated "we trained it ourselves and shipped the weights for free" as an exceptional characteristic.

The bar is lower than it looks

The regular GPAI threshold sits at 10 to the 23rd FLOPs, roughly the compute it takes to train a one-billion-parameter model on a serious dataset. That puts SmolLM3-3B, Mistral 7B, Llama-3.2-3B, and most of what a mid-sized AI startup releases this year squarely inside the scope of the Act. Not exempt from GPAI status, not exempt from training-data summaries, not exempt from copyright policies. Only exempt from the three paperwork items listed above, and only if all three open-source conditions are cleanly met.
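The interaction of the two compute thresholds is easy to get wrong, so here is the tiering spelled out as a sketch. The threshold values come from the Act as described above; the function name, return strings, and the exact boundary semantics (strictly above 10^25, at or above 10^23) are my assumptions for illustration.

```python
GPAI_THRESHOLD = 1e23           # cumulative training FLOPs that trigger GPAI status
SYSTEMIC_RISK_THRESHOLD = 1e25  # above this, the open-source exemption never applies

def classify(training_flops: float) -> str:
    """Tier a model by cumulative training compute (illustration only)."""
    if training_flops > SYSTEMIC_RISK_THRESHOLD:
        return "systemic-risk GPAI: full Article 55 duties, no exemption"
    if training_flops >= GPAI_THRESHOLD:
        return "GPAI: copyright policy and training-data summary still owed"
    return "below GPAI threshold"
```

A 3B model trained on a serious token budget lands in the middle tier, which is exactly the point of this section: small and open does not mean out of scope.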

Anyone who assumed "open-source" meant "exempt from the AI Act" in any general sense has spent the last year reading marketing summaries instead of the actual legislation.

The finetuning loophole that might actually help

One provision in the guidance is genuinely useful. If you finetune an existing GPAI model, you only become a "provider" under the Act if your finetuning compute exceeds one third of the original model's training compute. For a 10 to the 23rd FLOP base model, that is about 3.3 times 10 to the 22nd FLOPs of additional compute before you inherit full provider obligations. Below that, you are a downstream user, and your documentation duty is limited to the changes you made plus a pointer to the original model's training data summary.

Translation for developers: LoRAs, QLoRAs, small SFT runs, and task-specific finetunes of open models almost never cross the one-third threshold. You inherit the base model's status, not a fresh provider obligation. That is the cleanest part of the guidance and the one piece that clearly helps the open-source ecosystem.
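The one-third rule is simple arithmetic, and it is worth seeing just how much headroom it leaves. A minimal sketch, with the function name and example compute figures as my own placeholders:

```python
def becomes_provider(base_flops: float, finetune_flops: float) -> bool:
    """One-third rule from the guidance: finetuning compute above one third
    of the base model's training compute makes you a provider in your own right."""
    return finetune_flops > base_flops / 3

# Worked example from the article: a 1e23 FLOP base model.
base = 1e23
threshold = base / 3            # ~3.3e22 FLOPs of headroom
# A typical LoRA or small SFT run is orders of magnitude below that
# (illustrative figure, not a measurement).
lora_run = 5e19
print(becomes_provider(base, lora_run))  # False
```

To cross the threshold on a 10^23 FLOP base model, your finetune would need roughly a third of a full pretraining run, which is why LoRAs and task-specific SFT essentially never trigger provider status.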

The enforcement deadline is real

The AI Office's full enforcement powers switch on August 2, 2026. At that point the Office can request information, order model recalls, mandate mitigations, and issue fines. Models placed on the EU market before August 2, 2025 have until August 2, 2027 to bring their training-data summary into compliance. Models placed on the market after August 2, 2025 owe the summary immediately.

Most open-source releases from the second half of 2025 and early 2026 are already inside the stricter deadline and are at various stages of scrambling to publish the summaries. The big labs have legal teams that already filed the template. The small ones are reading it for the first time this month.

The verdict: a better circus than before

Credit where it is due. The April 2026 clarification is better than the original Act. The finetuning ratio is sensible. The Hugging-Face-hosting-is-not-monetization note is sensible. Excluding non-commercial research licenses from the definition of open-source is at least a defensible position. The guidance reads like the Commission finally talked to actual developers before shipping.

But the overall structure is unchanged. Any model that matters commercially will be above 10 to the 25th FLOPs, which means the exemption does not apply, which means the rules kick in in full. Any model below that threshold still owes copyright policies and training summaries regardless of license. The "exemption" waives the smallest three items on the list. The rest stays, and the fines stay, and the August 2026 enforcement date stays. Europe's open-source developers get a slightly smaller filing cabinet to carry. They still carry it up the same staircase, and the ringmaster in Brussels is still selling tickets to the show.

Meanwhile, across the Atlantic, a 19-year-old with a Colab notebook finetunes Qwen3 and ships it on Hugging Face the same afternoon. No template. No summary. No representative. No staircase. That is the competitive picture this guidance does not change.

What to actually do

If you are shipping open weights from Europe, three concrete moves:

  • Pick a real open-source license. Apache 2.0 or MIT. Avoid "community," "research-only," and anything with downstream-user restrictions. OpenMDW is explicitly on the Commission's accepted list if you want something AI-specific.
  • Publish the training-data summary using the Commission's template. It is not optional for any model placed on the EU market after August 2, 2025.
  • Write a one-page copyright compliance policy, respect robots.txt and machine-readable opt-outs, and publish a complaint mechanism for rightsholders. This is the bare minimum under Article 53(1c), exemption or not.
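Of the items above, respecting robots.txt is the one you can partially automate today. A minimal sketch using Python's standard library; the robots.txt content and the "MyTrainingBot" user-agent string are placeholders, and in practice you would fetch the live file with `set_url()` and `read()` instead of parsing a literal:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration.
ROBOTS_TXT = """\
User-agent: MyTrainingBot
Disallow: /private/

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Check each URL before adding it to a training crawl.
print(rp.can_fetch("MyTrainingBot", "https://example.com/articles/1"))  # True
print(rp.can_fetch("MyTrainingBot", "https://example.com/private/x"))   # False
```

robots.txt is only the baseline; Article 53(1c) also expects you to honor other machine-readable opt-outs (for example TDM reservation signals), which need their own handling.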

Then get back to work. The enforcement deadline is four months out, the filing cabinet is already on your back, and the circus in Brussels will still be running the same act this time next year.
