AI models with systemic risks given pointers on how to comply with EU AI rules

Published by Global Banking & Finance Review

Posted on July 18, 2025

Last updated: January 22, 2026


By Foo Yun Chee

Understanding Systemic Risk in AI

BRUSSELS (Reuters) - The European Commission set out guidelines on Friday to help AI models that it has determined carry systemic risks, and that therefore face tougher obligations to mitigate potential threats, comply with the European Union's artificial intelligence regulation (AI Act).

Requirements for AI Model Compliance

The move aims to counter criticism from some companies about the AI Act's regulatory burden, while providing more clarity to businesses that face fines ranging from 7.5 million euros ($8.7 million) or 1.5% of turnover to 35 million euros or 7% of global turnover for violations.

Impact on Businesses and Technology Companies

The AI Act, which became law last year, will apply from August 2 to AI models with systemic risks and foundation models such as those made by Google, OpenAI, Meta Platforms, Anthropic and Mistral. Companies have until August 2 next year to comply with the legislation.

The Commission defines AI models with systemic risk as those with very advanced computing capabilities that could have a significant impact on public health, safety, fundamental rights or society.

The first group of models will have to carry out model evaluations, assess and mitigate risks, conduct adversarial testing, report serious incidents to the Commission and ensure adequate cybersecurity protection against theft and misuse.

General-purpose AI (GPAI) or foundation models will be subject to transparency requirements such as drawing up technical documentation, adopting copyright policies and providing detailed summaries about the content used for algorithm training.

"With today's guidelines, the Commission supports the smooth and effective application of the AI Act," EU tech chief Henna Virkkunen said in a statement.

($1 = 0.8597 euros)

(Reporting by Foo Yun Chee; Editing by Elaine Hardcastle)

Key Takeaways

  • The EU has issued guidelines for AI models with systemic risks.
  • AI models with systemic risks face tougher obligations under the EU AI Act.
  • Non-compliance could result in significant fines.
  • The guidelines aim to clarify regulatory requirements.
  • Major tech companies are affected by these regulations.

Frequently Asked Questions

What are the new guidelines for AI models with systemic risks?
The European Commission has set out guidelines to help AI models with systemic risks comply with the AI Act, which includes tougher obligations to mitigate potential threats.
When will the AI Act apply to these models?
The AI Act will apply on August 2 for AI models with systemic risks and foundation models such as those developed by Google, OpenAI, and others.
What are the requirements for AI models classified as having systemic risks?
These models must conduct evaluations, assess and mitigate risks, perform adversarial testing, report serious incidents, and ensure adequate cybersecurity measures.
What transparency requirements must general-purpose AI models meet?
General-purpose AI models will need to create technical documentation, adopt copyright policies, and provide detailed summaries about the content used for training.
What is the purpose of the EU's new guidelines for AI?
The guidelines aim to clarify compliance for businesses facing fines and address criticism regarding the regulatory burden imposed by the AI Act.
