European AI Regulation: How to comply with the AI Act in Spain 2025

Here's a practical guide, designed for business and technology teams in Spain, to move from theory to action.

July 31, 2025
By MBIT DATA School

If your company already uses AI, or is about to, this concerns you. The AI Act is not "just another rule": it is the new regulatory framework that defines how we conceive, build, purchase and operate artificial intelligence systems in the EU. Here's a practical guide, designed for business and technology teams in Spain, to move from theory to action.

What is the AI Act and why does it matter now?

The European Artificial Intelligence Regulation (Reg. (EU) 2024/1689) is the world's first comprehensive law on AI. It entered into force on August 1, 2024 and will be fully applicable on August 2, 2026, with intermediate milestones: from February 2025, the prohibitions and the AI-literacy requirements apply; from August 2025, the governance rules and the obligations for general-purpose AI models (GPAI) apply. High-risk systems embedded in regulated products have an extended transition until August 2027.

The scope is very broad: it affects providers (those who develop an AI system or place it on the market), importers/distributors and deployers (those who use AI in a professional context in the EU). There are limited exclusions (e.g., military or non-professional personal uses), but for most businesses the Regulation applies.

How risk is classified (and what each level requires of you)

The AI Act takes a risk-based approach: the greater the potential impact, the more obligations.

Unacceptable risk (prohibited)

Practices such as harmful cognitive-behavioral manipulation, "social scoring" by authorities, or the indiscriminate use of remote biometric identification in public spaces (with limited exceptions and safeguards). These prohibitions apply since February 2, 2025.

High risk

It includes, among others, AI for biometrics, critical infrastructure, education, employment/HR, essential services (banking, health), law enforcement or justice. It requires a risk management system, data quality, technical documentation, human oversight, cybersecurity, post-market monitoring, CE marking and, in many cases, registration in the EU database. The deadlines are staggered between 2026 and 2027 depending on the type of system.

Limited risk

Transparency obligations: inform people when they interact with AI, when content is synthetic (deepfakes), or when emotion recognition or biometric categorization is used in certain contexts. These obligations start in August 2025.

Minimal risk

Most general-purpose software. The Regulation encourages codes of conduct and good practices, but imposes no hard obligations.

Key dates for your roadmap

  • February 2, 2025: Prohibitions and AI literacy. Review use cases and eliminate any banned practices.
  • August 2, 2025: Governance, transparency and GPAI obligations (general-purpose models). Adjust policies, disclaimers, content labeling, and contracts with generative AI providers.
  • August 2, 2026: Full application of the AI Act (with certain exceptions for high-risk systems embedded in regulated products). Have the compliance program operational.
  • August 2, 2027: Extension for high-risk systems embedded in regulated products (CE marking and specific obligations). Plan ahead if you manufacture or integrate AI into those products.
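For roadmap planning, the milestones above can be kept as machine-checkable data. A minimal sketch (the dates come from the Regulation's timeline; the helper and label names are our own, illustrative choices):

```python
from datetime import date

# AI Act application milestones (Reg. (EU) 2024/1689 timeline)
MILESTONES = {
    date(2025, 2, 2): "Prohibitions and AI literacy apply",
    date(2025, 8, 2): "Governance, transparency and GPAI obligations apply",
    date(2026, 8, 2): "Full application of the AI Act",
    date(2027, 8, 2): "High-risk AI embedded in regulated products",
}

def milestones_in_force(today: date) -> list[str]:
    """Return the obligations already applicable on a given date."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]

print(milestones_in_force(date(2025, 9, 1)))
```

A check like this can feed an internal compliance dashboard, so each project knows which obligations already bind it.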

Current news (July 2025): despite requests for a delay from large companies, the Commission has confirmed that there will be no pause in the timeline.

Generative AI and General Purpose Models (GPAI)

If your company uses or integrates foundation models (e.g., LLMs), the AI Act has specific rules:

  • Transparency and documentation obligations for GPAI providers: technical documentation, usage limits and support for integrators.
  • Respect for copyright and publication of relevant information on the use of protected content.
  • Models with "systemic risk" (a category that can be presumed, among other technical criteria, when training compute exceeds 10²⁵ FLOPs): they require reinforced risk management, evaluations, cybersecurity and notification to the Commission. These obligations start in August 2025.
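The compute-based presumption is a simple threshold test. A sketch (the constant reflects the 10²⁵ FLOPs criterion; the function name is our own):

```python
# Presumption of "systemic risk" for a GPAI model when cumulative training
# compute exceeds 10^25 FLOPs (one criterion among others in the AI Act).
SYSTEMIC_RISK_FLOPS = 10**25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the compute-based presumption of systemic risk is triggered."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(5e25))   # True: a frontier-scale run
print(presumed_systemic_risk(1e24))   # False: below the presumption
```

Note that crossing the threshold only creates a presumption; the Commission can also designate models on other technical grounds.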

For deployers (business users), the vendor's compliance does not exempt you: you must verify the suitability of the use case, inform users when appropriate, and ensure human oversight and proportionate controls.

Penalties: how much is at stake

The fines for violating the AI Act are significant:

  • Up to €35M or 7% of worldwide turnover (whichever is higher) for prohibited practices.
  • Up to €15M or 3% for breaches of other obligations.
  • Up to €7.5M or 1.5% for providing incorrect information.
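The ceilings above follow a "higher of the two" rule, so for large companies the turnover percentage usually dominates. A quick sketch of the arithmetic (function and parameter names are our own):

```python
def fine_cap(fixed_eur: float, pct_turnover: float, worldwide_turnover_eur: float) -> float:
    """Maximum fine: the higher of the fixed amount or the turnover percentage."""
    return max(fixed_eur, pct_turnover * worldwide_turnover_eur)

# Prohibited practices, company with €1B worldwide turnover:
# 7% of turnover (~€70M) exceeds the fixed €35M, so it sets the ceiling.
print(fine_cap(35_000_000, 0.07, 1_000_000_000))
```

For an SME with, say, €100M turnover, the fixed €35M amount would be the binding ceiling instead.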

Beyond the number, the reputational and operational risk is high: removal of systems, forced audits, impact on customer trust and talent.

What is changing in Spain? Authorities and regulatory coexistence

Spain has created AESIA (the Spanish Agency for the Supervision of Artificial Intelligence), whose statute was approved by RD 729/2023. AESIA will assume market-surveillance and national-coordination functions under the AI Act framework, alongside other authorities (e.g., the AEPD for data protection) and sector regulators (CNMV/BdE, Health, etc.).

Your compliance program must fit with the rest of the regulatory stack: GDPR, NIS2 (cybersecurity), DSA/DMA (platforms) and sectoral rules (financial, health, mobility). The AI Act does not replace them: it adds to them.

90-day plan to get going

0—30 days: Inventory and risk

  1. AI inventory: list all systems/use cases (own and third-party). Include prompts, flows and the data involved.
  2. Preliminary classification: label each case as prohibited, high, limited or minimal risk. Mark "GPAI" when applicable.
  3. Cut the prohibited: verify that there are no banned practices (e.g., social scoring, indiscriminate biometric scraping).
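The inventory and the preliminary classification can live in a simple structured record from day one. A sketch, assuming a small in-house register (all field names here are illustrative, not prescribed by the Act):

```python
from dataclasses import dataclass, field

RISK_LEVELS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AIUseCase:
    """One row of the AI inventory (fields are illustrative)."""
    name: str
    owner: str                  # business area responsible
    third_party: bool           # built in-house or supplied by a vendor
    risk_level: str             # preliminary AI Act classification
    gpai: bool = False          # integrates a general-purpose model
    data_involved: list[str] = field(default_factory=list)

    def __post_init__(self):
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk_level}")

inventory = [
    AIUseCase("CV screening", "HR", third_party=True, risk_level="high"),
    AIUseCase("Marketing copy assistant", "Marketing", third_party=True,
              risk_level="limited", gpai=True),
]
print([u.name for u in inventory if u.risk_level == "high"])  # ['CV screening']
```

Even a spreadsheet works at first; the point is that every use case carries an owner, a risk label and a GPAI flag before day 30.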

31–60 days: governance and controls

  1. Governance: appoint a Head of AI and create a committee (Business, Data, Technology, Legal, Risk, HR).
  2. Responsible AI policy: minimum documentation, criteria for human oversight, explainability, security and lifecycle management.
  3. Transparency controls: prepare user notices, synthetic-content labeling, and internal guidelines for using generative AI. These come into play in August 2025.

61—90 days: suppliers and high risk

  1. Third-party management: renegotiate AI contracts to include the Declaration of Conformity, CE marking if applicable, testing, security and support.
  2. High risk: design the risk management system (data, bias, robustness, cybersecurity), technical documentation and monitoring. If your system falls under Annex III, plan for registration.
  3. GPAI: request technical documentation and copyright compliance from model providers; review usage limits and integration guidelines. (Artificial Intelligence Act EU)

Typical areas of impact on Spanish companies

Human Resources

AI in recruitment and evaluation can be high risk: you must justify the variables used, monitor bias and ensure real human oversight (not merely nominal). Train recruiters in explainability and informed decision-making.

Marketing and content

If you generate creatives or copy with AI, prepare notices when necessary and label synthetic content (deepfakes, voice, image). Adjust approval flows to include human review.

Risk, Compliance and Finance

Models for fraud detection, credit granting or risk scoring usually fall into high risk: they require traceability, data controls, explainability and CE marking when applicable.

Industry and health

If you integrate AI into regulated products (healthcare, automotive), the timeline extends to 2027: coordinate with the manufacturer and plan the conformity assessment.

Turn the AI Act into a competitive advantage

Complying with the AI Act isn't about paperwork; it's about reliability: better-documented models, more explainable decisions and users who know what to expect. Start with the inventory, decide where you want to be by August 2025 and August 2026, and build a program that lets you innovate without surprises. At MBIT School we can help you design your compliance roadmap, train your teams and evaluate critical use cases.

Recommended next step: turn it into a 90-day project. If you want, we can propose a workshop with your key areas (Business, Data, Legal and Technology) to land the plan in your company.

This article is based on official sources updated as of July 31, 2025, including the European Commission, EUR-Lex and specialized documentation. (European Digital Strategy, EUR-Lex, Whitecase.com, Artificial Intelligence Act EU, BOE, aesia.digital.gob.es, Reuters)

