Europe’s AI regulatory framework has evolved from draft to reality – and now, CEOs and boards must act. The EU AI Act is no longer theoretical; some obligations are in effect, others loom large. Here’s what companies need to know and do as compliance becomes a strategic imperative.
The Staggered Rollout: Act, Apply, Enforce
The AI Act, Regulation (EU) 2024/1689, came into force on 1 August 2024. But it’s being implemented in phases:
2 February 2025: Ban on “unacceptable-risk” AI, such as manipulative systems, social scoring or real-time biometric surveillance.
2 August 2025: Obligations for General-Purpose AI (GPAI) – transparency, risk documentation, copyright checks; a Code of Practice is expected to guide implementation.
2 August 2026: High-risk systems (e.g., medical, biometric, transport, employment) face full compliance.
2 August 2027: Extended deadline for providers of legacy GPAI models to comply.
That phased approach is generous, but it demands sustained planning over the next two years.
What’s Already Law – And Non-Negotiable
1. Banned Practices (since Feb 2025)
Any AI that manipulates vulnerable groups, engages in social scoring or real-time biometric monitoring in public is prohibited. Companies must audit existing systems and immediately remove banned functionality. This applies whether deploying or sourcing AI tools.
2. Mandatory AI Literacy (since Feb 2025)
Under Article 4, all providers and deployers of AI must ensure staff possess adequate AI literacy, including understanding risks, legal obligations and operational implications. While initial breaches won’t trigger fines, poor training may lead to liability if AI misuse causes harm. Take it seriously – as a governance issue, not a checkbox.

Coming Up: GPAI & Transparency (Aug 2025)
From 2 August 2025, companies providing general-purpose AI models – such as GPT‑4, Bard, Claude and LLaMA – must adhere to stricter rules:
Documentation: Providers must document training data, risk assessments, and governance processes
Transparency: Users must be told they’re interacting with AI; compliance with copyright law is required
Risk classification: GPAI systems deemed “systemic risk” must undergo incident reporting, cybersecurity measures and potentially independent audits
Code of Practice: The EU has produced a voluntary Code to guide GPAI providers on compliance; signing offers legal clarity, while non-signatories face uncertainty
Major players – OpenAI, Google, Meta – are already reviewing the Code, yet some are lobbying for delayed implementation, citing complexity. Expect commercial pressure through 2025.
High-Risk AI Systems: Full EU AI Act Compliance in View (Aug 2026–27)

From 2 August 2026, high-risk AI applications must satisfy full obligations: pre-market conformity assessments, documentation, human oversight, record-keeping and post-market monitoring.
By August 2027, under the EU AI Act, providers of GPAI models pre-dating Aug 2025 must also comply if they remain in EU circulation.
What CEOs Must Do Today
1. Map your AI landscape
Inventory all AI: Are any systems banned? Which are high-risk? Which are GPAI? A cross-functional team must classify, tag and assess every tool.
2. Build AI literacy programmes
Training must be mandatory and contextualised by role. It’s not optional; it’s legally required. Lack of it can increase liability if an AI breach occurs.
3. Prepare for transparency
By Aug 2025, users of GPAI systems must know they are interacting with AI, and content provenance and copyright obligations must be addressed. Update user interfaces, privacy notices, legal agreements and internal policy accordingly.
4. Risk assess and document now
Documentation can’t wait. Start risk assessments, governance procedures, copyright checks and incident monitoring logs. The voluntary Code of Practice is a smart blueprint to follow.
5. Designate AI governance & oversight
Appoint a senior executive sponsor for AI compliance. Boards should receive quarterly updates, including risk reviews, training progress and regulatory horizon scanning.
Balancing Innovation with Compliance
Many executives worry that the AI Act may inhibit innovation. Airbus, BNP Paribas and dozens of other signatories have urged delaying enforcement, warning that complexity could hamper competitiveness.
It’s a valid concern, but regulation also brings trust. Europe’s strategic ambition is to lead by shaping “responsible AI.” As LBS’s Ekaterina Abramova recently argued, avoiding short-sighted pressure is crucial to preventing “public good” erosion.
For boards, the AI Act offers both constraints and clarity, if treated as a framework rather than a roadblock.
Final Reflections for Decision-Makers
The AI Act is here – and it matters now. Boards should treat it as an operational and reputational mandate, not future noise.
Write compliance into roadmaps: Training, audits, governance and deployment milestones
Plan budgets accordingly: For compliance tools, training, audits and legal advice
Engage tech partners: Ensure systems align with transparency, oversight and risk classification needs
Monitor regulatory evolution: National AI sandboxes launch by August 2026; guidance on high-risk AI arrives in February 2026
Ultimately, compliant AI will enhance trust and operational resilience. Boards that navigate this carefully now will shape the future of sustainable AI in Europe, not scramble to catch up after the fact. At North Atlantic, we’re here to help you deploy compliant AI, whether it’s bespoke development or our off-the-shelf solutions.
Victor A. Lausas
Chief Executive Officer
Subscribe to North Atlantic’s email newsletter and get your free copy of my eBook,
Artificial Intelligence Made Unlocked. 👉 https://www.northatlantic.fi/contact/
Discover Europe’s best free AI education platform, NORAI Connect, start learning AI or level up your skills with free AI courses and future-proof your AI knowledge. 👉 https://www.norai.fi/