Posted 22 December 2025
The EU AI Act: What TechIsland Member Companies Need to Know

During a stakeholder meeting organised by the Commissioner of Communications in Cyprus, one of the national authorities responsible for implementing the EU AI Act, an official presentation focused on the practical application of the Act at national level, including:

  • governance and enforcement structures,
  • system classification,
  • obligations by role (provider, deployer, distributor),
  • specific provisions for SMEs.

This blog post summarises the key takeaways most relevant for companies, based strictly on official statements and presentation material shared during the event. It is intended for informational purposes only and does not constitute legal advice or formal guidance from national authorities.

  1. What is the AI Act (business-relevant summary)

The EU AI Act is the first comprehensive EU-wide regulatory framework governing the development, placing on the market, and professional use of AI systems.

Key points for companies:

  • Applies to AI providers, deployers (professional users), importers, distributors, and authorised representatives.
  • Follows a risk-based approach, similar to EU product safety legislation.
  • Applies to AI systems used or placed on the EU market, regardless of where they are developed.
  • Covers both private and public sector use.

The AI Act applies beyond technology companies. Organisations in any sector may fall within scope where they professionally deploy AI systems in specific use cases regulated by the Act, including certain products, services, or internal functions such as HR or access to essential services.

  2. What companies should pay attention to now

This section is intended to help companies identify whether the AI Act may be relevant to them. It does not establish legal obligations.

Companies should consider:

  • Whether AI is used in:
    • products or services,
    • internal tools,
    • HR, recruitment, or workforce management,
    • analytics, profiling, automation, or decision-support systems.
  • Use of third-party AI solutions, including:
    • APIs,
    • SaaS platforms,
    • white-label or branded AI services.
  • Whether any use cases may fall into high-risk areas, such as:
    • employment and HR,
    • access to financial services,
    • health or education,
    • migration, asylum, or public-sector functions.
  • Their role under the AI Act:
    • provider,
    • deployer (professional user),
    • importer, distributor, product manufacturer or authorised representative.
  • Transparency obligations, particularly for:
    • chatbots,
    • generative AI,
    • AI-generated or AI-altered content (e.g. deepfakes).

  3. AI system classification & self-assessment
Self-assessment as the starting point

AI system classification starts with self-assessment by the company.

Companies assess whether an AI system may fall into one of the following categories (see the illustrative sketch after this list):

  • prohibited practices,
  • high-risk systems,
  • limited-risk systems (mainly transparency obligations).
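
Purely as an illustration of how this self-assessment might be organised as an internal first pass, here is a hypothetical, heavily simplified sketch in Python. The keyword lists and category mapping below are assumptions for illustration only; legally, classification depends on the system's intended use against the Act's annexes, and the official tool described next remains the proper starting point.

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk system"
    LIMITED_RISK = "limited-risk (transparency obligations)"
    NEEDS_REVIEW = "needs further assessment"

# Hypothetical, simplified triage terms. Real classification depends on
# the system's intended use and the Act's annexes, not on keywords.
PROHIBITED_TERMS = {"social scoring", "manipulative"}
HIGH_RISK_TERMS = {"employment", "recruitment", "education",
                   "creditworthiness", "biometric", "migration"}
TRANSPARENCY_TERMS = {"chatbot", "generative", "deepfake"}

def first_pass_triage(intended_use: str) -> RiskCategory:
    """Rough first-pass self-assessment aid (illustrative only)."""
    use = intended_use.lower()
    if any(term in use for term in PROHIBITED_TERMS):
        return RiskCategory.PROHIBITED
    if any(term in use for term in HIGH_RISK_TERMS):
        return RiskCategory.HIGH_RISK
    if any(term in use for term in TRANSPARENCY_TERMS):
        return RiskCategory.LIMITED_RISK
    return RiskCategory.NEEDS_REVIEW

print(first_pass_triage("chatbot for customer support"))
# RiskCategory.LIMITED_RISK -> check Article 50 transparency duties
```
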
Official EU assessment tool

The EU AI Act Compliance Checker, a tool designed to help clarify the obligations and requirements of the AI Act, is available on the European Commission’s website: https://ai-act-service-desk.ec.europa.eu/en/eu-ai-act-compliance-checker

Confirmed characteristics:

  • Supports initial self-classification.
  • Acts as a helpdesk and orientation tool.
  • Classification depends primarily on the intended use of the system.
When further assessment is required

Where self-classification indicates:

  • Annex I or Annex III high-risk systems, or
  • systems subject to third-party conformity assessment,

additional steps may include:

  • conformity assessment procedures,
  • internal control (self-assessment of conformity) or involvement of a notified body (third-party conformity assessment), depending on the category.


  4. AI system categories (confirmed)
Prohibited practices

Certain AI uses are prohibited outright, including specific forms of:

  • social scoring,
  • manipulative or exploitative practices,
  • real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions).
High-risk AI systems

High-risk systems include AI used in areas such as:

  • employment and workforce management,
  • education and vocational training,
  • access to essential services (e.g. creditworthiness),
  • biometric identification,
  • migration, asylum, and border control,
  • justice and democratic processes.
Limited-risk systems (Article 50)

Limited-risk systems include AI that may manipulate or mislead users, such as:

  • chatbots,
  • deepfakes and synthetic media.

These systems are subject primarily to transparency obligations, not full conformity assessment.
The scope of this category is reviewed periodically at EU level.
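
As a rough sketch of what a chatbot transparency measure could look like in practice, consider the snippet below. It is a hypothetical example: the disclosure wording, the first-message placement, and the function name are assumptions, and real implementations should follow official Article 50 guidance.

```python
# Hypothetical sketch: attaching an AI-interaction disclosure to a
# chatbot's first reply, in the spirit of Article 50 transparency
# obligations. Wording and mechanism are illustrative assumptions.

AI_DISCLOSURE = "You are interacting with an AI system."

def wrap_chatbot_reply(reply: str, first_message: bool) -> str:
    """Prepend an AI disclosure to the first reply of a conversation."""
    if first_message:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(wrap_chatbot_reply("Hello! How can I help?", first_message=True))
```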

  5. High-risk AI systems: general requirements (Articles 9–15)

For high-risk AI systems, the AI Act sets explicit general requirements, including:

  • Risk management system (Art. 9).
  • Data and data governance requirements for training, validation, and testing data (Art. 10).
  • Technical documentation / technical file demonstrating compliance (Art. 11 and Annex IV).
  • Automatic logging and traceability throughout the system’s lifecycle (Art. 12; see the sketch below).
  • Transparency and user information (Art. 13).
  • Human oversight mechanisms (Art. 14).
  • Accuracy, robustness, and cybersecurity (Art. 15).

These requirements apply before the system is placed on the market or put into service.
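
To make the logging requirement (Art. 12) more concrete, here is a minimal sketch of structured, timestamped event logging for an AI system. The field names and the system identifier are illustrative assumptions; the actual content, format, and scope of logs should follow the Act and any applicable harmonised standards.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch of structured, timestamped event logging in the spirit
# of Art. 12 (automatic logging and traceability). Field names are
# illustrative assumptions, not prescribed by the Act.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system_audit")

def log_ai_event(system_id: str, event: str, details: dict) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,
        "details": details,
    }
    logger.info(json.dumps(record))

log_ai_event(
    system_id="cv-screening-v2",  # hypothetical system name
    event="inference",
    details={"input_ref": "candidate-123", "outcome": "shortlisted"},
)
```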

  6. Obligations by role (confirmed)
Providers of high-risk AI systems

Providers must, among other obligations:

  • Comply with all general high-risk requirements.
  • Implement a quality management system (Art. 17).
  • Retain technical and compliance documentation for 10 years (Art. 18).
  • Complete conformity assessment before market placement or use (Art. 43).
  • Retain logs for at least 6 months (Art. 19).
  • Take corrective action and inform authorities in case of non-compliance (Art. 20).
  • Appoint an authorised EU representative if established outside the EU (Art. 22).
  • Issue an EU declaration of conformity and apply CE marking (Arts. 47–48).
  • Register themselves and their system in the EU database (Art. 49).
  • Monitor systems post-deployment and report serious incidents.
Deployers / professional users (Article 26)

Deployers must:

  • Follow instructions for use with appropriate technical and organizational measures.
  • Assign human oversight to suitably qualified persons.
  • Ensure input data is relevant and appropriate.
  • Monitor system performance and risks.
  • Retain logs for at least 6 months (see the retention sketch after this list).
  • Inform affected persons where required.
  • Conduct fundamental-rights impact assessments in certain public-sector contexts (Art. 27).
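
As a small illustration of the six-month log-retention floor mentioned in the list above, a deletion job should never remove records newer than that minimum. The sketch below is a hypothetical helper; the 183-day cutoff is a conservative assumption for "at least 6 months", and longer retention may be required by other obligations.

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch: deployers must keep logs for at least 6 months,
# so a cleanup job should only ever delete records older than that.
MIN_RETENTION = timedelta(days=183)  # conservative ~6 months

def eligible_for_deletion(log_created_at: datetime) -> bool:
    """True only if the record is older than the minimum retention."""
    return datetime.now(timezone.utc) - log_created_at > MIN_RETENTION

old_log = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(eligible_for_deletion(old_log))  # True once 6+ months have passed
```
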
Importers, distributors, authorised representatives (Arts. 22–24)

These actors must:

  • Verify provider compliance (technical file, declaration of conformity, CE marking).
  • Refrain from placing systems on the market if non-compliance is identified.
  • Inform providers and authorities where issues arise.
  • Cooperate with supervisory authorities by providing documentation and information.


  7. Specific provisions for SMEs (IMPORTANT)

The AI Act includes explicit measures to reduce administrative burden for SMEs, confirmed during the presentation:

  • Simplified technical documentation requirements for SMEs (Art. 11).
  • Priority access to regulatory sandboxes and controlled testing environments (Art. 58).
  • Reduced or near-exempt fees for conformity assessment and sandbox participation (Arts. 58, 62).
  • Proportionate enforcement by supervisory authorities, considering company size and capacity (Art. 99).
  • Facilitated participation in standardisation processes (Art. 62).
  • Dedicated guidance, support, and training channels for SMEs (Art. 62).

These provisions are part of the adopted Regulation and apply independently of any future simplification initiatives.

  8. Digital Omnibus: what is under discussion

The Digital Omnibus is an initiative currently under preparation by the European Commission, linked to broader efforts to improve the competitiveness and readability of EU digital legislation.

The Digital Omnibus is expected to focus on:

  • Reducing administrative burden, particularly for SMEs and smaller market actors.
  • Adjusting implementation timelines, especially where:
    • technical standards are not yet available,
    • conformity assessment frameworks are not fully operational.
  • Sequencing obligations, so that requirements enter into force in a more practical order rather than simultaneously.
  • Aligning the AI Act with related digital legislation, to avoid overlapping or duplicative compliance steps.

Grace periods and timelines 

  • The Digital Omnibus proposal includes extended or clarified grace periods for certain categories of high-risk AI systems.
  • Indicative timelines mentioned were:
    • up to 2 December 2027 for some high-risk systems under Annex III, and
    • up to 2 August 2028 for certain systems linked to Annex I,
      depending on the availability of harmonised technical standards.
  • These timelines are explicitly linked to whether the relevant standards are ready; if standards become available earlier, implementation dates may be advanced by Commission decision.
Other clarifications
  • The Digital Omnibus is not intended to remove the AI Act or its core obligations.
  • It does not change the risk-based structure of the regulation.
  • It does not eliminate high-risk categories or transparency obligations.
  • Any changes remain under discussion and are not final.
Relationship with SME provisions
  • The Digital Omnibus is separate from the SME-specific measures already included in the AI Act (such as simplified documentation, regulatory sandboxes, and proportionate enforcement).
  • Existing SME provisions remain applicable regardless of the outcome of the Digital Omnibus discussions.
What companies should do at this stage
  • Companies should monitor official updates from the European Commission and national authorities.
  • Companies should not assume that obligations are suspended or cancelled.
  • The AI Act should be treated as adopted law, with possible adjustments to timing and sequencing, not substance.


  9. How the AI Act will be applied in Cyprus
Governance and authorities

The Council of Ministers approved the designation of the following national competent authorities to supervise and enforce the provisions of the AI Act:

  • The Office of the Commissioner of Communications:
    • acts as Market Surveillance Authority,
    • serves as Single Point of Contact for the AI Act with the public and other counterparts at Member State and Union level,
    • acts as Notifying Authority.
  • The Commissioner for Personal Data Protection:
    • supervises AI systems for several Annex III categories.
  • The Deputy Ministry of Research, Innovation & Digital Policy:
    • sets national AI policy and strategy,
    • represents Cyprus in the EU AI Board.


Moreover, the Council of Ministers approved the establishment of a National AI Task Force.

The National AI Task Force:

  • provides strategic and advisory input,
  • is headed by the Chief Scientist for Research, Innovation and Technology and comprises people with specialised knowledge and experience in the field of AI.

Cyprus is scheduled to assume the Chair of the AI Board at the start of its Presidency of the Council of the EU on 1 January 2026.

The AI Board helps coordinate and ensure cooperation between EU Member States, aiming for consistent implementation and application of the AI Act across the Union. It helps coordinate the national competent authorities responsible for enforcing the Regulation, and it collects and shares technical and regulatory expertise as well as best practices. This role positions Cyprus to help coordinate discussions among Member States during the early implementation of the AI Act, supporting consistent governance, enforcement, and cross-border cooperation.

Upcoming event: on 27 January, the Commissioner of Communications, together with CYS (the Cyprus Organisation for Standardisation), will host a dedicated event focused on:

  • high-risk AI systems,
  • enforcement and interpretation,
  • Q&A with EU and national experts.

Link to the event: https://www.linkedin.com/posts/office-of-the-commissioner-of-communications_savethedate-aiact-eu-activity-7406309223401443329-zYMm?utm_source=social_share_send&utm_medium=member_desktop_web&rcm=ACoAAD4JZ-kBgC75Gkjr0ZyW5yDCn10XL0_XBvA 

TechIsland’s role

TechIsland’s role is informational and facilitative:

  • Share verified and authoritative updates.
  • Aggregate member questions.
  • Inform members about official events and guidance.
  • Facilitate dialogue with institutions where appropriate.

TechIsland does not provide legal advice.

Final Note

The AI Act is a product-safety-style regulation for AI, with phased implementation and evolving technical standards.
For companies, the immediate priority is understanding roles, use cases, and risk categories, and staying aligned with official guidance as implementation progresses.

Who to contact for clarifications:

Commissioner of Communications Office:

info@ai.ee.cy or 22693000