
AI Act | European Commission Guidelines on Prohibited Practices (Article 5)

Written by Inspeq AI | Feb 5, 2025 4:45:46 PM

On 4th February 2025, just two days after Article 5 of the AI Act came into force, the European Commission released a 135-page report outlining official guidelines on the implementation of prohibited AI practices. The report followed a public consultation aimed at clarifying specific aspects of Article 5, to which Inspeq AI submitted several key questions, many of which were addressed in the final document. Following these clarifications, we have also developed a compliance checklist template that can be used to evaluate AI systems under Article 5 as part of your AI compliance processes.

Download your free AI Act (Article 5) Compliance Checklist

In this blog, we summarise the European Commission’s latest guidance on prohibited AI practices, highlighting the most critical clarifications.

Article 5 took effect on 2nd February 2025, imposing fines of up to €35 million or 7% of global annual turnover, whichever is higher, for breaches. The prohibitions target AI systems that are inherently harmful, manipulative, or that infringe upon fundamental rights. At this stage, organisations must ensure their AI systems comply with the regulation to avoid severe penalties. The Commission’s Guidelines on Prohibited AI Practices provide further explanations, clarifications, and practical examples beyond those included in the AI Act (Regulation (EU) 2024/1689). Below, we break down the key insights that were not explicitly detailed in the Act.


1. What Constitutes “Manipulative” AI? (Article 5(1)(a))

One of the questions Inspeq AI submitted to the public consultation sought clarity on what constitutes manipulative AI.

  • Clarification: The report provides clearer definitions of “subliminal, manipulative, or deceptive techniques,” including:
    • Subliminal messages: AI presenting stimuli beyond human perception (e.g., flashing visuals too fast to consciously see, inaudible messages).
    • Covert behavioural influence: AI nudging decisions without explicit user awareness.
    • AI-driven misdirection: Redirecting attention to prevent people from noticing certain details.
    • AI temporal manipulation: Systems that distort time perception (e.g., forcing users into compulsive scrolling habits).
  • What this means for businesses: AI-driven marketing, recommendation engines, and behavioural design in digital platforms must avoid covert or subconscious influence.

2. Social Scoring is Broader than Expected (Article 5(1)(c))

The report offered additional guidance on what social scoring entails, and it appears to be broader than what was covered in the AI Act itself. This is significant given the number of AI companies, from established firms to a vast number of startups, using LLM-based AI technology to generate employee scores, whether for performance or productivity, particularly where those scores can be considered to influence promotions.


  • Clarification: The AI Act bans social scoring, but the guidelines provide more specific examples, including:
    • Workplace ratings: AI systems tracking employee behaviour or compliance to influence promotions.
    • Financial or credit scores based on non-financial data: Using social media activity, lifestyle habits, or past interactions to determine loan eligibility.
    • Government welfare assessments: AI predicting a citizen’s eligibility for benefits based on social behaviour.

  • What this means for businesses: HR departments, credit agencies, and social analytics firms must ensure that their AI-based risk scoring does not lead to unfair discrimination. If managers or decision makers in employee promotion opportunities have access to AI generated performance or compliance scores, this could be considered a breach.

3. Emotion Recognition Has Limited Exceptions (Article 5(1)(f))

Detecting stress or safety-related emotions in high-risk environments is permitted, along with certain medical use cases; for the most part, however, emotion recognition remains a prohibited activity under the AI Act.

  • Clarification: While the Act bans AI emotion recognition in workplaces and schools, the report clarifies the exceptions:
    • Medical Use Cases: Allowed if it serves a clear medical purpose (e.g., detecting neurological disorders, mental health diagnostics).
    • Safety Monitoring: AI detecting stress or fatigue in high-risk environments (e.g., pilots, truck drivers).

  • What this means for businesses: Emotion AI startups and HR technology companies using facial expressions, voice, or physiological data will have to prove legitimate use cases under medical or safety exemptions.

4. Crime Prediction AI is Only Banned Under Certain Conditions (Article 5(1)(d))

Using AI for crime prediction is still severely restricted. The use of AI to predict individual crime risk is banned, along with certain forms of group risk profiling, such as relying solely on personal traits to predict future crime. The activities that remain permitted are still considered high risk and must adhere to the high-risk requirements, such as performance monitoring, bias mitigation, strict documentation and transparency requirements, and sufficient human oversight.


  • Clarification: The Act bans AI predicting individual crime risk, but the report clarifies exceptions:
    • Allowed: AI tools supporting human decision-makers (e.g., crime mapping based on location data).
    • Banned: AI that solely uses personal traits (e.g., personality profiling, past minor infractions) to predict future crime.

  • What this means for businesses: Security tech firms offering predictive policing solutions will have to prove human oversight and objective data sources.

5. Facial Recognition Scraping is Broadly Banned (Article 5(1)(e))


The public consultation questionnaire referred to facial recognition databases as those used for 1-1 matching. This was a significant area of contention, as the scraping of facial images into databases to train AI systems was therefore not included. However, the report clarifies that “a facial recognition database may be temporary, centralised or decentralised” and, further, that “Article 5(1)(e) does not require that the sole purpose of the database is to be used for facial recognition; it is sufficient that the database can be used for facial recognition.”

  • Clarification: The Act bans "untargeted" AI scraping of facial images to create facial recognition databases, but the report expands on what this means:
    • Explicitly banned:
      • Scraping images from social media, CCTV, public databases to build AI models.
      • Creating private facial recognition databases for commercial use.
    • Exceptions:
      • If explicit consent is obtained for each dataset.
      • If used strictly for research, without commercialisation.
      • If the database is targeted (though the definition of “targeted” is not made explicitly clear).

  • What this means for businesses: AI firms building facial recognition databases from publicly available images (e.g., Clearview AI-style systems) are engaging in a prohibited practice.

6. Remote Biometric ID is More Restricted Than Expected (Article 5(1)(h))


It is important to note that remote biometric ID refers to general surveillance rather than 1-1 authentication. Verification of a natural person, such as a biometric login to a banking app, does not fall under prohibited practices and is additionally excluded from the high-risk category.

  • Clarification: The AI Act bans real-time biometric identification (RBI) in public spaces, but the report clarifies narrow exceptions:
    • Strictly Limited Exceptions:
      • Terrorism & serious crime prevention (real-time tracking of specific suspects).
      • Searching for missing persons (e.g., child abduction cases).
    • No private sector use cases allowed.
    • Mandatory prior judicial approval before deployment.

  • What this means for businesses: Companies selling facial recognition surveillance for law enforcement will face high regulatory hurdles.

Enforcement & Penalties: Heavy Burdens for Businesses

  • Clarification:
    • €35M or 7% global turnover fines apply to ALL banned AI practices.
    • Companies must conduct ongoing compliance checks—not just one-time assessments.
    • Market Surveillance Authorities (MSAs) can seize AI products deemed illegal.

  • What this means for businesses: AI vendors need rigorous compliance audits and should build fail-safes for responsible AI governance.

 

Download your free AI Act (Article 5) Compliance Checklist

 

Review of All Key Prohibited AI Practices (Article 5)

The AI Act bans certain AI applications that pose unacceptable risks to health, safety, and fundamental rights:

  1. Harmful Manipulation & Deception (Article 5(1)(a))
    • AI systems using subliminal, deceptive, or manipulative techniques to distort user behaviour.
    • Examples:
      • AI that subtly nudges decisions through imperceptible cues.
      • AI-driven covert advertising influencing consumer behaviour.
    • Business Impact: This affects businesses deploying AI in marketing, content personalisation, or recommendation systems. Transparency will be crucial.
  2. Exploitation of Vulnerabilities (Article 5(1)(b))
    • AI that exploits vulnerabilities based on age, disability, or socio-economic factors.
    • Example: AI-driven loan approvals or insurance pricing that unfairly targets vulnerable individuals.
    • Business Impact: AI models in finance, healthcare, and consumer services must ensure fairness and non-discrimination.
  3. Social Scoring (Article 5(1)(c))
    • AI systems that evaluate individuals based on their behaviour or personal traits, leading to unfair treatment.
    • Example: A company refusing services based on a customer’s social media behaviour or financial history.
    • Business Impact: Affects AI applications in credit scoring, recruitment, and public service eligibility.
  4. Predicting Criminal Offences (Article 5(1)(d))
    • AI systems assessing a person’s likelihood of committing a crime solely based on profiling or personality traits.
    • Exception: If AI assists a human decision-maker with objective and verifiable data.
    • Business Impact: Affects companies in law enforcement tech and risk assessment. AI must be transparent and avoid discriminatory profiling.
  5. Untargeted Scraping for Facial Recognition (Article 5(1)(e))
    • Prohibited: AI systems that collect and compile facial data from public sources (e.g. social media, CCTV footage) without consent.
    • Business Impact: Tech firms, security providers, and advertisers using facial recognition must ensure compliance with data privacy laws.
  6. Emotion Recognition in Workplaces & Schools (Article 5(1)(f))
    • Banned: AI detecting human emotions in workplaces or educational settings.
    • Exception: Allowed for medical or safety reasons.
    • Business Impact: Affects HR software, recruitment tools, and classroom monitoring systems using emotional AI.
  7. Biometric Categorisation for Sensitive Attributes (Article 5(1)(g))
    • AI categorising people based on race, political opinions, religious beliefs, sexual orientation, etc.
    • Exception: Allowed for filtering biometric datasets (e.g. law enforcement investigations).
    • Business Impact: Affects security tech, border control, and personalised advertising.
  8. Real-Time Biometric Identification (RBI) in Public Spaces (Article 5(1)(h))
    • Banned: Real-time facial recognition in public spaces by law enforcement.
    • Exceptions:
      • Victim searches (e.g. human trafficking cases)
      • Preventing imminent threats (e.g. terrorism, serious crime suspects)
    • Business Impact: Affects companies providing AI surveillance, security, and policing technology.

How Businesses are Affected

  1. AI Systems Need Stronger Governance & Compliance
    • AI systems must be human-centric, transparent, and aligned with ethical standards.
    • Businesses will need AI risk assessments to ensure compliance.
    • Fines: Up to €35 million or 7% of global turnover for violations.
  2. Impact on Marketing & Customer Engagement
    • AI-driven psychological profiling and behavioural nudging must be explicitly disclosed.
    • Personalisation based on covert data collection or manipulative algorithms will be restricted.
  3. Data Privacy & Biometric AI Regulations Tighten
    • Companies using facial recognition, biometric data, or personal profiling must ensure compliance with GDPR and AI Act restrictions.
  4. Risk for AI in Recruitment, HR, and Law Enforcement
    • AI tools making hiring decisions or evaluating job performance based on emotions or psychological traits could be prohibited.
    • Predictive policing AI based on profiling will be heavily scrutinised.
  5. Facial Recognition & Surveillance Tech Will Be Heavily Regulated
    • Bans on real-time biometric identification mean businesses providing AI security tech, CCTV monitoring, and surveillance analytics must carefully define their use cases.

Final Thoughts

The AI Act’s prohibited practices create a legal and ethical framework to ensure trustworthy AI. As with the rest of the AI Act, there is an opportunity for more ethical innovation. Businesses currently leveraging AI in advertising, security, finance, law enforcement, and HR must ensure transparency, fairness, and compliance. Companies should start internal audits, legal assessments, and risk evaluations to avoid penalties.

At this stage in the process, all companies should have prepared an internal inventory of their AI systems, and completed internal documentation outlining why their systems do not fall under the AI Act Article 5 on prohibited practices.

To enable organisations to evaluate whether their AI systems meet the criteria of a prohibited practice under the EU AI Act, we have prepared a checklist template which can be used to evaluate AI systems under Article 5 as part of your AI compliance processes.
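As a rough illustration, the kind of Article 5 screening such a checklist supports can be sketched in code. This is a minimal, hypothetical sketch: the category labels, class names, and flag structure are our own shorthand for illustration, not official terminology from the Act or from our checklist template.

```python
from dataclasses import dataclass, field

# Article 5 prohibited-practice categories covered by the screen
# (labels are our own shorthand, not official wording from the Act).
ARTICLE_5_CHECKS = {
    "5(1)(a)": "Subliminal, manipulative or deceptive techniques",
    "5(1)(b)": "Exploitation of vulnerabilities (age, disability, socio-economic)",
    "5(1)(c)": "Social scoring leading to unfair treatment",
    "5(1)(d)": "Predicting individual crime risk from profiling or traits alone",
    "5(1)(e)": "Untargeted scraping of facial images for recognition databases",
    "5(1)(f)": "Emotion recognition in workplaces or education",
    "5(1)(g)": "Biometric categorisation of sensitive attributes",
    "5(1)(h)": "Real-time remote biometric identification in public spaces",
}


@dataclass
class Article5Assessment:
    system_name: str
    # Maps an article reference to True if the system engages in that practice
    flags: dict = field(default_factory=dict)

    def prohibited_practices(self) -> list:
        """Return the article references flagged as applicable."""
        return sorted(ref for ref, flagged in self.flags.items() if flagged)

    def passes_screen(self) -> bool:
        """A system passes this screen only if no prohibition applies."""
        return not self.prohibited_practices()


# Example: an HR tool scoring employees in a way that influences promotions
assessment = Article5Assessment(
    system_name="employee-productivity-scorer",
    flags={ref: False for ref in ARTICLE_5_CHECKS} | {"5(1)(c)": True},
)
print(assessment.passes_screen())          # False
print(assessment.prohibited_practices())   # ['5(1)(c)']
```

A passing screen is of course not legal advice; it only records that, on the assessor's answers, no Article 5 category was flagged, and the reasoning behind each answer should be documented alongside the result.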

 

Download your free AI Act (Article 5) Compliance Checklist


Discover Inspeq AI's platform for monitoring fairness in AI systems.

 

Reach out to our team at Inspeq AI on partners@inspeq.ai to learn about how Inspeq AI's RAI platform can help your organisation.