EU AI Act for HR Tech
Annex III(4) makes you high-risk. FRIA before launch. We ship it.
ATS, interview-scoring, performance-review, and workforce-monitoring AI are explicitly high-risk under Annex III(4). Deployers in scope of Article 27 (public bodies and private entities providing public services) must complete a Fundamental Rights Impact Assessment before first use. If your platform doesn't help your customers produce one, you lose deals.
What HR tech needs to ship
- Provider obligations — instructions for use, technical documentation (Annex IV), risk management system (Art. 9), data governance evidence (Art. 10), human oversight design (Art. 14), conformity assessment (Art. 43), CE marking (Art. 48), and registration in the EU database (Arts. 49 + 71).
- Deployer support — give your customers a FRIA template + bias dashboard + DPIA bridge so they can comply with Article 27 (FRIA) and Article 26(9) (DPIA).
- Worker consultation — in Germany, Mitbestimmung (co-determination under § 87(1) No. 6 BetrVG) means works councils must agree before workforce-monitoring AI goes live. Document the consultation.
- GDPR Articles 22 + 35 — protection against solely automated decisions, plus a DPIA. These stack on top of the Article 27 FRIA.
Frequently asked
Is HR/recruitment AI high-risk?
Annex III(4) explicitly lists: 'AI systems intended to be used (a) for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates' AND '(b) to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, or to monitor and evaluate the performance and behaviour of persons in such relationships.' That covers ATSes, interview scoring, performance review AI, and most workforce-management AI.
What does the deployer have to do?
Article 26 deployer obligations apply when you USE the system (not just provide it). Key: (a) follow the provider's instructions for use, (b) assign human oversight to a competent natural person, (c) ensure input data is relevant and sufficiently representative, (d) complete the Article 27 FRIA before first deployment where it applies, (e) carry out a GDPR Article 35 DPIA where personal data is processed (Art. 26(9)), (f) inform workers, worker representatives, and works councils.
What's a FRIA in HR context?
A Fundamental Rights Impact Assessment under Article 27. For HR systems it should cover: rights engaged (non-discrimination, dignity, data protection, effective remedy, freedom to choose an occupation), categories of affected persons (candidates, employees, vulnerable groups), specific harms by context, mitigation measures, residual-risk justification, stakeholder consultation (works council involvement mandatory in Germany under Mitbestimmung), and a monitoring schedule.
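The components above can be sketched as a structured template with a completeness check — a hypothetical schema for illustration only, not an official Article 27 format; the field names and helper are our own invention:

```python
# Hypothetical FRIA template fields for an HR deployment (illustrative
# schema only — not an official Article 27 format).
FRIA_SECTIONS = [
    "rights_engaged",           # e.g. non-discrimination, data protection
    "affected_persons",         # candidates, employees, vulnerable groups
    "specific_harms",           # harms specific to the deployment context
    "mitigations",              # measures reducing each identified harm
    "residual_risk_rationale",  # why remaining risk is acceptable
    "stakeholder_consultation", # works council where required
    "monitoring_schedule",      # how and when the assessment is revisited
]

def missing_sections(draft: dict) -> list:
    """Return the template sections absent or empty in a draft FRIA."""
    return [s for s in FRIA_SECTIONS if not draft.get(s)]

# A partial draft: only two of the seven sections are filled in.
draft = {"rights_engaged": ["non-discrimination"], "mitigations": ["human review"]}
gaps = missing_sections(draft)
```

A generator built on a checklist like this can refuse to export until `gaps` is empty, which is one simple way to make the assessment auditable.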
When does this take effect?
Under the Digital Omnibus delay, Annex III high-risk obligations apply from 2 December 2027 (pushed back from the original 2 August 2026). BUT Article 4 (AI literacy) has been binding since 2 February 2025, and the Article 5 prohibitions (including exploitation of vulnerabilities) already apply to recruitment AI. GDPR Article 22 automated-decision protections and Article 35 DPIAs also bite immediately for AI-assisted hiring and firing.
What does MEOK ship for HR tech?
FRIA generator (free at /scorecard), Article 10 bias-detection (£299/mo) for selection-rate auditing across protected groups, Article 14 oversight templates, /audit-prep-bundle (£4,950) wrapping Annex IV + FRIA + Article 9 RMS in 14 days.
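A minimal sketch of the kind of selection-rate check a bias dashboard automates, assuming simple selected/applicant counts per group. The four-fifths (80%) rule shown here is a common adverse-impact screening heuristic, not the Article 10 standard itself, and the group names and numbers are illustrative:

```python
# Four-fifths (80%) rule screen on selection rates per protected group.
def selection_rates(outcomes):
    """outcomes: {group: (selected, applicants)} -> {group: rate}"""
    return {g: selected / applicants for g, (selected, applicants) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (a standard adverse-impact screen)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Illustrative counts: (selected, applicants) per group.
counts = {
    "group_a": (48, 120),  # 40% selected
    "group_b": (30, 100),  # 30% selected -> ratio 0.75, below 0.8
}
flags = four_fifths_flags(counts)
```

In production the screen would run per role, per pipeline stage, and over rolling time windows, since aggregate rates can mask stage-level disparities.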
Make EU AI Act a sales accelerator, not a blocker
White-label our FRIA generator + bias dashboard so your customers stop saying "wait, we need to check with legal" and start saying "yes, we'll buy."
MEOK AI Labs · CSOAI LTD · UK Companies House 16939677