Use this checklist to review your infrastructure for transparency and audit-ready documentation.

Is your Voice AI ready for the EU AI Act?

EU AI Act checklist for enterprise Voice AI

In a few minutes, review 18 critical controls for transparency, security, and forensic evidence for your Voice AI.

I. Prohibited AI practices (Art. 5)

These practices are strictly banned in the EU. A “No” here usually means stop the project immediately.

Manipulation ban: Can you ensure your AI systems (e.g. voicebots) do not use subliminal techniques to manipulate customers or patients in a way that causes them harm?
Protection of vulnerable groups: Are your AI systems prevented from exploiting vulnerabilities of callers based on age, disability, or specific social or economic circumstances?
Social scoring: Do you refrain from AI-driven scoring of customers’ social behaviour that leads to detrimental or unfavourable treatment in unrelated contexts?
Workplace emotion recognition: If you use AI to infer call-centre agents’ emotions, is it limited to medical or safety purposes (e.g. fatigue detection in critical infrastructure), not general performance surveillance?

II. High-risk AI classification (Art. 6 & Annex III)

If you use any of these systems, strict obligations apply.

Emergency triage (e.g. utilities / rail): If you use AI to assess or classify emergency calls or prioritise rescue dispatch, do you treat the system as high-risk and meet all Chapter III requirements?
People management: Do you use AI for hiring, promotion, termination, or performance monitoring of contact-centre staff? If yes, is the system registered and documented as high-risk AI?
Creditworthiness: If your contact centre (e.g. for e‑commerce or banks) uses AI to assess customer creditworthiness, does the system meet high-risk requirements? (AI used solely to detect financial fraud is exempt.)
Public benefits (e.g. government hotlines): Does your AI decide access to essential public services (social assistance, housing benefits)? If yes, is it managed as a high-risk system?

III. Transparency obligations (Art. 50)

This applies to almost all modern contact centres.

AI disclosure: Are customers or patients clearly informed, at the start of the interaction, that they are dealing with a voicebot or chatbot rather than a human (unless this is obvious from the context)?
Emotion inference (customers): If you use sentiment or emotion analysis on callers, are people informed about this biometric processing?
Deepfake labelling: If you use synthetic audio or video (e.g. avatars) that could be mistaken for real staff, are outputs clearly marked as artificially generated?
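The disclosure duty above is usually satisfied at the very first turn of the call. A minimal sketch of one way to wire it into a voicebot greeting (the wording, company name, and function names are illustrative, not taken from the regulation):

```python
# Illustrative sketch: lead every voicebot conversation with an
# explicit AI disclosure, as required where the AI nature of the
# interaction is not obvious. All names and text are examples.

AI_DISCLOSURE = (
    "Please note: you are speaking with an AI-powered voice assistant. "
    "Say 'agent' at any time to reach a human."
)

def build_greeting(company: str) -> str:
    """Return the opening prompt, with the AI disclosure up front."""
    return f"Welcome to {company}. {AI_DISCLOSURE}"

print(build_greeting("Example Corp"))
```

In practice the disclosure text would be reviewed by legal counsel and localised per market; the point is that it is emitted unconditionally before any other dialogue logic runs.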

IV. Deployer duties for high-risk AI (Art. 26 & 27)

Important for organisations buying AI solutions from vendors.

Use as instructed: Have you implemented technical and organisational measures so the AI is used strictly according to the provider’s instructions?
Human oversight: Are staff who supervise the AI sufficiently qualified, trained, and empowered to disregard outputs or stop the system when needed?
Data quality: Are inputs fed to the system (e.g. customer requests) relevant and sufficiently representative for the intended purpose?
Logging: Do you retain the automatically generated logs of the AI system for at least six months (or longer where other EU or national law requires it) to ensure traceability?
Fundamental rights impact assessment (FRIA): If you are a public body or provide essential private services (e.g. banks / insurers), did you assess fundamental-rights impacts before deploying high-risk AI?
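The logging duty above translates into a retention policy: logs must not be purged before the minimum window has elapsed. A minimal sketch, assuming file-based logs and approximating six months as 183 days (directory layout, file naming, and the function name are illustrative):

```python
# Illustrative sketch: prune AI system logs only after the minimum
# retention window has passed. Paths, naming, and the 183-day figure
# are assumptions for this example, not prescribed by the Act.
import time
from pathlib import Path

RETENTION_DAYS = 183  # "at least six months"; keep longer if other law requires


def prune_logs(log_dir, now=None):
    """Delete *.log files older than the retention window.

    Returns the names of the files removed, so the pruning run
    itself can be recorded for audit purposes.
    """
    now = time.time() if now is None else now
    cutoff = now - RETENTION_DAYS * 86400
    removed = []
    for path in Path(log_dir).glob("*.log"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return sorted(removed)
```

A real deployment would typically ship logs to immutable storage rather than delete in place; the sketch only shows the retention check itself.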

V. Cross-cutting governance & AI literacy (Art. 4)

AI literacy: Have you ensured that staff, from agents to management, have a sufficient level of AI literacy to understand the systems they work with, their benefits, and their risks?
Registration: Where applicable, is your high-risk AI system registered in the EU database?

Your compliance score (share of “Yes” answers): ___ / 18

Regulation text and reference PDFs

Download the full regulation text. For a deeper review, talk to our team.