August 2, 2026 is the date every company operating an AI chatbot in Europe needs to have circled on the calendar. That is when the remaining provisions of the EU AI Act (Regulation 2024/1689) become fully applicable — including the transparency obligations, high-risk system requirements, and the full penalty regime. After that date, regulators can issue fines of up to €35 million or 7% of global annual turnover for violations.
This is not a theoretical discussion. The prohibited practices provisions under Article 5 are already enforceable since February 2025. The clock is running, and there are fewer than five months left to address the rest.
This article provides a concrete, article-by-article compliance checklist for enterprise chatbots. It covers every provision that applies to AI systems interacting with end users, organized into 15 actionable steps. Whether your chatbot is a customer service assistant, a healthcare triage tool, or a financial advisor, this checklist tells you exactly what to verify and what to fix.
Before You Start: Classify Your Chatbot
The EU AI Act applies different obligations depending on your chatbot's risk classification. Before running through the checklist, you need to determine where your system sits:
- Limited risk (most customer service chatbots): Only transparency obligations apply (Art. 50). You still need to comply with the prohibited practices ban (Art. 5).
- High risk (chatbots in healthcare, finance, HR, legal, education, critical infrastructure): The full suite of obligations applies — Articles 9 through 15, plus Art. 50, plus Art. 5.
If your chatbot provides medical advice, handles insurance claims, screens job applicants, assesses creditworthiness, or makes decisions that materially affect individuals, it is almost certainly high-risk. When in doubt, treat it as high-risk. The cost of over-compliance is negligible compared to the cost of under-compliance.
Article 5 — Prohibited AI Practices
Art. 5 — Prohibited AI Practices
Multa: hasta Up to €35M or 7% of global annual turnover
Article 5 is the heaviest provision in the entire regulation. It bans certain AI practices outright, with the highest tier of fines. These prohibitions have been enforceable since February 1, 2025 — meaning your chatbot must already comply today.
Step 1: Verify No Subliminal or Manipulative Techniques
What the law requires: Article 5(1)(a) prohibits AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting a person's behavior in a manner that causes or is reasonably likely to cause significant harm.
What to check:
- Does your chatbot use urgency tactics, false scarcity, or dark patterns to push users toward purchases or decisions?
- Are there conversation flows designed to exploit emotional states (fear, anxiety, FOMO)?
- Does the system personalize persuasion techniques based on user vulnerabilities?
- Are A/B tests on conversation flows designed to maximize conversion at the expense of informed decision-making?
Action items:
- [ ] Audit all conversation flows for manipulative design patterns
- [ ] Review sales-oriented chatbot scripts for dark patterns
- [ ] Document the design rationale for any persuasion mechanisms
- [ ] Remove or redesign any flow that could be characterized as subliminal or deceptive manipulation
Step 2: Verify No Exploitation of Vulnerable Groups
What the law requires: Article 5(1)(b) prohibits AI systems that exploit vulnerabilities of specific groups of persons due to their age, disability, or social or economic situation.
What to check:
- Does your chatbot serve minors, elderly users, or persons with disabilities?
- Are there safeguards preventing the system from adapting its behavior to exploit these vulnerabilities?
- Does the chatbot detect and appropriately handle interactions with vulnerable users?
Action items:
- [ ] Implement age-appropriate response handling if minors may interact with the system
- [ ] Review language and tone for potential exploitation of vulnerable groups
- [ ] Document safeguards for interactions with vulnerable populations
Article 9 — Risk Management System
Art. 9 — Risk Management System
Multa: hasta Up to €15M or 3% of global annual turnover
Article 9 applies to high-risk AI systems. It mandates a continuous, iterative risk management process throughout the entire lifecycle of the AI system. This is not a one-time assessment — it is a living process.
Step 3: Establish a Documented Risk Management Process
What the law requires: A risk management system must be established, implemented, documented, and maintained. It must include identification and analysis of known and reasonably foreseeable risks, estimation and evaluation of those risks, and adoption of appropriate risk management measures.
What to check:
- Do you have a formal risk assessment document for your chatbot?
- Does it cover technical risks (prompt injection, data leakage, hallucination), operational risks (downtime, incorrect information), and legal risks (GDPR violations, liability)?
- Is the risk assessment updated when the system is modified?
- Are residual risks documented and communicated to deployers?
Action items:
- [ ] Create a risk management framework document specific to your chatbot
- [ ] Identify and categorize all known risks (technical, operational, legal, ethical)
- [ ] Define risk mitigation measures for each identified risk
- [ ] Establish a review cadence (minimum quarterly) for updating the risk assessment
- [ ] Assign a responsible person or team for ongoing risk management
Step 4: Test Against Known Attack Vectors
What the law requires: Article 9(6) requires testing to be performed against "preliminarily defined metrics and probabilistic thresholds." For chatbots, this means systematic testing against known vulnerability categories.
What to check:
- Has the chatbot been tested against prompt injection attacks?
- Has it been tested for data leakage and training data extraction?
- Has jailbreaking resistance been evaluated?
- Have excessive agency and hallucination risks been assessed?
Action items:
- [ ] Run an automated security scan covering OWASP LLM Top 10 categories
- [ ] Document test results with specific metrics (pass rate, severity breakdown)
- [ ] Define acceptable risk thresholds for each vulnerability category
- [ ] Remediate critical and high-severity findings before deployment
Article 10 — Data and Data Governance
Art. 10 — Data and Data Governance
Multa: hasta Up to €15M or 3% of global annual turnover
Article 10 applies to high-risk AI systems and covers the quality, relevance, and governance of training, validation, and testing datasets.
Step 5: Document Training Data Governance
What the law requires: Training, validation, and testing datasets must be subject to data governance and management practices. These practices must address, among other things: the design choices, data collection processes, data preparation operations (annotation, labeling, cleaning, enrichment), the formulation of assumptions, an assessment of availability and quantity of data, consideration of possible biases, and identification of data gaps.
What to check:
- If you fine-tuned the model: is the training data documented (source, size, characteristics)?
- If you use retrieval-augmented generation (RAG): are the knowledge base contents curated and reviewed?
- Are potential biases in the data identified and mitigated?
- Is there a process for updating and validating the data over time?
Action items:
- [ ] Create a data card documenting all datasets used (training, fine-tuning, RAG knowledge base)
- [ ] Conduct a bias assessment on data sources
- [ ] Implement data quality controls and validation procedures
- [ ] Document data provenance and lineage for all information the chatbot can access
- [ ] Establish a process for regular data review and updates
Step 6: Ensure Data Representativeness
What the law requires: Article 10(3) specifies that datasets must be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose.
What to check:
- Does the training or knowledge base data represent the full range of scenarios the chatbot will encounter?
- Are edge cases and minority groups adequately represented?
- Is the data current and reflective of the operational context?
Action items:
- [ ] Map the intended use cases against the data coverage
- [ ] Identify gaps in data representation (languages, demographics, edge cases)
- [ ] Implement a process to address identified data gaps
Article 13 — Transparency and Information to Deployers
Art. 13 — Transparency and Provision of Information to Deployers
Multa: hasta Up to €15M or 3% of global annual turnover
Article 13 applies to high-risk AI systems and requires that the system be sufficiently transparent to enable deployers to interpret output and use it appropriately.
Step 7: Provide Technical Documentation to Deployers
What the law requires: High-risk AI systems must be accompanied by instructions for use that include concise, complete, correct, and clear information that is relevant, accessible, and comprehensible to deployers.
What to check:
- Is there documentation explaining how the chatbot generates responses?
- Are the system's capabilities and limitations clearly stated?
- Are known failure modes documented?
- Does the documentation specify the intended purpose and foreseeable misuse scenarios?
Action items:
- [ ] Create or update technical documentation per Annex IV requirements
- [ ] Document the system's intended purpose, capabilities, and known limitations
- [ ] Provide clear instructions on how to interpret chatbot outputs
- [ ] Document the level of accuracy, robustness, and cybersecurity achieved in testing
- [ ] Include information about training data characteristics relevant to understanding system behavior
Step 8: Ensure Interpretability of Outputs
What the law requires: The system must be designed so that its operation is sufficiently transparent to enable deployers to interpret the output and use it appropriately.
What to check:
- Can users and operators understand why the chatbot gave a specific response?
- Are confidence indicators provided where appropriate?
- Is it clear when the chatbot is generating vs. retrieving information?
Action items:
- [ ] Implement output attribution (source citations) where feasible
- [ ] Add confidence indicators for factual claims when technically possible
- [ ] Document the response generation pipeline for internal stakeholders
Article 14 — Human Oversight
Art. 14 — Human Oversight
Multa: hasta Up to €15M or 3% of global annual turnover
Article 14 requires that high-risk AI systems be designed to allow effective human oversight during their period of use. This is one of the most operationally demanding provisions for chatbot deployments.
Step 9: Implement Escalation Mechanisms
What the law requires: Human oversight measures must enable the individuals tasked with oversight to properly understand the system's capabilities and limitations, monitor its operation, remain aware of automation bias, and be able to correctly interpret the system's output.
What to check:
- Is there a clear escalation path from automated chatbot to human agent?
- Are there defined triggers for automatic escalation (topic sensitivity, user frustration, repeated failures)?
- Can a human operator intervene in or override any chatbot decision at any time?
- Is there a kill switch to shut down the chatbot entirely?
Action items:
- [ ] Map all conversation scenarios that require human escalation
- [ ] Implement automatic escalation triggers (e.g., medical emergencies, legal questions, complaints)
- [ ] Create a human-in-the-loop workflow for high-stakes decisions
- [ ] Deploy a manual override and emergency shutdown mechanism
- [ ] Document the escalation procedures and train support teams
Step 10: Define Oversight Responsibilities
What the law requires: Deployers must assign human oversight to natural persons who have the necessary competence, training, and authority.
What to check:
- Is there a designated person or team responsible for chatbot oversight?
- Do they have the technical competence to understand the system's behavior?
- Do they have the authority to override or shut down the system?
- Are oversight responsibilities documented and communicated?
Action items:
- [ ] Designate named individuals responsible for chatbot oversight
- [ ] Provide training on the AI system's operation, capabilities, and limitations
- [ ] Document oversight roles, responsibilities, and authority levels
- [ ] Establish logging and alerting for oversight-relevant events
Article 15 — Accuracy, Robustness, and Cybersecurity
Art. 15 — Accuracy, Robustness, and Cybersecurity
Multa: hasta Up to €15M or 3% of global annual turnover
Article 15 is where technical security testing meets legal compliance. It requires that high-risk AI systems achieve an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle. For chatbots, this is the article that directly connects to security auditing.
Step 11: Test for Adversarial Robustness
What the law requires: Article 15(4) states that high-risk AI systems must be resilient against attempts by unauthorized third parties to alter their use, outputs, or performance by exploiting system vulnerabilities.
What to check:
- Has the chatbot been tested against prompt injection (direct and indirect)?
- Is it resistant to jailbreaking attempts?
- Does it maintain guardrails under adversarial conditions?
- Has it been tested against the 5 critical chatbot vulnerabilities?
Action items:
- [ ] Run adversarial testing covering all OWASP LLM Top 10 categories
- [ ] Test specifically for prompt injection, jailbreaking, and guardrail bypass
- [ ] Document test results with pass/fail metrics by category
- [ ] Remediate all critical and high-severity findings
- [ ] Establish recurring adversarial testing (minimum quarterly)
Step 12: Protect Against Data Leakage
What the law requires: Article 15(4) also addresses cybersecurity measures that must be appropriate to the circumstances and risks, including measures to prevent and control attacks that may attempt to manipulate the training dataset ("data poisoning"), or pre-trained components, inputs designed to cause the model to make errors ("adversarial examples" or "model evasion"), or confidentiality attacks.
What to check:
- Can the chatbot be tricked into revealing system prompts?
- Can it be manipulated into disclosing training data, PII, or internal documents?
- Are API keys, database credentials, or internal URLs protected from extraction?
- Is the RAG knowledge base properly scoped and access-controlled?
Action items:
- [ ] Test for system prompt extraction attacks
- [ ] Test for training data extraction and memorization leaks
- [ ] Verify that sensitive information is not accessible through the chatbot
- [ ] Implement input/output filtering for sensitive data patterns
- [ ] Document all cybersecurity measures implemented
Step 13: Measure and Document Accuracy
What the law requires: Article 15(1) requires that high-risk AI systems achieve an appropriate level of accuracy in light of their intended purpose. Article 15(2) requires that accuracy levels and relevant metrics be declared in the instructions for use.
What to check:
- What is the chatbot's factual accuracy rate?
- What is the hallucination rate?
- Are accuracy metrics measured and documented?
- Are users informed about the system's accuracy limitations?
Action items:
- [ ] Establish accuracy benchmarks for the chatbot's specific domain
- [ ] Measure hallucination rate through systematic evaluation
- [ ] Document accuracy metrics in the technical documentation
- [ ] Implement disclaimers where the chatbot provides factual information
- [ ] Set up continuous accuracy monitoring
Article 50 — Transparency Obligations
Art. 50 — Transparency Obligations for Certain AI Systems
Multa: hasta Up to €15M or 3% of global annual turnover
Article 50 is the universal obligation. It applies to every chatbot, regardless of risk classification. If your AI system interacts directly with natural persons, this article applies to you.
Step 14: Implement Clear AI Disclosure
What the law requires: Article 50(1) mandates that providers of AI systems intended to interact directly with natural persons must design and develop the system in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the circumstances and context of use.
What to check:
- Does the chatbot clearly state it is an AI system at the start of every interaction?
- Is the disclosure visible, unambiguous, and provided before the user shares any personal information?
- Is the disclosure available in the user's language?
- Does the disclosure persist throughout the conversation (not just at the start)?
Action items:
- [ ] Add a clear, visible AI disclosure at the beginning of every chat session
- [ ] Include the disclosure in the chat interface design (not buried in terms of service)
- [ ] Ensure the disclosure is presented in all languages the chatbot supports
- [ ] Test that the disclosure renders correctly across all platforms (web, mobile, API)
Step 15: Disclose AI-Generated Content
What the law requires: Article 50(2) requires that providers of AI systems that generate synthetic audio, image, video, or text content ensure that the outputs are marked in a machine-readable format and detectable as artificially generated.
What to check:
- Are chatbot responses marked as AI-generated in a machine-readable format?
- If the chatbot generates documents, reports, or emails: are they labeled as AI-generated?
- Are metadata standards (C2PA, watermarking) implemented where applicable?
Action items:
- [ ] Implement machine-readable AI content labeling in chatbot responses
- [ ] Add visible labels to any AI-generated documents, reports, or communications
- [ ] Review downstream uses of chatbot outputs for compliance with content marking requirements
Compliance Timeline: What to Do When
With fewer than five months until the August 2, 2026 deadline, here is a prioritized timeline for completing the checklist:
Immediately (Already Enforceable)
Article 5 has been in force since February 2025. If you have not yet verified your chatbot against the prohibited practices provisions, this is urgent.
- Complete Steps 1-2 (manipulative techniques and vulnerable groups)
- Document your Article 5 compliance assessment
March-April 2026 (Months 1-2)
Focus on assessment and documentation.
- Classify your chatbot's risk level (limited vs. high-risk)
- Run a comprehensive security audit (Steps 4, 11, 12, 13)
- Begin risk management documentation (Step 3)
- Begin data governance documentation (Steps 5-6)
May-June 2026 (Months 3-4)
Focus on implementation and remediation.
- Implement AI disclosure across all interfaces (Steps 14-15)
- Implement human oversight mechanisms (Steps 9-10)
- Remediate security findings from the audit
- Complete technical documentation (Steps 7-8)
July 2026 (Month 5)
Focus on verification and evidence.
- Run a final compliance verification audit
- Compile all documentation into a compliance package
- Brief executive leadership on compliance posture
- Prepare for potential regulatory inquiries
How Ercel Automates the Checklist
Going through 15 compliance steps manually is time-consuming, error-prone, and expensive. Traditional consulting engagements for AI compliance assessments start at €16,000 and take weeks to complete.
Ercel automates the technical verification steps in this checklist in under five minutes:
Security testing (Steps 4, 11, 12, 13): Ercel runs 46+ automated attack tests against your chatbot covering all OWASP LLM Top 10 categories — prompt injection, jailbreaking, data leakage, excessive agency, and more. Each finding is mapped to the specific EU AI Act article it violates.
Risk quantification (Step 3): Every finding includes the applicable fine tier, the specific article reference, and the financial exposure calculation based on your company's revenue. You get a concrete number, not a vague risk level.
Evidence generation: The audit produces a detailed compliance report with test evidence, severity classifications, and remediation steps. This document serves as evidence of proactive compliance efforts — a mitigating factor under Art. 99(3) when regulators assess penalties.
Remediation guidance (all steps): Each finding comes with specific, actionable remediation steps. No generic advice — concrete technical instructions your development team can implement.
Continuous monitoring: EU AI Act compliance is not a one-time event. Article 9 requires continuous risk management. Ercel's monitoring plans run recurring scans so you catch regressions before regulators do.
The free assessment covers the most critical technical checks — prompt injection, data leakage, and jailbreaking resistance. It takes five minutes and produces a baseline compliance score with specific EU AI Act article mappings.
The Cost of Waiting
Every week you delay is a week closer to full enforcement without a compliance baseline. The regulation explicitly rewards proactive compliance through mitigating factors in penalty calculations (Art. 99(3)). Companies that can demonstrate they began compliance work before enforcement — with documented assessments, remediation efforts, and ongoing monitoring — will face materially lower penalties if issues arise.
The inverse is also true. Regulators view negligence more harshly than good-faith violations. Having no compliance documentation at all when enforcement begins is a signal of negligence, not oversight.
Start with the free assessment. It takes five minutes, covers the most critical technical compliance checks, and gives you the data you need to prioritize the remaining steps in this checklist. Five months is enough time — but only if you start now.