The EU has introduced new regulatory frameworks to govern artificial intelligence (AI) broadly and within pharmaceutical manufacturing. In 2024 the EU adopted Regulation (EU) 2024/1689 (the “AI Act”), the first comprehensive AI law, which entered into force on 1 August 2024[1][2]. In parallel, the European Commission has drafted new Good Manufacturing Practice (GMP) guidelines (EudraLex Vol. 4) to address AI. In July 2025 a consultation opened on revised GMP Chapter 4 and Annex 11, plus an entirely new Annex 22 on AI[3][4]. These reforms aim to support innovation while ensuring that AI use in drug manufacturing meets GMP quality and safety standards[4][5].
Key features of the AI Act include strict bans on unacceptable AI uses and a risk-based framework for high-risk systems[1]. In healthcare, AI-based medical software is explicitly high-risk and must include robust risk management, high-quality training data, transparency and human oversight[1]. The AI Act’s obligations – akin to CE‑mark conformity – will be phased in over 2024–2027[2]. Meanwhile, the new GMP Annex 22 sets detailed requirements for AI in manufacturing. It applies to all computerised systems using AI models in “critical applications” that directly impact patient safety, product quality or data integrity[6]. Annex 22 requires clear definition of each model’s intended use, comprehensive validation and performance metrics, and strict control of training and test data[5][7]. It mandates continuous oversight – e.g. formal change control, performance monitoring and human review procedures – mirroring core GMP principles[5][8].
The combined effect is a robust regulatory framework: the AI Act sets EU‑wide rules and standards for any AI system, while Annex 22 supplements existing GMP (especially Annex 11 on computerised systems) with AI‑specific controls. Pharmaceutical manufacturers must therefore integrate AI governance into their quality systems. The coming years will see phased implementation: companies should inventory AI usage, align their validation and data‑integrity practices with the new rules, train staff in AI risk management, and participate in industry consultations. Overall, these regulations aim to foster trustworthy AI in pharma manufacturing without compromising product quality or patient safety[9][1].
Regulatory Background
EU Artificial Intelligence Act (Regulation EU 2024/1689) – The AI Act was published in the Official Journal on 12 July 2024 and entered into force on 1 August 2024[1][2]. It is a regulation (directly applicable EU law) establishing a risk-based regime. Certain “unacceptable” uses (e.g. manipulative social scoring) are banned outright, while other applications are classified as high-risk if they can significantly affect health, safety or fundamental rights[1]. Notably, AI software for medical purposes (including diagnostics or clinical decision support) is expressly high-risk. High-risk systems must comply with stringent requirements: quality data governance, detailed technical documentation, transparency to users, human oversight and ongoing risk management[1]. Providers must demonstrate conformity (analogous to CE marking) through risk assessment and certification processes. The Act also addresses so-called “general-purpose” AI (GPAI) and sets up an EU AI Office to coordinate implementation. Key milestones: the AI Act is “fully applicable” two years after entry into force, with most requirements in force by August 2026, while rules on AI systems embedded in regulated products (e.g. devices) apply after 36 months[2].
EU GMP Annex 22 (AI) – In parallel, EMA and the European Commission have updated GMP guidelines. On 7 July 2025 a stakeholder consultation opened on EudraLex Vol. 4 (GMP), revising draft Chapter 4 (documentation), draft Annex 11 (computerised systems) and introducing a new draft Annex 22 on Artificial Intelligence[3][4]. These draft guidelines were prepared by EMA’s GMDP Inspectors working group and PIC/S to “support innovation in manufacturing” and ensure global harmonisation[4]. The consultation (open until October 2025) provides official text for Annex 22, which is poised to become part of EU GMP guidance. This reflects regulators’ view that rapid advances in digital tech and AI require specific GMP rules[4]. Annex 22 is entirely new (there is no current GMP annex covering AI), and it supplements, rather than replaces, Annex 11. It explicitly acknowledges that while AI can bring efficiency to pharma operations, controls must be in place so that “AI systems…do not pose unacceptable risks” to quality or safety[5][9].
Key Provisions
EU AI Act (2024/1689) – The AI Act lays down harmonised rules for all sectors, including pharma. Its key provisions include:
- Risk Classification: AI systems are categorized into banned, high-risk, and low/limited-risk classes. In healthcare/pharma, high-risk applications include AI in medical devices and software for medical purposes[1]. This category triggers compliance requirements (see below). AI used purely for internal process optimisation (e.g. non-critical data analytics) may not automatically be listed as high-risk, but manufacturers should assess risk to health/safety.
- High-Risk Requirements: Providers of high-risk AI must implement a Risk Management System for health/safety, use high-quality and representative training datasets, maintain detailed technical documentation, and ensure robustness against errors. They must also provide clear user information, warnings of limitations, and “appropriate human oversight” to mitigate risks[1]. Importantly, high-risk AI must undergo conformity assessment (by notified bodies) before market introduction.
- Transparency Obligations: Certain AI (e.g. chatbots, emotion recognition) has specific transparency rules (labels, data origin) to prevent misleading users. The Act also encourages development of Codes of Practice for AI.
- Timeline: Most AI Act rules apply from August 2026 (two years post-entry into force)[2]. For example, obligations related to general-purpose AI models began after 12 months; rules for AI in “regulated products” (medical devices, etc.) apply after 36 months[2]. A voluntary “AI Pact” invites early compliance.
EU GMP Annex 22 (Artificial Intelligence) – The draft Annex 22 provides GMP‑specific rules for AI/ML in manufacturing. Its main elements are:
- Scope: Annex 22 applies to all computerised systems in active-substance or product manufacture where AI models are used in critical applications impacting patient safety, product quality or data integrity[6]. It covers AI/ML models obtained via data training, whether in‑house or supplied by a vendor[10]. Crucially, Annex 22 only applies to static models (pre-trained, non-adaptive) that produce a deterministic output for a given input[11]. AI systems that continuously learn or adapt during operation are explicitly excluded – dynamic learning models and generative AI (e.g. LLMs) “should not be used in critical GMP applications”[11][12]. If generative AI tools are used for non-critical tasks (e.g. drafting text), a qualified person must remain responsible as a “human-in-the-loop.”[12]
- Governance Principles: Annex 22 reinforces core GMP principles. A multidisciplinary team (including process SMEs, QA, IT, data scientists) must oversee AI projects[13]. All model development, training, validation and testing must be documented regardless of who performs it[14]. Quality Risk Management (QRM) is mandated: AI activities must be managed according to risk to safety, quality and data integrity[15]. These principles mirror Annex 11’s focus on lifecycle controls but are tailored to AI specifics[13][14].
- Intended Use and Requirements: For each AI model, a detailed intended-use statement is required[16]. This description must specify the exact task (e.g. defect classification), describe the input data domain (including common and rare variations), and identify any known limitations or biases[16]. The model’s training data and algorithm choices should align with this intended use.
- Acceptance Criteria: Annex 22 requires quantitative performance criteria. Suitable test metrics (accuracy, sensitivity/specificity, F1 score, confusion matrix, etc.) must be defined for the task[17]. Acceptance criteria for those metrics must be pre‑specified and approved by a process SME[7]. Critically, the model’s performance must be at least as good as that of the process it replaces (i.e. “no decrease” in capability)[7]; a metric-scoring sketch follows this list. This ensures the AI does not degrade established quality.
- Test Data Controls: Test/validation datasets must be carefully managed. Test data should be representative of the full operational “sample space” (all subgroups, edge cases and variations of input)[17]. Selection criteria and sampling rationale must be documented[17]. The test dataset must be large enough to yield statistically reliable metrics, and labels must be independently verified for correctness (e.g. via expert review or lab tests)[17].
- Data Independence: Annex 22 emphasises that no data leakage may occur: test data must be independent of training/validation data[18]. Technical and procedural controls (access restrictions, audit trails) must ensure that data used for final testing were never seen during model development[18]. If test data are split off early, developers who train the model must have no access to them. All uses of test data (when and how many times) must be logged[18]; a fingerprinting sketch after this list illustrates one way to enforce this.
- Model Testing: Before deployment, a formal test plan is required[19]. The plan (approved by QA/SMEs) should detail the intended use, test data, metrics, scripts, and acceptance criteria[19]. Testing must verify the model’s ability to generalize to new data (over‑/underfitting checks)[20]. Every deviation or failure in testing must be recorded, investigated and justified[21]. All test records and results must be archived per GMP retention rules.
- Explainability and Transparency: Annex 22 uniquely mandates explainability. During testing, the model should record which input features contribute to each decision (e.g. using attribution techniques like SHAP or heat maps)[22]. These records allow QA to verify that decisions rely on scientifically relevant factors[22]. The model must also log a confidence score for each prediction[23]. If confidence is low, the system should be configured to flag the decision as “undecided” rather than outputting uncertain results[23] (see the abstention sketch after this list).
- Operational Controls: Once in use, AI systems require ongoing oversight akin to other GMP equipment. Annex 22 requires strict change control: any modification to the model, its software environment or input materials must be documented and re‑evaluated for re‑testing[8]. Configuration control (access and version management) must prevent unauthorized changes[8]. The system’s performance (as defined by the chosen metrics) must be regularly monitored to detect drift or degradation[24]. Likewise, the input data stream must be monitored to ensure it remains within the validated domain; metrics or alarms should trigger if new data fall outside expected ranges[25] (a drift-monitoring sketch follows this list). In summary, Annex 22 embeds the AI model lifecycle within the GMP quality system (change control, audits, periodic review)[8][24].
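To make the acceptance-criteria step concrete, here is a minimal metric-scoring sketch in Python, assuming a binary pass/defect task. The thresholds, the baseline figure and names such as `ACCEPTANCE_CRITERIA` are hypothetical illustrations, not values from the draft annex.

```python
# Minimal sketch: scoring a static model against pre-specified acceptance
# criteria (hypothetical thresholds; real criteria must be approved by a
# process SME per draft Annex 22).
from sklearn.metrics import (accuracy_score, recall_score,
                             f1_score, confusion_matrix)

# Hypothetical, pre-approved thresholds. The "no decrease" rule means the
# model must match or beat the process it replaces (BASELINE_ACCURACY).
ACCEPTANCE_CRITERIA = {"accuracy": 0.98, "sensitivity": 0.95, "f1": 0.96}
BASELINE_ACCURACY = 0.97  # performance of the process being replaced

def evaluate(y_true, y_pred):
    """Compute the agreed test metrics and compare them to the criteria."""
    results = {
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": recall_score(y_true, y_pred),  # recall on defect class
        "f1": f1_score(y_true, y_pred),
    }
    passed = all(results[m] >= t for m, t in ACCEPTANCE_CRITERIA.items())
    passed = passed and results["accuracy"] >= BASELINE_ACCURACY
    # The confusion matrix is archived with the test records for QA review.
    return results, confusion_matrix(y_true, y_pred), passed
```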
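The data-independence controls can also be supported technically. The fingerprinting sketch below shows one common approach (an assumption, not something the draft prescribes): hash every record, assert that the test set is disjoint from the training data, and append an entry to a usage log each time the test set is touched.

```python
# Minimal sketch: verifying test/training disjointness with content hashes
# and logging each use of the test set. The record and log formats are
# illustrative, not prescribed by the draft annex.
import hashlib, json, datetime

def fingerprint(record: bytes) -> str:
    """Stable content hash of a raw data record (e.g. an image file)."""
    return hashlib.sha256(record).hexdigest()

def assert_independent(train_records, test_records):
    """Fail hard if any test record also appears in the training data."""
    train_hashes = {fingerprint(r) for r in train_records}
    leaked = [fingerprint(r) for r in test_records
              if fingerprint(r) in train_hashes]
    if leaked:
        raise ValueError(f"Data leakage: {len(leaked)} test records seen in training")

def log_test_data_use(log_path: str, purpose: str):
    """Append an audit entry each time the test set is used (when and why)."""
    entry = {"timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
             "purpose": purpose}
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```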
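For the confidence-score requirement, the abstention sketch below flags low-confidence predictions as “undecided” for human review instead of releasing an uncertain result; the threshold value is a hypothetical placeholder that would be set and approved during validation.

```python
# Minimal sketch: abstaining on low-confidence predictions so they are
# routed to human review rather than reported as uncertain results.
import numpy as np

CONFIDENCE_THRESHOLD = 0.90  # hypothetical; fixed and approved at validation

def classify_with_abstention(probabilities: np.ndarray):
    """Return per-sample decisions, abstaining when confidence is low.

    `probabilities` is an (n_samples, n_classes) array of model scores."""
    confidence = probabilities.max(axis=1)
    decisions = probabilities.argmax(axis=1).astype(object)
    decisions[confidence < CONFIDENCE_THRESHOLD] = "undecided"  # human review
    return decisions, confidence
```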
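Finally, input-domain monitoring can be implemented with standard drift statistics. The drift-monitoring sketch below uses the Population Stability Index (PSI), a common choice that the draft does not mandate; the 0.2 alarm threshold is a rule-of-thumb assumption.

```python
# Minimal sketch: monitoring an input feature for drift outside the
# validated domain using the Population Stability Index (PSI).
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """PSI between the validation-time distribution and live input data."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf           # cover the full real line
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    o_frac = np.histogram(observed, edges)[0] / len(observed)
    e_frac = np.clip(e_frac, 1e-6, None)            # avoid log(0)
    o_frac = np.clip(o_frac, 1e-6, None)
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

def drift_alarm(expected, observed, threshold: float = 0.2) -> bool:
    """True when live inputs have drifted enough to warrant investigation."""
    return psi(np.asarray(expected), np.asarray(observed)) > threshold
```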
Implications for Pharmaceutical Manufacturing and Quality Assurance
The AI Act and draft Annex 22 together signal a paradigm shift for pharmaceutical GMP. Companies must now treat AI/ML systems as regulated process components, subject to formal qualification and oversight. Key implications include:
- Inventory and Risk Assessment: Manufacturers should inventory all current and planned AI uses (e.g. process control, visual inspection, predictive maintenance). Any application touching quality or safety may fall under Annex 22 or the AI Act. Even if not explicitly listed as “high-risk” by the AI Act, these systems must meet robust risk-management standards[1][15]. Firms should perform quality-risk assessments (consistent with ICH Q9) to justify AI deployment.
- Validation and Documentation: Existing validation paradigms (per Annex 11) must expand to cover AI’s new dimensions. Procedures (SOPs) need updating to include model development logs, dataset curation records, algorithm vetting, and explainability analyses. The principle that “nothing should be black‑box” is intensified: QA must have access to model logic (via feature attribution), not just outcomes[22]. Training and test data should be archived and managed under data integrity controls. Batch release criteria may need revision to incorporate AI validation certificates.
- Data Integrity Emphasis: GMP requires that electronic records be attributable, legible, contemporaneous, original and accurate (ALCOA). AI brings new data streams (model outputs, logs, feature importance scores) that must be captured and secured. Annex 22’s call for audit trails on model changes and test data usage[18][8] reinforces overall data-integrity demands. Companies must ensure that any AI‑generated result used for decision-making is traceable and explainable (a tamper-evident audit-trail sketch follows this list).
- Quality-by-Design for AI: Like other processes, AI systems should follow Quality-by-Design (QbD). Defining the model’s intended use and performance up-front[16] mirrors defining a product’s CQAs. Acceptance criteria “no worse than the original process”[7] align with target product profile thinking. Change control for AI (Section 10) aligns with regulatory change management (Annex 11.30); it ensures that any retraining or update triggers re-validation[8].
- Exclusion of “Black-Box” AI: By disallowing adaptive/self-learning models in critical manufacturing, Annex 22 sets a clear boundary. This means current R&D uses of deep learning or “cloud AI” in predictive tasks must either be frozen before production use or accompanied by human oversight. In practice, LLMs and generative AI cannot automatically adjust processes without review. For QA, this means established human review processes remain important.
- Integration with GMP Architecture: Annex 22 sits alongside Annex 11, Chapter 4, and the Computer System Validation framework. Many concepts overlap (e.g. supplier oversight, IT security, documentation requirements[4][14]), but Annex 22 drills down on AI-specific needs. For example, whereas Annex 11 expects “retrospective validation” if changes occur, Annex 22 mandates pre-specified test plans and re-evaluation when data drift occurs. Quality units will need expertise in AI/ML to audit these systems effectively.
- Regulatory Submissions and Inspections: As regulators integrate these rules, firms may be asked to submit AI validation documentation during inspections or license applications. The European Medicines Agency (EMA) and national authorities will expect evidence that AI tools used (whether for manufacturing control or analysis) meet Annex 22 criteria. Non-compliance could lead to inspection observations or demands for corrective action.
- Parallel Compliance: Firms that use AI in manufacturing may also be subject to the AI Act’s market requirements. For example, an AI vendor selling a predictive maintenance tool may need to CE-mark it as a medical device or high-risk product. Pharma companies must coordinate with vendors to ensure any AI component bears the correct conformity assessment.
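One way to meet these traceability expectations, offered here as an illustrative technique rather than anything the texts require, is a hash-chained audit log in which each model-change entry commits to the previous one, so any edit or deletion is detectable:

```python
# Minimal sketch: a tamper-evident, hash-chained audit trail for model
# changes, supporting the ALCOA expectations discussed above. The entry
# fields are illustrative, not a prescribed record format.
import hashlib, json, datetime

def append_change(log: list, who: str, change: str, model_version: str) -> dict:
    """Append one audit entry whose hash chains to the previous entry."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "who": who,                    # attributable
        "change": change,              # e.g. "retrained on new batch data"
        "model_version": model_version,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edited or deleted entry breaks the chain."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "entry_hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev_hash"] != prev or e["entry_hash"] != expected:
            return False
        prev = e["entry_hash"]
    return True
```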
In sum, these developments elevate AI to a first-class category of GMP-critical technology. They reinforce core GMP goals (patient safety, product quality, data integrity) in the AI context[9][15]. While imposing new documentation and technical demands, the rules also bring clarity: static AI models can be integrated responsibly under structured validation, whereas unchecked “black-box” AI is not acceptable in GMP-critical loops.
Way Forward
Pharmaceutical companies should take proactive steps to adapt to the AI Act and Annex 22:
- Conduct an AI Impact Assessment. Inventory all AI/ML applications (existing or planned) in R&D and manufacturing. Classify them by intended use and risk. Identify any systems subject to Annex 22 (critical GMP use) or likely “high-risk” under the AI Act (e.g. clinical decision support). This establishes a compliance roadmap.
- Develop Cross-Functional Governance. Establish an AI governance team involving QA, IT, data science, and process experts. Define clear roles: the team should oversee AI lifecycle from supplier selection to validation and change control[13]. Update SOPs to incorporate AI requirements (intended-use statements, test plans, data management policies). Ensure training so personnel understand AI-related principles (data bias, overfitting, explainability).
- Implement Annex 22 Validation Framework. For each AI application in scope, draft detailed validation plans per Annex 22 guidelines. Explicitly document intended use, acceptance criteria (performance metrics) and testing protocols[7][17]. Reserve and label training vs. test data with strict separation controls[18]. Adopt modern ML validation tools (e.g. “digital twins” of data, explainability software) as needed. A structured validation-plan sketch follows this list.
- Strengthen Data Integrity Controls. Extend existing data governance to cover AI datasets and outputs. Apply ALCOA principles to all AI records (training sets, model versions, logs). Ensure test results and model decision traces are retained in GMP-quality systems. Leverage audit trails and version control so every model update is tracked[8].
- Engage with Suppliers and Regulators. Communicate Annex 22 and AI Act requirements to third-party AI vendors. Require documented assurances (e.g. technical documentation, conformity certificates) as part of supplier qualification. Participate in industry working groups and regulatory consultations (e.g. PIC/S, EMA) to stay informed on final guidance. Prepare for inspectors by maintaining a dossier of AI validation evidence.
- Pilot and Monitor Implementation. Begin with pilot projects to apply Annex 22 processes on a smaller scale (e.g. a single ML-based lab test or equipment monitoring algorithm). Use lessons learned to refine procedures. Set up continuous monitoring of deployed models (as Annex 22 demands) to detect drift early[24]. Integrate monitoring alerts with quality surveillance systems.
- Plan for the AI Act Timeline. Align internal timelines with EU enforcement: by August 2026 most high-risk AI requirements must be met[2]. If using general-purpose AI models, note that their obligations began to apply in August 2025[2]. Prepare for potential CE‑marking of AI-enabled devices. Establish data protection measures compatible with the AI Act’s emphasis on safety and fundamental rights.
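As a starting point for the inventory and validation-planning steps above, a structured validation-plan record keeps the intended-use statement, input domain, limitations and acceptance metrics together and machine-checkable. All field names and values below are illustrative assumptions, not taken from Annex 22:

```python
# Minimal sketch: one structured record per AI application, tying the
# intended-use statement to its metrics and risk classification.
# Field names and values are illustrative, not from the draft annex.
from dataclasses import dataclass

@dataclass
class AIValidationPlan:
    system_name: str
    intended_use: str            # exact task, e.g. "classify tablet defects"
    input_domain: str            # expected input variations, incl. rare cases
    known_limitations: list[str]
    metrics: dict[str, float]    # metric name -> pre-approved threshold
    gmp_critical: bool           # in scope of draft Annex 22?
    ai_act_high_risk: bool       # likely high-risk under Reg. 2024/1689?

plan = AIValidationPlan(
    system_name="VisualInspect-7 (hypothetical)",
    intended_use="Classify filled-vial images as pass/defect",
    input_domain="Vial images from lines 1-3, incl. glare and partial fill",
    known_limitations=["not validated for amber glass vials"],
    metrics={"accuracy": 0.98, "sensitivity": 0.95},
    gmp_critical=True,
    ai_act_high_risk=False,
)
```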
By adopting these measures, pharma companies can turn compliance into a strategic advantage. Embedding rigorous AI controls will not only meet legal mandates, but also enhance process understanding, reduce variability, and ultimately safeguard patient safety – the core GMP objective[9][15].
Respond to the consultation
If you wish to participate in the targeted consultation and you are a member of a stakeholders’ organisation, please contact your organisation to submit your comments.
If you wish to participate and you are not a member of such an organisation, comments must be submitted via the EU Survey tool, using the specific table provided for each section of the Chapter/Annexes guidelines.
Sources: Official EU regulatory documents and guidance (EU AI Act [Reg. 2024/1689], European Commission press releases and draft EudraLex guidelines)[1][2][4][5][6][13][7][18][8][24].