Report of the CIOMS Working Group XIV on Artificial Intelligence in Pharmacovigilance

This report addresses the rapidly emerging, cross-disciplinary field of integrating Artificial Intelligence (AI) into Pharmacovigilance (PV).

The report’s core objective is to establish and promote guiding principles and a general framework of good practices for the development and use of AI in PV, rather than to provide technical guidance. The framework is intended for stakeholders such as regulators, industry, academic researchers, clinicians, patients, and technology vendors.

The report is structured around seven core guiding principles:

Core Guiding Principles

  1. Risk-based Approach (Chapter 3): Integrating AI into PV processes must account for the potential inaccuracies and variability of AI systems and their corresponding impact on individual and societal safety.
    • The level of risk dictates the intensity of oversight, which depends on whether the decision is high-stakes and on whether the AI operates autonomously, without human checks, or with human-computer interaction.
  2. Human Oversight (Chapter 4): This is essential for optimizing AI performance, increasing trustworthiness, and maintaining accountability.
    • The extent of human oversight should be risk-based.
    • Models include “human-in-the-loop” (HITL), where humans participate in every decision cycle, and “human-on-the-loop” (HOTL), where the machine operates autonomously but is monitored by a human.
    • The increased use of AI will transform traditional PV roles, requiring new competencies and appropriate change management.
  3. Validity & Robustness (Chapter 5): Performance must be continually and critically appraised, demonstrating acceptable and reliable results for the intended use under realistic conditions.
    • Evaluation must ensure sufficient representation of diverse data types (e.g., spontaneous reports, clinical trials, literature) to detect biases and ensure generalizability.
    • For rare event recognition (like safety signals), specialized test-set enrichment strategies may be needed.
  4. Transparency (Chapter 6): Disclosing when and how AI solutions are used is vital for building trust among all stakeholders.
    • This includes being transparent about the model’s architecture, inputs, outputs, and the nature of human-computer interaction.
    • Explainability is particularly relevant for “black box” models, providing plausible hypotheses about the internal decision pathways.
  5. Data Privacy (Chapter 7): This is a crucial ethical principle, especially considering the vast potential of Large Language Models (LLMs) to build large, linked databases, which increases the risk of patient re-identification.
    • Existing regulatory compliance procedures may need re-evaluation due to the heightened privacy risks posed by Generative AI (GenAI).
  6. Fairness & Equity (Chapter 8): The use of AI must support fairness and equity, avoiding the propagation or amplification of harmful biases and ensuring that certain subpopulations are not underserved.
    • The quality of the training and performance evaluation data sets must be scrutinized for adequate representation to mitigate the risk of bias.
  7. Governance & Accountability (Chapter 9): Robust governance ensures that AI solutions are used safely, responsibly, ethically, and in compliance with all legal and regulatory mandates.
    • This requires clearly defined roles and responsibilities for all stakeholders.
    • A governance framework grid (Table 9) is provided to structure documentation throughout the AI system’s lifecycle (specifications, development, pre-deployment, post-deployment, and routine use).
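The risk-based interplay between the first two principles can be sketched in code. The following Python sketch is illustrative only and is not from the report: the `AiOutput` record, the `route_decision` function, and its confidence threshold are all hypothetical, showing one way a PV workflow might decide between human-in-the-loop review and human-on-the-loop monitoring.

```python
from dataclasses import dataclass

@dataclass
class AiOutput:
    """Hypothetical output of a PV classifier (e.g. case-seriousness assessment)."""
    label: str
    confidence: float  # model's self-reported confidence, in [0, 1]

def route_decision(output: AiOutput, high_stakes: bool,
                   confidence_floor: float = 0.95) -> str:
    """Risk-based routing: high-stakes or low-confidence outputs go to a
    human reviewer (human-in-the-loop, HITL); the rest are accepted
    automatically and only monitored in aggregate (human-on-the-loop, HOTL)."""
    if high_stakes or output.confidence < confidence_floor:
        return "human_review"   # HITL: a person confirms each such decision
    return "auto_accept"        # HOTL: periodic human monitoring only

# A serious-case classification is always escalated to a human,
# regardless of the model's confidence.
print(route_decision(AiOutput("serious", 0.99), high_stakes=True))
print(route_decision(AiOutput("non-serious", 0.99), high_stakes=False))
```

In practice the threshold, and whether a decision counts as high-stakes, would themselves be governed artifacts, documented and periodically re-validated under the governance framework.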

AI Applications and Future Outlook

  • AI Use Cases: The report details various AI applications already in use or under development in PV, including:
    • Adverse event capture from social media and literature.
    • Individual Case Safety Report (ICSR) processing, such as duplicate detection (Use Case B) and data extraction/encoding using Natural Language Processing (NLP) or LLMs (Use Cases A and C).
    • Signal detection and analysis, including predictive models and process efficiencies (Use Cases F and D).
  • Future Considerations (Chapter 10): AI is expected to accelerate the shift in PV from a reactive discipline (detection and processing) to a proactive one (prediction and prevention), often utilizing near real-time data collection and assessment.
    • The complexity of future systems (e.g., autonomous AI agents, neurotechnology) will require the core principles to continuously evolve and adapt.
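As an illustration of the duplicate-detection use case, a minimal rule-based approach might combine exact matches on structured fields with fuzzy matching on free text. This Python sketch is hypothetical and not from the report: the field names, weights, and `duplicate_score` function are illustrative, and production systems would rely on validated probabilistic record linkage or trained models rather than hand-picked weights.

```python
from difflib import SequenceMatcher

def duplicate_score(a: dict, b: dict) -> float:
    """Score how likely two simplified ICSR records describe the same case.
    Combines exact matches on structured fields with a fuzzy match on the
    free-text reaction description. Weights are illustrative, not validated."""
    score = 0.0
    if a.get("patient_dob") and a.get("patient_dob") == b.get("patient_dob"):
        score += 0.3
    if a.get("drug") and a.get("drug") == b.get("drug"):
        score += 0.3
    # Fuzzy similarity of the reported reaction text, in [0, 1]
    sim = SequenceMatcher(None, a.get("reaction", ""), b.get("reaction", "")).ratio()
    score += 0.4 * sim
    return score

# Two reports of the same event, worded differently
r1 = {"patient_dob": "1960-04-02", "drug": "drugX", "reaction": "severe headache"}
r2 = {"patient_dob": "1960-04-02", "drug": "drugX", "reaction": "headache, severe"}
print(round(duplicate_score(r1, r2), 2))
```

Under the report's principles, candidate pairs flagged by such a score would still pass through risk-based human review before any case is merged or discarded.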
