Model Drift is a phenomenon in machine learning where the predictive accuracy of an AI model degrades over time because the statistical properties of the production data (the data the model receives in its live operating environment) have shifted away from those of the data the model was originally trained on. Such a shift can be detected statistically (see the sketch after the list below). In the life sciences, this is a critical risk, particularly in dynamic areas such as:
- Manufacturing: Changes in raw material quality, supplier variability, or equipment wear can cause the underlying process data (cGMP parameters) to shift, rendering a quality prediction model unreliable.
- Pharmacovigilance/Real-World Evidence (RWE): New prescribing patterns or emerging drug-related events can alter the data distribution, causing a safety monitoring model to miss crucial signals.
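In practice, "the statistical properties have changed" can be tested directly by comparing a feature's training-time distribution against a recent window of production values. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy on a hypothetical cGMP parameter; the parameter name, significance level, and simulated supplier-related shift are illustrative assumptions, not details of any particular system.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(train_values: np.ndarray,
                        live_values: np.ndarray,
                        alpha: float = 0.05) -> bool:
    """Two-sample Kolmogorov-Smirnov test: has the live distribution of a
    single feature shifted away from its training-time distribution?"""
    result = ks_2samp(train_values, live_values)
    # A small p-value means the two samples are unlikely to come from the
    # same distribution, i.e. the feature has drifted.
    return result.pvalue < alpha

# Hypothetical example: a granulation-moisture parameter whose mean shifts
# after a change of raw-material supplier.
rng = np.random.default_rng(42)
training_moisture = rng.normal(loc=2.0, scale=0.15, size=5_000)
production_moisture = rng.normal(loc=2.3, scale=0.15, size=500)
print(feature_has_drifted(training_moisture, production_moisture))  # True
```

Per-feature tests like this detect data drift (changes in the inputs); they do not by themselves confirm that predictive accuracy has degraded, which is why monitoring of model outputs against ground truth, where available, is usually run alongside them.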
Detecting and mitigating model drift requires continuous monitoring and often involves re-training or updating the model (creating a Dynamic Model). These activities belong within a robust Data Governance strategy so that Data Integrity and regulatory compliance are sustained.
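For continuous monitoring, one common approach is to compute a drift metric such as the Population Stability Index (PSI) for each monitored feature at a fixed cadence and to escalate once it exceeds a pre-defined threshold. The sketch below is a minimal PSI implementation with a widely used rule of thumb for the thresholds; the bin count, threshold values, and simulated data are illustrative assumptions, and in a GxP setting the actual escalation and re-validation steps would be defined by the governing change-control procedures.

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               n_bins: int = 10) -> float:
    """Population Stability Index between a reference (training) sample and
    a recent production sample of the same feature."""
    # Bin edges come from the reference distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0.0, 1.0, n_bins + 1))
    # Widen the outer edges so production values outside the training range
    # still land in the first or last bin.
    edges[0] = min(edges[0], actual.min()) - 1e-9
    edges[-1] = max(edges[-1], actual.max()) + 1e-9
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against empty bins before taking logarithms.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Rule of thumb often quoted in practice:
#   PSI < 0.1 -> stable; 0.1-0.25 -> monitor closely; > 0.25 -> investigate/retrain.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=50.0, scale=5.0, size=10_000)  # training-time values
current = rng.normal(loc=53.0, scale=5.0, size=1_000)    # recent production window
print(round(population_stability_index(baseline, current), 3))
```

A threshold breach would then trigger the governance actions described above: investigation of the root cause and, where justified, re-training and re-validation of the model before it returns to production use.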