Photo by Nappy on Unsplash

Striking the Balance: Safeguarding Clinical Trials with AI

Introduction

Clinical trials stand at the cusp of transformation as Artificial Intelligence (AI) becomes an integral part of data management. The infusion of AI into these trials holds great promise, but with that promise comes the responsibility to navigate a complex landscape of potential benefits and inherent risks. As we delve into the nuanced nature of AI's role in clinical trials, it becomes clear that the successful integration of this technology requires careful consideration and strategic mitigation strategies. Furthermore, understanding the pivotal role of data stewards in managing and mitigating risks associated with AI becomes crucial for ensuring responsible and ethical use.

This is the first post in a series on using AI/ML in drug development, considerations for benefit-risk analysis, and potential mitigations to support responsible AI. In this post we’ll look at recruitment, participant selection and stratification, dose optimization, adherence, retention, and site selection. Further posts will cover data management and endpoint assessment, as well as GxP AI model validation & auditing.

Recruitment Optimization with AI

By leveraging vast datasets, AI/ML technologies can significantly improve participant matching, leading to enhanced trial accessibility and increased efficiency.

This not only expedites the recruitment process but also ensures that individuals who stand to benefit the most from investigational treatments are identified with unprecedented accuracy. The potential benefits extend beyond efficiency; they encompass the ethical imperative of providing equal access to clinical trials, irrespective of demographic factors.
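
As a rough illustration of what rule-based participant matching can look like, the sketch below screens hypothetical candidate records against illustrative inclusion and exclusion criteria; the record fields, thresholds, and criteria are assumptions for the example, not taken from any real protocol.

```python
# Minimal sketch of criteria-based participant matching, assuming a hypothetical
# candidate record layout; a production matcher would draw on EHR and registry data.

candidates = [
    {"id": "P001", "age": 54, "egfr": 72, "on_anticoagulants": False},
    {"id": "P002", "age": 81, "egfr": 38, "on_anticoagulants": True},
]

# Hypothetical inclusion/exclusion criteria for an illustrative protocol.
def is_eligible(c):
    return (
        18 <= c["age"] <= 75            # inclusion: adult, upper age limit
        and c["egfr"] >= 45             # inclusion: adequate renal function
        and not c["on_anticoagulants"]  # exclusion: concomitant anticoagulation
    )

eligible = [c["id"] for c in candidates if is_eligible(c)]
print(eligible)  # ['P001']
```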

However, biased algorithms and algorithmic disparities pose a significant risk to the inclusivity of participant recruitment, as they can unintentionally exclude certain demographic groups.

The reliance on historical data for training algorithms may perpetuate existing biases present in the data. If the historical data is not diverse and representative, the AI system may inadvertently favor certain demographic groups over others, leading to an unequal distribution of clinical trial opportunities. This not only undermines the ethical principles of fairness and inclusivity but also jeopardizes the scientific validity of the trials.

To counter these risks, regular audits of algorithms are essential. This involves an ongoing process of assessing and addressing biases, ensuring that the algorithms used in participant matching are continually refined and improved.
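
One way such an audit might be run is sketched below: it compares the algorithm's selection rates across demographic groups and flags large disparities for review. The audit-log layout, group labels, and the 0.8 review threshold are illustrative assumptions, not a prescribed standard.

```python
import pandas as pd

# Hypothetical audit log: one row per screened candidate, with the algorithm's
# recommendation and a self-reported demographic attribute.
log = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group and the ratio of the lowest to the highest rate
# (a simple demographic-parity style check; the threshold is illustrative).
rates = log.groupby("group")["selected"].mean()
disparity_ratio = rates.min() / rates.max()

print(rates.to_dict())            # roughly {'A': 0.67, 'B': 0.25}
print(round(disparity_ratio, 2))  # 0.38 -> flag for review if below e.g. 0.8
```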

Recommendations for Data Stewards:

Data stewards play a critical role in this process. By fostering collaboration between data scientists and domain experts, they can contribute to the development of algorithms that are not only efficient but also equitable. They should advocate for the incorporation of diverse datasets, ensuring that the algorithms are representative of the entire participant population.

Selection and Stratification of Trial Participants

AI can sift through vast amounts of data, including demographic information, clinical data, vital signs, labs, medical imaging data, and genomic data, to predict individual participant outcomes. This predictive power not only expedites the identification of suitable participants but also enriches the trial population by identifying those more likely to respond positively to the treatment, thereby reducing variability and enhancing study power.
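
As a hedged sketch of how predictive enrichment might work, the example below fits a simple classifier on synthetic stand-ins for baseline covariates and flags candidates whose predicted probability of response exceeds a cutoff. The features, data, model choice, and threshold are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for baseline covariates (e.g., a lab value and a
# genomic score) and an observed responder label from a prior study.
X = rng.normal(size=(500, 2))
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Predicted probability of response; a protocol might prioritize screening of
# candidates above a pre-specified threshold while preserving diversity checks.
response_prob = model.predict_proba(X_test)[:, 1]
enriched = response_prob > 0.6
print(f"Flagged {enriched.sum()} of {len(enriched)} candidates for enrichment review")
```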

However, there's a delicate balance to strike. Over-reliance on AI predictions may lead to a reduction in the human element of decision-making, potentially excluding critical contextual and ethical considerations. Additionally, the transparency of AI models might be compromised, making it challenging to understand the rationale behind participant selection. This lack of transparency could erode trust in the system and raise ethical concerns about the fairness of participant inclusion.

To address these challenges, transparency in model decision-making becomes paramount. Ethical oversight is crucial, and continuous validation against diverse datasets ensures that the predictive models align with the principles of fairness and inclusivity.

Recommendations for Data Stewards:

Data stewards can actively contribute by establishing protocols for transparent reporting of model decisions. By facilitating ongoing ethical training for data scientists, they ensure that those involved in the development and deployment of AI models are well-versed in the ethical considerations surrounding participant selection.

Dose/Dosing Regimen Optimization

AI's ability to characterize and predict pharmacokinetic (PK) profiles opens doors to optimized dose and dosing regimens, especially in special populations where data might be limited.

Traditional dose optimization processes often rely on limited data, especially in specific populations such as rare diseases, pediatrics, and pregnant individuals. AI can analyze complex relationships between drug exposure and response, accounting for confounding factors and tailoring dose regimens to individual needs. This not only enhances the safety and efficacy of the drug but also represents a significant leap forward in personalized medicine.
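
To make the idea concrete, here is a minimal sketch of exposure-based dose selection under an assumed one-compartment model with linear kinetics; every parameter value and the target exposure are illustrative, not drawn from any real compound.

```python
import numpy as np

# Minimal sketch of exposure-based dose selection; all parameter values
# below are illustrative assumptions.
F, ke, V = 0.9, 0.15, 35.0   # bioavailability, elimination rate (1/h), volume (L)
CL = ke * V                  # clearance (L/h)

def auc_for_dose(dose_mg):
    # AUC(0-inf) for a single oral dose under linear kinetics: F * D / CL
    return F * dose_mg / CL

target_auc = 60.0                       # hypothetical target exposure (mg*h/L)
candidate_doses = np.arange(50, 501, 25)
aucs = np.array([auc_for_dose(d) for d in candidate_doses])
best = candidate_doses[np.argmin(np.abs(aucs - target_auc))]
print(best, round(auc_for_dose(best), 1))  # dose closest to the target exposure
```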

In special populations, where data may be scarce, relying solely on AI predictions may lead to suboptimal dose recommendations. Additionally, if the data used to train the AI models is not representative, biases may be introduced, impacting the generalizability of the dose optimization recommendations. Ethical considerations arise when relying solely on AI without human intervention, as the potential consequences of incorrect dosing in vulnerable populations can be severe.

Rigorous model validation is critical in mitigating these risks. Involving domain experts in decision-making ensures that the optimization aligns with the complexities of clinical scenarios, and adherence to ethical guidelines becomes non-negotiable.

Recommendations for Data Stewards:

Data stewards can facilitate communication channels between data scientists and clinical experts. By ensuring adherence to ethical guidelines in data collection, they play a key role in establishing the ethical foundation of dose optimization models.

Adherence Monitoring and Improvement

One of the significant challenges in clinical trials is ensuring that participants adhere to medication regimens and follow protocol guidelines. AI addresses this challenge by providing real-time monitoring tools, such as smartphone alerts and reminders, eTracking of medication, and tools for visual confirmation. These tools not only enhance adherence but also contribute to the overall engagement and experience of trial participants.

The implementation of AI tools for adherence monitoring introduces privacy concerns, as it involves the continuous tracking of participants' behaviors. Additionally, data security risks arise from the collection and storage of sensitive information. Biases may be introduced in the monitoring tools, potentially affecting certain participant groups disproportionately.

Implementing robust data encryption safeguards participant privacy, obtaining explicit participant consent mitigates ethical concerns, and regular audits of adherence tools address potential biases.
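
As one hedged example of the encryption piece, the sketch below encrypts an adherence record at rest with a symmetric key; in practice keys would be handled by a managed key store, and the record fields shown are assumptions.

```python
from cryptography.fernet import Fernet  # pip install cryptography
import json

# Minimal sketch of encrypting an adherence record at rest; real deployments
# would keep the key in a managed key store, not alongside the data as here.
key = Fernet.generate_key()
cipher = Fernet(key)

record = {"participant_id": "P001", "dose_taken": True, "timestamp": "2024-05-01T08:05:00Z"}
token = cipher.encrypt(json.dumps(record).encode("utf-8"))

# Only holders of the key can recover the plaintext record.
restored = json.loads(cipher.decrypt(token).decode("utf-8"))
print(restored["dose_taken"])  # True
```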

Recommendations for Data Stewards:

Data stewards can advocate for strict adherence to data privacy policies, ensuring that the implementation of AI tools aligns with the highest standards of participant data protection. Their involvement in ongoing training on data security for data scientists contributes to a culture of data responsibility.

Retention Strategies with AI

The attrition of participants during clinical trials poses a significant challenge, impacting the reliability and validity of study outcomes. AI can address this by improving access to relevant trial information for participants through tools such as AI chatbots, voice assistance, and intelligent search. Additionally, passive data collection techniques and the extraction of valuable information from available data contribute to the development of participant profiles. This not only enhances participant retention but also aids in predicting potential dropouts and adverse events.
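
One simple way a dropout-risk flag might be derived from such participant profiles is sketched below; the engagement signals, weights, and follow-up threshold are illustrative assumptions rather than a validated model.

```python
# Minimal sketch of a dropout-risk score built from engagement signals in a
# hypothetical participant profile; weights and caps are illustrative.
profiles = [
    {"id": "P001", "missed_visits": 0, "days_since_app_login": 2,  "unresolved_queries": 0},
    {"id": "P002", "missed_visits": 2, "days_since_app_login": 21, "unresolved_queries": 3},
]

def dropout_risk(p):
    score = (
        0.4 * min(p["missed_visits"], 3) / 3
        + 0.4 * min(p["days_since_app_login"], 30) / 30
        + 0.2 * min(p["unresolved_queries"], 5) / 5
    )
    return round(score, 2)

for p in profiles:
    flag = "follow up" if dropout_risk(p) >= 0.5 else "ok"
    print(p["id"], dropout_risk(p), flag)
```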

The use of AI in participant retention introduces challenges related to privacy, as the extraction of information for participant profiles may delve into sensitive areas. Data security risks arise from the storage and utilization of participant data, and ethical considerations are paramount when predicting participant behaviors and potential adverse events.

Ensuring transparent data usage policies, prioritization of participant consent, and continuous assessment of ethical implications are crucial mitigation steps.

Recommendations for Data Stewards:

Data stewards take a lead role in developing and communicating clear data usage policies. Their participation in the development of participant consent processes ensures that participants are well-informed, and their active contribution to ethical reviews of AI applications reinforces a commitment to responsible data practices.

Site Selection and Operational Optimization

Clinical trial efficiency is heavily dependent on the selection of appropriate sites and the optimization of operational processes. AI can analyze historical data from various trials to identify sites with the greatest potential for success. Algorithms can evaluate site performance, helping to determine which sites may have a higher risk of running behind schedule based on data from other trials at that site. This not only expedites the trial process but also ensures that resources are allocated efficiently.
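
A minimal sketch of this kind of site-performance screening is shown below: it compares hypothetical actual versus planned enrollment rates and flags sites falling behind. The figures and the 0.8 threshold are assumptions for illustration only.

```python
import pandas as pd

# Minimal sketch of flagging sites at risk of falling behind schedule, using
# hypothetical historical enrollment figures rather than any real trial data.
history = pd.DataFrame({
    "site":              ["Site 01", "Site 02", "Site 03"],
    "planned_per_month": [6, 6, 4],
    "actual_per_month":  [5.5, 2.1, 4.3],
})

history["enrollment_ratio"] = history["actual_per_month"] / history["planned_per_month"]
history["at_risk"] = history["enrollment_ratio"] < 0.8  # illustrative threshold

print(history[["site", "enrollment_ratio", "at_risk"]])
```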

The reliance on algorithms for site selection and operational optimization introduces the risk of perpetuating biases present in historical data. If certain sites were historically favored, the algorithm may inadvertently prioritize them, leading to an unfair distribution of opportunities. That skew can concentrate enrollment within a narrow demographic and geographical profile, leaving other populations underrepresented. Additionally, algorithmic evaluations may not capture the nuanced factors that contribute to successful trial conduct, potentially leading to suboptimal decision-making. Ultimately, this can come back to haunt the sponsor by constraining the generalizability of study findings.

Combining AI insights with human expertise, conducting regular performance audits, and implementing transparent decision-making processes ensure that the benefits of optimization are not compromised by unintended consequences.

Recommendations for Data Stewards:

Data stewards can facilitate communication channels between data scientists and site managers. By advocating for transparency in decision-making algorithms, they ensure that the integration of AI in operational optimization aligns with organizational goals and ethical standards.

Adherence

AI/ML tools, such as smartphone alerts and reminders, provide real-time support to participants, ensuring they adhere to prescribed medication regimens. This not only enhances the accuracy of data collection but also contributes to the overall success of the trial by minimizing the risk of non-adherence.

Applications using digital biomarkers, such as facial and vocal expressivity, enable remote monitoring of adherence. These tools can adapt to individual participant needs, offering a personalized approach to medication management. eTracking of missed clinical visits, triggered by non-adherence alerts, allows for timely intervention. This proactive approach enables researchers to address potential issues before they escalate, ensuring the integrity of the trial data.
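
A hedged sketch of such a trigger appears below: it flags a participant for site follow-up when recent non-adherence coincides with an unconfirmed, overdue visit. The field names and thresholds are assumptions for illustration.

```python
from datetime import date

# Minimal sketch of an eTracking-style trigger: if a participant has missed
# doses recently and a scheduled visit has passed unconfirmed, raise a flag
# for site follow-up. Field names and thresholds are illustrative.
participant = {
    "id": "P002",
    "missed_doses_last_7_days": 3,
    "next_visit": date(2024, 5, 10),
    "visit_confirmed": False,
}

def needs_intervention(p, today):
    non_adherent = p["missed_doses_last_7_days"] >= 2
    visit_overdue = today > p["next_visit"] and not p["visit_confirmed"]
    return non_adherent and visit_overdue

print(needs_intervention(participant, today=date(2024, 5, 12)))  # True
```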

The use of facial and vocal biomarkers for remote adherence monitoring raises privacy concerns. Participants may feel uneasy about continuous surveillance, necessitating transparent communication about data usage and privacy protection measures.

Risk Mitigation Strategies for Data Stewards

Data stewards can ensure transparency about how participant data will be used and protected to foster trust. Actively engaging in participant education initiatives can also help address privacy concerns. Finally, they can implement a system of regular audits of AI algorithms to identify and rectify biases; in collaboration with data scientists, data stewards can contribute to these ongoing assessments, ensuring the fairness and accuracy of adherence monitoring tools.

Retention

AI-powered tools, including chatbots, voice assistance, and intelligent search, provide participants with seamless access to relevant trial information. This not only empowers participants but also contributes to a positive trial experience. Passive data collection techniques relieve participants of active reporting burdens. By extracting information from available data generated during clinical practice or study activities, AI minimizes the burden on participants and enhances retention.

By leveraging data from Digital Health Technologies (DHTs) and other systems, AI can develop patient profiles. These profiles enable predictive analytics, potentially identifying participants at risk of dropouts or adverse events. Proactive measures can then be taken to ensure participant retention.

The biggest retention-related risk stems from passive data collection and the accompanying concerns about privacy and the ethical use of participant information.

Risk Mitigation Strategies for Data Stewards

It is essential to ensure that data collection methods align with ethical guidelines and that the extent of passive data extraction is disclosed in the informed consent.

Conclusion

Acknowledging the indispensable role of data stewards in shaping ethical and responsible AI practices within the clinical trial landscape is vital. Through a collaborative and informed approach, we can strike the delicate balance needed to harness the full potential of AI in advancing clinical research.