
Artificial Intelligence (AI): Putting the Patient-in-the-Loop
The question is no longer whether to use AI but how to use it responsibly. Below, scientists from Sprout Health Solutions explore some of the pros (efficiency and consistency) and cons (bias and transparency) of AI use within health-related quality of life and outcomes research, and consider one approach to its implementation: putting the patient-in-the-loop.
Efficiency
By automating time-consuming tasks, AI promises to streamline research processes and improve outcomes for patients. Sprout scientists recently attended C-Path’s Clinical Outcomes Assessment (COA) Program Annual Meeting, where Lynn Brielmaier, a person living with amyotrophic lateral sclerosis (PLWALS), was a panellist. Lynn noted that 90% of PLWALS die within 5 years, yet obtaining a diagnosis can take years: a stark reminder of the critical need to leverage AI for rapid COA and biomarker development.1
Consistency
Another benefit of AI is the potential for increased reliability and replicability. The FDA’s ISTAND pilot program, which supports novel drug development tools outside typical pathways, accepted its first AI-based tool in 2024. The AI-COA™ is a machine learning (ML) model designed to infer clinician-reported outcomes (ClinROs) for depression and anxiety by analysing video interviews, thereby reducing the subjectivity typically associated with ClinRO assessments.2
Bias
While AI has the potential to provide greater impartiality than humans, it nevertheless remains susceptible to algorithmic bias. ML models are often trained on datasets that under-represent certain populations, thereby reproducing systematic disadvantages for marginalised groups. Careful validation of AI models is essential to ensuring robust outputs that do not perpetuate or exacerbate existing health inequalities.3
Transparency
While complex deep learning (DL) models can be remarkably effective at detecting patterns and generating predictions, they often remain opaque. This “black-box problem” means we can see the inputs and outputs, but we cannot fully explain how the model arrives at its conclusions. This is problematic because outputs that may appear correct are difficult to validate or trust without understanding the underlying process.4 Moreover, large language models (LLMs) are prone to “hallucinations”, in which false or misleading information is presented as if true. It is therefore imperative that every AI-generated statement, reference, and recommendation be carefully verified, but this need for constant vigilance can make early AI adoption more time-consuming than traditional methods.5
Putting the Patient-in-the-Loop
AI is not inherently good or bad; its outcomes will depend on how we choose to implement it. These decisions involve trade-offs between efficiency, consistency, transparency, and many other methodological and ethical concerns. The so-called “human-in-the-loop” approach introduces human oversight within AI workflows, providing accountability to mitigate potential harms, ensure regulatory compliance, and support ethical decision-making. Within healthcare, this typically equates to a physician-in-the-loop, with AI functioning as an auxiliary to healthcare professionals rather than a replacement for them. A recent global survey by Busch et al. confirmed that most patients are in favour of human oversight within AI-assisted healthcare.6
This survey also found that over 70% of patients valued a transparent AI model over a merely accurate one.6 A patient-in-the-loop approach, where patients are directly involved in the design and development of AI models, helps ensure complex trade-offs are aligned with their values and preferences. Embedding patient perspectives in this way represents a natural extension of patient-centric care within the emerging landscape of AI decision-making; at Sprout, where the patient voice is central to everything we do, we support putting the patient-in-the-loop.6,7,8
References
1. Brielmaier L. Remarks as panellist in: The potential of AI for COA development and deployment in clinical trials [panel session]. Critical Path Institute Clinical Outcome Assessment Program Annual Meeting; 2025 Apr 9; Virtual. C-Path; 2025. Available from: https://c-path.org/2025-clinical-outcome-assessment-program-annual-meeting/
2. U.S. Food and Drug Administration. FDA’s ISTAND Pilot Program accepts submission of first artificial intelligence-based and digital health technology for neuroscience [Internet]. U.S. Food and Drug Administration; 2024 Jan 23 [cited 2025 Sep 17]. Available from: https://www.fda.gov/drugs/drug-safety-and-availability/fdas-istand-pilot-program-accepts-submission-first-artificial-intelligence-based-and-digital-health
3. Chin MH, Afsar-Manesh N, Bierman AS, et al. Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care. JAMA Netw Open. 2023;6(12):e2345050. doi:10.1001/jamanetworkopen.2023.45050
4. Xu H, Shuttleworth KMJ. Medical artificial intelligence and the black box problem: a view based on the ethical principle of “do no harm”. Intelligent Medicine. 2024;4(1):52-57. doi:10.1016/j.imed.2023.08.001
5. Rodríguez JE, Lussier Y. The AI Moonshot: What We Need and What We Do Not. Ann Fam Med. 2025;23(1):7. doi:10.1370/afm.240602
6. Busch F, Hoffmann L, Xu L, et al. Multinational Attitudes Toward AI in Health Care and Diagnostics Among Hospital Patients. JAMA Netw Open. 2025;8(6):e2514452. doi:10.1001/jamanetworkopen.2025.14452
7. Griot MF, Walker GA. A Patient-in-the-Loop Approach to Artificial Intelligence in Medicine. JAMA Netw Open. 2025;8(6):e2514460. doi:10.1001/jamanetworkopen.2025.14460
8. Helme A, Kalra D, Brichetto G, Peryer G, Vermersch P, Weiland H, White A, Zaratin P. Artificial intelligence and science of patient input: a perspective from people with multiple sclerosis. Front Immunol. 2025;16. doi:10.3389/fimmu.2025.1487709
This is a paid advertisement from Sprout Health Solutions.

The International Society for Quality of Life Research (ISOQOL) is a global community of researchers, clinicians, health care professionals, industry professionals, consultants, and patient research partners advancing health-related quality of life (HRQL) research.
Together, we are creating a future in which patient perspective is integral to health research, care and policy.