Yael Bensoussan, an otolaryngology specialist affiliated with Tampa General Hospital in Florida, can often deduce a patient’s likely diagnosis simply by listening to their voice.
“When I close my eyes and somebody comes into my office, I can usually tell a lot,” she says. “I can tell if they’re happy or sad, I can tell if they’re a male or female, I can tell about how old they are, I can usually tell what disease they have.”
Bensoussan’s experience with her patients’ voices sparked a question: could she harness the power of artificial intelligence — voice recognition technology, in this case — as a simple, non-invasive way to identify disease?
Her inquiry fits into a larger, ongoing movement to integrate the field of AI with medicine. The National Institutes of Health’s Bridge to Artificial Intelligence program, or Bridge2AI, recently awarded grants to four research projects working to integrate AI into the health care sector.
Bensoussan is a principal investigator on one of these projects: Voice as a Biomarker for Health. Her project aims to create a foundation for integrating AI into the work of head and neck doctors. Achieving this goal involves many moving parts: generating and collecting data, building software and technology such as apps for safe and ethical data collection, and outreach to connect health professionals across the country and around the world.
However, developing a procedure that delivers reproducible results is not the only concern for researchers like Bensoussan. The burgeoning field of AI in medicine comes with a host of potential ethical challenges: technology might not be trained on data representative of the populations it is intended to serve, integrating AI into a hospital system might be too expensive for low-resource settings, or data collection might threaten patient privacy.
Francis X. Shen, a professor at the Harvard Medical School Center for Bioethics who has an interdisciplinary background in law and neuroscience, focuses his work on these ethical considerations.
“One of the foundational ideas is that rather than wait for problems to arise, you want to do anticipatory governance — you want to think about research and innovation,” Shen says.
Shen is a member of the Ethical and Trustworthy AI Module on another Bridge2AI-funded project at Massachusetts General Hospital: the Patient-Focused Collaborative Repository Uniting Standards for Equitable AI, or CHoRUS. The project aims to improve recovery from acute illness by using software programs trained to flag vulnerable patients who might need critical care — such as those with sepsis, seizures, cardiac arrest, heart failure, lung injury, or dangerously high intracranial pressure.
For both of these Bridge2AI researchers, data collection is a pressing ethical concern. Existing datasets used to train AI tend to lack diversity and representation, posing a major challenge to equitable treatment.
Bensoussan’s team is trying to assemble a diverse dataset that meets quotas for race, ethnicity, and, because the tool relates to voice recognition, even accent. To do so, her research team partners with low-resource clinics that serve diverse populations.
Working in these clinics makes another related ethical consideration especially apparent: the need to improve access to emerging AI technologies that may be expensive to mass-produce.
“A part of the mission should be to create these technologies in a way that closes the health gap between the Global South and the Global North, between those who have more or less within countries, rather than exacerbate these gaps once again,” says Vardit Ravitsky, another faculty member at the HMS Center for Bioethics and a principal investigator for the ethics modules of two Bridge2AI projects. “If we standardize things and ... make diagnostic tools and decision-making more accessible, then we’re closing gaps between different environments that have different levels of resources.”
As AI technologies in health care become increasingly accessible, they threaten to infringe on data privacy — another ethical issue Ravitsky is studying.
“We all know that we live in a world where privacy is either completely gone or has no meaning now,” she says, mentioning the impact of social media and the practice of sending genetic information to commercial companies. “We can either try to protect privacy according to past models or we can try to come up with more sophisticated understandings of what privacy is and how it might be threatened.”
Ravitsky and her teams plan to conduct extensive literature reviews, supplemented by a stakeholder forum where clinicians, researchers, patients, families, and scientists come together to discuss what privacy should look like.
“The novel aspect of consent and privacy here [is] that if we’re going to learn new things about human beings, maybe our promises to keep things confidential will not be maintained, because this project will discover new ways of identifying people,” Ravitsky says. She explains that one challenge she and her colleagues must overcome is determining how to accurately inform participants about privacy risks before they enroll in a study, when those risks may evolve over time as the technology does.
Alongside individual privacy concerns, the Bridge2AI projects face the additional challenge of combating general public mistrust of AI in medicine.
“If we don’t learn how to communicate clearly to the public ... the reliability and trustworthiness of AI in medicine, we’re going to lose the benefits of this technology because people will resist it,” Ravitsky says, drawing a parallel to how a loss of public trust fueled vaccine hesitancy during the Covid-19 pandemic.
As the Bridge2AI program is still in its infancy, the risks and benefits of the technologies its research teams are developing remain unclear. But even as researchers grapple with these ethical challenges, they remain optimistic about the transformative potential of their technologies.
“The idea of developing a technology that would make something that’s very cutting-edge, and potentially very beneficial, accessible to everybody everywhere through a cheap, non-invasive collection process —” Ravitsky says, “to me, that is hugely attractive.”