When Professor Anant Madabhushi began to plan Emory’s first symposium on artificial intelligence in health, he wondered if attendance would be enough to fill the 160 seats in the Health Sciences Research Building auditorium. “It turns out,” he says, “we had to shut off registration the day before, because we topped 450. It was remarkable to see the excitement, the enthusiasm.”
Madabhushi leads the newly established Emory Empathetic AI for Health Institute, part of the university-wide AI.Humanity Initiative. The initiative was created to break down disciplinary barriers between researchers in AI, medicine and the humanities, letting them work toward a common purpose: harnessing machine learning and big data for disease prevention and better patient care. He attributes the large turnout to a robust AI community, not only among students and faculty at Emory but also at Georgia Tech, the University of Georgia, the Atlanta Veterans Affairs Medical Center, Georgia State and Morehouse, as well as to interest from multiple corporate partners.
Attendees who came to learn about the broadest possible spectrum of AI topics weren’t disappointed. While some sessions focused on AI’s clinical potential in areas like health care diagnosis, genomics and pathology, others examined AI’s impact on medical privacy and security or the challenge of creating databases free of bias. The program deliberately interleaved these difficult, troublesome issues with discussions of medical innovations.
Over two days, attendees learned about leading-edge research spanning radiology, acute care and public health, including AI’s challenge to patient privacy, how AI models can make more precise diagnoses and how to make Emory Healthcare’s diverse patient data more widely available to investigators.
“An example,” said Madabhushi, “is the work where we showed AI could be used to prise out subtle differences in the appearance of prostate cancer between Black men and white men. We use these subtle differences to create population-tailored models that we showed were more accurate in risk stratification and prediction of disease recurrence in Black men compared to a population-agnostic model.”
On day one, Madabhushi also held a lunchtime Fireside Chat with Joe Depa, Emory’s new chief data officer, to discuss the daunting task of managing the oceans of medical data AI requires. The diversity of subjects—the promising developments as well as the cautions—was part of the point of the gathering. Provost Ravi Bellamkonda emphasized the need for people to become what he called “bilingual,” understanding both engineering and medicine with no artificial silos to separate them. “This is when we see the true power of AI,” he said.
Some of that power is already being realized at Emory. One example of the bilingual abilities Bellamkonda called for is the husband-and-wife team of Gari Clifford, professor of biomedical informatics, and Rachel Hall-Clifford, professor of anthropology. The pair worked with Indigenous Mayan midwives in Guatemala to create devices that midwives with low literacy can use to detect a range of problems in pregnant women.
“You can't simulate an adverse birth,” Clifford notes. “It's unethical. But you can build models, and we’ve done that. The Food and Drug Administration came back and said, ‘What would happen when, pathologically, the fetal heart rate goes below the mother’s?’ We said, well, we don't have any data. But we can simulate it. We simulated exactly how it would happen and the FDA accepted that, and we showed that our algorithm would work on that particular type of pathology. I think that's where it's exciting. We're moving from experimenting on humans to experimenting in silicon more and more.”
At the School of Nursing Center for Data Science, Professor Monique Bouvier is working to address a critical shortage of nurses by incorporating AI into patient care. “Within two years, 32 percent of our novice nurses are leaving the workforce because they don't have the resources to provide the care they need,” Bouvier said. “What has changed is unprecedented advances in technology.”
Among other projects, the Center is experimenting with using AI to monitor hospital patients. That includes a virtual platform doing one-on-one observation to make sure patients don't pull out their lines or get up to walk, then fall. “A virtual nurse who can provide that hands-off care to the patient will leave that bedside nurse more time with his or her patient to provide the holistic nurturing care that we're taught in our schools of nursing,” she said.
Some participants approached the “bilingual” goal through a deep dive into the problem of creating AI models and databases that clinicians can actually use. It’s a steep technical challenge. Tony Pan, professor of biomedical informatics, discussed the demands of developing a trial algorithm to predict sepsis, drawing on Epic, Emory’s patient record system, for data on the respiratory system, liver function, cardiovascular function and the nervous system, all without compromising patient privacy. The ultimate goal is a model clinicians could use to estimate a patient’s likelihood of developing sepsis.
Marly van Assen, professor of radiology and imaging sciences, reported on efforts to predict heart and vascular disease by integrating multiple kinds of risk data into a single model. “We’ve seen all these studies that prove that multimodal data works,” she said. “Why are we not using this currently? Getting large databases is hard. Patients move around and change doctors and hospitals. Not a lot of data is actively recorded, especially when it comes to risk factors that are not coming directly from, for example, lab tests. A lot of the patients that we're interested in don't necessarily show up at or are not referred for an imaging exam because, for example, they lack access or don't experience the typical symptoms. And those are patients that would be very interesting for studies to see if we also can improve their outcomes.”
The power, reach and intrusiveness of AI had many symposium participants worried about effects on patient privacy. “You can predict the sex of the individual,” Madabhushi noted. “You can predict race. You can look at a scan of the eye and predict a whole series of different cardiometabolic conditions. Do we really think we're going to be at a point where we can truly preserve the privacy of an individual, given that AI is able to prise out so much for so little?” Other participants worried that future AI-assisted attacks might break into hospital systems and re-identify private patient data by associating it with information on social media and the like.
An entire discussion focused on the ethical questions raised by the hidden biases widely recognized as embedded in the data used to train AI models, an issue looming over the entire field. “I have this weird definition of ethics,” said Professor of Radiology Janice Newsome. “I ask ‘who is this good for?’ This is a question we have to ask at the beginning, in the middle and at the end as we start thinking about how we ethically introduce disruptive technologies into our space.”
Gray areas are everywhere in AI ethics, according to computer science professor Joyce Ho. “The uncomfortable truth is that students don't like gray. What we need to teach before we even get to the ethics is the fact that gray is built into the plan.” John Banja, a medical ethicist at the Emory University Center for Ethics, noted that there are many different definitions of fairness. “There is no one-size-fits-all,” he said. “When we talk about AI, we’re going to be talking about very specific kinds of cases, about specific ethical dilemmas that emerge from these cases. We are in the infancy of these kinds of problems.”
The ethically informed thinking behind the Empathetic AI for Health Institute may also have helped drive the symposium’s large turnout. Madabhushi, quipping that he didn’t compose his remarks with ChatGPT, recalled the health care disparity he saw growing up in India. “I lost my aunt to breast cancer when I was in my teens,” he observed. “And I think at a very young age I realized the importance of empathy in the practice of medicine.”
The word “empathetic” in the Institute’s name reflects how much the quality of empathy, and the challenge of achieving it, matter at Emory. The Institute’s leaders know that American health care outcomes still show big disparities between different groups, and they’re working to make AI models more inclusive. As they leverage Emory’s strengths in areas like oncology, cardiovascular health, brain health, diabetes, HIV and immunology, they’re pursuing three goals: to develop new AI technologies, to introduce them into clinical practice and to scale them up through industry partnerships. “As we think about AI,” Madabhushi concluded, “we need to make sure that we're imbuing that same sense of empathy in the development and the application of AI tools for precision medicine.”