Hello, World: Input medical ethics here
Can humans teach morality to machines for medical decision-making?
If you received this in your inbox, you are among the first wave to sign up for Phase 5 and its inaugural edition—and you have my sincere gratitude. This newsletter, and I, will strive to earn your time, readership, and support with rigorous journalism and analysis exploring some of the most important healthcare, biotech, med tech, science, and technology issues of our times.
Phase 5 will focus on the fundamental questions, and the thorny ethical issues, that will drive the future as technologies like AI/ML and gene editing intersect with industries from medicine to agriculture to energy, all beneath the specter of a tumultuous era marked by climate change, exploding (and increasingly elderly) populations, and the limited resources we have to sustain our planet and humanity.
This is a new venture for me; I’m sure there will be considerable room for improvement and growth, and I will make that my constant aim with your feedback and engagement as a guide.
For now, this is the type of content (or at least one form of it) that you can expect from Phase 5. And if you'd be so kind, please do share and spread the word to anyone, or any network, you feel would find these issues and this breadth of coverage of interest.
In the meantime… Read on.
There is a fundamental dilemma in trying to teach moral and ethical values to a machine algorithm: We humans don't exactly have an established vernacular or guiding framework for them ourselves. Given the prominent role that algorithms, via AI and machine learning for instance, have already begun playing in everything from drug discovery to the ways hospital systems try to streamline emergency room costs, that ethical conundrum is worth vigorous exploration.
As humans, we have general notions of fairness and what makes something right or wrong—murder is bad, stealing should be a crime.
But even these notions are vulnerable to the subjectivity wrought by circumstance. Murder is bad—but is it still that bad if you kill a serial killer? Stealing should be a crime—but should our conception of theft be limited to an immediate act (a pocket picked in the moment), or should it extend to a grander story, such as the plight of the disadvantaged and low-income masses, or the wage theft companies can perpetrate against their workers through far subtler means, like soft pressure to reconsider taking that full lunch break?
In medicine and science, this is a critical debate for industry leaders, academics, and government officials, one that can't be dismissed as some pretentious thought experiment. It's the stuff of how industries with noble goals carry out their function, and whether their approaches fail the stakeholders they claim to serve. It already affects real people today and will affect millions more in the future.
So there's a fair bit of moral and ethical finagling, the kinds of questions philosophers have debated for centuries, underlying the challenge of balancing the sheer horsepower of new machines with a human-focused imperative, and of making sure we don't just end up building highly efficient systems for exacerbating health disparities and suffering.
It’s fitting that, just yesterday, JAMA ran the latest in its series on the intersection of AI and medicine. It’s a fantastic piece where JAMA Editor in Chief Kirsten Bibbins-Domingo interviews Princeton computer science professor Arvind Narayanan. The following insight, based on a recent study in Science, sticks out:
Dr Narayanan: There was a study in Science that looked at an algorithm for risk prediction that many hospitals use to target interventions to patients. What it found was that the algorithm had a strong racial bias in the sense that for 2 patients who had the same health risks—one who is White and one who is Black—the algorithm would be much more likely to prioritize the patient who is White.
What the authors figured out was that the algorithm had been trained to predict health costs and minimize them. Like all AI algorithms, it’s trained on past data from the system […] In terms of what it was programmed to do, the algorithm was working. It was correctly predicting that by targeting these interventions more to patients who are White than patients who are Black, hospitals could save more on costs. This is one kind of bias: perpetuating past inequalities into the future.
Perpetuating past inequalities into the future.
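To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. It is not the study's model or data; the group labels, the size of the access gap, and the 10% intervention threshold are all illustrative assumptions. It simply shows how a model trained to predict cost, rather than health need, can end up deprioritizing a group that has historically spent less on care for the same level of need.

```python
# Hypothetical sketch of proxy-label bias: not the Science study's model or data.
# Two groups have identical underlying health need, but one group historically
# accrues less spending per unit of need (e.g., because of unequal access to care).
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need, same distribution in both groups
group = rng.integers(0, 2, size=n)               # 0 = historically well-served, 1 = historically underserved
access = np.where(group == 0, 1.0, 0.6)          # assumed spending gap per unit of need

prior_cost = access * need + rng.normal(scale=0.2, size=n)   # feature the model sees
future_cost = access * need + rng.normal(scale=0.2, size=n)  # label the model is trained to predict

# "Train" a one-feature cost predictor with least squares.
slope, intercept = np.polyfit(prior_cost, future_cost, 1)
predicted_cost = slope * prior_cost + intercept

# Target extra care to the top 10% of patients by predicted cost.
flagged = predicted_cost >= np.quantile(predicted_cost, 0.90)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: {flagged[mask].mean():5.1%} flagged; "
          f"mean true need among those flagged = {need[mask & flagged].mean():.2f}")
```

Run it and the historically well-served group is flagged for intervention far more often, while the underserved patients who do get flagged are, on average, sicker. In spirit, that is the pattern the Science study described: the algorithm does exactly what it was trained to do, and in doing so carries yesterday's inequities forward.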
This, of course, is nothing new when it comes to industries and their quest to optimize efficiency. From the places where government and industry choose to build highways and bridges to the universally recognized, yet still unfixed, discrimination prevalent in clinical trial recruitment for drug development, status quo bias is a dominant current that shapes the flow of the future.
Technology has the capacity to reverse these inequities. AI and machine learning can, in fact, help feed discovery that makes clinical science more precise and attuned to the needs of specific populations—including those who have been long underserved, whether because of their race, the type of disease they have, where they live, their socioeconomic class, or any number of other considerations.
Achieving that end, however, will require more than the development and refinement of the technological tools already revolutionizing science and medicine, and more than building robust datasets to feed software that can parse mountains of biological or real-time medical data.
It will require a more common understanding of the morals we instill in our machines, and the head-spinning task of feeding subjective ethics into empirical algorithms.