AI in medicine is the darling of JPM2024. But how, and how well, is it being used right now?
Some data on the knowns, unknowns, and sort-of-knowns about the current state of algorithmic medicine heading into JPM2024
Happy Sunday, readers.
I suspect a fair number of you in the health, biopharma, med tech, and related industries are either already in or soon headed to San Francisco for this year’s JPMorgan Healthcare Conference. (I’ll be at JPM2024 soon enough myself - if you’d like to chat, I’ll be there in person Monday afternoon through Wednesday afternoon - shoot me an email or a DM!).
For those less familiar with the pharma fest: JPM is the annual pilgrimage when one and all in the health and biopharma sector - the actual “JPMorgan Healthcare Conference” aside - converge on San Francisco. Everyone from execs at upstart biotechs to bigwigs at massive pharma companies to blue chip VCs and even a few journalists (I may be biased, but the trade media-sponsored events are among the best in my experience) descends on the Bay every second week of January to make major announcements, participate in panels, hobnob in the hallways, and generally start off the new year with an industry-wide vibe check.
The vibes this year seem hyped. A dismal retrenchment in biopharma market performance, dealmaking, and overall sentiment in 2022 spilled into 2023 before giving way to potentially encouraging signs of a rebound later last year, which ended with 55 novel FDA drug approvals and the first-ever FDA-approved CRISPR gene-editing treatment. With that innovation wind at the industry’s back, alongside the comfort of a recession that never materialized, companies seem hungry to claw back from the bottom and get in on the biological and technological waves of the future - especially through partnerships driven by AI, machine learning, and natural language processing that build on digital health trends dating to 2017 and the first FDA-cleared AI-driven medical device in April 2018.
There’s already been a deluge of AI-related news going into this year’s JPM. Endpoints News’ Andrew Dunn reported just a few hours ago that Isomorphic, an AI startup from Google parent Alphabet, struck its first major pharma deal with industry giants Eli Lilly and Novartis for $83 million upfront and up to $2.9 billion in milestone payments. It’s a drug discovery partnership for predicting protein structures through an algorithmic tool called AlphaFold, part of the biology research arm of Google’s DeepMind AI. And that’s just one piece in a cascade of AI-related news announced in the past week.
This isn’t surprising given the confluence of market trends, increasingly sophisticated biological drug development that requires parsing petabytes of data no human could hope to sift, the rise of AI/ML/NLP, and the Biden administration’s executive order last fall calling for a responsible framework to guide the use of artificial intelligence by industry—a directive which largely focuses on the technology’s use in healthcare, from drug development to patient care decisions to medical devices and diagnostic tools.
It’s useful to look at the state of the present as industry leaders strike the AI-related deals that will shape the future of their businesses - and, consequently, their impact on and value for patients. If you glance through the available public data, and what’s been compiled by independent sources, there’s still some uncertainty surrounding some of the first AI-based technologies to be granted FDA clearance: medical devices.
That’s not conjecture. The FDA states plainly on its own site tracking AI/ML-enabled medical devices that it must rely, at least in part, on externally tracked data about these devices as information trickles in. The agency does, however, state that as of October 19, 2023 “no device has been authorized that uses generative AI or artificial general intelligence (AGI) or is powered by large language models.”
But there have been plenty of devices greenlit under the FDA’s de novo and 510(k) pathways, a trend the agency expects to accelerate (perhaps alongside the eventual approval of generative AI-based products). And it’s not clear that these devices, developed by companies like Siemens and GE Medical Systems, among others, are being robustly validated across multiple sites with large datasets.
The FDA lists several third-party databases - from groups like Medical Futurist, Nature, and STAT - that have compiled these AI/ML-based medical device clearances as part of its public-facing communications. One trend sticks out like a sore thumb: data validation for these market-cleared products has been in sharp decline.
Credit: Chart by author/Phase 5; Source data from STAT as listed on FDA site
Other third-party sources referenced by the FDA reach the same conclusion: medical devices using AI/ML that have been on the market for years are not being robustly validated.
Credit: Source data from Nature Medicine, Wu, E., et al., as listed on FDA site
This is the reality of the rigor with which algorithmic technologies are currently being validated in the earliest use cases of AI/ML in medicine. The following weeks, months, years, and decades will likely accelerate the adoption of even newer forms of AI, ML, and NLP. And that will impact everything from radiology to diagnostics to patient monitoring to health system workflows and, yes, drug development. “There is virtually no area in medicine and care delivery that is not already being touched by AI,” as a March 2023 editorial published in NEJM states.
The authors go on to say:
The underlying technology is rapidly changing and, in many cases, is being produced by companies and academic investigators with financial interests in their products. For a growing class of large-scale AI models, companies that have the necessary resources may be the only ones able to push the frontier of AI systems. Since many such models are not widely available yet, hands-on experience and a detailed understanding of a model’s operating characteristics often rest with only a small handful of model developers. Despite the potential for financial incentives that could create conflicts of interest, a deep understanding of AI and machine learning and their uses in medicine requires the participation of people involved in their development.
I’d love to hear your thoughts on all this in a time of (deserved) optimism for the future of science—in my inbox or, if schedules permit, at JPM. Always feel free to email me, and if you have the Substack app, we can start some threads there as well.
See you in your inboxes again on Tuesday, and some of you in person in San Francisco tomorrow.