In the annual AI Index Report issued by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), experts detailed trends in artificial intelligence over the past year and how the technology is transforming society. This year, the report features an expanded chapter on science and medicine, developed by a team from RAISE Health, a collaboration between Stanford Medicine and HAI.
Russ Altman, MD, PhD, a professor of bioengineering, of genetics and of biomedical data science, helped lead the development of the science and medicine chapter, which highlighted AI milestones, such as advances in protein and molecule design, support for clinical care, and automated disease detection.
Altman discussed the top takeaways from the year, how trends in AI will shape the future of biomedical science and medicine, and what he sees as the most promising areas of growth.
This report’s section on science and medicine greatly expands on past years. How does this reflect what you’re seeing when it comes to the explosion of AI?
Last year’s chapter did a great job highlighting some key examples of AI in science and medicine, almost like a biopsy of the bigger picture. We have something called a review of systems when we’re seeing a patient: You do a complete workup to make sure you don’t miss anything — assess the skin, the eyes, the heart, the lungs, etc. We think of this year’s chapter as moving from a biopsy, just looking at a little spot here or there, to a review of systems.
Anybody at the medical school knows that AI is everywhere. You might be having lunch with someone who never struck you as an AI researcher, and you find out that they’re leading an effort to build a large language model in their clinical practice. I host a podcast, The Future of Everything, and I see the same trend there. I interview faculty from across the university; the most common thing they say to me right before I press record is, “Make sure you ask me about how AI is revolutionizing my work.” It really is revolutionizing university life and scholarship, and that’s pretty exciting.
The science and medicine chapter touches on a variety of trends, from how AI is impacting clinical care to the ethical considerations surrounding AI use and development. What were your three biggest takeaways?
First is the creation of foundation models. A foundation model is basically a statistical model that describes a very large set of data. About 10 to 15 years ago, we all talked about “big data”: People were collecting huge amounts of data, but we didn’t always know what to do with it, and in many cases, it was too much. Scientists would end up cherry-picking, using only the data that was clean and pointed to tidy conclusions. Foundation models allow us to look at all the cherries, not just the ones that are ripe and perfect and at eye level. They take every piece of data in your big dataset and fold it into a rich statistical model that can make predictions and projections. That’s why so much of science is seeing such rapid advancement: scientists now have a model that essentially allows them to talk to their data, to ask a question and get an answer.
Second, AI research contributed directly to two Nobel Prizes. That’s a real stake in the ground. Yes, there’s hype, and there are questions about whether AI is good or bad for society. For science, it’s good, and I can’t think of a better short-answer validation than “Two Nobel Prizes with AI technology at their core were awarded in the same year.” That’s the headline I would tell my mom or my children: “Yes, this is real.”
Third, our ability to use large language models to improve all parts of clinical care is huge. So many of my clinical colleagues who are in the trenches every day are interested in integrating AI into their day-to-day workflow, like using an LLM to help them write notes, or to listen to and watch a surgery and then produce a quality summary of what happened in the operating room. Large language models can reduce what’s called “pajama time,” the hours that doctors spend after the clinic closes catching up on all their paperwork, which can take a big toll on quality of life.
Where do you see the most potential for AI in the coming year?
Large language models’ ability to deliver messages tailored to different educational levels or to the nuances of different cultural backgrounds is a huge untapped opportunity. Language models can distill information to help patients understand their disease and treatment plan, and AI can suggest effective ways to communicate or offer perspectives that the physician may not have considered.
For instance, someone may be averse to taking pills. You could imagine the chatbot coaching the doctor to say something like, “I hear you. I understand you’re not a pill person. There are five pills that could have been prescribed, but we’re giving you only two, because these are the most important ones.” I’m optimistic that better communication and clarity will lead to an improvement of patients’ understanding of their disease and therefore improvement of the doctor-patient therapeutic alliance.
This story was first reported by Stanford Medicine.