“The open-source model is likely to be more appealing to many chief information officers, hospital administrators, and physicians since there’s something fundamentally different about data leaving the hospital for another entity, even a trusted one,” said the study’s lead author, Thomas Buckley, a doctoral student in the new AI in Medicine track in the HMS Department of Biomedical Informatics.
Second, medical and IT professionals can tweak open-source models to address unique clinical and research needs, while closed-source tools are generally more difficult to tailor.
“This is key,” said Buckley. “You can use local data to fine-tune these models, either in basic ways or sophisticated ways, so that they’re adapted for the needs of your own physicians, researchers, and patients.”
Third, closed-source AI developers such as OpenAI and Google host their own models and provide traditional customer support, while open-source models place the responsibility for model setup and maintenance on the users. And at least so far, closed-source models have proven easier to integrate with electronic health records and hospital IT infrastructure.
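To make the fine-tuning point Buckley describes concrete, the sketch below shows one common way a hospital IT team could adapt an open-source Llama-family model on de-identified local notes using small LoRA adapters, so the data never leaves the institution's own servers. It assumes the Hugging Face transformers, peft, and datasets libraries; the model name, file path, and hyperparameters are illustrative assumptions, not details from the study.

```python
# Minimal sketch: adapting an open-source model on local, de-identified notes
# with LoRA adapters. Names and hyperparameters are illustrative only.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

BASE_MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"  # illustrative checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

# Attach small trainable LoRA adapters; the base weights stay frozen.
lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# De-identified local notes, one JSON line per example: {"text": "..."}  (hypothetical file).
dataset = load_dataset("json", data_files="local_deidentified_notes.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama-local-ft",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama-local-ft/adapter")  # only the small adapter is written out
```

Because only the adapter weights are trained and saved, this kind of local adaptation can run on a hospital's own hardware without sending patient data to an outside vendor.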
Open-source AI versus closed-source AI: A scorecard for solving challenging clinical cases
Both open-source and closed-source AI algorithms are trained on immense datasets that include medical textbooks, peer-reviewed research, clinical-decision support tools, and anonymized patient data, such as case studies, test results, scans, and confirmed diagnoses. By scrutinizing these mountains of material at hyperspeed, the algorithms learn patterns. For example, what do cancerous and benign tumors look like on a pathology slide? What are the earliest telltale signs of heart failure? How do you distinguish between a normal and an inflamed colon on a CT scan? When presented with a new clinical scenario, AI models compare the incoming information to content they’ve assimilated during training and propose possible diagnoses.
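As a concrete illustration of that last step, the sketch below shows how a locally hosted open-source model might be prompted with a new clinical vignette and asked for a ranked differential diagnosis. The model name, case description, and prompt wording are invented for illustration and are not taken from the study.

```python
# Minimal sketch: asking a locally hosted open-source model for a ranked
# differential diagnosis on an invented clinical vignette.
from transformers import pipeline

generator = pipeline("text-generation",
                     model="meta-llama/Meta-Llama-3-8B-Instruct",  # illustrative checkpoint
                     device_map="auto")

prompt = (
    "Clinical case: A 54-year-old woman presents with two weeks of fever, "
    "night sweats, a new cardiac murmur, and splinter hemorrhages under the fingernails.\n"
    "Question: Provide a ranked differential diagnosis, most likely first.\n"
    "Answer:"
)

out = generator(prompt, max_new_tokens=300, do_sample=False)
print(out[0]["generated_text"])
```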
In their analysis, the researchers tested Llama on 70 challenging clinical NEJM cases previously used to assess GPT-4’s performance and described in an earlier study led by Adam Rodman, HMS assistant professor of medicine at Beth Israel Deaconess and co-author on the new research. In the new study, the researchers added 22 new cases published after the end of Llama’s training period to guard against the chance that Llama may have inadvertently encountered some of the 70 published cases during its basic training.
The open-source model more than held its own: Llama made a correct diagnosis in 70 percent of cases, compared with 64 percent for GPT-4. It also ranked the correct choice as its first suggestion 41 percent of the time, compared with 37 percent for GPT-4. For the subset of 22 newer cases, the open-source model scored even higher, making the right call 73 percent of the time and identifying the final diagnosis as its top suggestion 45 percent of the time.
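For readers curious how such figures are tallied, the sketch below computes the two headline metrics from a set of case records: how often the confirmed diagnosis appears anywhere in the model's differential, and how often it is the model's top suggestion. The sample records are invented, and in the actual study matching free-text answers to the final diagnosis requires physician adjudication rather than the exact string comparison used here.

```python
# Back-of-the-envelope sketch of the two metrics reported above.
def score(cases):
    n = len(cases)
    # Correct diagnosis appears anywhere in the model's differential.
    any_hit = sum(1 for c in cases
                  if c["final_diagnosis"] in c["model_differential"])
    # Correct diagnosis is the model's first (top-ranked) suggestion.
    top_hit = sum(1 for c in cases
                  if c["model_differential"]
                  and c["model_differential"][0] == c["final_diagnosis"])
    return any_hit / n, top_hit / n

# Invented example records, for illustration only.
cases = [
    {"final_diagnosis": "infective endocarditis",
     "model_differential": ["infective endocarditis", "lymphoma", "tuberculosis"]},
    {"final_diagnosis": "giant cell arteritis",
     "model_differential": ["polymyalgia rheumatica", "giant cell arteritis"]},
]

correct_rate, top1_rate = score(cases)
print(f"correct diagnosis included: {correct_rate:.0%}, top suggestion: {top1_rate:.0%}")
```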
“As a physician, I’ve seen much of the focus on powerful large language models center around proprietary models that we can’t run locally,” said Rodman. “Our study suggests that open-source models might be just as powerful, giving physicians and health systems much more control over how these technologies are used.”
Each year, some 795,000 patients in the United States die or suffer permanent disability due to diagnostic error, according to a 2023 report.
Beyond the immediate harm to patients, diagnostic errors and delays can place a serious financial burden on the health care system. Inaccurate or late diagnoses may lead to unnecessary tests, inappropriate treatment, and, in some cases, serious complications that become harder — and more expensive — to manage over time.
“Used wisely and incorporated responsibly in current health infrastructure, AI tools could be invaluable copilots for busy clinicians and serve as trusted diagnostic aides to enhance both the accuracy and speed of diagnosis,” Manrai said. “But it remains crucial that physicians help drive these efforts to make sure AI works for them.”