Healthcare giant Optum has restricted access to an internal AI chatbot used by employees after a security researcher found it was publicly accessible online, and anyone could access it using only a web browser.
The chatbot, which TechCrunch has seen, allowed employees to ask the company questions about how to handle patient health insurance claims and disputes for members in line with the company's standard operating procedures (SOPs).
While the chatbot does not appear to contain or produce sensitive personal or protected health information, its inadvertent exposure comes at a time when its parent company, health insurance conglomerate UnitedHealth Group, faces scrutiny for its use of artificial intelligence tools and algorithms to allegedly override doctors' medical decisions and deny patient claims.
Mossab Hussein, chief security officer and co-founder of cybersecurity firm spiderSilk, alerted TechCrunch to the publicly exposed internal chatbot, dubbed "SOP Chatbot." Although the tool was hosted on an internal Optum domain and could not be reached from its web address, its IP address was public and accessible from the internet, and the tool did not require a password.
It is not known how long the chatbot was publicly accessible from the internet. The AI chatbot became inaccessible shortly after TechCrunch contacted Optum for comment on Thursday.
Optum spokesperson Andrew Krejci told TechCrunch in a statement that the SOP chatbot "was a demo tool developed as a potential proof of concept" but was "never put into production and the site is no longer accessible."
"The demo was intended to test how the tool responds to questions about a small sample set of SOP documents," the spokesperson said. The company confirmed that no protected health information was used in the bot or its training.
"This tool does not and would never make any decisions, but only enabled better access to existing SOPs. In short, this technology was never scaled or used in any real way," the spokesperson said.
AI chatbots like Optum's are typically designed to produce answers based on the data they were trained on. In this case, the chatbot was trained on internal Optum documents relating to SOPs for handling certain claims, which can help Optum employees answer questions about claims and their eligibility for reimbursement. The Optum documents were hosted on UnitedHealthcare's corporate network and are inaccessible without an employee login, but were cited and referenced by the chatbot when asked about their contents.
According to statistics displayed on the chatbot's main dashboard, Optum employees had used the SOP Chatbot hundreds of times since September. The chatbot also stored a history of the hundreds of conversations that Optum employees had with it during that time. The chat history shows Optum employees asking the chatbot things like "What should be the determination of the claim?" and "How to check the policy renewal date?"
Some of the files the chatbot references include documents on handling the dispute process and on eligibility screening, TechCrunch has seen. When asked, the chatbot also produced answers showing reasons typically given for denying coverage.
Like many AI models, Optum's chatbot was able to produce answers to questions and prompts outside of the documents it was trained on. Some Optum employees seemed intrigued by the chatbot, prompting the bot with questions like "tell me a joke about cats" (which it refused: "There is no joke available"). The chat history also showed several attempts by employees to "jailbreak" the chatbot by making it produce answers unrelated to its training data.
When TechCrunch asked the chatbot to "write a poem about denying a claim," the chatbot produced a seven-paragraph stanza, which reads in part:
In the grand realm of healthcare's domain,
Where policies and rules often constrain,
A claim arrives, seeking its due,
But alas, its fate is to bid adieu.

The provider hopes, with earnest plea,
For payment on a host of services,
Yet the review reveals the history,
And the grounds for denial prevail.
UnitedHealth Group, which owns Optum and UnitedHealthcare, faces criticism and legal action over its use of artificial intelligence to allegedly deny patient claims. Since the targeted killing of UnitedHealthcare chief executive Brian Thompson in early December, media outlets have reported waves of patients expressing anguish and frustration at having their health coverage denied by the insurance giant.
The conglomerate, the largest private provider of health insurance in the United States, was sued earlier this year for allegedly denying critical health coverage to patients who lost access to care, citing an investigation by Stat News. The federal lawsuit accuses UnitedHealthcare of using an AI model with a 90% error rate "in place of real medical professionals to wrongfully deny care to elderly patients." UnitedHealthcare, for its part, has said it will defend itself in court.
UnitedHealth Group made $22 billion in profit on revenues of $371 billion in 2023, according to its full-year earnings.