Cybersecurity researchers have detailed a now-patched security flaw affecting the Ollama open-source artificial intelligence (AI) infrastructure platform that could be exploited to achieve remote code execution.
Tracked as CVE-2024-37032, the vulnerability has been codenamed Probllama by cloud security firm Wiz. Following responsible disclosure on May 5, 2024, the issue was addressed in version 0.1.34 released on May 7, 2024.
Ollama is a service for packaging, deploying, and running large language models (LLMs) locally on Windows, Linux, and macOS devices.
At its core, the problem relates to a case of insufficient input validation that results in a path traversal flaw an attacker could exploit to overwrite arbitrary files on the server and ultimately lead to remote code execution.
The shortcoming requires the threat actor to send specially crafted HTTP requests to the Ollama API server for successful exploitation.
It specifically takes advantage of the API endpoint "/api/pull" – which is used to download a model from the official registry or from a private repository – to supply a malicious model manifest file that contains a path traversal payload in the digest field.
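For illustration only, the snippet below is a simplified sketch of the OCI-style manifest an Ollama server pulls, with a hypothetical traversal string standing in for the digest value; the field names and payload are assumptions for explanatory purposes and are not the researchers' actual proof-of-concept:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "layers": [
    {
      "mediaType": "application/vnd.ollama.image.model",
      "digest": "../../../../../../etc/ld.so.preload",
      "size": 64
    }
  ]
}
```

Because the digest value is used when building the on-disk path for the downloaded blob, an unsanitized value like the one above would let the server write attacker-controlled content outside its models directory.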
This issue could be abused not only to corrupt arbitrary files on the system, but also to obtain remote code execution by overwriting a configuration file ("etc/ld.so.preload") associated with the dynamic linker ("ld.so") to include a rogue shared library and launch it every time prior to executing any program.
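To make the ld.so.preload mechanism concrete, here is a minimal, benign sketch of a shared library whose constructor runs before main() in every dynamically linked program once the library's path is listed in /etc/ld.so.preload; the file names and marker path are placeholders:

```c
/* example_preload.c - illustrative only.
 * Build (assumption): gcc -shared -fPIC -o example_preload.so example_preload.c
 * Listing the resulting .so path in /etc/ld.so.preload makes the dynamic
 * linker map it into every subsequently started program and run this
 * constructor before main(). An attacker abuses the same mechanism with a
 * malicious library; this sketch only drops a harmless marker file. */
#include <stdio.h>

__attribute__((constructor))
static void preload_entry(void) {
    FILE *f = fopen("/tmp/preload_marker", "w");  /* placeholder action */
    if (f) {
        fputs("loaded via /etc/ld.so.preload\n", f);
        fclose(f);
    }
}
```

In the attack described above, both steps – planting the library and overwriting the preload file – can be achieved through the arbitrary file write, so code execution follows as soon as any new process starts on the server.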
While the risk of remote code execution is reduced to a great extent in default Linux installations owing to the fact that the API server binds to localhost, this is not the case with Docker deployments, where the API server is publicly exposed.
"This issue is extremely severe in Docker installations, as the server runs with 'root' privileges and listens on '0.0.0.0' by default – which enables remote exploitation of this vulnerability," security researcher Sagi Tzadik said.
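The difference in exposure comes down to how the API server is bound. As a rough sketch (defaults may vary between versions), a native Linux install listens only on the loopback interface unless reconfigured, while the official Docker image listens on all interfaces and is typically published to the network:

```sh
# Native install: the API is reachable only from the local machine
ollama serve                                   # binds to 127.0.0.1:11434 by default

# Exposing it deliberately (or running the Docker image, which listens on
# 0.0.0.0 inside the container) makes the API network-reachable
OLLAMA_HOST=0.0.0.0:11434 ollama serve
docker run -d -p 11434:11434 ollama/ollama     # commonly used run command, publishes the port
```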
Compounding matters further is the inherent lack of authentication associated with Ollama, allowing an attacker to exploit a publicly accessible server to steal or tamper with AI models and compromise self-hosted AI inference servers.
This also requires that such services are secured using middleware like reverse proxies with authentication. Wiz said it identified more than 1,000 exposed Ollama instances hosting numerous AI models without any protection.
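As a hedged sketch of that guidance, a reverse proxy such as nginx can be placed in front of a loopback-bound Ollama instance so that requests must authenticate before reaching the API; the hostname, certificate paths, and credential file below are placeholder assumptions:

```nginx
server {
    listen 443 ssl;
    server_name ollama.example.internal;             # placeholder hostname

    ssl_certificate     /etc/nginx/tls/cert.pem;     # placeholder paths
    ssl_certificate_key /etc/nginx/tls/key.pem;

    location / {
        auth_basic           "Ollama API";            # HTTP basic auth gate
        auth_basic_user_file /etc/nginx/.htpasswd;    # created with htpasswd
        proxy_pass           http://127.0.0.1:11434;  # default Ollama port assumed
    }
}
```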
"CVE-2024-37032 is an easy-to-exploit remote code execution that affects modern AI infrastructure," Tzadik said. "Despite the codebase being relatively new and written in modern programming languages, classic vulnerabilities such as path traversal remain an issue."
The development comes as AI security company Protect AI warned of over 60 security defects affecting various open-source AI/ML tools, including critical issues that could lead to information disclosure, access to restricted resources, privilege escalation, and complete system takeover.
The most severe of these vulnerabilities is CVE-2024-22476 (CVSS score: 10.0), an improper input validation flaw in Intel Neural Compressor software that could allow attackers to download arbitrary files from the host system. It was addressed in version 2.5.0.