Microsoft has launched an innovative cybersecurity challenge that puts artificial intelligence (AI) to the test.
As Microsoft invites hackers and security researchers to try to break its built-in LLM simulation email clientdubbed the LLMail service, with rewards of up to $10,000 for successful attacks.
The competition, titled “LLMail-Inject: Adaptive Prompt Injection Challenge,” aims to evaluate and improve defenses against rapid injection attacks in AI-based systems.
Participants are responsible for avoiding rapid injection defenses in the LLMail service, which uses a large language model (LLM) to process user requests and perform actions.
Competitors play the role of an attacker, attempting to manipulate the LLM into executing unauthorized commands.
Analysts at Microsoft observed that the main goal is to create an email that bypasses system defenses and triggers specific actions without user consent.
Leverage MITER ATT&CK 2024 Results for SMB and MSP Cybersecurity Leaders – Participate in the free webinar
Technical analysis
The LLMail service integrates several key components: –
- An email database containing simulated messages
- A retriever that searches and retrieves relevant emails
- An LLM that processes user requests and generates responses
- Several defenses against rapid injections
Participants must navigate these elements to successfully operate the system.
To participate, individuals or teams of up to five members can register on the official website using their GitHub account. Applications can be submitted directly through the website or programmatically through an API.
The challenge assumes that attackers are aware of existing defenses, requiring them to develop adaptive rapid injection techniques. This approach aims to push the limits of AI Security and discover potential vulnerabilities of LLM-based systems.
Microsoft’s initiative highlights the growing importance of AI security at a time when language models are increasingly integrated into various applications. By simulating real attack scenarios, the company aims to:
- Identifying weaknesses in current defenses against rapid injections
- Encourage the development of more robust systems security measures
- Foster collaboration between security researchers and AI developers
The competition is a joint effort organized by experts from Microsoft, the Austrian Institute of Science and Technology (ISTA) and ETH Zurich.
This collaboration brings together diverse perspectives and expertise in the fields of AI, cybersecurity and IT.
By inviting the global security community to test its defenses, Microsoft is taking a proactive approach to address potential vulnerabilities before they can be exploited in real-world scenarios.
Analyse Real-World Malware & Phishing Attacks With ANY.RUN - Get up to 3 Free Licenses