In essence, they said: “AI red teaming strives to push beyond model-level safety benchmarks by emulating real-world attacks against end-to-end systems. However, there are many open questions about how red teaming operations should be conducted, and a healthy dose of skepticism about the efficacy of current AI red teaming efforts.”
The paper noted that, when it was formed in 2018, the Microsoft AI Red Team (AIRT) focused mainly on identifying traditional security vulnerabilities and evasion attacks against classical ML models. “Since then,” the authors said, “both the scope and scale of AI red teaming at Microsoft have expanded considerably in response to two major trends.”
The first, they said, is that AI has become more sophisticated, and the second is that Microsoft’s recent investments in AI have led to the development of many more products that require red teaming. “This increase in volume and the expanded scope of AI red teaming have rendered fully manual testing impractical, forcing us to scale up our operations with the help of automation,” the authors wrote.