Last year, we announced our Secure AI Framework (SAIF) to help others deploy AI models safely and responsibly. It not only shares our best practices, but also offers a framework for industry, frontline developers, and security professionals to ensure that AI models are secure by design when implemented. To drive adoption of critical AI security measures, we used the SAIF principles to help form the Coalition for Secure AI (CoSAI) with industry partners. Today, we are sharing a new tool that can help others assess their security posture, apply these best practices, and put the SAIF principles into action.
The SAIF Risk Assessment, available to use today on our new website SAIF.google, is a questionnaire-based tool that generates an instant, tailored checklist to guide practitioners in securing their AI systems. We believe this easily accessible tool fills a critical gap in moving the AI ecosystem toward a more secure future.
New SAIF Risk Assessment
The SAIF Risk Assessment helps turn SAIF from a conceptual framework into an actionable checklist for practitioners responsible for securing their AI systems. Practitioners can find the tool in the menu bar of the new SAIF.google homepage.