Each day, millions of people use powerful generative AI tools to supercharge their creative expression. In many ways, AI will create exciting opportunities for all of us to bring new ideas to life. But as these new tools come to market from Microsoft and across the tech sector, we must take new steps to ensure these new technologies are resistant to abuse.
The history of technology has long demonstrated that creativity is not confined to people with good intentions. Tools can unfortunately also become weapons, and this pattern is repeating itself. We are currently witnessing a rapid expansion in the abuse of these new AI tools by bad actors, including through AI-generated deepfakes based on video, audio, and images. These trends pose new threats to elections, financial fraud, harassment through non-consensual pornography, and the next generation of cyberbullying.
We have to act urgently to combat all these problems.
Encouragingly, there is a lot we can learn from our experience as an industry in adjacent spaces – in advancing cybersecurity, promoting election security, combating violent extremist content, and protecting children. As a company, we are committed to a robust and comprehensive approach that protects people and our communities, based on six areas of focus:
1. A strong safety architecture. We are committed to a comprehensive technical approach grounded in safety by design. Depending on the scenario, a strong safety architecture needs to be applied at the platform, model, and application levels. It includes aspects such as ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system. It needs to be grounded in strong and broad data analysis. Microsoft has established a strong architecture and shared our learnings via our Responsible AI and Digital Safety Standards, but it is clear that we will need to continue to innovate in these spaces as technology evolves.
2. Durable media provenance and watermarking. This is essential to combat deepfakes in video, images, or audio. At our Build 2023 conference last year, we announced media provenance capabilities that use cryptographic methods to mark and sign AI-generated content with metadata about its source and history. Together with other leading companies, Microsoft has been a leader in the R&D of provenance authentication methods, including as a co-founder of Project Origin and the Coalition for Content Provenance and Authenticity (C2PA). Just last week, Google and Meta took important steps forward in supporting the C2PA, steps that we appreciate and applaud.
We are already using provenance technology in the Microsoft Designer image creation tools in Bing and in Copilot, and we are in the process of extending media provenance to all of our tools that create or manipulate images. We are also actively exploring watermarking and fingerprinting techniques that help strengthen provenance methods. We are committed to ongoing innovation that will help users quickly determine whether an image or video has been AI generated or manipulated.
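The core idea behind cryptographic provenance can be illustrated with a toy sketch: a manifest binds metadata about an asset's source and history to a hash of the asset's bytes, and a signature over the manifest makes any tampering detectable. This is only an illustration of the concept, not the C2PA format itself – real C2PA manifests use X.509 certificates and asymmetric signatures, and the key and field names below are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared key standing in for a signer's private key/certificate.
SECRET_KEY = b"demo-signing-key"

def sign_manifest(asset_bytes: bytes, metadata: dict) -> dict:
    """Bind metadata to a hash of the asset and sign the bundle."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check both the signature and the asset hash it vouches for."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest())

image = b"\x89PNG...stand-in image bytes"
manifest = sign_manifest(image, {"generator": "example-ai-tool", "created": "2024-02-13"})
assert verify_manifest(image, manifest)             # untouched asset verifies
assert not verify_manifest(image + b"x", manifest)  # any edit breaks the binding
```

Because the signature covers the asset hash as well as the metadata, editing either the pixels or the claimed history invalidates the manifest, which is what lets a viewer trust "this content came from tool X with history Y."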
3. Safeguarding our services from abusive content and conduct. We are committed to protecting freedom of expression. But that should not protect individuals who seek to fake a person's voice to defraud a senior citizen of their money. It should not extend to deepfakes that alter the actions or statements of political candidates to deceive the public. Nor should it shield a cyberbully or a distributor of non-consensual pornography. We are committed to identifying and removing deceptive and abusive content like this when it appears on our hosted consumer services such as LinkedIn, our gaming network, and other relevant services.
4. Robust collaboration across industry and with governments and civil society. While each company bears responsibility for its own products and services, experience suggests that we often do our best work when we work together for a safer digital ecosystem. We are committed to working collaboratively with others in the tech sector, including in the generative AI and social media spaces. We are also committed to proactive efforts with civil society groups and to appropriate collaboration with governments.
As we move forward, we will draw on our experience fighting violent extremism under the Christchurch Call, our collaboration with law enforcement through our Digital Crimes Unit, and our efforts to better protect children through the WeProtect Global Alliance and more broadly. We are committed to taking new initiatives across the tech sector and with other stakeholder groups.
5. Modernized legislation to protect people from the abuse of technology. It is already apparent that some of these new threats will require the development of new laws and new law enforcement efforts. We look forward to contributing ideas and supporting new initiatives by governments around the world, so that we can better protect people online while honoring timeless values such as the protection of free expression and personal privacy.
6. Public awareness and education. Finally, a strong defense will require a well-informed public. As we approach the second quarter of the 21st century, most people have learned that you cannot believe everything you read on the internet (or anywhere else). A well-informed combination of curiosity and skepticism is a critical life skill for everyone.
In a similar way, we must help people recognize that you cannot believe every video you see or every audio clip you hear. We need to help people learn how to spot the differences between legitimate and fake content, including with watermarking. This will require new public education tools and programs, including in close collaboration with civil society and leaders from across society.
In the end, none of this will be easy. It will require hard but essential work every day. But with a common commitment to innovation and collaboration, we believe that we can all work together to ensure that technology stays ahead in its ability to protect the public. Perhaps more than ever, this must be our collective goal.