Updated November 5, 2024: This article, originally published on November 4, now includes research findings on the use of AI deepfakes.
An AI agent has discovered a previously unknown, exploitable zero-day memory-safety vulnerability in widely used real-world software. It is the first public example of such a discovery, according to Project Zero and Google DeepMind, the teams behind Big Sleep, the large language model-assisted vulnerability research agent that spotted the flaw.
If you don’t know what Project Zero is and aren’t impressed by what it has accomplished in the security space, then you simply haven’t been paying attention in recent years. These elite hackers and security researchers work tirelessly to uncover zero-day vulnerabilities in Google products and beyond. The same goes if you don’t know about DeepMind, Google’s AI research lab. So when these two tech behemoths joined forces to create Big Sleep, they were bound to make waves.
Google uses a large language model to detect a zero-day vulnerability in real-world code
In a November 1 announcement, Google’s Project Zero blog confirmed that Naptime, the team’s framework for large language model-assisted vulnerability research, has evolved into Big Sleep. This collaborative effort, pairing some of the best ethical hackers from Project Zero with the best AI researchers from Google DeepMind, has produced a large language model-powered agent that can discover real security vulnerabilities in widely used code. In this world-first case, the Big Sleep team says it found “an exploitable stack buffer underflow in SQLite, a widely used open source database engine.”
The zero-day vulnerability was reported to the SQLite developers in October, and they fixed it the same day. “We found this issue before it appeared in an official release,” Google’s Big Sleep team said, “so SQLite users were not affected.”
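To make the bug class concrete: a stack buffer underflow is a read or write that lands before the start of a stack-allocated buffer, typically because an index computation goes negative. The contrived C sketch below is only a hedged illustration of that class, not the actual SQLite flaw Big Sleep found; the function name and logic are invented for demonstration.

```c
#include <stdio.h>
#include <string.h>

/* Contrived sketch of a stack buffer underflow; this is NOT the SQLite
 * bug Big Sleep found, only a minimal example of the bug class: an
 * access that lands *before* the start of a stack buffer. */
static void trim_trailing_spaces(char *s)
{
    int i = (int)strlen(s) - 1;
    /* BUG: missing `i >= 0` guard; for an all-space string the loop
     * walks past index 0 and touches s[-1], below the buffer. */
    while (s[i] == ' ')
        s[i--] = '\0';
}

int main(void)
{
    char buf[8];
    strcpy(buf, "   ");   /* an all-space string triggers the underflow */
    trim_trailing_spaces(buf);
    printf("trimmed: \"%s\"\n", buf);
    return 0;
}
```

Bugs of this shape can easily slip past ordinary test suites, because they only fire on edge-case inputs such as the all-space string above.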
AI could be the future of fuzzing, says Google Big Sleep team
Although you may never have heard the term “fuzzing,” it has been a staple of security research for decades. Fuzzing involves feeding random or malformed data into a program to trigger errors in its code. Although fuzzing is widely accepted as an essential tool for anyone hunting for vulnerabilities in code, hackers will readily admit that it can’t find everything. “We need an approach that can help defenders find bugs that are difficult (or even impossible) to find through fuzzing,” the Big Sleep team said, adding that it hopes AI can fill that gap by finding vulnerabilities in software before it is even released, leaving attackers little room to maneuver.
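For readers who have never seen one, a fuzzing harness can be very small. The C sketch below uses the standard libFuzzer entry point, LLVMFuzzerTestOneInput (built with `clang -fsanitize=fuzzer,address`); parse_record is a hypothetical stand-in for the code under test, not anything from SQLite or Google’s tooling.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical target standing in for the code under test. */
static void parse_record(const char *s)
{
    char field[32];
    const char *sep = strchr(s, '=');
    if (sep && (size_t)(sep - s) < sizeof(field)) {
        memcpy(field, s, (size_t)(sep - s));
        field[sep - s] = '\0';
    }
}

/* libFuzzer calls this entry point over and over with mutated inputs;
 * AddressSanitizer flags any memory error the target triggers. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    char buf[256];
    if (size == 0 || size >= sizeof(buf))
        return 0;
    /* NUL-terminate the raw bytes so the target can treat them as a
     * C string, then hand them to the code under test. */
    memcpy(buf, data, size);
    buf[size] = '\0';
    parse_record(buf);
    return 0;
}
```

The fuzzer mutates inputs based on coverage feedback and calls the entry point millions of times, turning any out-of-bounds access into an immediate, reproducible crash report. Big Sleep’s pitch is to catch the bugs this brute-force approach still misses.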
“Finding a vulnerability in a widely used and well-fuzzed open source project is an exciting result,” the Google Big Sleep team said, while admitting that the results are currently “highly experimental.” For now, the Big Sleep agent is considered to be about as effective as a target-specific fuzzer. The near future, however, looks promising. “This effort will provide a significant benefit to defenders,” Google’s Big Sleep team said, “with the potential to not only find crashing test cases, but also provide high-quality root-cause analysis; triaging and resolving issues could prove to be much cheaper and more efficient in the future.”
The downside of AI is visible in deepfake security threats
While Google’s Big Sleep news is refreshing and important, as is a new RSA report examining how AI could help eliminate passwords in 2025, the flip side of AI safety must always be considered as well. One of those downsides is the use of deepfakes. I have already covered how Google support deepfakes were used in an attack on a Gmail user, a report that went viral for all the right reasons. Today, a Forbes.com reader contacted me to flag research evaluating how AI technology can be used to influence public opinion. Again, I wrote about this recently when the FBI issued a warning that a 2024 election voting video was actually a fake distributed by Russian actors. The latest VPNRanks research is worth reading in its entirety, but here are some hand-picked statistics that certainly get the gray cells working:
- 50% of respondents have encountered deepfake videos online several times.
- 37.1% view deepfakes as an extremely serious reputational threat, especially when it comes to creating fake videos of public figures or ordinary people.
- Concerns about the manipulation of public opinion by deepfakes are high, with 74.3% of people extremely concerned about possible misuse in political or social contexts.
- 65.7% believe that a deepfake published during an election campaign would likely influence voters’ opinions.
- 41.4% believe it is extremely important that social media platforms immediately remove non-consensual deepfake content once reported.
- Looking ahead to 2025, global identity fraud attempts linked to deepfakes are expected to reach 50,000, and more than 80% of global elections could be affected by deepfake interference, threatening the integrity of democracy.