A team of researchers from the Department of Information Security at Royal Holloway, University of London, has highlighted major privacy risks in technologies designed to help people permanently remove image-based sexual abuse (IBSA) material, such as non-consensual intimate images, from the internet.
These findings, published in IEEE Security & Privacy, reveal how techniques currently used to identify and remove abusive content can be attacked with generative AI, potentially putting vulnerable users at risk.
The research team focused on “perceptual hashing,” a method that creates “digital fingerprints” of images so that harmful content can be detected without storing or distributing the original files. Most online platforms (especially social media sites) maintain lists of hashes of known abusive images, allowing re-uploaded copies to be detected and blocked.
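To illustrate the idea, here is a minimal sketch using the open-source Python library imagehash. The file names and the matching threshold are illustrative assumptions only, not details of any particular platform’s system:

# Minimal sketch of perceptual hash matching with the "imagehash"
# library (pip install imagehash pillow). File names and the distance
# threshold are illustrative assumptions.
from PIL import Image
import imagehash

# Compute a 64-bit perceptual hash (DCT-based pHash) for each image.
known_abusive_hash = imagehash.phash(Image.open("reported_image.png"))
uploaded_hash = imagehash.phash(Image.open("new_upload.png"))

# Hashes of visually similar images differ in only a few bits, so
# matching uses Hamming distance rather than exact equality.
distance = known_abusive_hash - uploaded_hash  # imagehash overloads "-"
if distance <= 10:  # example threshold; real systems tune this value
    print("Match: block upload and flag for review")
else:
    print("No match")

Because only the short hash is stored and compared, platforms can screen uploads against known abusive material without retaining copies of the images themselves.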
Additionally, tools like “Take It Down,” run by the National Center for Missing & Exploited Children (NCMEC), allow users to self-report IBSA in one place. To do this, users upload perceptual hashes of their images, which are then shared with partner platforms such as Facebook and OnlyFans.
However, the newly published paper demonstrates that perceptual hashes are not as irreversible as expected, undermining the privacy guarantees that IBSA removal tools claim on their FAQ pages.
Led by Ph.D. researcher Sophie Hawkes of the Department of Information Security, the team examined four widely used perceptual hash functions, including Facebook’s PDQ (used by “Take It Down”) and Apple’s NeuralHash, and found that all of them are vulnerable to inversion attacks.
Specifically, the researchers showed that an adversary using generative AI can approximately reconstruct the original image from its hash. Hawkes explains: “Our results challenge the assumption that perceptual hashes alone are sufficient to ensure image privacy. Rather, perceptual hashes should be treated as securely as the original images themselves.”
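The principle behind such an inversion attack can be sketched in a few lines of PyTorch. Everything below is a toy stand-in: the paper targets real hash functions such as PDQ and NeuralHash and uses generative models as image priors, whereas this sketch inverts a random-projection “hash” with a simple smoothness prior, purely to show how gradient search can recover an image that reproduces a target hash:

# Toy illustration of a hash inversion attack. The random-projection
# "hash" and smoothness prior below are stand-ins for the real hash
# functions and generative priors used in the paper.
import torch

torch.manual_seed(0)
N = 32 * 32                      # toy 32x32 grayscale images
W = torch.randn(64, N)           # fixed projection defining a 64-bit toy hash

def toy_hash_bits(x):
    # "Hash": sign pattern of 64 linear projections of the image.
    return (W @ x.flatten() > 0).float()

def soft_hash(x):
    # Differentiable relaxation of the sign() used above.
    return torch.tanh(W @ x.flatten())

# The attacker observes only the 64 hash bits of a secret image.
secret = torch.rand(32, 32)
target_bits = toy_hash_bits(secret) * 2 - 1   # map {0,1} -> {-1,+1}

# Gradient descent on a candidate image until its relaxed hash matches.
x = torch.rand(32, 32, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for step in range(2000):
    opt.zero_grad()
    match_loss = torch.relu(0.5 - target_bits * soft_hash(x)).sum()
    # Total-variation term: a crude stand-in for a generative image prior.
    tv = (x[1:, :] - x[:-1, :]).abs().sum() + (x[:, 1:] - x[:, :-1]).abs().sum()
    loss = match_loss + 0.01 * tv
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(0, 1)

recovered_bits = toy_hash_bits(x.detach()) * 2 - 1
print("hash bits matched:", int((recovered_bits == target_bits).sum()), "/ 64")

In the attacks described in the paper, a generative model plays the role of the prior, steering this kind of search toward realistic images rather than arbitrary pixel patterns.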
This is particularly concerning given the sensitive nature of IBSA content and the vulnerable user groups these tools aim to protect. Co-authors Dr. Maryam Mehrnezhad (Royal Holloway) and Dr. Teresa Almeida (University of Lisbon) say: “The harms of modern technologies can unfold in complex ways. Although the risks of IBSA are not limited to any demographic group, certain groups, such as children, may be at greater risk, including psychological harm and danger to their safety. Therefore, designing secure and safe tools is essential to address these risks.”
The researchers say the current design of services such as “Take It Down” is insufficient and highlight the need for stronger data protection measures, for example, cryptographic protocols such as private set intersection (PSI). PSI would enable secure hash matching without either party exposing its sensitive data, offering a more privacy-preserving way to remove harmful content and protect vulnerable users.
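A minimal sketch of the Diffie-Hellman flavor of PSI is shown below, with both parties simulated in one script. The group size, item strings, and protocol shape are illustrative assumptions; a real deployment would use a standardized, much larger prime-order group (or elliptic curves) and a vetted PSI implementation rather than this toy construction:

# Toy sketch of Diffie-Hellman-style private set intersection (PSI).
# In practice the "user" and "platform" steps run on separate machines
# and only the blinded values travel between them.
import hashlib, math, secrets

P = (1 << 127) - 1          # a Mersenne prime; toy-sized group modulus

def hash_to_group(item: str) -> int:
    # Map an item (e.g., a perceptual hash string) into the group.
    digest = hashlib.sha256(item.encode()).digest()
    return int.from_bytes(digest, "big") % P

def random_exponent() -> int:
    # Secret exponent; gcd(e, P-1) == 1 makes x -> x^e a permutation.
    while True:
        e = secrets.randbelow(P - 3) + 2
        if math.gcd(e, P - 1) == 1:
            return e

# The user's reported hashes vs. a platform's list of known hashes.
user_list = ["hashA", "hashB", "hashC"]
platform_items = {"hashB", "hashC", "hashD"}

a, b = random_exponent(), random_exponent()   # each side's secret

# Round 1: user sends H(x)^a; the platform never sees raw values.
user_blinded = [pow(hash_to_group(x), a, P) for x in user_list]
# Round 2: platform re-blinds the user's list (preserving order) and
# sends its own items blinded as H(y)^b.
user_double = [pow(v, b, P) for v in user_blinded]
platform_blinded = {pow(hash_to_group(y), b, P) for y in platform_items}
# Round 3: user raises the platform's values to a; equal items collide.
platform_double = {pow(v, a, P) for v in platform_blinded}

matched = [x for x, v in zip(user_list, user_double) if v in platform_double]
print("items in common:", matched)   # ['hashB', 'hashC']

The key property is that each side only ever handles blinded values such as H(x)^a, so matches can be found while the underlying hashes, and hence anything reconstructable from them by inversion, stay hidden from the other party.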
In the meantime, the researchers advise users to carefully weigh the risks of perceptual hashing and make an informed decision before submitting a report. In particular, users should consider both the risk that their images will be posted online and the risk that the images could be reconstructed from the reported hash values.
While little additional privacy is lost by reporting hashes of images that have already been shared online, proactively reporting images that have not yet been published is more problematic.
Following responsible disclosure procedures, the researchers alerted NCMEC to their findings and urged service providers to prioritize implementing more secure solutions to protect user privacy.
Additionally, the researchers advocate for greater transparency, so that users can make an informed decision about the trade-off between their privacy and security when deciding whether to use IBSA reporting tools based on perceptual hashing.
Co-author Dr. Christian Weinert of the Department of Information Security concludes: “Future work in this area will require collaborative efforts involving technology designers, policy makers, law enforcement, educators, and, most importantly, victims and survivors of IBSA to create better solutions for everyone.”
More information:
Sophie Hawkes et al., “Perceptual Hash Inversion Attacks on Image-Based Sexual Abuse Removal Tools,” IEEE Security & Privacy (2024). DOI: 10.1109/MSEC.2024.3485497