Image-based sexual abuse removal tools are vulnerable to generative AI attacks, research reveals

A team of researchers from the Department of Information Security at Royal Holloway, University of London has highlighted major privacy risks in technologies designed to help people permanently remove image-based sexual abuse (IBSA) material, such as non-consensual intimate images, from the Internet.
