Microsoft announced it has partnered with StopNCII to help remove non-consensual intimate images – including deepfakes – from its Bing search engine.
When a victim opens a “case” with StopNCII, the tool creates a digital fingerprint, also called a “hash,” of the intimate image or video directly on that person’s device, without requiring the file to be uploaded. The hash is then sent to participating industry partners, who can look for matches to the original and remove the content from their platforms if it violates their content policies. This process also applies to AI-generated deepfakes of a real person.
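To make the idea concrete, here is a minimal sketch of on-device fingerprinting. It uses a plain SHA-256 digest purely for illustration; StopNCII's actual matching relies on purpose-built hashing designed so that re-encoded or lightly edited copies still match, and the file path and function name below are hypothetical.

```python
import hashlib
from pathlib import Path

def fingerprint_file(path: str) -> str:
    """Compute a fingerprint of a local file without uploading it anywhere.

    Illustrative only: a real system like StopNCII uses specialized
    (often perceptual) hashing rather than a simple cryptographic digest.
    """
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        # Read in chunks so large videos don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Only this short hex string would ever leave the device; participating
# platforms compare it against hashes of content uploaded to their services.
print(fingerprint_file("example.jpg"))  # hypothetical local file
```

The key point the sketch illustrates is that the sensitive image itself never leaves the victim's device; only the fingerprint is shared with participating platforms.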
Several other tech companies have agreed to work with StopNCII to remove intimate images shared without permission. Meta helped build the tool and uses it on its Facebook, Instagram, and Threads platforms; other services that have partnered with the effort include TikTok, Bumble, Reddit, Snap, Niantic, OnlyFans, PornHub, Playhouse, and Redgifs.
Oddly, Google is missing from that list. The tech giant has its own tools for reporting non-consensual images, including AI-generated deepfakes. But by not participating in one of the few centralized efforts for cleaning up revenge porn and other private images, it leaves victims with the additional burden of taking a piecemeal approach to reclaiming their privacy.
In addition to efforts like StopNCII, the US government has taken some steps this year to specifically address the harm done by the deepfake side of non-consensual images. The US Copyright Office called for new legislation on the topic, and a group of senators moved to protect victims with the NO FAKES Act, introduced in July.
If you think you have been a victim of non-consensual intimate image-sharing, you can open a case with StopNCII here and with Google here; if you are under 18, you can file a report with NCMEC here.