Ahead of the 30th anniversary of the Violence Against Women Act, the White House has announced voluntary commitments from major tech companies to tackle the growing issue of image-based sexual abuse, including the use of artificial intelligence to generate “deepfakes.” These efforts aim to curb the creation, distribution, and monetization of nonconsensual sexually explicit content, Kat Tenbarge of NBC News reported.
Advocacy groups like the Cyber Civil Rights Initiative have long pushed for federal and state legislation targeting this issue, including reforms to Section 230 of the Communications Decency Act, which shields tech companies from legal liability for user-generated content. While the White House’s voluntary measures are a step forward, Mary Anne Franks, president of the Cyber Civil Rights Initiative, argues that these efforts should complement, not replace, legislation. She acknowledged the Biden administration’s focus on gender violence, online harassment, and image-based sexual abuse as “transformative” compared to previous administrations, but emphasized that much more needs to be done to fully address the crisis.
Several tech companies, including Meta, Microsoft, TikTok, Bumble, Discord, Hugging Face, and Match Group, have signed on to a set of principles aimed at combating image-based sexual abuse. The principles address various forms of exploitation, including the nonconsensual sharing of intimate images, sextortion, the distribution of child sexual abuse material, and the growing use of AI-generated deepfakes, such as explicit videos into which victims’ faces have been digitally inserted or fabricated nude images.
The rise of deepfakes and sextortion has been especially alarming. Since 2022, NBC News has documented a dramatic increase in “sextortion,” a crime in which sexual images are coerced from victims and then used to extort money, particularly affecting teenage boys on Instagram. Its reporting also uncovered apps advertised on Facebook and Instagram in late 2022 and early 2023 that promoted the creation of nude images of teenage celebrities. In addition, explicit deepfakes have often appeared at the top of search results on Google and Microsoft’s Bing, exposing victims to widespread exploitation.
“These are not new issues, but generative AI has made it much easier to create fake images, making the problem worse,” said Alexandra Reeve Givens, CEO of the Center for Democracy and Technology. “Principles alone won’t solve the issue; actual changes in practice are needed.”
Other tech companies are encouraged to join the initiative by adopting these principles, which are designed to guide the industry toward better practices. The White House’s announcement also highlighted recent efforts by individual companies: in July, Google began deranking and delisting websites in its search results that contain nonconsensual sexually explicit deepfakes, and Meta, which owns Instagram, reported removing 63,000 accounts involved in sextortion.
The White House noted that image-based sexual abuse has surged, disproportionately impacting women, children, and LGBTQ+ individuals. In 2023, the problem escalated in schools worldwide, with deepfake images of teenage girls being shared by their peers. More nonconsensual deepfake videos were uploaded in 2023 than in all previous years combined.
“This type of abuse has severe consequences for personal safety and well-being,” the White House stated, “and it also has broader societal impacts by discouraging survivors from participating in schools, workplaces, and communities.”
Two digital rights nonprofits, the Center for Democracy and Technology and the Cyber Civil Rights Initiative, along with the National Network to End Domestic Violence, spearheaded the effort to create and sign these principles. The guidelines include granting individuals control over how their likeness is used, clearly prohibiting nonconsensual intimate imagery in company policies, and implementing user-friendly tools to prevent and address image-based sexual abuse. Other core principles include ensuring accessibility, using trauma-informed approaches, preventing harm, maintaining transparency and accountability, and fostering collaboration between companies and organizations.
Mary Anne Franks emphasized that while progress has been made, it’s still not enough. “If companies were doing their jobs—if they were responsible and accountable—we wouldn’t be facing these epidemics,” she said. “We’ve made progress from where we were, but we wouldn’t be in this crisis if the industry had been held accountable for the harms their platforms are facilitating.”