On Tuesday, the Senate unanimously passed a bill that would allow victims to sue the creators of nonconsensual sexually explicit deepfakes for a minimum of $150,000.
The DEFIANCE Act is now headed to the House, where leadership failed to bring it to the floor last session. But there’s new momentum around the issue, as advocates push for Google and Apple to ban Grok from app stores after the chatbot allowed the creation of nonconsensual intimate imagery.
On December 20, Elon Musk, the CEO of social media platform X, the richest person in the world and a close associate of the Trump administration, announced that the company’s integrated chatbot Grok would be able to generate images from user prompts.
X users quickly realized that they could create explicit images by prompting Grok to “undress” women and girls. Various workarounds have resulted in nonconsensual intimate imagery of real people all over the platform, including child sexual abuse material and fake images of Muslim women with their hijabs removed.
On January 3, the official account of the safety team at X posted a brief statement saying the platform removes illegal content. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” the statement said.
When asked if any changes had been made to the chatbot to prevent the creation of nonconsensual images, xAI’s media team responded to The 19th, which was founded in 2020, with “Legacy Media Lies.” xAI, also founded by Musk, is the developer of Grok. X, the social media platform formerly known as Twitter, did not immediately reply to emailed questions.
On or around January 9, it appears X moved the image-generation capabilities of Grok behind a paywall. The move was largely criticized by tech policy experts and survivor advocates who say the company is now profiting off of abuse. (The Verge also reported the paywall hasn’t prevented free users from generating explicit images.)
“Sexual abuse is not a premium service. It’s a crime,” said Jenna Sherman, campaign director at gender justice advocacy group UltraViolet.
After taking over the company in 2022, Musk dissolved Twitter’s volunteer-run Trust and Safety Council, fueled a harassment campaign against the head of trust and safety and gutted the team responsible for content moderation.
Sen. Dick Durbin, one of the co-sponsors of the Senate version of the DEFIANCE Act, cited the Grok scandal and X’s lackluster response when he raised the legislation for a vote Tuesday morning.
“Even after these terrible, deepfake, harming images are pointed out to Grok and to X, formerly Twitter, they did not respond. They don’t take the images off the internet. They don’t come to the rescue of the people who are victims,” he said.
“That’s why this legislation is critical, because this legislation says if they are guilty of such reckless misconduct that they can be sued for it and held civilly liable for the damages.”
Durbin praised the passage by unanimous consent as a sign that polarized parties can still come together on a bipartisan issue. The bill is co-sponsored by the Illinois Democrat and Sen. Lindsey Graham, a Republican from South Carolina.
It’s not clear whether the DEFIANCE Act could hold X itself civilly liable for the rash of nonconsensual imagery, said Omny Miranda Martone, the founder and CEO of the nonprofit Sexual Violence Prevention Association, but the episode underscores how the problem will continue to grow.
The DEFIANCE Act was introduced in the House last session by Reps. Alexandra Ocasio-Cortez, a Democrat from New York, and Laurel Lee, a Republican from Florida. Ocasio-Cortez has consistently brought attention to the issue and shared her experiences of being a target of deepfake abuse.
It’s no coincidence that women lawmakers are championing the issue, as they are likely to be targeted by this kind of digital sexual violence. A 2024 report from the American Sunlight Project found 35,000 mentions of nonconsensual intimate imagery depicting 26 members of Congress. Twenty-five of those legislators were women.
Martone noted that the House version has gained six co-sponsors since the start of this year, including three women, evenly split between parties.
Last year, Congress passed the Take It Down Act, which outlaws both real and computer-generated nonconsensual intimate imagery. By May of this year, platforms will be required to implement a process for removing nonconsensual intimate images within 48 hours.
The DEFIANCE Act and the Take It Down Act were always conceived of as a dual solution to the problem of deepfake abuse, Martone said. The Sexual Violence Prevention Association was involved in the drafting of both bills.
The Take It Down Act empowers courts and district attorneys to take action against nonconsensual imagery and also puts pressure on social media companies and search engines to remove harmful content, Martone said. The DEFIANCE Act is meant to empower survivors, who get to decide whether to take perpetrators to court directly.
“The most empowering thing you can do to survivors is give them options, and that’s the goal of the DEFIANCE Act,” they said.
Martone’s advocacy against digital sexual violence led to strangers attacking them with explicit deepfakes. The abuse targeted Martone specifically and their organization as a whole. Martone had to take time away from work and fork over rent and grocery money to get help taking the images down.
If the DEFIANCE Act is signed into law, Martone hopes to use it to recoup not just damages from emotional harm but also the very tangible cost of dealing with this kind of abuse.
Both the DEFIANCE Act and the Take It Down Act were introduced after AI-generated sexually explicit images of Taylor Swift went viral on X in January 2024. The company blocked searches of her name and eventually took the images down, saying that posting nonconsensual intimate imagery is “strictly prohibited” and that it has a “zero-tolerance policy towards such content.”
404 Media traced the deepfakes of Swift to a Telegram group that was exploiting a loophole in Microsoft’s Designer image generator. Microsoft took immediate action to address the exploit.
“It’s about global, societal convergence on certain norms, and we can do it, especially when you have law and law enforcement and tech platforms that can come together,” Microsoft CEO Satya Nadella said in an interview with NBC News.
“I think we can govern a lot more than we give ourselves credit for.”
This story was originally reported by Jasmine Mithani of The 19th.