AI-generated explicit images falsely depicting Taylor Swift recently circulated widely on X (formerly known as Twitter). The episode underscores the growing problem of AI-generated fake pornography and how rapidly it can spread.
One post featuring these images garnered over 45 million views, 24,000 reposts, and a significant number of likes and bookmarks, remaining live for approximately 17 hours before the account responsible was suspended for policy violations. Even after the suspension, the images continued to spread across other accounts, with some still accessible and new explicit fakes emerging. The phrase “Taylor Swift AI” even trended in some regions, further amplifying the images’ reach.
A 404 Media report suggested that these images originated from a Telegram group where users share explicit AI-created images of women, often using Microsoft Designer. This group reportedly made light of the viral spread of the fake Swift images on X.
X’s guidelines explicitly prohibit synthetic and manipulated media as well as nonconsensual nudity. However, despite requests for comment, representatives for X, Taylor Swift, and the NFL have not responded. X did issue a public statement nearly a day after the incident began, but it did not specifically address the Swift images.
Swift’s fans have criticized X for not acting quickly enough to remove the posts. In response, fans flooded related hashtags with genuine clips of Swift performing, aiming to bury the explicit fakes.
The incident highlights how difficult it is to curb the spread of deepfake porn and AI-generated images of real people. While some AI image generators include safeguards against producing nude, pornographic, or photorealistic celebrity images, many lack such restrictions. The burden of controlling the spread of these fakes often falls on social media platforms, a task that is especially challenging for platforms like X that have scaled back their moderation capabilities.
Meanwhile, the European Union is investigating X over allegations that it serves as a conduit for “illegal content and disinformation,” including scrutiny of its crisis protocols following the spread of misinformation about the Israel-Hamas conflict on the platform.