Searches for Taylor Swift on X come up empty after explicit pictures created by AI go viral

Searching for Taylor Swift on X, formerly known as Twitter, showed an error message over the weekend after pornographic, AI-generated images of the singer were circulated across social media last week.

X’s search function only displays results for Swift under its ‘Media’ and ‘List’ tabs. However, Swift is still searchable using several Boolean operators. Inputting “Taylor Swift” in quotation marks, or “Taylor AND Swift”, yields normal search results under all of X’s search tabs.

The search function error message does not appear on either Instagram or Reddit.

The fake images of Swift – which show the singer in sexually suggestive and explicit positions – were predominantly circulating on X, and were viewed tens of millions of times before being removed from social platforms.

Like most major social media platforms, X’s policies ban the sharing of “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

“This is a temporary action and done with an abundance of caution as we prioritise safety on this issue,” the company told CNN in a statement.

Digitally manipulated pornographic images of celebrities are nothing new on the internet, and have been circulating online since the advent of software like Photoshop. But the rise in mainstream artificial intelligence software has heightened concerns due to its ability to create convincingly real and damaging images.

The incident comes as the United States heads into a presidential election year, prompting fears misleading AI-generated images and videos could be used in disinformation efforts.

And it’s not just public figures with massive online presences who fall victim to this type of harassment.

In November, a 14-year-old New Jersey high school student called on school and government officials to take action after she said photos of her and more than 30 female classmates were manipulated and possibly shared publicly.

At the time, the school provided CNN with a statement from Superintendent Dr. Raymond González, who said, “All school districts are grappling with the challenges and impact of artificial intelligence and other technology available to students at any time and anywhere.”

Nine US states currently have laws against the creation or sharing of this kind of nonconsensual deepfake imagery: synthetic images created to mimic a person’s likeness.

Support is available from the National Sexual Assault, Domestic Family Violence Counselling Service at 1800RESPECT (1800 737 732).
