OpenAI's latest AI video generation model, Sora 2, wowed users with its more realistic images and physics when it debuted with an invite-only release earlier this month — even as it ran into immediate problems with copyright infringement and deepfakes of historical figures including Martin Luther King Jr. and John F. Kennedy. The company took predictable steps to cover its legal liabilities shortly after this chaotic rollout, with CEO Sam Altman announcing that intellectual property holders would have to opt in to the platform before users could remix content like SpongeBob SquarePants or Family Guy, and the company blocking "disrespectful depictions" of King at the request of the civil rights leader's estate. OpenAI also assured the labor union SAG-AFTRA that it was taking steps to guard against deepfakes of recognizable entertainers.
But troubling content persists on the Sora app, a social media platform designed to function much like TikTok, except that it features exclusively AI-generated videos. Users can consent to have their likenesses used in "Cameos," wherein others may prompt the AI model to insert the person into a variety of contexts — not all of them flattering. According to research from Copyleaks, an AI analysis firm that helps businesses and institutions navigate the shifting landscape of this emergent technology, a new trend has produced Sora videos of celebrities appearing to spew hateful racist epithets.
“We identified a phenomenon resembling ‘Kingposting,’ a reference to a viral 2020 incident in which an airplane passenger wearing a Burger King crown was filmed shouting racial slurs,” Copyleaks researchers wrote in a report shared with Rolling Stone. In the Sora Cameo videos, public figures including Altman, billionaire Mark Cuban, influencer-turned-boxer Jake Paul, and streamers xQc and Amouranth each appear as a plane passenger in a Burger King crown, reenacting the offensive meme. (All evidently uploaded their likenesses to the app; Cuban announced on X earlier this month that his Cameos were “open” and that users should “have at it.”) To get around platform filters that block hate speech, Copyleaks observed, users apparently prompted Sora with “coded or phonetically similar terms to generate audio that mimics a well-known racial slur.” A deepfaked version of Altman, for example, screams “I hate knitters” as he is escorted off an aircraft. (OpenAI did not immediately respond to a request for comment.)
“This behavior illustrates an unsurprising trend in prompt-based evasion, where users intentionally probe systems for weaknesses in content moderation,” the Copyleaks report explains. “When combined with the likenesses of recognizable individuals, these deepfakes become more viral and damaging — spreading quickly across and beyond the platform.” Because the videos are available for download, the researchers noted, they are easily reappropriated and disseminated on other apps. Sure enough, a Sora-generated clip of Jake Paul repeating the phrase “neck hurts” has racked up 1.5 million views and 168,000 likes on TikTok. Another Sora-watermarked video that migrated to TikTok is a deepfake of Paul shouting “I hate juice,” obviously intended as an antisemitic provocation. (Sora 2 users in general have had no trouble making it visualize any number of antisemitic tropes.)
The “Kingposting” Sora videos are hardly an isolated case of AI users exploiting available images of celebrities. This week, the streamer IShowSpeed was incensed by realistic Sora 2 deepfakes that depicted him kissing a fan, announcing trips abroad, and coming out as gay. He criticized viewers who had evidently encouraged him to opt in to the Cameo system. “That was not the right move to do,” he said on a stream. “Whoever told me to make it public, chat, you’re not here for my own safety, bro. I’m fucked, chat.” His only immediate recourse, he learned, was to manually delete each video. Cuban has taken up this tedious task himself: “I try to go through and delete them,” he wrote in an X post on Wednesday, replying to a user who had shared a Sora clip of his AI doppelgänger acting out the racist “Kingposting” meme.
Grok Imagine, the image and video generator from Elon Musk's xAI, has come under fire too as its users generate harmful deepfakes of celebrities who did not consent to have their faces replicated by the AI model. While some individuals have produced hardcore pornographic clips starring Disney and comic book characters, others on the platform have succeeded in prompting Grok Imagine to conjure lewd imagery of Taylor Swift, Scarlett Johansson, and Sydney Sweeney. Even celebrities themselves have gotten in on the action: amid a long-simmering feud with Jay-Z, rapper Nicki Minaj on Wednesday posted a Grok-generated picture that appeared to show the hip-hop mogul in a pink crop top and wig and "Queen" necklace, with the announced release date for her next album printed on his bare stomach.
And, though users can make the most of lackluster AI moderation (Grok Imagine) or figure out clever ways to circumvent guardrails (Sora) in order to misrepresent famous people, the greater danger probably comes from videos that purport to capture events involving unknown parties. “Fake news broadcasts and fabricated Ring or street camera footage are among the most popular categories gaining significant traction,” according to Copyleaks’ researchers. They point to a Sora-generated clip of a man catching a baby that has fallen out of an apartment building — it has nearly 2 million likes on TikTok, with many commenters seeming to regard it as authentic. “Hyperrealistic AI video is outpacing the average person’s ability to detect manipulation,” the firm concludes. That could have drastic consequences as bad actors use it to enact hateful stereotypes and push extremist propaganda.
Yet the AI companies are so caught up in a race to dominance that these concerns hardly enter into the equation until the damage is already done. The rich and powerful, at least, might have the means to protect their reputations from an onslaught of deepfakes. The rest of us, however, will have to learn to navigate a world in which seeing is rarely believing.