TikTok's AI video generator made to recite Mein Kampf, quickly pulled after that
Generative AI is all the hype nowadays, but its rapid adoption can cause problems. In a dramatic and somewhat comic string of events, TikTok's Symphony Assistant was used to create AI avatars reciting problematic scripts, including excerpts from Hitler's Mein Kampf and Osama Bin Laden's "Letter to America."

TikTok announced Symphony Avatars last month as part of its "Creative AI Suite," enabling businesses, brands, and creators to produce fully customized ads using generative AI and the likenesses of paid actors, without the need to hire anyone directly. The feature has been rolling out over the past week to TikTok Ads Manager accounts; or at least, it was supposed to be limited to those accounts.

CNN producer and digital content creator Jon Sarlin gained access to the AI tool using his personal account and found that it had practically no safeguards against sensitive content.


In a post on X (formerly Twitter), Sarlin shared several example videos created with Symphony Assistant, featuring avatars reciting the aforementioned problematic scripts. What's even worse is that the videos carried no watermarks or any other indication that they were AI-generated.

Unsurprisingly, CNN reached out to TikTok for comment and received the following statement: "A technical error, which has now been resolved, allowed an extremely small number of users to create content using an internal testing version of the tool for a few days. If CNN had attempted to upload the harmful content it created, this content would have been rejected for violating our policies. TikTok is an industry leader in responsible AIGC creation, and we will continue to test and build in the safety mitigations we apply to all TikTok products before public launch."

It's unclear whether TikTok accidentally rolled out an "internal testing version" of the AI tool or simply failed to put the appropriate filters and safeguard mechanisms in place, but according to TikTok the issue has now been remedied (we understand that Symphony Assistant has been pulled, at least for the moment).

ByteDance, the company behind TikTok, faces an end-of-year deadline to sell TikTok or see the app banned in the U.S., after the Senate voted to send a bill to the White House that was subsequently signed by President Biden. Legislators in the United States are concerned that the Chinese Communist Party (CCP) could coerce ByteDance into handing over users' personal information and spying on Americans.

AI isn't going anywhere anytime soon, so expect more stories like this one to pop up. AI is just a tool, and it's us humans who ultimately use it unethically; still, there should be mechanisms in place to prevent such mishaps, right? What do you think?

