That transformation presents the short-video app, whose parent is Chinese technology giant ByteDance Ltd., with one of its biggest challenges since it was launched about five years ago.
As tensions between Russia and Ukraine rose, TikTok grappled internally with how to deal with its heightened role in geopolitics, people familiar with the matter said. Some of TikTok’s content moderators struggled to figure out whether to avoid recommending certain posts, remove them from the app or restrict the creators’ accounts, they said.
The content moderators have also been confused about how to deal with some clips flagged by the app’s content-filtering systems, the people said. Without detailed instructions in place for war-related content, junior-level managers were charged with refining the rules as they went along, the people said. The result was inconsistencies in treatment of similar content, they said.
“We continue to respond to the war in Ukraine with increased safety and security resources to detect emerging threats and remove harmful misinformation,” a TikTok spokeswoman said.
TikTok on Sunday took its biggest step yet, suspending new video uploads and live streaming from Russia, citing the safety of its employees after Russia passed a new “fake news” law. The move, which followed pullbacks by other major tech and media companies from their operations there, was notable given that TikTok’s parent, ByteDance, is based in Beijing, where the government has refrained from supporting Western sanctions on Russia.
This came about a week after TikTok said that it would restrict access to some Russian state-controlled media accounts, including RT and Sputnik, in the European Union. In a sign of the gravity of the matter, TikTok notified executives at ByteDance in Beijing, who didn’t contest the decision, one person familiar with the matter said. A TikTok spokeswoman said its chief executive has full autonomy for all decisions about TikTok’s operations.
Since Russia invaded Ukraine on Feb. 24, social-media users have devoured photos and video clips uploaded to platforms including TikTok, Meta Platforms Inc.’s Facebook, Twitter Inc., and Google’s YouTube. TikTok in particular has provided a ground-level, often visceral view of modern warfare, but social-media researchers say it has also become a hotbed of unreliable information.
“People go to TikTok for entertainment but are being served up unclear and even misleading information about the war,” said Anne Kruger, a Sydney-based director for misinformation-research group First Draft. The platform’s constant video replays help reinforce messages, she said.
As Russian troops advanced on Ukraine, one widely shared video on TikTok of military planes flying in formation claimed to be footage of the invasion. PolitiFact, a Washington, D.C.-based fact-checking website, later found that the video was taken from a Russian military parade in mid-2020. The video has since been removed.
Another video of soldiers parachuting into a conflict zone was watched by 20 million TikTok users before being removed—after the footage was found to be from seven years ago, according to First Draft.
Such content often carries messages seeking donations or tips for the creators, in apparent efforts to monetize the clips.
“Globally, the platform has become a prominent space for many across the world to view and become informed about the invasion,” said Ciarán O’Connor, an Ireland-based researcher at the Institute for Strategic Dialogue. “But it’s also become an instrument in information warfare too.”
To be sure, TikTok is far from the only platform contending with false information. But Mr. O’Connor said his research showed that TikTok was more potent than other social-media platforms in disseminating false information about Ukraine from Russian state-controlled media.
He analyzed 12 TikTok videos posted by the editor in chief of Russia’s state-linked news broadcaster RT that promoted Kremlin propaganda of Ukraine as an aggressor. Posted on an account that wasn’t labeled as state media, the videos were viewed 21.3 million times as of March 8, more than the 11 million views the editor had garnered from posting 21 videos on YouTube. TikTok’s state-media labeling policy applies only to organizations.
Just days before the war in Ukraine broke out, TikTok’s senior staff gathered online to propose new rules to their teams that operate the platform for the Russian and Ukrainian markets, said people familiar with the matter. The staff came from legal, public policy, and trust and safety teams globally, mostly based in TikTok’s large regional bases such as Dublin and Singapore, some of the people said.
The short-video app’s leaders have been meeting regularly to discuss strategies to respond to the crisis, and it runs an operation center open at all hours to respond to unfolding events, TikTok said. Its global trust and safety team, led by a head in Dublin, oversees and enforces its content policies, it said.
As a result, TikTok started running war-related videos through online open resources and databases to check whether the footage had existed online before the conflict, seeking to identify and take down old images of jet fighters, bombings and military operations being passed off as recent content, people familiar with the matter said.
Other platforms have been ahead of TikTok in addressing some of these issues.
Within days of the conflict in Ukraine breaking out, Meta, Twitter and YouTube detailed the steps they were taking to reduce information that they deem to be false or misleading. The companies introduced new policies and began labeling and demoting posts from, and containing links to, state-linked Russian media.
The three have also detailed how they have removed and permanently suspended accounts, videos and posts either originating from Russia targeting Ukraine or for deceptive practices and misinformation. TikTok hasn’t publicly disclosed any concrete data regarding removal of inauthentic posts and users.