Over several months, the Journal set up more than a hundred bots, or automated TikTok accounts, that watched hundreds of thousands of videos on the app. Every one of those videos was downloaded, becoming visual evidence for the Journal’s findings.
We then built a database of those hundreds of thousands of TikTok videos and classified them through a mix of machine learning and human labeling. Using those classifications, we selected hundreds of TikToks and used animation to visualize the various paths users can be sent down based on the slightest hesitation over videos in their feed. We also created a data visualization from all the hashtags in the videos the bots watched, giving viewers a unique glimpse into the universe of TikTok content. We stitched all of that together to tell the story of one bot’s journey down a TikTok rabbit hole.
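A common way to mix machine learning and human labeling, as described above, is to let an automated classifier label the videos it is confident about and route the uncertain ones to people. The sketch below illustrates that idea only; the Journal has not published its actual pipeline, and every name here (classify_video, label_dataset, the toy keyword scorer standing in for a real model, the 0.5 threshold) is a hypothetical assumption.

```python
# Hypothetical hybrid-labeling sketch: a toy keyword scorer stands in
# for a trained model; videos it cannot label confidently are queued
# for human review instead of being auto-labeled.

TOPIC_KEYWORDS = {
    "dieting": {"diet", "calories", "weightloss"},
    "sadness": {"sad", "crying", "lonely"},
}
CONFIDENCE_THRESHOLD = 0.5  # illustrative cutoff, not the Journal's


def classify_video(hashtags):
    """Return (topic, confidence) for a video based on its hashtags."""
    scores = {
        topic: len(set(hashtags) & words) / max(len(hashtags), 1)
        for topic, words in TOPIC_KEYWORDS.items()
    }
    topic = max(scores, key=scores.get)
    return topic, scores[topic]


def label_dataset(videos):
    """Split (video_id, hashtags) pairs into machine-labeled rows
    and a queue of uncertain videos for human labeling."""
    machine_labeled, human_review = [], []
    for video_id, hashtags in videos:
        topic, confidence = classify_video(hashtags)
        if confidence >= CONFIDENCE_THRESHOLD:
            machine_labeled.append((video_id, topic))
        else:
            human_review.append(video_id)  # too uncertain: a person decides
    return machine_labeled, human_review
```

Running this on a video tagged #diet #calories would auto-label it "dieting", while a video whose hashtags match no topic strongly would land in the human-review queue.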
The impact of the reporting was immediate. More than 1,000 videos were removed from the platform after the Journal first flagged them to TikTok. After publication, TikTok said it would adjust its recommendation algorithm to avoid showing users too much of the same content. The company said it was testing ways to avoid pushing too much content on a single topic, such as extreme dieting, sadness or breakups, to individual users, to protect their mental well-being.