
According to a recent BBC report, whistleblowers have made allegations against prominent social media companies including TikTok and Meta Platforms. These insiders stated the companies allowed harmful content to circulate on their platforms after internal research showed outrage-driven posts generated higher user engagement.
More than a dozen insiders told the publication the companies made trade-offs between user safety and content engagement as short-form videos reshaped the social media landscape.
A former Meta engineer claimed management instructed teams to allow more “borderline” harmful material, including misogyny and conspiracy theories, on Instagram and Facebook users’ feeds as the company attempted to compete with TikTok’s rapid growth. The engineer said staff were told the move was linked to financial pressure, adding the decision was made “because the stock price is down”.
Matt Motyl, a senior university researcher specialising in Meta’s business, said Instagram Reels, launched in 2020 to rival TikTok, went live without adequate safeguards. Research reportedly suggested Reels comments showed higher levels of harmful behaviour, including bullying, harassment and hate speech, compared with Instagram’s main feed.
Motyl added the company was aware of the risks tied to its recommendation systems, stating the platform’s algorithms created a “path that maximises profits at the expense of their audience’s wellbeing”.
Black box
Separately, a member of TikTok’s trust and safety team told the BBC moderation priorities sometimes favoured political complaints over cases involving harmful content featuring children. The employee claimed cases were handled to “maintain a strong relationship” with political figures and avoid potential regulatory action rather than prioritise user safety.
The whistleblower also warned the volume of moderation cases had become difficult to manage, adding that material linked to trafficking, violence, terrorism and sexual abuse appeared to be increasing.
Former TikTok machine learning engineer Ruofan Ding described the company’s recommendation system as a “black box”, noting engineers had limited visibility over how deep learning algorithms promote content.
Fabricated
Both companies issued statements to the BBC rejecting the claims.
Meta said: “Any suggestion that we deliberately amplify harmful content for financial gain is wrong”, while TikTok labelled the allegations as “fabricated claims”, stating it invests in technology designed to prevent harmful content.
The report comes as social media platforms face growing scrutiny across the world, with countries including Australia, Spain, the UK, Indonesia, Malaysia and India moving to restrict or ban social media access for children.
Source: Mobile World Live
Image Credit: Stock Image
