Exposing Meta's Overstated Safety Features: Protecting Teens or Broken Promises?
Many of Meta's teen-safety features on Instagram are ineffective or no longer exist, according to a new analysis by child-safety advocacy groups and Northeastern University researchers. Meta disputes the findings, sharpening questions about how well tech companies actually protect young users.

The report, compiled by child-safety advocacy groups and corroborated by researchers at Northeastern University, concludes that many of the safety features Meta said it had implemented to protect teens on Instagram do not work as advertised. Meta has dismissed the report as misleading, and the dispute arrives amid an intensifying debate over young people's safety online.
Of the 47 features tested, only eight were found fully effective; the rest were flawed or no longer available. Tools meant to shield teens from self-harm content or bullying messages frequently failed, though some features, such as 'quiet mode,' which temporarily suppresses notifications, worked as intended.
Meta spokesperson Andy Stone called the findings misleading, saying the company continuously improves its tools and welcomes constructive criticism. Internal Meta documents and accounts from former employees, however, suggest the company has long been aware of these shortcomings, casting doubt on the true efficacy of its protective measures.