Facebook, Instagram block teens from sensitive content, even from friends
Out of harm’s way?
Meta hiding harmful content from teens isn’t enough, whistleblower says.
Ashley Belanger – Jan 9, 2024 10:38 pm UTC
Meta has begun hiding sensitive content from teenagers under the age of 18 on Facebook and Instagram, a company blog announced on Tuesday.
Starting now, Meta will begin removing content about sensitive topics from feeds and Stories, targeting content that has been flagged as harmful to teens by experts in adolescent development, psychology, and mental health. That includes content about self-harm, suicide, and eating disorders, as well as content discussing restricted goods or featuring nudity.
Even if sensitive content is shared by friends or accounts that teens follow, the teen will be blocked from viewing it, Meta confirmed.
Any teen searching for sensitive content will instead be prompted to contact a friend or consult “expert organizations like the National Alliance on Mental Illness,” Meta said.
In addition to hiding more content from teens, Meta has also announced that in the coming weeks, it will be blocking everyone on its apps from searching for a wider range of sensitive terms “that inherently break” Meta’s rules. Meta did not specify what new terms might be blocked but noted that it was already hiding results for suicide and self-harm search terms.
A Meta spokesperson told Ars that the company can’t “share a more comprehensive list of those terms since we don’t want people to be able to go around them or develop workarounds.”
On top of limiting teens’ access to content, Meta is “sending new notifications encouraging” teens “to update their settings to a more private experience with a single tap.” Teens who opt in to Meta’s “recommended settings” will enjoy more privacy on Facebook and Instagram, restricting unwanted tags, mentions, or reposting of their content to only approved followers. Teens opting in will also be spared from viewing some “offensive” comments. Perhaps most importantly, recommended settings will “ensure only their followers can message them.”
Meta said that previously, new teens joining Facebook or Instagram defaulted to “the most restrictive settings,” and that it is now expanding that effort to “teens who are already using these apps.” These restrictive settings, Meta said, will prevent teens from stumbling across sensitive content.
Last year, 41 states sued Meta for allegedly addicting kids to Facebook and Instagram. States accused Meta of intentionally designing its apps to be unsafe for young users. Massachusetts Attorney General Andrea Joy Campbell went so far as to allege that Meta “deliberately” exploited “young users’ vulnerabilities for profit.”
At that time, Meta said it was disappointed that “instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use,” state attorneys general chose to pursue legal action.
Experts considered the states’ push to hold Meta accountable for its allegedly harmful design choices the most significant effort yet. It followed disturbing testimony from a whistleblower, former Meta employee Arturo Bejar, who told a US Senate subcommittee in November that “Meta continues to publicly misrepresent the level and frequency of harm that users, especially children, experience on the platform.”
Bejar claimed that Meta could easily make its apps safer for kids, “if they were motivated to do so.” He also provided recommendations to regulators, including suggesting new laws requiring social media platforms to develop ways for teens to report content that causes them discomfort.
That policy shift, Bejar said, would “generate extensive user experience data, which then should be regularly and routinely reported to the public, probably alongside financial data.” Bejar said that “if such systems are properly designed, we can radically improve the experience of our children on social media” without “eliminating the joy and value they otherwise get from using such services.”
Intensified legal scrutiny of Meta isn’t restricted to the US, though. In the European Union, Meta has also been asked to inform regulators about how it designs apps to shield kids from potentially harmful content.
It’s possible that the EU probe prompted Meta’s updates this week.
Meta said it expects all these changes to be “fully in place on Instagram and Facebook in the coming months.”
Even with these updates, though, Bejar said on Tuesday that Meta still wasn’t doing enough to protect teens. Bejar pointed out that Meta’s platforms still lack a meaningful way for teens to report unwanted advances, Reuters reported.