Meta unveils new Instagram safety features for teen users


File Photo by Terry Schmitt/UPI
Meta said Wednesday it’s rolling out new direct messaging safety features for teen users and related accounts as the social media giant continues its “exploitive content” crackdown.
“Today, we’re announcing a range of updates to bolster these efforts, and we’re sharing new data on the impact of our latest safety tools,” the company wrote in a release.
It said more context and information will now be available to teens, such as when an Instagram account was created. In addition, new safety tips for spotting possible scammers will accompany a newly combined report-and-block action.
The move is part of a broader effort to protect younger users on Meta's platforms, Instagram and Facebook, as social media companies face greater scrutiny from Congress over the sexual exploitation of young people.
Instagram users must be at least 13 years old to use the platform, but an adult can run an account representing a teen if that is made clear in the account's bio section.
Meta said it “blocked accounts 1 million times and reported another 1 million after seeing a Safety Notice.”
Last week, it announced that roughly 10 million impersonating profiles had been removed in the first half of this year as part of a larger corporate effort to reduce "spammy" social media content.
According to Meta officials, nearly 135,000 Instagram accounts were removed this year for leaving sexualized comments or requesting explicit images on adult-managed accounts featuring children.
Meta says that effort also covered some 500,000 Facebook and Instagram profiles linked to those original accounts.
Accounts representing children and teens will now automatically be placed into the strictest comment and messaging settings to filter out offensive content and limit contact from unknown users.
Social media companies, including Meta, have been accused of building platforms that are addictive to children and detrimental to their mental health.
The update comes on the heels of Meta's announcement in April that users under age 16 with "Teen Accounts" would not be permitted to livestream or switch off certain direct messaging protections.