Starting from today, Instagram will automatically place new teen users into Teen Accounts and begin moving existing accounts over in a bid to protect teens online.
Teens will automatically be placed in the strictest messaging settings, meaning only people they follow or are already connected to can message them.
Teen Accounts for those under 18 will also be set to private by default, so teens will need to approve new followers. Interactions are limited too: teens can only be tagged or mentioned by people they follow.
Instagram will place teens into its sensitive content control’s most restrictive setting, which limits the type of sensitive content they’ll see in spaces like Explore and Reels. For example, they should see less or no content showing violence and self-harm.
Instagram’s Teen Accounts will also filter offensive words and phrases from DM requests and comments by turning on the most restrictive version of Hidden Words, the platform’s anti-bullying feature, by default.
Finally, Instagram has taken steps to help teens better manage their time on social media. Once teens have spent 60 minutes on the app in a day, a time limit notification will prompt them to leave.
Sleep mode will be automatically turned on between 10PM and 7AM, muting notifications and auto-replying to DMs.
Teens Under 16 Need Parental Permission to Lower Settings
Anyone under the age of 16 will need their parents’ permission to downgrade any of these default settings. To request this, they’ll need to set up Parental Supervision on Instagram. Parents can also turn this feature on for older teens aged 16 and over, to get an overview of what their teen is experiencing in the app.
Once the feature is turned on, parents can approve or deny requests to change settings or grant permission for teens to manage settings themselves. Soon, Instagram will introduce the option for parents to change these settings directly.
Instagram’s Supervision Feature Updates Give Parents More Control
Instagram is also updating its supervision feature, which will allow parents to:
- Set total daily time limits for their teen’s Instagram usage and block them from the app for specific time periods, such as at night.
- See the age-appropriate topics their teen has chosen to look at.
- Get insights and see who their teen has messaged in the past week (though they can’t read messages).
Instagram’s AI Tech Will Identify Teens With Adult Accounts
These changes could incentivize many teens to lie about their age in order to bypass the new restrictions. To tackle this, Instagram plans to ask teens to verify their age in more places, such as when they try to use an account with an adult birthday.
Meta is also building AI tech that will proactively identify accounts held by teens, even if the account has an adult birthday. It plans to start testing this change in the US in early 2025.
Teen Accounts Rolling Out From Today
Meta plans to start placing teens who sign up for new Instagram accounts into Teen Accounts from today. It will notify teens already using Instagram about the changes before it begins moving them into Teen Accounts next week.
It plans to place teens into Teen Accounts within 60 days across the US, Canada, UK, and Australia. Teens in the European Union will be placed into Teen Accounts later this year, though no specific timeline has been given.
Teens worldwide will start to get Teen Accounts in January 2025, and the feature will come to other Meta platforms next year.
Meta has introduced these changes in response to federal and state regulators considering laws that would force social media apps to introduce similar features to protect teens.
Instagram head Adam Mosseri said that device makers should also be doing more to protect younger users, saying, “Apple and Google need to provide birthdays in a privacy-safe way to apps.”
This latest update follows Meta’s announcement in January that it would take new measures to protect teens across Instagram and Facebook, including placing them into the most restrictive content control settings. It also promised to hide posts related to sensitive content such as self-harm, and to prompt teens to restrict who can message them.
That January update came shortly after a 2023 lawsuit in which attorneys general from dozens of US states alleged that Meta had knowingly released features and products that harmed teens’ mental health.
In May, Instagram expanded its Limits feature to let teens restrict interactions to Close Friends, protecting them from harassment and cyberbullying.
Experts have long been sounding the alarm on the risks of social media usage among teens.
The American Psychological Association recognizes the risks of social media apps such as TikTok and Instagram, and recommends that parents monitor and limit their teens’ use of social media platforms, as well as look out for warning signs such as disruptions to their sleep or daily routine.
But while some legislation has been put in place to protect younger users, it’s clear that more needs to be done.
In the UK, the Online Safety Act 2023, which comes into force in 2025, puts the onus on social media firms and technology companies to protect children from harmful material, including content promoting terrorism, illegal drugs, sexual violence, and controlling or coercive behavior.
In the US, the proposed Kids Online Safety Act (KOSA) would require social media platforms to enable the strongest privacy settings by default for kids and teens. It would also give parents more control over their children’s online experience, and mandate independent audits and research into how the platforms affect younger users’ well-being.