Instagram has announced a series of sweeping changes to its platform aimed at enhancing the safety and privacy of its teenage users. The changes, introduced under the “Teen Accounts” initiative, come amid growing public concern and scrutiny over the potential harms social media platforms pose to young users. Instagram, which is owned by Meta, is implementing the measures to address problems like unwanted contact from strangers, exposure to harmful content, and excessive screen time, while giving parents more tools to oversee their children’s online activities.

Privacy and Control Features

The most significant change is that all accounts belonging to users under 18 will now be private by default. This means that only people whom the user approves as followers can view their posts, interact with them, or message them. Teens aged 16 and 17 can switch their accounts back to public on their own, but users under 16 will need a parent’s permission to do so. Instagram’s goal is to create a safer environment for younger users by limiting how widely their personal information and posts are exposed.
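
To make the rule concrete, here is a minimal Python sketch of how an age-based default like this could work; the function names and the consent flag are hypothetical illustrations, not Instagram’s actual code.

```python
from datetime import date

def default_privacy(birth_date: date, today: date) -> str:
    """Hypothetical version of the policy described above:
    everyone under 18 starts with a private account."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return "private" if age < 18 else "public"

def can_switch_to_public(age: int, parental_consent: bool) -> bool:
    """16- and 17-year-olds may go public on their own;
    under-16s need a parent's approval (per the policy above)."""
    return age >= 16 or parental_consent

print(default_privacy(date(2010, 5, 1), date(2024, 9, 17)))  # "private"
```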

In addition, Meta has introduced new features to help parents supervise their children’s accounts more effectively. Parents will now have the ability to see who their children have recently messaged, although they won’t be able to view the actual content of the messages. This provides parents with insights into whom their teens are communicating with while maintaining a degree of privacy for the children. Moreover, parents can monitor the topics their children are exploring on Instagram, such as sports, music, or other interests, though only in a general sense, to help start conversations about their online habits.

The company is also addressing concerns about excessive screen time, which has been linked to poor mental health outcomes for teenagers. Notifications for teen accounts will automatically be turned off between 10 p.m. and 7 a.m. to help improve sleep. Instagram has also enhanced its “Take a Break” feature, which now reminds teens to stop using the app after prolonged sessions. These reminders are intended to encourage healthier usage habits.
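
The overnight rule itself is simple enough to express as a time-window check. The short Python sketch below is a hypothetical reading of the announced 10 p.m. to 7 a.m. policy, not Instagram’s implementation; note that because the window crosses midnight, the test is an “or” of the two sides rather than a simple range check.

```python
from datetime import time

QUIET_START = time(22, 0)  # 10 p.m., per the announced policy
QUIET_END = time(7, 0)     # 7 a.m.

def notifications_muted(now: time, is_teen_account: bool) -> bool:
    """Return True if a notification should be held until morning.
    The window crosses midnight, so we check both sides with 'or'."""
    return is_teen_account and (now >= QUIET_START or now < QUIET_END)

print(notifications_muted(time(23, 30), True))  # True: inside quiet hours
print(notifications_muted(time(12, 0), True))   # False: midday is fine
```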

Safety from Inappropriate Content and Contact

Instagram’s new policies also focus on protecting teens from inappropriate content and online contact. Teens will no longer be able to receive messages from people they don’t follow, reducing the chances of unwanted interactions with strangers. Furthermore, content from accounts that teens don’t follow will be limited in their Instagram feed and on features like Reels, which means they will see fewer posts from unknown sources that may contain sensitive or harmful material. These efforts are aimed at shielding teens from exposure to violence, sexual content, and other harmful material that has become prevalent on social media platforms.

Instagram has also been working to reduce the risk of inappropriate contact from adults. For example, adults who are not connected to a teen’s account will be unable to send them direct messages or tag them in posts. Additionally, the company is using artificial intelligence (AI) to identify teenagers who lie about their age in order to create adult accounts. The AI systems will flag suspicious accounts and require users to verify their age through third-party services like Yoti, which estimates a person’s age from a facial scan. This measure is designed to keep minors from bypassing the platform’s protective features.
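
Meta has not published the details of these classifiers, but the general flow, flagging suspicious signals and then requiring verification, might look something like the hypothetical Python sketch below; the signal names and the threshold are invented for illustration.

```python
def needs_age_verification(account: dict) -> bool:
    """Hypothetical heuristic for spotting accounts that claim to be
    adult but show signals of belonging to a minor. Real classifiers
    are far more sophisticated; this only illustrates the idea."""
    suspicion = 0
    if account.get("changed_birthday_to_adult"):
        suspicion += 2  # editing a birth date past 18 is a red flag
    if account.get("mostly_teen_followers"):
        suspicion += 1  # an audience of teens hints at the user's real age
    comments = account.get("recent_comments", [])
    if any("happy 15th birthday" in c.lower() for c in comments):
        suspicion += 3  # friends' messages can reveal a true age
    return suspicion >= 3  # if flagged, require an age check (e.g., via Yoti)

print(needs_age_verification({"changed_birthday_to_adult": True,
                              "mostly_teen_followers": True}))  # True
```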

Responding to Criticism and Scrutiny

These changes come as Instagram and its parent company, Meta, face mounting pressure from lawmakers, child safety advocates, and the general public. For years, critics have argued that Instagram and other social media platforms contribute to a youth mental health crisis, exposing young users to harmful content such as bullying, sexual exploitation, and content promoting self-harm or eating disorders. Several lawsuits have been filed against Meta, alleging that its platforms, including Instagram, use features that exploit teenagers’ psychological vulnerabilities, such as “dopamine-inducing” notifications designed to keep users engaged for longer periods.

In recent years, bipartisan legislation has been introduced to address these concerns. One prominent example is the Kids Online Safety Act (KOSA), which would require social media companies to take more robust steps to prevent harm to children online, such as preventing cyberbullying and restricting the spread of harmful content. Although the bill has passed the U.S. Senate, it has stalled in the House of Representatives amid concerns over potential infringements on free speech. Meta’s latest safety measures, including those unveiled for Instagram, are widely seen as an attempt to preempt further regulations by demonstrating that the company is taking voluntary action to protect younger users.

Mark Zuckerberg, Meta’s CEO, has faced particularly intense scrutiny over his company’s handling of child safety issues. In January 2024, at a U.S. Senate hearing, Zuckerberg publicly apologized to families of children who had been harmed, including some whose suicides were linked to online bullying and harassment. Such dramatic moments have underscored the pressure on social media companies to address the safety and well-being of their youngest users.

Challenges and Concerns

Despite these efforts, many experts and advocates remain cautious. While some have praised Instagram for setting higher standards of privacy and safety for teens, others have pointed out potential gaps in the new policies. Critics argue, for instance, that tech-savvy teens might circumvent the restrictions by creating “finstas,” secret secondary accounts that escape parental controls and the new privacy settings. Additionally, the new features place much of the monitoring burden on parents, which some argue unfairly shifts responsibility from the company to families.

Another concern is that the new parental supervision tools could potentially lead to conflicts in households where teenagers are exploring personal and sensitive topics, such as political views, sexual identity, or religious beliefs. While parents can now view the topics their children are interested in, some worry that this feature could create tensions, particularly in families with differing views or in cases where parents are overly controlling or abusive. Instagram’s challenge, therefore, lies in balancing teen privacy with parental oversight in a way that protects young users while respecting their autonomy.

This article is based on the following articles:

https://www.npr.org/2024/09/17/g-s1-23181/instagram-teen-accounts-private-meta-child-safety

https://www.washingtonpost.com/technology/2024/09/17/instagram-teen-accounts-meta-child-safety-scrutiny

https://www.nytimes.com/2024/09/17/technology/instagram-teens-safety-privacy-changes.html

Background Information

What is Social Media?

Social media platforms like Instagram, TikTok, Snapchat, and Facebook are online networks where people can share photos, videos, and messages with friends or even with strangers across the world. These platforms allow users to connect, communicate, and consume content, including news, entertainment, and educational material. Social media has become especially popular among young people, with millions of teenagers using these platforms daily. However, because of their widespread use and influence, social media has raised concerns about how it affects users, especially children and teenagers.

Why is Social Media Safety Important for Teens?

Social media can be a fun way to stay connected with friends, but it also has risks. Some people use social media to harass, bully, or take advantage of others, particularly younger users. Teenagers may also be exposed to harmful content, such as violent or inappropriate material, or face pressure to behave in unhealthy ways, such as participating in dangerous online challenges or comparing themselves to others in ways that harm their self-esteem.

Another big issue is the amount of time teens spend on social media. Spending too much time online, especially late at night, can lead to problems like sleep deprivation, anxiety, and depression. This is why many people believe it’s important for social media platforms to have special rules to protect young users.

How Does Instagram Work?

Instagram is a social media app where users can post pictures and videos, follow other users, and interact with posts by liking, commenting, and sharing. It is especially popular among teenagers. Users can make their accounts public or private: anyone can see what a public account posts, while only approved followers can view a private account’s content. Many teens prefer public accounts because they want to gain more followers and become “influencers,” users with large online followings.

In addition to posts, Instagram has features like “Reels” for short videos, “Stories” that disappear after 24 hours, and direct messaging, where users can chat privately with others. The app uses algorithms to suggest content based on a user’s interests, which means that teens might see posts from people they don’t follow, including content that could be inappropriate for their age.
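
Real recommendation systems are far more complex, but the core idea, scoring candidate posts by how well they match a user’s inferred interests, can be sketched in a few lines of Python; every name here is illustrative rather than Instagram’s actual code.

```python
def rank_suggestions(user_interests: set[str], posts: list[dict]) -> list[dict]:
    """Order candidate posts (including ones from accounts the user
    doesn't follow) by overlap with the user's interest tags.
    A real system would also apply age-appropriate filters here."""
    def score(post: dict) -> int:
        return len(user_interests & set(post["tags"]))
    return sorted(posts, key=score, reverse=True)

feed = rank_suggestions(
    {"soccer", "music"},
    [{"id": 1, "tags": ["cooking"]}, {"id": 2, "tags": ["soccer", "goals"]}],
)
print([p["id"] for p in feed])  # [2, 1]: the soccer post ranks first
```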

Why Do Governments and Advocacy Groups Get Involved?

Because social media can have a powerful impact on people’s lives, especially young users, governments and advocacy groups often step in to help protect children and teens. Lawmakers create laws and policies that social media companies have to follow to keep kids safe. In the United States, for example, a law called COPPA restricts how companies may collect personal information from children under 13 without a parent’s permission, which is why most platforms require users to be at least 13 to sign up. Governments in other countries have adopted similar rules about children’s data.

Advocacy groups, like child safety organizations, also play an important role. These groups focus on making sure that children are protected from online predators, bullying, and harmful content. They often pressure social media companies to adopt safer practices, such as blocking inappropriate content or limiting how long teens can use the app. If these groups believe that social media platforms are not doing enough to protect kids, they may push for new laws to force these companies to take action.

How Do Social Media Platforms Make Money?

Social media companies, including Instagram, make most of their money through advertising. They allow businesses to show ads to users based on their interests, which are determined by the things they post, like, and follow on the app. The more time people spend on these platforms, the more ads they see, and the more money the platform makes.

This has led to some criticism that social media companies design their apps to keep users, especially young ones, engaged for as long as possible, even if it harms their well-being. For example, notifications, likes, and comments can make people want to check the app constantly, which can lead to “addiction” to social media. Companies have been accused of prioritizing profits over the mental health and safety of their users, particularly teens, who are more vulnerable to peer pressure and online trends.

What Are Parental Controls?

Parental controls are tools that allow parents to monitor and limit what their children do online. On Instagram, for example, parents can set restrictions on who can contact their child, how much time they can spend on the app, and what types of content they can see. These controls are designed to help parents protect their children from inappropriate content and dangerous situations, such as online predators or cyberbullying.
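
One way to picture parental controls is as a small bundle of settings attached to a teen’s account. The hypothetical Python sketch below shows the kinds of limits described above; the field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ParentalControls:
    """Hypothetical bundle of supervision settings for a teen account,
    mirroring the kinds of limits described above."""
    daily_limit_minutes: int = 60          # cap on time spent in the app
    who_can_message: str = "followers"     # e.g. "followers" or "no_one"
    sensitive_content_filter: bool = True  # hide age-inappropriate posts
    blocked_topics: set[str] = field(default_factory=set)

controls = ParentalControls(daily_limit_minutes=45)
controls.blocked_topics.add("gambling")
print(controls)
```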

However, one challenge is that many parents don’t always know how to use these tools or don’t have the time to keep up with their children’s online activities. Additionally, some teens find ways to bypass parental controls by creating fake accounts or using different devices, which can make it harder for parents to supervise them.

What is AI and How is It Used for Safety?

Artificial Intelligence (AI) refers to computer systems that can perform tasks that normally require human intelligence, such as recognizing faces or detecting patterns. Social media companies like Instagram use AI to help monitor the platform for harmful content and to flag suspicious behavior. For example, AI systems can detect if someone is pretending to be an adult by analyzing their behavior on the app or their profile information.

In Instagram’s case, AI is also used to help verify the age of users. Teens who try to create accounts with adult birth dates to avoid the app’s teen protections may be asked to verify their age using AI-based tools such as facial age estimation, which guesses a person’s age from a photo or video selfie. While these technologies can help keep young users safe, some people worry about privacy and whether the AI might make mistakes, such as blocking users who are telling the truth about their age.

Why is This Change Important Now?

In recent years, there has been increasing concern about the mental health of teenagers who spend a lot of time on social media. Research has shown that social media can contribute to issues like anxiety, depression, and feelings of isolation, especially among teens who are exposed to bullying, harmful content, or unrealistic portrayals of life.

Parents, doctors, and mental health experts have been calling for social media companies to take more responsibility for the well-being of their younger users. Additionally, state and federal lawmakers have introduced new bills, like the Kids Online Safety Act (KOSA), which would require platforms to do more to protect children from cyberbullying, sexual exploitation, and harmful content. Instagram’s new safety measures are partly a response to these pressures, as the company tries to show that it is serious about protecting its youngest users.

Debate/Essay Questions

  1. Do the new privacy features on Instagram go far enough in protecting teenagers from online predators and harmful content, or are more stringent measures needed?
  2. Should social media companies, like Instagram, be responsible for protecting teenagers from harmful content and online dangers, or should this responsibility rest primarily with parents?

Please subscribe to Insight Fortnight, our biweekly newsletter!

By Editor

I have worked in English education for more than two decades. The idea for this website sprang from a real need as an English teacher. I enjoy curating the content for this website very much.
