Rightsizing the Role of Social Media for Young People
The Reboot Foundation conducts research around the globe on the effects of social media on children. This research has led the Foundation to advocate for strengthening education in critical thinking and for regulating social media platforms.
Efforts are underway in the U.S. to restrict the use of social media among minors. The Reboot Foundation supports these and other efforts to keep young people safe online, and calls on Congress and the President to take immediate action.
Prohibiting social media use among children younger than 16
Research has demonstrated links between increased social media usage and harmful or destructive behaviors. Consider:
- Studies in Britain, Spain, and Scotland have linked social media to increased aggression, anxiety, bullying, psychological distress, and thoughts of suicide in young people between the ages of 11 and 16.
- An Australian study found that after spending only 10 minutes on Facebook, participants reported being in a more negative mood than those who browsed a control website.
- A survey by the Pew Research Center found that young men and women experience different kinds of harassment online. Young men typically experience “less severe” forms of harassment, such as name-calling and embarrassment. Young women, however, are more likely to experience “more severe” forms, including physical threats, harassment sustained over time, stalking, and sexual harassment.
These research findings, and more, lead us to support proposed laws that would set new age restrictions on social media users. Congress has proposed legislation that would raise the legal age for social media use in the U.S. to 16, and would require social networks to verify the identity and age of their users.
Researchers have found (and most parents can confirm) that current age restrictions are ineffective: on most platforms, anyone can bypass them simply by lying about their age, with no consequences. These toothless restrictions put children at risk of mental health problems, cyberbullying, and identity theft, and expose them to child predators, to name just a few real and serious dangers.
In many cases, social media companies have proven untrustworthy when it comes to safeguarding children’s data, and that of adults as well. Many have been caught using children’s data in inappropriate or illegal ways. In September 2022, a breach of children’s data privacy at Instagram resulted in a €405 million fine (about $400 million) in Europe under the EU’s General Data Protection Regulation. And in 2021, TikTok paid $92 million in the U.S. to settle a dispute over the company’s improper collection of personal data from users, including children as young as 6.
If platforms cannot safeguard the data of their most vulnerable users, and have been caught misusing that data themselves, then the government is right to prevent children from accessing these services in the first place.
Prohibiting social media use by those under 16 will be pointless if platforms aren’t also required to verify the age and identity of their users. For decades, government agencies have required stores selling tobacco or alcohol to verify the ages of their customers. It is time to make social media companies follow suit.
These types of laws work if enforced. After the legal drinking age in the U.S. was raised to 21, alcohol consumption among 18- to 20-year-olds fell 19 percentage points within five years. Three years after California raised the minimum age for purchasing cigarettes to 21, the prevalence of daily smoking among 18- to 20-year-olds fell to nearly zero.
Federal research into the effects of social media on young people’s mental health
Bills before Congress propose funding research into how social media affects the mental health of young people. Reboot agrees that agencies such as the National Institutes of Health and the National Science Foundation must conduct comprehensive, definitive research into these issues.
But the Foundation also supports requirements that platforms open their datasets and APIs to independent researchers, advocacy groups, NGOs, and other organizations specializing in fields that warrant deeper analysis, such as cognitive psychology, educational psychology, psychiatry, domestic violence prevention, and gender-based harassment.
While new, stricter age requirements and substantial investment in research are two necessary steps toward making online spaces and social media platforms safer for children, they are not sufficient. That is why Reboot supports the additional measures outlined below.
Allowing users to opt out of algorithms with a single click
One measure would prohibit digital platforms from recommending unsolicited content (or specific accounts) to users based on their profiles or past searches. In other words, it would crack down on algorithms that push content users never sought out.
These algorithms are engineered to drive ever-greater engagement and keep users scrolling through the app. Such addiction engineering is particularly dangerous for children, as increased screen time correlates with higher rates of obesity, depression, sleep problems, and other psychological issues. Worse, these algorithms can recommend inappropriate content or hate speech to children who never searched for it.
The bipartisan Kids Online Safety Act also contains provisions that would require platforms to let parents opt their children out of algorithmic recommendations, and to enable the strongest safety settings by default. The bill would also require platforms to give academic researchers and nonprofit organizations access to critical datasets so they can study how the platforms might harm children. Reboot supports this bill.
Warning labels on social media apps
Advertising restrictions and the removal of verified accounts that spread misinformation, disinformation, conspiracy theories, harassment, or hate speech
Social media accounts that have been “verified,” or authenticated, by a platform should be held to a higher standard than those of typical private users. Verification connotes trust and respectability: users know the platform has vetted the identity of the account holder, and followers may therefore place greater trust in information originating from verified accounts.
Verified accounts often have huge followings, giving them great potential to function as “super-spreaders” of misinformation, disinformation, and other dangerous speech. For example, researchers found that 65 percent of all anti-vaccine content on social media related to COVID stemmed from just 12 users. Similarly, in the U.S., a small number of verified accounts had an outsized influence on the spread of false information about the 2020 presidential election. By making it more difficult or impossible for such users to profit from spreading dangerous or harmful content, platforms could take a significant step toward eliminating misinformation online. Congress should lead the way by requiring verification for all social media accounts.