Rightsizing The Role Of Social Media For Young People
How U.S. Leaders Can Protect Youth From the Dangers of Social Media
March 2023
The Reboot Foundation conducts research around the globe on the effects of social media on children. This research has led the Foundation to advocate for strengthening education in critical thinking and for the regulation of social media platforms.
Efforts are underway in the U.S. to restrict the use of social media among minors. The Reboot Foundation supports these and other efforts to keep young people safe online, calling on Congress and the President to take immediate action.
Reboot supports the following social media reforms:
Prohibiting social media use among children younger than 16
A growing body of research demonstrates connections between social media use and mental health issues among young people, particularly young girls. These findings are alarming enough to warrant new, tougher age restrictions on social media use, enacted by Congress.
Federally subsidized research into the effects of social media on young people’s mental health
The National Institutes of Health is the federal government’s authoritative research institution, yet it has not undertaken a comprehensive study of how social media affects young people. It is time the NIH – and other key research agencies like the National Science Foundation – supported such research. The current presidential administration could set that work in motion immediately through executive order.
Giving users the choice to opt out of algorithms, with a single click
Opting out would prevent users from seeing content recommended by a platform’s algorithm and would prevent the algorithm from using their data to make content recommendations to others. Opting out must be easy and unmistakable, and Congress should require social media platforms to offer it.
Banning digital advertising to anyone under the age of 16
The U.S. government has long recognized that children need special protections from advertisers, but it has not extended those protections to Internet ads, even though such ads are targeted using children’s data collected across multiple platforms, including demographics, browsing histories, and purchase histories.
Adding warning labels on social media apps
The government should treat social media platforms like companies selling tobacco, alcohol, or other addictive or dangerous products by requiring warning labels on their apps to alert users to the dangers of overuse. Congress should take action to mandate such labels.
Implementing advertising restrictions and removing verified sites that spread misinformation, disinformation, conspiracy theories, harassment, or hate speech
Unfortunately, there is big money to be made pushing conspiracy theories and peddling disinformation online. By making it difficult, if not impossible, to profit from dangerous or harmful content, governments and platforms could take a significant step toward eliminating harmful speech online.
As a foundation created to elevate critical thinking and reflective thought, we recognize that there is always a danger in giving the government more control over what people in a free society can see, hear, and say. But it is clear from our research, and that of others, that it is past time for action. It is also clear that social media companies cannot – or will not – police themselves adequately. We believe it is time for the government to intercede with thoughtful, well-crafted laws.
Prohibiting social media use among children younger than 16
Research has demonstrated links between increased social media usage and harmful or destructive behaviors. Consider:
- Studies in Britain, Spain, and Scotland have linked social media to increased aggression, anxiety, bullying, psychological distress, and thoughts of suicide among young people between the ages of 11 and 16.
- An Australian study found that after spending only 10 minutes on Facebook, participants reported being in a more negative mood than those who browsed a control website.
- A survey by the Pew Research Center found that young men and women experience different kinds of harassment online. Young men typically receive “less severe” forms of harassment such as name-calling and embarrassment. Young women, however, are more likely to experience “more severe” harassment such as being the target of physical threats, harassment over a sustained period, stalking, and sexual harassment.
These research findings, and more, lead us to support proposed laws that would set new age restrictions on social media users. Congress has proposed legislation that would raise the legal age for social media use in the U.S. to 16, and would require social networks to verify the identity and age of their users.
Researchers have found (and most parents can confirm) that current age restrictions are ineffective. On most platforms, anyone can bypass them simply by lying about their age, with no consequences. These ineffective restrictions put children at risk of mental health issues, cyberbullying, and identity theft, and expose them to child predators, to name just a few real and serious dangers.
In many cases, social media companies have proven untrustworthy when it comes to safeguarding children’s data – and that of adults as well. Many have been caught using children’s data in inappropriate or illegal ways. In September 2022, European regulators fined Instagram roughly $400 million under the EU’s General Data Protection Regulation over its mishandling of children’s data. Two years ago, in the U.S., TikTok paid $92 million to settle claims that it improperly collected personal data from children, some as young as 6.
If platforms cannot safeguard the data of their most vulnerable users and have been caught misusing that data themselves, then the government is right to prevent children from accessing these services in the first place.
Restricting social media use to those 16 and older will be pointless if platforms are not also required to verify the age and identity of their users. For decades, government agencies have required stores selling tobacco or alcohol to verify the ages of their customers. It is time to make social media companies follow suit.
These types of laws work if enforced. After the legal drinking age in the U.S. was raised to 21, alcohol consumption among 18- to 20-year-olds fell 19 percentage points over the following five years. Three years after California raised the legal age for purchasing cigarettes to 21, the prevalence of daily smoking among 18- to 20-year-olds fell to nearly zero.
Federal research into the effects of social media on young people’s mental health
Bills before Congress propose funding research into how social media affects the mental health of young people. Reboot agrees that agencies such as the National Institutes of Health and the National Science Foundation must conduct comprehensive, definitive research into these issues.
But the Foundation also supports requirements that platforms open their datasets and APIs to independent researchers, advocacy groups, NGOs, and other organizations specializing in fields that warrant deeper analysis, such as cognitive psychology, educational psychology, psychiatry, domestic violence prevention, and gender-based harassment.
While new, stricter age requirements and substantial investment in research are two necessary steps toward making online spaces and social media platforms safer for children, they are not sufficient. That is why Reboot supports the additional measures outlined below.
Giving users the choice to opt out of algorithms, with a single click
One measure would prohibit digital platforms from recommending unsolicited content (or specific accounts) to users based on their profiles and/or past searches. In other words, it would crack down on algorithms that recommend content users did not specifically seek out.
These algorithms are designed to drive ever-greater engagement and keep users scrolling through the app. Such addiction engineering is particularly dangerous for children, as increased screen time correlates with higher rates of obesity, depression, sleep problems, and other psychological problems. In addition, these algorithms can recommend inappropriate content or hate speech to children who never searched for it.
The bipartisan Kids Online Safety Act also contains provisions that would require platforms to let parents opt their children out of algorithmic recommendations – and would require platforms to enable the strongest safety settings by default. The bill would also require platforms to give academic researchers and non-profit organizations access to critical datasets so they can study how the platforms might harm children. Reboot supports this bill.
Warning labels on social media apps
It is common to require warning messages on addictive and dangerous products like tobacco and alcohol, and to require public service advertisements from industries that profit from addictive behaviors, like gambling. An extensive body of social science research makes clear that social media users often exhibit usage patterns, traits, and behaviors similar to those of people with addiction issues.
Furthermore, industry insiders acknowledge that social media interfaces are designed to be addictive, with one designer calling his invention “behavioral cocaine.” That is not unlike how tobacco companies have manipulated their products to deepen addiction and boost revenues. Congress should act: treat social media platforms like other companies that sell addictive products and require them to put warning labels on their apps so users are fully aware of the dangers.
Advertising restrictions and the removal of verified sites that spread misinformation, disinformation, conspiracy theories, harassment, or hate speech
Social media accounts that have been “verified” or authenticated by a platform should be held to a higher standard than a typical private user’s. Verification connotes trust and respectability: users know the platform has vetted the identity of the account holder, and followers may therefore place enhanced trust in information originating from verified accounts.
Verified accounts often have huge followings, so they have great potential to function as “super-spreaders” of misinformation, disinformation, and other dangerous speech. For example, researchers found that 65 percent of anti-vaccine content related to COVID on social media stemmed from just 12 accounts. Similarly, in the U.S., a small number of verified social media accounts had an outsized influence on the spread of false information about the 2020 presidential election. By making it impossible, or at least more difficult, for such accounts to profit from spreading dangerous or harmful content, platforms could take a significant step toward eliminating misinformation online. Congress should lead the way by requiring verification standards across all social media platforms.