Catherine Thorbecke, Tribune News Service
Australia’s government wants to ban children up to age 16 from social media, and is spending millions of dollars to figure out how. I’m willing to wager it won’t take long for tech-savvy teens who grew up on Instagram, TikTok and YouTube to figure out how to log back on. The promised regulation, currently sparse on details, comes at a time when policymakers and parents around the globe are grappling with the negative consequences these platforms can have on developing minds. This global debate has raged for years, reaching a fever pitch in 2021 after former Facebook (now Meta Platforms Inc.) employee Frances Haugen leaked documents showing the company was aware its products were harmful to girls’ mental health. Years later, US lawmakers are still sputtering on federal regulation to hold the powerful Big Tech companies accountable for harms to young users.
Australia is taking matters into its own hands. Prime Minister Anthony Albanese promised to introduce new laws that set age limits this year, saying that the government was considering a range between 14 and 16 for the cutoff. In a video posted on X for “the mums and dads,” Albanese said he wants children “off their devices and onto the footy field.” Surveys indicate most Australians support a social-media age limit, and the idea has broad political support.
But even Albanese acknowledges that the government is still trying to figure out how this would actually work. It hasn’t identified which social media platforms the youth ban would apply to (Can children message their parents on WhatsApp? Or watch Khan Academy’s algebra tutorials on YouTube?). It also hasn’t offered specifics on enforcement (Big Brother-esque digital IDs? Further criminalizing children, this time for opening TikTok?). And in the absence of substantive policies, it’s hard not to see this as a soundbite-y proposal to signal concern to voting parents on a popular issue ahead of an election year — without actually accomplishing anything to keep children safe.
Thousands of miles away from Silicon Valley, Australia has been leading the charge in efforts to rein in the dominance of Big Tech. Separate proposed legislation aimed at cracking down on digital misinformation has even drawn ire from Elon Musk, who last week labeled the government “fascists.” (The government has sued Musk’s X, formerly known as Twitter, over a violent video of a terrorist attack but lost in court.) The nation has also been engaged in a years-long battle to force tech titans to pay for news content. At a time when other jurisdictions have struggled with taking on such powerful companies, Australia’s multi-faceted attacks are admirable.
But research has shown that age limits for social media aren’t the most effective way to protect teens from its potential harms. Young people have shown remarkable prowess for finding workarounds — even those under the age of 13 whom most platforms already prohibit. The American Psychological Association has argued that using social media is not inherently beneficial or harmful to teens, but strict age limits ignore individual differences in adolescents’ maturity levels. In other words, turning 16 doesn’t instantly make you more competent at navigating the digital world than a mature 14-year-old.
The process of enforcing broad age verification online raises a slew of privacy concerns, ranging from how identifying information about young users could be stored to cutting off their ability to freely browse the internet while maintaining digital anonymity.
Completely shutting off access to digital communities can also sever lifelines for some young people, especially those from marginalized groups. TikTok, in particular, has emerged as a popular platform for Indigenous Australians, allowing them a space where they share everything from budget-friendly recipes to relatable responses to racism. Indigenous youth in remote areas who may not see their stories reflected in traditional media can feel less isolated.
Still, a growing body of evidence points to a minefield of harms young people can encounter online, however much company executives like to deflect any links. It’s absolutely critical that lawmakers take action to protect children from these risks, but selling quick fixes for complex, global problems distracts from the harder policy work required to come up with effective real-world solutions.
Simply banning young people from participating in digital life comes a generation too late. The reality is teens today are very much growing up online, a trend accelerated by the pandemic. So much so that the United Nations has said that children have the right to get information from the internet, but adults have a responsibility to make sure it isn’t harmful.
Policymakers need to focus on holding social media companies accountable for the harms embedded within their services, especially those affecting young users. They can start by demanding that platforms offer more transparency about how their algorithms work and allow more outside researchers to look under the hood to identify risks. Without data on how these services are designed, it’s hard for mental health experts and officials to recommend solutions that address the dangers. Lawmakers must also require social media companies, which go to great lengths to understand their users, to create and enforce more guardrails for young people.
Without putting the onus on tech companies to reduce risks on their platforms, raising the age limit by a couple of years doesn’t keep the next generation safe. Instead of bucketing out floodwater, policymakers in Australia and beyond should turn off the spewing faucets.