Australia has expanded its landmark ban on social media use for children under 16, adding Reddit and Kick to its growing list of restricted platforms. The move marks a major step in the government’s effort to curb the influence of online platforms on minors and to hold tech companies accountable for protecting young users. Beginning December 10, 2025, both Reddit and Kick will be required to prevent under-16 users from creating or maintaining accounts in Australia.
The inclusion of these platforms follows months of consultation between the federal government, the eSafety Commissioner, and major tech firms. Officials describe the list of restricted services as “dynamic,” meaning it can evolve as new platforms rise in popularity or change their functionality. With the addition of Reddit and Kick, a total of nine platforms are now covered by the legislation, including Facebook, Instagram, Threads, TikTok, Snapchat, X (formerly Twitter), and YouTube.
Why Reddit and Kick Were Added
Reddit, a vast network of online forums where users post and comment on a wide range of topics, has long been viewed as a grey area in social media regulation. While it differs from image- or video-based platforms like TikTok and Instagram, the government considers its focus on open public discussion, community interaction, and content sharing to fit the definition of a “social media service.” Officials argue that Reddit’s forum structure can expose children to adult or harmful content, particularly in poorly moderated communities.

Kick, a live-streaming platform that rose to prominence for its looser moderation policies and ties to online gaming culture, has also faced scrutiny for exposing young audiences to explicit content. The platform’s real-time nature, combined with its emphasis on direct audience interaction, has raised particular concerns about grooming, harassment, and the spread of inappropriate material. By including Kick, regulators hope to set a precedent that live-streaming services will face the same standards as traditional social media platforms.
Communications Minister Anika Wells defended the additions, stating that platforms have sophisticated technologies to track, engage, and advertise to users—tools she says can and should be redirected toward protecting children. “If companies can identify children for marketing purposes,” she said, “they can identify them to keep them safe.”
The New Rules
Under the expanded regulation, platforms whose sole or significant purpose is to enable online social interaction must take “reasonable steps” to prevent anyone under 16 from maintaining an account. Companies found to be in violation of the rule could face civil penalties of up to 49.5 million Australian dollars, or roughly 32 million U.S. dollars.
The law does not penalize children or their parents directly. Instead, the burden lies squarely on companies to verify users’ ages and implement safeguards. This could involve new forms of age-assurance technology, parental consent mechanisms, or stricter onboarding systems to detect underage users. Each platform will be required to publish a compliance plan outlining how it intends to meet the new standards.
The government has emphasized that the goal is not to criminalize access but to make platforms more accountable. The eSafety Commissioner will monitor compliance and retain the authority to investigate, issue warnings, or levy fines against companies that fail to comply.
Support and Criticism
Reactions to the announcement have been mixed. Supporters of the policy argue that it represents a long-overdue step in protecting children from the harms of social media, which can include exposure to bullying, addictive engagement algorithms, and mental-health pressures. Many parents’ groups have welcomed the decision, describing it as an opportunity for children to “reclaim their time and attention” from platforms designed to keep them online.
Proponents also argue that the regulation shifts responsibility away from parents—who often struggle to monitor their children’s digital lives—and toward corporations that profit from engagement. By forcing companies to design systems that respect age boundaries, they say, the law ensures that the commercial interests of tech giants are balanced against public welfare.
Critics, however, warn that enforcement could be complicated and potentially intrusive. Accurately verifying users’ ages on the internet is a longstanding challenge. Requiring ID checks or facial-recognition systems could introduce privacy risks or inadvertently exclude vulnerable groups, such as children without easy access to identification documents. Civil-liberties advocates have also raised concerns about government overreach, suggesting that the policy might lead to excessive data collection.
Some experts question whether bans are an effective solution at all. They warn that children may simply migrate to less-regulated platforms or find ways to circumvent restrictions, potentially exposing themselves to even greater risks. Others argue that education, parental guidance, and digital-literacy initiatives would be more effective than outright prohibitions.
What Happens Next
When the law takes effect in December, Reddit, Kick, and other restricted platforms will be expected to prevent new under-16 accounts and manage existing users who fall below the age threshold. The eSafety Commissioner will issue technical guidance on acceptable verification methods, balancing safety with privacy protections.
Parents and guardians will be offered resources to help navigate the new rules, including advice on supporting children’s online habits and understanding how age-verification systems work. Schools may also be enlisted to help educate students on the reasons behind the ban and to promote safer internet behavior.
The government views this policy as part of a broader effort to redefine the relationship between young people and digital platforms. Officials say that the ultimate goal is to create a safer online environment in which children can engage in learning and creativity without being subjected to the same risks as adult users. The results of the policy will be closely monitored, with data collected on whether it reduces exposure to harmful content, improves mental-health outcomes, or changes patterns of online activity.

Global Implications
Australia’s policy is one of the most ambitious attempts yet to restrict minors’ access to social media. Other countries are watching closely to see how the rollout unfolds. Lawmakers in Europe and North America have floated similar ideas, but few have implemented such broad and enforceable bans. If successful, Australia’s approach could serve as a model for other nations seeking to rein in the influence of major tech platforms on children.
As the debate continues, one thing is clear: the inclusion of Reddit and Kick signals that the Australian government intends to treat all major online communities—whether they focus on discussion, streaming, or sharing—as part of the same digital ecosystem. In doing so, it is forcing a global conversation about how to balance innovation, privacy, and the safety of young people in an increasingly connected world.