Australia’s decision to ban children under 16 from social media, with Denmark eyeing similar measures for under-15s, has reignited a global debate about children, technology and harm. The political appeal is obvious: draw a clear line, claim protection and move on. But from a South African legal and policy perspective, this approach is both insufficient and misdirected. It treats children as the problem, rather than the digital systems that systematically fail them.
Studies across psychology, human-computer interaction and digital governance show that harm to children online is not simply a function of access, but of platform design choices such as algorithmic amplification, persuasive design, endless scrolling, social comparison metrics and targeted advertising optimised for engagement rather than well-being. These are not neutral technologies. They are engineered environments that shape behaviour in predictable ways, particularly for developing minds.
Research published in journals such as Nature Human Behaviour, the Journal of Child Psychology and Psychiatry and Computers in Human Behavior demonstrates that design features, not mere screen time, correlate most strongly with anxiety, depression, compulsive use and exposure to harmful content. Removing children from platforms without reforming those design features leaves the underlying risk architecture intact. It also incentivises displacement: children migrate to less visible, less regulated spaces where safeguards are weaker and harms are harder to detect.
Australia is not acting in isolation. Policymakers in Denmark, Norway and parts of the European Union have publicly explored minimum-age bans, while similar proposals have surfaced in the UK, Ireland and several US states. These initiatives differ in scope, but they share a common political logic: when platform regulation proves difficult, restricting children’s access becomes the path of least resistance. The rapid spread of these proposals risks normalising exclusion as a substitute for structural reform.
South African law already provides a more principled framework than blanket bans. Section 28(2) of the Constitution establishes that the “best interests of the child” are of paramount importance, while the Bill of Rights also guarantees the right to freedom of expression (Section 16) and the right of access to information (Section 32). These rights do not evaporate simply because a citizen is under 16. This aligns with the UN Convention on the Rights of the Child (UNCRC), which affirms that children are not merely passive recipients of protection, but holders of a right to participate in society.

A categorical social media ban sits uneasily within this framework. In the modern era, rights to expression, information and participation are increasingly exercised online. Cutting children off from these platforms effectively severs their connection to the primary public square of their generation. A ban prioritises risk avoidance over proportionality without demonstrating that less restrictive measures would be ineffective. International children’s rights scholarship consistently warns against policies that sacrifice children’s agency and access in the name of safety.
South Africa’s Protection of Personal Information Act (POPIA) already identifies the correct target. It recognises children as a special category of data subjects deserving heightened protection. It imposes obligations around lawful processing, purpose limitation and minimality – principles directly implicated by behavioural advertising, profiling and algorithmic curation aimed at young users. The problem is not a lack of legal tools but a lack of enforcement and design-level accountability.
From a regulatory perspective, the logical response is not exclusion, but structural intervention: limits on data-driven targeting of minors, transparency obligations for recommender systems, bans on manipulative design practices and meaningful consequences for platforms that profit from foreseeable harm. These approaches align far more closely with constitutional proportionality, POPIA’s data-protection logic and South Africa’s obligations to uphold a child’s right to participate in the digital world.
Australia and Denmark may be conducting a political experiment in age-based exclusion. South Africa should resist the temptation to follow. The best interests of the child are not served by pushing young people out of digital public life while leaving the machinery of harm untouched. If the law is serious about children’s rights, then regulation must focus where the power actually lies – with platform design, business models and the incentives that shape the modern internet.
Banning children is simple. Governing technology is harder. But only one of those approaches is likely to be lawful, effective and genuinely in the best interests of the child.
- Belinda Matore, Doctoral Researcher (Child Digital Rights) at the Centre for Human Rights at the University of Pretoria.

