Making digital spaces safer for everyone – not just children
Protecting children online will require more than reactive laws. It requires shifting the focus upstream towards accountability, transparency, and safer platform design.
Malaysia is considering a social media ban for children under 16, following a series of distressing incidents in which children harmed other children in schools. Although the causes of the violence remain undetermined and are likely manifold, the proposal is motivated by genuine concern about social media. But will a ban actually protect children from the deeper, structural problems with social media?
A ban may sound decisive, yet it is far from a silver bullet. The reality of digital life is far more complex.
The limits of a ban
Experience elsewhere suggests that keeping children off social media is easier said than done. In Australia, for instance, platforms will soon be required to take “reasonable steps” to keep under-16s off social media. Yet there are questions about whether this will be anything more than cosmetic. Many young people are adept at bypassing age checks using VPNs, ‘old man’ masks or even parental assistance.
To verify users’ ages, there has been talk of using facial recognition or ID verification, which raises privacy concerns for everyone, not just children. And if children are excluded entirely, the exclusion may deepen social isolation in a world where social interaction increasingly happens online. A ban that requires submitting ID details may also shut marginalised groups out of social media altogether – stateless persons, for example, may lack the documents needed to access these sites.
Even if such bans worked perfectly, what happens when a child turns 16? The tragic case of a 16-year-old Malaysian girl, who died by suicide after posting an Instagram poll about whether she should live or die, shows that online harm does not disappear with age.
A ban may also create a false sense of security, convincing us that children are safe simply because they are kept away, while the platforms themselves remain unsafe by design. The underlying problem is that platforms’ business models thrive on engagement and attention, even when that means amplifying harmful or addictive content.
Can platforms be safe by design?
Instead of keeping children out, perhaps the better question is: can we make platforms safe by design?
Brazil appears to be attempting this with its new ECA Digital law, which applies to digital products and services “targeted at or likely to be accessed by minors”. The law requires the accounts of under-16s to be linked to a parent or guardian, and mandates that platforms build in parental supervision tools, such as the ability to set time limits and restrict purchases.
Both Brazil’s law and EU regulations prohibit platforms from profiling minors to serve them targeted ads. In the EU, children’s accounts are private by default and cannot be publicly recommended. This responds to past abuses in which predators exploited “recommended friend” algorithms to find children.
Alongside its proposed age restrictions, Australia plans to introduce a Digital Duty of Care, requiring platforms to proactively prevent harm rather than simply react after it occurs.
These laws are still new, and their efficacy will depend heavily on accompanying regulations and enforcement. But they share a common approach: regulating ‘upstream’ features of platform design.
In Malaysia, however, conversations still mostly centre on downstream measures: ordering takedowns, prosecuting harmful posts, or now, proposing bans. These steps focus on control after harm has occurred, or on keeping children away, without fixing the upstream problem of unsafe platform design.
Beyond bans and takedowns
Large tech companies, especially social media platforms, have largely escaped legal oversight in Malaysia and much of Southeast Asia, despite their role in facilitating well-documented harms. There are several reasons for this:
- Social media platforms are treated as too large, complex, and essential to regulate.
- Platforms switch roles as it suits them – publishers when moderating content, “innocent carriers” when trying to avoid accountability.
- Harms are not confined to social media platforms. They appear on gaming platforms, live-streaming sites and, increasingly, in AI chatbots.
Most of these harms, however, are not new. False advertising, impersonation, gambling, fraud, and misinformation existed long before Facebook or TikTok. Miracle cures, for example, have taken many forms – from 19th-century “snake oil” remedies to today’s AI hallucinations dispensing harmful medical advice.
Regulatory frameworks have been built up over the years to protect society from such harms. In Malaysia, these include consumer protection laws, financial regulation, the accreditation of professionals such as doctors, and the establishment of ministries and agencies – such as the Ministry of Domestic Trade and Cost of Living and MyIPO – to protect consumers and creative works. The Penal Code also criminalises threats and incitement to violence.
A sharper regulatory path
Regulating giant tech platforms as a whole is certainly a daunting task. But what if Malaysia reviewed its current regulatory framework – on consumer protection, advertising standards, and child protection – and updated it to address today’s harms to children?

For example, ads targeting children under 12 could be banned across all media, including streaming, gaming and social media platforms. This would be akin to California laws disallowing kids’ meal toys linked to unhealthy food. In any event, many social media platforms already state that only users aged 13 and above are allowed on their platforms.
If social media platforms cannot guarantee that such ads won’t reach children, their services could be classified as 18+ by default. To lift that rating, they would need to show concrete measures to prevent child-targeted advertising. Failure to comply would bring financial penalties.
While updating the regulatory framework to address digital harms would indeed be daunting, it’s certainly not unprecedented. Malaysia successfully reformed its laws to prepare for the internet era and again for the digital transformation era. There’s no reason it can’t do the same, especially when the safety and well-being of children is involved.
Moving upstream
Protecting children online requires more than reactive laws. The focus must shift upstream, towards accountability, transparency and safer platform design.
Yes, children can encounter real harms online, but it’s important that any regulation introduced genuinely makes their digital spaces safer. Well-intentioned measures can sometimes have unintended effects. For instance, broad bans that are difficult to enforce may do little to reduce risks, while leaving platforms themselves unsafe by design.
Rather than focusing solely on limiting children’s access, it may be more effective to create a digital environment that is safer for everyone. This could include stronger standards for data use, advertising and algorithmic design; greater transparency from platforms; and enforcement mechanisms that deliver meaningful protection.
Ding Jo-Ann is with a global non-profit working on the impact of technology on society.
Khairil Yusof is Coordinator at Sinar Project, a civic tech organisation promoting transparency and open data in Southeast Asia.
