Scroll, Post, Delete… But Who’s Responsible? Modalities for Platform Accountability
Objectives:
- Surface Core Issues and Gaps: Identify and map the current challenges related to online safety, platform practices, and regulatory fragmentation in Malaysia.
- Explore Global Modalities: Share comparative models (e.g., Ireland’s voluntary council, New Zealand’s code of practice, the UK’s Ofcom approach) to evaluate their strengths, weaknesses, and applicability to the Malaysian context.
- Build Multistakeholder Consensus: Facilitate dialogue across government, tech platforms, civil society, media, legal experts, and academia to co-create shared principles for platform accountability.
- Identify Pathways for Implementation: Discuss potential formats (e.g., a voluntary code, an advisory body, or a national council) and set short- and medium-term steps for their development.
Aug 27, 2025 from 12:00 PM to 12:50 PM (Asia/Kuala_Lumpur / UTC+8)
Theatrette 1.11, First Floor, AICB Centre for Excellence
Centre for Independent Journalism, Sinar Project and ARTICLE 19
As we enter a new era of rapid technological advancement, the rise of online harms is a shared global concern that affects democracies and diverse societies alike. In ASEAN, these harms include algorithmic discrimination, ‘hate speech’, xenophobia, and digital abuse, which are further complicated by opaque platform operations and fragmented content moderation.
Global and local trends show that ‘harmful’ content online, especially during political or crisis periods, intensifies societal divides and disproportionately targets marginalised communities. In ASEAN, recent events (e.g., general elections, ethno-religious controversies, and narratives around refugee communities) have shown that Misinformation, Disinformation and Hate Speech (MDH) often remain unchecked due to limited guidelines, accountability, and transparency across actors.
Tech companies are ushering in a new era of computing by mainstreaming AI into their systems, and this shift compounds the issues above: automated content moderation, recommendation algorithms, and generative AI tools each introduce problems of their own. While these technologies are marketed as solutions for safety and efficiency, their opaque design, biased training data, and lack of accountability create new risks. Without public oversight or shared safety standards, communities are increasingly exposed to harms that are ever harder to address.
Meanwhile, governments worldwide are increasingly adopting strict regulatory measures to control freedom of expression, often under the pretext of protecting citizens from online ‘harm’. However, these actions usually fail to create a safer online environment. In Malaysia, the introduction of social media licensing is viewed as an effort to regulate platforms by requiring them to obtain government licenses. While supporters argue this will encourage companies to manage ‘harmful’ content, such as disinformation and ‘hate speech’, civil society organisations warn it could lead to excessive censorship and suppress dissent. The vague definition of ‘harmful’ content risks being manipulated, potentially stifling open dialogue. As Malaysia and other countries in ASEAN move forward with these measures, key questions arise: What will be the actual impact on freedom of expression, and will these regulations effectively address the issues they intend to mitigate?
We are facing a period where the self-regulatory frameworks of social media platforms are falling woefully short in protecting users from serious harms such as online harassment, disinformation, ‘hate speech’ and illegal content. Meanwhile, the extreme legislative measures being rushed into place by the government in response to these shortcomings pose a grave threat to our fundamental rights, specifically the freedoms of expression and access to information. While these measures are intended to enhance safety, they often result in overreach that not only stifles open dialogue but also suppresses diverse voices and perspectives.
Ensuring internet safety in this context demands more than reactive content moderation; it requires a proactive, multi-stakeholder approach grounded in human rights. In Malaysia, the absence of robust digital literacy, clear safety protocols, and coordinated enforcement mechanisms leaves users, especially young people and marginalised communities, exposed to online manipulation, harassment, and exploitation. A forward-looking internet safety agenda must include preventive education, responsive support systems, and stronger safeguards to foster safer, more inclusive digital environments for all.
At this critical juncture, reimagining internet safety, platform accountability, and governance through a new modality may offer a way forward. The question to ask is whether a whole-of-society approach, particularly one which leverages the collective expertise and influence of government, tech companies, civil society, academia, media and affected communities, could help establish meaningful and locally relevant mechanisms to hold platforms accountable.