Malaysia and Indonesia have blocked access to social media platform X after accusing the company of failing to effectively curb the spread of sexually explicit deepfake content, marking a rare and forceful intervention against a major global technology platform over the misuse of artificial intelligence.
Authorities in both countries said the decision followed mounting public outrage over the circulation of non-consensual, AI-generated pornographic images, many of which targeted women and, in some cases, minors. Officials said repeated requests for stronger safeguards were either inadequately addressed or ignored, leaving regulators with no option but to impose nationwide restrictions.
Malaysia’s communications regulator said the platform demonstrated a “serious failure of responsibility” in preventing the creation and spread of manipulated sexual content. According to officials, deepfake images remained accessible for extended periods even after being flagged, while others resurfaced repeatedly due to weak enforcement mechanisms.
Indonesia’s Ministry of Communication and Digital Affairs echoed those concerns, calling deepfake pornography a form of digital sexual violence. The ministry said the technology behind such content has advanced faster than moderation systems, allowing harmful material to spread at scale with devastating consequences for victims.
“Freedom of expression does not include the right to exploit, humiliate, or violate others through technology,” an Indonesian official said, adding that platforms must be held accountable for the tools they deploy.
Deepfake pornography uses artificial intelligence to superimpose a person’s face onto explicit images or videos without their consent. The results can appear highly realistic, making it difficult for victims to prove the material is fake. Experts say the psychological toll is severe, frequently leading to anxiety, reputational damage, professional harm, and social isolation.
Victims in both countries reported discovering the content through acquaintances or online harassment, with little recourse once images began circulating. Advocacy groups say the burden of reporting and removal unfairly falls on those already harmed, while platforms benefit from viral engagement.

Regulators said X relied too heavily on user-driven reporting systems, which are often slow, opaque, and ineffective. They argued that proactive detection tools, stricter content filters, and rapid takedown protocols should be standard for platforms deploying generative AI features.
Following the decision, internet service providers were instructed to block access to X, and mobile platforms were advised to restrict availability. Officials said the ban would remain in place until the company demonstrates meaningful improvements in moderation practices and compliance with local laws.
The move has intensified debate over the responsibilities of technology companies in the age of generative AI. While artificial intelligence has unlocked new creative and economic opportunities, governments worldwide are grappling with its darker applications, particularly when used to exploit individuals at scale.
Women’s rights groups in Malaysia welcomed the ban, describing it as a long-overdue acknowledgment of the harm caused by non-consensual digital content. Indonesian child protection organizations also praised the decision, warning that AI tools can generate explicit material involving minors without any original images, creating significant legal and ethical challenges.
However, the action has drawn criticism from some business and digital rights groups, who warn that blocking major platforms risks overreach and may set precedents for broader censorship. They argue that regulation should focus on targeted enforcement rather than blanket restrictions.
Government officials dismissed those concerns, emphasizing that the measures are temporary and conditional. They stressed that the issue is not political speech or dissent, but the unchecked spread of content that violates basic human dignity and existing laws.
Legal experts note that while both countries have laws addressing pornography and online abuse, deepfakes expose gaps in enforcement frameworks. Because platforms control the underlying technology and algorithms, regulators argue that companies must share responsibility for preventing misuse.
Analysts say the bans could influence other governments in the region, many of which are currently drafting or revising AI governance policies. Southeast Asia, with its large and young digital population, has become a key battleground for debates over online safety, innovation, and regulation.
For X, the consequences are significant. Losing access to two major Southeast Asian markets damages growth prospects and adds pressure amid increasing global scrutiny. Restoring service will likely require the company to implement more robust safeguards and engage more closely with regulators.
Officials in both Malaysia and Indonesia said discussions could resume if the platform presents a credible, transparent plan to combat deepfake abuse, including improved detection tools, faster content removal, and cooperation with law enforcement.

Until then, the blocks remain in place.
As generative AI continues to evolve, the confrontation underscores a broader reality: governments are no longer willing to rely solely on voluntary moderation. The era of unchecked experimentation, they say, is coming to an end.
For now, Malaysia and Indonesia have drawn a firm line, signaling that in the race between innovation and accountability, digital safety will not be left behind.