A major US teachers’ union has announced it is leaving the social media platform X, citing the spread of sexualized, artificial intelligence–generated images of children as the decisive reason for its exit. The move adds to growing pressure on technology companies over the misuse of generative AI and raises renewed concerns about online child safety.
The American Federation of Teachers (AFT), one of the largest education unions in the United States, said it could no longer maintain a presence on X after encountering what it described as disturbing and unacceptable content involving minors. Union leaders said that while they had long been troubled by the platform’s direction, the appearance and circulation of sexualized AI-generated images of children crossed a clear moral and ethical line.
In a statement, the union’s leadership said educators have a responsibility to protect children not only in classrooms, but also in the digital spaces where young people are increasingly vulnerable. Remaining on a platform that allows such material to appear, even indirectly, would contradict the values teachers are meant to uphold, the union said.
The decision marks one of the most prominent departures from X by a major US institution in recent months. Since the platform’s ownership and policies shifted in recent years, it has faced criticism from civil society groups, advertisers, and public institutions over changes to content moderation and safety enforcement. The teachers’ union said its concerns had been building over time, but that the use of AI tools to create or manipulate images of children in sexualized ways was “the last straw.”

Artificial intelligence image generators have rapidly improved in realism and accessibility, allowing users to create convincing images with minimal technical skill. While such tools have legitimate creative and educational uses, they have also been misused to produce non-consensual and exploitative content. Child safety advocates warn that even AI-generated images that do not depict real children can still normalize abuse, retraumatize victims, and be used for grooming or harassment.
The AFT said the presence of such material on a widely used social platform poses serious risks. Union officials stressed that teachers increasingly see the consequences of online harm in schools, from bullying and harassment to the psychological effects on young people exposed to explicit or violent content. They argued that platforms must take stronger responsibility for preventing the creation and spread of AI-generated material that exploits children.
X has previously said it is committed to combating illegal content and protecting users, including minors. The company has rules against sexual content involving children and has stated that it removes such material when detected. However, critics argue that enforcement has been inconsistent and that rapid advances in AI have outpaced existing safeguards. The teachers’ union said that whatever policies exist on paper, the reality on the platform no longer aligned with its mission.
As part of its withdrawal, the AFT said it would stop posting on X and encourage its members to seek information and engagement through other channels. While acknowledging that leaving a major platform could reduce its immediate reach, the union said ethical considerations outweighed concerns about visibility or influence.
The move reflects a broader debate about how public institutions should engage with social media platforms that struggle to control harmful content. Some organizations have chosen to stay and advocate for reform from within, while others have opted to leave entirely as a form of protest. The teachers’ union’s decision adds weight to the argument that continued participation can be seen as tacit approval of platform practices.
Experts say the controversy also highlights the urgent need for clearer rules around generative AI. Lawmakers in the US and elsewhere have begun discussing regulations to address deepfakes, non-consensual imagery, and AI-generated child sexual abuse material, but progress has been uneven. Educators and child advocates argue that regulation must keep pace with technology, rather than reacting after harm has already occurred.
For teachers, the issue is deeply personal. Many report dealing with the fallout of online exploitation and harassment in their daily work, supporting students who have been targeted or exposed to harmful content. The union said its decision to leave X was made with those realities in mind, emphasizing that digital platforms should not make educators’ jobs harder by allowing dangerous content to circulate.
The AFT’s exit is unlikely to be the last such move as debates over AI, online safety, and platform responsibility intensify. Whether it prompts meaningful changes at X or accelerates broader regulatory action remains to be seen. What is clear is that the misuse of AI to sexualize children has become a flashpoint that outweighs even the concerns about social media that institutions had long tolerated.

In stepping away, the teachers’ union said it hopes to send a clear message: protecting children must come before engagement metrics, technological experimentation, or platform loyalty. For educators, it said, there is no compromise on that principle.