A Northern Ireland politician has announced she is quitting social media platform X, formerly known as Twitter, citing deep concerns over the misuse of artificial intelligence and the platform’s failure to adequately address deepfake abuse linked to its Grok AI system.
Cara Hunter, a Social Democratic and Labour Party (SDLP) Member of the Legislative Assembly (MLA) for East Londonderry, said her decision was driven by what she described as a “toxic and unsafe environment” on the platform, particularly for women in public life. Hunter has previously been the target of a deepfake video in which her likeness was digitally manipulated and circulated online without her consent, an experience she says has had a lasting impact.

In a statement explaining her departure, Hunter said she could no longer justify remaining on a platform that, in her view, has not taken meaningful responsibility for the harms enabled by rapidly advancing generative AI tools. She pointed specifically to Grok, the AI chatbot integrated into X, arguing that its presence has coincided with a surge in abusive, sexualised, and manipulated content.
“Social media should be a place for democratic debate and public engagement,” Hunter said. “Instead, it has become a space where women are routinely targeted, humiliated, and silenced through technology that is evolving faster than the safeguards meant to control it.”
Her announcement comes amid growing controversy surrounding Grok and similar AI systems, which critics say have made it easier to create realistic deepfake images and videos. These tools, while marketed as innovative conversational assistants, have raised alarms among politicians, regulators, and civil society groups over their potential to generate non-consensual and harmful material.
Hunter’s experience with deepfakes dates back several years, to when a manipulated video featuring her image circulated during an election period. Although the content was eventually challenged and removed, she has said the damage, both personal and political, was already done. She argues that the widespread availability of AI tools has only intensified such risks.
“Deepfakes are no longer a fringe issue,” she said. “They are a direct threat to personal safety, democratic participation, and trust in public life.”
Her decision to leave X has resonated across Northern Ireland’s political landscape. The Green Party in Northern Ireland has also announced it will stop using the platform, citing similar concerns about online abuse, misinformation, and the lack of effective moderation. Party representatives have described X’s problems as systemic rather than accidental, arguing that the platform’s current structure prioritises engagement over safety.
The controversy extends well beyond Northern Ireland. Across the UK and internationally, governments and regulators are scrutinising how AI-powered platforms handle harmful content. The rapid rise of generative AI has outpaced existing legal frameworks, prompting calls for tougher laws and clearer accountability for technology companies.
In the UK, the debate has intensified around the enforcement of online safety rules, particularly those designed to protect users from illegal and abusive material. Critics argue that platforms like X have been slow to act, relying heavily on automated systems that struggle to keep up with the volume and sophistication of AI-generated content.
Supporters of stronger regulation say the issue is not innovation itself, but the absence of guardrails. They warn that without firm oversight, AI tools can be weaponised against individuals, especially women, minorities, and those in public-facing roles.
X has repeatedly stated that it is committed to free expression and that it takes safety seriously. The company says it is continuously improving its moderation systems and AI safeguards. However, for critics like Hunter, such assurances ring hollow without visible and effective action.
“Freedom of speech does not mean freedom to abuse,” she said. “When platforms allow technology to be used as a tool of intimidation and sexual harassment, they are complicit in the harm that follows.”
The issue of deepfakes has also raised broader concerns about democracy. Experts warn that manipulated images, videos, and audio clips could be used to spread disinformation, undermine elections, or discredit political figures. As AI-generated content becomes more convincing, distinguishing truth from fabrication grows increasingly difficult.
Hunter’s exit from X highlights a growing dilemma for politicians and public figures: whether remaining on major social media platforms is worth the personal cost. While X remains a powerful tool for reaching voters and shaping public debate, some argue that its current environment undermines those very goals.
For now, Hunter says she will continue engaging with constituents through alternative channels and platforms she believes offer stronger protections. She has called on fellow politicians to reflect seriously on their digital presence and to push for meaningful reform in how tech companies deploy and regulate AI.
“This is not about stepping away from public discourse,” she said. “It’s about demanding a digital space that is safe, ethical, and worthy of the society it claims to serve.”
As the debate over AI, deepfakes, and online responsibility continues, Hunter’s decision adds to mounting pressure on social media companies to confront the unintended consequences of the technologies they promote — before more voices choose to walk away.