
EU Launches Probe Into X Over AI Generated Sexual Images

In Tech & AI
January 26, 2026

European Union regulators have opened a formal investigation into social media platform X following widespread concern over the circulation of manipulated, sexualised images generated by its artificial intelligence chatbot, Grok. The European Commission said it is examining whether the platform adequately assessed and mitigated risks linked to the rollout of the AI tool, particularly regarding the spread of illegal content. The inquiry follows public backlash after images depicting undressed women and minors were shared online, prompting condemnation from regulators and lawmakers across several jurisdictions. European officials described the material as unlawful and unacceptable, stressing that platforms operating within the bloc are required to prevent the dissemination of harmful content. The move places renewed pressure on technology companies deploying generative AI tools, as authorities scrutinise whether innovation has outpaced safeguards designed to protect users and vulnerable groups.

The investigation comes shortly after Britain’s media regulator opened its own inquiry into Grok, while several Asian countries temporarily blocked access to the chatbot over similar concerns. EU officials said the probe will focus on whether X complied with obligations under the Digital Services Act, which requires large online platforms to identify systemic risks and take proactive steps to address them. Regulators believe the company may not have conducted a sufficient risk assessment before introducing Grok’s image generation features in Europe. While the platform has said its owner introduced additional safeguards and restricted certain image generation functions in some jurisdictions, EU officials indicated these steps may not fully address underlying risks associated with the service’s design and deployment.

Under EU law, companies found to have breached the Digital Services Act can face fines of up to six percent of their global annual turnover, along with potential interim measures if regulators deem adjustments inadequate. European officials emphasised that non-consensual sexual deepfakes represent a serious violation of personal rights and dignity, particularly when they involve women and children. Lawmakers have argued that the case exposes broader weaknesses in how rapidly evolving AI technologies are monitored and enforced once they are made available to the public. The investigation will also assess whether X's internal systems for content recommendation and moderation contributed to the amplification of harmful material across the platform.

The EU's action adds to a growing regulatory confrontation between Brussels and major US technology firms, a dynamic that has already drawn political criticism from Washington. European authorities have insisted that enforcement of digital rules is aimed at protecting citizens rather than targeting specific companies or countries. As scrutiny intensifies, the case is likely to influence ongoing debates around AI governance, platform accountability, and the balance between technological development and public safety. The outcome of the investigation could set an important precedent for how AI-driven features are introduced and regulated within the European digital market.