Poland Urges EU to Investigate TikTok for AI-Driven Misinformation Campaign


Poland Calls for Investigation into TikTok Over Disinformation

Poland’s Ministry of Digital Affairs took a significant step this week by formally requesting that the European Commission investigate TikTok. The complaint centers on allegations that the platform has failed to adequately manage a widespread disinformation campaign using AI-generated content that promotes the idea of Poland exiting the European Union, often referred to as “Polexit.” According to Polish authorities, TikTok may have breached its obligations as a Very Large Online Platform (VLOP) under the Digital Services Act (DSA).

Concerns Over AI-Generated Content

Secretary of State Dariusz Standerski expressed alarm over the potential dangers posed by these synthetic audiovisual materials. He highlighted that such content threatens public safety, information security, and the very foundations of democracy both in Poland and throughout the EU. Reports indicate that some videos featured young women advocating for “Polexit,” possibly aimed at younger viewers. Notably, an analytics group named Res Futura identified a TikTok account called “Prawilne Polki,” which shared videos showing women in patriotic attire endorsing this controversial viewpoint.

One video in particular drew attention; in it, the creator stated, “I want Polexit because I want freedom of choice, even if it will be more expensive. I don’t remember Poland before the European Union, but I feel it was more Polish then.” Officials find such narratives troubling, viewing them as evidence of a well-orchestrated disinformation effort.

Characteristics of a Coordinated Disinformation Campaign

According to Standerski, the content found in TikTok’s Polish-language segment aligns with patterns indicative of a “coordinated disinformation campaign.” He criticized TikTok for its inadequate mechanisms for moderating AI-generated content and emphasized its failure to maintain transparency about the origins of such material. This lapse in responsibility appears to undermine the goals set forth by the DSA, which aims to protect users from misinformation.

Four-Point Action Request

In its formal communication to Henna Virkkunen, the Executive Vice President for Tech Sovereignty, Security, and Democracy, Poland set out four requests:

  1. Initiation of investigatory proceedings to examine potential violations of DSA provisions concerning risk management and content moderation.
  2. A detailed report from TikTok on the scale, nature, and dissemination of the flagged content, including measures taken to address it.
  3. Consideration of interim measures to limit the spread of AI-generated content that encourages Polish withdrawal from the EU.
  4. Coordination with Poland’s Digital Services Coordinator and notification to national authorities of the outcome of any proceedings.

Systemic Risk Management Shortcomings

Standerski emphasized that existing evidence suggests TikTok has failed to implement sufficient moderation mechanisms for AI-generated content. This failure, he argued, undermines the DSA’s objectives of preventing disinformation and safeguarding users. He also pointed to the broader implications of the situation, noting the link between the scale of disinformation campaigns and potential threats to political stability and democratic integrity.

As a VLOP under DSA regulations, TikTok is required to conduct systemic risk assessments, engage in independent audits, and offer transparency reporting. This includes identifying and mitigating risks related to the spread of unlawful content and potential disruptions in civic discourse.

Heightened Fears Over AI Disinformation

Poland’s complaint is one of the first formal enforcement requests under the DSA to specifically target AI-driven disinformation on a major social media platform. The case reflects escalating concern among EU member states about the use of synthetic media to manipulate public opinion and undermine democratic systems.

With the DSA fully applicable since February 2024, the European Commission is empowered to investigate large platforms like TikTok and to impose fines of up to 6% of their global annual revenue for violations. The law requires platforms to assess and mitigate systemic risks, including manipulation of their services and threats to democratic processes.

Background of TikTok’s EU Scrutiny

TikTok has repeatedly come under scrutiny from the European Commission over its compliance with the DSA. In February 2024, the Commission opened formal proceedings against the platform for potential infringements related to the protection of minors, advertising transparency, and risk management of addictive design and harmful content.

In light of these developments, TikTok’s future in the EU increasingly depends on its ability to adapt to a regulatory landscape focused on user safety and countering disinformation.
