REGULATING THE UNSEEN: THE EU AI ACT AND THE CHALLENGE OF SUBLIMINAL TECHNIQUES IN DISINFORMATION

Jaroslav Denemark

Abstract

This article examines the regulation of AI-enabled subliminal techniques used in disinformation under the EU Artificial Intelligence Act (AI Act). It explores how subliminal methods—such as deepfakes, psychographic microtargeting, and bot-driven amplification—challenge democratic discourse by exploiting psychological vulnerabilities beyond conscious awareness. Through a doctrinal analysis of the AI Act, the article assesses the prohibition of subliminal techniques under Article 5(1)(a), the ambiguous notion of “significant harm,” the absence of high-risk classification for subliminal systems, and the limited obligations imposed on systemic-risk general-purpose AI models. Particular attention is paid to the transparency rules of Article 50, which are undermined by broad exemptions and the limited scope of responsibilities for very large online platforms. The findings reveal that while the AI Act acknowledges the risks posed by AI-driven disinformation, its current framework only partially addresses subliminal manipulation. The article concludes that significant regulatory gaps remain, especially in ensuring effective protection against covert AI-driven persuasion strategies in the information space.