en.Wedoany.com Reported - The European Commission has officially published draft guidelines on Article 50 of the AI Act, systematically clarifying the four categories of transparency obligations that providers and deployers of generative AI systems must fulfill, and providing an operational compliance framework ahead of the full enforcement of the relevant provisions on August 2, 2026. According to an announcement by the EU AI Office, the draft guidelines are being developed in parallel with the ongoing Code of Practice on Transparency for AI-Generated Content, with the aim of harmonizing the interpretation and enforcement of Article 50 across Member States.
The four categories of transparency obligations cover different segments of the AI value chain and different content forms. The first targets AI systems that interact directly with humans: providers must ensure that users are aware they are interacting with an AI rather than a human, unless this is obvious to a reasonably well-informed user. The second requires providers of AI systems that generate synthetic audio, image, video, or text content to mark the output in a machine-readable format so that it is detectable as artificially generated or manipulated. The third stipulates that deployers of emotion recognition systems or biometric categorization systems must inform the natural persons exposed to them that the system is in operation. The fourth requires deployers of deepfakes, and of AI-generated text published on matters of public interest, to clearly disclose that the content has been artificially generated or manipulated, presented in a clear and distinguishable manner at the user's first exposure to it.
The second category of obligation is the technical core of the guidelines. Marked content must be machine-readable, and providers are expected to combine technical means such as active marking, invisible watermarking, digital metadata embedding, and fingerprinting to ensure that the marking is effective, robust, and interoperable. According to a statement from the AI Office, the guidelines make clear that no single technique satisfies the requirements of every scenario; providers must adopt a multi-layered marking strategy and deploy technical solutions at different stages of the content value chain. For generative AI systems already placed on the market before August 2, 2026, providers must meet the technical marking and detectability requirements no later than December 2, 2026.
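To make the "machine-readable and detectable" requirement concrete, the sketch below pairs a piece of generated content with a minimal provenance record and a detector that verifies it. The record schema, field names, and hash binding are hypothetical illustrations, not a format prescribed by the AI Act or the draft guidelines.

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical, minimal provenance record: NOT a format prescribed by
# the AI Act or the draft guidelines, just an illustration of what a
# machine-readable "artificially generated" marker can look like.
def make_provenance_record(content: bytes, generator: str) -> dict:
    return {
        "claim": "ai_generated",  # machine-readable assertion
        "generator": generator,   # which system produced the content
        "created": datetime.now(timezone.utc).isoformat(),
        # The hash binds the record to this exact content, so
        # detection fails if the content is altered after marking.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def is_detectably_ai_generated(content: bytes, record: dict) -> bool:
    """Check that the record declares AI generation and still matches
    the content it was issued for."""
    return (
        record.get("claim") == "ai_generated"
        and record.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )
```

A metadata record alone is fragile: it is lost when content is re-encoded or screenshotted, which is precisely why the guidelines call for layering several techniques (watermarks surviving transformations, fingerprints, metadata) rather than relying on one.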
The fourth category of obligation sets out two practical exemptions. Where a deepfake forms part of a work that clearly falls within the artistic, creative, satirical, or fictional categories, disclosure is limited to an appropriate form that does not hamper the display or enjoyment of the work. AI-generated text on matters of public interest may be exempt from the disclosure requirement where it has undergone human review or editorial control and a natural or legal person assumes editorial responsibility for its publication; the deployer must, however, retain internal documentation demonstrating that the human review actually took place.
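In practice, the documentation duty for the editorial-control exemption amounts to keeping an auditable record of who reviewed which text and who bears editorial responsibility. A minimal sketch using an append-only log; the field names are hypothetical, since the guidelines do not prescribe a format:

```python
import hashlib
import json
from datetime import datetime, timezone

# Minimal append-only review log. Field names are illustrative
# assumptions, not prescribed by the draft guidelines.
def record_editorial_review(log_path: str, text: str,
                            reviewer: str, responsible_entity: str) -> dict:
    entry = {
        # Hash identifies the exact reviewed text without storing it.
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "reviewed_by": reviewer,
        # The natural or legal person assuming editorial responsibility.
        "editorial_responsibility": responsible_entity,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    # Append as one JSON line so earlier entries are never rewritten.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

An append-only structure is a deliberate choice here: market surveillance authorities would need evidence that the review happened before publication, not a record reconstructed afterwards.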
Providers of general-purpose AI models are simultaneously bound by Article 50(2). A model is considered general-purpose when it meets both of the following conditions: training compute exceeding 10²³ FLOP and the capability to generate language, text-to-image, or text-to-video content. The draft guidelines also clarify the transparency exemption conditions for open-source models. For providers of general-purpose models, these obligations are cumulative with Article 53, which covers publication of training data summaries, adoption of copyright compliance policies, and annual updates of compliance information for contracting parties.
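The order of magnitude of the 10²³ FLOP criterion can be sanity-checked with the widely used approximation that training compute ≈ 6 × parameters × training tokens. That approximation is a community heuristic, not part of the guidelines, and compute is only one of the two conditions (the generative-capability condition must also be met):

```python
# Indicative compute threshold from the draft guidelines.
GPAI_FLOP_THRESHOLD = 1e23

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    # Common "6ND" heuristic for dense transformer training compute;
    # an approximation, not a method defined in the guidelines.
    return 6.0 * n_params * n_tokens

def meets_compute_criterion(n_params: float, n_tokens: float) -> bool:
    # Checks only the compute condition; the capability condition
    # (language, text-to-image, or text-to-video generation) is a
    # separate, cumulative requirement.
    return estimated_training_flop(n_params, n_tokens) >= GPAI_FLOP_THRESHOLD

# e.g. a 7e9-parameter model trained on 2e12 tokens:
# 6 * 7e9 * 2e12 = 8.4e22 FLOP, just below the 1e23 threshold.
```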
On the compliance timeline, the draft guidelines reiterate that the transparency obligations under Article 50 become enforceable on August 2, 2026. The maximum fine is 15 million euros or 3% of total worldwide annual turnover, whichever is higher, and violations fall within the routine inspection scope of market surveillance authorities. Following the political agreement on the comprehensive AI Act package reached on May 7, 2026, the compliance deadline for the watermarking obligations under Article 50(2) was extended to December 2, 2026, giving systems already circulating on the market an additional buffer period.
The final version of the voluntary Code of Practice on Transparency, developed by multiple stakeholders, is expected to be published in June 2026. Signing and adhering to the Code does not constitute conclusive proof of compliance, but it can serve as supporting evidence when market surveillance authorities assess compliance.
This article is compiled by Wedoany. All AI citations must indicate the source as "Wedoany". If there is any infringement or other issues, please notify us promptly, and we will modify or delete it accordingly. Email: news@wedoany.com