Authors
Juliette Robin Vernay Partner
Mathilde Ponchel Partner
Expert insight
06 March 2025

Transparency Obligations for the Use of Artificial Intelligence

Even as artificial intelligence becomes an ever-greater part of our lives, and the issues surrounding protection of the data entering and leaving artificial intelligence systems dominate the debate, few consider the question of transparency in the use of artificial intelligence.

If I post an AI-generated photo on social networks, do I have to mention it? If I use artificial intelligence as part of a creative service, must I tell my client? 

Although there is currently no general legislation imposing a duty of transparency on the use of artificial intelligence for content generation, some specific provisions do apply. 

Article 5 II, 2° of French Law no. 2023-451 of June 9, 2023, aimed at regulating commercial influence and combating influencer abuses, as amended by Ordinance no. 2024-978 of November 6, 2024, stipulates that "Content communicated by the persons mentioned in Article 1 of this Law comprising images that have been (...) produced by any artificial intelligence process aimed at representing a face or silhouette shall be accompanied by the words: 'Virtual images'." This statement must be clear, legible and comprehensible on any medium used.

Article 50 of the EU AI Act of June 13, 2024, also creates new transparency obligations for providers and deployers of AI systems, but these appear to be limited to specific cases. (A deployer is any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.)

For example, providers must mark the various AI-generated contents in a machine-readable format, so that they can be easily identified as having been artificially generated or manipulated. However, this obligation does not apply when the AI systems perform an assistance function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof. 

Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake (i.e., AI-generated or manipulated image, audio or video content that resembles existing people, objects, places, entities or events and would appear deceptively authentic or truthful to a person) must disclose that the content has been artificially generated or manipulated. Where the content forms part of an artistic, creative, satirical, fictional or similar work or program, this transparency obligation must not, however, hamper the display or enjoyment of the work. 

In short, if an AI system creates or alters such content, the deployer must disclose it, unless the use is authorized by law or the content forms part of an artistic or satirical work.

AI-generated texts must likewise be clearly identified as generated or manipulated by AI where their publication is intended to inform the public about matters of public interest. However, this obligation does not apply where the content has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for its publication.

While providers are subject to general transparency obligations, deployers are only subject to such obligations in specific cases. 

Consequently, there is no general obligation to disclose the use of artificial intelligence in content production. 

However, this legislative gap may soon be filled by French Bill no. 675, tabled on December 2, 2024, which calls for the reinstatement of Article 6-6 of Law no. 2004-575 of June 21, 2004, on confidence in the digital economy, stipulating that "Anyone publishing on a social network an image generated or altered by an artificial intelligence system must explicitly disclose its origin."

Moreover, in the context of commercial relations, there seems to be an implicit transparency obligation under French consumer law. 

Indeed, failure to disclose the artificial origin of content must not have the effect of misleading the consumer; otherwise, it risks triggering liability for misleading advertising and misleading commercial practices. Article L.121-2 2° of the French Consumer Code states that a commercial practice is misleading where it is based on claims, statements or presentations that are false or likely to mislead as to the essential characteristics of the good or service, such as its substantial qualities, origin, method and date of manufacture, etc.

It is therefore likely that producing creative content as part of a service, without disclosing the artificial nature of that content to the consumer, could fall within the scope of Article L.121-2 of the French Consumer Code. 

Pending clarification of a possible general transparency obligation regarding the use of artificial intelligence, it seems wise always to clearly identify any artificially generated or manipulated content.