Transparency and trust: why it is becoming essential to indicate when content is AI-generated

Posted 11/6/25
6 min read

Nearly 9 out of 10 consumers want to know whether content is AI-generated. Here’s why transparency is becoming a key trust factor for brands.

The era of algorithmic transparency

The rise of generative artificial intelligence is transforming how content is produced, from text and visuals to videos and marketing campaigns. But this growing automation comes with a powerful expectation: transparency.
According to Getty Images (2024), nearly 90% of consumers want to know when a visual has been created using AI.
The question is no longer whether brands should disclose AI use, but how they can do so without compromising creativity or reputation.

What does AI transparency mean?

AI transparency refers to clearly informing audiences about the use of artificial intelligence in creating content — whether text, image, or video.
It involves three essential elements:

  • When AI intervenes (writing, retouching, or synthesis)
  • How it contributes (assisted, generated, or co-created)
  • Why it is used (efficiency, creativity, consistency)

AI transparency aims to preserve trust while promoting responsible and ethical use of artificial intelligence in creative processes.
It’s not about “calling out” the technology; it’s about making its contribution visible and understandable, much like crediting a creative collaborator.

Rising consumer expectations: trust above all

Audiences now expect clarity about how much of the content they consume is human-made versus AI-assisted.
Several studies cited throughout this article, from Getty Images (2024) to RWS Group (2025) and Usercentrics (2024), support this trend.

Beyond ethics, this issue touches the credibility of brand messaging: consumers want to be sure that the information they see, hear, or read isn’t being manipulated.
When AI-generated content is not disclosed, it creates cognitive dissonance and may feel less authentic.
In a landscape flooded with synthetic media, trust becomes the ultimate differentiator.

The risks of opacity for brands

Failing to disclose AI use carries several major risks:

  • Reputational risk – A brand seen as hiding its practices loses credibility (see the GUESS x Vogue case).
  • Regulatory risk – The European AI Act makes disclosure of AI-generated or manipulated content mandatory.
  • Engagement risk – Lack of transparency erodes consumer confidence, impacting engagement and conversion rates.

RWS Group’s Unlocked 2025 report reveals that 62% of European consumers would stop following a brand if they found out it used AI without disclosing it.

The legal framework: Europe leads the way

Europe is setting the standard for AI disclosure.
The European AI Act establishes a requirement to disclose AI-generated or manipulated content likely to mislead the public.
In France, the CNIL recommends that organizations explicitly inform users whenever AI tools are involved in producing public communications.
The IMPACT AI Report (2024) calls for “responsible AI,” grounded in transparency and traceability.
IBM Think (2024) states: “Transparency is not a risk; it’s a prerequisite for digital trust.”
Similarly, the NIM Institute (2024) warns that “transparency without ethical governance remains incomplete.”
Together, these reports confirm one reality: AI transparency is becoming a legal and moral standard.

Case study: GUESS x Vogue — when lack of transparency sparks backlash

In August 2025, global fashion brand GUESS faced a wave of criticism after releasing a campaign in Vogue featuring AI-generated models.
Developed with creative agency Seraphinne Vallora, the campaign aimed to highlight how fashion evolves with AI. The visuals were technically stunning — yet the tiny “AI-generated imagery” caption was barely visible.
According to News.DesignRush (2025), the unclear disclosure sparked a social media storm. Many consumers felt misled, accusing GUESS and Vogue of passing off synthetic imagery as real photography.
Chief AI Officer (2025) described the scandal as a “cultural flashpoint,” marking a turning point in how AI’s role in advertising is perceived.

The lesson is simple: a poorly placed or hidden AI disclaimer can damage even the strongest brand reputation.
Transparency doesn’t limit creativity — it legitimizes it.

Transparency as a strategic advantage

Being transparent about AI use demonstrates digital maturity and integrity.
According to Usercentrics (2024), 62% of consumers say they trust brands more when they clearly communicate about AI use.
Leaders like Getty Images, IBM, and Canva have already adopted AI-origin labeling practices to reinforce brand credibility.

Strategic benefits:

  • Enhanced brand credibility through openness and accountability.
  • Ethical innovation positioning — showing that AI is used responsibly, not secretly.
  • Future-proof compliance, building trust before legal mandates take effect.

Communication impact:

Transparent communication about AI use drives higher engagement and positive brand sentiment. Consumers reward honesty and clarity; vague or evasive messaging, on the other hand, breeds distrust.

Best practices for disclosing AI-generated content

1. Clear and visible labeling

Use simple, understandable wording such as “This content was AI-assisted.” Avoid overly technical phrasing.
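
As a concrete illustration, here is a minimal sketch of how such a label might be rendered on a web page. The function name, CSS class, and wording variants are illustrative assumptions, not a standard implementation.

```typescript
// Minimal sketch: render a visible AI-disclosure badge under a piece of content.
// Label wording, class names, and DOM structure are illustrative assumptions.

type AiContribution = "ai-assisted" | "ai-generated" | "human-ai-co-created";

const LABELS: Record<AiContribution, string> = {
  "ai-assisted": "This content was AI-assisted.",
  "ai-generated": "This content was generated with AI.",
  "human-ai-co-created": "This content was co-created with AI.",
};

/** Appends a plainly worded, visible disclosure label after the content element. */
function attachAiDisclosure(content: HTMLElement, kind: AiContribution): void {
  const badge = document.createElement("p");
  badge.className = "ai-disclosure"; // style it to be legible, not buried in fine print
  badge.textContent = LABELS[kind];
  content.insertAdjacentElement("afterend", badge);
}
```

Calling attachAiDisclosure(article, "ai-assisted") would place the badge directly below the content it describes, which is exactly where the GUESS x Vogue case showed it matters most.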

2. Internal disclosure policies

Define when and where to disclose AI use, how to phrase it, and ensure a human validation step before publishing any content.
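
One way to make such a policy enforceable rather than aspirational is to codify it as data that publishing tools can check automatically. The sketch below is a hypothetical example; the field names and the threshold are assumptions, not a prescribed standard.

```typescript
// Hypothetical sketch of an internal disclosure policy codified as data,
// so publishing tooling can enforce it before content goes live.

interface DisclosurePolicy {
  requiresLabel: (aiShare: number) => boolean; // when to disclose
  labelWording: string;                        // how to phrase it
  humanReviewRequired: boolean;                // validation step before publishing
}

const editorialPolicy: DisclosurePolicy = {
  requiresLabel: (aiShare) => aiShare > 0, // disclose whenever AI contributed at all
  labelWording: "This content was AI-assisted.",
  humanReviewRequired: true,
};

/** Returns the steps still required before a piece of content may be published. */
function requiredSteps(aiShare: number): string[] {
  const steps: string[] = [];
  if (editorialPolicy.requiresLabel(aiShare)) {
    steps.push(`Add label: "${editorialPolicy.labelWording}"`);
  }
  if (editorialPolicy.humanReviewRequired) {
    steps.push("Obtain human approval before publishing");
  }
  return steps;
}
```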

3. Team training

Educate marketing and communication teams on responsible automation and ethical disclosure.
Resources like Upvoty (2024) and KNB Communications (2024) offer practical frameworks.

4. Contextual transparency

Explain why AI was used — for efficiency, creative exploration, or content consistency.
Even a short clarification enhances understanding and builds trust.

5. Content traceability and validation

Implement a clear human review process before publishing AI-generated or AI-assisted content.
Beyond metadata or watermarking, brands should leverage collaborative editing and review tools that enable annotation, correction, and approval at each step.
Platforms like MTM provide this comprehensive workflow: AI content editing, version tracking, validation via review links, and versioned archiving.
Such systems ensure transparent AI traceability and uphold the highest content quality and ethical standards.
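
To illustrate what such a workflow can look like in practice, the hypothetical sketch below models a content record that carries AI-provenance metadata and cannot be published without human approval. The types and function names are assumptions for illustration, not MTM’s actual API.

```typescript
// Minimal sketch of AI-content traceability: every draft carries provenance
// metadata and must pass a human review gate before it can be published.
// Types and function names are illustrative, not a specific platform's API.

type ReviewState = "draft" | "in-review" | "approved" | "rejected";

interface ContentRecord {
  id: string;
  body: string;
  aiTools: string[];        // which AI tools contributed to this content
  version: number;          // incremented on every edit for versioned archiving
  reviewState: ReviewState;
  reviewer?: string;        // the human who approved or rejected the content
}

function approve(record: ContentRecord, reviewer: string): ContentRecord {
  if (record.reviewState !== "in-review") {
    throw new Error("Only content under review can be approved.");
  }
  return { ...record, reviewState: "approved", reviewer };
}

function publish(record: ContentRecord): void {
  if (record.reviewState !== "approved") {
    throw new Error("Human approval is required before publishing.");
  }
  // Push to the CMS; aiTools and version metadata travel with the content
  // so the AI contribution stays traceable after publication.
}
```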

The future: toward standardized AI transparency

The next step for AI ethics may involve standardized transparency labels or certifications, such as:

  • “AI Disclosed” or “Human Verified” tags attached to verified content.
  • Adoption of frameworks like the C2PA standard (developed by a coalition whose members include Adobe and the BBC) to authenticate media provenance.
  • Integration of transparency verification tools in browsers, social platforms, and CMS solutions.

These initiatives point toward a single goal: re-establishing a trust contract between brands and audiences in an era of synthetic content.
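
To make the idea concrete, the sketch below shows a hypothetical provenance manifest loosely inspired by C2PA-style content credentials. It is not the real C2PA schema; every field name here is an assumption for illustration only.

```typescript
// Hypothetical shape of a provenance manifest attached to a media asset,
// loosely inspired by C2PA-style content credentials. NOT the actual C2PA
// schema; all field names are illustrative assumptions.

interface ProvenanceManifest {
  asset: string;         // identifier or hash of the media file
  generator: string;     // tool or model that produced or edited the asset
  aiGenerated: boolean;  // disclosure flag that platforms can surface to viewers
  signedBy: string;      // issuer attesting to the manifest's integrity
  issuedAt: string;      // ISO 8601 timestamp
}

// A browser, social platform, or CMS plugin could read a verified manifest
// and decide whether to show an "AI Disclosed" badge next to the asset.
function shouldShowAiBadge(manifest: ProvenanceManifest): boolean {
  return manifest.aiGenerated;
}
```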

Conclusion: toward enhanced trust

Transparency in AI-generated content is no longer optional — it’s an emerging global standard.
Artificial intelligence should be viewed as a creative partner, not a source of deception.
By clearly disclosing AI use, brands build credibility, loyalty, and long-term trust.
Declaring that content was AI-generated isn’t a weakness — it’s proof that your brand values honesty, creativity, and responsibility.

FAQ: How to manage AI content transparency in your brand strategy

  1. Why should AI-generated content be disclosed?
    To guarantee transparency, reinforce public trust, and comply with upcoming regulations (AI Act, CNIL).
  2. Is there a law requiring AI disclosure?
    Yes. The European AI Act mandates labeling of AI-generated or manipulated content, with its transparency obligations phasing in.
  3. How can brands disclose AI use without harming their image?
    By using clear, educational labels such as “This content was AI-assisted.”
  4. Do consumers trust AI-generated content?
    Yes, if they’re clearly informed about human-AI collaboration (Getty Images, 2024).
  5. Which tools help ensure AI transparency?
    Platforms like MTM support tracking, validation, and documentation of AI-generated content.
