News & Views | AAM

Ethical AI in Media: A Framework for Building Trust Through Transparency

By Kevin Rehberg, VP, Client Development | May 01, 2025

Artificial intelligence emerged as a key focus at the 2025 News Media Mega-Conference, where industry leaders convened to discuss innovation and the future of news media.

As AI becomes more integrated into operations, news organizations face growing questions about how AI is used and disclosed. During the conference, I was joined by Jessica Davis, vice president of news automation and AI product at Gannett, to discuss how publishers can leverage AI’s benefits while remaining committed to editorial integrity. Here are highlights from our discussion.

 

The AI Trust Gap

Media organizations are using AI in a variety of ways, from operations and client services to marketing, audience development and content creation. As public awareness of AI’s role in journalism grows, so do challenges, including:

  • A lack of transparency around how AI is used
  • Insufficient human oversight of AI-generated content
  • The risk of bias and misinformation
  • Data privacy concerns

Adding to these challenges is the proliferation of low-quality AI-generated content. Examples include networks of AI-generated newsletters and fabricated articles attributed to nonexistent reporters. This content competes with legitimate journalism and further undermines public trust.

 

The Eight Pillars of Ethical AI

To help media outlets navigate these challenges, AAM introduced a framework built by publishers, for publishers, to guide the responsible implementation of AI in media. The Eight Pillars of Ethical AI are:

  1. Establishing guidelines that address bias, privacy and accountability.
  2. Clearly labeling and disclosing AI use.
  3. Ensuring permission has been obtained to use AI-generated content.
  4. Assigning roles to oversee AI implementation.
  5. Monitoring and mitigating AI bias in news coverage.
  6. Complying with data privacy regulations.
  7. Providing ongoing training for staff on AI technologies.
  8. Continuously assessing and addressing risks tied to AI.

 

Davis shared how Gannett and the USA TODAY Network are putting these principles into practice across their portfolio of brands. For instance, to ensure transparency, AI-generated article summaries are accompanied by a disclaimer that explains how the key points were created and confirms they were reviewed by a journalist prior to publication. The disclosure also links to an ethical AI policy and includes a form for readers to submit feedback directly to the AI and news automation team.

Davis added that journalists initially expressed concern that AI use might reduce engagement, but Gannett’s research told a different story.

“We learned that articles with AI-generated summaries received 40% more time spent on page compared to the same article without a summary,” Davis said. “It does not deter people from reading the article, but rather encourages them to dive in.”

 

A Path Forward with Ethical AI Certification

To support publishers in their ethical AI journeys, AAM launched its Ethical AI Certification program to verify responsible AI practices within media organizations. The certification offers a roadmap for transparency and accountability, helping publishers demonstrate their commitment to ethical innovation to readers, advertisers and stakeholders.

For a detailed look at the Eight Pillars of Ethical AI, download the report.