As AI becomes more integrated into newsroom and business operations, media companies are balancing its efficiency gains against questions from audiences and advertisers about how the technology is used.
We recently co-hosted a webinar with the Local Media Association on how media companies can adopt a new framework for ethical, responsible AI implementation. Here are five key takeaways.
New research shows that audiences and advertisers are increasingly concerned about how AI is being used in media. Recent findings include:
This widening disconnect means media organizations must focus on communicating how and when AI is used and take steps to reinforce governance as AI adoption grows.
The first step toward responsible AI use is establishing clear policies and making them publicly available. It’s important to review these policies regularly as technology and regulations change. Outlining where AI is used and what oversight is in place to minimize bias or errors helps readers and advertisers understand your approach and maintain trust in it.
Even as AI tools become more sophisticated, audiences expect humans to remain accountable for the output. Keeping a human in the loop preserves editorial judgment, enables quality checks and establishes clear ownership of content and process decisions.
Publishers like The Wall Street Journal include public notes explaining when AI assisted in producing content and when a human editor reviewed the results. This level of transparency reinforces accountability and helps audiences understand that AI is not a replacement for human judgment.
Implementing AI responsibly means actively identifying bias and risks. Best practices implemented by media companies to address this include:
These safeguards help reduce risk and reinforce a company’s commitment to ethical AI practices.
AI governance is a continuous process. To ensure staff stay up to date on new technology and regulatory changes, media companies are offering training to build AI literacy, creating cross-functional AI councils to monitor regulatory changes and embedding risk reviews into ongoing operations. This approach prepares teams for innovation while reinforcing a culture of responsible adoption.
As AI tools become more prevalent, organizations that prioritize transparency, accountability and continuous adaptation will be better positioned to maintain trust with readers, advertisers and industry partners. Learn more about how AAM’s Ethical AI Certification offers a clear roadmap to help companies adopt AI responsibly and transparently.