Quality publishers work hard to create meaningful, informative content for their readers. In turn, those loyal audiences return and support publishers’ good work through subscriptions and interactions with advertisements.
As an independent auditor with more than 25 years of digital auditing experience, AAM has analyzed billions of page and ad impressions and worked with hundreds of the top digital publishers to audit their properties, ensuring their traffic is accurate, reliable and consistent. In this article, we explain the types of website traffic, how to identify invalid traffic and the impact inaccurate data has on publishers’ businesses.
Publishers provide readers with engaging information that encourages them to return for more insights. These readers might visit a publisher’s website directly if they subscribe, or as the result of a Google search or through marketing efforts by the publisher. Publishers want real people to visit their websites and interact with their content, but if steps aren’t taken to protect their sites, a significant portion of a website’s traffic can be robotic.
Valid traffic consists of human users who visit a website and engage with the content or convert. While bots can click and visit pages, they cannot buy products, request and attend demos or perform other actions of value. Human traffic provides value to marketers, which is what makes it valid.
The Media Rating Council (MRC) defines invalid traffic (or IVT) as “traffic that does not meet certain ad serving quality or completeness criteria, or otherwise does not represent legitimate ad traffic that should be included in measurement counts.” Examples include spiders and bots, which may visit a site for many reasons, from benign search engine indexing to malicious activity such as ad fraud.
Sophisticated technology aids in identifying IVT, but it also helps to know your audience and ask common-sense questions about whether traffic patterns look abnormal.
Traffic patterns are unique to each publisher and may be affected by characteristics such as the region in which your readers are located, the times of day they typically view content, or what seasons see the most activity (think snowbirds heading to warmer climates and reading local publications).
Since bots often exhibit non-human characteristics, several signs can indicate robotic traffic: abnormally short or zero-length sessions, implausibly high page counts per visit, and activity at times of day or from regions that don’t fit the publication’s audience.
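As an illustrative sketch (not AAM’s methodology), heuristics like these can be scripted against session data. The field names and thresholds below are hypothetical examples chosen for demonstration:

```python
# Hypothetical heuristics for flagging bot-like sessions.
# Field names and thresholds are illustrative assumptions, not audit criteria.

def looks_robotic(session):
    """Return a list of reasons a session looks non-human (empty if none)."""
    reasons = []
    if session["duration_seconds"] < 1:
        reasons.append("near-zero session duration")
    if session["pages_viewed"] > 100:
        reasons.append("implausibly high page count")
    if session["pages_viewed"] / max(session["duration_seconds"], 1) > 2:
        reasons.append("pages requested faster than a human could read")
    return reasons

sessions = [
    {"id": "a1", "duration_seconds": 95, "pages_viewed": 4},
    {"id": "b2", "duration_seconds": 0, "pages_viewed": 240},
]
for s in sessions:
    flags = looks_robotic(s)
    if flags:
        print(s["id"], "->", ", ".join(flags))
```

Real detection combines many more signals, but even simple rules like these can surface sessions worth a closer look.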
Analyzing traffic spikes might also uncover surges in legitimate traffic. For example, if an article sees a traffic surge it could indicate that it was picked up by another news source, shared on a popular website or received a lot of attention on social media. Identifying unusual traffic patterns can reveal why an article performed well, which can also inform publishers’ content decisions.
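One simple way to surface such surges is to compare each day’s pageviews against a trailing average. This sketch uses a hypothetical seven-day window and 3x threshold; both are illustrative assumptions, not a standard:

```python
from statistics import mean

def find_spikes(daily_pageviews, window=7, factor=3.0):
    """Flag day indices whose pageviews exceed `factor` times the
    trailing-window mean. Window size and factor are illustrative."""
    spikes = []
    for i in range(window, len(daily_pageviews)):
        baseline = mean(daily_pageviews[i - window:i])
        if daily_pageviews[i] > factor * baseline:
            spikes.append(i)
    return spikes

traffic = [1000, 1100, 950, 1050, 1000, 980, 1020, 5200, 1010]
print(find_spikes(traffic))  # [7] -- the 5200-view day stands out
```

Whether a flagged day turns out to be a viral article or a bot swarm is exactly the judgment call the analysis above describes.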
Many analytics programs have built-in bot filtering capabilities. While these tools eliminate some bot traffic, detecting all IVT is challenging because bad actors continually create new bots designed to evade detection software. It is important for publishers to stay vigilant, know bots’ characteristics and understand how to filter them from analytics.
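A common first-pass filter is a user-agent check against known crawler keywords; maintained lists exist (for example, the IAB/ABC International Spiders & Bots List). This minimal sketch uses a few common substrings as a simplified stand-in for such a list:

```python
# Minimal user-agent screen. The keyword tuple is a simplified stand-in
# for a maintained spiders-and-bots list, not a complete filter.
BOT_KEYWORDS = ("bot", "spider", "crawler", "headless")

def is_declared_bot(user_agent):
    """True if the user agent openly identifies itself as automated."""
    ua = user_agent.lower()
    return any(keyword in ua for keyword in BOT_KEYWORDS)

hits = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0",
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
]
human_hits = [ua for ua in hits if not is_declared_bot(ua)]
print(len(human_hits))  # 1
```

Note that this only catches bots that declare themselves; as the paragraph above warns, sophisticated IVT spoofs human user agents, which is why behavioral signals matter as well.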
Invalid traffic can negatively impact publishers’ internal and external decisions. Questionable data leads to inaccurate assessments of what promotions are driving traffic and what content is performing well, which can skew future business decisions.
Inaccurate data also interferes with publishers’ relationships with advertisers, who may be unable to determine whether their ads are reaching their intended audiences if the data contains bot traffic. Separating the humans from the bots helps publishers get better, more accurate data that can strengthen advertiser relationships.