Be Aware Of What Analytics Don't Tell You

The web community lives in a world of numbers. Tools like Google Analytics, Mint, and Campaign Monitor do a wonderful job of putting concrete numbers in the hands of site owners, allowing them to make informed decisions about their business.

Last week ReadWriteWeb ran an article, “The Death of the Pageview,” which provided an overview of some of the ways the industry has moved from simple “clicks” or “views” to more meaningful metrics. While it is great that we can now measure campaign conversion rates or watch the cow paths form through the sites we build, we must always ask ourselves whether the analytics are measuring the correct things, or whether numbers or trends can even help answer a particular question.

Sometimes what is measured isn’t what you need

It’s long been a pet peeve of mine that statistics packages report on computer screen resolution in the context of viewing a web site. Some generic hardware information like color depth might be [or may have once been] useful, but users manage windows on their computers in such a variety of ways, and run browsers with varying numbers of toolbars and similar chrome, that you can’t make assumptions about how much screen real estate a web page actually gets based on the dimensions of a screen.

Google Browser Size does an excellent job at visualizing the broad range of viewing scenarios in the wild. The description of the tool reads in part:

The sizes represented in this contour are client area sizes, not browser window sizes. This means they represent the size of the browser without the title bar, toolbars, status bars, etc., and thus give a true representation of how much content can be seen by a particular segment of the Web-using population.

Unfortunately, to date, Google Analytics does not track this viewport size or “true representation,” and I’ve seen undue weight placed on the tracked “Screen Resolutions” report.
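If you want the “true representation” yourself, the client area is something you can measure in the page and report as a custom value. The sketch below is a minimal illustration, not a real analytics API: the `bucket` and `viewportLabel` helpers are my own hypothetical names, and how you send the label on depends entirely on your tool.

```typescript
// Round a pixel dimension into a coarse bucket so the report stays
// readable (e.g. 1366 -> 1300) instead of one row per exact size.
function bucket(px: number, step: number = 100): number {
  return Math.floor(px / step) * step;
}

// Build a label like "1300x600" from a measured client area.
function viewportLabel(width: number, height: number): string {
  return `${bucket(width)}x${bucket(height)}`;
}

// In a browser you would feed it the actual client area, e.g.:
//   const label = viewportLabel(
//     document.documentElement.clientWidth,
//     document.documentElement.clientHeight
//   );
// and then pass `label` to your analytics tool as a custom value.
```

The bucketing step matters: raw viewport sizes vary by the pixel, so without rounding you end up with thousands of near-duplicate rows instead of a usable distribution.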

“Using” can be passive

In recent weeks Netflix has undergone a few design changes, and in the process deemphasized a feature called “Netflix Friends,” which let you pass notes on movies to your listed contacts and see your friends’ ratings of a movie in a prominent area of its listing page. There’s been some community dust-up over the change, which prompted Netflix VP of Product Management, Todd Yellin, to make an appearance on the company blog defending the changes. Every service makes changes, and there’s always a segment of the community that rebels against change, so this face-off isn’t unique, but this point by Yellin intrigued me:

Friends is a feature on the Netflix Web site that’s been used by less than two percent of all subscribers since we added the feature in 2004.

He’s gotten a lot of push back and cries of disbelief about that number. As a long-time Netflix user who has enjoyed the feature, I can see that number meaning one of two things — either only 2% of users have ever added a friend [sending or receiving an email-based invite], or 2% of users use Friends-based features like sending personalized recommendations. If the former, then yes, it is a dead feature; put it in the dirt. Whether it was killed by lack of interest or never given a chance to grow could be grounds for debate, but at this point the feature is dead.

So why mention it in this article? Because the latter case, true or not, illustrates exactly how the use of a feature can be incredibly difficult to track without A/B testing, eye tracking, or user surveys. In 5 or 6 years of using Friends myself, while maintaining a small set of 20 or so close contacts, I don’t know if I’ve ever actively used Friends to discuss, share, or recommend movies on the Netflix web site. Those discussions take place elsewhere, either in face-to-face conversations or in other places on the net where we congregate.

The strength and huge benefit I saw in the Netflix Friends system was that whenever I browsed movies — particularly in specific genres like foreign films or horror movies — I’d see the ratings of friends who had seen the movie prominently displayed, as my own personal staff of movie reviewers. Nobody had to do anything social or submit extra information to make it work; they just went about their normal activity of rating movies and driving the recommendation engine that Netflix is built on, and the Friends features I used were a happy side effect.

But the side effect of that lack of action is that there is a lack of action to measure. The setting of ratings, and the contribution to the Friends data pool, piggybacks on existing rating actions. And on the consumption side, the ratings were displayed in a few places, in particular right at the forefront of the movie details page, so there was no action needed, often not even scrolling, to read the information. So when all was said and done, after being an active Netflix user [and I still am] for the 6-year life of the Friends program, including a visit or five to the web site each week, I may have performed only 20 or 25 total traditionally measurable actions, even though I “used” the feature every time I browsed to a movie details page.
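The measurement gap above can be sketched in a few lines. This is an illustrative toy, not any real tracking library: `trackAction` and `trackImpression` are hypothetical names standing in for whatever instrumentation a site actually has. The point is that a tracker wired only to explicit actions records nothing for a user whose entire “use” of a feature is passively reading what the page already shows.

```typescript
// Hypothetical in-memory event log, standing in for an analytics backend.
type TrackedEvent = { kind: "action" | "impression"; name: string };
const log: TrackedEvent[] = [];

// Fires only when the user does something explicit (invite, send a note).
function trackAction(name: string): void {
  log.push({ kind: "action", name });
}

// What you would have to add to count passive use: record that the
// friend ratings were rendered at all, even though nothing was clicked.
function trackImpression(name: string): void {
  log.push({ kind: "impression", name });
}

// A browsing session like the one described: ten movie pages viewed
// with friend ratings visible, zero explicit Friends actions taken.
for (let page = 0; page < 10; page++) {
  trackImpression("friend-ratings-shown");
}

const actions = log.filter((e) => e.kind === "action").length; // 0
const impressions = log.filter((e) => e.kind === "impression").length; // 10
```

With only `trackAction` in place, this user’s six years of heavy use would report as zero — which is exactly why a "2% used it" figure needs to be read carefully.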

Hmm, yeah

Well, that was a bit long-winded, but I think you get my point. There are a ton of great metrics out there, and we’re always looking at them, be it to make technical decisions, business decisions, or to evaluate the success of previous decisions. But the hardest part will remain picking which metrics to use, or not to use, when making those decisions.
