Mack Web is sort of like Bruce Banner (a.k.a. the Incredible Hulk) – we test stuff on ourselves.
Unlike his experiments, ours don't turn Mack Web team members into giant, green superheroes filled with rage (though once in a while we DO turn into team members who feel ill from eating too many gummy bears).
Rather than experimental biochemistry, we’re running tests on things that live within the world of online marketing.
One recent (and in-progress) test we're running on Mack Web is around a curated blog post and e-news we share once a month. You might be familiar with it? We call it Nuggets of Knowledge (also known as the NOKlist).
The NOKlist is five months in and running, which makes this a great time for a first-pass assessment of the data we've collected to see what it can teach us about our audience. So, in keeping with the Mack Web spirit, we wanted to share those findings with you.
First, some background on the NOKlist itself. Each month, every team member selects an article (or resource) to share with our community and explains, in their very own words, why they found the article valuable. We share these findings on the blog.
Then we create an e-news around this content and send it to our Newsletter list.
To be very clear, we didn't dream up the NOKlist as a testing ground. Our intent with this post and e-news remains the same as our other content: to share value with our community. It just so happens that it allows us to share the articles we admire, giving props to all the awesome (and smart) folks out there doing awesome (and smart) things. It also gives you guys a little peek into our team, so you can get to know us a little better.
But it can’t be denied that there is another, hidden benefit to the NOKlist (no, it’s not a subliminal message worked into the text to help us achieve world domination. That project is for another day). It lends us some insight into our community’s preferences and interests. (Or at least we hope it will).
How are we going to accomplish this, you ask? By creating tests – based largely on click-through rates – that revolve around the following questions:
Do people have a favorite/most trusted team member?
So we’re pretty sure Arthur, were he able to type, would be your favorite Mack Webber.
Until Arthur figures out how to type on a keyboard with his hooves, we’re looking at ways to determine if our community favors one Mack Webber over another (more on this later).
There’s an intra-office bet going and quite a lot of gummy bears hinge on the result.
Do people have a favorite/most trusted source?
The team shares articles from various authors and websites that we love and learn from. We’re wondering if our community gravitates towards a particular author or website we share more than others, and if so, why?
Do people respond to different titles?
How does our community respond when we phrase an email subject line a certain way? Or how about the article titles? The curated articles we choose often use very creative and interesting titles, which is a great way for us to measure how those title types perform with our audience. We’re wondering about titles that:
- use statistics
- are instructional
- are cautionary or negative
- are questions, statements, or comparisons
- are creative or funny
Do people respond to different subjects/topics?
Within the NOKlist we share design posts, content posts, analytics posts, social posts, and more. We find these topics interesting (and relevant to our world), and now we have a way of gauging if our audience also finds them interesting (and relevant to their world). We’re already formulating other ways to test in this area, for example, what happens when everyone shares an article within the same subject?
Do people respond to different formats?
How does a video, slide deck, or infographic stack up against a blog post? If our audience ends up preferring a handful of formats over others, this will help us to plan our own future content with those formats in mind.
Answering all these questions can help us understand our audience and shape and tailor our content. That’s why we like the NOKlist so very much – curated content allows us to test our questions simultaneously (and without having to draft all that content ourselves). We think it’s a pretty nifty way to get the most out of curated content and highly recommend it to anyone else who wants to give it a try.
Still, even with curated content, this is a lengthy process. We're five months into the NOKlist and still have loads of testing to do, but thus far, we're off to a pretty decent start.
With five months of data under our belt, and a lot of testing protocol to work out, here's what we've learned so far:
Our email marketing audience shows no favoritism towards specific team members.
With our email audience, the person recommending the article doesn't seem to make a difference in how well the article performs (as measured by the number of click-throughs each article receives). Huzzah! You really like ALL of us, not just our fearless leader.
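If you want to run the same favoritism check on your own newsletter, here's a minimal sketch in Python. The click numbers and the export shape are entirely made up; substitute whatever your email tool actually reports.

```python
from collections import defaultdict

# Hypothetical export: (recommender, article title, click-throughs) per issue.
clicks = [
    ("Ayelet", "Article A", 42),
    ("Ann", "Article B", 38),
    ("Rebecca", "Article C", 21),
    ("Ayelet", "Article D", 35),
    ("Rebecca", "Article E", 19),
]

totals = defaultdict(int)   # total clicks per recommender
counts = defaultdict(int)   # number of articles per recommender

for person, _title, n in clicks:
    totals[person] += n
    counts[person] += 1

# Average click-throughs per article, by recommender.
averages = {person: totals[person] / counts[person] for person in totals}

for person, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {avg:.1f} clicks per article")
```

With only a handful of issues, differences like these could easily be noise, so we'd want many more months of data before declaring anyone the office favorite.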
Determining what sort of content our audience prefers is going to take longer than five months.
We do know that these five posts got the best response:
- Publish Your Blog Post Without SEO, and 1000s of Visits Will Be Forever Lost, by Rand Fishkin
- 70% of Time Could Be Used Better – How the Best CEOs Get the Most Out of Every Day, by Bill Trenchard
- Social Engagements Metrics that Matter – measuring, tracking and reporting FTW, by Jennifer Sable Lopez
- The Ultimate Guide to Successful Email Marketing, by Vero
- The Holy Grail of Building Communities: Developing a Strong Sense of Community, by Richard Millington
And these got the lowest:
- Putting On the Ritz, Six Words at a Time, by Stuart Elliott
- Why Content Marketers Are Using All the Wrong Metrics (And What They Should Be Measuring Instead), by Contently
- Walk Cycle Demonstration (Stop Motion Animation), by Adam Pierce
- 82% of women think social media drives the definition of beauty, by Samantha Murphy Kelly
We can draw a few tentative bonus conclusions just from these:
Our audience doesn’t seem overly influenced by the inclusion of statistics in the title.
They appear in both the most and least favored posts.
Our audience might just have taste as eclectic as ours.
So far, people don't seem to click through on any one topic with consistency. The top posts come from all the topics. So do the second-place topics and the last-place topics (mostly). Sometimes funny posts do well and sometimes they suffer. There's no clear-cut winner just yet.
Our audience doesn’t automatically favor videos over other things.
We haven't shared a lot of videos in the NOKlist yet, but one landed in the lowest ranks and one in the middle. Whether that's a format thing, a placement thing, or a subject thing remains to be seen.
As more time goes by, we’ll continue to gauge audience response, seeing what develops, and what patterns (if any) become apparent. One pattern we’ve noticed is that being at the top of the list does seem to matter…which leads me to our next topic:
Of course, there are things that make it difficult for us to come to absolute conclusions about the data we collect. Here's the running list:
Position on the list may influence article performance.
Right now, the NOKlist goes in alphabetical order (from Ayelet to Rebecca). In the first few months of the NOKlist, Ayelet's articles received the most clicks. Then Ann joined the team, and the content she shared received the highest number of click-throughs. Rebecca's articles (which are last on the list) consistently receive a lower number of click-throughs. (Which we find sad, because Rebecca's pretty great.)
This causes us to wonder if it's about order. Does the first article receive more clicks simply because it's first? Does the last article receive fewer clicks simply because it's last? For future NOKlists we're gonna try mixing up the order to double-check.
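One simple way to take order out of the equation is to rotate the lineup each issue, so everyone cycles through the top slot. A quick sketch (the roster below is hypothetical; use your real lineup):

```python
def noklist_order(issue_number, members):
    """Rotate the lineup so each member takes the top slot in turn."""
    shift = issue_number % len(members)
    return members[shift:] + members[:shift]

# Hypothetical roster, in whatever starting order you like.
team = ["Ann", "Ayelet", "Mack", "Rebecca"]

# Over four issues, every member leads the list exactly once.
for issue in range(4):
    print(f"Issue {issue + 1}: {noklist_order(issue, team)}")
```

Rotation guarantees everyone equal time at the top; a seeded random shuffle works too, as long as you record each issue's order so you can later compare clicks by position, not just by person.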
Testing is secondary to providing value.
Sharing great content with our community and giving our readers a chance to know the team are the primary goals of the NOKlist, which means we won’t enforce testing parameters at the expense of those goals.
We don’t know if the audience has already read things that we share.
This could potentially hinder content performance within the NOKlist, thus leading us to the wrong conclusions about our community’s content preferences.
Other factors – such as send time and send date – can influence response.
We try to keep these as steady as possible with the NOKlist, but they remain factors that we must take into consideration. Depending on the date or time we send the NOKlist out over email and social, what sort of return are we getting back and how is that affecting article performance?
People are people.
There's no accounting for what people are going to do on any given day, which means that sometimes our data is going to be just random static on the line. We need to keep this in mind when we look at the data we collect from the NOKlist, and understand that some of the tests we put in place are going to fail and/or need fine-tuning.
Only time will tell. In the meantime, we welcome suggestions. What else can we use to gauge our audience’s interest? Any thoughts on circumventing the roadblocks?
And, hey! Enjoy next week's NOKlist here on the blog, or sign up for our e-news to get it delivered directly to you. (Try not to be self-consciously aware of how closely we're observing your every move.)