Just how well have you tested your Web site? One standard you may want to look at is the site’s credibility.
5 September 2003
By David Berkowitz
To determine a site’s credibility (which can affect sales, referrals, visitor loyalty and responsiveness to advertising, among other things), there are a number of factors to examine. According to Stanford University and Consumer WebWatch, these include site design, information usefulness, bias, functionality and readability. Design seems to matter most.
Some of these factors, such as company motive and reputation, are very broad and difficult to control, and changing them requires major company-wide efforts. Others, like design, can be improved relatively easily, especially if there’s continual testing to evaluate the direction (positive or negative) and size of a change’s impact.
Testing is not new to marketers, yet testing Web sites and e-mail campaigns brings new opportunities and challenges. Eric Anderson, director of marketing at White Horse, defined and dissected A/B testing in a WebTrends audio presentation, which eMarketer readers can access for free here. At the end of the Webcast (and, if you’re that eager, you can fast forward), you can view a white paper by Mr. Anderson co-sponsored by WebTrends called “Optimize Your Marketing Campaigns with A/B Testing.”
After finding the presentation and white paper invaluable, eMarketer called up Mr. Anderson to recap the basics of A/B testing and then dig a little deeper.
eMarketer: What is A/B testing?
Eric Anderson: A/B testing is a formal term for something a lot of marketers are probably doing informally and probably not doing in a best practice fashion. I think that’s what we really wanted to say with this white paper and presentation. We all buy into the value of testing, and we’re working in a medium where testing is really king, but there’s a level of sophistication in testing that’s not there yet — we could be doing a lot more.
In particular, where White Horse has an interest is in doing more around the user experience factors, especially on the site itself, and the creative factors in the advertising. I suspect that most of the testing optimization that’s gone on in the Web marketing space right now is around campaign optimization, optimization by media placement primarily, and less so around things like user experience and the creative, and that’s what A/B testing is really about.
It’s a very simple notion. It’s about isolating the factors that affect performance, testing those factors in a very simple, straightforward fashion, and then building on the result.
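The basic mechanic Anderson describes — randomly split traffic between two variants, measure each one’s conversion rate, and compare — can be sketched in a few lines. This is a hypothetical simulation, not anything from the interview or White Horse’s tooling; the variant names, traffic volume and underlying conversion rates are invented.

```python
import random

def assign_variant(rng):
    """Randomly assign an incoming visitor to variant 'A' or 'B' (a 50/50 split)."""
    return "A" if rng.random() < 0.5 else "B"

def run_simulated_test(n_visitors=10_000, seed=42):
    # Assumed underlying conversion rates -- in a real test these are
    # the unknowns you are trying to measure, not inputs.
    true_rates = {"A": 0.027, "B": 0.030}
    rng = random.Random(seed)
    visits = {"A": 0, "B": 0}
    conversions = {"A": 0, "B": 0}
    for _ in range(n_visitors):
        v = assign_variant(rng)
        visits[v] += 1
        if rng.random() < true_rates[v]:
            conversions[v] += 1
    # Observed conversion rate per variant
    return {v: conversions[v] / visits[v] for v in visits}

rates = run_simulated_test()
```

The whole discipline sits in the random assignment: because nothing but chance decides which variant a visitor sees, any difference in the observed rates can be attributed to the variant itself rather than to who happened to see it.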
eMarketer: I guess what you’re saying here is that one’s Web site alone can be a great starting point for testing, before you even send out a campaign.
EA: That’s right. Obviously, campaigns appeal to people as a more malleable medium. It seems natural to optimize them. I think marketers don’t feel like their Web sites are as malleable — they think, “This is a set environment, for the most part. Unless I undertake a redesign, this is my site.” So they think less about how they can isolate performance factors there, and that’s why I think a big part of the argument is around the use of small sample sizes. It’s not like we’re saying, “I’ll do your homepage and test it for six months.” Let’s set up landing pages, make small, incremental changes and test those.
eMarketer: Who is really capable of doing A/B testing? Do you need an advanced degree to do this, or can this be done more simply?
EA: We see very sophisticated marketers on the client side, and we believe with our marketing consultants, we bring another level of sophistication. You can get a long way just based on what we consider to be best practices. That’s how marketers have sold themselves: “We have the experience, we have the best practices, we know what works, so we’ll build what works and put it out there.” I think you can get a long way with that.
I would be underselling our own services if I said you couldn’t. I think the application of best practices will take you a long way. But I think you get to a certain point where you feel like you know what your audience wants to hear, you’ve done the user testing on your site so you know what kind of site experience you need to have so you put that out there, and then the A/B testing is really just about finding those incremental levels of improvement, really getting down to the minutiae of it.
The short answer to the question is I do think that anyone can do it. I think that the quality you need in order to do it is a willingness to invest in testing — in other words, to say, “I don’t need to make a big splash right away. I need to simply make the best effort I can and incrementally improve on that.” You have to build that time into your campaign planning. You have to build that time into your Web development cycle.
When we’ve found clients who are willing to do it, they are the clients who really see Web marketing as a game of inches. They’re already doing it at a certain level of sophistication where it’s really meaningful for them, for instance, to go from a 2.7% conversion rate to a 3.0% conversion rate. That’s something they are very interested in, and those kinds of marketers are the ones we find really embracing it. They have the patience to sit through multiple iterations of a test.
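Whether a lift like 2.7% to 3.0% is real or just noise is exactly the kind of question a standard two-proportion z-test answers; the sketch below is a generic statistics illustration, not a method named in the interview, and the sample sizes are invented.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the
    difference between two observed conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 2.7% of 20,000 visitors vs. 3.0% of 20,000 visitors (hypothetical traffic)
z, p = two_proportion_z(540, 20_000, 600, 20_000)
```

Even with 20,000 visitors per variant, a 0.3-point lift lands just short of the conventional 5% significance threshold — which is one concrete reason the “game of inches” demands the patience for multiple test iterations that Anderson describes.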
eMarketer: When you talk about “incremental,” how valuable can a small increment be?
EA: It varies widely. At the level of messaging, a different call to action can sometimes make an enormous difference. You can see a swing of 20%, 30% based on the difference of something in the primary messaging. It seems obvious to people to futz around with the messaging, especially around things like calls to action or the emotional hook you’re trying with the advertising.
It’s not as intuitive for most marketers to say, “Let’s take a look at the set of instructions that accompanies the buy flow, and even the location of that set of instructions, and see what kind of effect that has.” That’s been very humbling for us over the years, to see just how profound an effect that kind of thing can have. I think a lot of marketers would be fairly shocked if they saw how often a lot of the helpful instruction that they give to users is ignored, and as a result you have a lot of negative attrition once users are getting into the buy flow because they didn’t take the time to read what steps are coming up and what they’re expected to do next. Looking at things like that with A/B testing can really have a profound effect.
eMarketer: Getting back to examples where a campaign is involved, let’s say a marketer does pretty thorough testing on a certain campaign — they test the subject lines and message length and a couple landing pages. Do you find that a lot of these lessons can be reapplied to future campaigns, or is it always an issue of starting from scratch?
EA: That’s a really good question, and I think the answer is somewhere in the middle. For someone who’s invested in that kind of testing, you have to guard yourself against some tendency to go after the ‘singularity theory,’ the idea that you learn everything you need to know about what the audience wants to see, and then you’ve got it, you’ve got the bible. Some marketers are doing this, developing the bible, one sheet that says, “If we follow these 25 best practices, we’re always going to succeed.” I don’t believe in that because I think that those techniques exhaust themselves. You’re talking to a finite audience, and that audience shifts over time as your market shifts. You do have to continually retest.
I say the answer’s somewhere in the middle because I think you will learn things that you can continue to apply going forward, but there’s a danger in resting on those assumptions too long. You have to go back and continually retest over time.
There’s definitely virtue in pulling out the rules and applying them long-term. Everybody does. We all have best practices that we tell our marketers and our designers — follow these rules, these work — but we also recognize the limitations of those, and that for a given audience and for a given campaign, we have to throw open those questions again.
eMarketer: No matter what it is you’re testing, how do you decide how far to go? You could probably come up with 100 variables, and that leads to quite a few combinations.
EA: Right. You can slice and dice so thin, especially when you’re dealing with the inherent slipperiness of language. If I’m running a keyword test, and I test two different ads, how many words in those two different ads can I change and still have a valid test?
There’s no single answer for that, but what’s useful, and I think what’s tolerable to our clients, is to try some differences in key messaging that we know the user will notice and that will make a difference for them on the landing pages and on the site — those would be the key factors. If we get there and we’re satisfied with the results, we can always go further. The thing to keep in mind is you can always slice it a little bit more thinly, but partly it’s a negotiation in terms of how fast we need to get to market, what the client’s tolerance is for more testing, and whether we can go this far and get some results that we’re happy with.
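Anderson’s “slice and dice” problem can be made concrete by counting variants. Testing every combination of every factor at once (a full-factorial design) multiplies quickly, while isolating one factor at a time — the classic A/B approach he advocates — grows only additively. The factor names and option counts below are invented for illustration.

```python
from math import prod

# Hypothetical campaign factors and how many options each one has
factors = {
    "subject_line": 3,
    "call_to_action": 2,
    "landing_page": 2,
    "hero_image": 4,
}

# Testing every combination at once: the counts multiply
full_factorial = prod(factors.values())   # 3 * 2 * 2 * 4 = 48 variants

# Isolating one factor at a time: the counts merely add
one_at_a_time = sum(factors.values())     # 3 + 2 + 2 + 4 = 11 variants
```

Each extra variant also splits the traffic further, so the combinatorial count directly drives how long a test must run — which is why “how thin to slice” ends up as a negotiation over time-to-market.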
eMarketer: In regards to the Web, is this testing getting easier now?
EA: Absolutely. Part of it is a level of acceptance — there’s a willingness to accept this kind of testing — but the tools themselves are also making it easier. The Web analytics tools, of which WebTrends is just one example, have this capability as well — they make it easy to isolate those factors. Google has a built-in A/B testing capability in the sense that you can randomly rotate ads against a single keyword, and the ad serving tools make it easier.
Overall, I don’t think anyone is consciously saying, “Let’s move toward A/B testing with these tools.” I think it’s simply that as these tools evolve, they become more aware of the kinds of tests that marketers are running, and it does make it easier.
eMarketer: Say someone reads this interview, hasn’t done much testing, and is inspired to get more serious about it. Where should he begin?
EA (laughs): Obviously, begin by calling us.
I think the thing to do is, first of all, put a little more burden on the client to think about whether they’re at a point where they’re ready for A/B testing. Have they done the due diligence that we think ought to precede that? We think that A/B testing is not a substitute for actual, empirical user testing, where you’re talking to real users about your messaging and site experience and making sure those are optimal for your users. That’s a starting point, that’s a stake in the ground.
Beyond that, if you have a high level of confidence to that point that you’ve addressed the needs of your users, then I think the place to begin is simply sit down and think about the performance factors worth testing here. Think about the way that we run the campaigns and simply identify what the testing setup is going to look like based on the types of campaigns we’re running, because obviously it differs a little bit if you’re doing ad campaigns versus if you’re doing e-mail, or if you’re not doing any campaigns at all and you want to optimize on the site, which is another good way to go. It’s either working with your internal team or working with your agency to build out those creative iterations in a way that isolates the factors, and I think for most clients, that’s the biggest leap.
It’s not as much fun to play the game of inches. You really want to try a couple of things that are radically different. That somehow seems to make more sense to us, and that’s a place to start, but it’s not a place to end up. You really want to get to the point where you feel confident about your overall approach and you’re looking to achieve those next increments.
You can reach David Berkowitz at firstname.lastname@example.org.
By gathering the latest research and news from over 1,000 sources, eMarketer has established itself as the world’s leading provider of internet and e-business statistics. eMarketer’s Web site is at www.emarketer.com.