One of the toughest things about running a successful campaign as a marketer is first figuring out what generates that success. There’s no instruction manual for advertising placements, whom to target, what to write, or what images to use, so marketers are left to sift through research and past campaigns.
Things are only more complex now with the wide range of ways to consume content. The internet may have taken some of the attention off traditional mediums, such as TV or radio, but now there’s the issue of what to advertise and how to advertise it.
Thanks to the rise in mobile usage, everything now has to be truncated and adjusted to fit accessibility requirements.
We all know that the purpose of advertising is to generate an emotional response from users, one that will hopefully lead to a conversion. To do so, the content needs to engage them and connect with their needs. But with so many outlets and mediums to reach them through, the only way to fully understand our target is by experimenting with the content.
That is why we test: the only way to get a proper read on an intended audience is to introduce options, uncover the responses, and then apply the results to the next campaign you run for them. That way, you are presenting them with what they prefer.
Generally, you want to limit your testing to two options. Otherwise, you may dilute your efforts and end up with a number of ads whose results are too close to each other to make a proper judgment on.

Presenting two options and getting a 60/40 result in favor of one side, as opposed to running four options and getting 35/30/25/10, provides a much clearer indication of what to run with.
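If you want to check that a split like 60/40 is more than noise, a standard two-proportion z-test is one common tool. Here’s a rough Python sketch — the function name and the visitor/conversion counts are made up for illustration, not taken from any real campaign:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is variant A's conversion rate
    significantly different from variant B's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the "no real difference" assumption
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical: 500 visitors per variant, conversions split 120 vs. 80
# (a 60/40 split of the 200 total conversions)
z, p = two_proportion_z(120, 500, 80, 500)
# A small p-value (conventionally below 0.05) suggests the gap is real.
```

A close multi-way split like 35/30/25/10 generally needs far more traffic per variant before any pairwise gap clears the same bar, which is part of why two options are easier to judge.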
Now, there’s nothing wrong with testing multiple options, but only in the right circumstances. This is called multivariate testing. Here’s a brief description:
“Multivariate testing changes many different elements in an email or landing page. It’s great if you need to test multiple variables but you don’t have the time to conduct a series of one-off tests. They’ll help you discover which version performs the best, but you won’t be able to pinpoint which change had the biggest impact on the performance of your campaign.”
Plus, it’s far more expensive and time-consuming to create all of those variations.
Today, we’re going to talk about A/B testing, which essentially compares two versions of the same content that are identical except for one variation.
Say, for example, you have a landing page users click through to for a little more info before downloading a white paper. Everything is the same—the info, the form to fill out, and the image—but with one difference: the CTA button to download.
For Test A, the CTA button reads “Download Now”
For Test B, the CTA button reads “Download Now”
The difference? The color. Test A’s CTA button is green, while Test B’s is red. While this may seem minor, even something as simple as a change in color can dramatically affect what a user does.
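To run a test like this, each visitor has to be assigned to one button color and stay there. Here’s a minimal Python sketch of one common approach — deterministic bucketing by hashing a user ID. The function name, test name, and 50/50 split are assumptions for illustration only:

```python
import hashlib

def cta_variant(user_id: str, test_name: str = "cta_color") -> str:
    """Assign a visitor to one CTA button color.

    Hashing the user ID (rather than picking randomly on each visit)
    means the same visitor always sees the same button, which keeps
    the A/B results clean.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    return "green" if int(digest, 16) % 2 == 0 else "red"

# The same user always lands in the same bucket:
assert cta_variant("visitor-123") == cta_variant("visitor-123")
```

Over enough visitors, the hash spreads assignments close to evenly between the two colors, so each variant gets comparable traffic.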
Here’s the reasoning, as cited in an Entrepreneur.com study, as to why a red CTA button boosted conversions by 21% over a green one:
“Take a closer look at the image: It’s obvious that the rest of the page is geared towards a green palette, which means a green call to action simply blends in with the surroundings. Red, meanwhile, provides a stark visual contrast (and is a complementary color to green).”
All of that just because of a simple change in color! Now let’s take a look at a change in type, this time with more than two examples:
And the results:
Was that the answer you were expecting? Let’s examine it and see how there could be such a stark difference in conversion rate between the top performer and second place.
Remember the previous example of color contrast? It has that too: the main CTA is a bright red, while the bottom text is a dull grey that’s difficult to see at first, so there’s a little more intrigue there.
Notice the contrast between the sizes, too. The top text is prominent, while the bottom text is so small you can barely make it out upon first glance.
Of course, you’ll also see one of the greatest words in the history of marketing: FREE.
Now, this isn’t to say that red is a better color than green or that bigger/smaller text is going to work every single time. These are just some examples you can try with your own campaign. As mentioned before, what matters most are the circumstances: what you write, how you write it, and where you place it.

What you should take away from this is how subtle changes you may perceive as insignificant can actually make all the difference between five sales and two.
Similar results apply to the use of certain words.
Here’s an example I discovered and placed in another blog:
“Social psychologist Ellen Langer tested the power of a single word in an experiment where she asked to cut in line at a copy machine. She tried three different ways of asking:
‘Excuse me, I have five pages. May I use the Xerox machine?’
‘Excuse me, I have five pages. May I use the Xerox machine because I’m in a rush?’
‘Excuse me, I have five pages. May I use the Xerox machine because I have to make some copies?’

60% said OK to the first sentence. The other two? 94% and 93%, respectively. The only change was ‘because’.”
Notice how weak the reasoning is in the two instances where ‘because’ was used, yet it made a stark difference in the result. “I’m in a rush” is a reason many of the other people in line could be experiencing themselves, while the other isn’t a reason at all.
‘Because’ was a trigger word that convinced those in line to let the subject cut, despite seemingly giving no good reason to do so.
Once again, let this be an indicator of just how much of a difference one word can make. As much as this blog is about testing different styles of persuasion, it’s also about realizing the importance of certain words, colors, or placements, and the impact they can have.