The Six Steps of Running a Successful CRO Campaign

Written by Chris Marsh - 23 Sept 2014


When attempting conversion rate optimisation (CRO), how do you know what to optimise? No matter how big or small your website, the list of testing opportunities is almost never-ending. Every form field, discount code box, call to action, coloured button, checkout page, confirmation email – any of these things and more can be tested to ensure you’re getting the most out of the visitors to your site.

Before beginning your CRO campaign, one of the most important things to note is that there’s no definitive list of elements that should or should not be tested. The testing plan created for your website must therefore be unique: it will depend on where the key issues have been found across your site and their impact on visitor pathways through it. The problems (and solutions) you find are likely to fall into one of two categories:

1. Removal of blockers – removal of an issue that is preventing a user from converting. For example:

  • Forms that do not function correctly
  • Prices that are too high
  • Messaging or other elements that might cause users to distrust a site, such as bad reviews or visible source code
  • Limited checkout options
  • Not delivering to particular areas of your customer base

2. Opportunities to be more persuasive – making the most of an opportunity to convince your users to convert. For example:

  • Making USPs more prominent
  • Offering guarantees
  • Adding trust signals, such as good reviews
  • Providing social proof
  • Using better imagery

Companies that take a structured, methodical approach to CRO are reported to be twice as likely to see an increase in conversions as those that do not. But how do you find out what the most prominent issues with your site are? The six-step checklist below provides a top-line template to help structure your testing programme.

  1. Audit your analytics
  2. Quantitative site performance analysis
  3. Qualitative site performance analysis
  4. Creation of hypothesis log and prioritisation of experiments
  5. Live testing
  6. Implementation of winning variations and feeding back learnings

Audit your analytics

To create valuable and actionable insight to power successful split testing, you need reliable and accurate data. When carrying out a split test as part of a CRO campaign, you are committing to making business decisions that will affect your bottom line (hopefully for the better). If those decisions are based on flawed data, the outcomes will be flawed too.

Ensuring the data in your analytics package (Google Analytics, for example) is accurate is about more than ensuring the implementation is correct in the first place. Tracking the right goals and events is essential.

As an agency, it’s also vital that we interview key stakeholders in the business we’re carrying out the CRO work for, to determine their needs. Only by fully understanding what is essential to a business can we begin to create hypotheses (see step four) that contribute to those aims.

Quantitative site performance analysis

While you can run CRO experiments on a website’s homepage or on a single page in the checkout process, the changes with the most significant impact often span multiple pages – category pages, for example. Likewise, we often find issues that harm conversions across multiple pages – pricing text that is too small on a site’s product pages, say.

These nuances highlight the importance of considering the whole user journey rather than isolated pages.

After grouping pages into page types or templates, you can then look at the site ‘leakage’. The leakage is the amount of revenue potentially lost by people dropping out of the conversion path at each page type. To work out the leakage of a particular page type, perform the following calculation:

leakage = unique page exits × average page value
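
To make this concrete, here is a minimal sketch in Python of running the calculation across several page templates. The page types, exit counts, and average page values are illustrative figures, not data from any real site.

  # Leakage per page template: leakage = unique page exits x average page value.
  # All figures below are illustrative.
  page_types = {
      # page type: (unique page exits, average page value in £)
      "category": (12_000, 1.80),
      "product": (30_000, 3.20),
      "basket": (7_500, 9.40),
  }

  # Rank page types by potential revenue lost, highest first.
  ranked = sorted(page_types.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

  for name, (unique_exits, avg_page_value) in ranked:
      leakage = unique_exits * avg_page_value
      print(f"{name:<10} leakage = £{leakage:,.2f}")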

While step three (qualitative performance analysis) shouldn’t be limited to the sections highlighted by this quantitative analysis, those findings can help you prioritise when time is limited, giving you focus as to where to look for issues.

Leakage calculations should be used with care as they do not work for all page types. Product pages, for example, are notoriously difficult to assess using a leakage calculation, especially when aggregated. A page might look like it has very high leakage, but only when you map out the whole user journey to that page might you realise that people are dropping out because of something happening earlier in the journey.

For example, say the product price on the page with high leakage is different from the price stated on all other pages in the journey. Rectifying this may involve changing prices on the surrounding pages. With this in mind, it's important to assess certain focus pages or templates not just in isolation but with regard to the pages around them. It is also important to understand where the natural exit points of a website are. Some pages, such as the shopping cart, will naturally have a higher level of leakage.

Qualitative site performance analysis

Data tells you a lot but does not necessarily tell you everything. To carry out truly effective CRO, you need to have a solid understanding of the different journeys users may take throughout your site, which is what we mean when we say ‘qualitative analysis’.

Using techniques such as heat mapping, user testing, visitor surveys, competitor analysis, and the key stakeholder interviews mentioned earlier, you can build a better understanding of the user journey from first touchpoint to conversion.

The results of this analysis, and the earlier quantitative analysis, are then used to formulate and record plausible and informed hypotheses.

Creation of hypothesis log and prioritisation of experiments

By creating a hypothesis log, considering the ease of testing, implementation, and predicted impact of each hypothesis, you can prioritise which experiments to run first.

For example, your analysis may lead you to believe that people are distracted from converting by the number of banners presented to them when they view their basket. In this case, your hypothesis would be something similar to the following:

“Removing the advertising banners from the ‘view basket’ page will reduce distraction and increase the number of people proceeding to the payment page and ultimately converting.”

We would weigh this hypothesis against others in terms of the cost of testing and implementation and the benefit it is estimated to return. If a test is resource-light but could deliver a tremendous revenue uplift, it will sit near the top of the list; if it would cost a fortune to implement and is unlikely to make much difference, it will be a much lower priority.
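
As a rough illustration of how this weighing can be recorded, here is a minimal sketch in Python using an ICE-style score (impact, confidence, ease, each rated 1 to 10). The hypotheses and ratings are invented for the example.

  # Score each hypothesis by impact x confidence x ease; test the highest first.
  # The hypotheses and ratings below are illustrative, not real data.
  hypotheses = [
      # (description, impact, confidence, ease)
      ("Remove banners from the 'view basket' page", 8, 6, 9),
      ("Clarify delivery messaging on product pages", 5, 7, 7),
      ("Redesign the whole checkout flow", 9, 5, 2),
  ]

  ranked = sorted(hypotheses, key=lambda h: h[1] * h[2] * h[3], reverse=True)

  for description, impact, confidence, ease in ranked:
      print(f"{impact * confidence * ease:>4}  {description}")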

Additional benefits of prioritising hypotheses

Working on the most potentially beneficial hypotheses first is not just about making your money and effort work as effectively as possible (although that matters). It is also about generating buy-in from your clients and colleagues.

Securing quick wins – experiments that require few resources and minimal build time but have a high business impact – helps engage stakeholders, particularly those who might be sceptical. People are naturally more likely to believe in a practice that provides a return on the first try than in something that ‘fails’ for its first three experiments.

That said, conversion optimisation is not about an endless stream of wins. Throughout your CRO campaign, you are likely to have some experiments that end with inconclusive or negative results. That does not mean they have no value. From each experiment, you gain valuable insight into your audience, market and website, which will ultimately help you build better and stronger experiments in the future.

Live testing

As we discussed, the first hypotheses to be tested will be those likely to offer the greatest return on investment.

Before a test goes live, the proposed variations need to be wireframed and built; a percentage of your site traffic is then directed to each variation. Many tools are available to manage your testing programme, for example:

  • Optimizely
  • Adobe Target
  • Visual Website Optimizer
  • Monetate
  • Qubit
  • Google Optimize

The length of time a test needs to run depends on several factors (a rough sizing sketch follows this list), including:

  • The volume of traffic to the areas of the site being tested
  • The typical purchase cycle related to what is being tested
  • The difference in conversion rate between the variations being tested
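
How these factors translate into a run time can be estimated up front. Here is a minimal sketch in Python, assuming the statsmodels library; the baseline conversion rate, target uplift, and traffic figures are illustrative.

  # Estimate visitors (and days) needed per variation for a two-sided test
  # at 95% significance with 80% power. All figures are illustrative.
  from statsmodels.stats.power import NormalIndPower
  from statsmodels.stats.proportion import proportion_effectsize

  baseline_rate = 0.040              # current conversion rate on the tested pages
  target_rate = 0.045                # smallest uplift worth detecting
  daily_visitors_per_variation = 1_500

  effect = proportion_effectsize(baseline_rate, target_rate)
  visitors_needed = NormalIndPower().solve_power(
      effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
  )

  days = visitors_needed / daily_visitors_per_variation
  # In practice, round the duration up to cover at least one full purchase cycle.
  print(f"~{visitors_needed:,.0f} visitors per variation (~{days:.0f} days)")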

Experiments should be concluded only when the results are statistically significant. ‘Statistical significance’ measures how unlikely it is that an observed effect is down to chance alone: at 95% significance, there is less than a 5% probability of seeing a difference this large if no real difference existed. The widely accepted convention is that experiments must reach 95% statistical significance (at a minimum) to be considered conclusive.

Before we conclude an experiment, we require it to reach the 95% threshold. As well as this, we consider the length of the buying cycle and any relevant buying trends when deciding the minimum amount of time to run a test.
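
As an illustration of that threshold check, here is a minimal sketch in Python of a two-proportion z-test, again assuming the statsmodels library; the visitor and conversion counts are invented for the example.

  # Compare control and variation conversion counts with a two-sided z-test.
  # p < 0.05 corresponds to the 95% significance threshold discussed above.
  from statsmodels.stats.proportion import proportions_ztest

  conversions = [310, 368]           # control, variation
  visitors = [9_800, 9_750]

  z_stat, p_value = proportions_ztest(conversions, visitors)

  if p_value < 0.05:
      print(f"Significant at 95% (p = {p_value:.3f})")
  else:
      print(f"Not yet conclusive (p = {p_value:.3f}) - keep the test running")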

Which metrics should you track?

When it comes to measurement, it is not enough to track only the conversion rate of your whole site or of a particular channel – you need to keep an eye on other metrics too.

Ultimately, it all comes down to campaign objectives. Measure whichever key metric is associated with the primary objective of your campaign, but also keep an eye on other top-line metrics, such as conversion and revenue. For example, if you are running a campaign intended to increase newsletter signups, you still need to track transactions and revenue to ensure your variations do not negatively impact either.
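
As a simple illustration, the sketch below reports a primary metric (newsletter signup rate) alongside guardrail metrics (conversion rate and revenue per visitor) for each variation. All figures are invented for the example.

  # Report the primary metric alongside guardrail metrics per variation.
  # All figures below are illustrative.
  variations = {
      # name: (visitors, newsletter signups, transactions, revenue in £)
      "control": (10_000, 400, 210, 18_900.0),
      "variation": (10_050, 520, 195, 17_400.0),
  }

  for name, (visitors, signups, transactions, revenue) in variations.items():
      print(
          f"{name:<10} signup rate {signups / visitors:.1%}  "
          f"conversion {transactions / visitors:.1%}  "
          f"revenue/visitor £{revenue / visitors:.2f}"
      )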

That said, showing restraint when tracking goals is also necessary. Track too many, and you could find yourself in analysis paralysis – having too much information to make a clear decision.

Implementation of winning variations and feeding back learnings

If the experiment you have created has produced a statistically significant increase in conversion, it stands to reason that you will want that variation implemented on your site as soon as possible.

There are two options to achieve this:

  1. Getting your development team to hard code the change on your site
  2. Going for the ‘soft’ implementation option and making the changes via your A/B testing software itself

Soft implementation should only ever be a temporary option. It is by no means a long-term fix, and it may cause search engines to penalise you for cloaking – showing different content to visitors and to search engines. If you want to make a permanent change, it must be hard coded.

If your experiment has not panned out the way you had hoped – perhaps your variation did not perform as well as the original – all is not lost. As we mentioned before, so much of CRO and testing is about accumulated learning. Whatever you have learned from your test can be fed back into the testing plan to inform further experiments.

Post-experiment segmentation

You can gain important insight from post-experiment segmentation, which allows you to drill down into the experiment data and understand how different segments of the participants behaved.

Most experiments declare a winner when one variation displays a higher conversion rate or average revenue than the original. However, this tells you only which variation converted best on average – and that is just half the story.

For example, Variation A might look like it converts 10% better overall than all other variations. Post-experiment segmentation might reveal that it actually converts 20% worse for iPad users – and better than the 10% average on other devices.

With this knowledge, you might choose to serve iPad users a different variation than other devices, generating an even better conversion rate overall.
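
As a simple illustration of how such a segment breakdown might be produced, here is a minimal sketch in Python; the visitor records are invented for the example.

  # Break experiment results down by (variation, device) segment.
  # The records below are illustrative, not real experiment data.
  from collections import defaultdict

  # One record per participant: (variation, device, converted?)
  visitors = [
      ("A", "ipad", False), ("A", "desktop", True), ("A", "desktop", True),
      ("B", "ipad", True), ("B", "desktop", False), ("B", "ipad", True),
  ]

  totals = defaultdict(lambda: [0, 0])   # (variation, device) -> [conversions, visitors]
  for variation, device, converted in visitors:
      totals[(variation, device)][0] += int(converted)
      totals[(variation, device)][1] += 1

  for (variation, device), (conv, n) in sorted(totals.items()):
      print(f"Variation {variation} on {device:<7}: {conv}/{n} = {conv / n:.0%}")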

This data can again be fed into subsequent experiments and can be used to help you serve tailored content to specific audiences.

Remember that no website is ever fully optimised, so it is important not to rest on your laurels. Things change over time, and your winning variation may need revisiting further down the line to ensure its long-term validity.