
The Six Steps of Running a Successful CRO Campaign




When attempting conversion rate optimisation (CRO), how do you know what to optimise? No matter how big or small your website, the list of testing opportunities is almost never-ending. Every form field, discount code box, call to action, coloured button, checkout page, confirmation email – any of these things and more can be tested to ensure you’re getting the most out of the visitors to your site.

Before beginning your CRO campaign, one of the most important things to note is that there’s no definitive list of elements that should or should not be tested. The testing plan created for your own website must therefore be unique: it will depend on where the key issues have been found across your site and the impact they have on visitor pathways through it. The issues (and solutions) you find are likely to fall into one of two categories:

1. Removal of blockers – removal of an issue that is preventing a user from converting. For example:

  • Forms that do not function correctly
  • Prices that are too high
  • Messaging or other elements that might cause users to distrust a site, such as bad reviews or source code being visible on the page
  • Limited checkout options
  • Not delivering to particular areas of your customer base

2. Opportunities to be more persuasive – making the most of an opportunity to convince your users to convert. For example:

  • Making USPs more prominent
  • Offering guarantees
  • Adding trust signals, such as good reviews
  • Providing social proof
  • Using better imagery

But how do you go about finding out what the most prominent issues are with your site? The latest Econsultancy Conversion Rate Optimisation Report found that companies that have a structured and methodical approach to CRO are twice as likely to see an increase in conversions as those that do not. The checklist below gives a top-line template to help structure your testing program.

Step one: audit your analytics

To create useful and actionable insight, split testing needs reliable and accurate data. For the most part, when carrying out a split test as part of a CRO campaign, you are committing to making business decisions that will have an effect on your bottom line (hopefully a positive one). If the results you get are based on flawed data, then your business decisions will also be flawed.

Ensuring the data being tracked in your analytics package (Google Analytics, for example) is accurate is about more than ensuring the implementation is correct in the first place. The right goals and events must also be tracked. As an agency, it’s also vitally important for us to conduct interviews with key stakeholders in the business we’re carrying out the CRO work for. Only by fully understanding what is important to a business can we begin to think about creating hypotheses (see step four) that will help contribute to that aim.

Step two: quantitative site performance analysis

While it is true that CRO experiments can be run on a website’s homepage, or a single page in the checkout process, the changes that have the largest impact often lie across multiple pages – category pages, for example. Likewise, issues that have a negative effect on conversions can often be found across multiple pages – perhaps the pricing text is too small on a site’s product pages, for example. This highlights the importance of considering the whole user journey, rather than isolated pages.

After grouping pages into page types or templates, you can then begin to look at the site ‘leakage’. This is the amount of revenue potentially lost by people dropping out of the conversion path at each page type. To work out the leakage of a particular page type, perform the following calculation:

Leakage calculation: leakage = unique page exits × average page value
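As a sketch, the leakage formula above could be applied per page type like this (the page-type names and figures are hypothetical, purely for illustration):

```python
# Hypothetical per-page-type data pulled from an analytics package.
page_types = [
    {"name": "category", "unique_exits": 4200, "avg_page_value": 1.85},
    {"name": "product", "unique_exits": 6100, "avg_page_value": 3.40},
    {"name": "basket", "unique_exits": 950, "avg_page_value": 12.75},
]

def leakage(unique_exits, avg_page_value):
    """Estimated revenue lost at a page type: unique exits x average page value."""
    return unique_exits * avg_page_value

for pt in page_types:
    pt["leakage"] = leakage(pt["unique_exits"], pt["avg_page_value"])

# Rank page types by leakage to decide where to focus qualitative analysis.
for pt in sorted(page_types, key=lambda p: p["leakage"], reverse=True):
    print(f'{pt["name"]}: £{pt["leakage"]:,.2f}')
```

Sorting by the result gives a rough shortlist of where to dig deeper – subject to the caveats below about page types where leakage is misleading.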

While step three (qualitative performance analysis) should not just be limited to the sections highlighted by the quantitative analysis at this stage, it can help prioritise if time is limited. It gives you focus as to where to look for issues.

Leakage calculations should be used with care as they do not work for all page types. Product pages, for example, are notoriously difficult to assess using a leakage calculation, especially when aggregated. A page might look like it has very high leakage, but only when you map out the whole user journey to that page might you realise that people are dropping out because of something that is happening earlier.

For example, say the product price stated on the page with high leakage differs from the price stated on all other pages in the journey. Rectifying this may involve actually changing prices on the surrounding pages. With this in mind, it’s important to assess certain focus pages or templates not just in isolation but in relation to the pages around them. It is also important to understand where the natural exit points of a website are. Some pages, such as the shopping cart, will naturally have a higher level of leakage.

Step three: qualitative site performance analysis

Data tells you a lot, but does not necessarily tell you everything. To carry out truly effective CRO, you need to have a solid understanding of the different journeys users may take throughout your site, which is what we mean when we say ‘qualitative analysis’.

By using techniques such as heat-mapping, user testing, visitor surveys, competitor analysis and those key stakeholder interviews we mentioned earlier, a better understanding of the user journey from first touch point to conversion is built up.

The results of this analysis, and the earlier quantitative analysis, are then used to formulate and record plausible and informed hypotheses.

Step four: creation of hypothesis log and prioritisation of experiments

By creating a hypothesis log, taking into account the ease of testing, ease of implementation and predicted impact of each hypothesis, you can prioritise which experiments to run first.

For example, your analysis may lead you to believe that people are being distracted from converting by the number of banners being presented to them when they view their basket. In this case, your hypothesis would be something similar to the following:

“Removing the advertising banners from the ‘view basket’ page will reduce distraction and increase the number of people proceeding to the payment page and ultimately converting.”

This hypothesis would then be weighed up against others in terms of the cost of testing and implementation, plus the potential benefit its implementation is estimated to return. If the test is resource-light but could give tremendous revenue uplift, it will be near the top of the list. If the test will cost a fortune to implement and is unlikely to make a huge amount of difference it will be a much lower priority.
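One simple way to sketch this weighing-up is a score that divides predicted impact by the combined effort of testing and implementation. The hypothesis names and 1–10 scores below are invented for illustration; your own log would use whatever scale and criteria suit your team:

```python
# Hypothetical hypothesis log: higher impact and lower effort raise priority.
hypotheses = [
    {"name": "Remove basket banners", "impact": 8, "test_effort": 2, "build_effort": 1},
    {"name": "Redesign checkout flow", "impact": 9, "test_effort": 7, "build_effort": 9},
    {"name": "Enlarge pricing text", "impact": 4, "test_effort": 1, "build_effort": 1},
]

def priority(h):
    """Simple priority score: predicted impact per unit of total effort."""
    return h["impact"] / (h["test_effort"] + h["build_effort"])

# Resource-light, high-impact tests float to the top of the list.
for h in sorted(hypotheses, key=priority, reverse=True):
    print(f'{h["name"]}: {priority(h):.2f}')
```

Here the cheap, high-impact banner test outranks the expensive checkout redesign, mirroring the prioritisation logic described above.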

Additional benefits of prioritising hypotheses

Working on the most potentially beneficial hypotheses first is not just about making your money and efforts work most effectively (although that is clearly beneficial). It is also about generating buy-in from your clients and colleagues.

Securing quick wins – successfully carrying out those experiments that require few resources and take minimal time to build, but have a high impact on the business – helps engage stakeholders, particularly those who might be pessimistic. People are naturally more likely to believe in a practice that provides a return on the first try, rather than something that ‘fails’ for the first three experiments.

That said, conversion optimisation is not about an endless stream of wins. Throughout your CRO campaign, you are likely to have a number of experiments that end with inconclusive or negative results. That does not mean they have no value – from each experiment you gain valuable insight into your audience, market and website, which will ultimately help you build better and stronger experiments in the future.

As Matt Althauser, GM Optimizely EMEA, says:

“The true value of testing is the accumulative learning over time.”

Step five: live testing

As we discussed, the first hypotheses to be tested will be those that are likely to offer the greatest return on investment.

To conduct a test, the proposed variations will need wireframing and building, before a percentage of your site traffic is directed to that variation (or variations). There are a number of tools, Optimizely among them, that can be used to manage your testing program.

The length of time a test will run will depend on various elements, including:

  • The volume of traffic to the areas of the site being tested
  • The typical purchase cycle related to what is being tested
  • The difference in conversion rate between the variations being tested

Experiments should only be concluded when the results are statistically significant. Statistical significance indicates how unlikely it is that the observed difference between variations is due to chance alone. The widely accepted convention is that experiments should reach at least a 95% significance level in order to be considered conclusive.

Before Fresh Egg concludes an experiment, we require it to reach that minimum of 95% statistical significance. As well as this, the length of the buying cycle and any relevant buying trends are also taken into consideration when deciding the minimum amount of time a test needs to run for.
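For a conversion-rate experiment with one original and one variation, significance is often checked with a two-proportion z-test. The sketch below uses only the Python standard library and a normal approximation; the visitor and conversion counts are hypothetical:

```python
import math

def significance(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-tailed two-proportion z-test; returns (z statistic, p-value)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no real difference.
    p = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the normal distribution, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: original (A) vs variation (B).
z, p = significance(200, 10000, 260, 10000)
print(f"z = {z:.2f}, p = {p:.4f}, significant at 95%: {p < 0.05}")
```

A p-value below 0.05 corresponds to the 95% significance threshold mentioned above; in practice most A/B testing tools perform this kind of calculation (or a more sophisticated one) for you.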

Which metrics should you track?

In terms of measurement, it is not effective to just track the conversion rate of your whole site or a particular channel – you need to keep an eye on other key metrics too.

Ultimately, it all comes down to campaign objectives. Make sure you measure whichever key metric is associated with the primary objective of your campaign, but also keep an eye on other top-line metrics, like conversion and revenue. For example, if you are running a campaign with the objective of increasing newsletter signups, you still need to track transactions and revenue to ensure your variations do not have a negative impact on any of them.

That said, restraint when tracking goals is also necessary. Track too many and you could find yourself in analysis paralysis – having too much information to make a clear decision.

Step six: implementation of winning variations and feeding back learnings

If the experiment you have created has provided a statistically significant increase in conversion, then it stands to reason that you will want to get that variation implemented on your site as soon as possible. This could mean either getting your development team to hard code the change on your site, or going for the ‘soft’ implementation option, and making the changes via your A/B testing software itself.

Soft implementation should only ever be a temporary option. It is by no means a long term fix and may cause penalisation by search engines for cloaking content – showing different content to visitors and search engines. If you are looking to make a permanent change, it must be hard coded.

If your experiment has not panned out the way you had hoped – perhaps your variation did not perform as well as the original – all is not lost. As we mentioned before, so much of CRO and testing is about accumulated learning. Whatever you have learned from your test can be fed back into the testing plan to inform further experiments.

Post-experiment segmentation

You can gain important insight from post-experiment segmentation, which allows you to drill down into the experiment data and understand how different segments of the participants behaved.

Most experiments declare a winner when one variation displays a higher conversion rate or average revenue than the original. However, this only tells you which variation converted best on average, and therefore only tells you half the story.

For example, Variation A might look like it converts 10% better overall than all other variations. Post-experiment segmentation might show that Variation A converts 20% worse for all iPad users, and that on other devices it converts even better than 10%.

With this knowledge, you might choose to serve iPad users a different variation than is served to other devices, generating an even better conversion rate overall.
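A minimal sketch of post-experiment segmentation, assuming per-visitor experiment records (the records and segments here are invented for illustration):

```python
from collections import defaultdict

# Hypothetical per-visitor records: (variation, device, converted?)
records = [
    ("A", "desktop", True), ("A", "desktop", False), ("A", "desktop", True),
    ("A", "ipad", False), ("A", "ipad", False),
    ("B", "desktop", True), ("B", "desktop", False), ("B", "desktop", False),
    ("B", "ipad", True), ("B", "ipad", False),
]

def segment_rates(records):
    """Conversion rate for each (variation, device) segment."""
    totals = defaultdict(lambda: [0, 0])  # segment -> [conversions, visitors]
    for variation, device, converted in records:
        totals[(variation, device)][1] += 1
        if converted:
            totals[(variation, device)][0] += 1
    return {seg: conv / vis for seg, (conv, vis) in totals.items()}

rates = segment_rates(records)
for (variation, device), rate in sorted(rates.items()):
    print(f"{variation} / {device}: {rate:.0%}")
```

Breaking the headline conversion rate down by segment like this is what reveals cases such as the iPad example above, where the overall winner underperforms for one audience.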

This data can again be fed into subsequent experiments, and can be used to help you serve tailored content to specific audiences.

It is also important to not rest on your laurels – remember that no website is ever fully optimised. Things change over time and it may be that your winning variation will need revisiting further down the line to ensure its long term validity.

Effective CRO leads to making better business decisions. Find out more about our CRO services.

Want to find out more about conversion optimisation? Check out our other blog posts and case studies from Fresh Egg.

As ever, you can also contact us with any of your CRO-related questions.



