Conversion Optimisation: choosing the right method for testing your website.

Ben

Over the past few months I’ve been really getting to grips with the world of Conversion Optimisation Testing - the process of testing to improve the performance of a landing page or conversion funnel with a view to increasing leads, sales or completed objectives.

Conversion optimisation can be a tricky subject, with many caveats. The field is littered with recommendations, best-practice guidelines and suggestions for how to maximise the conversion potential of a website. Additionally, there is no shortage of tools and packages designed to empower designers, developers and online marketers to collect and analyse data with a view to making informed changes to key landing pages and conversion funnels. With this plethora of resources the process can feel daunting and a little overwhelming; trying to establish where to begin and which methods to use can seem like an impossible task, which is why I’m writing this blog post.

In part this post is a way to share and formalise my findings over the coming months, and in part a way to provoke some discussion on best practices and the methods preferred by other people.

Conversion Optimisation testing offers the opportunity to test a website in a live environment. This is an advantage over the alternative approach, whereby a new page or section simply replaces an old one and the impact is monitored afterwards. Live testing means all versions are subject to the same market conditions because they run at the same time, enabling us to limit the impact of other factors which may affect the test.

There are a variety of methods recommended for Conversion Optimisation Testing, and the truth is the right one depends entirely on the fundamentals of your given project. However, two predominant methods are discussed again and again as the most effective.

A/B (Split) Testing

The first and simplest is known as “A/B” testing. In its simplest form, a separate, second variation of the page being tested is created with various alterations from the control page (the original page). Traffic is then split between the original version (Version A) and the new version (Version B). The performance of the two pages is monitored for a period of time and the page which performs best is selected. At this point another variation may be created (Version C) and traffic is split between the winner of the initial test and the newest version of the page. This process typically continues until all logical variations have been created and tested, with each new variation taking into account what was learned from the test which preceded it. Once all logical page variations have been tested, the highest performing page is selected and permanently replaces the original.

Different tools allow you to conduct split testing in slightly different ways. It is common to create more than one test variation: if a site receives enough traffic it may be possible to create two or three additional variations of the control page (test versions A/B/C/D) and split the traffic between all of them equally. The number of variations which can be tested at once typically depends on how much traffic the website receives day to day, as creating multiple variations on a low-traffic website means statistically useful results take a long time to accumulate.

Google Website Optimiser is a common tool for A/B testing and takes a more structured approach: it allows us to create two variations in a single experiment, and once a winner is found, a new variation (C) can be tested in a second experiment against the winner of the first, and so on.
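The mechanics of a simple split can be sketched as follows. This is a generic illustration, not how Google Website Optimiser works internally; the function names and the hash-based bucketing are my own assumptions for the sake of the example.

```javascript
// Minimal sketch of an A/B split: bucket each visitor into A or B,
// tally conversions, and compare conversion rates per variation.

// Deterministically assign a visitor to "A" or "B" from their id,
// so the same visitor always sees the same variation.
function assignVariation(visitorId) {
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple string hash
  }
  return hash % 2 === 0 ? "A" : "B";
}

// Tally visits and conversions for each variation.
function summarise(events) {
  const stats = {
    A: { visits: 0, conversions: 0 },
    B: { visits: 0, conversions: 0 },
  };
  for (const { visitorId, converted } of events) {
    const variation = assignVariation(visitorId);
    stats[variation].visits += 1;
    if (converted) stats[variation].conversions += 1;
  }
  return stats;
}
```

In practice the testing tool handles the bucketing and persistence for you; the point is simply that every visitor is routed consistently to one variation and counted against it.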

Multivariate (MVT) Testing

The second method for testing and improving website conversion is Multivariate (MVT) testing, a process by which several components of a landing page are tested at once. The simplest way to think of Multivariate testing is as several A/B/C/D tests being performed on a single page at the same time. For example, you might create four variations of the page header, two variations of the lead website image, three variations of the sidebar section, and two different colours for the primary CTAs (calls to action). These variations are then tested in different combinations: a user may land on the page and see headline 1, lead image 2, sidebar section 1 and the second call-to-action colour, or some other combination. This effectively means that with Multivariate testing it is possible to test a potentially limitless number of combinations at any given time, the only real limitation being the time it takes to get a statistically valid sample of visitors to each combination. As the test progresses we disable underperforming combinations in favour of the high-converting ones.

Multivariate testing is more granular than A/B testing in that it tests specific elements on a page, whereas A/B testing requires that an entirely different variation of the page be created. Multivariate testing allows us to identify specifically which elements are acting as a blocker or an enabler to conversion, and in which combination they work best; unlike A/B testing, it lets us pinpoint exactly which element an improvement in conversion can be attributed to.

Many popular websites adopt a continual, 24/7 MVT testing strategy, and Amazon is one of the most well known. They continually test and re-test various elements of their website, which is why the Amazon site evolves gradually: for the user there is never a visibly drastic change to the layout of pages or the design of page elements, but over time the site changes to include the highest performing combination of tested elements.
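The number of combinations grows multiplicatively, which is worth seeing concretely. A small sketch using the example above (four headers, two lead images, three sidebars, two CTA colours); the variation names are placeholders:

```javascript
// Each defined area of the page has its own set of variations.
const areas = {
  header: ["h1", "h2", "h3", "h4"],
  leadImage: ["img1", "img2"],
  sidebar: ["s1", "s2", "s3"],
  ctaColour: ["green", "orange"],
};

// Total combinations = product of the variation counts: 4 x 2 x 3 x 2 = 48.
const totalCombinations = Object.values(areas)
  .reduce((product, variations) => product * variations.length, 1);

// Enumerate every combination (cartesian product of the areas).
function combinations(lists) {
  return lists.reduce(
    (acc, list) => acc.flatMap(combo => list.map(item => [...combo, item])),
    [[]]
  );
}

const all = combinations(Object.values(areas)); // all.length === 48
```

Even this modest example produces 48 page combinations, each of which needs a statistically valid sample of visitors, which is why MVT suits high-traffic sites.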


Deciding which method of testing to use on your website

A/B Testing

A/B testing is the easiest method of conversion optimisation testing, as the implementation of this type of test is so simple: two variations of a page are created and traffic is literally split 50/50 between the two. There are also fewer technical implications of testing in this way. Typically, cookies are used to maintain a consistent user experience: once a user has seen a particular variation of a page, that is the variation they will see if they leave the site and return at a later date on the same computer and web browser. Google Website Optimiser offers a pain-free method for implementing this type of test, requiring only the addition of the following code to the pages being tested:

    • Two JavaScript snippets on the original page: one to redirect a share of visitors to the other variations, and one to measure the number of times the page was seen.
    • One JavaScript snippet on each variation page to measure the number of visitors viewing each page.
    • One JavaScript snippet on the conversion page to measure which visitors converted.

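The cookie-based consistency mentioned above can be sketched in a few lines. A plain object stands in for `document.cookie` here so the logic is easy to follow; this is an illustration of the general technique, not Google Website Optimiser's actual code.

```javascript
// If the visitor already has a stored variation for this experiment,
// reuse it; otherwise pick one at random and persist the choice, so
// the same visitor sees the same variation on every return visit.
function getOrAssignVariation(cookieJar, experimentId, variations) {
  const key = `exp_${experimentId}`;
  if (!(key in cookieJar)) {
    const index = Math.floor(Math.random() * variations.length);
    cookieJar[key] = variations[index]; // persist the choice
  }
  return cookieJar[key];
}

// Usage: the first call assigns a variation, later calls return the same one.
const jar = {};
const first = getOrAssignVariation(jar, "homepage", ["A", "B"]);
const second = getOrAssignVariation(jar, "homepage", ["A", "B"]);
// first === second
```

In a real browser the choice would be written to `document.cookie` with an expiry, which is why the consistency only holds for the same computer and browser.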

Another significant benefit of testing in this way is the seamless integration with Google Analytics. As two separate pages are being created, a report can be generated within Google Analytics comparing the two pages after an appropriate amount of time has passed, allowing us to see not only whether the page’s conversion rate has increased, but also whether other metrics such as Bounce Rate or Time on Page have improved as well.

Advantages of A/B testing

In terms of flexibility, A/B testing again has great benefits regarding the design and functionality of a page. Because we are creating a completely new version of the page, we are not limited or restricted by other elements which would need to be considered when testing combinations in an MVT experiment. Because an A/B test usually involves two (or more) separate and unique variations, the margin of difference between the performance of the variations is typically much greater, and a winning variation is usually identified quickly. Faster results mean less time testing and more time for your winning variation to start converting visitors.
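The link between a bigger difference and a faster test can be made concrete with a standard sample-size approximation for comparing two conversion rates (95% confidence, 80% power). The exact figures are illustrative, not from the original post:

```javascript
// Approximate visitors needed per variation to reliably detect a
// difference between conversion rates p1 and p2.
function sampleSizePerVariation(p1, p2) {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(
    Math.pow(zAlpha + zBeta, 2) * variance / Math.pow(p1 - p2, 2)
  );
}

// A bold redesign (5% -> 8%) needs just over 1,000 visitors per
// variation, while a subtle tweak (5% -> 5.5%) needs around 31,000:
const bold = sampleSizePerVariation(0.05, 0.08);
const subtle = sampleSizePerVariation(0.05, 0.055);
```

This is why whole-page A/B variations, which tend to differ dramatically, usually reach a verdict sooner than fine-grained element tests.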

The simplicity of this method also means that permanently switching from one variation to another is a simple process, requiring only the removal of the GWO JavaScript snippets. Because of these benefits, A/B testing is almost always ideal for relatively low-traffic websites or websites where conversion levels are minimal, such as blogs or educational sites.

A/B testing also makes testing conversion funnels very simple, but potentially a little hard to manage. Traffic can be split between two pages, which then lead down different conversion funnels to a shared conversion goal page. This is simple to implement, but has potential issues regarding the resource required to develop entirely separate funnels. Furthermore, changing elements along the conversion funnel makes it even more difficult to identify exactly what increased or decreased conversion rates. For example, something which increases conversion at the beginning of funnel A may be countered by something which decreases conversion at the end of funnel A; with a simple A/B test this would be difficult to measure.
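The funnel problem above comes down to simple arithmetic: end-to-end conversion is the product of the step-to-step rates, so a gain at one step can be silently cancelled further down. A small sketch with made-up rates:

```javascript
// Overall funnel conversion is the product of each step's conversion rate.
function funnelConversion(stepRates) {
  return stepRates.reduce((acc, rate) => acc * rate, 1);
}

// Funnel A's first step converts better (60% vs 50%), but a weaker
// final step leaves it worse overall:
const funnelA = funnelConversion([0.6, 0.5, 0.2]); // 6% end-to-end
const funnelB = funnelConversion([0.5, 0.5, 0.3]); // 7.5% end-to-end
```

Watching only the first page of each funnel would crown funnel A; only the shared goal page reveals that funnel B actually converts more visitors.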

MVT Testing

Many different factors have an impact on website conversion rates, and this is where A/B testing is lacking: simply testing one page against another doesn’t offer enough specific information about what is helping or hindering a visitor’s path to conversion. It might be an unclear CTA, or an off-putting image acting as a barrier. Creating a new version of the page may well improve conversion, but the downside is that we haven’t learnt which elements of the original weren’t working and, more importantly, which elements of the new version are.

MVT testing allows for a more refined strategy, because we can make much smaller tweaks and changes and then test and analyse their impact; more significantly, we can also test which elements work well in combination and which might be working against each other. For lower-traffic websites, minor changes to colours, headings and images may not have a measurable impact, which is why simple A/B testing is a viable alternative there. For websites with a high volume of traffic, however, MVT Testing is the preferred method. This factorial process of testing is carried out by defining key areas of a page, within which elements can be alternated independently of the other defined areas. For example, the defined areas might include, but are not limited to:

    • The site header image
    • The sidebar
    • A primary call to action
    • The H1 text
Advantages of MVT Testing

Because MVT Testing allows us to test a potentially limitless set of combinations, it is paramount that a structured approach is taken and a testing plan developed. The total number of combinations should be weighed against the average daily traffic to the website; doing so will ensure that you see statistically relevant results in a reasonable timeframe, whereas failing to do so may result in wasted effort and a test which runs for months without producing a winning combination. If the correct approach is taken, the information gathered from this sort of experiment is invaluable, offering insight into how your users think and interact with your site, and how different elements within your website work together.
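The combinations-versus-traffic trade-off is easy to estimate up front. A rough sketch, assuming traffic is split evenly across combinations and each combination needs a minimum sample (the figures are illustrative):

```javascript
// Estimate how many days an MVT test needs to run before every
// combination has been seen by enough visitors.
function estimatedTestDays(numCombinations, visitorsPerCombination, dailyTraffic) {
  const totalVisitorsNeeded = numCombinations * visitorsPerCombination;
  return Math.ceil(totalVisitorsNeeded / dailyTraffic);
}

// 48 combinations x 1,000 visitors each on 2,000 visitors/day: 24 days.
const busySite = estimatedTestDays(48, 1000, 2000);
// The same test on 500 visitors/day drags on for over three months:
const quietSite = estimatedTestDays(48, 1000, 500);
```

Running this kind of back-of-the-envelope calculation before launching the test is exactly the planning step the paragraph above argues for.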


Technically, an MVT experiment is more of a challenge to implement than an A/B test, relying more heavily on JavaScript:

    • An initial JavaScript snippet in the page header
    • A JavaScript snippet for each individual element on the page that will be tested
    • An additional JavaScript snippet on the conversion page


Unlike an A/B test, all variations and combinations are assembled on the fly within a single page, so thought must be given to the impact of swapping elements in and out of the defined areas, and how (if at all) they will work together. This seems simple enough when swapping one image for another, but consider moving a primary CTA from one defined area to another: in some cases this could produce a combination with two CTAs, and another with none. It is therefore important to have a plan in place to ensure that only the combinations you want to test are displayed to users.
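Screening out unwanted combinations can be done programmatically. A sketch of the two-CTA problem described above, with placeholder element names of my own:

```javascript
// Two areas of the page can each hold either the CTA or a plain banner.
// Keep only combinations where the CTA appears exactly once.
function validCombinations(areaA, areaB) {
  const combos = [];
  for (const a of areaA) {
    for (const b of areaB) {
      const ctaCount = [a, b].filter(x => x === "cta").length;
      if (ctaCount === 1) combos.push([a, b]); // exactly one CTA on the page
    }
  }
  return combos;
}

// Of the four raw combinations, only two are worth testing:
const combos = validCombinations(["cta", "banner"], ["cta", "banner"]);
// [["cta", "banner"], ["banner", "cta"]]
```

Whether this filtering happens in your own code or in the testing tool's configuration, the point is to decide the allowed combinations before the experiment goes live.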

Other complexities of an MVT test include integration with Google Analytics. Different tools offer different levels of analytical reporting; in the case of Google Website Optimiser the built-in reporting is minimal, so integration with Google Analytics is required for more comprehensive analysis. The most documented method for integrating a Google Website Optimiser MVT test with Google Analytics is to use custom variables to identify combinations, which are then fed into Google Analytics. This is a technical implementation which raises real questions about the impact on existing website tracking, and as such it should be implemented with care and consideration.

To summarise, the key benefits of each method of testing are outlined below:


Multivariate Testing

    • More specific testing
    • Allows for “fine tuning” of a page
    • Requires a HIGH level of traffic
    • Fewer design resources required in the long term
    • More technical implementation
    • More granular results: learning exactly what works and what doesn’t at a page-element level
    • More informed, scientific results

A/B Testing

    • Simple Version A vs Version B testing
    • Not designed to identify minor areas where conversion could be improved
    • Requires a Medium – High level of traffic
    • Simple implementation
    • Typically yields a clearer winner, with more dramatic results
    • Typically takes less time than MVT testing
