Use A/B testing (split tests) to make informed decisions about your website based on data, not assumptions.
Our A/B test philosophy is simple: we keep the winning solutions and learn from the tests that don't make the grade.
Implementing CRO experiments online involves turning your hypotheses into designed and developed solutions, split testing the variants and measuring the outcome. You’ll keep the winners and learn from the others. By A/B testing (split testing) your UX changes in this way, you’ll know you are accumulating positive changes for your website and you’ll learn a lot about user behaviour along the way.
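The split itself is normally handled by your testing tool, but the underlying idea can be sketched in a few lines: assign each visitor to a variant deterministically, so a returning user always sees the same version and traffic divides evenly. A minimal Python sketch (the function and variant names are illustrative, not any particular tool's API):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "variant-b")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing user_id together with the experiment name gives a stable,
    roughly even split, so a returning visitor always sees the same
    version, and different experiments split independently.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Same user + same experiment always yields the same variant
print(assign_variant("user-123", "homepage-hero"))
```

In practice the testing platform does this for you (often client-side via its tag), but the principle is the same: stable, unbiased assignment.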
Once you’ve developed your CRO strategy and testing roadmap and you have hypotheses backed up by data and user insights, it’s time to start generating ROI by deploying and measuring solutions with A/B testing.
Every organisation has a set of unique conversion optimisation challenges, and no two are the same. We have experience working with the industry's leading conversion optimisation platforms. Talk to us about your conversion challenges and we'll help you find the right solution.
Dinah Alam, head of conversion services
Our simple, yet effective approach to ongoing conversion testing helps our clients with their optimisation goals.
We utilise an iterative test-and-learn cycle to ensure we generate findings, continuously feeding the learnings from each test back into the test plan to improve it.
Our conversion team can deliver our end-to-end process as a fully managed service or work with your in-house team, tailoring our service around your existing skillset. We adapt our way of working to suit your business; the choice is yours. We work across multiple sectors, including eCommerce, not-for-profit, travel, B2B and SaaS.
Our methodology of implementing a successful A/B test programme is tried and tested.
Our simple four-step process consistently delivers results for our clients.
Step 1. Selecting the correct conversion test tool
We work with you to evaluate which CRO and personalisation tool best fits your budget, and we work with your team to get the tool deployed and integrated on your site. Deploying a test tool is simple and normally only requires that a ‘tag’, a single line of code, is added to your website to get going.
Step 2. Creating variant or challenger designs
We work with your team to develop visual solutions to the challenge presented by a test hypothesis; this is a simple process for smaller tests. For larger tests, we bring our experience design expertise to the process with ideation workshops and wireframe prototypes. If necessary, we carry out user testing or use other forms of user research, such as tree testing, to gain further insight.
Step 3. Building and deploying tests
Once the test visuals are signed off, and test requirements are fully documented, the development of tests begins.
The development of tests includes writing front-end code, setting up testing and targeting conditions in the tool, and tracking primary and secondary KPIs across analytics platforms.
Before launch, we undertake rigorous Quality Assurance testing of the solution across relevant browsers and devices. You receive fully tested preview links to sign off, and then your test is ready to launch.
Step 4. Measuring and understanding impact
We monitor tests to ensure everything is functioning and that user data is registering against the variants, with all correct analytics events and goals firing. Close monitoring continues until a test reaches statistical significance against primary goals.
Statistical measurement ensures the results are not just down to random chance. We deep-dive every test: winners, losers and inconclusive results are all gold dust in terms of data and learnings. We segment data to better understand how audiences reacted, and assess click behaviour to understand changing user behaviours.
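The "not just down to random chance" check can be illustrated with a standard two-proportion z-test, the kind of calculation most testing tools run behind the scenes when reporting significance. A minimal Python sketch using only the standard library (the conversion figures in the example are made up for illustration):

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int,
                          conv_b: int, n_b: int):
    """Two-proportion z-test: is the variant's conversion rate
    significantly different from the control's?

    Returns the z statistic and a two-tailed p-value.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative figures: control converts 200/5000, variant 260/5000
z, p = two_proportion_z_test(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 → significant at 95% confidence
```

A p-value below the chosen threshold (commonly 0.05) means a difference this large would be unlikely if the variants truly performed the same, which is the point at which a result can be called.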
Once we have properly understood the impact of the test, we make our onward recommendation, be it full deployment of the winning variant or further refinement.
User testing frequently asked questions
Q. How does user testing work?
User testing involves observing participants from your target audience using your website. The tactic is often task-based, meaning that users will attempt to perform a specific action on the website, such as finding a particular article, making a purchase or registering for a service.
Q. What does user testing do?
User testing enables you to understand better how ‘real’ users interact with your website. The process allows you to base your website design and development decisions on feedback from the people who will be using the website, rather than just basing changes on the whims of internal stakeholders.
In short, user testing removes unqualified opinions from the web design and development process.
Run correctly, user testing will lead to a website that is both usable and able to meet the needs of your users.
Q. What are the different types of user testing?
Broadly speaking, user testing falls into three types: remote unmoderated testing, remote moderated testing and in-person moderated testing.
During remote unmoderated testing, users work through a series of tasks in their own home, in their own time, with no moderator present. The output from these sessions is videos of the user attempting to complete the tasks while thinking aloud.
Remote moderated user testing involves participants being taken through a list of tasks by an experienced facilitator. Using audio and screen-sharing software such as Skype or Zoom, the sessions allow two-way communication between people in different locations. The output from these sessions is videos of the user attempting to complete the tasks while discussing their thoughts with the moderator.
In-person moderated user testing is live and takes place in a test ‘lab’. A good lab is often a relaxed lounge-style environment where users feel at ease. As with remote moderated testing, participants are taken through a list of tasks by an experienced facilitator. The output from these sessions are videos of the user attempting to complete the tasks while discussing their thoughts with the moderator.
Q. How much does user testing cost?
The cost of user testing depends on several factors. The first consideration is whether you are going to use any paid tools. While there are paid tools available to help with user testing, in most cases free tools will suffice.
Another consideration is how you are going to recruit the right type of users. If you have access to these people yourself, then there may be minimal cost involved (beyond the likely need for an incentive payment for each participant). If you need assistance in recruiting specific users, though, then there are likely to be costs, potentially of up to £100 per user, if you use the services of a third-party recruitment company.
Supplementary costs will include the time required to run testing and how you value that time. This will, of course, depend primarily on how many tests you’re looking to run.
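As a rough illustration of how these costs add up, here is a sketch of a per-participant estimate. Only the ~£100 recruitment fee comes from the figures above; the incentive, session length and hourly rate are placeholder assumptions you should replace with your own:

```python
def user_testing_cost(participants: int,
                      recruitment_fee: float = 100,   # ~£100/user via a recruiter (from above)
                      incentive: float = 50,          # assumed incentive per participant
                      session_hours: float = 1.0,     # assumed session length
                      hourly_rate: float = 75) -> float:
    """Rough total cost of a round of moderated user testing, in GBP.

    All defaults except recruitment_fee are illustrative assumptions.
    """
    recruitment = participants * recruitment_fee
    incentives = participants * incentive
    facilitation = participants * session_hours * hourly_rate
    return recruitment + incentives + facilitation

# Five externally recruited participants under these assumptions
print(user_testing_cost(5))
```

Five to eight participants per round is a common rule of thumb for moderated testing, so even with external recruitment a round often stays in the low four figures.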
Q. How do user testing and usability testing differ?
The terms 'usability testing' and 'user testing' are often used interchangeably. Different user researchers will generally have different definitions of each so the answer may depend on who you are working alongside.
The main difference between the two is that usability testing is focused primarily on whether users can 'use' a website. In contrast, user testing tends to be broader, covering user needs and opinions. Researchers use user testing to understand whether there is a need for a new piece of functionality or to get thoughts on new ideas.
Q. Why should I run user testing?
Researchers employ user testing to understand better how your target audience interacts with your website or app.
If you are a website owner, you’re likely to be one of the worst people to make design and development decisions as you are too close to the product. Ensuring that your website or app works for your target audience, and meets their needs, makes perfect sense as you can be confident that decisions made based on user testing will perform well when launched.
Reasons for working with Fresh Egg
- Our conversion experts are on hand to provide expertise across the entire A/B testing process
- We help you choose an optimal testing solution
- Support in selecting and prioritising which tests to run
- Collaboration on solution ideation, wireframing and visual design
- Support for building the front-end coding of test variants
- Implementation of goal tracking, analytics integration, targeting, QA and monitoring
- Easy-to-understand dashboards to report on test performance
- Detailed learnings from each test to refine the test plan, remembering that all tests (winners and losers alike) provide insight
Read our CRO case studies
Learn how to inject growth into plateauing CRO test programmes
Watch the discussion where the panel covered a range of CRO topics, including common performance plateau scenarios, the local vs global maxima concept, how to use different techniques to generate new ideas to impact your metrics, and iteration: testing to your different segments.