What is Conversion Rate Optimisation?


When people talk about conversion rates, all they are asking is how many people in a group did something. How many people signed up to your mailing list last month? If 1000 people visited your website and 20 of them signed up, then the conversion rate for your mailing list is 2% (20/1000 × 100).

Conversion rate optimisation is about changing aspects of your website so that more than 20 people sign up next month.
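A quick sketch of the arithmetic in Python (the numbers are the made-up ones from the example above):

```python
def conversion_rate(conversions, visitors):
    """Percentage of visitors who completed the goal."""
    return conversions / visitors * 100

# 20 sign-ups out of 1000 visitors -> 2.0%
print(conversion_rate(20, 1000))
```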

Paying too much attention to conversion rates can spell trouble because they are an easy metric to game. You could run a massive sale or promote fake reviews and your conversion rate might go up, but the long-term, sustainable growth of your business would not improve.

Why Is Conversion Important?

If you look at the larger picture, conversion rate optimisation is about understanding what people want so that you can give it to them. This means understanding what your customers care about, which words resonate with them, what emotional needs you’re helping them satisfy, and how your product fits into their lives.

The fact is nobody knows how to improve conversion. The best optimisation experts in the world have an in-depth understanding of the issues underlying most conversion problems, but that is not the same as knowing how to fix them. What works in one context may not work in another. Best practices get old, people get used to things, then they stop working. Lots of blue on your website used to imply trust and reliability. These days Facebook, the bluest site of them all, is taking lots of liberties with privacy, and the association is not what it was. Things change. The only way to find out if something will work for you is to test it.

How Do You Improve?

One way to improve conversion is to change things at random and see what sticks. This is how most people redesign their website. It sometimes works.

A more methodical approach is to start with a problem. Establish what the business objectives are. Then collect as much data as you can about how those objectives are currently being met.

Sit with the data and start looking for patterns. As insights emerge, you can form testable hypotheses. For example, if analytics show a difference in conversion across browsers, the insight is that there might be a functional problem on some browsers. The hypothesis is that fixing the issue will equalise conversion rates across browsers. This is a testable hypothesis.
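As a sketch of how that insight might surface, here is what segmenting conversion by browser could look like in pandas (the CSV file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical analytics export: one row per visit,
# with columns visit_id, browser and converted (1 or 0).
visits = pd.read_csv("visits.csv")

by_browser = visits.groupby("browser")["converted"].agg(
    visitors="count", conversions="sum"
)
by_browser["conversion_rate_%"] = (
    by_browser["conversions"] / by_browser["visitors"] * 100
)

# A browser with a noticeably lower rate is a candidate
# for a functional problem worth investigating.
print(by_browser.sort_values("conversion_rate_%"))
```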

The idea is to come up with as many hypotheses as possible. Then rank them by confidence, impact and ease. Prioritise the ones that you are most confident will have the largest impact with the least difficulty.
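A minimal sketch of that ranking step (the hypotheses and the 1–10 scores below are invented; adding the three scores is one simple way to combine them):

```python
# Each hypothesis gets a 1-10 score for confidence, impact and ease.
hypotheses = [
    {"name": "Fix broken checkout button on Safari", "confidence": 8, "impact": 7, "ease": 9},
    {"name": "Rewrite the homepage headline",        "confidence": 5, "impact": 8, "ease": 6},
    {"name": "Add testimonials to the pricing page", "confidence": 6, "impact": 5, "ease": 8},
]

for h in hypotheses:
    h["score"] = h["confidence"] + h["impact"] + h["ease"]

# Work through the backlog from the highest combined score down.
for h in sorted(hypotheses, key=lambda h: h["score"], reverse=True):
    print(f"{h['score']:>2}  {h['name']}")
```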

Implementing a test involves an AB testing framework. Google Optimize, VWO and Optimizely seem to be the main players. As a rule of thumb, 1000 hits is the minimum amount of traffic you need to run the most basic AB test. You can run multiple tests simultaneously, but you need at least 25 conversions per variant for the results to mean anything. If you have fewer than 1000 hits a month, there is no point AB testing because the results will be indistinguishable from chance.
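To make "indistinguishable from chance" a little more concrete, here is a rough two-proportion z-test using only the Python standard library (the visitor and conversion counts are invented):

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert the z score to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 500 visitors per variant, 12 vs 19 conversions: the p-value comes out
# around 0.2, so a gap this size could easily be random noise.
print(two_proportion_p_value(12, 500, 19, 500))
```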

So Where Do I Start?

The best place to start is to ask what the problem is. Once you know what to fix, you can start collecting data. Typically, this comes from three places:

The most abundant source of analytics data comes from integrating Google Analytics or a similar platform. You can also derive analytics data from mouse tracking, form testing and in-app search. A Chrome Lighthouse audit and WebPageTest are great sources for functional, performance and accessibility audits.

User testing can be in-person or remote (platforms like TryMyUi make this easy). You give people specific tasks to complete and then watch how well they do and where they get stuck. 5 to 15 people are more than enough because you hit diminishing returns fast. Session replays can also be useful, more to corroborate insights than as a primary source of information (you have little understanding of context and intention in a replay).

Finally, qualitative research encompasses things like pop-ups, polls, email surveys, customer interviews, public reviews and live chat transcripts (the data your support team is already collecting).

You decide which of these approaches you want to use based on the time and resources you have. I would argue that 3 months of analytics, a WebPageTest audit, 5 new user tests and 5 existing customer interviews are non-negotiable.

Once you have data, you sift through it to find out how your business objectives are currently met. Look for bottlenecks and contrasting zones of performance. Understand what people want so you can compare it to what you have, and then bridge the gap. Use best practices as a starting point and incorporate lessons from earlier tests if you have them.

At best, all you will end up with is educated guesses. Rate your best hypotheses by confidence, impact and ease and then tackle the ones with the biggest net scores first.

Once you have picked an AB testing platform that lets you implement changes, you will need to work out how long to run each test for. I like Vlad Malik's calculator for this. You feed it your current conversion rate and monthly traffic and it tells you how big your sample size needs to be and how long the test needs to run.
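If you want to sanity-check what a calculator like that gives you, the arithmetic underneath is roughly the standard sample-size approximation for comparing two proportions (this sketch assumes 95% confidence and 80% power; the baseline rate, target uplift and traffic figures are placeholders):

```python
from math import ceil

def sample_size_per_variant(baseline_rate, relative_uplift,
                            z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant (95% confidence, 80% power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_uplift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

n = sample_size_per_variant(0.02, 0.25)  # 2% baseline, hoping for a 25% lift
monthly_traffic = 10_000                 # placeholder figure
print(n, "visitors per variant")
print(f"roughly {2 * n / monthly_traffic:.1f} months for a two-variant test at this traffic")
```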

If your changes improved conversion, hold onto them. If they didn't, move on to your next test.

Collect data, gain insights, run tests, repeat.


If you’d like to read more posts about conversion optimisation, you can follow me on Twitter @joshpitzalis.

This post belongs to a series. The rest of the posts are listed here.