Why your experiment's impact is probably greater than you think
When we experiment, we try to address hypotheses about what we believe blocks (or enables) conversion. E.g. if your signup form for a service converts at 65%, one can imagine many blockers for the 35% of users who don’t convert:
- They don’t understand the offer;
- They don’t trust you;
- It’s too long to fill out;
- It requests details they don’t feel comfortable providing;
- They get stuck, unable to answer one of the questions (their answer is an option you hadn’t considered);
- It’s too slow;
- It’s buggy;
- They don’t feel it’s valuable for them to complete this stage;
- …
When we experiment, we pick one such blocker and try to remove it. Let’s imagine we believe (or learned from qualitative research) that some people find the form too long. You experiment with removing a few fields, but find it only lifts the conversion rate by 0.5%. Does that mean form length was only a problem for 0.5% of your users? No.
When multiple blockers overlap, the first few fixes register less impact, with impact increasing as you keep removing blockers, until it plateaus. What happens is that most people are not held back by just one blocker in a process/funnel. They might both feel the form is too long and not trust the process, or not trust the process and not understand what we’re asking for. As a result, when you start fixing the ‘trust’ issues, and haven’t touched anything else yet, you can only move the people whose sole blocker was trust. Even if you “fixed” trust completely, the rest of the population still faces other blockers. When you remove the next blocker, say “understanding”, you reap the rewards of both the trust work done beforehand and the understanding fix itself. The diagram illustrates the dynamic at play here, with hypotheses called “Trust”, “Time” (I don’t have time to fill this out), “Price” and “Understand” (I don’t understand why I need it).
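To make the overlap concrete, here is a minimal simulation sketch in Python. The blocker names mirror the diagram, but the prevalences, the independence assumption between blockers, and the population size are all made-up illustrative numbers, not data from any real funnel. Each non-converting user carries one or more blockers, and a fix only “recovers” users whose every blocker has been removed:

```python
import random

random.seed(0)

# Illustrative assumption: how common each blocker is among non-converters,
# drawn independently. These numbers are invented for the sketch.
BLOCKERS = {"trust": 0.7, "time": 0.6, "price": 0.5, "understand": 0.5}
N = 100_000

def sample_blockers():
    """Draw the set of blockers afflicting one non-converting user."""
    hit = {b for b, p in BLOCKERS.items() if random.random() < p}
    # Non-converters have at least one blocker by definition.
    return hit or {random.choice(list(BLOCKERS))}

users = [sample_blockers() for _ in range(N)]

fixed = set()
recovered_so_far = 0
for fix in ["trust", "understand", "time", "price"]:
    fixed.add(fix)
    # A user converts only once *every* blocker they had has been fixed.
    recovered = sum(1 for u in users if u <= fixed)
    lift = recovered - recovered_so_far
    prevalence = sum(1 for u in users if fix in u) / N
    print(f"fix {fix!r:14} afflicts {prevalence:5.1%} of non-converters, "
          f"but its measured lift is {lift / N:5.1%}")
    recovered_so_far = recovered
```

With these made-up numbers, the first fix registers a small lift even though its blocker is the most prevalent one, and each later fix measures larger because it also releases users who were waiting on the earlier fixes. That is exactly the gap between a blocker’s true prevalence and the lift an experiment can show you.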
You can see many dynamics are possible, including something you thought you’d solved (it stopped being a bottleneck) becoming a bottleneck once more after other things are solved. Users/clients also change preferences, or the audience shifts and we are exposed to people with a different mix of preferences, which shifts the “blockers”. All of this is to say: it’s important to revisit hypotheses you think you’ve solved before, and it’s important to acknowledge that you won’t see all the fruits of every effort immediately. Sometimes a successful ‘clearing’ of the path to user success actually relies on many previous blockers having been removed in ways that were impossible to measure at the time. Essentially, every experiment’s impact is tempered by all the other remaining issues, so you only ever see part of its effect.