Reasons why we think the problem exists
Let’s say we’ve done our homework and we have a clear problem we want to focus on in our product. What does the process of designing a solution look like?
We have product data to suggest that the scale of the issue is worth paying attention to. There are a few bugs connected to the issue, and several customers have asked for features that we managed to trace back to the same underlying problem. We’ve compared this problem to all the other problems we have to deal with and collectively decided that it makes sense to work on it now. It’s happening.
The knee-jerk reaction here is to jump straight to a solution.
“I’m telling you, we should build reminders into the app.”
The problem is that everyone has their own version of the best solution, and picking a winner can be a bit of a minefield. Egos are easily bruised, people tend to fly off on tangents, and everyone gradually starts regretting having to come to these product meetings.
Another approach is to establish why we think the problem is happening first.
Say that the problem we’re dealing with is a massive drop-off on step 4 of the onboarding funnel. The goal is to get as many explanations for the problem as possible on the board.
- The button might be broken on some browsers, have we checked?
- Maybe the value of proceeding is not clear at this step.
- Maybe there are too many options at this juncture and it’s confusing.
- Maybe there are just too many steps.
- Maybe we just don’t come off as a legit operation at step 4 and people change their minds.
- Maybe our marketing language and our in-product messaging are really different and people feel like they’re in the wrong place.
Most of our solutions won’t work, and most of the reasons why we think the problem exists will be wrong. That’s just the nature of the beast. Rather than relying on an oracle for the solution, a more sustainable approach is to put the legwork in and test out as many different reasons that a problem might exist as quickly as possible.
Most people like to jump straight to solutions and start talking about how to fix things. I think it’s really important to establish all the reasons why we think something is broken before we start discussing how to fix it. Everyone has ideas on why things are broken. Big, small, vague, specific, simple, complicated: anything goes, and should be actively encouraged, at this stage.
When someone throws out a solution (and they will), ask them why they think it’s a good idea. Find the assumption about why the problem exists that’s embedded in the solution. For example: we’re all focused on fixing step 4, everyone’s thinking about reasons why people might be dropping off, and then someone inevitably suggests building reminders into the app.
“Why do you think reminders would be a good solution here?”
“Oh, because people are busy. They sign up for 50 apps a month and sometimes it’s so easy to forget why you downloaded something.”
“So is the problem app-saturation, or are you saying the purpose of our product could be clearer?”
“Both, I guess?”
Write both reasons down.
Adding reminders might be one way to solve these problems. Maybe there’s a much simpler way to solve them. We can figure that out together if we all decide that app-saturation is the most likely explanation for the drop-off.
The reason this step is so important is that when a solution works, a clear understanding of what the problem was allows you to double down and make the solution even better.
For example, if we decide to fix the drop-off at step 4 by hiring a copywriter to improve the writing on the page and it works, we can’t really take it much further after that. If we want to eke out another couple of percentage points, do we just ask the copywriter to rewrite the copy again?
On the other hand, if we think the problem was that people just didn’t understand how the product works, and a copywriter helped clarify that, then we can double down on this: put a video explainer together, add testimonials from people talking about how they figured out what the product does, more images with different use cases, and so on.
Once we have all the reasons we think something is broken it’s much easier to establish which ones we think are most likely the cause. Once we collectively shortlist our top 2-3 explanations for the problem, we can proceed to a meaningful discussion about the best way to fix the problem. Now the conversation is scoped to a specific problem and there is agreement on why we think the problem exists.
Picking the best solution depends entirely on the resources we can spare. There is never a “correct” solution; there is only our best attempt to solve what we can with the resources and information we have.
As a general rule of thumb, we’re aiming for ROI: the simplest version of a solution that we are most confident will have an impact on the problem. There’s a concept in medicine called the minimum effective dose. Basically, don’t take 800 mg of a medicine if 250 mg will work. The same goes for product work: do the minimum amount of work needed to get the insight you’re looking for. Resources are always scarce, so keeping bets small lets us get more shots in the same amount of time and energy.
If we managed to team-source 15 different reasons for the massive drop-off at step 4, we figure out the fastest way to test the top 2-3. If one of them moves the needle, we can safely invest a chunk of time into a better solution now that we understand what actually matters. If we’re wrong, we haven’t sunk months of work into something nobody wants, and we can move on to exploring the other reasons we initially set aside.
When our first solution doesn’t work, and it usually doesn’t, we don’t have to start from scratch. We just return to the list of reasons why we think the problem exists, cross off the 2-3 we tested that didn’t pan out, and start thinking of simple ways to test the next most likely ones.