I got to week 10 and then I had to pause the program because I got busy. Unfortunately, I never resumed the project. I would like to finish it someday.
The reality is, a lot of the time, most of us haven’t got a clue what we’re doing. A growth mindset lets you collectively acknowledge this elephant in the room and it lets you deal with it in a systematic way that improves your odds with the resources available.
The biggest difference between brand marketing and growth is that growth is driven by experimentation. The idea is that you constantly experiment with different ideas, programs, campaigns, features to continually eke out improvements.
The key is being able to run lots of experiments over time. You’ll probably get most of them wrong. But you only need a few to work each quarter. Moving things by 2 percentage points each time adds up.
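To get a feel for how those small wins stack up, here’s a back-of-the-envelope sketch. It treats each winning experiment as a modest 2% relative lift and assumes roughly one experiment a month wins; both numbers are invented for illustration.

```python
# Back-of-the-envelope: how small experiment wins compound.
# Assumes each win is a 2% relative lift and about one
# experiment a month wins -- both numbers are invented.

baseline = 0.05        # e.g. a 5% signup conversion rate
lift_per_win = 0.02    # each winning experiment moves the metric ~2%
wins_per_year = 12

metric = baseline
for _ in range(wins_per_year):
    metric *= 1 + lift_per_win

print(f"start: {baseline:.1%} -> after a year of small wins: {metric:.1%}")
# start: 5.0% -> after a year of small wins: 6.3%
```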
The lean methodology is all about coming up with hypotheses and figuring out the fastest way to test the best ones. If you’re wrong, you’re never heavily invested in something nobody wants.
With traditional brand marketing you flesh out one idea, end to end, and then put all your chips on it. If you’re right, you win big. If you’re wrong, there’s no second chance because you put all of your chips on one big campaign.
Three types of skills you need to get good at growth
There’s expertise, analytics and strategy. Expertise just means understanding how a marketing channel works and having some experience with it.
The most important ones are SEO, email marketing, ads, content marketing, and social. You can’t work in growth and not know how to do at least one of these things. Expertise is also the least important aspect of growth to focus on because it’s easy to learn.
Next is analytics. You must be able to use data to make better decisions. This one is important. You don’t have to master SQL; you can do everything in Excel. But you need to be able to extract data, gather insights, and analyse your own experiments.
Strategy is about being able to come up with good ideas and figuring out which experiments to prioritise. This one is tricky. You have to actually understand your customers and what they’re trying to do. Working with lots of different teams and stakeholders is also key here.
Generally speaking, a good way to think about this is to get really good at one of these skills in the long run, but maintain a baseline in all of them because you can’t do growth work without all three.
A growth model is an answer to the question of how your product grows. You should be able to answer four basic questions: How do you find new users? How do you plan to keep them? How will you make money? How are you going to defend against the competition?
The most common framework for tracking how well you’re performing on each of these questions is Dave McClure’s Pirate Metrics (AARRR). At the moment, I’m focusing on one metric for traffic, one for conversion, one for weekly active usage (weekly active listeners/drivers/readers, etc.), one for retention, and one for the 💰.
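In practice, picking those top-level metrics can be as simple as writing down one number to watch per funnel stage. A minimal sketch; the specific metric choices below are illustrative placeholders, not a prescription:

```python
# One top-level metric per funnel stage, loosely following the
# AARRR stages. The metric choices below are placeholders --
# pick whatever best represents each stage for your product.
funnel_metrics = {
    "traffic":    "weekly unique visitors",
    "conversion": "visitor-to-signup rate",
    "engagement": "weekly active users",
    "retention":  "% of users still active 4 weeks after signup",
    "revenue":    "monthly recurring revenue",
}

for stage, metric in funnel_metrics.items():
    print(f"{stage:>10}: {metric}")
```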
Every project is unique but the top-level metrics you pick are just a high-level view of your entire funnel. Each metric represents an opportunity to grow your business in a different way. First, figure out what your metrics are. Then walk through the user journey and understand what it all looks like from their perspective.
The best way that I’ve found to map the customer journey is to start at the end and work backwards. What is the ideal place you want customers to end up with your product? Define that and then work backwards, step by step to the very first point of contact.
If you have different types of users, or different end goals then map a different journey for each one. Understanding how all the pieces fit together is an important exercise to do up front but it’s also useful to revisit once a quarter. You never want to lose sight of what everything looks like from the customer’s point of view. The bigger the team, the easier it is to lose track of this.
Once you’ve established your top-level metrics and you’ve mapped the customer journey then you can begin quarterly planning.
Most teams set goals on a quarterly basis. 12 weeks is long enough to do something useful but short enough that the end is always in sight.
The first step to coming up with a good 12-week growth plan is to understand the biggest areas of opportunity and the biggest pain points your customers have. It’s not about the features you build, it’s all down to the problems you solve for people.
I always start with the data. Look at your funnel, explore the data, and identify the biggest areas of opportunity. I’d argue that you shouldn’t start a growth team until you have at least a year of data. You need a baseline understanding of what’s working and what’s broken.
Where are the most people falling out of your funnel? Are people visiting your website but not converting? Are lots of people converting but few coming back for a second purchase?
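A minimal sketch of the funnel arithmetic behind those questions; the stages and counts are invented for illustration:

```python
# Find the leakiest step: for each adjacent pair of funnel stages,
# compute the fraction of people who make it through.
# The stages and counts below are invented.
funnel = [
    ("visited site",    40_000),
    ("signed up",        4_800),
    ("first purchase",   1_900),
    ("second purchase",    310),
]

worst = None
for (stage_a, n_a), (stage_b, n_b) in zip(funnel, funnel[1:]):
    rate = n_b / n_a
    print(f"{stage_a} -> {stage_b}: {rate:.1%}")
    if worst is None or rate < worst[1]:
        worst = (f"{stage_a} -> {stage_b}", rate)

print(f"biggest drop-off: {worst[0]} ({worst[1]:.1%} make it through)")
```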
You need to be able to identify the biggest area of opportunity so you can prioritise things that have the most impact first. Guessing is not the best way to do this.
Once you’ve narrowed in on the bit of the funnel that needs the most love, the idea is to do everything you can to really fine-tune that part of the machine for 12 weeks.
Generally speaking, the sequence for growth planning is to optimise what you have first. Squeeze every ounce out of what’s working before you reach for your next big idea. You improve a metric little by little, 1% here, 1% there, week over week.
By the end of the quarter, you’ve significantly improved a certain experience or metric. Once you start feeling like you’re hitting a ceiling, you either look for a radical new way to improve things, or you move on to the next metric and focus on a different area of growth.
Once you’ve picked an area of opportunity the next step is to get as many ideas on paper as possible. Big ideas, broad ideas, super-specific ones, silly ones, bold new ones, bat-shit-crazy ones – within the context of the problem you’re focusing on, anything goes.
Get as many different teams and perspectives in the room as practical. At the very least you want to make sure you have someone from the user research team and at least one customer success representative.
Trace the user journey together and flag any rough edges. Use best practices and research what other companies are doing as a source of inspiration. If you can, speak to other people in the industry and ask them what they did about your focus problem.
Rather than thinking about increasing value for your business, think about how you can improve things for your customers. The worst way to come up with improvements is to focus on the metrics.
People don’t care about your product; understanding and focusing on their problems is a far more effective way to improve your performance. Improve their experience and the byproduct is a bump in your metrics.
No matter how random the ideas are, there needs to be a rationale: “If we do x, we’ll see an improvement in y metric because of z reason.” The more rooted it is in data or research, the better. Structuring ideas like this sets you up to turn them into tangible experiments.
Figuring out what to work on next is a complicated, multi-stakeholder problem that never goes away.
Look online for how to prioritise stuff and you will run into frameworks like RICE, ICE and PIE. The problem with these frameworks is that everything ends up being rated a medium. The great ideas were obvious to begin with, and everything else ends up in the messy middle.
Stack ranking forces you to rank things from best to worst. You can never have two medium ideas; one idea will always get ranked above the other.
You can also have as many dimensions as you want.
I typically start with:
The number of people that will be affected
How often the problem comes up
How severe the problem is
How much value the idea adds to our primary value proposition
How often people will get value from the idea
How long it will take to build
How much evidence we have that it will work
Pick 3 to start with; more than 7 and you’re getting into the weeds.
What’s cool about this is that you are only ever comparing options that are in front of you. You’re not asking if something is a good idea; you’re just looking at whether it’s better or worse than the other options on the table.
You add all the scores up and the ones with the lowest totals are the best. So idea 3 would be the winner in the example below.
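Here’s a minimal sketch of that tallying; the ideas, dimensions and ranks are all made up:

```python
# Stack ranking: every idea gets a rank per dimension (1 = best,
# no ties allowed within a dimension). Lowest total wins.
# Ideas, dimensions and ranks below are all made up.
ranks = {
    #          reach  severity  effort
    "idea 1": [3,     2,        3],
    "idea 2": [2,     3,        1],
    "idea 3": [1,     1,        2],
}

totals = {idea: sum(r) for idea, r in ranks.items()}
for idea, total in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{idea}: {total}")
# idea 3 wins with the lowest total (4)
```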
The system is far from perfect, but it does tell you why the best idea won. This is important in a team. If I pitch an idea that doesn’t get picked up, I want to know why. The stack shows me what dimensions it did well on and which ideas beat it in other areas. It helps me understand the tradeoffs the decision-maker had to make.
Even if a HiPPO (the highest paid person’s opinion) rules out an idea, having a stack makes it easier to ask why. When the reason is a dimension that wasn’t on the board, you can add it to the board, building up a clearer shared set of criteria for ranking things in your organisation over time.
Stack ranking is not a silver bullet, but it helps. The bottom line with prioritisation is finding the highest return on investment: you want to work on the ideas that have the highest impact with the least amount of effort first.
The first step in the experimentation process is designing your experiment. A good experiment has an independent variable, a dependent variable, and it’s based on a clear assumption: “If we do x, we’ll see an improvement in y metric because of z reason.”
Let’s say we want to improve retention for a food delivery app. You notice that people who order with a coupon have lower retention than average. We’re training people not to pay full price, so the rationale is they’re less likely to come back for a second full-price meal.
You set up the experiment as an A/B test. You want to start by testing the most basic version of it. Your dependent variable is the % of users who buy a second full-price meal after using a coupon. The independent variable is whether or not we send them an email.
The next step is to implement and then ship the experiment so you can measure the results. This post is about the mindset around doing this stuff so I’m going to skip over implementation.
We get the results and we see that email has no measurable impact on getting people to come back for a second meal. The experiment did not succeed but now you know that email is not the best approach with this audience.
On the other hand, let’s say 5% more people buy a second meal because of the email. Now you can test on top of this: you try to come up with a better version of the test. You start testing different value propositions in the email content.
The next hypothesis could be that talking about price is more effective than talking about convenience when it comes to ordering a second meal. You A/B test them and whichever one wins is the one you move forward with.
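Whichever variant wins should win by more than noise. Here’s a minimal sketch of one standard check, a two-proportion z-test, using only the standard library; all counts below are invented:

```python
import math

# Two-proportion z-test: did the email group come back for a
# second full-price meal more often than the control group?
# All counts below are invented for illustration.
control_buyers, control_n = 230, 5_000   # no email
email_buyers, email_n = 285, 5_000       # got the email

p1, p2 = control_buyers / control_n, email_buyers / email_n
pooled = (control_buyers + email_buyers) / (control_n + email_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / email_n))
z = (p2 - p1) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided

print(f"control: {p1:.1%}, email: {p2:.1%}, z = {z:.2f}, p = {p_value:.3f}")
# control: 4.6%, email: 5.7%, z = 2.49, p = 0.013
```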
You keep repeating that process and stack wins on top of each other until you hit diminishing returns. That’s it. You build, then measure, then you repeat the whole cycle with what you learn. And you keep going through experimentation cycles as quickly as possible.
I got accepted to CXL Institute’s growth marketing mini-degree scholarship. The program runs online and covers 112 hours of content over 12 weeks. As part of the scholarship, I have to write an essay about what I learn each week.
SEO stands for search engine optimization. In the context of improving search results for web apps, SEO is about developing a better user experience and providing useful, compelling content. The industry may not be perfect, but this post will show what you can do to make it better.
Search engine optimization began when a group of geeks figured out how search engines work and tried to reverse engineer the technology. As a result, the industry has a bad reputation. Too many people think the best way to market their website is to game search engines.
There is also a good side to SEO. Much of what search engines do nowadays wouldn’t be possible without it. A good search engine optimizer will focus on improving the end-user experience: making sure a site loads as fast as possible, fixing broken links, and making sure people can find the website with the keywords they actually use when they search for it.
There are two pillars to search engine optimization. The first is off-page optimization. This is about getting people to link to your website. The more reputable links you get, the higher your website will rank. There is no playbook for getting respected websites to link to you. It’s all down to how creative you can get and how connected you are.
The second pillar is on-page optimization. This is the technical pillar, and the rest of this post focuses on technical SEO in the context of web apps.
A high converting landing page helps people understand what your product does and why someone should care about it. People of the internet have been building landing pages for a while now and have established a pattern that works. Don’t deviate from this pattern unless you have a good reason. Save the fancy stuff for the rest of your marketing efforts.
I cover what questions to ask, how to do the interviews, how to find the right people to speak to, how to organise the data you end up with afterwards, and how the whole process connects to the actual product decisions and changes you end up making.
I’ve found customer research helps me improve a product in three ways: it allows me to say no to stuff, it helps me map out the problem space, and it defines useful criteria for a solution.
Saying no to stuff
It’s easy to get lost when you’re building features. Grounding yourself in your user’s perspective can let you know when you’re barking up the wrong tree.
For example, at Chirr App we let you compose Twitter threads. We were considering modifying Twitter’s tweet box with our Chrome extension so that you could write threads inside the Twitter interface. After speaking to people and listening to how they use our product, it became clear that people value creating content without the distraction of Twitter.
Modifying the native tweet box was easy to get excited about. It sounded like a wicked idea. But if we had invested a chunk of time into making it happen, we would have shown all our lovely users that we’re completely disconnected from how they use our product.
Mapping out the problem space
I also use customer research to map out the problem space. It’s easy to focus on solutions; people in product teams instinctively do too much of this. Sometimes you need to be able to put all your solutions to one side and ask what problems people care about.
When I process discovery interviews I have four columns: insights, opportunities, verbs and one for my primary research question. Verbs are the things people talk about doing in a product. If clusters or themes begin to appear, I pull them out into their own column.
For example, if 7 of the highlights in the verbs column are about ‘editing’ then I will make a new ‘editing’ column and move everything over. Then I rename the tags to describe what it is about the editing experience they’re highlighting (it’s slow, there’s no redo button, it needs autosave, wonky placement, etc.).
A blurry example from a real project so you can see how transcript highlights get organised into columns.
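Mechanically, the column organisation is just grouping highlights by tag and promoting any tag that crosses a threshold into its own column. A rough sketch, with invented highlight data and the threshold of 7 from the example above:

```python
from collections import defaultdict

# Group verb-tagged highlights, then promote any tag with enough
# highlights into its own column. The highlight data is invented;
# the threshold of 7 mirrors the example above.
highlights = [
    ("editing", "undo deleted my whole draft"),
    ("editing", "wish it autosaved"),
    ("reading", "the text is too small"),
    ("editing", "toolbar placement is wonky"),
    # ... more tagged highlights from transcripts
]

columns = defaultdict(list)
for tag, note in highlights:
    columns[tag].append(note)

PROMOTE_AT = 7
promoted = [tag for tag, notes in columns.items() if len(notes) >= PROMOTE_AT]
for tag, notes in columns.items():
    print(f"{tag}: {len(notes)} highlight(s)")
```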
This means there’s always a place I can turn to when I want to know what customers care about. Instead of worrying about whether we’re going to build this feature or that one, you can look at the problem space and see that people think of the product in terms of searching, reading, sending and highlighting. This perspective lets you have a conversation about which one of these experiences you want to improve next. Mapping the problem space makes it easier to think of product improvements in terms of outcomes to the end user experience.
This isn’t some kind of formula. There are always business constraints and realities you have to work within. Having the problem space mapped out and being able to talk about it makes it easier to balance a business-needs-only approach with the stuff your users care about.
Useful criteria for a solution
Let’s say we’ve decided that the search experience makes the most sense to focus on next. Now we have a list of all the points of friction in the current search experience. Each of the highlights in the “Search” column is a rough edge a real person brought up in conversation with you.
When you don’t understand the forces acting on a problem then you inevitably end up casting the net wide. The idea is that if you cover enough territory you’re bound to solve the problem. A wide net means building more stuff, which means more moving parts, and that means more things to maintain. You know what I’m talking about.
Understanding all the places people butt up against the current experience means I can reference them when I sit down to work on a solution. I’ll add all the feedback I can get from the customer support team, mix it with any business requirements, and I end up with a diagram of all the forces that I know are influencing the design space.
To make this less abstract, here is an example from an old project where I was improving the search experience. Some of the nodes here come from customer feedback, some are direct quotes from interviews, and other bits are business requirements. All these forces influence what will ultimately determine a good solution.
Once I have all my bits, the next step is to figure out which ones to focus on first. Jason Fried has a lovely essay where he talks about the obvious, the easy and the possible. I have yet to discover a better way of thinking about the tensions at play here.
You can’t make everything obvious. The more things that are obvious, the less obvious each one becomes. As a general rule, the more often something happens, the more obvious it should be. Stuff that just needs to be possible can be tucked away. Work on figuring out where the easy stuff goes once you’ve dealt with the obvious stuff.
This is me halfway through a design solution that has addressed the most important bits.
A solution won’t always address all the forces. It’s good to be aware of which elements are not addressed. Sometimes they don’t need to be, other times you can pick them up in a separate sprint.
I decided not to address two of the known forces in the final solution I came up with.
Without understanding these forces, you have no criteria by which to judge the success of a solution. Listening to people and documenting what they care about lets you reference the important stuff when you need it. Rich context like this is the antidote to a messy scope. You can stop speculating about X, Y and Z because you know exactly what to include and what you can safely omit.
That’s it, that’s everything I have to share on my journey so far.
Customer research helps you improve products by helping you say no, mapping out the problem space so you can direct your attention towards meaningful outcomes, and by defining clear criteria for solutions.
If you’ve been speaking to people but haven’t had a way to connect the research to the product roadmap, hopefully some of this will help you think about ways to organise your research so that you can rely on it to make better product decisions.
When I first read The Mom Test and started doing customer interviews, it became clear that I’d be accumulating lots of notes and recordings of interviews I didn’t know what to do with.
There were two of us doing interviews at the time. We would do a bunch of interviews and then share takeaways with each other. Whoever was doing the interview was still a massive bottleneck to the actual insights.
We tried recording sessions when we got permission, but going through every recording took too long and was unsustainable. I’m going to share the process I’ve settled on since.
I’ve come to rely on a tool called Dovetail. I have no affiliation; I just love their product. You can probably use a free Kanban board to replicate most of this, but Dovetail makes the whole process a delight.
The raw input here is a transcript of your interview (or notes as a fallback). The idea is to go through the interview, line by line, and highlight points of note. I don’t think there is a correct way to do this, but my points of note are insights, opportunities and verbs.
A blurry example from a real project so you can see how I highlight and tag points of note.
An insight is anything that resonates with you (explicit or implied). How do you know what’s relevant when you’re not sure what you’re trying to figure out? Close your eyes at the end of an interview; the two or three bits that stick out most vividly are your key insights.
An opportunity is more practical. Julia wants a way to listen to the audio at double speed. Listening to what people ask for is not the same as building everything they want. Make a note of what people ask for so that you can start to see patterns in the underlying problems.
Then there’s verbs. If someone talks about the highlighter feature acting wonky, tag that under ‘highlighting’. If people bring up the text being too small, tag that under ‘reading’. When people talk about your product, capture the action at the centre of the conversation.
The goal here is to cluster all your notes around the verbs your users use to think about your product experience. I think of our product in terms of features A, B, C and D. They’re great features, but people only think about the product in terms of reading, writing and highlighting. Sometimes there’s alignment here; most of the time there isn’t. Their framing is the one that matters.
Insights, opportunities and verbs. That’s how I organise interview data. There’s going to be a lot of overlap, but relying on verbs alone doesn’t let you capture general insights and opportunities. Dovetail lets you tag the same thing in multiple ways, so that’s not a problem.
To keep track of all this you can organise everything in columns. I start with 4: Insights, opportunities, verbs and one for my research question. When clusters begin to appear I pull them into their own column. So I start with 4 and then let the rest form organically.
A blurry example from a real project so you can see how highlights eventually get organised into columns.
For example, if 7 of the verb highlights are about editing then I will make a new ‘editing’ column and move everything over. Then I rename the tags to describe what it is about the editing experience they’re highlighting (slow, no-redo button, autosave, placement, etc).
This is fundamentally a qualitative database: a place you can turn to when you want to know the customer’s perspective. Organising by verbs means you know how people group the experience in their heads. Now you also know what most people care about when it comes to ‘drafting’.
The process scales well to small teams. Double entry works best in groups: a transcript gets analysed by one person, then reviewed by another before it’s ‘done’. It helps everyone stay on the same page (and minimises bias). Double entry is a luxury few teams can afford, though.
I’ve also learned that exposing stakeholders to 2 hours of raw research every 6 weeks is key. If your interviews are 20-30 minutes long, shortlist 4-5 for people to watch every 6 weeks. I didn’t pull that number out of a hat; I learned it from Jared Spool. It works.
Customer discovery and doing user interviews is about grounding everyone’s decision-making process in your customer’s perspective. A minimum of 2 hours inside your users’ heads every 6 weeks makes collectively judging whether stuff will be useful much easier.
Being able to recall actual conversations when you’re making important decisions means you never have to rely on bullshit personas ever again.
You don’t need to speak to lots of people, nor do you need to speak to them all at once. Two or three people a week is more than enough to start with.

Recent customer support wins are always a good place to start. You don’t need a complicated reason to reach out to people who have just had a great experience with customer support. Explain that you want to improve the product and you’d like to better understand how they use it. Clarifying that it will be a short call always helps. In addition to following up on past interactions, you can begin closing out successful support interactions by asking if people would be open to scheduling a quick conversation with the product team. The success rate on converting support calls is usually pretty high.

The problem is that you don’t control who gets in touch or how often. Eventually, you will need to be able to pick who you talk to. When I don’t have a specific question and I’m just listening for opportunities to improve the product, I look at last month’s activity and plot the number of times each person performed our core action. You’ll end up with a bar chart of how many people performed how many actions.
Those who did it once or twice will be on one end; your superusers live on the other. Filter out anyone who signed up less than a month ago and then reach out to 10 or 20 people in the top and bottom 5% (there’s a sketch of this selection at the end of this section). What I’m trying to understand is the difference in the way people on either end of this spectrum think about and use the product. Speaking to 5-6 people from each group is usually enough to get a sense of the key points of contrast across the spectrum of usage on your product.

One final approach I’ve had success with is in-product surveys: NPS scores and those little satisfaction ratings that show up in the corner of people’s screens. You can end a quick survey like this with a request to schedule a call.

If you have any other approaches to recruitment that have worked for you, please let me know. I’ve always found getting people to sign up to be the hardest part of doing customer interviews.
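Here’s the promised sketch of that usage-spectrum selection, assuming you can export one row of user, action count and signup date per user; all data below is invented:

```python
from datetime import date, timedelta

# Pick interview candidates from both ends of the usage spectrum:
# drop anyone who signed up in the last month, sort the rest by
# how often they performed the core action, and take the top and
# bottom 5%. All user data below is invented.
today = date.today()
users = [
    # (user_id, core_actions_last_month, signup_date)
    ("u01", 2, today - timedelta(days=200)),
    ("u02", 148, today - timedelta(days=400)),
    ("u03", 57, today - timedelta(days=10)),  # too new, filtered out
    # ... the rest of your user base
]

cutoff = today - timedelta(days=30)
eligible = sorted((u for u in users if u[2] <= cutoff), key=lambda u: u[1])

k = max(1, len(eligible) * 5 // 100)  # 5% of eligible users
light_users = eligible[:k]    # bottom 5%: barely use the product
superusers = eligible[-k:]    # top 5%: your heaviest users
```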
I can go long stretches without doing interviews. When I jump back into them I’m always a little rusty. I have a little game I play that helps me get into the groove.
I score each conversation with a simple point system: two minus points and two plus points. Easy to remember while I’m listening. Then I score the recording afterwards to see if I’m improving between interviews.
-1 You pitch, you lose. If you try and push a feature or start talking about your product then the conversation becomes a sales call and stops being an interview. An interview is about the customer, not your product. The moment you start pitching a feature, people will gravitate towards telling you what you want to hear. If you have nothing to sell, people don’t know what you want to hear, so they can’t lie to you.
-1 Don’t interrupt people. You can’t interrupt people. Ever. This is where I always lose the most points. When someone stops talking, the best thing to do is count to five in your head. I’ve never made it past 3. The idea is to create a mildly uncomfortable vacuum that elicits valuable follow-up information. Conversely, when people are talking and you have an important question, make a note of it so that you remember to come back to it later.
+1 Talk specifics. Hypotheticals are toxic shiny objects. They sound great and mean nothing. People are terrible at predicting their own behaviour, so you can only talk about specifics that have happened in the past. It’s much harder for people to lie about specifics. When you start talking about what someone might do, people want to tell you the *correct* answer, regardless of how true it is. It’s not that people want to lie to you; it’s just what we do in polite conversation. It’s the path of least resistance. Every time you shut down a vague, hypothetical statement and redirect it to something specific in the past, you get a point.
+1 Summarise and then ask. When you can’t interrupt people, it can be hard to get a wandering conversation back on track. One way to do this is to summarise the important bits of what people said when they stop talking. This re-aligns the conversation to what’s important to you. It also helps them reflect on what they said and clarifies any misunderstandings. Every time you summarise what someone says before proceeding, you get a point.

That’s it. Much bullshit can be avoided by not having anything to sell and only talking specifics.