I’m working on a project at the moment where we’re shifting our analytics infrastructure. This is an opportunity to revisit the fundamentals and clear out any cruft in our tracking plan.
I start with the most important question (what success means to the people who use our product) and then work backwards from there.
For me, this always takes the form [# people] who did [core action] in [timeframe]. Weekly active readers, monthly active publishers, the number of people who found work on the platform in the last 90 days, etc.
At a high level, our north star / focus metric / retention metric (whatever you want to call it) tells us if people are getting value out of our product or not.
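As a sketch of what that shape looks like in practice, here is a minimal way to compute a [# people] / [core action] / [timeframe] metric. The event names and data are hypothetical; in reality this would run against your analytics warehouse, not an in-memory list:

```python
from datetime import datetime, timedelta

# Hypothetical raw events: (user_id, event_name, timestamp)
events = [
    ("u1", "read_article", datetime(2024, 5, 6)),
    ("u2", "read_article", datetime(2024, 5, 7)),
    ("u1", "read_article", datetime(2024, 5, 8)),
    ("u3", "published_post", datetime(2024, 5, 8)),
]

def active_count(events, core_action, since):
    """[# people] who did [core action] since a cutoff date.

    Counts distinct users, so repeat actions by the same
    person don't inflate the number.
    """
    return len({user for user, action, ts in events
                if action == core_action and ts >= since})

week_ago = datetime(2024, 5, 9) - timedelta(days=7)
weekly_active_readers = active_count(events, "read_article", week_ago)
```

The important property is the distinct-user count: the metric answers "how many people got value", not "how much activity happened".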
The next step is to answer five key questions; any more and things get fuzzy.
- How much traffic are we getting at the top of the funnel?
- How well are we converting our traffic?
- How many people are we retaining?
- How engaged are the people who stay?
- How much money are we making?
Each of these questions then breaks into a handful of related ones. For example: where is our traffic coming from? Which source sends the most? Which source converts the best? By the end, I’ll have between 20 and 30 intermediate questions.
The last step is to describe the exact data point I need for each of my questions and map out exactly where an engineer needs to instrument the tracking. I put this all in a Google Sheet, describe each event, and embed screenshots so it’s easy to implement from top to bottom.
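For illustration, each row of that sheet has roughly the following shape. The event names, triggers, and properties below are made up; the point is that every event traces back to one of the intermediate questions:

```python
# A tracking-plan row, mirroring the sheet's columns: the question it
# answers, the event name, when it fires, and the properties it carries.
tracking_plan = [
    {
        "question": "How well are we converting our traffic?",
        "event": "signup_completed",  # hypothetical event name
        "fires_when": "user successfully submits the signup form",
        "properties": {"referrer": "string", "plan": "string"},
    },
    {
        "question": "How engaged are the people who stay?",
        "event": "article_read",  # hypothetical event name
        "fires_when": "user scrolls past 50% of an article",
        "properties": {"article_id": "string", "read_seconds": "number"},
    },
]

# Because every row names its question, it's easy to audit the plan:
# an event with no question attached is noise and gets cut.
orphaned = [row["event"] for row in tracking_plan if not row["question"]]
```

Keeping the question in the same row as the event is what makes the plan top-down: the engineer implementing it can see why each event exists.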
The big mistake here is to start with the event you want to track and then work your way toward some kind of meaning. All you will end up with is a noisy mess. Analytics instrumentation is a top-down process.