Are you building the right product? It’s an important question whether you’re a startup or a big company. Good research can help guide you; do it incorrectly and you’ll go down the wrong path.
There are two basic types of research: qualitative and quantitative. Qualitative research generally involves asking people what they want or about their experiences with existing products. Quantitative research uses hard numbers from your users.
Quantitative research can help you answer questions like “What features do I need to add to my product?” “What features can I remove from my product?” “How is my user base generating revenue?” “Where is there fraud and abuse?” (There is some overlap; I’ll do a separate post on qualitative research.)
Some caveats to look out for when doing quantitative analysis:
- Data talk, but people hear them in different ways. Given the same set of facts, people can come to multiple interpretations.
- The interpretation of some metrics can and should change over time; at the very least, the rate of growth will change as a product matures.
- A single metric can easily be gamed, whether by accident or by design (think conning investors).
Here are some of my favorite techniques in quantitative analysis. This is by no means a complete list.
A/B testing is commonly used to test different messages or designs: two variants (A and B) are presented to different users. Marketing emails are a common example. Take a small portion of your subscriber list, send one subject line to half of it and another subject line to the other half, then use the open-rate data to send the better-performing subject line to the rest of your list. There can also be more than two variants: A/B/C testing.
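Before rolling the winner out to the rest of the list, it’s worth checking that the difference in open rates isn’t just noise. A minimal sketch using a standard two-proportion z-test (the function name and the open-rate numbers are hypothetical):

```python
import math

def ab_open_rate_test(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test on email open rates for variants A and B.

    Returns (rate_a, rate_b, z); |z| > 1.96 suggests the difference
    is significant at roughly the 95% level.
    """
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled open rate under the null hypothesis of no real difference
    p = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p * (1 - p) * (1 / sent_a + 1 / sent_b))
    return p_a, p_b, (p_a - p_b) / se

# Hypothetical results: subject line A opened 220/1000, B opened 180/1000
rate_a, rate_b, z = ab_open_rate_test(220, 1000, 180, 1000)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}")
```

With samples this small, a 22% vs. 18% split clears the significance bar only barely; a 51% vs. 49% split would not.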
1% testing is a variant of A/B testing. It’s commonly used to test new features, especially in complicated products or in products so well established that you don’t want to change the experience overnight.
Take Facebook’s News Feed, a product used by billions of people around the world. Adding a new feature without testing can cause a lot of grief and negative feedback. Before you roll a feature out widely, you present it to a tiny percentage of users and track how it performs. Do people use it? How often? Does it add to or subtract from other features people use? (I call it 1% testing, but in Facebook’s case, it might be 0.001% testing.)
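The usual way to pick the 1% is to hash each user ID into a bucket, so the same user always gets the same answer and their experience doesn’t flip between visits. A minimal sketch (the feature name and percentages are hypothetical):

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically assign a user to a feature-rollout bucket.

    Hashing (feature, user_id) gives the same answer on every request,
    and different features get independent buckets.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a number in [0, 1]
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return bucket < percent / 100

# Roll a hypothetical "new_feed" feature out to 1% of users
exposed = sum(in_rollout(str(uid), "new_feed", 1.0) for uid in range(100_000))
print(f"{exposed} of 100,000 users see the feature")  # roughly 1,000
```

Raising `percent` later only adds users to the exposed group; nobody who already had the feature loses it.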
One of the challenges with data is that averages can mask important differences, so dig into the data to identify segments worth going after. If you’re running a credit card business and find that 15% of your overall spend is travel, that tells you one thing. But look deeper and you might find a group of customers who each spend $50,000 a year on travel. That might lead you to create products for that lucrative segment.
You can also use data to figure out who your profitable and unprofitable customers are. In many products, you’ll find that some customers are unprofitable; in e-commerce, for example, they could be making too many returns.
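The segmentation idea above can be sketched in a few lines. The cardholder records here are hypothetical, as is the $50,000 threshold for the heavy-travel segment:

```python
# Hypothetical cardholders: (customer_id, annual_travel_spend, total_spend)
customers = [
    ("c1", 52_000, 180_000),
    ("c2", 1_200, 40_000),
    ("c3", 65_000, 210_000),
    ("c4", 300, 15_000),
    ("c5", 2_500, 60_000),
]

# The blended average hides the high-travel segment
avg_travel_share = sum(t for _, t, _ in customers) / sum(s for _, _, s in customers)
print(f"Overall travel share: {avg_travel_share:.0%}")

# Segment out the heavy travel spenders worth building products for
heavy_travelers = [cid for cid, travel, _ in customers if travel >= 50_000]
print("Heavy travelers:", heavy_travelers)  # ['c1', 'c3']
```

The overall travel share comes out around 24%, which says nothing about the two customers spending $50,000+ a year; the segment only appears once you slice the data.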
Detecting fraud (illegal behavior) and abuse (legal behavior but not within your business model) is a great way to use data.
I worked for a company that sold long distance calls. When we looked at the usage data, we found an unusually large volume of calls to Tuvalu. Given that it’s a tiny nation, this didn’t make sense. A closer look found an error in our rate tables: we were selling for 10 cents something that cost us $2.00. As you’d expect, people from Tuvalu told each other about it, and we became their calling service of choice. (Some of the details here have been changed.)
Another use case is finding the outliers in all-you-can-eat plans. Think about cell phone data plans: some customers might use 1 GB of data and others 100 GB, while your business model and network capacity are based on an average usage of 5 GB. The 100 GB user hogs capacity and slows things down for everyone else. With data, you can develop new policies, like the asterisk that says data rates will be slowed after 25 GB of use.
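Both checks above amount to simple scans over the data. A sketch with hypothetical rate-table and usage numbers (the Tuvalu figures echo the story; everything else is made up):

```python
# Hypothetical rate table: destination -> (price_per_minute, cost_per_minute)
rates = {
    "UK": (0.05, 0.02),
    "India": (0.08, 0.04),
    "Tuvalu": (0.10, 2.00),  # the rate-table error: sold far below cost
}

# Flag any destination where the price is below our cost
losing_routes = [dest for dest, (price, cost) in rates.items() if price < cost]
print("Selling below cost:", losing_routes)  # ['Tuvalu']

# AYCE outliers: flag users far above the usage the plan was priced for
usage_gb = {"u1": 3, "u2": 5, "u3": 4, "u4": 100, "u5": 6}
throttle_after_gb = 25
to_throttle = [user for user, gb in usage_gb.items() if gb > throttle_after_gb]
print("Throttle candidates:", to_throttle)  # ['u4']
```

In practice these would be scheduled queries against a billing or usage database, but the logic is the same: define what normal looks like, then surface whatever violates it.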
Looking at what people search for but you don’t deliver is important for product planning and improving the experience. After all, they came to you for it.
Let’s say you run a ride-hailing app. When someone launches the app, they might be in an area where you don’t offer service. Tracking those requests gives you insight into markets you might want to consider when developing expansion plans.
It’s also a way to improve the product by suggesting alternatives the user might want. If someone in New York City searches for “In-N-Out,” you might respond, “There are no In-N-Out Burgers in New York City, but here are some McDonald’s.” Just kidding. I’d probably return Shake Shack, but In-N-Out is so much better.
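Tracking unfulfilled requests can start as a simple tally. A sketch with a hypothetical request log; the city names and the `had_service` flag are made up:

```python
from collections import Counter

# Hypothetical request log: (city, had_service)
requests = [
    ("Austin", True), ("Boise", False), ("Boise", False),
    ("Austin", True), ("Tulsa", False), ("Boise", False),
]

# Tally app launches from cities where we don't yet operate
unserved = Counter(city for city, served in requests if not served)
for city, count in unserved.most_common():
    print(f"{city}: {count} unfulfilled requests")
```

Sorting by count turns raw logs into a ranked list of expansion candidates: the cities where demand already exists before you’ve spent anything to create it.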
The key to a successful business is that lifetime value is greater than customer acquisition cost (often written as LTV > CAC). You want to make sure that, on average, you make more from customers over their lifetime than it costs to acquire them.
Look at the customers who signed up for your service a year ago: how much did they spend, and when? For customers signing up today, can you use that historical data to model what they’re likely to do?
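The cohort approach above can be sketched with a running total: sum last year’s per-customer revenue by month since signup and see when it crosses CAC. All the numbers here are hypothetical:

```python
# Hypothetical cohort data: average revenue per customer in each month
# since signup, taken from the customers who joined a year ago
monthly_revenue = [20, 18, 15, 14, 13, 12, 11, 11, 10, 10, 9, 9]
cac = 90  # hypothetical cost to acquire one customer

# First-year LTV, plus the payback month: when a customer repays their CAC
ltv = sum(monthly_revenue)
running = 0
payback_month = None
for month, revenue in enumerate(monthly_revenue, start=1):
    running += revenue
    if payback_month is None and running >= cac:
        payback_month = month

print(f"12-month LTV: ${ltv}, CAC: ${cac}, payback in month {payback_month}")
print("LTV > CAC:", ltv > cac)
```

In this made-up cohort, a customer repays their acquisition cost in month six and is worth $152 over the first year, so LTV > CAC holds, provided new signups behave like last year’s.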
When looking at data, you also have to weigh the cost of the analysis against the value of the data. If you’re using data to analyze how people navigate through your site, it may be sufficient just to track data on a small subset of users. Adding too much tracking can add to latency in your site or app.
Also, if you’re trying to decide whether or not to implement something that will take 2 days, it doesn’t make sense to spend 2 weeks to build a system to get the data.
There are a variety of tools you can use for quantitative analysis, depending on what you’re trying to get at. Marketing tools like HubSpot handle A/B testing for email campaigns. Google and Facebook have their own tools for ad performance analytics. Google Analytics and Adobe Analytics allow you to analyze user behavior. For complex feature-level data, you will likely have to create your own database and run SQL queries against it.
COVID-19 caveat: for most businesses, I don’t recommend doing quantitative research based on data from March 2020 onward, unless you’re using it to measure the impact of COVID-19. If you project based on data from March 2020 on, you’re likely to over- or underestimate behavior post-COVID-19.
Grammar trivia: Data is plural, not singular. (Plural of datum.) It’s one of those weird English things that doesn’t seem right, but is. Like how a person who runs a restaurant is a restaurateur, not a restauranteur.