How We Increased a Metric by 300% to Generate Millions of Additional Deals

Product Experimentation Case Study: How we leveraged experimentation to increase the use of an activation feature by 300% in 3 months, with a potential impact of an additional 6 million deals per year in the marketplace.

Sérgio Schüler

--

The challenge

Saving a search for "iPhone"

I was part of a marketplace where millions of buying decisions happened over messages or calls between users. Because of this, it was very important to increase the percentage of buyers who communicated with sellers, so we created tools to help users find what they were looking for. One of these features was "saving a search": once a search was saved, the user was notified whenever new ads that matched their search went live in the marketplace.
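To make the mechanics concrete, here is a minimal sketch of how a saved search could be matched against newly published ads. The data structures and the matching rule are illustrative assumptions for this article, not OLX's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SavedSearch:
    user_id: str
    query: str                      # e.g. "iphone 13"
    max_price: Optional[float] = None

@dataclass
class Ad:
    title: str
    price: float

def matches(search: SavedSearch, ad: Ad) -> bool:
    """Rough match: every query term appears in the title and the price fits the budget."""
    in_title = all(term in ad.title.lower() for term in search.query.lower().split())
    in_budget = search.max_price is None or ad.price <= search.max_price
    return in_title and in_budget

def users_to_notify(ad: Ad, saved_searches: List[SavedSearch]) -> List[str]:
    """When a new ad goes live, return the users whose saved searches it matches."""
    return [s.user_id for s in saved_searches if matches(s, ad)]

searches = [SavedSearch(user_id="user-42", query="iphone 13", max_price=3000.0)]
print(users_to_notify(Ad(title="iPhone 13 128GB", price=2800.0), searches))  # ['user-42']
```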

This feature was important because we knew there was a positive correlation between saving a search and the likelihood of buyers messaging a seller.

Saving a search was not the only feature positively correlated with messages; it was not even the feature with the strongest correlation. But it had one very big problem: a tiny user base. Fewer than 1% of users used it.

Why wasn't the save search feature used?

We observed that a very small percentage of the user base was saving searches, but those who did were using it frequently. Why did a small percentage of users love saving searches while the majority never touched it?

Our first instinct was to look for user behaviors that could be better served by saving a search. We found out that a sizeable chunk of buyers was repeating the same search on different days. This was a clear indication that we had room to increase the use of the save search feature.

While this was a very data-informed company, data can only get you so far. Data will tell you what is happening, but not why. To understand the reason some used the feature and most didn't, we conducted qualitative research through user interviews.

We found out that the problem was in fact quite simple: users didn't know the feature existed. Some confused it with another feature (saving a specific ad). And many of the users who did use it had discovered the feature by accident.

There was a clear discoverability problem.

Generating solution hypotheses

Now we needed to generate ideas on how to increase the discoverability of saving a search. Here, lots of teams would retreat into their shell, with Product Managers and UX Designers doing the brunt of the brainstorming and decision making. But our team believed everyone is a valuable source of input, so we went idea hunting.

The first thing we did was involve anyone in the company who wanted to contribute in an asynchronous brainstorm exercise. This is just a fancy way of saying that people could submit their own ideas. In this case, we used an Easyretro board, but we could just as easily have used a Google Form. A lot of teams do this synchronously, but research shows that people perform better at generating ideas when doing it alone.

One caveat: to get relevant ideas, it's smart to get people up to speed with what you already know about the problem. We included a short video explaining everything we knew about the problem and why it was important, and gave some examples of what a useful idea looked like versus what a useless one sounded like (i.e. not specific enough).

Even with the explainer video, there were non-useful ideas or ideas that solved the wrong problem. So before we did anything else, we categorized the ideas into 3 buckets:

  1. increase save search adoption (our goal);
  2. improve the effectiveness of saved searches (i.e. conversion; not our current goal, as we could work on how effective the feature was after more users were using it); and
  3. others/not useful.

While we are better at generating ideas alone, we are better at picking the best ones as a group. So we got the team together, along with a couple of engineers from the team responsible for the save search feature. We voted on the ideas that we thought would bring the most impact to the goal. Once we had a few chosen ideas, we mapped out how they would work and what assumptions needed to be true for each step to work as intended:

The mapping of one of the ideas: actors, the steps each actor needs to take, and the assumptions behind each step

If you want to use a template for mapping the idea, I made one in Miro.

With all that mapped out, we prioritized experiments based on a matrix of impact on the goal vs effort to build. Now we were ready to design our experiments.

Our main experiments

Even if we feel very strongly about an idea, we can't really know if it will bring the desired impact. So we run A/B (C/D/E…) tests to understand whether our change actually moves the metric.
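To give a sense of how such a test can be read out, here is a minimal sketch of a two-proportion z-test on adoption rates. The numbers and the function are hypothetical and not OLX's actual analysis pipeline.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))    # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical numbers: 1.0% of control users saved a search vs 1.4% in the variant
p_a, p_b, z, p = two_proportion_z_test(conv_a=500, n_a=50_000, conv_b=700, n_b=50_000)
print(f"control={p_a:.2%}  variant={p_b:.2%}  z={z:.2f}  p-value={p:.4f}")
```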

Experiments 01 to 04: First search experience (several tooltip models)

The obvious way to improve awareness that the feature existed was to inform the user. The problem was that there was already a tooltip that appeared the first time a user searched for something at OLX. But that tooltip didn't have a good click-through rate and, as we saw in the qualitative research, it was not educating users. Not that we didn't believe in the tooltip pattern; it had worked in several other products. So we decided to iterate with several formats and different copywriting:

Different tooltip solutions. The best performing was the tooltip on the leftmost image.

We tested many formats and, to our surprise, the ones with images performed worse than the text-only tooltips. In this particular case, the best performing tooltip won not so much because of copy or format, but because it focused users on what we wanted them to do (save the search) by putting a dark overlay on top of the rest of the page.

Experiment 05: Always-on save search button

One hypothesis we had (kindly ̶s̶t̶o̶l̶e̶n̶ inspired by the Immoscout24 app) was that the save search button was too hidden at the top of the search results. We also hypothesized that users decided to save a search after looking at the results, so they would have to scroll all the way back to the top of the page to save it. Not very nice. So we experimented with a save search button that floated along as the user scrolled down the page.

The floating button would appear as soon as the save search button at the top scrolled out of view

This had a much bigger lift on the metric than we were expecting 🎉

Experiment 06: Repeated search tooltip

We knew users were searching for the same thing more than once, so we just rolled with their behavior: once a user had run the same search 3 times, we would ask them if they wanted to save that search.
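A minimal sketch of that trigger, assuming an in-memory counter keyed by user and normalized query; the names, the threshold constant, and the bookkeeping here are illustrative, not the production implementation.

```python
from collections import defaultdict

REPEAT_THRESHOLD = 3               # the third identical search triggers the prompt

search_counts = defaultdict(int)   # (user_id, normalized query) -> number of times searched
already_prompted = set()           # (user_id, normalized query) pairs we have prompted once

def on_search(user_id: str, query: str) -> bool:
    """Record a search and return True if we should show the 'save this search?' prompt."""
    key = (user_id, query.strip().lower())
    search_counts[key] += 1
    if search_counts[key] >= REPEAT_THRESHOLD and key not in already_prompted:
        already_prompted.add(key)
        return True
    return False

# Example: the prompt appears only on the third identical search
print([on_search("user-42", "iPhone") for _ in range(3)])  # [False, False, True]
```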

This was very similar to the winning tooltip variant, just with different copy. It also had a surprisingly large positive effect.

Results and learnings

We managed to increase the use of the save search feature by 300%. But that was just the proxy feature for buyers messaging sellers, remember? Our experiments brought a 44% increase in the number of messages coming from saved searches. Projected over a year, this represents an additional 6 million messages exchanged between buyers and sellers. That was a huge win!

The left and right axes are different, of course! We have many more users than saved searches

Here are our key learning points:

  • Mind the difference between mobile and desktop: the desktop is our oldest platform, so it has more technical debt than the other ones. This makes it harder to experiment on the desktop (it takes too long). So we performed our experiments on mobile web, iOS and Android. But when we deployed those changes on the desktop, they just wouldn't work. Desktop is too different a medium from mobile; we needed dedicated experiments to move the metrics there.
  • Async works great for brainstorming, and you can involve a lot of people, so they feel heard and contribute to the success of the initiative. Sync, on the other hand, is great for decision making, but you will have to restrict the number of participants.
  • Experiment, because “obviously” better things are not necessarily better: I was sure personalizing the tooltip and adding contextual images would increase the conversion, but it was in fact the opposite! It ended up distracting the user from the main action.
  • Have some buffer time to clean up the experiments: it’s ok to be quick and dirty with experiments, but once they become part of the main product, you need to clean up the code.

If you like this, you might like my online course: Product Discovery with Opportunity Solution Tree

Product Discovery is one of the most important activities carried out by product teams in startups and enterprises alike. Without good Product Discovery, teams fail more often than not by building features that end up not delivering any meaningful value to customers and the business. But Product Discovery is an often misunderstood term. Part of it is that Product Discovery is equal parts art and science. But another part is that most product teams don't have a solid framework to base their Discovery process around. My online course, Product Discovery with the Opportunity Solution Tree, will help you with that.
