The illusion of truth. Why market research is contributing to product failure and what you can do about it.

Alex Street
8 min read · Sep 17, 2021

--

It was the Monday after our new product release. A product that our whole team had been thrilled to launch, and that we had high hopes and dreams for, had gone live over the weekend. By now the users would be rolling in. The dollar signs would be multiplying and we would be toasting to our success.

Or would we?…

We had spent hundreds of hours running user research and online surveys. All the indications had been good. Users had said they wanted it. They said they would use it. They said they needed it. I’d spent many hours with the data science team speccing out our analytics, so I could study how the product was used and whether it was used in the way we expected. I sat down at my desk on that blustery Monday morning. It was early and no one had arrived yet. Rain pattered against the windows. I was excited to look at the data. What kind of hockey-stick graphs was I going to present this afternoon?…

And then nothing.

I looked at the empty dashboards. No users. No sign ups. No usage. What had gone wrong? Why were people not using this awesome new product? All of our research had predicted stellar take-up, interest and use. Why were people not behaving as they claimed they would?

This situation rolled on for more than a year, with repeated efforts to drive take-up and adoption, but to no avail. Usage actually declined amongst the early base and after a year the product was dead in the water.

What had gone wrong? Why had the product failed? Why had our research, which predicted take-up, been an illusion?

The answer is simple. Traditional market research is an illusion. It is an illusion of fact and truth.

We had failed not because we had avoided talking to customers, and not because we hadn’t invested in research to back up our hypotheses. We had done all of these things in spades. And yet we still launched a dud.

Why? I decided to go on a mission to figure out what had gone wrong.

I knew from my experience of managing research and analytics for existing products that once we got products to market, teams effectively ceased to run user research. Why? Because once a product launched, they could put analytics into it and run A/B tests. This gave them a view of actual user behaviour, and of user preferences based upon that behaviour. Why run an online survey asking people to imagine a new feature and say whether they would use it, when you could put the feature into an A/B test and see if it attracted users? It made sense. Once you could track user behaviour at scale and run A/B tests, primary research became less useful. But our conundrum, in a team that was developing new products and businesses, was that we had no analytics or A/B testing capability, because the products and businesses we were imagining didn’t exist yet. How could you have analytics or A/B testing in products that do not exist and have yet to be launched?
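To make the contrast concrete, here is a minimal sketch, in Python, of the kind of analysis a launched product makes trivial: a two-proportion z-test comparing conversion between a control and a variant. The function name and the traffic numbers are hypothetical illustrations, not our actual stack.

```python
from statistics import NormalDist

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert better than control A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 1 - NormalDist().cdf(z)                   # one-sided test: B > A
    return p_b - p_a, z, p_value

# Hypothetical numbers: 10,000 users per arm, new feature shown to B only.
lift, z, p = ab_test_z(conv_a=420, n_a=10_000, conv_b=505, n_b=10_000)
print(f"lift={lift:.2%}  z={z:.2f}  p={p:.4f}")
```

A few lines of arithmetic over live behaviour settles a question that no amount of survey data can. The catch, of course, is that you need a live product to run it.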

This bugged me. The data we needed to make decisions only existed in products once they launched. But by the time a product had launched, all the major decisions had been taken, not least of all whether to launch the product in the first place. So we relied upon market research: user interviews and online surveys that attempted to manufacture data predicting what users might actually do once the product was launched. The problem was that this data had proven to be incorrect. People had not behaved as they claimed they would. And this is the heart of the problem. What people say and what they do are not the same thing. When you are developing new products and you have hundreds of ideas to explore, this is especially the case. How can users reliably predict their interest in, and take-up of, an idea when it is barely a few words on a page? The reason it’s not working is because it never did work. And yet if you are going to build new products and services you have to solve this essential problem:

  • How to get valid data on user intent?
  • And more specifically — how to get valid data on user intent, for large numbers of ideas, when these ideas are barely a half-sketch of a proposition?

So this is the pre-Alpha product problem: the stage when you have hundreds of ideas and don’t know which one to develop and progress. Once you get to your product Alpha, you have locked in the core value proposition and you are intent on executing it in the right way. Pre-Alpha, you are focused on what would be valuable to end users. This is the moment of greatest and wildest latitude in what you might do. And into this set of wild extremes walks the most unreliable data upon which to base our decisions. Surely, I thought, we could do better than that?

So I decided to get really clear on the problems with market research, so I had something concrete to solve. I believe these four issues are a major, if not the major, contributor to the failure of new products. According to Harvard Business School, 95% of new products fail and the most common reason is “no market need”. Well that’s just great! No one wants the thing that we built? How can that be?! We did all this research and people said they did want it! So what’s the problem? The problem is the research methodologies:

  1. Traditional research is based upon panels. Panels are narrow, artificial constructs made to look like the general population. Surveys represent only the part of the population that takes surveys. Guess what?! Not very many people actually take surveys.
  2. Survey respondents are being paid for their time. They are motivated to complete surveys, and progress through surveys, for cash. They don’t have an honest interest in your product; they are in it for the money.
  3. People are terrible at imagining things. Most studies of new ideas and future products rely heavily on users “imagining” a product or service and then giving an answer as to their interest in it.
  4. You can’t directly ask someone a question and get a useful answer. Surveys are a question and answer format. “How appealing is this?” “Would you use it?” Users consciously post-rationalise their own behaviour or make predictions about what they might do in some imagined future scenario. Unfortunately, people aren’t good at doing this.

It was around this time that I returned from the d.school at Stanford University, having taken a short course in innovation and product design. We’d spent a lot of time prototyping physical and digital products. I’d also started mixing with the start-up community in London and become exposed to new ideas and approaches for product and venture development. My head was reeling with experimentation methodologies like “Landing Page tests”, “Wizard of Oz”, “Concierge”, “404 tests” and “Fake Doors”. These approaches may be familiar to you. Most people I meet are familiar with at least the theory of some of them. Few have actively used any of them. The barriers are many: technical knowledge, methodological knowledge and organisational barriers, to name a few. All of these approaches had one thing in common: they produced behavioural data, because users believed the concept they were seeing was real. Their interest and actions were real, as they believed the product being advertised or the landing page they visited was for a product they could buy and use right now. The advantages of this approach clearly resolved the issues I had with traditional research (a code sketch of one such test follows the list below):

  1. Real users — not panels. Any user on the web can encounter, react to and explore an idea being tested.
  2. Real-life behaviour — not claimed intent. Users discover, react and engage with your idea believing it is real. Therefore their interest is real. Their behaviour is real.
  3. Observational not directly solicited — users aren’t directly asked their opinion. We observe how users naturally behave when they do not know they are being observed.
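To make this less abstract, here is a minimal sketch of a fake-door test written with Flask. Everything in it is invented for illustration: the product name, the copy and the log file are hypothetical, not from any real test we ran. The point is the mechanism: the visitor believes the product exists, so the click is a genuine behavioural signal.

```python
# A minimal fake-door test (hypothetical product and copy, for illustration only).
# The landing page looks real; the "sign up" click is logged as behavioural data,
# then the visitor sees an honest "coming soon" message.
from datetime import datetime, timezone

from flask import Flask, request

app = Flask(__name__)

LANDING = """
<h1>InstantLegal: contracts reviewed in 60 seconds</h1>
<p>Upload any contract and get a plain-English risk summary.</p>
<a href="/signup">Get early access</a>
"""

@app.route("/")
def landing():
    return LANDING

@app.route("/signup")
def fake_door():
    # Record the signal that matters: a real visitor tried to use the idea.
    with open("signups.log", "a") as f:
        f.write(f"{datetime.now(timezone.utc).isoformat()} {request.remote_addr}\n")
    return "<p>We're still building this. Check back soon for early access.</p>"

if __name__ == "__main__":
    app.run(port=5000)
```

Point a small amount of paid ad traffic at a page like this and claimed intent becomes measurable clicks within days.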

So I set about building a toolkit of capabilities so that we could run this methodology in-house. The benefits were immediate. We were able to easily and repeatedly test large numbers of propositions with barely a few words to describe each. So long as we could describe an idea in 10 words, or even 30 words, we had sufficient test stimulus.
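When you are testing dozens of propositions at once, each idea racks up a different number of ad impressions, so raw click-through rates are not directly comparable. One reasonable way to rank them, and this is my illustration rather than a prescribed method, is by the lower bound of the Wilson score interval, sketched below with invented numbers:

```python
from statistics import NormalDist

def wilson_lower_bound(clicks, impressions, confidence=0.95):
    """Lower bound of the Wilson score interval for a click-through rate.
    Ranking by this bound avoids rewarding ideas with tiny sample sizes."""
    if impressions == 0:
        return 0.0
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    p = clicks / impressions
    denom = 1 + z * z / impressions
    centre = p + z * z / (2 * impressions)
    margin = z * ((p * (1 - p) + z * z / (4 * impressions)) / impressions) ** 0.5
    return (centre - margin) / denom

# Hypothetical test results: (idea, clicks, ad impressions)
ideas = [("meal kits for allergy sufferers", 38, 1_200),
         ("contract risk scanner", 11, 250),
         ("pet travel insurance", 52, 3_000)]

for name, c, n in sorted(ideas, key=lambda t: -wilson_lower_bound(t[1], t[2])):
    print(f"{name:32s} ctr={c/n:.2%}  lower bound={wilson_lower_bound(c, n):.2%}")
```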

But it wasn’t good enough. As popular as this new approach became, it was difficult to scale across the organisation. It required technical skills and knowledge in test creation, test design, test management and test publishing across multiple ad networks and environments spanning social, web and app. This was the genesis of the new business I am now building: Alpha Base. Alpha Base is a platform for testing ideas. An all-in-one platform to create idea tests, simply and quickly. No coding is required. No math or statistics. No manual effort or test-creation expertise. No need to set up and manage half a dozen accounts for running ads across ad networks, search and social, and no need to build your own prototypes or put your business or brand out onto the web. If you can describe your idea in words, you have enough to create an Alpha Base test. You are free to focus on creating ideas, while the platform does the work of identifying the winners from the losers.

And so that’s how it all began. Like all good solutions, it started with a problem. In this case, an illusion. I wanted to get the hard facts about what users wanted. Traditional research had been pouring data into our ideation, experimentation and product development processes, but this data was flawed. It had given us a spectre of reality. The only way to reliably predict how users will behave in the future is to observe how they behave today, and then seed new ideas into this space to see how their behaviour changes and reacts to new ideas and concepts. This is the space new products and businesses play in. It’s a wild and open space. The challenge for research is to transition ideas from half-thought to specific, well-defined reality. This requires dedication to prototype-led learning and the kind of rapid, behavioural testing found in Landing Page, Fake Door and 404 tests. If you fail to adopt and master these techniques, your next product will likely fail. And the next one. And the next one. Behavioural tests bridge the yawning chasm between claimed user intent and reality. Start building that bridge today. Your future success depends on it.

--

Written by Alex Street

Alex is CEO and Founder of Alpha Base. He has led research and data science teams at major multinational organisations in product and innovation teams.