
Opinions expressed by Entrepreneur contributors are their own.
The most dangerous words in product development are: "Our users will love it." I've heard this declaration in countless product meetings, usually followed by months of engineering work and ending in quiet disappointment when users don't show up. The culprit? Confirmation bias – our brain's maddening tendency to seek out information that supports what we already believe.
As product managers, we are hired to make decisions. We analyze markets, gather requirements and prioritize features. The problem is that once we develop a hypothesis about what users want, we start filtering all incoming information through that lens. Ambiguous feedback gets interpreted as supportive. Negative feedback gets dismissed as "edge cases." And we gradually construct an alternate reality in which our product decisions are always right.
User Research Theater
"User research theater" is going through the motions of talking to users without ever opening yourself up to challenging your assumptions. You can recognize the symptoms in your organization:
- Cherry-picking positive quotes from user sessions while ignoring negative patterns
- Asking leading questions designed to elicit specific answers
- Limiting research to users who already love your product
- Interpreting silence or confusion as agreement
- Dismissing negative feedback as "they just don't get it yet"
Listen, I get it. You've already sold your leadership and investors on an ambitious roadmap. You hired engineers based on certain technical assumptions. Your company narrative may be built around a specific vision of what users want. Changing course feels impossible.
But staying on a doomed course is worse.
Breaking the bias cycle
So how do we fix this? How do we create processes that question our cherished assumptions instead of reinforcing them? Here are some practical approaches I've seen work:
1. Separate data collection from interpretation
One team I worked with adopted a practice in which the people conducting user interviews couldn't interpret the results. They could only document exactly what was said. A separate team – one with no emotional investment in specific outcomes – would then analyze the transcripts. This reduced the tendency to hear what they wanted to hear during interviews.
This separation creates healthy tension. The interview team focuses on asking good questions rather than steering users toward predetermined conclusions. The analysis team evaluates patterns without being influenced by users' tone or the interpersonal dynamics of the interview.
2. Actively seek out disconfirming evidence
Make it someone's explicit job to play devil's advocate during research planning. This person should ask, "How could we disprove our hypothesis?" instead of "How can we confirm our idea?"
For example, instead of asking "Would you use this feature?" try "What would prevent you from using this feature?" The first question almost always receives a polite "yes." The second surfaces the real obstacles you need to overcome.
3. Pay attention to behavior, not just opinions
Users are notoriously bad at predicting their own future behavior. They will enthusiastically tell you they would definitely use your new feature, but when it launches, they stick to their old habits.
I find it far more valuable to watch what users actually do than to listen to what they say. That means analyzing usage data from existing features, building prototype experiences in which users can demonstrate preferences through their actions, and conducting field research where you observe users in their natural environment.
4. Create a culture that rewards changing course
If your team is punished for admitting they were wrong, guess what? People will double down on bad ideas and never acknowledge the need to change direction.
Smart companies build rituals that celebrate learning and adaptation. Some startups hold "pivot parties" – actual celebrations when the team makes a significant course correction based on user insight. They literally popped champagne when they killed features that research showed wouldn't succeed. That sends a powerful message: learning is valued over stubborn persistence.
5. Diversify your research participants
If you only talk to your most enthusiastic users, you create an echo chamber. Make sure your research includes:
- Potential users who chose competing products
- Former users who abandoned your product
- Current users who rarely engage with your product
- Users across different demographics and use cases
This diversity helps reveal blind spots in your understanding.
The expertise paradox
Here's the painful truth: the more experienced you are in your field, the more prone you are to confirmation bias. You've seen the patterns before. You've developed intuition. Sometimes that's extremely valuable. Other times it makes you dangerously overconfident.
The solution is not to discard your experience. It's to pair hard-earned intuition with rigorous processes for testing your assumptions. The best product leaders I know hold strong opinions loosely. They make bold bets based on their expertise, but they adapt quickly when the evidence contradicts their initial hypotheses.
Ultimately, the market doesn't care about your grand vision or your elegant solution. It only rewards you if you've solved a real problem in a way that fits into users' lives. And the only way to know that for certain is to continually question what you believe about your users.