Mastering Stakeholder Collaboration for Evidence-Based Decision-Making
Actionable Tips to Transform Dreaded Decision Interactions into an Enjoyable, Value-Adding Process
Hey! Nacho here. Welcome to Product Direction’s newsletter. If you’re not a subscriber, here’s what you missed:
(Podcast) How to Navigate Politics, Dependencies, and Alignment in Large Corps
How to explain the financial value of continuous Discovery to your stakeholders
Additionally, I’ll publish my conversation about Discovery and Strategy with Teresa Torres next week 🥳
Despite your best efforts, you are still among the vast majority of companies and teams that make priority decisions based on opinions.
Don’t get me wrong, you have made progress. You are likely trying to:
Collect and put numbers on the table
Summarize insights from user feedback
Even create a well-intentioned business case!
Yet after presenting that evidence for a few potential options, the conversation quickly derails into opinions:
“Even if the estimation is lower, this option aligns better with our strategy.”
“Our competitor has it! If we don’t act, we will lose customers.”
“Where did you get this estimation from? Is that data source valid?”
“We got this customer feedback because we asked the wrong customers!”
We end up “sizing in the air” the importance and validity of these arguments, filling the gaps in the argument with our mental models and biases.
Before we jump to conclusions, it’s important to note that:
These questions, concerns, and comments are very valid!
I’m not proposing a mythical formula that removes opinions from the table and produces automated, perfect decision-making.
What I will challenge you to do is to handle the situation differently.
Uncovering Assumptions for Better Conversations
Imagine, as a PM, going to a quarterly roadmap review, armed with your evidence and priorities, and facing the above arguments.
Instead of getting defensive, you must first understand that there is an underlying, unspoken assumption in the argument. Your job is to uncover it.
The easiest way is to ask, “Why do you think so?” Using the previous examples, the dialogue for each argument can go along these lines:
PM: “Why do you think this option aligns more with our strategy?” Other: “Because it will help us grow our teenage users, who are not currently our core business but are our bet for growth.”
PM: “Why do you think we will lose customers if we don’t act?” Other: “This is valuable to our customers, and if we don’t have it, they will opt for a solution that does.”
PM: “Why do you believe this data source is invalid?” Other: “Because it conflicts with the data I have from this other source.”
PM: “Why do you believe these are the wrong customers?” Other: “Because the niche group for this feature is companies with 100-300 employees, and you didn’t filter by company size in your sample.”
All of those are valid assumption statements, and unless you have evidence pointing in a different direction, they are fair claims to include in your hypothesis validation process.
You can keep exploring if needed, asking more “whys” to uncover a more concrete hypothesis: “Why do you think the niche for this feature is companies with 100-300 employees?” and so on.
Integrating New Assumptions
That was the easy part.
Next, instead of having a purely opinion-based discussion, we can map and evaluate this new assumption. We use the typical 2x2 matrix to determine how risky it is: what evidence do we have, and what impact could it have on the business?
You must align with stakeholders on where the assumption sits in the matrix. A discrepancy in the level of risk seen by different people is another source of conflict. If, after everyone has explained their reasoning, people still disagree, adopt the view of whoever considers the assumption riskiest. The only consequence is that you prioritize it for discovery; if it turns out not to be risky, you should quickly find solid evidence and build more confidence.
Using the matrix to prioritize the next steps
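If it helps to make the mapping concrete, here is a minimal sketch of the 2x2 logic in Python. The assumption text, labels, and quadrant wording are illustrative choices for this example, not a prescribed tool:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    evidence: str  # "weak" or "strong": how much supporting evidence exists
    impact: str    # "low" or "high": potential impact on the business

def quadrant(a: Assumption) -> str:
    """Place an assumption in the evidence-vs-impact 2x2 and suggest a next step."""
    if a.evidence == "weak" and a.impact == "high":
        return "risky: prioritize for discovery"
    if a.evidence == "weak" and a.impact == "low":
        return "uncertain but low stakes: validate opportunistically"
    if a.evidence == "strong" and a.impact == "high":
        return "confident: safe to build on"
    return "well understood, low stakes: move on"

# The niche-size assumption from the dialogue above, as one stakeholder sees it:
niche = Assumption(
    statement="The niche for this feature is companies with 100-300 employees",
    evidence="weak",
    impact="high",
)
print(quadrant(niche))  # -> risky: prioritize for discovery
```

The point is not the code itself but the shared vocabulary: once everyone agrees on the two axes, placing an assumption becomes a discussion about evidence and impact rather than a clash of opinions.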
Now, depending on the situation, we can decide what to do.
For example, we don’t need to push back on the new data source. Getting that data and running a proper analysis can be very valuable. If you were deciding based on the wrong data, it’s a blessing that your stakeholders helped you catch it in time!
You may not have strong evidence for the “competitors have it” assumption, so it can be very valuable to run proper discovery and derisk it. Dismissing it just because you don’t *think* it’s risky is not evidence-based decision-making, and it defeats the whole purpose of your prior discovery efforts.
In summary, you would align the subsequent actions with stakeholders to derisk the assumptions that are still considered risky.
“We don’t have time for validation”
This sounds lovely and logical. Until we face the urgency factor.
The quarter is starting, so we need to end the meeting with a decision on what we will start “developing.” We don’t have time for further experiments, validation, etc.
Here is where smart PMs do it differently: instead of “fighting” for their original plan, they are open to changing it and favor the stakeholders’ opinions and assumptions.
This goes against all the theories about “saying NO” and the other PM advice that sounds good in books.
The big difference is that, by favoring them, you remove the critical source of friction while still moving forward with discovery.
Let’s say the feature costs one month of development. Building it on an invalid idea would be a lot of waste. But you run discovery in parallel, and after a week, you find evidence that invalidates the idea. Now you can return to the planning table and present a new plan backed by evidence rather than opinions. (And by the way, your discovery may confirm the idea, in which case you continue the plan and are already a week ahead on development. It’s all about controlled bets.)
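To make the “controlled bet” arithmetic concrete, here is a tiny sketch. The 40% chance of invalidation is an assumption for the example, not real data:

```python
# Illustrative numbers for the controlled bet above.
DEV_COST_WEEKS = 4      # the feature costs ~one month of development
DISCOVERY_WEEKS = 1     # parallel discovery gives an answer in a week
P_INVALID = 0.4         # assumed probability discovery invalidates the idea

# Option A: build without discovery. If the idea is invalid,
# you only find out after shipping, wasting the whole month.
expected_waste_without = P_INVALID * DEV_COST_WEEKS

# Option B: build AND run discovery in parallel. If the idea is
# invalid, you stop after one week, wasting only that week.
expected_waste_with = P_INVALID * DISCOVERY_WEEKS

print(f"Expected waste without discovery: {expected_waste_without:.1f} weeks")
print(f"Expected waste with parallel discovery: {expected_waste_with:.1f} weeks")
# With these assumptions: 1.6 vs 0.4 weeks of expected waste, and if the
# idea turns out valid, you lost nothing by starting development early.
```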
Aligning on Confidence Level
In the Integrating New Assumptions section, we saw how to discuss risk levels. But what about aligning on how strong our evidence is?
Before any discussion, I strongly recommend aligning on how to measure it. A great starting point is Itamar Gilad’s confidence meter. However, this is a contentious topic, for which I wrote some recommendations previously:
The Smartest Way: Involve Stakeholders in Discovery
Doing all this alignment in a planning meeting is time-consuming and ineffective.
The best way to keep planning running smoothly is to keep stakeholders in the loop throughout the discovery process. You don’t need to create new artifacts or fancy presentations. The same things you share with your team should be enough to keep them informed and generate shared understanding.
Moreover, you will uncover their conflicts and assumptions earlier and can incorporate them into your early discovery steps. Everyone will be happier :)