Just Test It, Already

Why you shouldn't waste hours debating your next design idea - just test it already, and watch out for these 5 mental flaws that can derail your design process.

Ideas are cheap…finding out if they’re good ones can be expensive

Have you ever been in one of those long-winded design review meetings? The ones chock-full of variable-quality ideas and strongly held opinions, but bereft of supporting evidence. Decisions still need to be made, though, so the debate rages for hours to make sure you don’t make the wrong call. Eventually you work out what seems best and run with it.

But lurking in those meetings are hidden forces waiting to derail your design process. Flaws in the way you think (the way we all think) that can throw you off track. Here are 5 common biases that are just waiting to sabotage your next review meeting.

Survivorship Bias

Have you ever focussed on what a competitor has done to get inspiration for what you should do next, even though you don’t have the context to understand how they reached their end point? Could you be mistaking a structural, manufacturing or aesthetic change for an aerodynamic demon tweak? Even if it was an aero change, you’re only seeing the winner. You never see the options that didn’t cut the mustard — the losers. As such, you can’t reconstruct the reasoning that led to the winning design. Which leads into the next flaw: perhaps you shouldn’t try to explain things when you don’t have the full story?

Narrative Fallacy

It’s not always easy to work out why something happened, especially when you don’t have much to go on. Unfortunately that doesn’t usually stop us creating a narrative that fits our view of the facts.

“A stupid decision that works out well becomes a brilliant decision in hindsight.” (Daniel Kahneman)

A story arc that passes through our observation points helps it all to make sense. Then we can make a judgement on where to go next. It also makes for great supporting evidence in an argument. But then you’re at risk of falling into the next trap…

Dunning-Kruger Effect

Without getting too Donald Rumsfeld on you — you don’t know what you don’t know. But you can be confident in what you think you know. Perhaps Bertrand Russell said it better (he usually did):

“The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt.” (Bertrand Russell, The Triumph of Stupidity)

You’ll have seen this crop up in meetings: the people who don’t know debate what to do, while the people who might know keep quiet. In these cases, a well-constructed narrative fallacy, delivered persuasively, often ends up dictating the project direction.

Congruence Bias

AKA — trying to prove yourself right. This one strikes when you try to prove that your idea is great by directly testing it, often repeatedly, rather than testing the alternatives. It can lead to the assumption that, because you’ve compared a couple of options and made a gain, you’ve made the design as good as it can be. Or, in meeting speak, that you’ve “picked the low-hanging fruit” [shudders].
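If you want to see this one in action, here’s a toy sketch in Python (the `simulated_drag` function is pure make-believe, standing in for a real simulation). A head-to-head test of your idea against the baseline shows a genuine gain, while a cheap sweep of the same parameter shows you stopped well short of the best setting:

```python
import numpy as np

# Hypothetical stand-in for a real simulation: drag as a function of one
# design parameter, with a minimum the designer doesn't know about.
def simulated_drag(x):
    return 0.30 + 0.05 * (x - 0.7) ** 2

baseline, my_idea = 0.2, 0.4

# Congruence-bias testing: compare only "my idea" against the baseline.
print(f"baseline drag: {simulated_drag(baseline):.4f}")
print(f"my idea drag:  {simulated_drag(my_idea):.4f}  (a gain, so stop here?)")

# Testing the alternatives too: a coarse sweep of the whole range shows
# the two-point comparison stopped well short of the optimum.
xs = np.linspace(0.0, 1.0, 21)
drags = simulated_drag(xs)
print(f"best of sweep: x = {xs[np.argmin(drags)]:.2f}, drag = {drags.min():.4f}")
```

The two-point comparison really does show an improvement; the sweep just shows how much was left on the table.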

The Halo (or Horns) Effect

This one eats up endless hours of meeting time. When suffering from the halo effect, a person who likes some aspect of a design (possibly a feature they came up with) is predisposed to like any design with that feature. Conversely (the horns effect) if they dislike some aspect of a design (probably the bit you came up with) they’re more likely to write the whole thing off.

So how can we avoid these biases?

It took a while to get here, but this is where simulation comes in. You need data to evaluate your design ideas effectively.

Do you have a lean simulation process that makes it easier to test ideas than to discuss them?

Can your current simulation process help you identify ideas that are potential winners? Does it give you enough insight to be able to identify potential goldmines that are just a tweak away from hitting the jackpot?

On the flip side, can your process help you identify the features that are capping your performance?

Does your process let you experiment with those “what if we…?” ideas that all too often end up relegated to the “that’ll never work” box?

But what happens if there just aren’t any big winners?

Not to worry. You’re just going to have to cycle through your build-measure-learn loop as fast as possible, adding small increments each time. Small gains add up, and as long as your simulation has the resolution to separate those small gains from the noise, you’re on to a winner. It just might take a while.
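To put rough numbers on that, here’s a minimal Python sketch (the 0.5% gain per iteration and the run-to-run scatter are made-up values, not measurements). The standard error of repeated runs tells you how many repeats you need before a small gain stands clear of the noise, and the compounding shows why the patience pays off:

```python
import numpy as np

true_gain = 0.005  # a 0.5% improvement per iteration (assumed)
sigma = 0.004      # run-to-run scatter of the simulation output (assumed)

# Resolution check: comparing two designs, each averaged over n repeat
# runs, the standard error of the measured difference is sigma*sqrt(2/n).
# A gain is only trustworthy once it sits ~2 standard errors from zero.
for n in (1, 2, 4, 8, 16):
    se = sigma * np.sqrt(2.0 / n)
    print(f"n = {n:2d} runs per design: gain/SE = {true_gain / se:.1f}")

# Small gains add up: ten loops at +0.5% each compound to ~5.1%.
total = (1 + true_gain) ** 10 - 1
print(f"ten {true_gain:.1%} gains compound to {total:.1%}")
```

Since the standard error scales as sigma/sqrt(n), halving the noise in your simulation buys you as much resolution as quadrupling the number of runs.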

In summary…

To mitigate the effect of these meeting-derailing biases you need data to act on. To get that data you need a way to test your ideas, preferably without taking too long or costing too much. You need a simulation process that can:

- test an idea in less time than it takes to debate it
- flag the potential winners, including the ones that are just a tweak away from the jackpot
- pinpoint the features that are capping performance
- explore the “what if we…?” ideas before they get written off
- resolve small gains from the measurement noise

Does your design & development process have these capabilities?

If you do, then don’t waste time talking about what you could do — just test it already.

If not, then we should talk…