Data quality is something of an obsession at Designalytics, in part because we know how much it can impact design research results. Our commitment in this area is a big reason why we’re so much more decisive in our determinations of design performance, and why our assessments are highly predictive of sales in market.
One aspect of our approach that intrigues more data-minded people is our large sample sizes. For pre-market design tests, at least 600 category shoppers participate in each of our studies—four times the industry standard. And for our syndicated design research (which no other firm offers), there are nearly 5,000 respondents in each product category.
To those in the know, this is a major advantage: The larger the sample size, the smaller the margin of error, the more useful the results, and the more convincing the conclusions. Yet because we consistently out-sample traditional design research firms, we’re often asked why we do it. Does it really make that big of a difference?
The answer: It definitely does, and probably a bigger difference than you’d imagine.
Smaller sample sizes can lead to an inconclusive result... that is mistaken for a positive one.
What’s one thing brands likely don’t want to hear after spending months and considerable resources testing package designs?
“The result was inconclusive.”
And yet, that’s exactly the result traditional validation testing produces the vast majority of the time: parity. Given its prevalence, “parity or better” has been seen as the benchmark of success for years. In fact, it’s a marker of equivocation: parity is a statistical draw that has mistakenly been called a win for decades.
Part of this comes down to sample size: Mathematically, studies with small samples (100 respondents or fewer) require large measured differences before you can confidently conclude that the gap between two designs is real. Conversely, larger samples let you detect smaller differences with greater clarity.
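To make this concrete, here is a minimal sketch of the relationship, using the standard normal-approximation formula for the 95% margin of error of a measured proportion. The sample sizes (100, 600, and roughly 5,000) mirror the figures mentioned above; the 50% preference share is an illustrative worst-case assumption, not a Designalytics result.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a measured proportion p
    with sample size n (normal approximation to the binomial)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 50% preference share maximizes the margin of error (worst case).
for n in (100, 600, 5000):
    moe = margin_of_error(0.5, n)
    print(f"n = {n:>4}: ±{moe * 100:.1f} percentage points")
```

With 100 respondents, the margin of error is nearly ±10 percentage points, so only a lopsided result registers as a real difference; at 600 it shrinks to about ±4 points, and at 5,000 to under ±1.5. Note that the margin of error shrinks with the square root of the sample size, which is why quadrupling the sample roughly halves it.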
To use an analogy: Imagine you're shooting photos with a telephoto lens. You point your camera at an object and the image is fuzzy. But if you adjust the focus on your camera, the picture becomes clearer: With each turn, it’s sharper and sharper.
That’s what happens when you increase sample size. What once was obscured and hazy becomes increasingly vivid. You can be more confident in what you’re seeing.
Smaller sample sizes were the norm in design management… until now.
For years, brands have had to settle for the blurry picture of package design performance. Not anymore.
Due in part to our larger sample sizes, Designalytics’ metrics deliver clear positive or negative assessments (rather than parity) about 80% of the time, while traditional validation testing does so less than half the time. Our rigorous design measurement system is optimized for an online environment, which allows us to achieve larger sample sizes and superior data quality while remaining cost-effective. It’s hard to overstate the impact this can have.
Your brand’s design decisions will only be as good as the data that informs them. Having a robust sample size in your quantitative research is one of the most important steps you can take to ensure you’re making the right ones.
Want to learn more about what makes Designalytics’ data better? Get in touch.