Why our metrics are so predictive

There's a reason Designalytics' predictions align with sales outcomes more than 90% of the time: We started with the outcomes, reverse-engineering our consumer-derived design metrics and success thresholds based on the actual sales performance of redesigns launched to market. The result? Unprecedentedly predictive data that's laser-focused on brand growth.

What we measure

We’ve evolved tried-and-true KPIs to accommodate today’s retail realities and developed new metrics to gauge a brand’s mental availability. The result is the most robust picture of package design performance in the industry.

Purchase preference

Which design are consumers most likely to purchase?

Communication

What do consumers think and feel about the design on first view? Which brands own the messages that drive purchase in the category?

Mental availability

Does the brand have visual assets that are truly unique and well-liked? How easily can consumers conjure the design from memory, or recognize it from a distance?

Standout

How well does the design grab and hold attention?

Findability

How quickly and accurately can consumers locate the brand when actively searching for it?

Element engagement & resonance

Which are the most liked and disliked design elements, and why?

What makes our data best-in-class?

Unprecedented data quality

We're the only provider to ensure "first-view" data quality: each online activity uses a fresh set of consumers. This virtually eliminates the pre-exposure bias that plagues other design research, yielding far more reliable data.

Massive sample sizes

Nearly 5,000 category shoppers participate in the research for each product category we evaluate on a syndicated basis. For 360 design tests, 600+ category shoppers participate in each study—four times the industry standard. For Versus tests, 200+ category shoppers participate in each match-up.

Advanced exercise design

We use engaging online exercises—not surveys—with innovative failsafes designed to minimize respondent errors. Additionally, our algorithms obsessively control for other factors that may bias the data, such as package position and adjacencies in exercises that display a competitive set.

New-to-industry metrics

In addition to measuring classic performance indicators such as standout and findability, we've developed new exercises to quantify and compare mental availability so it can be managed for lasting brand health:

  • Distance recognition: how easily and accurately can consumers recognize the brand from a distance?
  • Memory structures: has the brand built sufficient memory structures to allow consumers to conjure a visual image from memory?
  • Distinctive assets: does the brand have visual assets that are truly unique and well-liked?

Multi-view standout evaluation

Dissatisfied with the deeply flawed industry standard for standout evaluation (i.e., testing a single full-planogram stimulus), we test multiple scenarios that reflect real-world shopping experiences:

  • Single facing (e-commerce and small brick-and-mortar stores)
  • Blocked facing (brick-and-mortar stores)
  • From an angle (down-aisle perspective at brick-and-mortar stores)

Designed for agile brands

Syndicated category and brand data are instantly accessible, and pre-market design testing takes days—not weeks or months.