
In-Platform Survey Advanced Methodologies: Max Diff

Written by Molly McDermott

About Max Diff

What is Max Diff?

Max Diff (short for maximum difference scaling) is a structured trade-off exercise type that helps you understand which attributes people truly care about.

Respondents see small groups of attributes at a time and, for each set, they pick the attribute they like the most and the attribute they like the least. By repeating this exercise across different combinations, Max Diff reveals which attributes are most and least preferred overall and how large the gaps are between attributes in terms of preference.

Instead of everyone saying “everything is important,” Max Diff forces trade-offs, which produces a clearer ranking of what really matters.

When to use Max Diff

Max Diff is best used to understand buyer preference across a list of attributes, such as:

  • Product features

  • Brand claims

  • Product benefits

  • Menu items/flavors

  • Product names

  • Any longer list of attributes that would be hard to rank in a single question

Max Diff is especially useful when you need a clear rank order and a read on how far apart attributes truly are in terms of preference.

Avoid using Max Diff in the following situations:

  • Too few attributes: If you want to evaluate fewer than 10 attributes, a simpler rating or ranking question will be more efficient and easier for respondents.

  • Combined or overlapping concepts: attributes should be clearly defined and mutually exclusive

    • Max Diff is designed to compare single attributes, not complex combinations of features

    • Do not use Max Diff to compare bundles like “Recyclable cup + High Protein + Stevia Sweetener” vs. “Non-Recyclable cup + High Protein + Real Sugar”

In-Platform Max Diff is best suited for directional prioritization and lower-stakes decision-making. If your project involves high-stakes decisions, complex segmentation, or requires highly robust modeling, we recommend connecting with your Numerator support team to explore a custom Max Diff approach.

About the Max Diff Question Type

Key Details

  • Available across survey types: Max Diff is a question type that can be added to any In-Platform Survey.

  • Advanced prioritization tool: Max Diff helps you identify what stands out most by asking respondents to make trade-offs between options.

  • Attribute count: You can test 10–30 items, though keeping the list under 20 typically leads to a smoother respondent experience.

  • Thoughtful design matters: Most Max Diff exercises use 3–6 items per set (4 is common). Showing each item multiple times (ideally three) helps improve reliability. If items appear fewer times, a larger sample is recommended.

  • Additional cost: Max Diff includes an additional charge of 1 survey credit per 300 completes due to its advanced programming and methodology.

  • Need guidance? For high-impact or more complex projects, your Numerator Research Consultant or Researchers on Demand can help determine whether In-Platform Max Diff or a Custom approach is the best fit.

How Max Diff Works

In a Max Diff exercise, panelists see a series of sets, each containing a subset of items. For every set, they select the item they find most appealing and the item they find least appealing. Repeating this task across sets reveals the relative preference for each item. Over the full exercise, the design should ensure that each attribute is shown multiple times and compared against combinations of other attributes, which results in robust preference data.

Max Diff is an advanced methodology. We strongly encourage you to reach out to your Numerator support team or our Researchers on Demand for guidance when designing an In-Platform Max Diff question.

Designing a Max Diff exercise requires balancing multiple variables to ensure reliable results while maintaining a reasonable survey length and positive panelist experience.

The primary design variables to consider are:

  • Quota Group Size

  • Number of Attributes (A)

  • Attributes per Set (S)

  • Sets per Respondent

  • Total Exposures per Attribute

These variables are interconnected. Adjusting one can directly impact the others.

Quota Group Size

Sample size plays a critical role in Max Diff reliability — especially if attributes are shown fewer times.

Recommendation: The fewer the exposures per attribute, the larger your sample should be.

| Exposure per Attribute | Recommended Minimum per Group |
| --- | --- |
| 1x exposure | 600 completes |
| 2x exposure | 300 completes |
| 3x exposure | 200 completes |
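As a quick sanity check when planning quota groups, the guidance above can be expressed as a small lookup. This is just an illustrative sketch; `recommended_minimum` is our own helper name, not a platform feature.

```python
# Recommended minimum completes per quota group, keyed by
# exposures per attribute (values taken from the table above).
MIN_COMPLETES = {1: 600, 2: 300, 3: 200}

def recommended_minimum(exposures_per_attribute: int) -> int:
    """Return the recommended minimum completes for a quota group.

    Assumption: 3 or more exposures per attribute keeps the
    200-complete recommendation from the table's lowest tier.
    """
    if exposures_per_attribute >= 3:
        return MIN_COMPLETES[3]
    return MIN_COMPLETES[exposures_per_attribute]

print(recommended_minimum(1))  # 600
print(recommended_minimum(3))  # 200
```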

Number of Attributes (A)

The Attributes are the different items panelists are evaluating during the Max Diff exercise. The platform allows 10–30 attributes per Max Diff question. However, to minimize the risk of respondent fatigue, it is best practice to keep your list of attributes under 20 whenever possible. Also, consider that the more attributes you include, the more sets you will need to show in order to get reliable data, which increases survey length.

Additional considerations:

  • Attributes must be mobile-friendly

  • Maximum of 75 characters per attribute

  • Avoid overly long or complex phrasing

Attributes per Set (S)

Attributes per Set determines how many items appear in each evaluation round. The platform supports 3 to 6 attributes per set. Recommended ranges vary depending on the size and complexity of the Attributes (see below).

| Type of Attributes | Recommended Attributes per Set |
| --- | --- |
| 1-2 word statements (ex: names, colors, flavors) | 4-5 per set |
| Full sentences | 3-4 per set |
| Multiple sentences (less than 75 characters) | 3 per set |
| Images | 3-4 per set |

Sets per Respondent

Sets per Respondent represents the number of rounds a panelist completes. The platform allows 3 to 15 Sets per Respondent, with every 5 sets counting as 1 survey question. The goal is to ensure each attribute is shown multiple times.

Pro tip: Use the formula (3 × A) / S to determine the Sets per Respondent needed for 3 Exposures per Attribute. If using a larger sample, you can lower the target to 2 exposures ((2 × A) / S) or 1 (A / S).

Exposures per Attribute

Each attribute should appear at least once, but this is the bare minimum. To produce stable and reliable results, each attribute should be shown multiple times (Numerator recommends at least 3 times per respondent), particularly when working with smaller sample sizes. You can ensure this by adjusting the Sets per Respondent.

Total Exposures is the total number of attribute appearances a panelist sees throughout the exercise.

Total Exposures = Attributes per Set (S) x Sets per Respondent

To determine how many times each attribute appears:

Exposures per Attribute = Total Exposures / Number of Attributes

To estimate the Sets per Respondent needed for 3 exposures: (3 × A) / S

For example, if testing 16 attributes with 4 per set: (3 × 16) / 4 = 12 Sets per Respondent
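The design arithmetic above can be scripted to check a candidate setup before programming the question. A minimal sketch (the function names are ours, not part of the platform):

```python
import math

def sets_per_respondent(num_attributes: int, attrs_per_set: int,
                        target_exposures: int = 3) -> int:
    """Sets needed so each attribute appears ~target_exposures times.

    Implements (target_exposures x A) / S, rounded up to whole sets.
    """
    return math.ceil(target_exposures * num_attributes / attrs_per_set)

def exposures_per_attribute(attrs_per_set: int, sets: int,
                            num_attributes: int) -> float:
    """Average appearances per attribute: (S x sets) / A."""
    return attrs_per_set * sets / num_attributes

# Worked example from the article: 16 attributes, 4 per set, 3 exposures.
print(sets_per_respondent(16, 4))          # 12 sets
print(exposures_per_attribute(4, 12, 16))  # 3.0 exposures per attribute
```

Keep in mind the platform caps Sets per Respondent at 15, so if the result exceeds 15 you would need to trim attributes, enlarge sets, or lower the exposure target.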

Programming a Max Diff Question

Respondent View

Analyzing Max Diff Results

  • Max Diff question results include both a Rank and a Score, which together indicate the relative importance of each item/attribute.

  • The Rank represents the overall order of the attributes and tends to closely align with the score.

  • The Score is calculated using the following formula: (# times attribute was selected as best - # times attribute was selected as worst) / # times the attribute appeared.

    • Higher Scores indicate a stronger preference

    • A Score above zero means the attribute was picked as “most” more often than “least”.

    • A Score below zero means the attribute was picked as “least” more often than “most”.

    • A Score around zero means it was chosen similarly often as “most” and “least” or rarely selected at all.

  • The download includes the Preference Share, which shows the distribution of preferences across all attributes.
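The Score formula above is straightforward to compute from tallies of best/worst selections. The attribute names and counts below are hypothetical, purely for illustration:

```python
def maxdiff_score(best: int, worst: int, appearances: int) -> float:
    """(# times selected as best - # times selected as worst) / # appearances."""
    return (best - worst) / appearances

# Hypothetical (best, worst) tallies; each attribute shown 100 times.
counts = {"Whitening": (60, 10), "Fluoride": (30, 30), "Charcoal": (10, 55)}
scores = {attr: maxdiff_score(b, w, 100) for attr, (b, w) in counts.items()}
print(scores)  # {'Whitening': 0.5, 'Fluoride': 0.0, 'Charcoal': -0.45}
```

Whitening's positive score means it was picked as "most" far more often than "least"; Charcoal's negative score means the opposite.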

Max Diff Analysis Key Terms

Example setup: testing 15 toothpaste features (A = 15), shown 4 at a time (S = 4), across 5 sets per respondent.

| Term | Definition | Platform Capabilities | Example | Notes |
| --- | --- | --- | --- | --- |
| Attribute (A) | The individual items or options being evaluated | Min 10, Max 30 (best practice: <15) | A = 15: we want to test 15 different toothpaste features | More attributes demand more exposures and/or a larger sample to maintain reliable data |
| Attributes per Set (S) | The number of options a panelist sees and evaluates at a time in each round | Min 3, Max 6 (best practice: 4-5) | S = 4: panelists are shown 4 attributes at a time | The larger or more detailed the attributes, the fewer should appear in each set to avoid overwhelming respondents |
| Sets per Respondent | The number of rounds a panelist completes in the Max Diff exercise | Min 3, Max 15 | Sets per Respondent = 5: panelists do 5 rounds of selecting the best and worst options from a group | Use (3 × A) ÷ S to estimate the sets needed to show each attribute at least 3 times; every 5 sets count as one survey question toward total survey length |
| Total Exposures | The total number of attribute appearances a panelist sees across the entire exercise | Formula: Attributes per Set × Sets per Respondent | 4 × 5 = 20 Total Exposures: panelists see 20 attribute appearances in total | More exposures mean more reliable results; ensure your design supports multiple appearances per attribute |
| Exposures per Attribute | The average number of times each attribute is shown to a panelist during the exercise | Formula: Total Exposures ÷ # Attributes | 20 ÷ 15 ≈ 1.33: on average, each attribute is shown 1.33 times | Minimum of 1 exposure per respondent; best practice is at least 3 exposures per attribute |

Max Diff Best Practices & Watchouts

  • Since Max Diff involves advanced programming, there is an additional cost of 1 survey credit/300 completes automatically added to surveys that use Max Diff.

  • In order to ensure a positive panelist experience and reduce respondent fatigue, Numerator recommends no more than 1-2 Max Diff questions per survey.

  • It is best practice to program your Max Diff question so that each attribute is shown multiple times. Numerator recommends at least three times per respondent.

    • The formula (3 × number of Attributes) / Attributes per Set can help estimate how many sets are needed to show each attribute three times.

    • The fewer exposures each attribute receives, the larger your quota group sizes should be, so that you collect enough responses to draw reasonable inferences from the data.

  • Avoid using Max Diff with too few attributes (fewer than 10) or with combined/overlapping concepts; see the guidance under "When to use Max Diff" above.

Last Updated 2/26/2026
