Max-Diff, which is also known as best-worst scaling, is a type of experiment used for measuring relative preferences. Typically, respondents are shown a series of lists and are asked, when shown each list, to indicate which of the objects is 'best' and which is 'worst' (or, which is most preferred and which is least preferred).
For example, a max-diff experiment measuring preferences for different colas may ask which is best and which is worst from: Coke, Pepsi, Diet Coke and Coke Zero. Then, in the second question a different list is shown (e.g., Pepsi, Coke Zero, Pepsi Lite and Dr Pepper).
Typical applications of max-diff
Most max-diff experiments do one of the following:
- Measure relative preferences for brands.
- Measure relative preferences for product attributes (e.g., as a part of a Segmentation study).
- Measure relative preferences for different bundles of attributes. This is also known as a Best-Worst Conjoint.
Stages of a Max-Diff study
- Identifying the alternatives to be included in the experiment (i.e., the brands, product attributes or bundles of attributes).
- Creating a Max-Diff Experimental Design, which determines which objects appear in which lists.
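To make the design stage concrete, here is a minimal sketch of one simple way to build a randomized design in which each alternative appears an (approximately) equal number of times across the lists. This is an illustrative assumption-laden toy, not a production design algorithm: real max-diff designs also balance how often each *pair* of alternatives appears together, which this sketch ignores. The function name `maxdiff_design` is hypothetical.

```python
import random

def maxdiff_design(alternatives, n_questions, per_question, seed=0):
    """Toy randomized max-diff design: each question shows `per_question`
    alternatives, and appearance counts per alternative differ by at most one.
    (A real design would also balance pairwise co-occurrence.)
    """
    assert per_question <= len(alternatives)
    rng = random.Random(seed)
    total_slots = n_questions * per_question
    # Repeat the alternatives enough times to fill every slot, then truncate,
    # so each alternative's appearance count differs by at most one.
    copies = -(-total_slots // len(alternatives))  # ceiling division
    pool = (alternatives * copies)[:total_slots]
    while True:
        rng.shuffle(pool)
        questions = [pool[i * per_question:(i + 1) * per_question]
                     for i in range(n_questions)]
        # Reshuffle if any question repeats an alternative.
        if all(len(set(q)) == len(q) for q in questions):
            return questions

design = maxdiff_design(["A", "B", "C", "D", "E", "F"],
                        n_questions=3, per_question=4)
```

With 6 alternatives, 3 questions and 4 alternatives per question, every alternative is shown exactly twice.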
Relationship of Max-Diff to other techniques
Max-diff can be viewed as a way of obtaining Incomplete Rankings and Partial Rankings. If a person is shown a list of Coke, Pepsi, Diet Coke and Coke Zero and indicates that Coke Zero is best and Coke is worst, then the ranking obtained is: Coke Zero > Diet Coke = Pepsi > Coke.
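The partial ranking implied by a single best/worst answer can be sketched in a few lines; the function name `implied_ranking` is hypothetical, and the middle alternatives are returned as a single tied tier because their relative order is unknown:

```python
def implied_ranking(shown, best, worst):
    """Convert one best/worst answer into a partial ranking.

    Returns a list of tiers, ordered from most to least preferred;
    alternatives in the same tier are tied.
    """
    middle = [a for a in shown if a not in (best, worst)]
    return [[best], sorted(middle), [worst]]

# The cola example from the text:
tiers = implied_ranking(["Coke", "Pepsi", "Diet Coke", "Coke Zero"],
                        best="Coke Zero", worst="Coke")
# tiers == [["Coke Zero"], ["Diet Coke", "Pepsi"], ["Coke"]]
```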
Although the "max" and "diff" in "Max-Diff" are short for "Maximum Difference", max-diff as practiced in survey research is unrelated to Maximum Difference Scaling in psychophysical experiments (there is a conceptual relationship, but neither the statistical models nor the software developed for one are readily adaptable to the other).
Max-Diff can be viewed as a form of discrete choice experiment. Additionally, standard discrete choice models, such as the various generalizations of Multinomial Logit, can be used to analyze max-diff experiments, although the resulting parameter estimates are biased.
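Before fitting a choice model, max-diff responses are often summarized with a simple counting analysis: each alternative is scored by the number of times it was picked as best, minus the number of times it was picked as worst, divided by the number of times it was shown. The sketch below illustrates this descriptive summary (not the logit models mentioned above); the function name `counting_scores` and the response format are assumptions for illustration:

```python
from collections import defaultdict

def counting_scores(responses):
    """Counting analysis for max-diff.

    `responses` is a list of (shown, best, worst) tuples.
    Each alternative scores (times best - times worst) / times shown,
    giving a value between -1 (always worst) and +1 (always best).
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for items, b, w in responses:
        best[b] += 1
        worst[w] += 1
        for item in items:
            shown[item] += 1
    return {a: (best[a] - worst[a]) / shown[a] for a in shown}

# The two example questions from the text:
responses = [
    (["Coke", "Pepsi", "Diet Coke", "Coke Zero"], "Coke Zero", "Coke"),
    (["Pepsi", "Coke Zero", "Pepsi Lite", "Dr Pepper"], "Coke Zero", "Pepsi Lite"),
]
scores = counting_scores(responses)
# Coke Zero: shown twice, best twice -> 1.0; Coke: shown once, worst once -> -1.0
```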
- Knoblauch, K. and Maloney, L. T. (2008) "MLDS: Maximum Likelihood Difference Scaling in R". Journal of Statistical Software, Vol. 25, Issue 2, Mar 2008.