Counts Analysis of Max-Diff Data

From Survey Analysis

Counts analysis involves counting up the number of times that objects are chosen as best and subtracting the number of times that they are chosen as worst.

Example

Consider the following experimental design, in which there are six blocks (sets) and six alternatives; each alternative appears three times, and each block presents respondents with three alternatives.

           block
alternative 1 2 3 4 5 6
          A 0 1 0 1 1 0
          B 0 0 1 0 1 1
          C 1 0 0 1 1 0
          D 1 1 1 0 0 0
          E 1 1 0 0 0 1
          F 0 0 1 1 0 1

Consider the following choices:

Block Best Worst
1 C E
2 A E
3 B F
4 C F
5 C B
6 B F

We can count up the number of times an alternative is chosen as best and the number of times it is chosen as worst, and the difference between these is the count. For this experiment, alternative C is seen to be most preferred, alternatives B and A are equal second, D is fourth, E fifth and F sixth (i.e., C > B = A > D > E > F).

Alternative Best Worst Count
A 1 0 1
B 2 1 1
C 3 0 3
D 0 0 0
E 0 2 -2
F 0 3 -3
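The counting above can be sketched in R (the language used later in this article); the best and worst vectors below simply transcribe the choices table, and the variable names are illustrative.

```r
# Transcribe the Best and Worst choices from the six blocks
best  <- c("C", "A", "B", "C", "C", "B")
worst <- c("E", "E", "F", "F", "B", "F")
alternatives <- LETTERS[1:6]
# Tally how often each alternative is chosen as Best and as Worst
best.counts  <- table(factor(best,  levels = alternatives))
worst.counts <- table(factor(worst, levels = alternatives))
counts <- best.counts - worst.counts  # the count: Best minus Worst
counts
```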

Comments about counts analysis

  1. The highest and lowest possible counts (3 and -3 in this example) occur only when a respondent is entirely consistent. Consequently, respondents who provide more consistent data end up with a wider range of count scores, and so it is not technically valid to compare counts between respondents (as respondents differ in their degree of variation). In practice this issue is usually ignored; there is no good solution to the problem, which exists even with the more valid methods for analyzing max-diff data and also plagues Choice Modeling.
  2. Counts analysis is not valid, because it ignores the experimental design. Although this does not seem to be widely understood in academic and commercial marketing research, the invalidity of counts analysis is a standard result taught in experimental design courses in statistics.[note 1] The problem is clearest in the relationship between alternatives A and B. The counts analysis suggests that these two alternatives are equally appealing. However, in the only block in which they appear together (block 5), alternative B was chosen as worst, so the data shows that A is preferred to B; the counts analysis fails to recognize this aspect of the data. The more complicated methods for analyzing max-diff data resolve this problem. Nevertheless, counts analysis is a useful way of inspecting data prior to applying the more complicated methods.
  3. Discarding the Worst data and looking only at the Best data can further reduce the validity of a counts analysis, as it ignores still more of the detail of the experimental design. When only the Best data is examined, alternative B appears superior to A (two Best choices versus one), but this is not the case when the Worst data is included. (This conclusion does not generalize to other forms of max-diff analysis, where the rationale for focusing only on the Best scores can be that there is more interest in what is appealing than in what is unappealing.)
  4. The counts are, at best, ordinal measures of preference. Consider the relationship between the counts for alternatives A and C: that these differ by 2 tells us only that C is preferred; there is no way of determining the extent of the preference from the counts analysis (or from any other analysis of the counts).

Unbalanced designs

Where an experimental design is not balanced (i.e., where some alternatives appear more frequently than others), counts analysis as described above will produce biased estimates, because alternatives that appear more often have more opportunities to be selected as Best or Worst. This can be resolved by dividing the number of times an alternative is selected as Best by the number of times it appears, and doing the same for Worst.
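As a sketch, the adjusted counts can be computed from the Best and Worst tallies and the number of appearances. The tallies below are from the balanced example above, so every alternative appears three times; in an unbalanced design the shown vector would vary across alternatives.

```r
# Best and Worst tallies from the example, plus appearance counts
best  <- c(A = 1, B = 2, C = 3, D = 0, E = 0, F = 0)
worst <- c(A = 0, B = 1, C = 0, D = 0, E = 2, F = 3)
shown <- c(A = 3, B = 3, C = 3, D = 3, E = 3, F = 3)  # would differ in an unbalanced design
# Adjusted count: Best rate minus Worst rate per appearance
adjusted <- best / shown - worst / shown
adjusted
```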

Counts with multiple respondents

Counts analysis can be performed for an individual respondent or for groups of respondents. When performed for groups, the computation is identical to that for individual respondents (i.e., the numbers of times alternatives are selected as Best and Worst are summed). Interpretation is usually facilitated by dividing the numbers of times alternatives are selected as Best and Worst by the number of times that they appear (i.e., as when analyzing an unbalanced design).

Other counts formulas

A number of other formulas are occasionally used. For example, some researchers compute the ratio of best to worst scores. Although such transformations may provide some advantage in some situations, they do not address the fundamental problem of counts analysis (i.e., that it ignores the experimental design).
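For illustration, here is a sketch of one such transformation, the ratio of Best to Worst selections, using the tallies from the example above. The 0.5 offset is a hypothetical adjustment to avoid division by zero, not a formula given in this article.

```r
# Best and Worst tallies from the example
best  <- c(A = 1, B = 2, C = 3, D = 0, E = 0, F = 0)
worst <- c(A = 0, B = 1, C = 0, D = 0, E = 2, F = 3)
# Ratio of Best to Worst selections, offset so zero tallies do not divide
bw.ratio <- (best + 0.5) / (worst + 0.5)
round(bw.ratio, 2)
```

Note that such a ratio is still computed from the marginal tallies alone, so it inherits the same blindness to the experimental design as the simple count.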

Counts analysis in statistical software

A useful way of setting up max-diff results in statistical programs is to represent each alternative as a variable, with missing-value codes used where alternatives are not shown, a 1 denoting an alternative chosen as best, a -1 one chosen as worst, and a 0 an alternative shown but chosen as neither (i.e., the Stacked Layout). For example, in R:

     Alternatives
Block  A  B  C  D  E  F
    1 NA NA  1  0 -1 NA
    2  1 NA NA  0 -1 NA
    3 NA  1 NA  0 NA -1
    4  0 NA  1 NA NA -1
    5  0 -1  1 NA NA NA
    6 NA  1 NA NA  0 -1


When data is structured in this way the counts can be computed using sums. For example, the following code enters this data into R and computes the counts:

# 1 = Best, -1 = Worst, 0 = shown but not chosen, NA = not shown
mdData <- matrix(c(NA,NA,1,0,-1,NA, 1,NA,NA,0,-1,NA, NA,1,NA,0,NA,-1,
                   0,NA,1,NA,NA,-1, 0,-1,1,NA,NA,NA, NA,1,NA,NA,0,-1),
                 nrow = 6, byrow = TRUE,
                 dimnames = list(Block = 1:6, Alternatives = LETTERS[1:6]))
mdData
apply(mdData, 2, sum, na.rm = TRUE)  # counts: Best minus Worst

Similarly, for unbalanced designs, mean is used instead of sum, giving the counts per appearance:

apply(mdData,2,mean, na.rm=TRUE)

Aggregate counts are computed for multiple respondents by Stacking the data.
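Stacking can be sketched as follows, assuming a hypothetical second respondent whose data is laid out in the same block-by-alternative matrix (here, for brevity, with choices identical to the first respondent's):

```r
# First respondent's data in the Stacked Layout (as above)
resp1 <- matrix(c(NA,NA,1,0,-1,NA, 1,NA,NA,0,-1,NA, NA,1,NA,0,NA,-1,
                  0,NA,1,NA,NA,-1, 0,-1,1,NA,NA,NA, NA,1,NA,NA,0,-1),
                nrow = 6, byrow = TRUE,
                dimnames = list(Block = 1:6, Alternatives = LETTERS[1:6]))
resp2 <- resp1  # hypothetical second respondent with identical choices
stacked <- rbind(resp1, resp2)  # stack respondents' blocks on top of each other
# Aggregate counts per appearance across all respondents
apply(stacked, 2, mean, na.rm = TRUE)
```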

Also known as

Counting Analysis

Notes

  1. In statistics, the proof is taught in the context of incomplete block designs, which is the type of experimental design used in max-diff.

A more up-to-date version of this content is on www.displayr.com.