
In signal detection theory, d' (sensitivity) is computed in terms of z-scores, so that the false alarm rate can be subtracted from the hit rate on a common scale: d' = z(hit rate) - z(false alarm rate). Even so, the interpretation of d' is not straightforward: I was not able to find any absolute guidelines on how large d' needs to be to count as "good" sensitivity.
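For concreteness, a minimal sketch of that computation in Python (the hit and false alarm counts below are made-up numbers for illustration):

```python
from scipy.stats import norm

# Made-up counts for illustration
hits, misses = 40, 10                      # signal-present trials
false_alarms, correct_rejections = 12, 38  # signal-absent trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

# d' = z(hit rate) - z(false alarm rate), where z is the inverse normal CDF
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(d_prime)  # ~1.55 for these counts
```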

From eyeballing a bunch of papers, it seems that d' should be above 2, or at least 1.5, but it's not at all clear. The only absolute reference point is of course d' = 0, which corresponds to chance performance.

Also, at the group level, is good-enough (i.e., above-chance) performance necessarily indicated by a significant one-sample t-test against zero?

z8080

1 Answer


There is no standard definition of a "good" value of d'. The index of sensitivity d' can be related to proportion correct under certain conditions (an unbiased observer, equal probability of signal and no-signal trials, and a single observation per trial). Under these conditions, d' values of 1, 1.5, 2, and 2.5 equate to proportions correct of 0.69, 0.77, 0.84, and 0.89, respectively.
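Under those same assumptions the mapping is proportion correct = Φ(d'/2), where Φ is the standard normal CDF; a quick check (Python sketch, scipy assumed) reproduces the values above:

```python
from scipy.stats import norm

# Unbiased observer, equal-probability yes/no: proportion correct = Phi(d'/2)
for d_prime in (1.0, 1.5, 2.0, 2.5):
    pc = norm.cdf(d_prime / 2)
    print(f"d' = {d_prime}: proportion correct = {pc:.2f}")
# prints 0.69, 0.77, 0.84, 0.89
```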

Historically, "threshold" was often defined as a proportion correct of 0.75. This is the midpoint of the psychometric function in a yes/no task, and the point with the steepest slope. With the transformed up-down method (Levitt, 1971) it is much easier to track a proportion correct of 0.707 or 0.794 than one of 0.75. These equate roughly to d' values of 1 and 1.5. Things got messier as people switched to two-interval forced-choice (2-AFC) paradigms.
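Inverting the same mapping, d' = 2·Φ⁻¹(proportion correct) (still assuming the unbiased yes/no case), shows why those track targets come out at roughly d' of 1 and 1.5:

```python
from scipy.stats import norm

# Targets tracked by the 2-down-1-up and 3-down-1-up rules (Levitt, 1971)
for pc in (0.707, 0.794):
    d_prime = 2 * norm.ppf(pc)
    print(f"proportion correct = {pc}: d' = {d_prime:.2f}")
# ~1.09 and ~1.64, i.e. roughly 1 and 1.5
```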

Another way to think about this is that the index of sensitivity d' is essentially the same thing as Cohen's d. Cohen (1988) proposed that values of Cohen's d of 0.2, 0.5, and 0.8 were small, medium, and large effects, respectively. Again, there is no obvious mapping of "good" onto small, medium, and large.
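One way to see the connection (a simulation sketch under the standard equal-variance Gaussian model; the sample size and true d' are arbitrary): the internal responses on noise and signal trials are two unit-variance normal distributions whose standardized mean separation is d', which is exactly Cohen's d for those two distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_d_prime = 1.5
n = 100_000

# Equal-variance Gaussian SDT: noise ~ N(0, 1), signal ~ N(d', 1)
noise = rng.normal(0.0, 1.0, n)
signal = rng.normal(true_d_prime, 1.0, n)

# Cohen's d: mean difference divided by the pooled standard deviation
pooled_sd = np.sqrt((signal.var(ddof=1) + noise.var(ddof=1)) / 2)
cohens_d = (signal.mean() - noise.mean()) / pooled_sd
print(cohens_d)  # ~1.5, i.e. approximately the true d'
```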

StrongBad