
I know that the IQ statistic is designed to have a mean of 100, and that you'll certainly never find someone with an IQ below 1 or above 300, but that tells us very little about the variance or the general shape of the distribution. So why does every graph of IQ scores that I've seen appear to be a truncated normal distribution? Is it some property of the test design, some property of the test subjects, or some deep theorem in statistics that I've overlooked?

J. Mini
  • Can you provide examples? –  Mar 28 '20 at 22:27
  • @baca Every Google Images result for "IQ graph" that I've ever seen. – J. Mini Mar 29 '20 at 01:26
  • I don't understand. The first images I get when I Google "IQ curves" are these: https://www.iqtestforfree.net/images/iq_bell_curve.gif https://www.iqcomparisonsite.com/Images/NormalCurveSmall.gif https://external-preview.redd.it/DP8F2nQkyVpfK0Xwi_2o2qYtHz2jHsM_1WzY9yV0C4M.jpg?auto=webp&s=3da07185508128d988b459a9f532e5800122147b Those are perfectly normal curves (pun intended). –  Mar 29 '20 at 04:12
  • @baca They're obviously not a true normal distribution: nobody has a negative IQ score and nobody has an IQ over 10,000. My question is why they're normal distributions in the first place. Why they're truncated is obvious. – J. Mini Mar 29 '20 at 14:20
  • Although not a duplicate, the answer to this question probably also answers yours. – Arnon Weinberg Mar 30 '20 at 16:49
  • @ArnonWeinberg That's very close. It tells us how to get the needed scores, but it doesn't go the extra step of explaining the normality. – J. Mini Mar 30 '20 at 17:35
  • The content of your comments is not obvious to me from the question. I recommend editing the question to ask what you actually want to know - eg, "Why are IQ scores normalized?" or perhaps "What evidence is there that the underlying construct of IQ is normally distributed?" Clearer question = clearer answer. – Arnon Weinberg Mar 30 '20 at 18:29

1 Answer


IQ isn't normal; it's normalized to have a mean of 100 and a standard deviation of 15, usually via a percentile method.

The reason IQ looks roughly normal is that intelligence (however it is defined) is a complex trait. Complex traits are predicted to have a roughly normal distribution by the central limit theorem: a sum of many individual factors (genetic and environmental alike) will tend to be normally distributed across a population, even if the underlying factors themselves are not normal.
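As a rough illustration of that claim (not from the original answer; the factors here are entirely made up), here is a short Python simulation: each simulated individual is a sum of many skewed, non-normal contributions, yet the population of sums comes out approximately bell-shaped.

```python
import random

random.seed(42)

def trait_score(n_factors=200):
    # Each factor is skewed and non-normal: a coin flip plus a
    # squared uniform draw, standing in for one genetic or
    # environmental contribution.
    return sum(random.choice([0, 1]) + random.random() ** 2
               for _ in range(n_factors))

# One score per simulated individual.
population = [trait_score() for _ in range(10_000)]

mean = sum(population) / len(population)
sd = (sum((x - mean) ** 2 for x in population) / len(population)) ** 0.5
print(f"mean = {mean:.1f}, sd = {sd:.1f}")
# A histogram of `population` is close to a bell curve even though no
# individual factor is normally distributed.
```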

There is no concrete physical measure "IQ": it isn't measuring a real-world physical property the way you measure mass or length. Instead, you use tests intended to capture some measure of that abstract trait, and then normalize individuals based on the group statistics. Any actual test administered to measure IQ has a minimum and a maximum score: at worst you get every question wrong, at best you get every question right.
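A minimal sketch of such a percentile-based norming step, assuming a norming sample of raw test scores (the sample values and the helper name `norm_to_iq` are hypothetical, not taken from any real IQ test): each raw score is mapped to its percentile in the sample, and then to the normal quantile with mean 100 and SD 15.

```python
from statistics import NormalDist  # Python 3.8+

def norm_to_iq(raw_scores):
    # Map each raw score to its midrank percentile in the norming
    # sample, then to the normal quantile with mean 100 and SD 15.
    n = len(raw_scores)
    ranked = sorted(raw_scores)
    iq = {}
    for score in set(raw_scores):
        # Midrank keeps the percentile strictly inside (0, 1).
        pct = (ranked.index(score) + ranked.count(score) / 2) / n
        iq[score] = 100 + 15 * NormalDist().inv_cdf(pct)
    return iq

# Hypothetical norming sample of raw test scores.
table = norm_to_iq([12, 18, 18, 22, 25, 25, 25, 30, 34, 41])
for raw, iq_score in sorted(table.items()):
    print(raw, round(iq_score))
```

By construction, the mapped scores land on the familiar mean-100, SD-15 scale regardless of the shape of the raw-score distribution.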

Bryan Krause
  • "Complex traits are predicted to have a roughly normal distribution based on the central limit theorem" - which version? The classical one that I know and love seems insufficient. Have I missed a lemma? – J. Mini Mar 30 '20 at 17:05
  • @J.Mini Quoting the first sentence of the linked Wikipedia article: "In probability theory, the central limit theorem (CLT) establishes that, in some situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution (informally a bell curve) even if the original variables themselves are not normally distributed." See also https://aidanlyon.com/normal_distributions.pdf – Bryan Krause Mar 30 '20 at 17:09
  • I'm unconvinced. If the raw test score were the average of some independent factors, then we might have a shot at using the CLT, but sums don't behave as nicely. You could normalise, but I doubt that we know the population parameters. – J. Mini Mar 30 '20 at 17:27
  • @J.Mini The factors being summed/averaged are not the things on the test; it's the underlying construct that IQ is meant to measure. It's the sum of all the genes with a role in cognition, all the books read to you as a child, the diet you eat, the amount of lead you absorb, and so on. – Bryan Krause Mar 30 '20 at 17:37
  • So we assume that the raw test score is a sum of independent random variables that we do not know the distributions of. But as it's a sum of independent random variables, it must have some normal distribution with unknown parameters. We then fiddle around with that a bit to get the nice IQ graphs that we all know and love. Is that right? – J. Mini Mar 30 '20 at 17:51
  • @J.Mini Not the raw test score, the underlying thing that is being measured. But yes, IQ distributions get fiddled with: that's the normalization procedure. When you design an IQ test, you don't yet know how the scores on the test will map to IQ scores. You design the test, give it to lots of people, and use the distribution you observe to decide what score is "100" (the median or mean score) and what score is "85" (one standard deviation below the mean, or an equivalent percentile). – Bryan Krause Mar 30 '20 at 18:00
  • I'd also say that it isn't that it must be this way. If there were some single dominant factor in IQ, the distribution might not be normal at all: it could be bimodal, for example. But we do observe that it is approximately normal, and we should not be surprised to find this. – Bryan Krause Mar 30 '20 at 18:02
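To make that last point concrete, here is a hypothetical Python sketch (not part of the original exchange): adding one large binary factor to many small ones produces a clearly bimodal population rather than a bell curve.

```python
import random

random.seed(0)

def trait_with_dominant_factor():
    small = sum(random.random() for _ in range(50))   # many minor factors
    dominant = 40 * random.choice([0, 1])             # one big binary factor
    return small + dominant

population = [trait_with_dominant_factor() for _ in range(10_000)]

# Crude text histogram: two separated humps instead of one bell.
lo, hi = min(population), max(population)
bins = [0] * 20
for x in population:
    bins[min(int((x - lo) / (hi - lo) * 20), 19)] += 1
for i, count in enumerate(bins):
    print(f"{lo + (hi - lo) * i / 20:6.1f} {'#' * (count // 50)}")
```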