The margin of error is the most widely misunderstood and misleading concept in
statistics. It's positively frightening to people who actually understand what it means to see
how it's commonly used in the media, in conversation, and sometimes even by other scientists!
The basic idea is very simple. Most of the time when we're doing statistics, we're doing statistics based on a sample - that is, the entire population we're interested in is too difficult to study, so we try to pick a representative subset of it, called a sample. If the subset is truly representative, then the statistics you generate from the sample will match what you'd get from the population as a whole.
But life is never simple. We never have perfectly representative samples; in fact, it's
impossible to select a perfectly representative sample. So we do our best to pick good
samples, and we use probability theory to work out a prediction of how confident we can be that the
statistics from our sample are representative of the entire population. That's basically what the
margin of error represents: how well we think that the selected sample will allow us to
predict things about the entire population.
The way that we compute a margin of error consists of a couple of factors:
- The size of the sample.
- The magnitude of known problems with the sample.
- The extremity of the statistic.
Let's look at each of those in a bit more detail to understand what they mean, and then I'll explain how we use them to compute a margin of error.
The larger a sample is, the more likely it is to be representative. The intuition behind that is
pretty simple: the more individuals that we include in the sample, the less likely we are to
accidentally omit some group or some trend that exists within the population. Take a presidential
election as an example: if you polled 10 randomly selected people in Manhattan, you'd probably get
mostly Democrats and a few Republicans. But Manhattan actually has a fairly sizeable group of people
who vote for independents, like the Green party. If you sampled 100 people, you'd probably get at
least three or four Greens. With the smaller sample size, you'd wind up with statistics that overstated the number of Democrats in Manhattan, because the Green voters, who tend to be very liberal, would probably be "hidden" inside of the Democratic stat. If you sampled 1000 people, you'd be more likely to get a really good picture of Manhattan: you'd get the Democrats and Republicans, the Conservative party, the Working Families party, and so on - all groups that you'd miss with the smaller samples.
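The sample-size intuition is easy to see in a quick simulation sketch. The population proportions below are invented for illustration (they don't reflect actual Manhattan voter registration); the point is just that small samples tend to miss the small parties entirely:

```python
import random

# A made-up population of 1000 voters, dominated by two big parties
# with a few small ones mixed in (proportions are hypothetical).
population = (["democrat"] * 600 + ["republican"] * 300 +
              ["green"] * 40 + ["working families"] * 40 +
              ["conservative"] * 20)

random.seed(42)  # fixed seed so the sketch is reproducible

for n in (10, 100, 1000):
    sample = random.sample(population, n)
    counts = {party: sample.count(party) for party in sorted(set(sample))}
    print(f"sample of {n:>4}: {counts}")
```

Run it a few times with different seeds: the 10-person samples routinely contain no third-party voters at all, while the 1000-person sample (here, the whole population) always finds every group.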
When we start to look at a statistic, we start with an expectation: a very rough sense
of what the outcome is likely to be. (Given a complete unknown, we generally start with the Bayesian
assumption that in the presence of zero knowledge, you can generally assign a 50/50 split as an
initial guess about the division of any population into exactly two categories.) When we work with a
sample, we tend to be less confident about how representative that sample is the farther the
measured statistic varies from the expected value.
Finally, sometimes we know that the mechanism we use for our sample is imperfect - that
is, we know that our sample contains an unavoidable bias. In that case, we expand the margin of error
to try to represent the reduced certainty caused by the known bias. For example, in elections, we
know that in general, there are certain groups of people who simply are less likely
to participate in exit polls. An exit poll simply cannot generate an unbiased sample, because the outcome is partially determined by who is willing to stop and take the poll. Another example is in polls involving things like sexuality, where because of social factors, people are less likely to admit to certain things. If you're trying to measure something like "What percentage of people have had extramarital affairs?", you know that many people are not going to tell the truth - so your result will include an expected bias.
Given those, how do we compute the margin of error? It depends a bit on how you're measuring. The
easiest (and most common) case is a percentage based statistic, so that's what we'll stick
with for this article. The margin of error is computed from the standard error, which is in
turn derived from an approximation of the standard deviation. Given a sample of size P,
and a measured statistic of X (where X is in decimal form - so 50% means X = 0.5), the standard error E is:

E = ((X * (1 - X)) / P)^(1/2)
The way that equation is generated is beyond the scope of this - but it's built on a couple of
reasonable assumptions: the big one being that the statistic being measured has a binomial
distribution. (For now, think of a binomial distribution as being something where randomly selected
samples will generate results for a statistic that form a bell curve around the value that you would
get if you could measure the statistic for the entire population.) If you make that assumption, you wind up with an equation in terms of the variance of the population (the variance is the standard deviation squared) - and then, with a couple of simplifications that
can be shown to not significantly alter the value of the standard error, you wind up with the equation above.
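As a sketch, the standard error is trivial to compute directly. Here `n` plays the role of P, the sample size:

```python
import math

def standard_error(x, n):
    """Standard error for a proportion x (in decimal form) measured
    on a sample of size n: E = sqrt(x * (1 - x) / n)."""
    return math.sqrt(x * (1 - x) / n)

# A statistic of 50% measured on a 1000-person sample:
print(standard_error(0.5, 1000))  # ≈ 0.0158, i.e. about 1.6 percentage points
```

Notice how the formula captures the sample-size intuition from earlier: quadrupling the sample size halves the standard error.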
To get from the standard error to the margin of error, we need to pick a confidence
interval. A confidence interval is a percentage representing how certain we are that the
actual statistic lies within the measured statistic +/- the margin of error. In general,
most statistics are computed using a confidence interval of 95%. You get a confidence interval
of 95% when you use roughly twice the standard error as your margin of error (the exact multiplier is 1.96, but 2 is close enough in practice). So the margin of error for most polls is 2E, with a confidence of 95%. Using 2.58E as your margin of error gives you a confidence interval of 99%; using just 1E gives you a confidence interval of 68%.
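Those multipliers (1, roughly 2, 2.58) come from the normal distribution, and Python's standard library can derive them directly. A sketch of the whole computation, again using `n` for the sample size:

```python
import math
from statistics import NormalDist

def margin_of_error(x, n, confidence=0.95):
    """Margin of error for a proportion x measured on a sample of size n,
    under the normal approximation discussed above."""
    se = math.sqrt(x * (1 - x) / n)                 # the standard error E
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # multiplier: ~1.96 for 95%
    return z * se

# A statistic of 50% measured on a 1000-person sample, at the three
# confidence levels mentioned above:
for c in (0.68, 0.95, 0.99):
    print(f"{c:.0%} confidence: +/- {margin_of_error(0.5, 1000, c):.4f}")
```

Note that `inv_cdf` gives the exact 95% multiplier of about 1.96 rather than the rounded 2 used in most polling write-ups.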
There are a bunch of errors in how people generally use the margin of error:
- The most glaring error is not citing the confidence interval. You cannot
know what a margin of error means if the confidence interval isn't specified. I don't think
I can remember the last time I saw a CI quoted outside of a scientific paper.
- Many people, especially journalists, believe that the margin of error includes
all possible sources of error. It most emphatically does not - it only
specifies the magnitude of the error introduced by random sampling.
In a scientific experiment, experimental errors and measurement errors always affect the
outcome of the experiment - but the margin of error does not include those
factors - only the sampling error. In a poll, the wording of a question and the way in
which it's asked have a huge impact - and that is not part of the margin
of error. (For example, if you wanted to measure support for concealed carry laws for guns,
you could ask people "Do you believe that people should be allowed to carry concealed
weapons anywhere they go, including government buildings, schools, and churches?", you'd
get one result. If you asked "Do you believe that citizens have a right to carry a weapon
to protect themselves and their families from criminals?", you'd get a very different
answer - the phrasing of the first question is likely to bias people against
saying yes, by deliberately invoking the image of guns in a school or a church. The
phrasing of the second question is likely to generate far more "Yes" answers than the
first, because it invokes the image of self-protection from rampaging bad-guys.)
- People frequently believe that the margin of error is a measure of the quality
of a statistic - that is, that a well-designed poll will have a smaller margin of error
than a poorly-designed poll. It doesn't - the MoE only represents sampling errors!
A great poll with a sample size of 100 will virtually always have a considerably
larger MoE than a terrible poll with a sample size of 1000. If you want to know
the quality of a poll, you need to know more information about it than just the margin of error;
If you want to gauge the relative quality of two different polls, you need to know more
than just the margin of error. In either case, you really need to know the sample size,
how the sample was collected, and most importantly exactly what they measure.
To give another political example, there are a number of different polls of the president's
approval rating taken on a very regular basis. These polls vary quite
drastically - for example, in this week's polls, the percentage of people who approve of the
president ranges from 30% to 39%, with margins of error in the 3% range.
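That spread illustrates one genuinely useful application of the margin of error: checking whether two polls disagree by more than sampling error alone can explain. A minimal sketch:

```python
def intervals_overlap(stat_a, stat_b, moe_a, moe_b):
    """True if the confidence intervals (statistic +/- margin of error)
    of two polls overlap - i.e., sampling error alone could plausibly
    account for the difference between them."""
    return abs(stat_a - stat_b) <= moe_a + moe_b

# The approval-rating example: 30% vs. 39%, each with a ±3% margin of error.
print(intervals_overlap(0.30, 0.39, 0.03, 0.03))  # → False
```

Here the intervals are 27%-33% and 36%-42%: they don't overlap, so sampling error can't explain the gap - the difference must come from the other error sources (question wording, collection method, sample bias) that the margin of error does not cover.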