By Evan Miller
DRAFT: May 19, 2013
Being able to apply statistics is like having a secret superpower.
Where most people see averages, you see confidence intervals.
When someone says “7 is greater than 5,” you declare that they're really the same.
In a cacophony of noise, you hear a cry for help.
Unfortunately, not enough programmers have this superpower. That's a shame, because the application of statistics can almost always enhance the display and interpretation of data.
As my modest contribution to developer-kind, I've collected together the statistical formulas that I find to be most useful; this page presents them all in one place, a sort of statistical cheat-sheet for the practicing programmer.
Most of these formulas can be found in Wikipedia, but others are buried in journal articles or in professors' web pages. They are all classical (not Bayesian), and to motivate them I have added concise commentary. I've also added links and references, so that even if you're unfamiliar with the underlying concepts, you can go out and learn more. Wearing a red cape is optional.
Send suggestions and corrections to [email protected]
One of the first programming lessons in any language is to compute an average. But rarely does anyone stop to ask: what does the average actually tell us about the underlying data?
The standard deviation is a single number that reflects how spread out the data actually is. It should be reported alongside the average (unless the user will be confused). The sample standard deviation is:

$$ s = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2} $$

Where:

$x_i$ = the value of the $i$-th observation
$\bar{x}$ = the average of the observations
$N$ = the number of observations
Reference: Standard deviation (Wikipedia)
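A minimal sketch in Python with NumPy, using made-up numbers:

```python
import numpy as np

observations = np.array([12.0, 15.0, 9.0, 14.0, 11.0])  # hypothetical data

mean = observations.mean()
# ddof=1 gives the sample standard deviation (divide by N-1, not N)
std_dev = observations.std(ddof=1)

print(f"average = {mean:.2f}, standard deviation = {std_dev:.2f}")
```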
From a statistical point of view, the "average" is really just an estimate of an underlying population mean. That estimate has uncertainty that is summarized by the standard error:

$$ SE = \frac{s}{\sqrt{N}} $$

Where $s$ is the sample standard deviation and $N$ is the number of observations.
Reference: Standard error (Wikipedia)
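The same quantity in Python, computed by hand and cross-checked against SciPy (data made up):

```python
import numpy as np
from scipy import stats

observations = np.array([12.0, 15.0, 9.0, 14.0, 11.0])  # hypothetical data

# standard error of the mean: s / sqrt(N)
se = observations.std(ddof=1) / np.sqrt(len(observations))
# scipy.stats.sem computes the same thing directly
assert np.isclose(se, stats.sem(observations))
print(f"standard error = {se:.3f}")
```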
A confidence interval reflects the set of statistical hypotheses that won't be rejected at a given significance level. So the confidence interval around the mean reflects all possible values of the mean that can't be rejected by the data. It is a multiple of the standard error added to and subtracted from the mean:

$$ \bar{x} \pm z \cdot \frac{s}{\sqrt{N}} $$

Where:

$\bar{x}$ = the average of the observations
$s$ = the sample standard deviation
$N$ = the number of observations
$z$ = the $(1-\alpha/2)$ quantile of a normal distribution (e.g. 1.96 for a 95% interval, $\alpha = 0.05$)
Reference: Confidence interval (Wikipedia)
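A quick sketch in Python (made-up data; for very small samples you would swap the normal quantile for a t quantile with $N-1$ degrees of freedom):

```python
import numpy as np
from scipy import stats

observations = np.array([12.0, 15.0, 9.0, 14.0, 11.0])  # hypothetical data
alpha = 0.05  # 95% confidence interval

mean = observations.mean()
se = stats.sem(observations)
z = stats.norm.ppf(1 - alpha / 2)  # ~1.96 for alpha = 0.05

lower, upper = mean - z * se, mean + z * se
print(f"95% CI for the mean: ({lower:.2f}, {upper:.2f})")
```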
A two-sample t-test can tell whether two groups of observations differ in their mean.
The test statistic is given by:

$$ t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_1^2/N_1 + s_2^2/N_2}} $$

Where $\bar{x}_i$, $s_i^2$, and $N_i$ are the mean, sample variance, and size of group $i$. The hypothesis of equal means is rejected if $|t|$ exceeds the $(1-\alpha/2)$ quantile of a t distribution with degrees of freedom equal to:

$$ \nu = \frac{\left(s_1^2/N_1 + s_2^2/N_2\right)^2}{\dfrac{(s_1^2/N_1)^2}{N_1 - 1} + \dfrac{(s_2^2/N_2)^2}{N_2 - 1}} $$
You can see a demonstration of these concepts in Evan's Awesome Two-Sample T-Test.
Reference: Student's t-test (Wikipedia)
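In Python, SciPy's unequal-variance (Welch) option matches the statistic and degrees of freedom above; the observations are invented:

```python
from scipy import stats

group_a = [23.1, 19.8, 25.0, 22.4, 20.7, 24.3]  # hypothetical observations
group_b = [26.5, 24.9, 27.1, 25.8, 23.2, 26.0]

# equal_var=False requests Welch's t-test, which uses the
# Welch-Satterthwaite degrees of freedom shown above
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# reject the hypothesis of equal means at the 5% level if p_value < 0.05
```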
It's common to report the relative proportions of binary outcomes or categorical data, but in general these are meaningless without confidence intervals and tests of independence.
A Bernoulli parameter is the proportion underlying a binary-outcome event (for example, the percent of the time a coin comes up heads). The confidence interval, known as the Wilson score interval, is given by:

$$ \frac{\hat{p} + \dfrac{z^2}{2n} \pm z\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n} + \dfrac{z^2}{4n^2}}}{1 + \dfrac{z^2}{n}} $$

Where:

$\hat{p}$ = the observed proportion of successes
$n$ = the number of trials
$z$ = the $(1-\alpha/2)$ quantile of a normal distribution
This formula can also be used as a sorting criterion.
Reference: Binomial proportion confidence interval (Wikipedia)
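A direct translation of the Wilson interval into Python (the counts are made up); the lower bound is the sorting criterion mentioned above:

```python
import math
from scipy import stats

def wilson_interval(successes, trials, alpha=0.05):
    """Wilson score interval for a Bernoulli proportion."""
    z = stats.norm.ppf(1 - alpha / 2)
    p_hat = successes / trials
    center = p_hat + z * z / (2 * trials)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / trials + z * z / (4 * trials * trials))
    denom = 1 + z * z / trials
    return (center - margin) / denom, (center + margin) / denom

# hypothetical example: 38 heads out of 50 flips
print(wilson_interval(38, 50))
```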
If you have more than two categories, a multinomial confidence interval supplies upper and lower confidence limits on all of the category proportions at once. The formula is nearly identical to the preceding one; see the reference below for the details.
Reference: Confidence Intervals for Multinomial Proportions
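If you'd rather not code the formula yourself, statsmodels ships simultaneous multinomial intervals. The sketch below uses its Goodman-method option with made-up counts; whether that matches the exact formula in the reference is an assumption on my part:

```python
import numpy as np
from statsmodels.stats.proportion import multinomial_proportions_confint

counts = np.array([45, 30, 15, 10])  # hypothetical counts for four categories

# simultaneous 95% confidence intervals for all category proportions
intervals = multinomial_proportions_confint(counts, alpha=0.05, method='goodman')
for count, (low, high) in zip(counts, intervals):
    print(f"{count:3d} observations: proportion in ({low:.3f}, {high:.3f})")
```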
Pearson's chi-squared test can detect whether the distribution of row counts seems to differ across columns (or vice versa). It is useful when comparing two or more sets of category proportions.
The test statistic, called $X^2$, is computed as:

$$ X^2 = \sum_{i=1}^{m} \sum_{j=1}^{n} \frac{(O_{i,j} - E_{i,j})^2}{E_{i,j}} $$

Where:

$O_{i,j}$ = the observed count in row $i$, column $j$
$E_{i,j}$ = the expected count in row $i$, column $j$
$m$ = the number of rows
$n$ = the number of columns

The expected count is given by:

$$ E_{i,j} = \frac{(\text{total of row } i) \times (\text{total of column } j)}{\text{total number of observations}} $$

A statistical dependence exists if $X^2$ is greater than the $(1-\alpha)$ quantile of a $\chi^2$ distribution with $(m-1)\times(n-1)$ degrees of freedom.
You can see a 2x2 demonstration of these concepts in Evan's Awesome Chi-Squared Test.
Reference: Pearson's chi-squared test (Wikipedia)
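SciPy computes the statistic, the expected counts, and the degrees of freedom in one call; the table below is invented:

```python
import numpy as np
from scipy import stats

# hypothetical 2x3 table: rows are outcomes, columns are groups
observed = np.array([[30, 45, 25],
                     [70, 55, 75]])

chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"X^2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# conclude a statistical dependence at the 5% level if p_value < 0.05
```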
If the incoming events are independent, their counts are well-described by a Poisson distribution. A Poisson distribution takes a parameter λ , which is the distribution's mean — that is, the average arrival rate of events per unit time.
The standard deviation of Poisson data usually doesn't need to be explicitly calculated. Instead it can be inferred from the Poisson parameter:

$$ \sigma = \sqrt{\lambda} $$
This fact can be used to read an unlabeled sales chart, for example.
Reference: Poisson distribution (Wikipedia)
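A two-line check of that fact in Python, with a made-up arrival rate:

```python
from scipy import stats

lam = 25.0  # hypothetical average arrival rate (events per day)

dist = stats.poisson(lam)
print(dist.mean(), dist.std())  # 25.0 and sqrt(25) = 5.0
```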
The confidence interval around the Poisson parameter represents the set of arrival rates that can't be rejected by the data. It can be inferred from a single data point of $c$ events observed over $t$ time periods with the following formula:

$$ \left( \frac{\chi^2_{\alpha/2,\;2c}}{2t}, \; \frac{\chi^2_{1-\alpha/2,\;2c+2}}{2t} \right) $$

Where:

$c$ = the number of observed events
$t$ = the number of time periods
$\chi^2_{q,\,d}$ = the $q$ quantile of a $\chi^2$ distribution with $d$ degrees of freedom

(The lower limit is taken to be zero when $c = 0$.)
Reference: Confidence Intervals for the Mean of a Poisson Distribution
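The same interval in Python, using SciPy's chi-squared quantiles and an invented example:

```python
from scipy import stats

def poisson_rate_interval(events, time_periods, alpha=0.05):
    """Exact (chi-squared based) confidence interval for a Poisson arrival rate."""
    lower = 0.0
    if events > 0:
        lower = stats.chi2.ppf(alpha / 2, 2 * events) / 2 / time_periods
    upper = stats.chi2.ppf(1 - alpha / 2, 2 * events + 2) / 2 / time_periods
    return lower, upper

# hypothetical example: 12 signups observed over 4 days
print(poisson_rate_interval(12, 4))
```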
Please never do this: report, in bright red text, that one count is greater than another when the difference is statistically meaningless. From a statistical point of view, 5 events is indistinguishable from 7 events. Before declaring a winner, it's best to perform a test of the two Poisson means.
The p-value formula for the test of equal arrival rates is somewhat involved; see the referenced paper below for the details.
You can see a demonstration of these concepts in Evan's Awesome Poisson Means Test.
Reference: A more powerful test for comparing two Poisson means (PDF)
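The referenced paper develops a more powerful "E-test." As a simpler, hedged sketch of the same question, here is the classical conditional test, which treats one count as binomial given the total under the hypothesis of equal rates (scipy.stats.binomtest needs SciPy 1.7 or newer):

```python
from scipy import stats

# hypothetical counts: 5 events over 1 week vs. 7 events over 1 week
c1, t1 = 5, 1.0
c2, t2 = 7, 1.0

# Conditional test (C-test): given c1 + c2 total events, c1 is binomial with
# success probability t1 / (t1 + t2) if the two arrival rates are equal.
# This is not the paper's E-test, which is more powerful.
result = stats.binomtest(c1, n=c1 + c2, p=t1 / (t1 + t2))
print(f"p = {result.pvalue:.3f}")  # a large p-value: 5 vs. 7 is not a real difference
```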
If you want to test whether several groups of observations come from the same (unknown) distribution, or whether a single group of observations comes from a known distribution, you'll need a Kolmogorov-Smirnov test. A K-S test checks the entire distribution for equality, not just the distribution mean.
The simplest version is a one-sample K-S test, which compares a sample of $n$ points having an observed cumulative distribution function $F$ to a reference distribution having a c.d.f. of $G$. The test statistic is:

$$ D_n = \sup_x |F(x) - G(x)| $$

In plain English, $D_n$ is the absolute value of the largest difference in the two c.d.f.s for any value of $x$.

The critical value of $D_n$ at significance level $\alpha$ is given by $K_\alpha/\sqrt{n}$, where $K_\alpha$ is the value of $x$ that solves:

$$ \alpha = 2 \sum_{i=1}^{\infty} (-1)^{i-1} e^{-2 i^2 x^2} $$

The critical value must be computed iteratively, e.g. by Newton's method. If only the p-value is needed, it can be computed directly by evaluating the right-hand side at $x = \sqrt{n}\,D_n$.
Reference: Kolmogorov-Smirnov Test (Wikipedia)
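In Python, SciPy handles the statistic and p-value; the sample below is simulated data, and the reference distribution is a standard normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=200)  # hypothetical data

# one-sample K-S test against a normal c.d.f. with mean 0 and sd 1
d_stat, p_value = stats.kstest(sample, 'norm', args=(0.0, 1.0))
print(f"D = {d_stat:.3f}, p = {p_value:.3f}")
```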
The two-sample version is similar, except the test statistic is given by:

$$ D_{n_1,n_2} = \sup_x |F_1(x) - F_2(x)| $$

Where $F_1$ and $F_2$ are the empirical c.d.f.s of the two samples, having $n_1$ and $n_2$ observations, respectively. The critical value of the test statistic is $K_\alpha \big/ \sqrt{n_1 n_2/(n_1+n_2)}$, with the same value of $K_\alpha$ as above.
Reference: Kolmogorov-Smirnov Test (Wikipedia)
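The two-sample test is one call in SciPy; both samples here are simulated for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample_1 = rng.normal(0.0, 1.0, size=150)   # hypothetical samples
sample_2 = rng.normal(0.3, 1.2, size=200)

d_stat, p_value = stats.ks_2samp(sample_1, sample_2)
print(f"D = {d_stat:.3f}, p = {p_value:.4f}")
```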
A $k$-sample extension of Kolmogorov-Smirnov was described by J. Kiefer in a 1959 paper. The test statistic $T$ compares each sample's empirical c.d.f. to $\bar{F}$, the c.d.f. of the combined samples. The critical value of $T$ is $a^2$, where $a$ solves an equation given in the paper.

To compute the critical value, this equation must also be solved iteratively. When $k=2$, it reduces to the two-sample Kolmogorov-Smirnov test. The case of $k=4$ can also be reduced to a simpler form, but for other values of $k$, the equation cannot be reduced.
Reference: K-sample analogues of the Kolmogorov-Smirnov and Cramer-v. Mises tests (JSTOR)
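SciPy doesn't implement Kiefer's statistic. As a practical stand-in for the same "do all these samples share one distribution?" question, here is the k-sample Anderson-Darling test (a different test than the one described above), with simulated samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
samples = [rng.normal(0.0, 1.0, size=100),   # hypothetical samples
           rng.normal(0.0, 1.0, size=120),
           rng.normal(0.4, 1.0, size=80)]

# significance_level is an approximate p-value (capped to a reporting range)
result = stats.anderson_ksamp(samples)
print(result.statistic, result.significance_level)
```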
Trend lines (or best-fit lines) can be used to establish a relationship between two variables and predict future values.
The slope of a best-fit (least squares) line is:

$$ \hat{\beta} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2} $$

Where:

$x_i$ = the value of the predictor variable for the $i$-th observation
$y_i$ = the value of the response variable for the $i$-th observation
$\bar{x}$, $\bar{y}$ = the averages of the two variables
$n$ = the number of observations

The standard error around the estimated slope is:

$$ SE_{\hat{\beta}} = \sqrt{\frac{\frac{1}{n-2}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (x_i - \bar{x})^2}} $$

The confidence interval is constructed as:

$$ \hat{\beta} \pm t^* \cdot SE_{\hat{\beta}} $$

Where:

$\hat{y}_i = \bar{y} + \hat{\beta}(x_i - \bar{x})$ = the fitted value for the $i$-th observation
$t^*$ = the $(1-\alpha/2)$ quantile of a t distribution with $n-2$ degrees of freedom
Reference: Simple linear regression (Wikipedia)
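SciPy's linregress returns the slope and its standard error, from which the confidence interval above follows directly; the x and y values here are invented:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])    # hypothetical predictor
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])   # hypothetical response

fit = stats.linregress(x, y)

# 95% confidence interval for the slope: slope +/- t* x standard error,
# with n - 2 degrees of freedom
t_star = stats.t.ppf(0.975, len(x) - 2)
lower, upper = fit.slope - t_star * fit.stderr, fit.slope + t_star * fit.stderr
print(f"slope = {fit.slope:.3f}, 95% CI = ({lower:.3f}, {upper:.3f})")
```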
If you own a Mac, my desktop statistics software Wizard can help you analyze more data in less time and communicate discoveries visually without spending days struggling with pointless command syntax. Check it out!