Sign test

In statistics, the sign test can be used to test the hypothesis that there is "no difference" between the continuous distributions of two random variables X and Y, in the situation when we can draw paired samples from X and Y. It is a non-parametric test that makes very few assumptions about the nature of the distributions under test. This gives it very general applicability, but it may lack the statistical power of alternatives such as the paired-samples t-test.

Formally, let p = Pr(X > Y), and test the null hypothesis H0: p = 0.50. In other words, the null hypothesis states that, given a random pair of measurements (xi, yi), xi and yi are each equally likely to be the larger of the two.

Method
Independent pairs of sample data are collected from the populations: {(x1, y1), (x2, y2), ..., (xn, yn)}. Pairs for which there is no difference (xi = yi) are omitted, possibly leaving a reduced sample of m pairs.

Then let w be the number of pairs for which yi − xi > 0. Assuming that H0 is true, W follows a binomial distribution, W ~ b(m, 0.5). The "W" is for Frank Wilcoxon, who developed the sign test and, later, the more powerful Wilcoxon signed-rank test.
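The counting step above can be sketched in a few lines of Python. The paired measurements here are illustrative values, not data from any real study:

```python
# Illustrative paired measurements (x_i, y_i) -- hypothetical data.
pairs = [(142, 138), (140, 136), (144, 147), (144, 139), (142, 143),
         (146, 141), (149, 143), (150, 145), (142, 136), (148, 146)]

# Discard tied pairs (x_i == y_i), leaving m pairs.
nonzero = [(x, y) for x, y in pairs if x != y]
m = len(nonzero)

# w = number of pairs for which y_i - x_i > 0.
w = sum(1 for x, y in nonzero if y - x > 0)
print(m, w)  # here m = 10 and w = 2
```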

Significance testing
Since the test statistic is expected to follow a binomial distribution, the standard binomial test is used to calculate significance. For large sample sizes (m > 25), the normal approximation to the binomial distribution can be used.
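Both routes can be sketched with the standard library alone (a statistics package such as SciPy offers equivalent ready-made binomial tests). The values m = 10 and w = 2 are carried over from the hypothetical data above; the continuity-corrected normal approximation is shown only for comparison, since m = 10 is below the m > 25 guideline:

```python
from math import comb, erf, sqrt

def binom_left_tail(w, m, p=0.5):
    # Exact Pr(W <= w) for W ~ Binomial(m, p).
    return sum(comb(m, k) * p**k * (1 - p)**(m - k) for k in range(w + 1))

def normal_approx_left_tail(w, m):
    # Normal approximation with continuity correction:
    # W is approximately N(m/2, m/4) under H0.
    mu, sigma = m / 2, sqrt(m) / 2
    z = (w + 0.5 - mu) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

exact = binom_left_tail(2, 10)        # (C(10,0)+C(10,1)+C(10,2)) / 2**10
approx = normal_approx_left_tail(2, 10)
print(exact, approx)
```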

The left-tail value is computed by Pr(W ≤ w), which is the p-value for the alternative H1: p > 0.50. Since W counts the pairs for which yi exceeds xi, a small w is evidence that the X measurements tend to be higher.

The right-tail value is computed by Pr(W ≥ w), which is the p-value for the alternative H1: p < 0.50. This alternative means that the Y measurements tend to be higher.

For a two-sided alternative H1: p ≠ 0.50, the p-value is twice the smaller tail value.
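Putting the tail computations together, a minimal two-sided sign test might look as follows (again using the hypothetical m = 10, w = 2 from above):

```python
from math import comb

def sign_test_two_sided(w, m):
    # Exact binomial tail probabilities under H0: p = 0.5.
    def cdf(k):  # Pr(W <= k) for W ~ Binomial(m, 0.5)
        return sum(comb(m, j) for j in range(k + 1)) / 2**m

    left = cdf(w)                        # Pr(W <= w)
    right = 1 - cdf(w - 1) if w > 0 else 1.0  # Pr(W >= w)
    # Two-sided p-value: twice the smaller tail, capped at 1.
    return min(1.0, 2 * min(left, right))

p_value = sign_test_two_sided(2, 10)
print(p_value)  # 2 * (56/1024) = 0.109375
```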