The analysis of covariance (ANCOVA) is a general linear model with at least one continuous explanatory variable (covariate) and one or more factors. ANCOVA is a merger of ANOVA and regression for continuous variables: it tests whether certain factors have an effect on the dependent variable after removing the variance for which the quantitative predictors (covariates) account. Including a covariate can increase statistical power, because it accounts for some of the otherwise unexplained variability in the dependent variable.
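As a sketch of why removing covariate-related variance helps, the pure-Python example below (all data invented for illustration) compares the within-group error sum of squares for Y before and after subtracting the part that a covariate X explains; the adjustment used is the standard one derived later in this article:

```python
# Illustration (invented data): removing the variance a covariate explains
# shrinks the error term that an ANOVA-style F test divides by.

groups = {
    "control":   {"x": [2.0, 3.0, 4.0, 5.0], "y": [5.1, 6.9, 9.2, 10.8]},
    "treatment": {"x": [2.0, 3.0, 4.0, 5.0], "y": [7.0, 9.1, 10.9, 13.2]},
}

def sse_within(data, var):
    """Sum of squared deviations of `var` from its own group mean."""
    total = 0.0
    for g in data.values():
        vals = g[var]
        m = sum(vals) / len(vals)
        total += sum((v - m) ** 2 for v in vals)
    return total

def sce_within(data):
    """Within-group sum of cross products of X and Y."""
    total = 0.0
    for g in data.values():
        mx = sum(g["x"]) / len(g["x"])
        my = sum(g["y"]) / len(g["y"])
        total += sum((x - mx) * (y - my) for x, y in zip(g["x"], g["y"]))
    return total

sse_y = sse_within(groups, "y")
sse_x = sse_within(groups, "x")
sce = sce_within(groups)

# Error SS after removing what X explains (the ANCOVA adjustment):
sse_y_adj = sse_y - sce ** 2 / sse_x
print(f"SSE_y = {sse_y:.3f}, adjusted SSE_y = {sse_y_adj:.3f}")
```

Here Y tracks X closely within each group, so almost all of the within-group scatter is absorbed by the covariate and the error term collapses.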

Assumptions

Like any statistical procedure, ANCOVA makes certain assumptions about the data entered into the model, and it will yield valid results only if these assumptions are met, at least approximately. Specifically, ANCOVA, just like ANOVA, assumes that the dependent variable is normally distributed and that the independent variable(s) are orthogonal. In addition, the covariate must be normally distributed and measured with sufficient reliability, and the regression slope relating the covariate to the dependent variable must be the same in every group (homogeneity of regression slopes).

Power Considerations

While including a covariate in an ANOVA generally increases statistical power, because the covariate accounts for some of the variance in the dependent variable and thus increases the proportion of variance explained by the independent variables, adding a covariate also reduces the degrees of freedom for error (see below). Accordingly, adding a covariate which accounts for very little variance in the dependent variable might actually reduce power.

Equations

One-factor ANCOVA analysis

One-factor analysis is appropriate when dealing with two or more populations, k populations. The single factor has k levels, one for each of the k populations. n samples are chosen at random from each population.

Calculating the sum of squared deviates for the independent variable X and the dependent variable Y

The sums of squared deviates (SS) $SST_y$, $SSTr_y$, and $SSE_y$ must be calculated using the following equations for the dependent variable, Y. The SS for the covariate must also be calculated; the two necessary values are $SST_x$ and $SSE_x$.

The total sum of squares determines the variability of all the samples, where $n_T = kn$ represents the total number of samples and $\bar{Y}_T$ the grand mean:

$$SST_y = \sum_{i=1}^{k}\sum_{j=1}^{n}\left(Y_{ij}-\bar{Y}_T\right)^2$$

The sum of squares for treatments determines the variability between the populations (factor levels), where $k$ represents the number of levels and $\bar{Y}_i$ the mean of population $i$:

$$SSTr_y = n\sum_{i=1}^{k}\left(\bar{Y}_i-\bar{Y}_T\right)^2$$

The sum of squares for error determines the variability within each population, where $n$ represents the number of samples within a given population:

$$SSE_y = \sum_{i=1}^{k}\sum_{j=1}^{n}\left(Y_{ij}-\bar{Y}_i\right)^2$$

The total sum of squares is equal to the sum of the sum of squares for treatments and the sum of squares for error:

$$SST_y = SSTr_y + SSE_y$$

$SST_x$ and $SSE_x$ are obtained in the same way, replacing $Y$ with $X$ in the equations above.
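This partition of the total SS can be checked numerically; a minimal sketch with invented data (k = 3 populations, n = 4 samples each):

```python
# Verify SST_y = SSTr_y + SSE_y on invented data: k = 3 groups, n = 4 each.
samples = [
    [11.0, 12.5, 13.0, 11.5],   # population 1
    [14.0, 15.5, 16.0, 14.5],   # population 2
    [9.0, 10.5, 11.0, 9.5],     # population 3
]

k = len(samples)
n = len(samples[0])
n_T = k * n

grand_mean = sum(sum(group) for group in samples) / n_T
group_means = [sum(group) / n for group in samples]

# Total SS: every observation against the grand mean.
sst = sum((y - grand_mean) ** 2 for group in samples for y in group)
# Treatment SS: group means against the grand mean.
sstr = n * sum((m - grand_mean) ** 2 for m in group_means)
# Error SS: observations against their own group mean.
sse = sum((y - m) ** 2 for group, m in zip(samples, group_means) for y in group)

print(sst, sstr, sse)
assert abs(sst - (sstr + sse)) < 1e-9
```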

Calculating the covariance of X and Y

The total sum of cross products determines the covariance of X and Y across all the data samples:

$$SCT = \sum_{i=1}^{k}\sum_{j=1}^{n}\left(X_{ij}-\bar{X}_T\right)\left(Y_{ij}-\bar{Y}_T\right)$$

The error sum of cross products determines the covariance of X and Y within each population:

$$SCE = \sum_{i=1}^{k}\sum_{j=1}^{n}\left(X_{ij}-\bar{X}_i\right)\left(Y_{ij}-\bar{Y}_i\right)$$

Adjusting $SST_y$

The correlations between X and Y are

$$r_T^2 = \frac{SCT^2}{SST_x\,SST_y}, \qquad r_E^2 = \frac{SCE^2}{SSE_x\,SSE_y}$$

The proportion of covariance is subtracted from the dependent-variable $SS_y$ values:

$$SST_{y\,adj} = SST_y\left(1-r_T^2\right) = SST_y - \frac{SCT^2}{SST_x}$$

$$SSE_{y\,adj} = SSE_y\left(1-r_E^2\right) = SSE_y - \frac{SCE^2}{SSE_x}$$

$$SSTr_{y\,adj} = SST_{y\,adj} - SSE_{y\,adj}$$

Adjusting the means of each population k

The mean of each population is adjusted using the pooled within-group regression slope $b_E = SCE/SSE_x$ in the following manner:

$$\bar{Y}_{i\,adj} = \bar{Y}_i - \frac{SCE}{SSE_x}\left(\bar{X}_i - \bar{X}_T\right)$$

Analysis using adjusted sum of squares values

Mean squares for treatments, where $df_{Tr}$ is equal to $k-1$. $df_E$ is one less than in ANOVA, to account for the covariate, and $df_E = n_T - k - 1$:

$$MSTr = \frac{SSTr_{y\,adj}}{k-1}, \qquad MSE = \frac{SSE_{y\,adj}}{n_T - k - 1}$$

The F statistic is

$$F_{(k-1),\,(n_T-k-1)} = \frac{MSTr}{MSE}$$
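Putting all the steps above together, a pure-Python sketch of a one-factor ANCOVA on a balanced design with invented data (the variable names follow the text; for a real analysis, the F value would then be compared against the F distribution with $k-1$ and $n_T-k-1$ degrees of freedom):

```python
# One-factor ANCOVA on invented data: k = 2 groups, n = 5 samples each.
# X is the covariate, Y the dependent variable; names follow the text.
X = [
    [1.0, 2.0, 3.0, 4.0, 5.0],      # group 1
    [1.0, 2.0, 3.0, 4.0, 5.0],      # group 2
]
Y = [
    [3.1, 5.0, 6.8, 9.2, 11.1],     # group 1
    [5.9, 8.1, 9.8, 12.2, 13.9],    # group 2
]

k, n = len(Y), len(Y[0])
n_T = k * n

def mean(v):
    return sum(v) / len(v)

def flatten(groups):
    return [v for g in groups for v in g]

xbar_T, ybar_T = mean(flatten(X)), mean(flatten(Y))
xbar = [mean(g) for g in X]
ybar = [mean(g) for g in Y]

# Total and error sums of squares for Y and X.
sst_y = sum((y - ybar_T) ** 2 for y in flatten(Y))
sst_x = sum((x - xbar_T) ** 2 for x in flatten(X))
sse_y = sum((y - ybar[i]) ** 2 for i in range(k) for y in Y[i])
sse_x = sum((x - xbar[i]) ** 2 for i in range(k) for x in X[i])

# Total and error sums of cross products.
sct = sum((x - xbar_T) * (y - ybar_T)
          for xs, ys in zip(X, Y) for x, y in zip(xs, ys))
sce = sum((x - xbar[i]) * (y - ybar[i])
          for i in range(k) for x, y in zip(X[i], Y[i]))

# Adjust the SS values for the covariate.
sst_y_adj = sst_y - sct ** 2 / sst_x
sse_y_adj = sse_y - sce ** 2 / sse_x
sstr_y_adj = sst_y_adj - sse_y_adj

# Covariate-adjusted group means, using the pooled slope b_E = SCE / SSE_x.
b_E = sce / sse_x
ybar_adj = [ybar[i] - b_E * (xbar[i] - xbar_T) for i in range(k)]

# Mean squares and the F statistic with (k - 1, n_T - k - 1) df.
mstr = sstr_y_adj / (k - 1)
mse = sse_y_adj / (n_T - k - 1)
F = mstr / mse
print(f"F({k - 1}, {n_T - k - 1}) = {F:.2f}")
```

Because the groups here share the same covariate values, the adjusted means equal the raw means, but the covariate still absorbs nearly all of the within-group error, producing a very large F.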


This page uses Creative Commons Licensed content from Wikipedia (view authors).