In statistics, regression analysis is a technique which examines the relation of a dependent variable (response variable) to specified independent variables (explanatory variables). Regression analysis can be used as a descriptive method of data analysis (such as curve fitting) without relying on any assumptions about underlying processes generating the data.[1]

When paired with assumptions in the form of a statistical model, regression can be used for prediction (including forecasting of time-series data), inference, hypothesis testing, and modeling of causal relationships. These uses of regression rely heavily on the model assumptions being satisfied. Regression analysis has been criticized as being misused for these purposes in many cases where the appropriate assumptions cannot be verified to hold.[1][2] One factor contributing to the misuse of regression is that it can take considerably more skill to critique a model than to fit a model.[3]

The key relationship in a regression is the regression equation. A regression equation contains regression parameters whose values are estimated using data. The estimated parameters measure the relationship between the dependent variable and each of the independent variables. When a regression model is used, the dependent variable is modeled as a random variable because of either uncertainty as to its value or inherent variability. The data are assumed to be a sample from a probability distribution, which is usually assumed to be a normal distribution.

History of regression

The term "regression" was used in the nineteenth century to describe a biological phenomenon, namely that the progeny of exceptional individuals tend on average to be less exceptional than their parents and more like their more distant ancestors. Francis Galton, a cousin of Charles Darwin, studied this phenomenon and applied the slightly misleading term "regression towards mediocrity" to it. For Galton, regression had only this biological meaning, but his work[4] was later extended by Udny Yule and Karl Pearson to a more general statistical context.[5]

Simple linear regression

[Figure: Illustration of linear regression on a data set (red points).]

The general form of a simple linear regression is

$$ y_i = \alpha + \beta x_i + \varepsilon_i $$

where $\alpha$ is the intercept, $\beta$ is the slope, and $\varepsilon_i$ is the error term, which picks up the unpredictable part of the response variable $y_i$. The error term is usually posited to be normally distributed. The $x_i$'s and $y_i$'s are the data quantities from the sample or population in question, and $\alpha$ and $\beta$ are the unknown parameters ("constants") to be estimated from the data. Estimates for the values of $\alpha$ and $\beta$ can be derived by the method of ordinary least squares. The method is called "least squares" because the estimates of $\alpha$ and $\beta$ minimize the sum of squared error estimates for the given data set. The estimates are often denoted by $\hat{\alpha}$ and $\hat{\beta}$ or by the corresponding Roman letters $a$ and $b$. It can be shown (see Draper and Smith, 1998 for details) that the least squares estimates are given by

$$ \hat{\beta} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2} $$

and

$$ \hat{\alpha} = \bar{y} - \hat{\beta}\,\bar{x}, $$

where $\bar{x}$ is the mean (average) of the $x$ values and $\bar{y}$ is the mean of the $y$ values.
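These closed-form estimates are easy to evaluate directly. The following is a minimal Python sketch (added here for illustration; it is not part of the original article), assuming only NumPy is available. The height/weight figures are the data used in the worked example later in this article.

    import numpy as np

    def simple_ols(x, y):
        """Closed-form least-squares estimates for the model y_i = alpha + beta * x_i + eps_i."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        x_bar, y_bar = x.mean(), y.mean()
        beta_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
        alpha_hat = y_bar - beta_hat * x_bar
        return alpha_hat, beta_hat

    heights = [58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72]
    weights = [115, 117, 120, 123, 126, 129, 132, 135, 139, 142, 146, 150, 154, 159, 164]
    print(simple_ols(heights, weights))   # estimated intercept and slope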

Generalizing simple linear regression

The simple model above can be generalized in different ways.

  • The number of predictors can be increased from one to several (see the main article on linear regression).
  • The relationship between the knowns (the $x$'s and $y$'s) and the unknown parameters can be nonlinear (see the main article on non-linear regression).
  • The response variable may be non-continuous. For binary (zero or one) variables, there are the probit and logit models (a minimal fitting sketch for the binary case appears after this list). The multivariate probit model makes it possible to estimate jointly the relationship between several binary dependent variables and some independent variables. For categorical variables with more than two values there is the multinomial logit model. For ordinal variables with more than two values, there are the ordered logit and ordered probit models. An alternative to such procedures is linear regression based on polychoric or polyserial correlations between the categorical variables. Such procedures differ in the assumptions made about the distribution of the variables in the population. If the variable is positive with low values and represents the repetition of the occurrence of an event, count models like Poisson regression or the negative binomial model may be used.
  • A different loss function, weighting the distances between observed and predicted values, can be minimized. If absolute deviations are used, the result is quantile regression, which models conditional quantiles of the response such as the median.
  • The form of the right hand side can be determined from the data. See Nonparametric regression. These approaches require a large number of observations, as the data are used to build the model structure as well as estimate the model parameters. They are usually computationally intensive.
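For the binary-response case mentioned above, the logit model can be fitted by Newton-Raphson iterations on the log-likelihood. The sketch below is illustrative only (it is not from the original article) and assumes NumPy; in practice a packaged routine such as a GLM fitter would normally be used instead.

    import numpy as np

    def fit_logit(X, y, n_iter=25):
        """Fit P(y = 1 | x) = 1 / (1 + exp(-x @ beta)) by Newton-Raphson."""
        X = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])  # add intercept column
        y = np.asarray(y, dtype=float)
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ beta))               # fitted probabilities
            grad = X.T @ (y - p)                              # gradient of the log-likelihood
            hess = X.T @ (X * (p * (1.0 - p))[:, None])       # observed information matrix
            beta = beta + np.linalg.solve(hess, grad)         # Newton step
        return beta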

Regression diagnostics

Once a regression model has been constructed, it is important to confirm the goodness of fit of the model and the statistical significance of the estimated parameters. Commonly used checks of goodness of fit include R-squared, analyses of the pattern of residuals and construction of an ANOVA table. Statistical significance is checked by an F-test of the overall fit, followed by t-tests of individual parameters. Interpretations of these diagnostics rest heavily on the model assumptions. Although examination of the residuals can be used to invalidate a model, the results of a t-test or F-test are meaningless unless the modeling assumptions are satisfied.
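As an illustration (not part of the original article), $R^2$ and the overall F-statistic can be computed from the fitted residuals. A minimal sketch, assuming NumPy and a design matrix X whose first column is the constant term:

    import numpy as np

    def ols_fit_diagnostics(X, y):
        """R-squared and overall F-statistic for the linear model y = X @ beta + eps."""
        n, p = X.shape                                    # p counts the intercept column
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta_hat
        rss = np.sum(resid ** 2)                          # residual sum of squares
        tss = np.sum((y - y.mean()) ** 2)                 # total sum of squares
        r_squared = 1.0 - rss / tss
        f_stat = ((tss - rss) / (p - 1)) / (rss / (n - p))
        return r_squared, f_stat

Plotting the residuals against the fitted values and against each predictor is the usual residual-pattern check mentioned above.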

Estimation of model parameters

The parameters of a regression model can be estimated in many ways; the methods most widely used in practice are ordinary least squares and maximum likelihood.

For a model with normally distributed errors the method of least squares and the method of maximum likelihood coincide (see Gauss-Markov theorem).
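This coincidence can be checked numerically. The sketch below (illustrative only, assuming NumPy and SciPy are available) compares the closed-form least-squares fit of a simulated simple regression with the fit obtained by minimizing the Gaussian negative log-likelihood:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 50)
    y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)   # simulated data

    # Closed-form ordinary least squares
    X = np.column_stack([np.ones_like(x), x])
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Maximum likelihood under Gaussian errors: minimize the negative log-likelihood
    def neg_log_lik(params):
        a, b, log_sigma = params
        sigma = np.exp(log_sigma)
        resid = y - (a + b * x)
        return y.size * np.log(sigma) + 0.5 * np.sum(resid ** 2) / sigma ** 2

    beta_ml = minimize(neg_log_lik, x0=np.zeros(3)).x[:2]
    print(beta_ols, beta_ml)   # the two intercept/slope estimates agree up to optimizer tolerance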

Interpolation and extrapolation

Regression models predict a value of the dependent variable $Y$ given known values of the independent variables $X$. If the prediction is made within the range of values of the independent variables used to construct the model, this is known as interpolation. Prediction outside this range of the data is known as extrapolation, and it is considerably riskier.
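A small illustrative sketch (not from the original article, assuming NumPy) that flags when a prediction from a fitted simple regression is an extrapolation:

    import numpy as np

    def predict(alpha_hat, beta_hat, x_new, x_train):
        """Predict from a fitted simple regression, warning when x_new is outside the data range."""
        x_train = np.asarray(x_train, dtype=float)
        if not (x_train.min() <= x_new <= x_train.max()):
            print("warning: extrapolating outside the range of the training data")
        return alpha_hat + beta_hat * x_new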

Assumptions underpinning regression

Regression analysis depends on certain assumptions:

  1. The predictors must be linearly independent, i.e. it must not be possible to express any predictor as a linear combination of the others (see multicollinearity; a simple numerical check is sketched after this list).
  2. The error terms must be normally distributed and independent.
  3. The variance of the error terms must be constant.
  4. The sample must be representative of the population if the model is to be used for inference or prediction.
  5. The distribution of the dependent variable must have approximately equal variability across values of the predictors; this is the assumption of homoscedasticity.
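As referenced in item 1, a common numerical check for near-linear dependence among predictors is the variance inflation factor (VIF). The following is a minimal sketch (illustrative, not part of the original article), assuming NumPy and a predictor matrix X without the constant column:

    import numpy as np

    def variance_inflation_factors(X):
        """VIF for each column of X: 1 / (1 - R^2) from regressing it on the remaining columns."""
        X = np.asarray(X, dtype=float)
        n, k = X.shape
        vifs = []
        for j in range(k):
            target = X[:, j]
            others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(others, target, rcond=None)
            resid = target - others @ beta
            r2 = 1.0 - np.sum(resid ** 2) / np.sum((target - target.mean()) ** 2)
            vifs.append(1.0 / (1.0 - r2))
        return vifs

Large values (a common rule of thumb is values well above 10) indicate that a predictor is close to a linear combination of the others.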

Examples

To illustrate the various goals of regression, we give an example.

Prediction of future observations

The following data set gives the average heights and weights for American women aged 30-39 (source: The World Almanac and Book of Facts, 1975).

Height (in) 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
Weight (lb) 115 117 120 123 126 129 132 135 139 142 146 150 154 159 164

We would like to see how the weight of these women depends on their height. We are therefore looking for a function $f$ such that $Y = f(X)$, where $Y$ is the weight of the women and $X$ their height. Intuitively, we can guess that if the women's proportions are constant and their density too, then the weight of the women must depend on the cube of their height.

[Figure: A plot of the data set confirms this supposition.]

$X$ will denote the vector containing all the measured heights ($x_1, \dots, x_{15}$) and $Y$ the vector containing all measured weights ($y_1, \dots, y_{15}$). We can suppose that the deviations of the weights from the model are independent from each other and have constant variance, which means the Gauss-Markov assumptions hold. We can therefore use the least-squares estimator, i.e. we are looking for coefficients $\theta_1$, $\theta_2$ and $\theta_3$ satisfying as well as possible (in the sense of the least-squares estimator) the equation

$$ Y = \theta_1 + \theta_2 X + \theta_3 X^3 + \varepsilon. $$

Geometrically, what we will be doing is an orthogonal projection of $Y$ on the subspace generated by the variables $1$, $X$ and $X^3$. The design matrix $\mathbf{X}$ is constructed simply by putting a first column of 1's (the constant term in the model), a column with the original values (the $X$ in the model) and a third column with these values cubed ($X^3$). The realization of this matrix (i.e. for the data at hand) can be written:

$$ \mathbf{X} = \begin{pmatrix}
1 & 58 & 195112 \\
1 & 59 & 205379 \\
1 & 60 & 216000 \\
1 & 61 & 226981 \\
1 & 62 & 238328 \\
1 & 63 & 250047 \\
1 & 64 & 262144 \\
1 & 65 & 274625 \\
1 & 66 & 287496 \\
1 & 67 & 300763 \\
1 & 68 & 314432 \\
1 & 69 & 328509 \\
1 & 70 & 343000 \\
1 & 71 & 357911 \\
1 & 72 & 373248
\end{pmatrix} $$

The matrix $\mathbf{X}^\top \mathbf{X}$ (sometimes called the "information matrix" or "dispersion matrix") is then computed from the data, and the least-squares estimate of the coefficient vector is

$$ \hat{\theta} = (\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top Y, $$

hence the fitted regression function $\hat{y}(x) = \hat{\theta}_1 + \hat{\theta}_2\, x + \hat{\theta}_3\, x^3$.
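The computation just described can be reproduced numerically. The following minimal sketch (added for illustration, not part of the original article) assumes NumPy, builds the design matrix above from the data, and evaluates the least-squares estimate:

    import numpy as np

    heights = np.array([58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72], dtype=float)
    weights = np.array([115, 117, 120, 123, 126, 129, 132, 135, 139, 142, 146, 150, 154, 159, 164], dtype=float)

    # Design matrix with columns 1, x and x^3, as described in the text
    X = np.column_stack([np.ones_like(heights), heights, heights ** 3])

    # Least-squares estimate theta_hat = (X^T X)^{-1} X^T Y
    theta_hat = np.linalg.solve(X.T @ X, X.T @ weights)
    print(theta_hat)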

[Figure: A plot of the fitted function shows that it lies quite close to the data set.]

The confidence intervals for the coefficients are computed using

$$ \hat{\theta}_j \pm t_{n-p;\,1-\alpha/2}\; \hat{\sigma} \sqrt{\left[(\mathbf{X}^\top \mathbf{X})^{-1}\right]_{jj}}, $$

with

$$ \hat{\sigma}^2 = \frac{1}{n-p}\, \bigl\| Y - \mathbf{X}\hat{\theta} \bigr\|^2, $$

where $n = 15$ is the number of observations, $p = 3$ is the number of estimated coefficients, and $t_{n-p;\,1-\alpha/2}$ is the appropriate quantile of Student's t-distribution with $n - p = 12$ degrees of freedom. Taking $\alpha = 0.05$ gives the 95% confidence intervals for $\theta_1$, $\theta_2$ and $\theta_3$.
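Continuing the numerical sketch above (again illustrative, reusing X, theta_hat and weights defined there, and assuming SciPy for the t-quantile), the 95% intervals can be evaluated as:

    from scipy import stats

    n, p = X.shape
    resid = weights - X @ theta_hat
    sigma_hat = np.sqrt(np.sum(resid ** 2) / (n - p))   # residual standard error
    cov = sigma_hat ** 2 * np.linalg.inv(X.T @ X)       # estimated covariance of theta_hat
    t_crit = stats.t.ppf(0.975, df=n - p)               # two-sided 95% critical value
    for j in range(p):
        half_width = t_crit * np.sqrt(cov[j, j])
        print(f"theta_{j + 1}: {theta_hat[j]:.4g} +/- {half_width:.4g}")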

See also

Notes

  1. Richard A. Berk, Regression Analysis: A Constructive Critique, Sage Publications (2004).
  2. David A. Freedman, Statistical Models: Theory and Practice, Cambridge University Press (2005)
  3. R. Dennis Cook and Sanford Weisberg, "Criticism and Influence Analysis in Regression", Sociological Methodology, Vol. 13 (1982), pp. 313-361.
  4. Francis Galton. "Typical laws of heredity", Nature 15 (1877), 492-495, 512-514, 532-533. (Galton uses the term "reversion" in this paper, which discusses the size of peas.); Francis Galton. Presidential address, Section H, Anthropology. (1885) (Galton uses the term "regression" in this paper, which discusses the height of humans.)
  5. G. Udny Yule. "On the Theory of Correlation", J. Royal Statist. Soc., 1897, p. 812-54. Karl Pearson, G. U. Yule, Norman Blanchard, and Alice Lee. "The Law of Ancestral Heredity", Biometrika (1903). In the work of Yule and Pearson, the joint distribution of the response and explanatory variables is assumed to be Gaussian. This assumption was weakened by R.A. Fisher in his works of 1922 and 1925 (R.A. Fisher, "The goodness of fit of regression formulae, and the distribution of regression coefficients", J. Royal Statist. Soc., 85, 597-612 from 1922 and Statistical Methods for Research Workers from 1925). Fisher assumed that the conditional distribution of the response variable is Gaussian, but the joint distribution need not be. In this respect, Fisher's assumption is closer to Gauss's formulation of 1821.

References

  • Audi, R., Ed. (1996). "curve fitting problem," The Cambridge Dictionary of Philosophy. Cambridge, Cambridge University Press. pp.172-173.
  • William H. Kruskal and Judith M. Tanur, ed. (1978), "Linear Hypotheses," International Encyclopedia of Statistics. Free Press, v. 1,
Evan J. Williams, "I. Regression," pp. 523-41.
Julian C. Stanley, "II. Analysis of Variance," pp. 541-554.
  • Lindley, D.V. (1987). "Regression and correlation analysis," New Palgrave: A Dictionary of Economics, v. 4, pp. 120-23.
  • Birkes, David and Yadolah Dodge, Alternative Methods of Regression. ISBN 0-471-56881-3
  • Chatfield, C. (1993) "Calculating Interval Forecasts," Journal of Business and Economic Statistics, 11. pp. 121-135.
  • Draper, N.R. and Smith, H. (1998). Applied Regression Analysis. Wiley Series in Probability and Statistics.
  • Fox, J. (1997). Applied Regression Analysis, Linear Models and Related Methods. Sage
  • Hardle, W., Applied Nonparametric Regression (1990), ISBN 0-521-42950-1
  • Meade, N. and T. Islam (1995) "Prediction Intervals for Growth Curve Forecasts," Journal of Forecasting, 14, pp. 413-430.
  • Munro, Barbara Hazard (2005) "Statistical Methods for Health Care Research" Lippincott Williams & Wilkins, 5th ed.
  • Gujarati, Basic Econometrics, 4th edition
  • Sykes, A.O. "An Introduction to Regression Analysis" (Inaugural Coase Lecture).
  • S. Kotsiantis, D. Kanellopoulos, P. Pintelas, Local Additive Regression of Decision Stumps, Lecture Notes in Artificial Intelligence, Springer-Verlag, Vol. 3955, SETN 2006, pp. 148 – 157, 2006
  • S. Kotsiantis, P. Pintelas, Selective Averaging of Regression Models, Annals of Mathematics, Computing & TeleInformatics, Vol 1, No 3, 2005, pp. 66-75

Software

  • All major statistical software packages (e.g. SAS, SPSS, Minitab, R, or Stata) perform the common types of regression analysis.
  • The Predictive Model Markup Language (PMML) allows for the expression of several types of regression analysis. Currently, tools such as SAS and SPSS already allow for the exporting of PMML models, while other tools such as ADAPA can import and execute PMML in batch or real-time.
  • Simpler regression can be done in spreadsheets like MS Excel or OpenOffice.org Calc.
  • Experts can run complex types of regression using special programming languages like Mathematica, R, Stata or Matlab.
  • There are a number of software programs that perform specialized forms of regression.
  • There are a number of web sites that allow online linear and nonlinear regression.

External links




