Regression analysis
In statistics, regression analysis includes many techniques for modeling and analyzing several variables, when the
focus is on the relationship between a dependent variable and one or more independent variables. More specifically,
regression analysis helps one understand how the typical value of the dependent variable changes when any one of
the independent variables is varied, while the other independent variables are held fixed. Most commonly, regression
analysis estimates the conditional expectation of the dependent variable given the independent variables — that is,
the average value of the dependent variable when the independent variables are fixed. Less commonly, the focus is
on a quantile, or other location parameter of the conditional distribution of the dependent variable given the
independent variables. In all cases, the estimation target is a function of the independent variables called the
regression function. In regression analysis, it is also of interest to characterize the variation of the dependent
variable around the regression function, which can be described by a probability distribution.
Regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field
of machine learning. Regression analysis is also used to understand which among the independent variables are
related to the dependent variable, and to explore the forms of these relationships. In restricted circumstances,
regression analysis can be used to infer causal relationships between the independent and dependent variables.
However, this can lead to illusions or false relationships, so caution is advisable;[1] see correlation does not imply
causation.
A large body of techniques for carrying out regression analysis has been developed. Familiar methods such as linear
regression and ordinary least squares regression are parametric, in that the regression function is defined in terms of
a finite number of unknown parameters that are estimated from the data. Nonparametric regression refers to
techniques that allow the regression function to lie in a specified set of functions, which may be infinite-dimensional.
The performance of regression analysis methods in practice depends on the form of the data-generating process, and
how it relates to the regression approach being used. Since the true form of the data-generating process is generally
not known, regression analysis often depends to some extent on making assumptions about this process. These
assumptions are sometimes testable if a large amount of data is available. Regression models for prediction are often
useful even when the assumptions are moderately violated, although they may not perform optimally. However, in
many applications, especially with small effects or questions of causality based on observational data, regression
methods can give misleading results.[2][3]
History
The earliest form of regression was the method of least squares (French: méthode des moindres carrés), which was
published by Legendre in 1805,[4] and by Gauss in 1809.[5] Legendre and Gauss both applied the method to the
problem of determining, from astronomical observations, the orbits of bodies about the Sun. Gauss published a
further development of the theory of least squares in 1821,[6] including a version of the Gauss–Markov theorem.
The term "regression" was coined by Francis Galton in the nineteenth century to describe a biological phenomenon.
The phenomenon was that the heights of descendants of tall ancestors tend to regress down towards a normal
average (a phenomenon also known as regression toward the mean).[7][8] For Galton, regression had only this
biological meaning,[9][10] but his work was later extended by Udny Yule and Karl Pearson to a more general
statistical context.[11][12] In the work of Yule and Pearson, the joint distribution of the response and explanatory
variables is assumed to be Gaussian. This assumption was weakened by R.A. Fisher in his works of 1922 and
1925.[13][14][15] Fisher assumed that the conditional distribution of the response variable is Gaussian, but the joint
distribution need not be. In this respect, Fisher's assumption is closer to Gauss's formulation of 1821.
In the 1950s and 1960s, economists used electromechanical desk calculators to calculate regressions. Before 1970, it
sometimes took up to 24 hours to receive the result from one regression.[16]
Regression methods continue to be an area of active research. In recent decades, new methods have been developed
for robust regression, regression involving correlated responses such as time series and growth curves, regression in
which the predictor or response variables are curves, images, graphs, or other complex data objects, regression
methods accommodating various types of missing data, nonparametric regression, Bayesian methods for regression,
regression in which the predictor variables are measured with error, regression with more predictor variables than
observations, and causal inference with regression.
Regression models
Regression models involve the following variables:
• The unknown parameters, denoted as β, which may represent a scalar or a vector.
• The independent variables, X.
• The dependent variable, Y.
In various fields of application, different terminologies are used in place of dependent and independent variables.
A regression model relates Y to a function of X and β.
The approximation is usually formalized as E(Y | X) = f(X, β). To carry out regression analysis, the form of the
function f must be specified. Sometimes the form of this function is based on knowledge about the relationship
between Y and X that does not rely on the data. If no such knowledge is available, a flexible or convenient form for f
is chosen.
Assume now that the vector of unknown parameters β is of length k. In order to perform a regression analysis the
user must provide information about the dependent variable Y:
• If N data points of the form (Y,X) are observed, where N < k, most classical approaches to regression analysis
cannot be performed: since the system of equations defining the regression model is underdetermined, there is not
enough data to recover β.
• If exactly N = k data points are observed, and the function f is linear, the equations Y = f(X, β) can be solved
exactly rather than approximately. This reduces to solving a set of N equations with N unknowns (the elements of
β), which has a unique solution as long as the X are linearly independent. If f is nonlinear, a solution may not
exist, or many solutions may exist.
• The most common situation is where N > k data points are observed. In this case, there is enough information in
the data to estimate a unique value for β that best fits the data in some sense, and the regression model when
applied to the data can be viewed as an overdetermined system in β.
In the last case, the regression analysis provides the tools for:
1. Finding a solution for the unknown parameters β that will, for example, minimize the distance between the
measured and predicted values of the dependent variable Y (as in the method of least squares).
2. Under certain statistical assumptions, the regression analysis uses the surplus of information to provide statistical
information about the unknown parameters β and predicted values of the dependent variable Y.
For example, consider a regression model with three unknown parameters. If the experimenter performs
measurements at three different values of the independent variable vector X, then regression analysis provides a
unique set of estimates for the three unknown parameters in β.
In the case of general linear regression, the above statement is equivalent to the requirement that the matrix XᵀX is
invertible.
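The three situations can be illustrated numerically. The following is a minimal Python/NumPy sketch (the data,
parameter values, and function names are invented for illustration, not taken from the text) contrasting the exactly
determined case N = k, the overdetermined case N > k, and the underdetermined case N < k, including the XᵀX
invertibility requirement:

# Minimal sketch: how the ratio of data points N to parameters k
# determines whether beta in y = X @ beta can be recovered.
# All data here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
beta_true = np.array([2.0, -1.0, 0.5])   # k = 3 unknown parameters
k = beta_true.size

def fit_least_squares(X, y):
    """Solve the normal equations (X^T X) beta = X^T y when X^T X is invertible."""
    XtX = X.T @ X
    if np.linalg.matrix_rank(XtX) < XtX.shape[0]:
        raise np.linalg.LinAlgError("X^T X is singular: beta is not identifiable")
    return np.linalg.solve(XtX, X.T @ y)

# N = k: the system y = X beta can be solved exactly (zero residuals).
X_exact = rng.normal(size=(k, k))
y_exact = X_exact @ beta_true
print(fit_least_squares(X_exact, y_exact))   # recovers beta_true (up to rounding)

# N > k: overdetermined; least squares finds the best-fitting beta.
X_over = rng.normal(size=(20, k))
y_over = X_over @ beta_true + rng.normal(scale=0.1, size=20)
print(fit_least_squares(X_over, y_over))     # close to beta_true

# N < k: underdetermined; X^T X is singular and beta cannot be recovered.
X_under = rng.normal(size=(2, k))
try:
    fit_least_squares(X_under, X_under @ beta_true)
except np.linalg.LinAlgError as err:
    print("N < k:", err)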
Statistical assumptions
When the number of measurements, N, is larger than the number of unknown parameters, k, and the measurement
errors ε_i are normally distributed, then the excess of information contained in (N − k) measurements is used to make
statistical predictions about the unknown parameters. This excess of information is referred to as the degrees of
freedom of the regression.
Underlying assumptions
Classical assumptions for regression analysis include:
• The sample is representative of the population about which inference or prediction is to be made.
• The error is a random variable with a mean of zero conditional on the explanatory variables.
• The independent variables are measured with no error. (Note: If this is not so, modeling may be done instead
using errors-in-variables model techniques).
• The predictors are linearly independent, i.e. it is not possible to express any predictor as a linear combination of
the others. See Multicollinearity.
• The errors are uncorrelated, that is, the variance–covariance matrix of the errors is diagonal and each non-zero
element is the variance of the error.
• The variance of the error is constant across observations (homoscedasticity). (Note: If not, weighted least squares
or other methods might instead be used; a short sketch follows below.)
These are sufficient conditions for the least-squares estimator to possess desirable properties; in particular, these
assumptions imply that the parameter estimates will be unbiased, consistent, and efficient in the class of linear
unbiased estimators. Actual data rarely satisfy all of these assumptions; that is, the method is used even though the
assumptions are not exactly true. Variation from the assumptions can sometimes be used as a measure of how far
the model is from being useful. Many of these assumptions may be relaxed in more advanced treatments. Reports of
statistical analyses usually include analyses of tests on the sample data and methodology for the fit and usefulness
of the model.
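As an illustration of the homoscedasticity point above, the following minimal Python/NumPy sketch (the data and
the assumed error variances are invented for illustration) applies weighted least squares, β̂ = (XᵀWX)⁻¹XᵀWy with
weights equal to inverse error variances, when the error spread grows with the predictor:

# Weighted least squares: beta = (X^T W X)^{-1} X^T W y, where W is a
# diagonal matrix of weights (inverse error variances). The data and the
# "known" variances below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n = 40
x = np.linspace(1, 10, n)
X = np.column_stack([np.ones(n), x])   # intercept + one predictor
sigma = 0.1 * x                        # error spread grows with x (heteroscedastic)
y = 1.0 + 2.0 * x + rng.normal(scale=sigma)

W = np.diag(1.0 / sigma**2)            # weight = inverse variance
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta_wls)                        # close to (1.0, 2.0)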
Assumptions include the geometrical support of the variables (Cressie, 1996). Independent and dependent variables
often refer to values measured at point locations. There may be spatial trends and spatial autocorrelation in the
variables that violate statistical assumptions of regression. Geographically weighted regression is one technique to
deal with such data (Fotheringham et al., 2002). Also, variables may include values aggregated by areas. With
aggregated data the Modifiable Areal Unit Problem can cause extreme variation in regression parameters
(Fotheringham and Wong, 1991). When analyzing data aggregated by political boundaries, postal codes, or census
areas, results may be very different with a different choice of units.
Linear regression
In linear regression, the model specification is that the dependent variable, y_i, is a linear combination of the
parameters (but need not be linear in the independent variables). For example, in simple linear regression for
modeling n data points there is one independent variable, x_i, and two parameters, β_0 and β_1:

    straight line: y_i = β_0 + β_1 x_i + ε_i,   i = 1, …, n.

(In multiple linear regression, there are several independent variables or functions of independent variables.)
Adding a term in x_i² to the preceding regression gives:

    parabola: y_i = β_0 + β_1 x_i + β_2 x_i² + ε_i,   i = 1, …, n.

This is still linear regression; although the expression on the right hand side is quadratic in the independent variable
x_i, it is linear in the parameters β_0, β_1 and β_2.
In both cases, ε_i is an error term and the subscript i indexes a particular observation. Given a random sample from
the population, we estimate the population parameters and obtain the sample linear regression model:

    ŷ_i = β̂_0 + β̂_1 x_i.

The residual, e_i = y_i − ŷ_i, is the difference between the value of the dependent variable predicted by the model,
ŷ_i, and the true value of the dependent variable, y_i. One method of estimation is ordinary least squares. This
method obtains parameter estimates that minimize the sum of squared residuals, SSE,[17][18] also sometimes denoted
RSS:

    SSE = Σ e_i²   (summed over i = 1, …, n).

Minimization of this function results in a set of normal equations, a set of simultaneous linear equations in the
parameters, which are solved to yield the parameter estimators, β̂_0 and β̂_1. In the case of simple regression, the
formulas for the least squares estimates are

    β̂_1 = Σ (x_i − x̄)(y_i − ȳ) / Σ (x_i − x̄)²   and   β̂_0 = ȳ − β̂_1 x̄,

where x̄ is the mean of the x values and ȳ is the mean of the y values. Under the assumption that the population
error term has a constant variance, that variance is estimated by σ̂² = SSE / (n − 2), the mean square error (MSE) of
the regression, and the standard errors of the parameter estimates are

    SE(β̂_1) = σ̂ / √(Σ (x_i − x̄)²)   and   SE(β̂_0) = σ̂ √(1/n + x̄² / Σ (x_i − x̄)²).

Under the further assumption that the population error term is normally distributed, the researcher can use these
estimated standard errors to create confidence intervals and conduct hypothesis tests about the population
parameters.
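A minimal Python/NumPy sketch of these closed-form estimates and standard errors, on invented data:

# Simple linear regression via the closed-form least-squares formulas above.
# The x and y values are made up for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
n = x.size

x_bar, y_bar = x.mean(), y.mean()
Sxx = np.sum((x - x_bar) ** 2)

beta1_hat = np.sum((x - x_bar) * (y - y_bar)) / Sxx   # slope
beta0_hat = y_bar - beta1_hat * x_bar                 # intercept

residuals = y - (beta0_hat + beta1_hat * x)
sse = np.sum(residuals ** 2)
sigma2_hat = sse / (n - 2)          # error-variance estimate, n - 2 degrees of freedom

se_beta1 = np.sqrt(sigma2_hat / Sxx)
se_beta0 = np.sqrt(sigma2_hat * (1.0 / n + x_bar ** 2 / Sxx))

print(f"beta0 = {beta0_hat:.3f} (SE {se_beta0:.3f}), "
      f"beta1 = {beta1_hat:.3f} (SE {se_beta1:.3f})")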
General linear model
In the more general multiple regression model, there are p independent variables:

    y_i = β_1 x_i1 + β_2 x_i2 + … + β_p x_ip + ε_i,

where x_ij is the ith observation on the jth independent variable, and where the first independent variable takes the
value 1 for all i (so β_1 is the regression intercept).
The least squares parameter estimates are obtained from p normal equations. The residual can be written as

    e_i = y_i − β̂_1 x_i1 − … − β̂_p x_ip.

In matrix notation, the normal equations are written as (XᵀX) β̂ = Xᵀ Y, where the ij element of X is x_ij, the i
element of the column vector Y is y_i, and the j element of β̂ is β̂_j. Thus X is n×p, Y is n×1, and β̂ is p×1. The
solution is

    β̂ = (XᵀX)⁻¹ Xᵀ Y.

For a derivation, see linear least squares, and for a numerical example, see linear regression (example).
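A minimal NumPy sketch of this matrix solution on simulated data; note that in practice a QR- or SVD-based solver
such as np.linalg.lstsq is preferable to forming (XᵀX)⁻¹ explicitly, which can be numerically unstable:

# Matrix form of least squares: beta = (X^T X)^{-1} X^T Y.
# The design matrix and coefficients are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # first column = 1 (intercept)
beta = np.array([1.0, 2.0, -0.5])
Y = X @ beta + rng.normal(scale=0.3, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)          # (X^T X)^{-1} X^T Y
beta_hat_stable, *_ = np.linalg.lstsq(X, Y, rcond=None)  # numerically preferable route
print(beta_hat, beta_hat_stable)                      # both close to beta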
Regression diagnostics
Once a regression model has been constructed, it may be important to confirm the goodness of fit of the model and
the statistical significance of the estimated parameters. Commonly used checks of goodness of fit include the
R-squared, analyses of the pattern of residuals and hypothesis testing. Statistical significance can be checked by an
F-test of the overall fit, followed by t-tests of individual parameters.
Interpretations of these diagnostic tests rest heavily on the model assumptions. Although examination of the
residuals can be used to invalidate a model, the results of a t-test or F-test are sometimes more difficult to interpret if
the model's assumptions are violated. For example, if the error term does not have a normal distribution, then in
small samples the estimated parameters will not follow normal distributions, which complicates inference. With
relatively large samples, however, a central limit theorem can be invoked such that hypothesis testing may proceed
using asymptotic approximations.
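As an illustration, the following sketch computes R-squared and per-parameter t-statistics for a least-squares fit on
simulated data; SciPy's Student-t distribution is assumed for the p-values, and the normal-error assumption
discussed above is built into the simulation:

# Common diagnostics for a least-squares fit: R-squared and t-tests of
# each parameter against zero. Data are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
Y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(scale=0.5, size=n)

beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
residuals = Y - X @ beta_hat

sse = residuals @ residuals
sst = np.sum((Y - Y.mean()) ** 2)
r_squared = 1.0 - sse / sst                      # goodness of fit

sigma2_hat = sse / (n - p)                       # error variance, n - p degrees of freedom
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)   # covariance of the estimates
t_stats = beta_hat / np.sqrt(np.diag(cov_beta))  # t-statistic for each parameter = 0
p_values = 2 * stats.t.sf(np.abs(t_stats), df=n - p)

print(f"R^2 = {r_squared:.3f}")
for j, (t, pv) in enumerate(zip(t_stats, p_values)):
    print(f"beta_{j}: t = {t:.2f}, p = {pv:.4f}")   # beta_2 (true value 0) should not be significant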
Nonlinear regression
When the model function is not linear in the parameters, the sum of squares must be minimized by an iterative
procedure. This introduces many complications, which are summarized in Differences between linear and non-linear
least squares.
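A minimal sketch of such an iterative fit using scipy.optimize.curve_fit; the exponential model and the data are
illustrative choices, not from the text:

# Nonlinear least squares: curve_fit iteratively minimizes the sum of
# squared residuals (Levenberg-Marquardt by default); p0 is the starting guess.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    """Exponential-decay model, nonlinear in the parameter b."""
    return a * np.exp(-b * x)

rng = np.random.default_rng(3)
x = np.linspace(0, 4, 40)
y = model(x, 2.5, 1.3) + rng.normal(scale=0.05, size=x.size)

params, cov = curve_fit(model, x, y, p0=[1.0, 1.0])
print(params)   # close to (2.5, 1.3)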
Other methods
Although the parameters of a regression model are usually estimated using the method of least squares, other
methods which have been used include:
• Bayesian methods, e.g. Bayesian linear regression
• Percentage regression, for situations where reducing percentage errors is deemed more appropriate.[21]
• Least absolute deviations, which is more robust in the presence of outliers, leading to quantile regression
• Nonparametric regression, which requires a large number of observations and is computationally intensive
• Distance metric learning, learned by searching for a meaningful distance metric in a given input space.[22]
Software
All major statistical software packages perform least squares regression analysis and inference. Simple linear
regression and multiple regression using least squares can be done in some spreadsheet applications and on some
calculators. While many statistical software packages can perform various types of nonparametric and robust
regression, these methods are less standardized; different software packages implement different methods, and a
method with a given name may be implemented differently in different packages. Specialized regression software
has been developed for use in fields such as survey analysis and neuroimaging.
References
[1] Armstrong, J. Scott (2012). "Illusions in Regression Analysis" (http://upenn.academia.edu/JArmstrong/Papers/1162346/Illusions_in_Regression_Analysis). International Journal of Forecasting (forthcoming).
[2] David A. Freedman (2005). Statistical Models: Theory and Practice. Cambridge University Press.
[3] R. Dennis Cook; Sanford Weisberg. "Criticism and Influence Analysis in Regression" (http://links.jstor.org/sici?sici=0081-1750(1982)13<313:CAIAIR>2.0.CO;2-3). Sociological Methodology, Vol. 13 (1982), pp. 313–361.
[4] A.M. Legendre. Nouvelles méthodes pour la détermination des orbites des comètes (1805). "Sur la Méthode des moindres quarrés" appears as an appendix.
[5] C.F. Gauss. Theoria Motus Corporum Coelestium in Sectionibus Conicis Solem Ambientum (1809).
[6] C.F. Gauss. Theoria combinationis observationum erroribus minimis obnoxiae (http://books.google.co.za/books?id=ZQ8OAAAAQAAJ&printsec=frontcover&dq=Theoria+combinationis+observationum+erroribus+minimis+obnoxiae&as_brr=3#v=onepage&q=&f=false) (1821/1823).
[7] Mogull, Robert G. (2004). Second-Semester Applied Statistics. Kendall/Hunt Publishing Company. p. 59. ISBN 0-7575-1181-3.
[8] Galton, Francis (1989). "Kinship and Correlation (reprinted 1989)". Statistical Science (Institute of Mathematical Statistics) 4 (2): 80–86. JSTOR 2245330.
[9] Francis Galton. "Typical laws of heredity". Nature 15 (1877), 492–495, 512–514, 532–533. (Galton uses the term "reversion" in this paper, which discusses the size of peas.)
[10] Francis Galton. Presidential address, Section H, Anthropology (1885). (Galton uses the term "regression" in this paper, which discusses the height of humans.)
[11] Yule, G. Udny (1897). "On the Theory of Correlation". J. Royal Statist. Soc. (Blackwell Publishing) 60 (4): 812–854. doi:10.2307/2979746. JSTOR 2979746.
[12] Pearson, Karl; Yule, G.U.; Blanchard, Norman; Lee, Alice (1903). "The Law of Ancestral Heredity". Biometrika (Biometrika Trust) 2 (2): 211–236. doi:10.1093/biomet/2.2.211. JSTOR 2331683.
[13] Fisher, R.A. (1922). "The goodness of fit of regression formulae, and the distribution of regression coefficients". J. Royal Statist. Soc. (Blackwell Publishing) 85 (4): 597–612. doi:10.2307/2341124. JSTOR 2341124.
[14] Ronald A. Fisher (1954). Statistical Methods for Research Workers (http://psychclassics.yorku.ca/Fisher/Methods/) (Twelfth ed.). Oliver and Boyd. ISBN 0-05-002170-2.
[15] Aldrich, John (2005). "Fisher and Regression". Statistical Science 20 (4): 401–417. doi:10.1214/088342305000000331. JSTOR 20061201.
[16] Rodney Ramcharan. "Regressions: Why Are Economists Obsessed with Them?" (http://www.imf.org/external/pubs/ft/fandd/2006/03/basics.htm). March 2006. Accessed 2011-12-03.
[17] M. H. Kutner, C. J. Nachtsheim, and J. Neter (2004). Applied Linear Regression Models, 4th ed. McGraw-Hill/Irwin, Boston (p. 25).
[18] N. Ravishankar and D. K. Dey (2002). A First Course in Linear Model Theory. Chapman and Hall/CRC, Boca Raton (p. 101).
[19] Chiang, C.L. (2003). Statistical Methods of Analysis. World Scientific. ISBN 981-238-310-7. p. 274, section 9.7.4, "interpolation vs extrapolation" (http://books.google.com/books?id=BuPNIbaN5v4C&lpg=PA274&dq=regression+extrapolation&pg=PA274#v=onepage).
Further reading
• William H. Kruskal and Judith M. Tanur, ed. (1978), "Linear Hypotheses," International Encyclopedia of
Statistics. Free Press, v. 1:
Evan J. Williams, "I. Regression," pp. 523–541.
Julian C. Stanley, "II. Analysis of Variance," pp. 541–554.
• Lindley, D.V. (1987). "Regression and correlation analysis," New Palgrave: A Dictionary of Economics, v. 4,
pp. 120–23.
• Birkes, David and Dodge, Y., Alternative Methods of Regression. ISBN 0-471-56881-3
• Chatfield, C. (1993) "Calculating Interval Forecasts," Journal of Business and Economic Statistics, 11.
pp. 121–135.
• Draper, N.R. and Smith, H. (1998). Applied Regression Analysis. Wiley Series in Probability and Statistics
• Fox, J. (1997). Applied Regression Analysis, Linear Models and Related Methods. Sage
• Hardle, W., Applied Nonparametric Regression (1990), ISBN 0-521-42950-1
• Meade, N. and T. Islam (1995) "Prediction Intervals for Growth Curve Forecasts" (http://onlinelibrary.wiley.com/doi/10.1002/for.3980140502/abstract), Journal of Forecasting, 14, pp. 413–430.
• N. Cressie (1996) Change of Support and the Modifiable Areal Unit Problem. Geographical Systems 3:159–180.
• A.S. Fotheringham, C. Brunsdon, and M. Charlton. (2002) Geographically weighted regression: the analysis of
spatially varying relationships. Wiley.
• T. Strutz: Data Fitting and Uncertainty (A practical introduction to weighted least squares and beyond).
Vieweg+Teubner, ISBN 978-3-8348-1022-9.
External links
• Earliest Uses: Regression (http://jeff560.tripod.com/r.html) - gives basic history and references.
• Regression of Weakly Correlated Data (http://www.vias.org/simulations/simusoft_regrot.html) - How linear
regression mistakes can appear when Y-range is much smaller than X-range
• Regression to predict Graphics Card Performance (http://www.gpubench.com) - Example of multivariate
regression in action