5  Overview of Linear models

Although this book is primarily about multivariate models, it is useful to have an overview of the range of available techniques to see their relations and to appreciate how easily univariate response models generalize to multivariate ones. Hence, this chapter reviews the characteristics of the standard univariate methods for explaining or predicting a single outcome variable from a set of predictors.

The key ideas are:

Figure 5.1 summarizes a variety of methods for linear models, classified by number of predictors and number of response variables, and whether these are quantitative vs. discrete. For the present purposes, the key columns are the first two, for the case of one or more quantitative outcome variables.


Figure 5.1: Techniques for linear models classified by number of predictors and number of response variables, and whether these are quantitative vs. discrete

When the predictors are also quantitative, simple regression (\(p=1\)) generalizes to multivariate regression with two or more outcomes, for example predicting weight and body mass index jointly from a person’s height. The situation is more interesting when there are \(p>1\) predictors. The most common multivariate generalization is multivariate multiple regression (MMRA), in which each outcome is regressed on the predictors, just as if this were done separately for each outcome, but using multivariate tests that take the correlations among the responses into account. Other methods for this case include canonical correlation analysis, which tries to explain all relations between the set of \(\mathbf{Y}\) variables and the set of \(\mathbf{X}\) variables through maximally correlated linear combinations of each.

When the predictor variables are all categorical, like gender or level of education, methods such as the simple \(t\)-test, one-way ANOVA and factorial ANOVA for a single (\(q=1\)) outcome measure all have natural extensions to the case of \(q>1\) outcomes.

Not shown in Figure 5.1 are rows for cases where the predictor variables include a mixture of quantitative variables and discrete factors.

History Corner

Why are there so many different names for regression vs. ANOVA concepts, statistics and techniques? In regression, we use notation like \(x_1, x_2, ...\) to refer to predictors in a model, while in ANOVA, factors \(A, B, ...\) are called main effects. In regression applications, we often test linear hypotheses, are interested in coefficients and evaluate a model with an \(R^2\) statistic, while in ANOVA we may test contrasts among factor levels and use \(F\)-tests to evaluate models.

Well, like twins separated at birth, they grew up in homes in different places and with different parents, who were free to choose their own names.

Methods of regression began with Francis Galton’s (1886, 1889) studies of the heritability of traits, trying to understand how strongly the physical characteristics of one generation of living things resembled those in the next generation. From a study of the diameters of sweet pea seeds in parent plants and in their offspring, and another on the relationship between the heights of human parents and their children, he developed the fundamental ideas of regression. Karl Pearson (1896) later placed these ideas on a firm mathematical footing, developing the theory of correlation and multiple regression.

Analysis of variance methods were raised on farms, notably the Rothamsted Experimental Station, where R. A. Fisher analyzed vast amounts of data on crop experiments designed to determine the conditions (soil condition, fertilizer treatments) that gave the greatest yields, while controlling for extraneous determinants (the plots used for planting). With multiple factors determining the outcome, Fisher (1923), in an experiment on the yields of different varieties of potatoes given various manure treatments, devised the method of breaking down the total variance into portions attributable to each factor and presented the first ANOVA table. The method became widely known after Fisher’s (1925) Statistical Methods for Research Workers.

The great synthesis of regression and ANOVA as instances of a single general linear model did not take place until the 1960s, in texts by authors such as Graybill, Winer and Bock, and in a 1967 symposium on the birth of the general linear model (https://files.eric.ed.gov/fulltext/ED026737.pdf).

5.1 Linear combinations

All methods of multivariate statistics involve a simple idea: finding weighted sums (linear combinations) of observed variables that optimize some criterion, either maximizing a measure of goodness of fit, like \(R^2\), or minimizing a measure of badness of fit, like the sum of squared residuals. Methods differ according to whether:

  • All variables belong to one set (say, \(\mathbf{X}\)), not distinguished as to whether they are responses or predictors, as in PCA and factor analysis, vs. two sets, where one set (\(\mathbf{Y}\)) contains outcomes, or dependent variables, to be explained by predictors, or independent variables (\(\mathbf{X}\)), as in multiple regression, multivariate analysis of variance, discriminant analysis and canonical correlation analysis.

  • The variables in \(\mathbf{X}\) and \(\mathbf{Y}\) are discrete, categorical factors like sex and level of education or quantitative variables like salary and number of years of experience.

For example, Figure 5.2 illustrates PCA (as we saw in Chapter 4) as finding weights to maximize the variance of linear combinations, \(v_1, v_2, ...\), \[\begin{eqnarray*} \mathbf{v}_1 & = & a_1 \mathbf{x}_1 + a_2 \mathbf{x}_2 + a_3 \mathbf{x}_3 + a_4 \mathbf{x}_4 \\ \mathbf{v}_2 & = & b_1 \mathbf{x}_1 + b_2 \mathbf{x}_2 + b_3 \mathbf{x}_3 + b_4 \mathbf{x}_4 \\ \dots & \dots \; , \\ \end{eqnarray*}\]

subject to all \(\mathbf{v}_i, \mathbf{v}_j\) being uncorrelated, \(\mathbf{v}_i \;\perp\; \mathbf{v}_j\).

Figure 5.2: Principal components analysis as linear combinations to maximize variance accounted for. Left: diagram of PCA showing two uncorrelated linear combinations, v1 and v2. Right: Geometry of PCA.
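A minimal sketch of this idea in R (using prcomp() on four hypothetical variables): the component scores are linear combinations of the columns of \(\mathbf{X}\); they are mutually uncorrelated and their variances decrease from the first component on.

set.seed(1)
X <- matrix(rnorm(100 * 4), ncol = 4,
            dimnames = list(NULL, paste0("x", 1:4)))
pca <- prcomp(X)

V <- pca$x                    # component scores: centered X %*% pca$rotation
round(cor(V), 10)             # uncorrelated: (numerically) an identity matrix
round(apply(V, 2, var), 3)    # variances, largest first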

5.1.1 Multiple regression

Figure 5.3: Multiple regression as linear combinations to maximize R squared.
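As a minimal sketch of the idea (using hypothetical variables x1, x2, x3 and response y), the fitted values from lm() are the linear combination of the predictors whose squared correlation with y equals the model’s \(R^2\):

set.seed(2)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
y  <- 2 + 1.0 * x1 + 0.5 * x2 - 0.3 * x3 + rnorm(n)

mod  <- lm(y ~ x1 + x2 + x3)
yhat <- fitted(mod)                  # the fitted linear combination b0 + b1*x1 + b2*x2 + b3*x3
c(R2   = summary(mod)$r.squared,     # R^2 reported by lm()
  cor2 = cor(y, yhat)^2)             # squared correlation of y with the fitted combination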

5.1.2 Multivariate regression

Figure 5.4: Multivariate multiple regression as linear combinations to maximize R squared.
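A minimal sketch of fitting such a model in R (with hypothetical responses y1 and y2): supplying a matrix response via cbind() on the left-hand side makes lm() estimate a column of coefficients for each response, and anova() on the resulting "mlm" fit gives the associated multivariate tests.

set.seed(3)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n)
y1 <- 1 + 0.8 * x1 - 0.4 * x2 + rnorm(n)
y2 <- 2 - 0.3 * x1 + 0.6 * x2 + rnorm(n)

mlm <- lm(cbind(y1, y2) ~ x1 + x2)   # one set of predictors, two responses
coef(mlm)                            # a (p + 1) x 2 matrix of coefficients
anova(mlm)                           # multivariate tests (Pillai statistic by default)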

5.1.3 Canonical correlation analysis
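Canonical correlation analysis finds pairs of linear combinations, one of the \(\mathbf{X}\) variables and one of the \(\mathbf{Y}\) variables, that are maximally correlated. A minimal sketch using stats::cancor() on small hypothetical matrices:

set.seed(4)
n <- 100
X <- matrix(rnorm(n * 3), ncol = 3)
Y <- 0.5 * X[, 1:2] + matrix(rnorm(n * 2), ncol = 2)   # Y partly determined by X

cc <- cancor(X, Y)
cc$cor                                  # canonical correlations, largest first
u1 <- scale(X, cc$xcenter, scale = FALSE) %*% cc$xcoef[, 1]   # first canonical variate for X
v1 <- scale(Y, cc$ycenter, scale = FALSE) %*% cc$ycoef[, 1]   # first canonical variate for Y
cor(u1, v1)                             # reproduces cc$cor[1]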

5.2 The General Linear Model

To establish notation and terminology, it is worthwhile to state the general linear model formally. For convenience, I use vector and matrix notation. This expresses a response variable, \(\mathbf{y} = (y_1, y_2, \dots , y_n)^\textsf{T}\) for \(n\) observations, as a sum of terms involving \(p\) regressors, \(\mathbf{x}_1, \mathbf{x}_2, \dots , \mathbf{x}_p\), each of length \(n\).

\[ \begin{align*} \mathbf{y} & = \beta_0 + \beta_1 \mathbf{x}_1 + \beta_2 \mathbf{x}_2 + \cdots + \beta_p \mathbf{x}_p + \boldsymbol{\epsilon} \\ & = \left[ \mathbf{1},\; \mathbf{x}_1,\; \mathbf{x}_2,\; \dots ,\; \mathbf{x}_p \right] \; \boldsymbol{\beta} + \boldsymbol{\epsilon} \\ \end{align*} \tag{5.1}\]


or, expressed in matrices,

\[ \underset{n \times 1}{\mathbf{y}} \;=\; \underset{n \times (p+1)}{\mathbf{X}} \;\; \underset{(p+1) \times 1}{\boldsymbol{\beta}} \;+\; \underset{n \times 1}{\boldsymbol{\epsilon}} \]

The matrix \(\mathbf{X}\) is called the model matrix and contains the numerical representations of the predictor variables, called regressors. The essential thing about a linear model is that it is linear in the parameters \(\beta_i\); that is, the predicted value of \(\mathbf{y}\) is a linear combination of the \(\mathbf{x}_i\) with weights \(\beta_i\). An example of a nonlinear model is the exponential growth model, \(y = \beta_0 \, e^{\beta_1 x}\), where the parameter \(\beta_1\) appears as an exponent.1 The regressors that make up the columns of \(\mathbf{X}\) can take a variety of forms:

  • These can be quantitative variables like age, salary or years of education. But they can also be transformed versions, like sqrt(age) or log(salary).

  • A quantitative variable can be represented by more than one model regressor, for example if it is expressed as a polynomial like poly(age, degree=2) or a natural spline like ns(salary, df=5). The model matrix portion for such terms contains one column for each degree of freedom (df) and there are df coefficients in the corresponding portion of \(\boldsymbol{\beta}\).

  • A categorical or discrete predictor – a factor variable in R – with \(d\) levels is expressed as \(d - 1\) columns in \(\mathbf{X}\). Typically these are contrasts or comparisons between a baseline or reference level and each of the remaining ones, but any set of \(d - 1\) linearly independent contrasts can be used by assigning to contrasts(factor). For example, contrasts(factor) <- contr.treatment(4) for a 4-level factor assigns 3 contrasts representing comparisons with a baseline level, typically the first (in alphabetic order). For an ordered factor, such as one for political knowledge with levels “low”, “medium”, “high”, contr.poly() supplies the coefficients of orthogonal polynomial contrasts representing linear and quadratic trends. (See the sketch following this list.)

  • Interactions between predictors are represented as the direct products of the corresponding columns of \(\mathbf{X}\). This allows the effect of one predictor on the response to depend on values of other predictors. For example, the interaction of two quantitative variables, \(\mathbf{x}_1, \mathbf{x}_2\) is represented by the product \(\mathbf{x}_1 \times \mathbf{x}_2\). More generally, for variables or factors \(A\) and \(B\) with degrees of freedom \(\text{df}_A\) and \(\text{df}_B\) the regressors in \(\mathbf{X}\) are the \(\text{df}_A \times \text{df}_B\) products of each column for \(A\) with each column for \(B\).
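As a small illustration of several of these points, here is a sketch using hypothetical variables age, salary and factors region and knowledge (none of which appear elsewhere in this chapter):

library(splines)                  # for ns()
set.seed(1)
age    <- round(runif(20, 20, 65))
salary <- runif(20, 30, 120)
region <- factor(sample(c("E", "N", "S", "W"), 20, replace = TRUE))

head(model.matrix(~ poly(age, degree = 2)), 3)   # a degree-2 polynomial adds 2 columns
head(model.matrix(~ ns(salary, df = 3)), 3)      # a natural spline with df = 3 adds 3 columns

contrasts(region)                 # 4-level factor: 3 treatment-contrast columns by default

knowledge <- ordered(c("low", "medium", "high"),
                     levels = c("low", "medium", "high"))
contrasts(knowledge)              # ordered factor: orthogonal polynomial contrasts (contr.poly)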

5.2.1 Model formulas

Statistical models in R, such as those fit by lm(), glm() and many other modelling functions, are expressed in a simple notation developed by Wilkinson & Rogers (1973) for the GENSTAT software system at the Rothamsted Experimental Station. It provides a compact way to specify any model consisting of combinations of quantitative variables and discrete factors, interactions among these, and arbitrary transformations of them.

In this notation, a model formula takes one of the forms

response ~ terms
response ~ term1 + term2 + ...

where the left-hand side, response, specifies the response variable in the model and the right-hand side specifies the terms in the model that generate the columns of the \(\mathbf{X}\) matrix in Equation 5.1; the coefficients \(\boldsymbol{\beta}\) are implied and not represented explicitly in the formula.

The notation y ~ x is read as “y is modeled by x”. The left-hand side is usually a variable name (such as height), but it could be an expression that evaluates to the response, such as log(salary) or weight/height^2, which represents the body mass index.

On the right-hand side (RHS), the usual arithmetic operators, +, -, *, /, ^, have special meanings, as described below. The most fundamental is that y ~ a + b is interpreted as “y is modeled by a and b”; that is, by the sum of linear terms for a and b.

Some examples for regression-like models using only quantitative variables, x, x1, x2, x3, ... are shown below:

y ~ x                      # simple linear regression
y ~ x - 1                  # no intercept: regression through the origin 
y ~ x + I(x^2)             # quadratic model
y ~ poly(x, 3)             # cubic model
y ~ x1 * x2                # crossing: x1 + x2  +  x1 : x2
y ~ x1 + x2 + x3           # multiple regression
y ~ (x1 + x2 + x3)^2       # all main effects & two-way interactions (no squared terms)
log(y) ~ x1 + poly(x, 2)   # arbitrary transformation of response
y1 + y2 ~ x1 + x2 + x3     # response is sum of y1 and y2

The intercept \(\beta_0\) is automatically included in the model without the need to specify it explicitly. The minus sign, -, on the right-hand side removes terms from the model, so a model with no intercept (\(\beta_0 = 0\)) can be specified as y ~ x - 1 (or, perhaps more naturally, as y ~ 0 + x).

Function calls on the RHS, such as poly(x, 3), are evaluated directly, but a term that uses an arithmetic operator with a special formula meaning, like ^, must be “protected” by wrapping it in I(), meaning “identity” or “inhibit”. Thus, the model y ~ x + I(x^2) means the quadratic model \(y = \beta_0 + \beta_1 x + \beta_2 x^2\). This differs from the model y ~ poly(x, 2) in that the former uses the raw x, x^2 values (which are typically highly correlated), while poly() converts these to orthogonal polynomial scores, which are uncorrelated (and therefore free from problems of collinearity).
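A minimal sketch of this difference (using a hypothetical positive variable x):

set.seed(1)
x <- runif(100, 1, 10)                       # a positive predictor
raw  <- model.matrix(~ x + I(x^2))[, -1]     # columns x and x^2
orth <- model.matrix(~ poly(x, 2))[, -1]     # orthogonal polynomial columns
cor(raw)                                     # x and x^2 are highly correlated here
round(cor(orth), 10)                         # the poly() columns are uncorrelated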

Factor variables are treated specially in linear models, but have simple notations in R formulas. The following examples use A, B, C to represent discrete factors with two or more levels.

y ~ A                 # one-way ANOVA
y ~ A + B             # two-way, main effects only
y ~ A * B             # full two-way, with interaction
y ~ A + B + A:B       # same, in long-hand
y ~ x + A             # one-way ANCOVA
y ~ (A + B + C)^2     # three factors: main effects & all two-way interactions

5.2.1.1 Crossing

The * operator specifies the crossing of variables or factors, and : specifies an interaction (for quantitative variables, their product). So, the model y ~ x1 * x2 expands to y ~ x1 + x2 + x1:x2, and the interaction term x1:x2 is calculated as \(x_1 \times x_2\). In algebraic notation (omitting the error term) this works out to the model,

\[\begin{eqnarray*} y & = & \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 \, x_1 x_2 \\ & = & \beta_0 + (\beta_1 + \beta_3 x_2) \, x_1 + \beta_2 x_2 \:\: ,\\ \end{eqnarray*}\]

which means that the coefficient for \(x_1\) in the model is not constant for all values of \(x_2\), but rather changes with the value of \(x_2\). If \(\beta_3 > 0\), the slope for \(x_1\) increases with \(x_2\); if \(\beta_3 < 0\), it decreases. The same holds with the roles of \(x_1\) and \(x_2\) reversed.

y ~ A * B for factors is similar, expanding to y ~ A + B + A:B, but the columns in the model matrix represent contrasts among the factor levels, as described above. The main effects A and B come from contrasts among the means of their factor levels, and the interaction term A:B reflects differences among the means of A across the levels of B (and vice-versa).

The model formula y ~ x + A specifies an ANCOVA model with different intercepts for the levels of A, but a common slope for x. Adding the interaction x:A, as in the model y ~ x * A, allows separate slopes and intercepts for the groups, as in the sketch below.
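A minimal sketch of the distinction (using hypothetical data with two groups):

set.seed(123)
A <- factor(rep(c("g1", "g2"), each = 20))
x <- runif(40, 0, 10)
y <- ifelse(A == "g1", 2 + 0.5 * x, 5 + 1.5 * x) + rnorm(40)

coef(lm(y ~ x + A))     # one slope for x, plus an intercept shift for group g2
coef(lm(y ~ x * A))     # the x:Ag2 term gives group g2 its own slope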

5.2.1.2 Powers

The ^ operator crosses a term expression with itself to the specified degree. Thus the term (A + B)^2 is identical to (A + B) * (A + B), which expands to the main effects of A and B and their interaction, and so is also identical to A * B. In general, the product of parenthesized terms expands as in ordinary algebra,

y ~ (A + B) * (C + D) -> A + B + C + D + A:C + A:D + B:C + B:D

Powers get more interesting with more terms, so (A + B + C)^2 is the same as (A + B + C) * (A + B + C), which includes main effects of A, B and C as well as all two-way interactions, A:B, A:C and B:C. The model formula (A + B + C)^3 expands to include the main effects, all two-way interactions and the three-way interaction A:B:C.

(A + B + C)^3 -> A + B + C + A:B + A:C + B:C + A:B:C

In this context, - can be used to remove terms, as shown in the following examples:

(A + B + C)^2 <-> (A + B + C)^3 - A:B:C
(A + B + C)^3 - B:C - A:B:C  <-> A + B + C + A:B + A:C 

Finally, the symbol . on the right-hand side specifies all terms in the current dataset other than the response. Thus if you have a data.frame containing y, x1, x2, ..., x6, you can specify a model with all variables except x6 as predictors as

y ~ . - x6

To test what we’ve covered above,

  • What do you think the model formula y ~ .^2 means in a data set containing variables x1, x2, x3, and x4?

  • What about the formula y ~ .^2 - A:B:C:D in a data set with factors A, B, C, D?

You can work out questions like these, or explore model formulas generally, by applying terms() to a "formula" object. The labels of the resulting terms can then be concatenated into a string and turned back into a formula using as.formula():

f <- formula(y ~ (x1 + x2 + x3 + x4)^2)
terms <- attr(terms(f), "term.labels")

terms |> paste(collapse = " + ")
#> [1] "x1 + x2 + x3 + x4 + x1:x2 + x1:x3 + x1:x4 + x2:x3 + x2:x4 + x3:x4"
# convert back to a formula
as.formula(sprintf("y ~ %s", paste(terms, collapse=" + "))) 
#> y ~ x1 + x2 + x3 + x4 + x1:x2 + x1:x3 + x1:x4 + x2:x3 + x2:x4 + 
#>     x3:x4
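The . shorthand is resolved against a data frame, so (as a minimal sketch, using a hypothetical data frame dat containing y and x1, ..., x4) you can check the first question above in the same way:

dat <- data.frame(y = rnorm(10), x1 = rnorm(10), x2 = rnorm(10),
                  x3 = rnorm(10), x4 = rnorm(10))
attr(terms(y ~ .^2, data = dat), "term.labels")   # main effects + all two-way interactions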

5.2.1.3 Model matrices

As noted above, a model formula is used to generate the \(n \times (p+1)\) model matrix, \(\mathbf{X}\), typically containing the column of 1s for the intercept \(\beta_0\) in the model, followed by \(p\) columns representing the regressors \(\mathbf{x}_1, \mathbf{x}_2, \dots , \mathbf{x}_p\). Internally, lm() uses stats::model.matrix() and you can use this to explore how factors, interactions and other model terms are represented in a model object.

For a small example, here are a few observations representing income (inc) and type of occupation (type, with levels "bc" = blue collar, "wc" = white collar and "prof" = professional). model.matrix() takes a one-sided formula with the terms on the right-hand side. The model matrix for the main-effects model looks like this:

set.seed(42)
inc <- round(runif(n=9, 20, 40))
type <- rep(c("bc", "wc", "prof"), each =3)

mm <- model.matrix(~ inc + type) 
data.frame(type, mm)
#>   type X.Intercept. inc typeprof typewc
#> 1   bc            1  38        0      0
#> 2   bc            1  39        0      0
#> 3   bc            1  26        0      0
#> 4   wc            1  37        0      1
#> 5   wc            1  33        0      1
#> 6   wc            1  30        0      1
#> 7 prof            1  35        1      0
#> 8 prof            1  23        1      0
#> 9 prof            1  33        1      0

As you can see, type, with 2 degrees of freedom, is represented by two dummy (0/1) variables, typeprof and typewc. Together, these represent treatment contrasts (comparisons) between the baseline group, type=="bc", which is coded (0, 0), and each of the others: type=="prof", coded (1, 0), and type=="wc", coded (0, 1). By default, the baseline level is the first in alphabetic or numerical order, but you can change this using relevel(), as sketched below.
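For instance, a minimal sketch (working on a copy of type so the example above is unchanged):

type2 <- relevel(factor(type), ref = "prof")   # make "prof" the baseline level
head(model.matrix(~ inc + type2), 3)           # dummy columns are now type2bc and type2wc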

In a model with the interaction inc * type, additional columns are constructed as the product of inc with each of the columns for type.

model.matrix(~ inc * type)
#>   (Intercept) inc typeprof typewc inc:typeprof inc:typewc
#> 1           1  38        0      0            0          0
#> 2           1  39        0      0            0          0
#> 3           1  26        0      0            0          0
#> 4           1  37        0      1            0         37
#> 5           1  33        0      1            0         33
#> 6           1  30        0      1            0         30
#> 7           1  35        1      0           35          0
#> 8           1  23        1      0           23          0
#> 9           1  33        1      0           33          0
#> attr(,"assign")
#> [1] 0 1 2 2 3 3
#> attr(,"contrasts")
#> attr(,"contrasts")$type
#> [1] "contr.treatment"

5.3 Regression

5.4 ANOVA

5.5 ANCOVA

5.6 Regression trees


  1. Taking logarithms of both sides would yield the linear model, \(\log(y) = c + \beta_1 x\), where \(c = \log(\beta_0)\).↩︎