Although this book is primarily about multivariate models, it is useful to have an overview of the range of available techniques for univariate response models to see their uses and to appreciate how easily univariate models generalize to multivariate ones. Hence, this chapter reviews the characteristics of the standard univariate methods for explaining or predicting a single outcome variable from a set of predictors.


Figure 5.1 summarizes a variety of methods for linear models, classified by number of predictors and number of response variables, and whether these are quantitative vs. discrete. For the present purposes, the key columns are the first two, for the case of one or more quantitative outcome variables.

Figure 5.1: Techniques for linear models classified by number of predictors and number of response variables, and whether these are quantitative vs. discrete

When the predictors are also quantitative, simple regression (\(p=1\)) generalizes to multivariate regression with two or more outcomes (\(q > 1\)). For example, we might want to predict weight and body mass index jointly from a person’s height.

The situation is more interesting when there are \(p>1\) predictors. The most common multivariate generalization is multivariate multiple regression (MMRA), where each outcome is regressed on the predictors, as if done separately for each outcome, but using multivariate tests that take the correlations among the responses into account. Other methods for this case include canonical correlation analysis, which tries to explain all relations between \(\mathbf{Y}\) and a set of \(\mathbf{x}\)s through maximally correlated linear combinations of each.

When the predictor variables are all discrete or categorical, such as gender or level of education, methods like the simple \(t\)-test, one-way ANOVA and factorial ANOVA for a single (\(q=1\)) outcome measure all have simple extensions to the case of \(q>1\) outcomes.

Not shown in Figure 5.1 are rows for cases where the predictor variables include a mixture of quantitative predictors and discrete factors. These have two “flavors”, depending on our emphasis.

History Corner

Why are there so many different names for regression vs. ANOVA concepts, statistics and techniques? In regression, we use notation like \(x_1, x_2, ...\) to refer to predictors in a model, while in ANOVA, factors \(A, B, ...\) are called main effects. In regression applications, we often test linear hypotheses, are interested in coefficients and evaluate a model with an \(R^2\) statistic, while in ANOVA we may test contrasts among factor levels, and use \(F\)-tests to evaluate models.

Well, like twins separated at birth, they grew up in homes in different places and with different parents, who were each free to choose their own names, not recognizing their shared DNA.

Methods of regression began in evolutionary biology with Francis Galton’s (1886, 1889) studies of the heritability of traits, trying to understand how strongly the physical characteristics of one generation of living things resembled those in the next generation. From a study of the diameters of sweet peas in parent plants and their size in the next generation, and another on the relationship between the heights of human parents and their offspring, he developed the fundamental ideas of regression. Karl Pearson (1896) later placed these ideas on a firm mathematical footing.

In contrast, analysis of variance methods were raised on farms, notably the Rothamsted Experimental Station, where R. A. Fisher analyzed vast amounts of data on crop experiments designed to determine the conditions (soil condition, fertilizer treatments) that gave the greatest yields, while controlling for extraneous sources of variation (the plots in which the crops were planted). With multiple factors determining the outcome, Fisher (1923), in an experiment on the yields of different varieties of potatoes given various manure treatments, devised the method of breaking down the total variance into portions attributable to each factor and presented the first ANOVA table. The method became well known after Fisher’s (1925) Statistical Methods for Research Workers.

The great synthesis of regression and ANOVA did not take place until the 1960s. At that time, methods for computing were beginning to move from programmable desk calculators to mainframe computers, largely using a collection of separate FORTRAN programs, each designed for a specific analysis: regression, one-way ANOVA, two-way ANOVA, models with interactions, and so forth. To complete an analysis, as a graduate student I often had to use three or more different programs.

Then, something remarkable happened on two fronts: theory and computation. First, in quick succession, textbooks by Scheffé (1960), Graybill (1961), Winer (1962) … began to lay out a general theory of linear models that encompassed all of these separate models, giving the “General Linear Model” a well-deserved name. Second, two symposia, one at IBM Yorktown Heights (IBM, 1965) and the other at the University of Georgia (Bradshaw & Findley, 1967), resulted in the first general programs to handle all these cases in an understandable way.

A bit of matrix algebra thrown into the mix showed that most of the ideas for univariate models could be extended to multiple response variables, and so the “Multivariate Linear Model” was born. R. Darrell Bock (Bock, 1963, 1964) sketched a flowchart of the computational steps, which was implemented at the University of Chicago by Jeremy Finn (1967) in the MULTIVARIANCE program. A group at the University of North Carolina headed by Elliot Cramer developed their MANOVA program (Clyde et al., 1966) and Wilfrid Dixon (1965) at UCLA developed the BMD programs incorporating these ideas. The ANOVA and regression twins had finally become part of a larger family.

5.1 Linear combinations

All methods of multivariate statistics involve a simple idea: finding weighted sums—linear combinations—of observed variables that optimize some criterion, such as maximizing a measure of goodness of fit, like \(R^2\), or minimizing a measure of badness of fit, like the sum of squared residuals. Methods differ according to whether:

  • All variables belong to one set (say, \(\mathbf{X}\)), not distinguished as responses or predictors, as in PCA and factor analysis, vs. two sets, where one set (\(\mathbf{Y}\)) contains outcomes (dependent variables) to be explained by a set of predictors (independent variables, \(\mathbf{X}\)), as in multiple regression, multivariate analysis of variance, discriminant analysis and canonical correlation analysis.

  • The variables in \(\mathbf{X}\) and \(\mathbf{Y}\) are discrete, categorical factors like sex and level of education or quantitative variables like salary and number of years of experience.

PCA

For example, Figure 5.2 illustrates PCA (as we saw in Chapter 4) as finding weights to maximize the variance of linear combinations, \(v_1, v_2, ...\), \[\begin{aligned} \mathbf{v}_1 & = & a_1 \mathbf{x}_1 + a_2 \mathbf{x}_2 + a_3 \mathbf{x}_3 + a_4 \mathbf{x}_4 \\ \mathbf{v}_2 & = & b_1 \mathbf{x}_1 + b_2 \mathbf{x}_2 + b_3 \mathbf{x}_3 + b_4 \mathbf{x}_4 \\ \vdots & = & \vdots \; , \\ \end{aligned}\]

subject to all \(\mathbf{v}_i, \mathbf{v}_j\) being uncorrelated, \(\mathbf{v}_i \;\perp\; \mathbf{v}_j\).

Figure 5.2: Principal components analysis as linear combinations to maximize variance accounted for. Left: diagram of PCA showing two uncorrelated linear combinations, v1 and v2. Right: Geometry of PCA.
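To make this concrete in R, here is a minimal sketch using prcomp() on simulated data (no particular dataset is assumed): the component scores are exactly these weighted sums of the columns of \(\mathbf{X}\), with the weights given by the loadings, and the scores are mutually uncorrelated.

set.seed(1)
X <- matrix(rnorm(100 * 4), ncol = 4,
            dimnames = list(NULL, paste0("x", 1:4)))
pca <- prcomp(X, scale. = TRUE)

pca$rotation                      # the weights a, b, ... for v1, v2, ...
head(pca$x)                       # the linear combinations (component scores)

# the scores are the weighted sums of the (standardized) columns of X,
# and they are mutually uncorrelated
all.equal(pca$x, scale(X) %*% pca$rotation, check.attributes = FALSE)
zapsmall(cor(pca$x))              # identity matrix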

Multiple regression

An analogous diagram for multiple regression is shown in Figure 5.3. Here, we find the weights \(b_1, b_2, \dots\) to maximize the \(R^2\) of \(\mathbf{y}\) with the predicted values \(\widehat{\mathbf{y}}\),

\[ \widehat{\mathbf{y}} = b_1 \mathbf{x}_1 + b_2 \mathbf{x}_2 + b_3 \mathbf{x}_3 \:\: . \] In the vector diagram at the right, saying that the fitted vector \(\widehat{\mathbf{y}}\) is a linear combination of \(\mathbf{x}_1\) and \(\mathbf{x}_2\) means that it lies in the plane that they define. The fitted vector is the orthogonal projection of \(\mathbf{y}\) on this plane, and the least squares weights \(b_1\) and \(b_2\) give the maximum possible squared correlation \(r^2 (\mathbf{y}, \widehat{\mathbf{y}})\).

Figure 5.3: Multiple regression as a linear combination to maximize the squared correlation with the predicted values \(\hat{\mathbf{y}}\). Right: vector geometry of multiple regression for two predictors.

The vector of residuals, \(\mathbf{e} = \mathbf{y} - \widehat{\mathbf{y}}\), is orthogonal to that plane (\(\widehat{\mathbf{y}}\) and \(\mathbf{e}\) are uncorrelated), and the least squares solution also minimizes its length, \(\parallel \mathbf{e} \parallel = \sqrt{\Sigma e_i^2}\).
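These relations are easy to verify in R. The sketch below uses simulated data (no particular dataset is assumed): the squared correlation of \(\mathbf{y}\) with the fitted values equals \(R^2\), and the residuals are orthogonal to the fitted values and to each predictor.

set.seed(123)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 2 + 1.5 * x1 - 0.8 * x2 + rnorm(n)

fit  <- lm(y ~ x1 + x2)
yhat <- fitted(fit)
e    <- residuals(fit)

cor(y, yhat)^2                    # same as summary(fit)$r.squared
summary(fit)$r.squared

# residuals are orthogonal to the fitted values and to each predictor
zapsmall(c(sum(e * yhat), sum(e * x1), sum(e * x2)))

sqrt(sum(e^2))                    # length ||e||, minimized by least squares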

Multivariate regression

Multivariate multiple regression does the same thing for each response variable, \(\mathbf{y}_1\) and \(\mathbf{y}_2\), as shown in Figure 5.4. It finds the weights to maximize the correlation between each \(\mathbf{y}_j\) and the corresponding predicted value \(\widehat{\mathbf{y}}_j\).

\[\begin{aligned} \widehat{\mathbf{y}}_1 & = & a_1 \mathbf{x}_1 + a_2 \mathbf{x}_2 + a_3 \mathbf{x}_3 \\ \widehat{\mathbf{y}}_2 & = & b_1 \mathbf{x}_1 + b_2 \mathbf{x}_2 + b_3 \mathbf{x}_3 \\ \end{aligned}\]

Figure 5.4: Multivariate multiple regression as linear combinations to maximize the R squared for each response variable separately.

The weights \(a_1, a_2, a_3\) and \(b_1, b_2, b_3\) are the same as would be found in separate multiple regressions for the response variables. However, the multivariate tests used here take the correlations among the \(\mathbf{y}\)s into account, and can be more powerful than fitting separate univariate response models.
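In R, a multivariate multiple regression is specified by giving a matrix of responses on the left-hand side of the model formula, e.g., cbind(y1, y2). The sketch below, with simulated data, confirms that the coefficients are the same as those from separate univariate fits; the multivariate tests, obtained here with anova() on the "mlm" object, are what the multivariate model adds.

set.seed(42)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
y1 <- 1 + 0.5 * x1 + 0.3 * x2 + rnorm(n)
y2 <- 2 - 0.4 * x1 + 0.6 * x3 + rnorm(n)

mfit <- lm(cbind(y1, y2) ~ x1 + x2 + x3)    # multivariate multiple regression
coef(mfit)                                  # one column of coefficients per response

# the same as separate univariate regressions
cbind(coef(lm(y1 ~ x1 + x2 + x3)), coef(lm(y2 ~ x1 + x2 + x3)))

anova(mfit)                                 # multivariate tests for each term (Pillai's trace by default)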

Canonical correlation analysis

Finally, canonical correlation analysis uses a different approach to fitting relations between a set of responses, \(\mathbf{y}_1, \mathbf{y}_2, \dots\) and a set of predictors, \(\mathbf{x}_1, \mathbf{x}_2, \dots\). …

Figure 5.5: Canonical correlation analysis finds uncorrelated linear combinations of the responses which maximize the R squared with linear combinations of the predictors.
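A minimal sketch with base R’s cancor() and simulated data (no particular dataset is assumed) shows the pieces: the canonical correlations and the weights defining the linear combinations of each set.

set.seed(7)
n <- 200
X <- matrix(rnorm(n * 3), ncol = 3, dimnames = list(NULL, paste0("x", 1:3)))
Y <- cbind(y1 = X[, 1] + rnorm(n),             # y1 related mainly to x1
           y2 = X[, 2] - X[, 3] + rnorm(n))    # y2 related to x2 and x3

cc <- cancor(X, Y)
cc$cor          # canonical correlations, in decreasing order
cc$xcoef        # weights for linear combinations of the x's
cc$ycoef        # weights for linear combinations of the y's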

5.2 The General Linear Model

To establish notation and terminology, it is worthwhile to state the general linear model formally. For convenience, I use vector and matrix notation. This expresses a response variable, \(\mathbf{y} = (y_1, y_2, \dots , y_n)^\mathsf{T}\) for \(n\) observations, as a sum of terms involving \(p\) regressors, \(\mathbf{x}_1, \mathbf{x}_2, \dots , \mathbf{x}_p\), each of length \(n\).

\[\begin{aligned} \mathbf{y} & = \beta_0 + \beta_1 \mathbf{x}_1 + \beta_2 \mathbf{x}_2 + \cdots + \beta_p \mathbf{x}_p + \boldsymbol{\epsilon} \\ & = \left[ \mathbf{1},\; \mathbf{x}_1,\; \mathbf{x}_2,\; \dots ,\; \mathbf{x}_p \right] \; \boldsymbol{\beta} + \boldsymbol{\epsilon} \\ \end{aligned} \tag{5.1}\]

or, expressed in matrices,

\[ \Large{\mathord{\mathop{\mathbf{y}}\limits_{n \times 1}} = \mathord{\mathop{\mathbf{X}}\limits_{n \times (p+1)}}\; \mathord{\mathop{\mathbf{\boldsymbol{\beta}}}\limits_{(p+1) \times 1}} + \boldsymbol{\epsilon}} \]

The matrix \(\mathbf{X}\) is called the model matrix and contains the numerical representations of the predictor variables, called regressors. The essential thing about a linear model is that it is linear in the parameters \(\beta_i\). That is, the predicted value of \(\mathbf{y}\) is a linear combination of the \(\mathbf{x}_i\) with weights \(\beta_i\). An example of a nonlinear model is the exponential growth model, \(y = \beta_0 \, e^{\beta_1 x}\), where the parameter \(\beta_1\) appears as an exponent.1 The regressors in \(\mathbf{X}\) can take a variety of forms (a small computational sketch of Equation 5.1 follows this list):

  • These can be quantitative variables like age, salary or years of education. But they can also be transformed versions, like sqrt(age) or log(salary).

  • A quantitative variable can be represented by more than one model regressor, for example if it is expressed as a polynomial like poly(age, degree=2) or a natural spline like ns(salary, df=5). The model matrix portion for such terms contains one column for each degree of freedom (df) and there are df coefficients in the corresponding portion of \(\boldsymbol{\beta}\).

  • A categorical or discrete predictor – a factor variable in R – with \(d\) levels is expressed as \(d - 1\) columns in \(\mathbf{X}\). Typically these are contrasts or comparisons between a baseline or reference level and each of the remaining ones, but any set of \(d - 1\) linearly independent contrasts can be used by assigning to contrasts(factor). For example, contrasts(factor) <- contr.treatment(4) for a 4-level factor assigns 3 contrasts representing comparisons with a baseline level, typically the first (in alphabetic order). For an ordered factor, such as one for political knowledge with levels “low”, “medium”, “high”, contr.poly() returns the coefficients of orthogonal polynomial contrasts representing linear and quadratic trends.

  • Interactions between predictors are represented as the direct products of the corresponding columns of \(\mathbf{X}\). This allows the effect of one predictor on the response to depend on values of other predictors. For example, the interaction of two quantitative variables, \(\mathbf{x}_1, \mathbf{x}_2\) is represented by the product \(\mathbf{x}_1 \times \mathbf{x}_2\). More generally, for variables or factors \(A\) and \(B\) with degrees of freedom \(\text{df}_A\) and \(\text{df}_B\) the regressors in \(\mathbf{X}\) are the \(\text{df}_A \times \text{df}_B\) products of each column for \(A\) with each column for \(B\).
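To connect Equation 5.1 to computation, here is a minimal sketch with simulated data and hypothetical variable names: it builds the model matrix \(\mathbf{X}\) explicitly and solves the normal equations, \(\widehat{\boldsymbol{\beta}} = (\mathbf{X}^\mathsf{T}\mathbf{X})^{-1} \mathbf{X}^\mathsf{T}\mathbf{y}\), giving the same estimates that lm() reports.

set.seed(2)
n  <- 50
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 2 * x1 - 1 * x2 + rnorm(n)

X <- cbind(1, x1, x2)                             # the n x (p + 1) model matrix
betahat <- solve(crossprod(X), crossprod(X, y))   # solve (X'X) b = X'y
cbind(betahat, coef(lm(y ~ x1 + x2)))             # identical estimates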

5.2.1 Model formulas

Statistical models in R, such as those fit by lm(), glm() and many other modelling functions, are expressed in a simple notation developed by Wilkinson & Rogers (1973) for the GENSTAT software system at the Rothamsted Experimental Station. It provides a compact way to specify any model composed of quantitative and discrete factor variables, interactions among these, and arbitrary transformations of them.

In this notation, a model formula takes the forms

response ~ terms
response ~ term1 + term2 + ...

where the left-hand side, response, specifies the response variable in the model and the right-hand side specifies the terms in the model, which determine the columns of the \(\mathbf{X}\) matrix in Equation 5.1; the coefficients \(\beta\) are implied and not represented explicitly in the formula.

The notation y ~ x is read as “y is modeled by x”. The left-hand side is usually a variable name (such as height), but it can be an expression that evaluates to the response, such as log(salary) or weight/height^2, which represents the body mass index.

On the right-hand side (RHS), the usual arithmetic operators, +, -, *, /, ^, have special meanings, as described below. The most fundamental is that y ~ a + b is interpreted as “y is modeled by a and b”; that is, by the sum of linear terms for a and b.

Some examples for regression-like models using only quantitative variables, x, x1, x2, x3, ... are shown below:

y ~ x                      # simple linear regression
y ~ x - 1                  # no intercept: regression through the origin 
y ~ x + I(x^2)             # quadratic model
y ~ poly(x, 3)             # cubic model
y ~ x1 * x2                # crossing: x1 + x2  +  x1 : x2
y ~ x1 + x2 + x3           # multiple regression
y ~ (x1 + x2 + x3)^2       # all main effects & two-way interactions
log(y) ~ x1 + poly(x, 2)   # arbitrary transformation of response
y1 + y2 ~ x1 + x2 + x3     # response is sum of y1 and y2

The intercept \(\beta_0\) is automatically included in the model without the need to specify it explicitly. The minus sign, -, on the right-hand side removes terms from the model, so a model with no intercept (\(\beta_0 = 0\)) can be specified as y ~ x - 1 (or perhaps more naturally, y ~ 0 + x).

Function calls on the RHS, such as poly(x, 3), are evaluated directly, but arithmetic expressions that use an operator with a special formula meaning, like ^, must be “protected” by wrapping the term in I(), meaning “identity” or “inhibit”. Thus, the model y ~ x + I(x^2) means the quadratic model \(y = \beta_0 + \beta_1 x + \beta_2 x^2\). This differs from the model y ~ poly(x, 2) in that the former uses the raw x, x^2 values (which are usually highly correlated) while poly() converts these to orthogonal polynomial scores, which are uncorrelated (and therefore free from problems of collinearity).
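The difference is easy to see by inspecting the columns these terms generate. In this small sketch with a simulated x, the raw terms x and x^2 are highly correlated while the poly() columns are not; the two parameterizations nonetheless give identical fitted values.

set.seed(3)
x <- runif(100, 1, 10)
y <- 1 + 0.5 * x + 0.2 * x^2 + rnorm(100)

cor(x, x^2)                  # raw terms: strongly correlated
zapsmall(cor(poly(x, 2)))    # orthogonal polynomial terms: uncorrelated

fit_raw  <- lm(y ~ x + I(x^2))
fit_poly <- lm(y ~ poly(x, 2))
all.equal(fitted(fit_raw), fitted(fit_poly))   # same fitted values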

Factor variables are treated specially in linear models, but have simple notations in R formulas. The following examples use A, B, C to represent discrete factors with two or more levels.

y ~ A                 # one-way ANOVA
y ~ A + B             # two-way, main effects only
y ~ A * B             # full two-way, with interaction
y ~ A + B + A:B       # same, in long-hand
y ~ x + A             # one-way ANCOVA
y ~ (A + B + C)^2     # three-way ANOVA, incl. all two-way interactions

5.2.1.1 Crossing

The * operator specifies the crossing of variables and factors, while : specifies interactions (products of variables). So, the model y ~ x1 * x2 expands to y ~ x1 + x2 + x1:x2, and the interaction term x1:x2 is calculated as \(x_1 \times x_2\). In algebraic notation (omitting the error term) this works out to the model,

\[\begin{aligned} y & = & \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 \, x_1 x_2 \\ & = & \beta_0 + (\beta_1 + \beta_3 x_2) \, x_1 + \beta_2 x_2 \:\: ,\\ \end{aligned}\]

which means that the coefficient for \(x_1\) in the model is not constant for all values of \(x_2\), but rather changes with the value of \(x_2\). If \(\beta_3 > 0\), the slope for \(x_1\) increases with \(x_2\), and vice-versa.
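A quick sketch with simulated data makes this concrete: after fitting y ~ x1 * x2, the implied slope for x1 at a given value of x2 is \(\beta_1 + \beta_3 x_2\).

set.seed(4)
n  <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 2 * x1 + 0.5 * x2 + 1.5 * x1 * x2 + rnorm(n)

fit <- lm(y ~ x1 * x2)
b <- coef(fit)
b["x1"] + b["x1:x2"] * c(-1, 0, 1)   # slope of x1 at x2 = -1, 0, 1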

y ~ A * B for factors is similar, expanding to y ~ A + B + A:B, but the columns in the model matrix represent contrasts among the factor levels, as described above. The main effects A and B come from contrasts among the means of their factor levels, and the interaction term A:B reflects differences among the means of A across the levels of B (and vice-versa).

The model formula y ~ x + A specifies an ANCOVA model with different intercepts for the levels of A, but with a common slope for x. Adding the interaction x:A in the model y ~ x * A allows separate slopes and intercepts for the groups.
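The sketch below, with simulated data and a hypothetical three-level factor A, fits both versions; anova() then tests whether allowing separate slopes improves the fit.

set.seed(5)
n <- 90
A <- factor(rep(c("a1", "a2", "a3"), each = n / 3))
x <- rnorm(n)
y <- 2 + 1.2 * x + (A == "a2") - (A == "a3") + rnorm(n)

fit_common   <- lm(y ~ x + A)     # parallel lines: common slope, different intercepts
fit_separate <- lm(y ~ x * A)     # separate slope and intercept for each group

anova(fit_common, fit_separate)   # does allowing different slopes improve fit?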

5.2.1.2 Powers

The ^ operator crosses a term expression with itself to a specified degree. Thus the term (A + B)^2 is identical to (A + B) * (A + B), which expands to the main effects of A, B and their interaction, and so is identical to A * B. In general, the product of parenthesized terms expands as in ordinary algebra,

y ~ (A + B) * (C + D) -> A + B + C + D + A:C + A:D + B:C + B:D

Powers get more interesting with more terms, so (A + B + C)^2 is the same as (A + B + C) * (A + B + C), which includes main effects of A, B and C as well as all two-way interactions, A:B, A:C and B:C. The model formula (A + B + C)^3 expands to include all two-way interactions and the three-way interaction A:B:C.

(A + B + C)^3 -> A + B + C + A:B + A:C + B:C + A:B:C

In this context, - can be used to remove terms, as shown in the following examples:

(A + B + C)^2 <-> (A + B + C)^3 - A:B:C
(A + B + C)^3 - B:C - A:B:C  <-> A + B + C + A:B + A:C 

Finally, the symbol . on the right-hand side stands for all variables in the current dataset other than the response. Thus, if you have a data.frame containing y, x1, x2, ..., x6, you can specify a model with all variables except x6 as predictors as

y ~ . - x6

To test what we’ve covered above,

  • What do you think the model formula y ~ .^2 means in a data set containing variables x1, x2, x3, and x4?

  • What about the formula y ~ .^2 - A:B:C:D with factors A, B, C, D?

You can work out questions like these, or explore model formulas, by applying terms() to a "formula" object. The labels of these terms can then be concatenated into a string and turned back into a formula using as.formula():

f <- formula(y ~ (x1 + x2 + x3 + x4)^2)
terms <- attr(terms(f), "term.labels")

terms |> paste(collapse = " + ")
#> [1] "x1 + x2 + x3 + x4 + x1:x2 + x1:x3 + x1:x4 + x2:x3 + x2:x4 + x3:x4"
# convert back to a formula
as.formula(sprintf("y ~ %s", paste(terms, collapse=" + "))) 
#> y ~ x1 + x2 + x3 + x4 + x1:x2 + x1:x3 + x1:x4 + x2:x3 + x2:x4 + 
#>     x3:x4

5.2.2 Model matrices

As noted above, a model formula is used to generate the \(n \times (p+1)\) model matrix, \(\mathbf{X}\), typically containing the column of 1s for the intercept \(\beta_0\) in the model, followed by \(p\) columns representing the regressors \(\mathbf{x}_1, \mathbf{x}_2, \dots , \mathbf{x}_p\). Internally, lm() uses stats::model.matrix() and you can use this to explore how factors, interactions and other model terms are represented in a model object.

For a small example, here are a few observations representing income (inc) and type of occupation. model.matrix() takes a one-sided formula with the terms on the right-hand side. The main effect model looks like this:

set.seed(42)
inc <- round(runif(n=9, 20, 40))
type <- rep(c("bc", "wc", "prof"), each =3)

mm <- model.matrix(~ inc + type) 
data.frame(type, mm)
#>   type X.Intercept. inc typeprof typewc
#> 1   bc            1  38        0      0
#> 2   bc            1  39        0      0
#> 3   bc            1  26        0      0
#> 4   wc            1  37        0      1
#> 5   wc            1  33        0      1
#> 6   wc            1  30        0      1
#> 7 prof            1  35        1      0
#> 8 prof            1  23        1      0
#> 9 prof            1  33        1      0

As you can see, type, with 2 degrees of freedom, is represented by two dummy (0/1) variables, typeprof and typewc. Together, these represent treatment contrasts (comparisons) between the baseline group type=="bc", which is coded (0, 0), and each of the others: type=="prof", coded (1, 0), and type=="wc", coded (0, 1). By default, the baseline level is the first in alphabetic or numerical order, but you can change this using relevel(), as illustrated below.
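For example, continuing with the inc and type variables above, relevel() makes "prof" the baseline, so the dummy variables now compare "bc" and "wc" with "prof"; contr.treatment() and contr.poly() display the contrast coefficients themselves.

type2 <- relevel(factor(type), ref = "prof")
head(model.matrix(~ inc + type2))

contr.treatment(4)    # treatment contrasts for a 4-level factor: 3 dummy columns
contr.poly(3)         # orthogonal polynomial contrasts: linear and quadratic trends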

In a model with the interaction inc * type, additional columns are constructed as the product of inc with each of the columns for type.

model.matrix(~ inc * type)
#>   (Intercept) inc typeprof typewc inc:typeprof inc:typewc
#> 1           1  38        0      0            0          0
#> 2           1  39        0      0            0          0
#> 3           1  26        0      0            0          0
#> 4           1  37        0      1            0         37
#> 5           1  33        0      1            0         33
#> 6           1  30        0      1            0         30
#> 7           1  35        1      0           35          0
#> 8           1  23        1      0           23          0
#> 9           1  33        1      0           33          0
#> attr(,"assign")
#> [1] 0 1 2 2 3 3
#> attr(,"contrasts")
#> attr(,"contrasts")$type
#> [1] "contr.treatment"

5.2.3 Contrasts

5.3 Regression

5.4 ANOVA

5.5 ANCOVA

5.6 Regression trees

Package summary

Packages used here:

1 package used here: knitr


  1. Taking logarithms of both sides yields the linear model, \(\log(y) = c + \beta_1 x\), where \(c = \log \beta_0\).↩︎