Although this book is primarily about multivariate models, it is useful to have an overview of the range of available techniques for univariate response models to see their uses and to appreciate how easily univariate models generalize to multivariate ones. Hence, this chapter reviews the characteristics of the standard univariate methods for explaining or predicting a single outcome variable from a set of predictors.
The key ideas are:
- For a single quantitative outcome variable, the methods of linear regression and analysis of variance (ANOVA) are subsumed within a single framework, the general linear model (GLM). Regression and ANOVA differ only in that the predictors in the former are quantitative, while those in the latter are discrete factors. They can all be fit using lm().
- These models extend directly to the multivariate case of two or more outcome variables, and are also fit using lm().
- A binary outcome, and categorical outcomes like marital status (“never married”, “married”, “separated”, “divorced”), can be handled within a different extension, the generalized linear model, as logistic regression or multinomial regression, fit using glm().
- All of these models involve linear combinations of predictors (weighted sums), fit to optimize some criterion, for example minimizing some function of the residuals or maximizing some measure of fit.
- Models and data can be more easily understood with graphics, and many statistical ideas have a visual representation in geometry.
Figure 5.1 summarizes a variety of methods for linear models, classified by number of predictors and number of response variables, and whether these are quantitative vs. discrete. For the present purposes, the key columns are the first two, for the case of one or more quantitative outcome variables.
When the predictors are also quantitative, simple regression (with a single predictor) is the basic method. The situation is more interesting when there are two or more predictors, where multiple regression allows us to assess the contribution of each predictor while controlling for the others.
When the predictor variables are all discrete or categorical, such as gender or level of education, methods like the simple t-test for comparing two groups and one-way or multi-way ANOVA apply.
Why are there so many different names for regression vs. ANOVA concepts, statistics and techniques? In regression, we use notation like $y = \beta_0 + \beta_1 x$ and speak of slopes and intercepts; in ANOVA, the same linear model is written in terms of an overall mean and treatment effects, $\mu_i = \mu + \alpha_i$, and we speak of main effects and interactions.
Well, like twins separated at birth, they grew up in homes in different places and with different parents, who were each free to choose their own names, not recognizing their shared DNA.
Methods of regression began in evolutionary biology with Francis Galton’s (1886, 1889) studies of heritability of traits, trying to understand how strongly the physical characteristics of one generation of living things resembled those in the next generation. From a study of the diameters of sweet peas in parent plants and their size in the next generation, and another on the relationship between heights of human parents and their offspring, he developed the fundamental ideas of regression. Karl Pearson (1896) later gave these ideas a rigorous mathematical treatment, developing the theory of correlation and regression.
In contrast, analysis of variance methods were raised on farms, notably the Rothamsted Experimental Station, where R. A. Fisher analyzed vast amounts of data on crop experiments designed to determine the conditions (soil condition, fertilizer treatments) that gave the greatest yields, while controlling for extraneous determinants (plots, planting). With multiple factors determining the outcome, Fisher (1923), in an experiment on yields of different varieties of potatoes given various manure treatments, devised the method of breaking down the total variance into portions attributable to each factor and presented the first ANOVA table. The method became well known after Fisher’s (1925) Statistical Methods for Research Workers.
The great synthesis of regression and ANOVA did not take place until the 1960s. At that time, methods for computing were beginning to move from programmable desk calculators to mainframe computers, largely using a collection of separate FORTRAN programs, designed for regression, one-way ANOVA, two-way ANOVA, models with interactions, and so forth. To complete an analysis, as a graduate student I often had to use three or more different programs.
Then, something remarkable happened on two fronts: theory and computation. First, in quick succession, textbooks by Scheffé (1960), Graybill (1961), Winer (1962) … began to lay out a general theory of linear models that encompassed all of these separate models, giving the “General Linear Model” a well-deserved name. Second, two symposia, one at IBM Yorktown Heights (IBM (1965)) and the other at the University of Georgia (Bashaw & Findley (1967)), resulted in the first general programs to handle all these cases in an understandable way.
A bit of matrix algebra thrown into the mix showed that most of the ideas for univariate models could be extended to multiple response variables, and so the “Multivariate Linear Model” was born. R. Darrell Bock (Bock, 1963, 1964) sketched a flowchart of the computational steps, which was implemented at the University of Chicago by Jeremy Finn (1967) in the MULTIVARIANCE program. A group at the University of North Carolina headed by Elliot Cramer developed their MANOVA program (Clyde et al., 1966) and Willard Dixon (1965) at UCLA developed the BMD programs incorporating these ideas. The ANOVA and regression twins had finally become part of a larger family.
Packages
In this chapter I use the following packages. Load them now:
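A loading block along these lines works (the exact package list is an assumption, based on the functions used later in the chapter):

library(ggplot2)   # plotting, geom_smooth()
library(matlib)    # workers data, GramSchmidt()
library(knitr)     # tables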
TODO: This stuff on linear combinations seems out of place here. Where to move it?
5.1 The General Linear Model
To establish notation and terminology, it is worthwhile to state the general linear model formally. For convenience, I use vector and matrix notation. This expresses a response variable, $y_i$, for observation $i = 1, 2, \dots, n$, as a weighted sum of $p$ regressors, $x_{i1}, x_{i2}, \dots, x_{ip}$, plus a residual, $\epsilon_i$,

$$
y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} + \epsilon_i \; ,
$$

or, expressed in matrices,

$$
\mathbf{y} = \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\epsilon} \; .
$$

The matrix $\mathbf{X}$, called the model matrix (or design matrix), contains a column of 1s for the intercept, followed by columns representing the regressors for the terms in the model.
- These can be quantitative variables like age, salary or years of education. But they can also be transformed versions, like sqrt(age) or log(salary).
- A quantitative variable can be represented by more than one model regressor, for example if it is expressed as a polynomial like poly(age, degree=2) or a natural spline like ns(salary, df=5). The model matrix portion for such terms contains one column for each degree of freedom (df), and there are df coefficients in the corresponding portion of $\boldsymbol{\beta}$.
- A categorical or discrete predictor (a factor variable in R) with $d$ levels is expressed as $d - 1$ columns in $\mathbf{X}$. Typically these are contrasts or comparisons between a baseline or reference level and each of the remaining ones, but any set of linearly independent contrasts can be used by assigning to contrasts(factor). For example, contrasts(factor) <- contr.treatment(4) for a 4-level factor assigns 3 contrasts representing comparisons with a baseline level, typically the first (in alphabetic order). For an ordered factor, such as one for political knowledge with levels “low”, “medium”, “high”, contr.poly() returns the coefficients of orthogonal polynomial contrasts representing linear and quadratic trends.
- Interactions between predictors are represented as the direct products of the corresponding columns of $\mathbf{X}$. This allows the effect of one predictor on the response to depend on the values of other predictors. For example, the interaction of two quantitative variables, $x_1$ and $x_2$, is represented by the product $x_1 \times x_2$. More generally, for variables or factors $A$ and $B$ with degrees of freedom $\mathrm{df}_A$ and $\mathrm{df}_B$, the regressors in $\mathbf{X}$ are the products of each column for $A$ with each column for $B$.
5.1.1 Model formulas
Statistical models in R, such as those fit by lm(), glm() and many other modelling functions, are expressed in a simple notation developed by Wilkinson & Rogers (1973) for the GENSTAT software system at the Rothamsted Research Station. It solves the problem of having a compact way to specify any model consisting of any combination of quantitative variables and discrete factors, their interactions, and arbitrary transformations of them.
In this notation, a model formula takes the forms
response ~ terms
response ~ term1 + term2 + ...
where the left-hand side, response, specifies the response variable in the model, and the right-hand side specifies the terms in the model, which determine the columns of the model matrix $\mathbf{X}$.
The notation y ~ x
is read as “y
is modeled by x
”. The left-hand side is usually a variable name (such as height
), but it could be an expression that evaluates to the response, such as log(salary)
or weight/height^2
which represents the body mass index.
On the right-hand side (RHS), the usual arithmetic operators, +, -, *, /, ^
have special meanings as described below. The most fundamental is that y ~ a + b
is interpreted as “y
is modeled by a
and b
”; that is, the sum of linear terms for a
and b
.
Some examples for regression-like models using only quantitative variables, x, x1, x2, x3, ...
are shown below:
y ~ x # simple linear regression
y ~ x - 1 # no intercept: regression through the origin
y ~ x + I(x^2) # quadratic model
y ~ poly(x, 3) # cubic model
y ~ x1 * x2 # crossing: x1 + x2 + x1 : x2
y ~ x1 + x2 + x3 # multiple regression
y ~ (x1 + x2 + x3)^2 # all main effects & two-way interactions
log(y) ~ x1 + poly(x, 2) # arbitrary transformation of response
y1 + y2 ~ x1 + x2 + x3 # response is sum of y1 and y2
The - operator on the right-hand side removes terms from the model; in particular, a model with no intercept (regression through the origin) is specified as y ~ x - 1 (or, perhaps more naturally, y ~ 0 + x).
Function calls on the RHS, such as poly(x, 3), are evaluated directly, but an arithmetic expression using an operator with a special model meaning, like ^, must be “protected” by wrapping the term in I(), meaning “identity” or “inhibit”. Thus, the model y ~ x + I(x^2) describes the same quadratic fit as y ~ poly(x, 2), but the former uses the raw x, x^2 values (which are typically highly correlated), while poly() converts these to orthogonal polynomial scores, which are uncorrelated (and therefore free from problems of collinearity).
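A quick numerical illustration of this point (a small sketch, using an arbitrary x):

x <- 1:20
cor(x, x^2)                  # raw x and x^2 are strongly correlated
zapsmall(cor(poly(x, 2)))    # orthogonal polynomial scores are uncorrelated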
Example 5.1 Workers data: Regression models
For the workers data (?sec-workers-data-pca), you can fit simple regression models predicting income from years of experience, using linear and quadratic models as follows:
data(workers, package = "matlib")
workers.mod1 <- lm(Income ~ Experience, data=workers)
coef(workers.mod1) |> t() |> t()
#> [,1]
#> (Intercept) 29.16
#> Experience 1.12
workers.mod2 <- lm(Income ~ poly(Experience, 2), data=workers)
coef(workers.mod2) |> t() |> t()
#> [,1]
#> (Intercept) 46.5
#> poly(Experience, 2)1 39.1
#> poly(Experience, 2)2 -11.2
It is simplest to understand these models by plotting the data overlaid with the fitted regressions. This uses geom_smooth() and specifies the smoothing model as method = "lm" with a formula, which is y ~ x for the linear model and y ~ poly(x, 2) for the quadratic.
ggplot(data = workers,
aes(x = Experience, y = Income)) +
geom_point(size = 3) +
geom_smooth(method = "lm", formula = y ~ x,
se = FALSE, linewidth = 2,
color = "blue") +
geom_smooth(method = "lm", formula = y ~ poly(x, 2),
se = FALSE, linewidth = 2,
color = "red")
The coefficients of the linear model are also easy to interpret:

$$
\widehat{\text{Income}} = 29.16 + 1.12 \times \text{Experience}
$$

So a worker with zero years of experience can expect an income of $29,162, and this should increase by $1,119 for each additional year. However, it is not so simple to interpret the coefficients when a poly() term is used. Naively plugging in the coefficients for workers.mod2 would give

$$
\widehat{\text{Income}} = 46.5 + 39.1 \times \text{Experience} - 11.2 \times \text{Experience}^2 \; ,
$$

which makes no sense on the scale of years of experience.
The problem is that Experience in model workers.mod2 is represented not by its raw values, but rather by orthogonal polynomial scores. To obtain coefficients that apply to the raw values, use raw = TRUE in poly():
workers.mod2b <- lm(Income ~ poly(Experience, 2, raw = TRUE),
data=workers) |>
print()
#>
#> Call:
#> lm(formula = Income ~ poly(Experience, 2, raw = TRUE), data = workers)
#>
#> Coefficients:
#> (Intercept) poly(Experience, 2, raw = TRUE)1
#> 23.0680 2.2952
#> poly(Experience, 2, raw = TRUE)2
#> -0.0335
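As a check (assuming the models above are in the workspace), the two parameterizations describe the same fitted curve:

all.equal(fitted(workers.mod2), fitted(workers.mod2b))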
5.1.1.1 Factors
Factor variables are treated specially in linear models, but have simple notations in R formulas. The following examples use A, B, C
to represent discrete factors with two or more levels.
y ~ A # one-way ANOVA
y ~ A + B # two-way, main effects only
y ~ A * B # full two-way, with interaction
y ~ A + B + A:B # same, in long-hand
y ~ x + A # one-way ANCOVA
y ~ (A + B + C)^2 # three factors: main effects & all two-way interactions
5.1.1.2 Crossing
The * operator has a special meaning, used to specify the crossing of variables and factors, and : specifies interactions (products of variables). So, the model y ~ x1 * x2 is expanded to give y ~ x1 + x2 + x1:x2, and the interaction term x1:x2 is calculated as the elementwise product of the two variables, $x_1 \times x_2$, which means that the coefficient for x1:x2 estimates how the slope for x1 changes with the value of x2 (and vice versa).
y ~ A * B for factors is similar, expanding to y ~ A + B + A:B, but the columns in the model matrix represent contrasts among the factor levels, as described in detail below (Section 10.3.1). The main effects A and B come from contrasts among the means of their factor levels, and the interaction term A:B reflects differences among the means of A across the levels of B (and vice-versa).
The model formula y ~ x + A specifies an ANCOVA model, with different intercepts for the levels of A but a common slope for x. Adding the interaction x:A in the model y ~ x * A allows separate slopes and intercepts for the groups.
5.1.1.3 Powers
The ^
exponent operator indicates powers of a term expression to a specified degree. Thus the term (A + B)^2
is identical to (A + B) * (A + B)
which expands to the main effects of A
, B
and their interaction, also identical to A * B
. In general, the product of parenthesized terms expands as in ordinary algebra,
y ~ (A + B) * (C + D) -> A + B + C + D + A:C + A:D + B:C + B:D
Powers get more interesting with more terms, so (A + B + C)^2
is the same as (A + B + C) * (A + B + C)
, which includes main effects of A
, B
and C
as well as all two-way interactions, A:B, A:C
and B:C
. The model formula (A + B + C)^3
expands to include all two-way interactions and the three-way interaction A:B:C
.
(A + B + C)^3 -> A + B + C + A:B + A:C + B:C + A:B:C
In this context, - can be used to remove terms, as shown in the following examples:

(A + B + C)^2 <-> (A + B + C)^3 - A:B:C
(A + B + C)^3 - B:C - A:B:C <-> A + B + C + A:B + A:C
Finally, the symbol .
on the right-hand side specifies all terms in the current dataset other than the response. Thus if you have a data.frame containing y, x1, x2, ..., x6
, you can specify a model with all variables except x6
as predictors as
y ~ . - x6
To test what we’ve covered above, consider these questions:

- What do you think the model formula y ~ .^2 means in a data set containing variables x1, x2, x3, and x4?
- What about the formula y ~ .^2 - A:B:C:D in a data set with factors A, B, C, D?
You can work out questions like these, or explore model formulas in general, using terms() on a "formula" object. The labels of these terms can then be concatenated into a string and turned back into a formula using as.formula():
f <- formula(y ~ (x1 + x2 + x3 + x4)^2)
terms = attr(terms(f), "term.labels")
terms |> paste(collapse = " + ")
#> [1] "x1 + x2 + x3 + x4 + x1:x2 + x1:x3 + x1:x4 + x2:x3 + x2:x4 + x3:x4"
# convert back to a formula
as.formula(sprintf("y ~ %s", paste(terms, collapse=" + ")))
#> y ~ x1 + x2 + x3 + x4 + x1:x2 + x1:x3 + x1:x4 + x2:x3 + x2:x4 +
#> x3:x4
5.1.2 Model matrices
As noted above, a model formula is used to generate the model matrix $\mathbf{X}$. lm() uses stats::model.matrix() internally for this purpose, and you can use this function to explore how factors, interactions and other model terms are represented in a model object.
For a small example, here are a few observations representing income (inc) and type of occupation, taking on the values bc (blue collar), wc (white collar) and prof (professional). model.matrix() takes a one-sided formula with the terms on the right-hand side. The main-effects model looks like this:
set.seed(42)
inc <- round(runif(n=9, 20, 40))
type <- rep(c("bc", "wc", "prof"), each =3)
mm <- model.matrix(~ inc + type)
data.frame(type, mm)
#> type X.Intercept. inc typeprof typewc
#> 1 bc 1 38 0 0
#> 2 bc 1 39 0 0
#> 3 bc 1 26 0 0
#> 4 wc 1 37 0 1
#> 5 wc 1 33 0 1
#> 6 wc 1 30 0 1
#> 7 prof 1 35 1 0
#> 8 prof 1 23 1 0
#> 9 prof 1 33 1 0
As you can see, type, with 2 degrees of freedom, is represented by two dummy (0/1) variables, labeled typeprof and typewc here. Together, these represent treatment contrasts (comparisons) between the baseline group type == "bc", which is coded (0, 0), and each of the others: type == "prof", coded (1, 0), and type == "wc", coded (0, 1). Different coding schemes are described in the following section.
In a model with the interaction inc * type
, additional columns are constructed as the product of inc
with each of the columns for type
. We will see below how this generalizes to an arbitrary number of predictor terms and their possible interactions.
model.matrix(~ inc * type)
#> (Intercept) inc typeprof typewc inc:typeprof inc:typewc
#> 1 1 38 0 0 0 0
#> 2 1 39 0 0 0 0
#> 3 1 26 0 0 0 0
#> 4 1 37 0 1 0 37
#> 5 1 33 0 1 0 33
#> 6 1 30 0 1 0 30
#> 7 1 35 1 0 35 0
#> 8 1 23 1 0 23 0
#> 9 1 33 1 0 33 0
#> attr(,"assign")
#> [1] 0 1 2 2 3 3
#> attr(,"contrasts")
#> attr(,"contrasts")$type
#> [1] "contr.treatment"
5.1.3 Coding factors and contrasts
Discrete explanatory variables, such as race, type of occupation or level of education, require special attention in linear models because, unlike continuous variables, they cannot be entered into the model equation just as they are. Instead, we need some way to code those variables numerically.
A key insight is that your choice of a coding scheme changes the meaning of the model parameters, and allows you to perform different comparisons (test different statistical hypotheses) about the means of the category levels according to meaningful questions in your research design. For a more general discussion of coding schemes, see Fox & Weisberg (2018), §4.7, and the vignette Coding Matrices, Contrast Matrices and Linear Models for the codingMatrices package.
Each coding scheme for a factor represents the same model in terms of fitted values and overall significance for that term, but they differ in how the coefficients are parameterized and interpreted. This is crucial to understand, because tests of the coefficients can directly answer different research questions depending on the coding scheme used.
In R, categorical variables are called factors, usually created by g <- factor(g) or g <- as.factor(g) for a discrete variable g. If the levels of g are ordered, such as type of occupation with levels "bc" < "wc" < "prof", or dose of a drug, "low" < "medium" < "high", you can use g <- ordered(g) to reflect this.
In any case, a factor with $d$ levels is represented in the model matrix by $d - 1$ columns, whose form is determined by the contrasts assigned to the factor.
Base R provides several coding schemes via assignment to the contrasts() function for a factor, as in contrasts(df$Drug) <- ..., using one of:

- contr.treatment(): Compares each level to a reference level using dummy (0, 1) variables. This is the default, and the reference level is taken as the first (in alphabetic or numerical order). You can change the reference level using relevel() or reorder() for the factor, or simply using factor(A, levels = ...).
- contr.sum(): Compares each level (except the last) to the grand mean of all levels.
- contr.helmert(): Compares each level to the mean of the previous levels, which is useful for ordered categories such as type of occupation with levels "bc" < "wc" < "prof" or dose of a drug, "low" < "medium" < "high".
- contr.poly(): For ordered factors with numeric (or equally spaced) levels, this creates orthogonal polynomial contrasts, representing the linear, quadratic, cubic, … trends in the factor means, as if the factor appeared in the model as a numeric variable.
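For example, to use deviation coding for a three-level factor in a data frame (a minimal sketch, assuming df$Drug exists as in the assignment shown above):

contrasts(df$Drug) <- contr.sum(3)   # deviation (sum-to-zero) coding
contrasts(df$Drug)                   # inspect the contrast matrix now assigned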
TODO: Move some stuff from 10.3.1 on contrasts here
I take up some of the details of these coding schemes below. But first, it is useful to define exactly what I mean by a contrast. For a factor with $d$ levels, a contrast is a set of weights $\mathbf{c} = (c_1, c_2, \dots, c_d)$ with $\sum_i c_i = 0$, applied to the means $\mu_1, \mu_2, \dots, \mu_d$ of the factor levels; the value $\sum_i c_i \mu_i$ compares the subset of levels given positive weights with the subset given negative weights.
Two contrasts, $\mathbf{c}_1$ and $\mathbf{c}_2$, are orthogonal if the sum of products of their weights is zero, $\sum_i c_{1i}\, c_{2i} = 0$.
Orthogonal contrasts correspond to statistically independent tests. This is nice because they reflect separate, non-overlapping research questions. Another consequence is that the sum of squares for the overall hypothesis of differences among the groups is exactly decomposed into the sums of squares accounted for by the $d - 1$ individual contrasts.
Treatment coding
Let’s examine R’s default coding scheme, contr.treatment (also called dummy coding), for a factor with 4 levels: ‘a’, ‘b’, ‘c’, and ‘d’, with a view to understanding the relationship between the true population means, $\mu_a, \mu_b, \mu_c, \mu_d$, and the model parameters, $\beta_0, \beta_1, \beta_2, \beta_3$.
C <- contr.treatment(letters[1:4]) |> print()
#> b c d
#> a 0 0 0
#> b 1 0 0
#> c 0 1 0
#> d 0 0 1
Here, the columns of C correspond to three dummy variables, $x_b, x_c, x_d$, for the levels b, c and d compared to the reference level a. The design matrix for the means of the four groups consists of a column of 1s for the intercept together with these three columns. With this coding, the population means are expressed in terms of the parameters as

$$
\mu_a = \beta_0 \; , \quad \mu_b = \beta_0 + \beta_1 \; , \quad \mu_c = \beta_0 + \beta_2 \; , \quad \mu_d = \beta_0 + \beta_3 \; .
$$

Thus we have,

$$
\beta_0 = \mu_a \; , \quad \beta_1 = \mu_b - \mu_a \; , \quad \beta_2 = \mu_c - \mu_a \; , \quad \beta_3 = \mu_d - \mu_a \; .
$$

Note that with treatment coding the intercept is the mean of the reference level, not the grand mean, and each remaining coefficient estimates the difference between the mean of that level and the mean of the reference level. You can solve for the parameters in terms of the means symbolically as follows:
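One way to see this numerically (a small sketch of the same idea): the inverse of the full-rank design matrix for the cell means gives, in each row, the weights applied to the group means $(\mu_a, \mu_b, \mu_c, \mu_d)$ that define the corresponding parameter.

X <- cbind(intercept = 1, contr.treatment(letters[1:4]))
solve(X)   # row 1: beta_0 = mu_a; rows 2-4: beta_j = mu_j - mu_a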
Deviation coding
Another common coding scheme, useful for unordered factors, is deviation coding, given by contr.sum(). This has the property that the implied level effects are constrained to sum to zero. With this coding, the intercept is the grand mean of the level means, $\beta_0 = \bar{\mu} = \frac{1}{d}\sum_i \mu_i$, and each coefficient estimates the deviation of one level mean from the grand mean, $\beta_j = \mu_j - \bar{\mu}$.
A (redundant) coefficient for the last group is omitted, because with this coding it would equal the negative of the sum of the others, $\mu_d - \bar{\mu} = -\sum_{j=1}^{d-1} \beta_j$.
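For comparison with the treatment-coding display above, the deviation (sum-to-zero) coding matrix for a four-level factor looks like this:

C <- contr.sum(letters[1:4]) |> print()
#>   [,1] [,2] [,3]
#> a    1    0    0
#> b    0    1    0
#> c    0    0    1
#> d   -1   -1   -1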
Helmert contrasts
For ordered factors, it is sensible to take the ordering into account in interpreting the results. Helmert contrasts are designed for this purpose. The intercept is again the grand mean across all levels. Each contrast compares the mean of a given level to the average of all previous ones in the order; they contrast the second level with the first, the third with the average of the first two, and so on.
C <- contr.helmert(letters[1:4]) |> print()
#> [,1] [,2] [,3]
#> a -1 -1 -1
#> b 1 -1 -1
#> c 0 2 -1
#> d 0 0 3
It is easier to understand these if the columns are normalized so that the largest value in each column is 1 (dividing the columns by 1, 2, and 3, respectively). The contrast weights then display the comparisons directly: the first column contrasts level b with a, the second contrasts c with the average of a and b, and the third contrasts d with the average of a, b, and c.
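A quick way to do this normalization (a small sketch; MASS::fractions() is used only for a cleaner display):

C.norm <- sweep(C, 2, apply(C, 2, max), "/")   # divide each column by its largest entry
MASS::fractions(C.norm)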
Note that you can easily reverse the ordering of the comparisons to contrast each of the first $d - 1$ levels with the average of the levels that follow it:
C.rev <- C[4:1, 3:1]
row.names(C.rev) <- letters[1:4]
C.rev
#> [,1] [,2] [,3]
#> a 3 0 0
#> b -1 2 0
#> c -1 -1 1
#> d -1 -1 -1
Polynomial contrasts
For ordered factors that are also numeric, like Age = c(8, 9, 10, 11)
or those that can be considered equally spaced along some continuum, polynomial contrasts, given by contr.poly()
, provide orthogonal (uncorrelated) contrasts that assess the degree to which the factor means vary linearly, quadratically, and so on with the factor levels.
contr.poly() scales each column so that its sum of squares is 1. Each pair of columns is orthogonal, so that $\mathbf{C}^\top \mathbf{C} = \mathbf{I}$, as the calculation below confirms.
C <- contr.poly(4) |> print()
#> .L .Q .C
#> [1,] -0.671 0.5 -0.224
#> [2,] -0.224 -0.5 0.671
#> [3,] 0.224 -0.5 -0.671
#> [4,] 0.671 0.5 0.224
# show they are orthonormal
t(C) %*% C |> zapsmall()
#> .L .Q .C
#> .L 1 0 0
#> .Q 0 1 0
#> .C 0 0 1
We can get a better sense of orthogonal polynomial contrasts by taking a numeric vector $x$ and forming a matrix M whose columns contain its powers, $x^0, x^1, x^2, x^3$.
Then we can make the columns of M orthogonal using the Gram-Schmidt method, where each successive column after the first is made orthogonal to all previous columns by subtracting their projections on that column. Plotting these, as in Figure 5.3, shows that the coefficients for the linear term plot as a straight line, those for the quadratic term as a U-shaped curve, and those for the cubic term as a curve with two bends.
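A minimal construction of M along these lines (the particular x values are an assumption):

x <- 1:10                          # an equally spaced numeric vector
M <- outer(x, 0:3, `^`)            # columns are the powers x^0, x^1, x^2, x^3
colnames(M) <- paste0("x^", 0:3)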
op <- par(mar = c(4, 4, 1, 1)+.1)
M1 <- matlib::GramSchmidt(M, normalize = FALSE)
matplot(M1,
type = "b",
pch = as.character(0:3),
cex = 1.5,
cex.lab = 1.5,
lty = 1,
lwd = 3,
xlab = "X",
ylab = "Coefficient")
Custom contrasts
You don’t have to be constrained by the kinds of comparisons available in the contr.*() functions. For a factor with $d$ levels, you can define any set of $d - 1$ linearly independent contrasts that reflect comparisons of substantive interest and assign them to the factor with contrasts().
One useful idea for defining orthogonal comparisons of substantive interest is the idea of nested dichotomies. Here you would start with contrasting one meaningful subset of the groups against all the others. Then, successive contrasts are defined by making dichotomies among the groups within each subset.
TODO: Make a figure for the two examples
As one example, suppose we are looking at support on some issue among four political parties: A, B, C and D, where A and B are left-of-center and C and D are to the right of the political spectrum. The following comparisons might be of interest:
AB.CD: {A, B} vs. {C, D}
A.B: {A} vs. {B}
C.D: {C} vs. {D}
You could set up these comparisons as the following contrasts:
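One way to set these up (a sketch; the column names follow the comparison labels above):

C <- cbind(AB.CD = c( 1,  1, -1, -1),
           A.B   = c( 1, -1,  0,  0),
           C.D   = c( 0,  0,  1, -1))
rownames(C) <- c("A", "B", "C", "D")
C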
With a data frame like df containing the factor party, you would then use these contrasts in a model by assigning C to contrasts(df$party):
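A sketch of this step, assuming df contains the factor party (with levels A-D) and a hypothetical response variable support:

contrasts(df$party) <- C
party.mod <- lm(support ~ party, data = df)
coef(party.mod)   # each coefficient estimates one of the comparisons defined above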
Then, in a linear model, the coefficients estimate the mean difference between the averages of the subset of parties in each comparison. For example, parties A and B on average are 2.11 higher in support than parties C and D; support for party A is 2.37 greater than party B and so forth.
For another example, say we are examining differences among three psychiatric diagnostic groups of patients, “bipolar”, “manic” and “depressed”, and also have a matched normal comparison group. One set of meaningful comparisons would be as follows:
D1: {Normal} vs. {Bipolar, Depressed, Manic}
D2: {Bipolar} vs. {Depressed, Manic}
D3: {Depressed} vs. {Manic}
Weights for these contrasts are assigned by making them positive values for the groups in one subset and negative for the other, and giving numbers that sum to zero for each one:
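A sketch of such a contrast matrix (group order assumed to be Normal, Bipolar, Depressed, Manic):

C2 <- cbind(D1 = c( 3, -1, -1, -1),
            D2 = c( 0,  2, -1, -1),
            D3 = c( 0,  0,  1, -1))
rownames(C2) <- c("Normal", "Bipolar", "Depressed", "Manic")
C2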
These have the same form as the reversed Helmert contrasts considered earlier.
TODO: Do I need to go through the details of univariate regression, ANOVA, … here?

Regression
5.2 ANOVA
5.3 ANCOVA
Not shown in Figure 5.1 are rows for cases where the predictor variables include a mixture of quantitative variables and discrete factors. These have two “flavors”, depending on our emphasis.
Analysis of covariance (ANCOVA) is the term for models where the main emphasis is on comparing the means of different groups, but where there are one or more quantitative predictors (for example, a pre-test score on the outcome variable) that need to be taken into account, so we can assess the differences among groups controlling or adjusting for those predictors, which are called covariates. (Strictly speaking, ANCOVA models typically assume that the regression slopes are the same for all groups.)
Homogeneity of regression treats the same type of data, but the main interest is in the regression relation between $y$ and a set of quantitative $x$s when there are also one or more grouping factors. In this case, we want to determine whether the same regression relation applies in all groups or whether the groups differ in their intercepts and/or slopes. Another term for this class of questions is moderator effects, which refers to interactions between the quantitative predictors and group factors.
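In R, the two flavors correspond to model formulas without and with the interaction term (a sketch, assuming a data frame dat with response y, covariate x and a factor group):

ancova.mod <- lm(y ~ x + group, data = dat)   # common slope, different intercepts
slopes.mod <- lm(y ~ x * group, data = dat)   # separate slopes and intercepts
anova(ancova.mod, slopes.mod)                 # test homogeneity of regression slopes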
TODO Move the initial discussion of ANCOVA from Section 10.7 here with the generic example.
5.4 Regression trees
Package summary
3 packages used here: ggplot2, knitr, nestedLogit
Taking logarithms of both sides would yield the linear model, $\log(\text{BMI}) = \log(\text{weight}) - 2 \log(\text{height})$.