Although this book is primarily about multivariate models, it is useful to have an overview of the range of available techniques for univariate response models to see their uses and to appreciate how easily univariate models generalize to multivariate ones. Hence, this chapter reviews the characteristics of the standard univariate methods for explaining or predicting a single outcome variable from a set of predictors.

The key ideas are:

Figure 5.1 summarizes a variety of methods for linear models, classified by the number of predictors and the number of response variables, and whether these are quantitative vs. discrete. For present purposes, the key columns are the first two, for the case of one or more quantitative outcome variables.

Figure 5.1: Techniques for linear models classified by number of predictors and number of response variables, and whether these are quantitative vs. discrete

When the predictors are also quantitative, simple regression ($p = 1$) generalizes to multivariate regression with two or more outcomes ($q > 1$). For example, we might want to predict weight and body mass index jointly from a person’s height.

The situation is more interesting when there are $p > 1$ predictors. The most common multivariate generalization is multivariate multiple regression (MMRA), where each outcome is regressed on the predictors, as if done separately for each outcome, but using multivariate tests that take the correlations among the responses into account. Other methods for this case include canonical correlation analysis, which tries to explain all relations between the set of ys and the set of xs through maximally correlated linear combinations of each.

When the predictor variables are all discrete or categorical, such as gender or level of education, methods like the simple t-test, one-way ANOVA and factorial ANOVA with $q = 1$ outcome measures all have simple extensions to the case of $q > 1$ outcomes.

History Corner

Why are there so many different names for regression vs. ANOVA concepts, statistics and techniques? In regression, we use notation like $x_1, x_2, \ldots$ to refer to predictors in a model, while in ANOVA, factors $A, B, \ldots$ are called main effects. In regression applications, we often test linear hypotheses, are interested in coefficients and evaluate a model with an $R^2$ statistic, while in ANOVA we may test contrasts among factor levels and use F-tests to evaluate models.

Well, like twins separated at birth, they grew up in homes in different places and with different parents, who were each free to choose their own names, not recognizing their shared DNA.

Methods of regression began in evolutionary biology with Francis Galton’s (, ) studies of the heritability of traits, trying to understand how strongly the physical characteristics of one generation of living things resembled those in the next generation. From a study of the diameters of sweet peas in parent plants and their size in the next generation, and another on the relationship between the heights of human parents and their offspring, he developed the fundamental ideas of regression. Karl Pearson () later gave these ideas a rigorous mathematical foundation, developing the product-moment correlation coefficient along the way.

In contrast, analysis of variance methods were raised on farms, notably the Rothamsted Experimental Station, where R. A. Fisher analyzed vast amounts of data from crop experiments designed to determine the conditions (soil condition, fertilizer treatments) that gave the greatest yields, while controlling for extraneous determinants (the planting plots). With multiple factors determining the outcome, Fisher (), in an experiment on the yields of different varieties of potatoes given various manure treatments, devised the method of breaking down the total variance into portions attributable to each factor and presented the first ANOVA table. The method became well known after Fisher’s () Statistical Methods for Research Workers.

The great synthesis of regression and ANOVA did not take place until the 1960s. At that time, methods for computing were beginning to move from programmable desk calculators to mainframe computers, largely using a collection of separate FORTRAN programs, designed for regression, one-way ANOVA, two-way ANOVA, models with interactions, and so forth. To complete an analysis, as a graduate student I often had to use three or more different programs.

Then, something remarkable happened on two fronts: theory and computation. First, in quick succession, textbooks by Scheffé (), Graybill (), Winer () … began to lay out a general theory of linear models that encompassed all of these separate models, giving the “General Linear Model” a well-deserved name. Second, two symposia, one at IBM Yorktown Heights (IBM ()) and the other at the University of Georgia (Bashaw & Findley ()), resulted in the first general programs to handle all these cases in an understandable way.

A bit of matrix algebra thrown into the mix showed that most of the ideas for univariate models could be extended to multiple response variables, and so the “Multivariate Linear Model” was born. R. Darrell Bock (, ) sketched a flowchart of the computational steps, which was implemented at the University of Chicago by Jeremy Finn () in the MULTIVARIANCE program. A group at the University of North Carolina headed by Elliot Cramer developed their MANOVA program (), and Wilfrid Dixon () at UCLA developed the BMD programs incorporating these ideas. The ANOVA and regression twins had finally become part of a larger family.

Packages

In this chapter I use the following packages. Load them now:
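The chunk below is a minimal sketch, inferred from the code appearing later in this chapter; matlib and MASS are also called below via the :: operator, so loading them here is optional.

library(ggplot2)     # ggplot(), geom_smooth()
library(matlib)      # workers data, GramSchmidt()
library(MASS)        # fractions()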

TODO: This stuff on linear combinations seems out of place here. Where to move it?

5.1 The General Linear Model

To establish notation and terminology, it is worthwhile to state the general linear model formally. For convenience, I use vector and matrix notation. This expresses a response variable, $\mathbf{y} = (y_1, y_2, \ldots, y_n)^\mathsf{T}$ for $n$ observations, as a sum of terms involving $p$ regressors, $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_p$, each of length $n$.

$$
\mathbf{y} = \beta_0 + \beta_1 \mathbf{x}_1 + \beta_2 \mathbf{x}_2 + \cdots + \beta_p \mathbf{x}_p + \boldsymbol{\epsilon}
           = [\mathbf{1}, \mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_p] \, \boldsymbol{\beta} + \boldsymbol{\epsilon}
\tag{5.1}
$$

or, expressed in matrices,

$$
\underset{n \times 1}{\mathbf{y}} = \underset{n \times (p+1)}{\mathbf{X}} \; \underset{(p+1) \times 1}{\boldsymbol{\beta}} + \boldsymbol{\epsilon}
$$

The matrix $\mathbf{X}$ is called the model matrix and contains the numerical representations of the predictor variables, called regressors. The essential thing about a linear model is that it is linear in the parameters $\beta_i$. That is, the predicted value of $\mathbf{y}$ is a linear combination of the $\mathbf{x}_i$ with weights $\beta_i$. An example of a nonlinear model is the exponential growth model, $y = \beta_0 + e^{\beta_1 x}$, where the parameter $\beta_1$ appears as an exponent.

  • These can be quantitative variables like age, salary or years of education. But they can also be transformed versions, like sqrt(age) or log(salary).

  • A quantitative variable can be represented by more than one model regressor, for example if it is expressed as a polynomial like poly(age, degree=2) or a natural spline like ns(salary, df=5). The model matrix portion for such terms contains one column for each degree of freedom (df) and there are df coefficients in the corresponding portion of $\boldsymbol{\beta}$ (a brief illustration follows this list).

  • A categorical or discrete predictor (a factor variable in R) with $d$ levels is expressed as $d - 1$ columns in $\mathbf{X}$. Typically these are contrasts or comparisons between a baseline or reference level and each of the remaining ones, but any set of $d - 1$ linearly independent contrasts can be used by assigning to contrasts(factor). For example, contrasts(factor) <- contr.treatment(4) for a 4-level factor assigns 3 contrasts representing comparisons with a baseline level, typically the first (in alphabetic order). For an ordered factor, such as one for political knowledge with levels “low”, “medium”, “high”, contr.poly() returns the coefficients of orthogonal polynomial contrasts representing linear and quadratic trends.

  • Interactions between predictors are represented as the direct products of the corresponding columns of $\mathbf{X}$. This allows the effect of one predictor on the response to depend on the values of other predictors. For example, the interaction of two quantitative variables, $x_1$ and $x_2$, is represented by the product $x_1 \times x_2$. More generally, for variables or factors A and B with degrees of freedom $\text{df}_A$ and $\text{df}_B$, the regressors in $\mathbf{X}$ are the $\text{df}_A \times \text{df}_B$ products of each column for A with each column for B.
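To illustrate the second and third points, here is a brief sketch with made-up data and hypothetical variable names (salary, educ), showing how a spline term and a factor each expand into several columns of the model matrix:

library(splines)    # for ns()

set.seed(7)
salary <- runif(6, 30, 90)
educ   <- factor(rep(c("HS", "BA", "Grad"), times = 2))

# one column per df for the spline; d - 1 = 2 dummy columns for the 3-level factor
model.matrix(~ ns(salary, df = 3) + educ)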

5.1.1 Model formulas

Statistical models in R, such as those fit by lm(), glm() and many other modelling functions, are expressed in a simple notation developed by Wilkinson & Rogers () for the GENSTAT software system at the Rothamsted Experimental Station. It provides a compact way to specify any model consisting of any combination of quantitative variables and discrete factors, their interactions, and arbitrary transformations of them.

In this notation, a model formula takes one of the forms

response ~ terms
response ~ term1 + term2 + ...

where the left-hand side, response, specifies the response variable in the model and the right-hand side specifies the terms in the model, which determine the columns of the model matrix X; the coefficients β are implied and not represented explicitly in the formula.

The notation y ~ x is read as “y is modeled by x”. The left-hand side is usually a variable name (such as height), but it could be an expression that evaluates to the response, such as log(salary) or weight/height^2, which represents the body mass index.

On the right-hand side (RHS), the usual arithmetic operators, +, -, *, /, ^, have special meanings, as described below. The most fundamental is that y ~ a + b is interpreted as “y is modeled by a and b”; that is, the sum of linear terms for a and b.

Some examples for regression-like models using only quantitative variables, x, x1, x2, x3, ... are shown below:

y ~ x                      # simple linear regression
y ~ x - 1                  # no intercept: regression through the origin 
y ~ x + I(x^2)             # quadratic model
y ~ poly(x, 3)             # cubic model
y ~ x1 * x2                # crossing: x1 + x2  +  x1 : x2
y ~ x1 + x2 + x3           # multiple regression
y ~ (x1 + x2 + x3)^2       # all main effects & two-way interactions
log(y) ~ x1 + poly(x, 2)   # arbitrary transformation of response
y1 + y2 ~ x1 + x2 + x3     # response is sum of y1 and y2

The intercept $\beta_0$ is automatically included in the model without the need to specify it explicitly. The minus sign, -, on the right-hand side removes terms from the model, so a model with no intercept ($\beta_0 = 0$) can be specified as y ~ x - 1 (or, perhaps more naturally, y ~ 0 + x).

Function calls on the RHS, such as poly(x, 3), are evaluated directly, but to use an operator like ^ in its arithmetic sense, the term must be “protected” by wrapping it in I(), meaning “identity” or “inhibit”. Thus, the model y ~ x + I(x^2) means the quadratic model $y = \beta_0 + \beta_1 x + \beta_2 x^2$. This differs from the model y ~ poly(x, 2) in that the former uses the raw $x, x^2$ values (which are typically highly correlated), while poly() converts these to orthogonal polynomial scores, which are uncorrelated (and therefore free from problems of collinearity).
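The two formulations are the same model in different parameterizations. A quick check with simulated data (hypothetical x and y) confirms that their fitted values agree:

set.seed(1)
x <- runif(50, 1, 20)
y <- 2 + 0.5 * x - 0.02 * x^2 + rnorm(50)

mod_raw  <- lm(y ~ x + I(x^2))     # raw polynomial terms
mod_orth <- lm(y ~ poly(x, 2))     # orthogonal polynomial terms

all.equal(fitted(mod_raw), fitted(mod_orth))   # TRUE: identical fitted values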

Example 5.1 Workers data: Regression models

For the workers data (?sec-workers-data-pca) you can fit simple regression models predicting income from years of experience using a linear and quadratic model as follows:

data(workers, package = "matlib")
workers.mod1 <- lm(Income ~ Experience, data=workers)
coef(workers.mod1) |> t() |> t()
#>              [,1]
#> (Intercept) 29.16
#> Experience   1.12

workers.mod2 <- lm(Income ~ poly(Experience, 2), data=workers)
coef(workers.mod2) |> t() |> t()
#>                       [,1]
#> (Intercept)           46.5
#> poly(Experience, 2)1  39.1
#> poly(Experience, 2)2 -11.2

It is simplest to understand these models by plotting the data overlaid with the fitted regressions. This uses geom_smooth() and specifies the smoothing model as method = "lm" with a formula, which is y ~ x for the linear model and y ~ poly(x, 2) for the quadratic.

ggplot(data = workers,
       aes(x = Experience, y = Income)) +
  geom_point(size = 3) +
  geom_smooth(method = "lm", formula = y ~ x,
              se = FALSE, linewidth = 2,
              color = "blue") +
  geom_smooth(method = "lm", formula = y ~ poly(x, 2),
              se = FALSE, linewidth = 2,
              color = "red") 
Figure 5.2: Workers data with fitted linear and quadratic models for years of experience.

The coefficients of the linear model are also easy to interpret:

$$
\widehat{\text{Income}} = 29.162 + 1.119 \,(\text{Experience})
$$

So a worker with zero years of experience can expect an income of $29,162, and this should increase by $1,119 for each additional year. However, it is not so simple to interpret the coefficients when a poly() term is used. Naively plugging in the coefficients for workers.mod2 gives

$$
\widehat{\text{Income}} = 46.5 + 39.111 \,(\text{Experience}) - 11.16 \,(\text{Experience}^2)
$$

The problem is that x = Experience in model workers.mod2 is represented not by the raw values, but rather by values of $x$ and $x^2$ that have been made to be uncorrelated. If you really want to interpret the coefficient values in terms of years of experience, use the option raw = TRUE in poly():

workers.mod2b <- lm(Income ~ poly(Experience, 2, raw = TRUE), 
                    data=workers) |>
  print()
#> 
#> Call:
#> lm(formula = Income ~ poly(Experience, 2, raw = TRUE), data = workers)
#> 
#> Coefficients:
#>                      (Intercept)  poly(Experience, 2, raw = TRUE)1  
#>                          23.0680                            2.2952  
#> poly(Experience, 2, raw = TRUE)2  
#>                          -0.0335

$$
\widehat{\text{Income}} = 23.07 + 2.3 \,(\text{Experience}) - 0.03 \,(\text{Experience}^2)
$$

This says that income is predicted to be $23,068 with no experience and to increase initially by about $2,295 per year, but that this yearly increase itself declines by roughly $67 with each additional year (twice the quadratic coefficient of about -$34). Some further details of orthogonal polynomials are explained below.

5.1.1.1 Factors

Factor variables are treated specially in linear models, but have simple notations in R formulas. The following examples use A, B, C to represent discrete factors with two or more levels.

y ~ A                 # one-way ANOVA
y ~ A + B             # two-way, main effects only
y ~ A * B             # full two-way, with interaction
y ~ A + B + A:B       # same, in long-hand
y ~ x + A             # one-way ANCOVA
y ~ (A + B + C)^2     # three-way ANOVA, incl. all two-way interactions

5.1.1.2 Crossing

The * operator has a special meaning: it specifies the crossing of variables and factors, while : specifies interactions (products of variables). So, the model y ~ x1 * x2 is expanded to give y ~ x1 + x2 + x1:x2 and the interaction term x1:x2 is calculated as $x_1 \times x_2$. In algebraic notation (omitting the error term) this works out to the model,

$$
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2 = \beta_0 + (\beta_1 + \beta_3 x_2)\, x_1 + \beta_2 x_2 \; ,
$$

which means that the coefficient for $x_1$ in the model is not constant over all values of $x_2$, but rather changes with the value of $x_2$. If $\beta_3 > 0$, the slope for $x_1$ increases with $x_2$, and vice-versa.

y ~ A * B for factors is similar, expanding to y ~ A + B + A:B, but the columns in the model matrix represent contrasts among the factor levels, as described in detail below. The main effects A and B come from contrasts among the means of their factor levels, and the interaction term A:B reflects differences among the means of A across the levels of B (and vice-versa).

The model formula y ~ x + A specifies an ANCOVA model with different intercepts for the levels of A, but with a common slope for x. Adding the interaction x:A, as in the model y ~ x * A, allows separate slopes and intercepts for the groups.
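A minimal sketch with simulated data and a hypothetical two-level factor g makes the distinction concrete:

set.seed(101)
g <- factor(rep(c("ctrl", "treat"), each = 25))
x <- runif(50, 0, 10)
y <- 2 + 1.5 * x + 3 * (g == "treat") + rnorm(50)

coef(lm(y ~ x + g))   # common slope for x, different intercepts
coef(lm(y ~ x * g))   # adds x:gtreat, allowing a different slope for each group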

5.1.1.3 Powers

The ^ exponent operator indicates powers of a term expression to a specified degree. Thus the term (A + B)^2 is identical to (A + B) * (A + B) which expands to the main effects of A, B and their interaction, also identical to A * B. In general, the product of parenthesized terms expands as in ordinary algebra,

y ~ (A + B) * (C + D) -> A + B + C + D + A:C + A:D + B:C + B:D

Powers get more interesting with more terms, so (A + B + C)^2 is the same as (A + B + C) * (A + B + C), which includes main effects of A, B and C as well as all two-way interactions, A:B, A:C and B:C. The model formula (A + B + C)^3 expands to include all two-way interactions and the three-way interaction A:B:C.

(A + B + C)^3 -> A + B + C + A:B + A:C + B:C + A:B:C

In this context, - can be used to remove terms, as shown in the following examples:

(A + B + C)^2 <-> (A + B + C)^3 - A:B:C
(A + B + C)^3 - B:C - A:B:C  <-> A + B + C + A:B + A:C 

Finally, the symbol . on the right-hand side specifies all terms in the current dataset other than the response. Thus if you have a data.frame containing y, x1, x2, ..., x6, you can specify a model with all variables except x6 as predictors as

y ~ . - x6

To test what we’ve covered above,

  • What do you think the model formula y ~ .^2 means in a data set containing variables x1, x2, x3, and x4?

  • What about the formula y ~ .^2 - A:B:C:D in a data set with factors A, B, C, D?

You can work out questions like these, or explore model formulas generally, using terms() applied to a "formula" object. The labels of these terms can then be concatenated into a string and turned back into a formula using as.formula():

f <- formula(y ~ (x1 + x2 + x3 + x4)^2)
terms <- attr(terms(f), "term.labels")

terms |> paste(collapse = " + ")
#> [1] "x1 + x2 + x3 + x4 + x1:x2 + x1:x3 + x1:x4 + x2:x3 + x2:x4 + x3:x4"
# convert back to a formula
as.formula(sprintf("y ~ %s", paste(terms, collapse=" + "))) 
#> y ~ x1 + x2 + x3 + x4 + x1:x2 + x1:x3 + x1:x4 + x2:x3 + x2:x4 + 
#>     x3:x4
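As an aside, base R’s reformulate() builds the same formula directly from a character vector of term labels, a more compact alternative to the sprintf() approach above:

f <- formula(y ~ (x1 + x2 + x3 + x4)^2)
reformulate(attr(terms(f), "term.labels"), response = "y")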

5.1.2 Model matrices

As noted above, a model formula is used to generate the $n \times (p+1)$ model matrix, $\mathbf{X}$, typically containing a column of 1s for the intercept $\beta_0$ in the model, followed by $p$ columns representing the regressors $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_p$. Internally, lm() uses stats::model.matrix() and you can use this to explore how factors, interactions and other model terms are represented in a model object.

For a small example, here are a few observations representing income (inc) and type of occupation, taking on values bc (blue collar), wc (white collar) and prof (professional). model.matrix() takes a one-sided formula with the terms on the right-hand side. The main-effects model looks like this:

set.seed(42)
inc <- round(runif(n=9, 20, 40))
type <- rep(c("bc", "wc", "prof"), each =3)

mm <- model.matrix(~ inc + type) 
data.frame(type, mm)
#>   type X.Intercept. inc typeprof typewc
#> 1   bc            1  38        0      0
#> 2   bc            1  39        0      0
#> 3   bc            1  26        0      0
#> 4   wc            1  37        0      1
#> 5   wc            1  33        0      1
#> 6   wc            1  30        0      1
#> 7 prof            1  35        1      0
#> 8 prof            1  23        1      0
#> 9 prof            1  33        1      0

As you can see, type, with 2 degrees of freedom, is represented by two dummy (0/1) variables, labeled typeprof and typewc here. Together, these represent treatment contrasts (comparisons) between the baseline group type=="bc", which is coded (0, 0), and each of the others: type=="prof", coded (1, 0), and type=="wc", coded (0, 1). Different coding schemes are described in the following section.

In a model with the interaction inc * type, additional columns are constructed as the product of inc with each of the columns for type. We will see below how this generalizes to an arbitrary number of predictor terms and their possible interactions.

model.matrix(~ inc * type)
#>   (Intercept) inc typeprof typewc inc:typeprof inc:typewc
#> 1           1  38        0      0            0          0
#> 2           1  39        0      0            0          0
#> 3           1  26        0      0            0          0
#> 4           1  37        0      1            0         37
#> 5           1  33        0      1            0         33
#> 6           1  30        0      1            0         30
#> 7           1  35        1      0           35          0
#> 8           1  23        1      0           23          0
#> 9           1  33        1      0           33          0
#> attr(,"assign")
#> [1] 0 1 2 2 3 3
#> attr(,"contrasts")
#> attr(,"contrasts")$type
#> [1] "contr.treatment"

5.1.3 Coding factors and contrasts

Discrete explanatory variables, such as race, type of occupation or level of education, require special attention in linear models because, unlike continuous variables, they cannot be entered into the model equation just as they are. Instead, we need some way to code those variables numerically.

A key insight is that your choice of a coding scheme changes the meaning of the model parameters, and allows you to perform different comparisons (test different statistical hypotheses) about the means of the category levels according to meaningful questions in your research design. For a more general discussion of coding schemes, see Fox & Weisberg (), §4.7, and the vignette Coding Matrices, Contrast Matrices and Linear Models for the codingMatrices package.

Each coding scheme for a factor represents the same model in terms of fitted values and overall significance for that term, but they differ in how the coefficients are parameterized and interpreted. This is crucial to understand, because tests of the coefficients can directly answer different research questions depending on the coding scheme used.

In R, categorical variables are called factors, usually created by g <- factor(g) or g <- as.factor(g) for a discrete variable g. If the levels of the variable g are ordered, such as type of occupation with levels "bc" < "wc" < "prof" or dose of a drug, "low" < "medium" < "high", you can use g <- ordered(g) to reflect this.

In any case, a factor with $k$ levels is reflected in an overall test with $k-1$ degrees of freedom, corresponding to the null hypothesis $H_0: \mu_1 = \mu_2 = \cdots = \mu_k$. This can be represented as $k-1$ comparisons among the factor level means, or $k-1$ separate questions asking how the means differ.

Base R provides several coding schemes via assignment to the contrasts() function for a factor, as in contrasts(df$Drug) <- ..., using one of:

  • contr.treatment(): Compares each level to a reference level using $k-1$ dummy (0, 1) variables. This is the default, and the reference level is taken as the first (in alphabetic or numerical order). You can change the reference level using relevel() or reorder() for the factor, or simply using factor(A, levels = ...) (a brief example follows this list).

  • contr.sum(): Compares each level to the grand mean.

  • contr.helmert(): Compares each level to the mean of the previous levels, which is useful for ordered categories such as type of occupation with levels "bc" < "wc" < "prof" or dose of a drug, "low" < "medium" < "high".

  • contr.poly(): For ordered factors with numeric levels, this creates orthogonal polynomial contrasts, representing the linear, quadratic, cubic, … trends in the factor means, as if these appeared as $x, x^2, x^3, \ldots$ terms in a model with x as a numeric variable.
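As a brief (hypothetical) illustration of changing the reference level used by the default treatment coding:

g  <- factor(c("low", "medium", "high"))
levels(g)                       # alphabetical order: "high" "low" "medium"

g2 <- relevel(g, ref = "low")   # make "low" the baseline level
contrasts(g2)                   # dummy variables for "high" and "medium"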

TODO: Move some stuff from 10.3.1 on contrasts here

I take up some of the details of these coding schemes below. But first, it is useful to define exactly what I mean by a contrast. For a factor with $k$ groups, a contrast is simply a comparison of the mean of one subset of groups against the mean of another subset. This is specified as a weighted sum, $L$, of the means $\mu_i$ with weights $c_i$ that sum to zero,

$$
L = \mathbf{c}^\mathsf{T} \boldsymbol{\mu} = \sum_i^k c_i \, \mu_i \quad\text{such that}\quad \sum_i c_i = 0 \; .
$$

Two contrasts, $\mathbf{c}_1$ and $\mathbf{c}_2$, are orthogonal if the sum of products of their weights is zero, i.e., $\mathbf{c}_1^\mathsf{T} \mathbf{c}_2 = \sum_i c_{1i} \, c_{2i} = 0$. When contrasts are placed as the columns of a matrix $\mathbf{C}$, they are all mutually orthogonal if each pair is orthogonal, which means $\mathbf{C}^\mathsf{T} \mathbf{C}$ is a diagonal matrix. If the columns of $\mathbf{C}$ are normalized to have sums of squares = 1, then $\mathbf{C}^\mathsf{T} \mathbf{C} = \mathbf{I}$.

Orthogonal contrasts correspond to statistically independent tests, which is nice because they reflect separate, non-overlapping research questions. Another consequence is that the sum of squares for the overall hypothesis of differences among the groups is exactly decomposed into the sums of squares accounted for by the $k-1$ contrasts $L_i$:

$$
SS_{\text{group}} = \sum_i^{k-1} SS(L_i) \; .
$$
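A quick numerical check of this decomposition, using simulated data for a balanced four-group design and the orthogonal Helmert contrasts described below:

set.seed(1)
k <- 4; n <- 10
g <- factor(rep(letters[1:k], each = n))
y <- rep(c(10, 12, 15, 19), each = n) + rnorm(k * n)

C <- contr.helmert(k)            # columns are mutually orthogonal
means <- tapply(y, g, mean)

L  <- t(C) %*% means             # the k - 1 contrast estimates
SS <- n * L^2 / colSums(C^2)     # sum of squares for each contrast

sum(SS)                          # equals the between-group sum of squares
anova(lm(y ~ g))["g", "Sum Sq"]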

Treatment coding

Let’s examine R’s default coding scheme, contr.treatment (also called dummy coding), for a factor with 4 levels: ‘a’, ‘b’, ‘c’, and ‘d’, with a view to understanding the relationship between the true population means, $\mu_a$, $\mu_b$, $\mu_c$, and $\mu_d$, and the parameters $\boldsymbol{\beta}$ estimated in a linear model. We get the following:

C <- contr.treatment(letters[1:4]) |> print()
#>   b c d
#> a 0 0 0
#> b 1 0 0
#> c 0 1 0
#> d 0 0 1

Here, the columns of C correspond to three dummy variables for the levels b, c, d compared to the reference level a. If we denote these columns as $\mathbf{x}_b$, $\mathbf{x}_c$, and $\mathbf{x}_d$, then:

$$
x_b = \begin{cases} 1 & \text{if factor} = b \\ 0 & \text{otherwise} \end{cases} ; \quad
x_c = \begin{cases} 1 & \text{if factor} = c \\ 0 & \text{otherwise} \end{cases} ; \quad
x_d = \begin{cases} 1 & \text{if factor} = d \\ 0 & \text{otherwise} \end{cases}
$$

The design matrix $\mathbf{X}_{(4 \times 4)} = [\mathbf{1}, \mathbf{C}] = [\mathbf{1}, \mathbf{x}_b, \mathbf{x}_c, \mathbf{x}_d]$ includes the constant column $\mathbf{1}$ representing the intercept, which averages over the factor levels when there are other terms in the model.

$$
\mathbf{X} = \begin{pmatrix}
1 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 \\
1 & 0 & 0 & 1
\end{pmatrix}
$$

With this coding, the parameters $\boldsymbol{\beta}$ in the model are related to the means $\boldsymbol{\mu}$ as,

$$
\begin{pmatrix} \mu_a \\ \mu_b \\ \mu_c \\ \mu_d \end{pmatrix}
= \mathbf{X} \boldsymbol{\beta}
= \begin{pmatrix}
1 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 \\
1 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} \beta_0 \\ \beta_b \\ \beta_c \\ \beta_d \end{pmatrix}
= \begin{pmatrix} \beta_0 \\ \beta_0 + \beta_b \\ \beta_0 + \beta_c \\ \beta_0 + \beta_d \end{pmatrix}
$$

Thus we have,

$$
\begin{pmatrix} \mu_a \\ \mu_b \\ \mu_c \\ \mu_d \end{pmatrix}
= \begin{pmatrix} \beta_0 \\ \beta_0 + \beta_b \\ \beta_0 + \beta_c \\ \beta_0 + \beta_d \end{pmatrix}
$$

Note that $\mathbf{X}$ is non-singular as long as the comparisons in $\mathbf{C}$ are linearly independent. Its inverse, $\mathbf{X}^{-1}$, determines how the transformed parameters relate to the original class means, that is, it determines the interpretation of the parameters in terms of the means.

X <- cbind(1, C)
Xinv <- solve(X) |> print()
#>    a b c d
#>    1 0 0 0
#> b -1 1 0 0
#> c -1 0 1 0
#> d -1 0 0 1

Thus, you can solve for the parameters in terms of the means symbolically as follows

$$
\begin{pmatrix} \beta_0 \\ \beta_b \\ \beta_c \\ \beta_d \end{pmatrix}
= \mathbf{X}^{-1} \boldsymbol{\mu}
= \begin{pmatrix}
 1 & 0 & 0 & 0 \\
-1 & 1 & 0 & 0 \\
-1 & 0 & 1 & 0 \\
-1 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} \mu_a \\ \mu_b \\ \mu_c \\ \mu_d \end{pmatrix}
= \begin{pmatrix} \mu_a \\ \mu_b - \mu_a \\ \mu_c - \mu_a \\ \mu_d - \mu_a \end{pmatrix}
$$
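A quick numerical confirmation with simulated data (hypothetical y and a four-level factor g): the fitted coefficients reproduce the reference-group mean and the differences from it.

set.seed(3)
g <- factor(rep(letters[1:4], each = 5))
y <- rnorm(20, mean = rep(c(10, 12, 15, 19), each = 5))

means <- tapply(y, g, mean)
coef(lm(y ~ g))                                       # (Intercept), gb, gc, gd
c(means["a"], means[c("b", "c", "d")] - means["a"])   # the same values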

Deviation coding

Another common coding scheme, useful for unordered factors, is deviation coding, given by contr.sum(). This has the property that the parameters are constrained to sum to zero, $\sum_i \beta_i = 0$.

C <- contr.sum(letters[1:4]) |> print()
#>   [,1] [,2] [,3]
#> a    1    0    0
#> b    0    1    0
#> c    0    0    1
#> d   -1   -1   -1

With this coding, the intercept is $\beta_0 = \bar{\mu}$, the grand mean across all levels, and the remaining parameters are the deviations of the group means from it:

  • $\beta_1 = \mu_a - \bar{\mu}$
  • $\beta_2 = \mu_b - \bar{\mu}$
  • $\beta_3 = \mu_c - \bar{\mu}$

A (redundant) coefficient for the last group is omitted because, with this coding, a coefficient for that group would equal the negative of the sum of the others: $\beta_d = \mu_d - \bar{\mu} = -(\beta_1 + \beta_2 + \beta_3)$.
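Again, a brief check with simulated data (hypothetical y and g) shows that the coefficients are the mean of the group means and the deviations from it:

set.seed(2)
g <- factor(rep(letters[1:4], each = 5))
contrasts(g) <- contr.sum(4)
y <- rnorm(20, mean = rep(c(10, 12, 15, 19), each = 5))

means <- tapply(y, g, mean)
coef(lm(y ~ g))                                        # (Intercept), g1, g2, g3
c(mean(means), means[c("a", "b", "c")] - mean(means))  # the same values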

Helmert contrasts

For ordered factors, it is sensible to take the ordering into account in interpreting the results. Helmert contrasts are designed for this purpose. The intercept is again the grand mean across all levels. Each contrast compares the mean of a given level to the average of all previous ones in the order; they contrast the second level with the first, the third with the average of the first two, and so on.

C <- contr.helmert(letters[1:4]) |> print()
#>   [,1] [,2] [,3]
#> a   -1   -1   -1
#> b    1   -1   -1
#> c    0    2   -1
#> d    0    0    3

It is easier to understand these if the columns are normalized so that the largest value in each column is 1.

C %*% diag(1/c(1, 2, 3)) |> MASS::fractions()
#>   [,1] [,2] [,3]
#> a   -1 -1/2 -1/3
#> b    1 -1/2 -1/3
#> c    0    1 -1/3
#> d    0    0    1

Then the coefficients correspond to the following comparisons (the intercept is exact; each remaining coefficient is proportional to the difference shown):

  • $\beta_0 = \bar{\mu}$
  • $\beta_1 \propto \mu_b - \mu_a$
  • $\beta_2 \propto \mu_c - \dfrac{\mu_a + \mu_b}{2}$
  • $\beta_3 \propto \mu_d - \dfrac{\mu_a + \mu_b + \mu_c}{3}$

Note that you can easily reverse the ordering of the comparisons to contrast each of the first $k-1$ means with the average of all those higher in the order.

C.rev <- C[4:1, 3:1]
row.names(C.rev) <- letters[1:4]
C.rev
#>   [,1] [,2] [,3]
#> a    3    0    0
#> b   -1    2    0
#> c   -1   -1    1
#> d   -1   -1   -1

Polynomial contrasts

For ordered factors that are also numeric, like Age = c(8, 9, 10, 11) or those that can be considered equally spaced along some continuum, polynomial contrasts, given by contr.poly(), provide orthogonal (uncorrelated) contrasts that assess the degree to which the factor means vary linearly, quadratically, and so on with the factor levels.

contr.poly() scales each column so that its sum of squares is 1. Each pair of columns is orthogonal, $\mathbf{c}_i^\mathsf{T} \mathbf{c}_j = 0$, so that $\mathbf{C}^\mathsf{T} \mathbf{C} = \mathbf{I}$.

C <- contr.poly(4) |> print()
#>          .L   .Q     .C
#> [1,] -0.671  0.5 -0.224
#> [2,] -0.224 -0.5  0.671
#> [3,]  0.224 -0.5 -0.671
#> [4,]  0.671  0.5  0.224

# show they are orthonormal
t(C) %*% C |> zapsmall()
#>    .L .Q .C
#> .L  1  0  0
#> .Q  0  1  0
#> .C  0  0  1

We can get a better sense of orthogonal polynomial contrasts by taking a numeric vector x and raising it to the successive powers 0, 1, 2, 3, where $x^0 = 1$ is the constant term or intercept.

M <- outer(1:8, 0:3, `^`)
colnames(M) <- c("int", "lin", "quad", "cubic")
rownames(M) <- paste0("x", 1:8)
M
#>    int lin quad cubic
#> x1   1   1    1     1
#> x2   1   2    4     8
#> x3   1   3    9    27
#> x4   1   4   16    64
#> x5   1   5   25   125
#> x6   1   6   36   216
#> x7   1   7   49   343
#> x8   1   8   64   512

Then we can make the columns of M orthogonal using the Gram-Schmidt method, where each successive column after the first is made orthogonal to all previous columns by subtracting its projections onto them. Plotting these, as in Figure 5.3, shows that the coefficients for a linear term plot as a line, those for $x^2$ follow a quadratic, and so forth.

op <- par(mar = c(4, 4, 1, 1)+.1)
M1 <- matlib::GramSchmidt(M, normalize = FALSE)
matplot(M1, 
  type = "b",
  pch = as.character(0:3),
  cex = 1.5,
  cex.lab = 1.5,
  lty = 1,
  lwd = 3,
  xlab = "X",
  ylab = "Coefficient")
Figure 5.3: The coefficients for orthogonal polynomial contrasts for linear (1), quadratic (2) and cubic (3) terms for a numeric variable X. The intercept or constant term is represented as 0. Orthogonality means that each pair of these columns of coefficients is uncorrelated.

Custom contrasts

You don’t have to be constrained by the kinds of comparisons available in the contr.* functions. For a factor with $k$ levels you are free to make up any $k-1$ contrasts that correspond to $k-1$ different questions or tests of hypotheses about the factor level means. Even better, if your contrasts are orthogonal, their tests are statistically independent.

One useful way to define orthogonal comparisons of substantive interest is the idea of nested dichotomies. Here you start by contrasting one meaningful subset of the groups against all the others. Then, successive contrasts are defined by making dichotomies among the groups within each subset.

TODO: Make a figure for the two examples

As one example, suppose we are looking at support on some issue among four political parties: A, B, C and D, where A and B are left-of-center and C and D are on the right of the political spectrum. The following comparisons might be of interest:

AB.CD: {A, B} vs. {C, D}
A.B: {A} vs. {B}
C.D: {C} vs. {D}

You could set up these comparisons as the following contrasts:

# contrasts
AB.CD <- c(1,  1, -1, -1)
A.B   <- c(1, -1,  0,  0)
C.D   <- c(0,  0,  1, -1)

# put them in a matrix
C <- cbind(AB.CD, A.B, C.D)
rownames(C) <- LETTERS[1:4]
C
#>   AB.CD A.B C.D
#> A     1   1   0
#> B     1  -1   0
#> C    -1   0   1
#> D    -1   0  -1

With a data frame like df with the factor party, you would then use these contrasts in a model by assigning C to contrasts(df$party):

set.seed(47)
df <- data.frame(
  party = factor(rep(LETTERS[1:4], each = 3)),
  support = c(35, 25, 25, 15) + round(rnorm(12, 0, 1), 1)
)
contrasts(df$party) <- C

Then, in a linear model, the coefficients reflect the mean difference between the averages of the subsets of parties in each comparison. Because the weights here are ±1, each coefficient is half the corresponding difference: parties A and B average about 2 × 2.11 = 4.2 higher in support than parties C and D, support for party A is about 2 × 2.37 = 4.7 greater than for party B, and so forth.

party.mod <- lm(support ~ party, data = df)
coef(party.mod)
#> (Intercept)  partyAB.CD    partyA.B    partyC.D 
#>       24.83        2.11        2.37        1.85

For another example, say we are examining differences among three psychiatric diagnostic patient groups, “bipolar”, “manic” and “depressed”, and also have a matched normal group. One set of meaningful comparisons would be as follows:

D1: {Normal} vs. {Bipolar, Depressed, Manic}
D2: {Bipolar} vs. {Depressed, Manic}
D3: {Depressed} vs. {Manic}

Weights for these contrasts are assigned by making them positive values for the groups in one subset and negative for the other, and giving numbers that sum to zero for each one:

D1 <- c(3, -1, -1, -1)
D2 <- c(0,  2, -1, -1)
D3 <- c(0,  0,  1, -1)

C <- cbind(D1, D2, D3)
rownames(C) <- c("Normal", "Bipolar", "Depressed", "Manic")
C
#>           D1 D2 D3
#> Normal     3  0  0
#> Bipolar   -1  2  0
#> Depressed -1 -1  1
#> Manic     -1 -1 -1

These have the same form as the reversed Helmert contrasts considered earlier.

TODO: Do I need to go through the details of univariate regression, ANOVA, … here?

Regression

5.2 ANOVA

5.3 ANCOVA

Not shown in Figure 5.1 are rows for the cases where the predictor variables include a mixture of quantitative variables and discrete factors. These have two “flavors”, depending on our emphasis.

  • Analysis of covariance (ANCOVA) is the term for models where the main emphasis is on comparing the means of different groups, but where there are one or more quantitative predictors (for example, a pre-test score on the outcome variable) that need to be taken into account, so we can assess the differences among groups controlling or adjusting for those predictors, which are called covariates. (Strictly speaking, ANCOVA models typically assume that the regression slopes are the same for all groups.)

  • Homogeneity of regression treats the same type of data, but the main interest is in the regression relation between y and a set of quantitative xs when there are also one or more grouping factors. In this case, we want to determine whether the same regression relation applies in all groups or whether the groups differ in their intercepts and/or slopes (see the sketch after this list). Another term for this class of questions is moderator effects, which refers to interactions between the quantitative predictors and the group factors.
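A minimal sketch of the second flavor, using simulated data and hypothetical names: comparing the common-slope (ANCOVA) model with the separate-slopes model provides a test of whether the slopes differ across groups.

set.seed(11)
group <- factor(rep(c("A", "B"), each = 30))
x <- runif(60, 0, 10)
y <- 5 + 2 * x + 1.5 * (group == "B") * x + rnorm(60)

m_common   <- lm(y ~ x + group)   # equal slopes, different intercepts
m_separate <- lm(y ~ x * group)   # a different slope for each group
anova(m_common, m_separate)       # test of equal slopes (homogeneity of regression)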

TODO Move the initial discussion of ANCOVA from here with the generic example.

5.4 Regression trees

Package summary

Packages used here:

3 packages used here: ggplot2, knitr, nestedLogit


  1. Taking logarithms of both sides would yield the linear model, $\log(y) = c + \beta_1 x$.