Test your knowledge of the material on extending loglinear models in the following quiz to see how much you learned. This is entirely private for you---no records are kept of your performance.

Questions

1. When one table variable is a binary response, a logit model for that response is:

Each logit model for a binary response C is equivalent to a loglinear model; the loglinear model must include the [AB] association among the predictors. The logit model is simpler to interpret because it has fewer parameters and focuses on the effects of the predictors on the response variable.
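
As a sketch of why this equivalence holds, write the loglinear model [AB][AC][BC] for a three-way table with predictors A, B and binary response C:

$$\log m_{ijk} = \mu + \lambda_i^A + \lambda_j^B + \lambda_k^C + \lambda_{ij}^{AB} + \lambda_{ik}^{AC} + \lambda_{jk}^{BC}$$

Differencing the two levels of C gives a logit model containing only the [AC] and [BC] parameters,

$$\log\frac{m_{ij1}}{m_{ij2}} = (\lambda_1^C - \lambda_2^C) + (\lambda_{i1}^{AC} - \lambda_{i2}^{AC}) + (\lambda_{j1}^{BC} - \lambda_{j2}^{BC}) \equiv \alpha + \beta_i^A + \beta_j^B ,$$

while the [AB] term cancels; that association must nonetheless appear in the loglinear form for the two models to be equivalent.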

2. The main advantage of the GLM (generalized linear model) approach to loglinear models is that GLM:

The GLM approach allows quantitative predictors and special ways of treating ordinal factors. It provides parameter estimates with standard errors and significance tests, can incorporate ordinal variables as linear/polynomial terms, and can handle overdispersion using quasi-Poisson or negative binomial families.
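
As a minimal illustration of the GLM approach in R (the UCBAdmissions table is built into R; the particular model is only an example):

```r
## Fit a loglinear model as a Poisson GLM: all two-way associations
## among Admit, Gender and Dept in the built-in UCBAdmissions table
ucb <- as.data.frame(UCBAdmissions)
mod <- glm(Freq ~ (Admit + Gender + Dept)^2, family = poisson, data = ucb)
summary(mod)   # parameter estimates with standard errors and Wald z tests
```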

3. The linear-by-linear (L × L) model for ordinal variables assumes:

The L × L model assumes $\log(m_{ij}) = \mu + \lambda_i^A + \lambda_j^B + \gamma\, a_i b_j$, where $a_i$ and $b_j$ are fixed scores for the row and column categories and $\gamma$ determines a constant local odds ratio. For unit-spaced integer scores, $\log(\theta_{ij}) = \gamma$ for every 2 × 2 subtable of adjacent rows and columns. This model uses only 1 more parameter than the independence model, making it very parsimonious.
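
A minimal sketch of the L × L (uniform association) model in R, using the built-in occupationalStatus table and integer scores:

```r
## Linear-by-linear model: independence plus one parameter (gamma) for
## the product of integer row and column scores
occ <- as.data.frame(occupationalStatus)   # factors: origin, destination
occ$r <- as.numeric(occ$origin)            # row scores 1..8
occ$c <- as.numeric(occ$destination)       # column scores 1..8
indep <- glm(Freq ~ origin + destination, family = poisson, data = occ)
lxl   <- update(indep, . ~ . + r:c)        # adds the single gamma term
anova(indep, lxl, test = "Chisq")          # 1-df test of the linear association
```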

4. In models for ordered categories, the RC(1) model differs from the L × L model in that RC(1):

The RC(1) row-and-column effects model estimates optimal category scores $\alpha_i$ and $\beta_j$ as parameters: $\lambda_{ij}^{AB} = \gamma\, \alpha_i \beta_j$. Unlike the L × L model, which assigns fixed integer scores, RC(1) finds the best-fitting scores, so the ordering and spacing of the categories are estimated from the data, in a way similar to correspondence analysis.
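
A sketch of an RC(1) fit with the gnm package (same occupationalStatus table); the Mult() term supplies the multiplicative scores:

```r
## RC(1): scores for the row and column categories are estimated from the data
library(gnm)
occ <- as.data.frame(occupationalStatus)
set.seed(1)   # gnm uses random starting values for nonlinear terms
rc1 <- gnm(Freq ~ origin + destination + Mult(origin, destination),
           family = poisson, data = occ)
```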

5. For a square table with the same row and column categories, the quasi-independence model:

The quasi-independence model adds parameters $\delta_i$ for the diagonal cells: $\log(m_{ij}) = \mu + \lambda_i^A + \lambda_j^B + \delta_i I(i = j)$. This fits the diagonal cells (where row = column) perfectly while testing for independence in the off-diagonal cells, which is useful for mobility tables and other square tables.
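
A sketch of the quasi-independence fit, using Diag() from the gnm package to generate the diagonal indicator:

```r
## Quasi-independence: independence off the diagonal, with one free
## parameter per diagonal cell created by Diag(origin, destination)
library(gnm)
occ <- as.data.frame(occupationalStatus)
qi <- glm(Freq ~ origin + destination + Diag(origin, destination),
          family = poisson, data = occ)
```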

6. The relationship between symmetry and quasi-symmetry models is:

For square tables, it can be shown that symmetry = quasi-symmetry + marginal homogeneity, with $G^2(S) = G^2(QS) + G^2(MH)$. The symmetry model assumes $\pi_{ij} = \pi_{ji}$, which also implies marginal homogeneity (equal row and column marginal distributions). The quasi-symmetry model allows $\lambda_{ij}^{AB} = \lambda_{ji}^{AB}$ without requiring equal margins, so the difference $G^2(S) - G^2(QS)$ provides a test of marginal homogeneity.
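
A sketch of this decomposition in R, using Symm() from the gnm package; the deviance difference between the two fits is the conditional test of marginal homogeneity:

```r
## Symmetry and quasi-symmetry for a square table
library(gnm)
occ <- as.data.frame(occupationalStatus)
sym  <- glm(Freq ~ Symm(origin, destination), family = poisson, data = occ)
qsym <- glm(Freq ~ origin + destination + Symm(origin, destination),
            family = poisson, data = occ)
anova(sym, qsym, test = "Chisq")   # G2(S) - G2(QS): tests marginal homogeneity
```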

7. The gnm package in R is used to fit:

The gnm package provides functions to fit generalized nonlinear models (GNMs). RC models are not loglinear, because they contain multiplicative terms such as $\gamma\, \alpha_i \beta_j$. The gnm() function can fit these models using Mult(), Exp(), and other nonlinear terms. The package also provides the Diag(), Symm(), and Topo() convenience functions.

8. In the UNIDIFF (uniform difference) model for stratified tables, the layer coefficients γk represent:

The UNIDIFF model is $\log(m_{ijk}) = \mu + \lambda_i^R + \lambda_j^C + \lambda_k^L + \lambda_{ik}^{RL} + \lambda_{jk}^{CL} + \gamma_k\, \delta_{ij}^{RC}$. The layer coefficients $\gamma_k$ are multipliers showing how the strength of the [RC] association varies across layers (e.g., across countries or time periods), with one layer fixed at 1.0 as the reference.
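
A sketch of the UNIDIFF specification in gnm, patterned on the package's documented yaish example (father's class orig, son's class dest, education educ as the layer); treat the variable names and the subset as assumptions taken from that example:

```r
## UNIDIFF: a common orig:dest association whose strength is multiplied
## by a layer (education) coefficient via Mult(Exp(educ), orig:dest)
library(gnm)
data(yaish, package = "gnm")
yaish_df <- as.data.frame(yaish)
set.seed(1)
unidiff <- gnm(Freq ~ educ*orig + educ*dest + Mult(Exp(educ), orig:dest),
               family = poisson, data = yaish_df,
               subset = (dest != 7))   # sparse 7th class excluded, as in the gnm vignette
## the Exp(educ) coefficients carry the layer-specific log multipliers log(gamma_k)
```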

9. Compared to general association models, models for ordinal variables provide:

Models for ordinal variables give more focused tests and therefore greater power to detect associations. Because they use fewer degrees of freedom, they allow models intermediate between independence, [A][B], and the saturated model, [AB]; and because they have fewer parameters, they are easier to interpret and give smaller standard errors. This is similar to using CMH tests or testing linear contrasts in ANOVA.

10. When comparing multiple models using AIC and BIC, which statement is TRUE?

BIC (Bayesian Information Criterion) penalizes model complexity more heavily than AIC (Akaike Information Criterion): $\mathrm{BIC} = -2\log L + k\log(n)$, while $\mathrm{AIC} = -2\log L + 2k$, where $k$ is the number of estimated parameters and $n$ is the sample size. As a result, BIC often prefers more parsimonious models. Lower values of both criteria indicate better models, balancing goodness of fit against complexity.
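
A minimal sketch comparing models by AIC and BIC in R (using occupationalStatus; lower values are better):

```r
## AIC and BIC for three loglinear models of the occupationalStatus table
occ <- as.data.frame(occupationalStatus)
occ$r <- as.numeric(occ$origin)
occ$c <- as.numeric(occ$destination)
indep <- glm(Freq ~ origin + destination,       family = poisson, data = occ)
unif  <- glm(Freq ~ origin + destination + r:c, family = poisson, data = occ)
satur <- glm(Freq ~ origin * destination,       family = poisson, data = occ)
AIC(indep, unif, satur)   # -2*logLik + 2*k
BIC(indep, unif, satur)   # -2*logLik + log(n)*k: heavier penalty on complexity
```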