
LaTeX is the de facto standard for communication and publication of scientific documents, and it is very easy to typeset mathematical expressions like Pythagoras' theorem, $a^2 + b^2 = c^2$ (written as a^2 + b^2 = c^2), once you learn the notation. With a bit of practice, the PDF of the normal distribution can be written as

f(x) = \frac{1}{\sigma\sqrt{2\pi}} 
  \exp\left[ -\frac12 \left(\frac{x-\mu}{\sigma}\right)^{\!2}\,\right]

which renders as

$$
f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[ -\frac12 \left(\frac{x-\mu}{\sigma}\right)^{\!2}\,\right]
$$

Writing equations with arrays, matrices and vectors is somewhat more challenging. Many people rely on interactive LaTeX editors like Overleaf or MathType, or on online equation builders like the Lagrida LaTeX Equation Editor or Arachnoid, which provide a menu-driven interface with fill-in templates for matrices.

There are already some tools available in R for producing LaTeX output; see Tools for making latex tables in R for a more comprehensive list.

The matlib package extends these, providing a collection of functions that simplify using LaTeX notation for matrices, vectors and equations in documentation and in writing:

  • latexMatrix(): Constructs the LaTeX code for a symbolic matrix, whose elements are a symbol, with row and column subscripts. latexMatrix() also supports matrices with numeric elements, and the objects it produces may be used in various kinds of matrix computations, both symbolic and numeric.
  • Eqn(): A wrapper to produce LaTeX expressions or equations that can be used directly in .Rmd or .qmd documents to compile to equations. It also provides for direct preview of the resulting equation in the RStudio Viewer panel.
  • showEqn(): Shows what the matrices $\mathbf{A}, \mathbf{b}$ look like as the system of linear equations, $\mathbf{A x} = \mathbf{b}$, but written out as a set of equations.

When used directly in an R session, these functions produce their output to the console (using cat()). In a .Rmd or .qmd document, use the chunk options: results='asis', echo=FALSE so that knitr just outputs the text of the equations to the document. The rendering of the equations is mediated by pandoc for standard Rmarkdown or Quarto documents.
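For example, a minimal sketch of such a chunk as it would appear in the .Rmd source (the symbol "x" and the 3 x 3 size are arbitrary choices):

```{r results='asis', echo=FALSE}
library(matlib)
latexMatrix("x", 3, 3) |> Eqn()  # cat()s the LaTeX so that pandoc renders it as an equation
```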

Note: There are several different engines for rendering mathematics in HTML documents for the Web: mathml, katex, mathjax, and others, all of which can be made to work with pandoc. The features we describe below work in standard Rmarkdown or Quarto documents. However, some more advanced features (horizontal and vertical lines for partitioned matrices) require katex to work with pkgdown. Equation numbers and cross-references to them still do not work in pkgdown; see the discussion in this pkgdown issue. Rendering of matrix equations does not seem to work at all in the friendly.r-universe.dev version of this vignette.

Using latexMatrix() and Eqn()

latexMatrix() constructs the LaTeX code for a symbolic matrix, whose elements are a symbol with row and column subscripts. For example, by default (with no arguments) it produces the expression for an $n \times m$ matrix $\mathbf{X}$ whose elements are $x_{ij}$, in a LaTeX \begin{pmatrix} ... \end{pmatrix} environment. The LaTeX code generated looks like this:

\begin{pmatrix} 
  x_{11} & x_{12} & \cdots & x_{1m} \\ 
  x_{21} & x_{22} & \cdots & x_{2m} \\ 
  \vdots & \vdots &        & \vdots \\ 
  x_{n1} & x_{n2} & \cdots & x_{nm} \\ 
\end{pmatrix}

The code above appears in the console. To render it as a matrix in a document, the code must be wrapped in a display math environment, typically specified as $$ ... $$ or \[ ... \]. Eqn() provides this wrapper; used in a code chunk with the results = 'asis' option, it gives the rendered expression.
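For example, piping the default matrix into Eqn() (a minimal sketch of the pattern used throughout this vignette):

latexMatrix() |> Eqn()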

\begin{equation*}
\begin{pmatrix}
  x_{11} & x_{12} & \cdots & x_{1m} \\
  x_{21} & x_{22} & \cdots & x_{2m} \\
  \vdots & \vdots &        & \vdots \\
  x_{n1} & x_{n2} & \cdots & x_{nm} \\
\end{pmatrix}
\end{equation*}

For chunk output in a document, you will get the LaTeX error "missing $ inserted" if you forget to use Eqn() or otherwise fail to place the LaTeX inside a math environment.
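A minimal sketch of the point (diag(2) is an arbitrary example):

# latexMatrix(diag(2))       # in an asis chunk this emits a bare pmatrix: "missing $ inserted"
Eqn(latexMatrix(diag(2)))    # Eqn() supplies the surrounding math environment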

Some other examples:

  • A $3 \times 3$ identity matrix with square brackets, specified as an equation with a left-hand side $\mathbf{I}_3$. The first argument to latexMatrix() can be any numeric matrix. The matrix="bmatrix" argument here specifies square brackets around the matrix.
Eqn("\\mathbf{I}_3 =", latexMatrix(diag(3), matrix="bmatrix"))

\begin{equation*}
\mathbf{I}_3 =
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{bmatrix}
\end{equation*}

  • A row or column vector; the first argument to latexMatrix() must be a matrix, so wrap an R vector in matrix(), supplying nrow=1 (or ncol = 1):
latexMatrix(matrix(LETTERS[1:4], nrow=1), matrix="Bmatrix") |> Eqn()

\begin{equation*}
\begin{Bmatrix}
A & B & C & D \\
\end{Bmatrix}
\end{equation*}

latexMatrix(matrix(letters[1:3], ncol=1), matrix = "vmatrix") |> Eqn()

\begin{equation*}
\begin{vmatrix}
a \\
b \\
c \\
\end{vmatrix}
\end{equation*}

The above examples illustrate some of the matrix delimiter styles available via the matrix argument.

A wide variety of options are available for the matrix element symbols, fonts, subscripts and decorations:

  • The typical element can use any LaTeX math font, e.g., \mathbb{}, \mathcal{}, ...;
  • the row/column subscripts can start at 0 or 1;
  • they can be separated by a comma; and
  • one can apply an exponent ($\bullet^{-1}$) or a transpose symbol ($\bullet^\top$ or $\bullet^\prime$).
latexMatrix("\\mathbb{q}", 3, 3, 
            matrix = "bmatrix",
            zero.based = c(TRUE, FALSE), 
            comma=TRUE, 
            exponent="-1") |>
  Eqn()

\begin{equation*}
\begin{bmatrix}
\mathbb{q}_{0,1} & \mathbb{q}_{0,2} & \mathbb{q}_{0,3} \\
\mathbb{q}_{1,1} & \mathbb{q}_{1,2} & \mathbb{q}_{1,3} \\
\mathbb{q}_{2,1} & \mathbb{q}_{2,2} & \mathbb{q}_{2,3} \\
\end{bmatrix}^{-1}
\end{equation*}

The SVD

As a more complicated example, here we write out the LaTeX equation for the singular value decomposition (SVD) of a general $n \times p$ matrix $\mathbf{X}$ using Eqn() and latexMatrix(). In Rmd markup, Eqn() can be given an equation label (using the label argument), which will both label and number the equations.

The two calls to Eqn() below produce separate, center-aligned equations in the output, both of them numbered. (Eqn() uses the LaTeX equation environment, \begin{equation} ... \end{equation}, or equation* if the equation does not include a label.)

Eqn("\\mathbf{X} = \\mathbf{U} \\mathbf{\\Lambda} \\mathbf{V}^\\top", label='eq:svd')
Eqn("\\mathbf{X} =",
    latexMatrix("u", "n", "k"),
    latexMatrix("\\lambda", "k", "k", diag=TRUE),
    latexMatrix("v", "k", "p", transpose = TRUE), label='eq:svdmats')

This produces the two numbered equations:

\begin{equation}
(\#eq:svd)
\mathbf{X} = \mathbf{U} \mathbf{\Lambda} \mathbf{V}^\top
\end{equation}

\begin{equation}
(\#eq:svdmats)
\begin{pmatrix}
u_{11} & u_{12} & \cdots & u_{1k} \\
u_{21} & u_{22} & \cdots & u_{2k} \\
\vdots & \vdots &        & \vdots \\
u_{n1} & u_{n2} & \cdots & u_{nk} \\
\end{pmatrix}
\begin{pmatrix}
\lambda_{1} & 0 & \cdots & 0 \\
0 & \lambda_{2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_{k} \\
\end{pmatrix}
\begin{pmatrix}
v_{11} & v_{12} & \cdots & v_{1p} \\
v_{21} & v_{22} & \cdots & v_{2p} \\
\vdots & \vdots &        & \vdots \\
v_{k1} & v_{k2} & \cdots & v_{kp} \\
\end{pmatrix}^\top
\end{equation}

The matrix names in Equation @ref(eq:svd) are printed in a boldface math font (\mathbf{}), typically used for matrices and vectors. Note that when LaTeX code appears in R character strings, each backslash (\) must be doubled (\\), because \ is the escape character in R. You can avoid this "leaning toothpick syndrome" by using R's raw strings (new as of R 4.0.0), written as r"(...)" or r"{...}".
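For example, the first Eqn() call above could be written with a raw string instead of doubled backslashes (a sketch, equivalent to the earlier call):

Eqn(r"(\mathbf{X} = \mathbf{U} \mathbf{\Lambda} \mathbf{V}^\top)", label = 'eq:svd')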

Note that the first equation can be referenced because it was labeled: "As seen in Equation @ref(eq:svd) ...". References to equations can be entered in text using an inline call to ref(), e.g., `r ref("eq:svd")`. (In Quarto, equation labels must be of the form #eq-label and equation references are of the form @eq-label.)

Systems of equations

As another example, the chunk below shows a system of equations $\mathbf{A} \mathbf{x} = \mathbf{b}$ written out using symbolic matrices.

Eqn(latexMatrix("a", nrow = "m", ncol="n", matrix="bmatrix"),
    latexMatrix("x", nrow = "n", ncol=1),
    Eqn_hspace(mid='='),
    latexMatrix("b", nrow = "m", ncol=1))

\begin{equation*}
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn} \\
\end{bmatrix}
\begin{pmatrix}
x_{1} \\
x_{2} \\
\vdots \\
x_{n} \\
\end{pmatrix}
\quad=\quad
\begin{pmatrix}
b_{1} \\
b_{2} \\
\vdots \\
b_{m} \\
\end{pmatrix}
\end{equation*}

Extra symmetric white space around the = sign is added via Eqn_hspace(), which can also produce standard spacing such as \quad (the default size), \,, or a specified width such as '1cm' for \hspace{}.

Section showEqn describes another way to display systems of equations.

aligned environment

You can also align separate equations relative to some symbol like an = sign to show separate steps of re-expression, using the option Eqn(..., align=TRUE). Alignment points are marked by & in LaTeX.

Below we show the singular value decomposition again, but now as two separate equations aligned at the = sign. Note the locations of the & operator for alignment, specified in the left-hand side (lhs) of the second equation: you get the most pleasing result by placing the & before the symbol to be aligned, e.g., use & = or & +.

Eqn("\\mathbf{X} & = \\mathbf{U} \\mathbf{\\Lambda} \\mathbf{V}^\\top",
    Eqn_newline(),
    ' & =',
    latexMatrix("u", "n", "k"),
    latexMatrix("\\lambda", "k", "k", diag=TRUE),
    latexMatrix("v", "k", "p", transpose = TRUE),
    align=TRUE)

\begin{align*}
\mathbf{X} & = \mathbf{U} \mathbf{\Lambda} \mathbf{V}^\top \\
& =
\begin{pmatrix}
u_{11} & u_{12} & \cdots & u_{1k} \\
u_{21} & u_{22} & \cdots & u_{2k} \\
\vdots & \vdots &        & \vdots \\
u_{n1} & u_{n2} & \cdots & u_{nk} \\
\end{pmatrix}
\begin{pmatrix}
\lambda_{1} & 0 & \cdots & 0 \\
0 & \lambda_{2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_{k} \\
\end{pmatrix}
\begin{pmatrix}
v_{11} & v_{12} & \cdots & v_{1p} \\
v_{21} & v_{22} & \cdots & v_{2p} \\
\vdots & \vdots &        & \vdots \\
v_{k1} & v_{k2} & \cdots & v_{kp} \\
\end{pmatrix}^\top
\end{align*}

Note that in this example, there are three calls to latexMatrix(), wrapped inside Eqn(). Eqn_newline() emits a newline (\\) between equations.

Decorators

A set of "Eqn helpers" makes it easy to add a typeset label above or below a LaTeX expression or matrix, and to add a horizontal brace over (\overbrace) or under (\underbrace) components of an expression. Here's a simple example:

A <- matrix(1:4, 2, 2)
B <- matrix(4:1, 2, 2)
AB <- A + B
Eqn(overset(A), "+",
    overset(B), Eqn_hspace(mid = '='),
    overbrace(AB, "A+B"))

\begin{equation*}
\overset{\mathbf{A}}{
\begin{pmatrix}
1 & 3 \\
2 & 4 \\
\end{pmatrix}
}
+
\overset{\mathbf{B}}{
\begin{pmatrix}
4 & 2 \\
3 & 1 \\
\end{pmatrix}
}
\quad=\quad
\overbrace{
\begin{pmatrix}
5 & 5 \\
5 & 5 \\
\end{pmatrix}
}^{\mathbf{A+B}}
\end{equation*}

The next example generates a labeled expression showing the hat matrix ($\mathbf{H}$) in least squares regression. Note that you can create LaTeX expressions as character strings, and use those inside Eqn() to keep the code simpler.

H <- "\\mathbf{X} (\\mathbf{X}^{\\top}\\mathbf{X})^{-1} \\mathbf{X}^{\\top}"
Eqn("\\mathbf{\\hat{y}} =", underbrace(H, "\\mathbf{H}"), "\\mathbf{y}")

𝐲̂=𝐗(𝐗𝐗)1𝐗𝐇𝐲\begin{equation*} \mathbf{\hat{y}} =\underbrace{\mathbf{X} (\mathbf{X}^{\top}\mathbf{X})^{-1} \mathbf{X}^{\top}}_{\mathbf{\mathbf{H}}}\mathbf{y}\end{equation*}

Here’s a gaudy version of the SVD equation:

Eqn(underset("\\mathbf{X}", "(n \\times p)"), "& = \\mathbf{U} \\mathbf{\\Lambda} \\mathbf{V}^\\top",
    Eqn_newline(),
    ' & =',
    underbrace(latexMatrix("u", "n", "k"), "\\mathbf{U}"),
    overbrace(latexMatrix("\\lambda", "k", "k", diag=TRUE),"\\mathbf{\\Lambda}"),
    underbrace(latexMatrix("v", "k", "p", transpose = TRUE), "\\mathbf{V}^\\top"),
    align=TRUE)

\begin{align*}
\underset{(n \times p)}{\mathbf{X}} & = \mathbf{U} \mathbf{\Lambda} \mathbf{V}^\top \\
& =
\underbrace{
\begin{pmatrix}
u_{11} & u_{12} & \cdots & u_{1k} \\
u_{21} & u_{22} & \cdots & u_{2k} \\
\vdots & \vdots &        & \vdots \\
u_{n1} & u_{n2} & \cdots & u_{nk} \\
\end{pmatrix}
}_{\mathbf{U}}
\overbrace{
\begin{pmatrix}
\lambda_{1} & 0 & \cdots & 0 \\
0 & \lambda_{2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_{k} \\
\end{pmatrix}
}^{\mathbf{\Lambda}}
\underbrace{
\begin{pmatrix}
v_{11} & v_{12} & \cdots & v_{1p} \\
v_{21} & v_{22} & \cdots & v_{2p} \\
\vdots & \vdots &        & \vdots \\
v_{k1} & v_{k2} & \cdots & v_{kp} \\
\end{pmatrix}^\top
}_{\mathbf{V}^\top}
\end{align*}

Computing with "latexMatrix" objects

Objects returned by latexMatrix() that have definite (i.e., numeric) dimensions (for example, a 3-by-2 matrix as opposed to an "n"-by-"m" matrix) may be employed in a variety of standard symbolic and numeric matrix computations. They make it far easier to compose meaningful matrix equations in LaTeX than doing so manually, matrix by matrix.

These computations include:

  • the R arithmetic operators + (matrix addition), - (matrix subtraction and negation), * (product of a scalar and a matrix), and ^ (raise to a power);
  • %*% (matrix multiplication), and the functions t() (transpose), determinant(), and solve() (matrix inverse);
  • %O% (Kronecker product);
  • selecting sub-matrices using indexing, X[rows, cols], and binding rows/columns with rbind() and cbind().

There are also function equivalents of the operators that are more flexible via optional arguments. For example, using the operator A %*% B multiplies the two matrices A and B, returning a symbolic result. The corresponding function matmult() multiplies two or more matrices, and can simplify the result (with simplify = TRUE, default) and/or produce the numeric representation of the product (with as.numeric = TRUE, default).
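For example, a sketch of the function form (the matrices here are ad hoc examples; matmult() and its simplify / as.numeric arguments are as described above):

# multiply two numeric "latexMatrix" objects; with the defaults described
# above, the product is simplified and can be evaluated numerically
matmult(latexMatrix(diag(2)), latexMatrix(matrix(1:4, 2, 2)))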

With the exception of determinant(), which (because it is a scalar quantity) returns a character string with a LaTeX expression for the determinant, these operators and functions return "latexMatrix" objects, which can be printed, typeset, or used in further computations. Additionally, in many instances a "latexMatrix" object can be coerced to a numeric matrix by as.double(). We illustrate these computations in this section.

Consider, first, basic matrix arithmetic. Define some matrices: A and B are numeric, but C and D are symbolic, with elements $c_{ij}$ and $d_{ij}$:

(A <- latexMatrix(matrix(c(1, -3, 0, 1), 2, 2)))
## \begin{pmatrix} 
##  1 &  0 \\ 
## -3 &  1 \\ 
## \end{pmatrix}
(B <- latexMatrix(matrix(c(5, 3, -1, 4), 2, 2)))
## \begin{pmatrix} 
##  5 & -1 \\ 
##  3 &  4 \\ 
## \end{pmatrix}
(C <- latexMatrix(symbol="c", 2, 3))
## \begin{pmatrix} 
##   c_{11} & c_{12} & c_{13} \\ 
##   c_{21} & c_{22} & c_{23} \\ 
## \end{pmatrix}
(D <- latexMatrix(symbol="d", 2, 3))
## \begin{pmatrix} 
##   d_{11} & d_{12} & d_{13} \\ 
##   d_{21} & d_{22} & d_{23} \\ 
## \end{pmatrix}

The usual arithmetic operations work for these "latexMatrix" objects, and return LaTeX representations of the result:

A + B
## \begin{pmatrix}  
##  6 & -1 \\ 
##  0 &  5 \\ 
## \end{pmatrix}

Some other examples:

A - B
## \begin{pmatrix}  
## -4 &  1 \\ 
## -6 & -3 \\ 
## \end{pmatrix}
-A            # unary minus
## \begin{pmatrix}  
## -1 &  0 \\ 
##  3 & -1 \\ 
## \end{pmatrix}
2*A           # scalar multiplication
## \begin{pmatrix}  
## 2 \cdot 1    & 2 \cdot 0 \\ 
## 2 \cdot (-3) & 2 \cdot 1 \\ 
## \end{pmatrix}
C + D         # sum of symbolics
## \begin{pmatrix}  
## c_{11} + d_{11} & c_{12} + d_{12} & c_{13} + d_{13} \\ 
## c_{21} + d_{21} & c_{22} + d_{22} & c_{23} + d_{23} \\ 
## \end{pmatrix}
"\\pi" * C    # everything should be multiplied by pi
## \begin{pmatrix}  
## \pi \cdot c_{11} & \pi \cdot c_{12} & \pi \cdot c_{13} \\ 
## \pi \cdot c_{21} & \pi \cdot c_{22} & \pi \cdot c_{23} \\ 
## \end{pmatrix}

Typesetting the last result produces:

\begin{equation*}
\begin{pmatrix}
\pi \cdot c_{11} & \pi \cdot c_{12} & \pi \cdot c_{13} \\
\pi \cdot c_{21} & \pi \cdot c_{22} & \pi \cdot c_{23} \\
\end{pmatrix}
\end{equation*}

Some of these operations produce numeric results and so can be coerced to numeric matrices; for example:

as.numeric(A + B)
##      [,1] [,2]
## [1,]    6   -1
## [2,]    0    5

Using these tools, it is easy to typeset complete matrix equations, giving multiple arguments to Eqn():

Eqn("\\mathbf{A} + \\mathbf{B} =", A, " + ", B, " = ", A + B)

\begin{equation*}
\mathbf{A} + \mathbf{B} =
\begin{pmatrix}
1 & 0 \\
-3 & 1 \\
\end{pmatrix}
+
\begin{pmatrix}
5 & -1 \\
3 & 4 \\
\end{pmatrix}
=
\begin{pmatrix}
6 & -1 \\
0 & 5 \\
\end{pmatrix}
\end{equation*}

If the elements of a matrix are valid R variable names, then it is also possible to give these elements numeric values, as in

(M <- latexMatrix(matrix(letters[1:9], 3, 3)))
## \begin{pmatrix} 
## a & d & g \\ 
## b & e & h \\ 
## c & f & i \\ 
## \end{pmatrix}
as.double(-2*M, locals=c(a=1, b=0, c=-2, d=-4, e=7, f=-1, g=0, h=4, i=6))
##      [,1] [,2] [,3]
## [1,]   -2    8    0
## [2,]    0  -14   -8
## [3,]    4    2  -12

Matrix products and transpose

The product of two matrices is given by %*% for "latexMatrix" objects. When the arguments are both numeric, the numeric result is evaluated and presented in LaTeX form.

A %*% B
\begin{pmatrix}
5 & -1 \\
-12 & 7 \\
\end{pmatrix}

But the result is symbolic if either argument is symbolic:

A %*% latexMatrix("b", 2, 2)
\begin{pmatrix}
b_{11} & b_{12} \\
(-3) \cdot b_{11} + b_{21} & (-3) \cdot b_{12} + b_{22} \\
\end{pmatrix}

The LaTeX symbol for multiplication is a centered dot, \cdot, by default. This can be changed by setting the latexMultSymbol option; e.g., to use the $\times$ symbol instead, use:

options(latexMultSymbol = "\\times")

The transpose, t(), of "latexMatrix" objects is similarly straightforward. It transposes the matrix directly, as opposed to using the implicit superscript notation $\mathbf{D}^\top$ or $\mathbf{D}^\prime$.

D
## \begin{pmatrix} 
##   d_{11} & d_{12} & d_{13} \\ 
##   d_{21} & d_{22} & d_{23} \\ 
## \end{pmatrix}
t(D)
## \begin{pmatrix}  
## d_{11} & d_{21} \\ 
## d_{12} & d_{22} \\ 
## d_{13} & d_{23} \\ 
## \end{pmatrix}
M %*% t(D)
## \begin{pmatrix}  
## a \cdot d_{11} + d \cdot d_{12} + g \cdot d_{13} & a \cdot d_{21} + d \cdot d_{22} + g \cdot d_{23} \\ 
## b \cdot d_{11} + e \cdot d_{12} + h \cdot d_{13} & b \cdot d_{21} + e \cdot d_{22} + h \cdot d_{23} \\ 
## c \cdot d_{11} + f \cdot d_{12} + i \cdot d_{13} & c \cdot d_{21} + f \cdot d_{22} + i \cdot d_{23} \\ 
## \end{pmatrix}

The matrix product in the previous example typesets as

\begin{equation*}
\begin{pmatrix}
a \cdot d_{11} + d \cdot d_{12} + g \cdot d_{13} & a \cdot d_{21} + d \cdot d_{22} + g \cdot d_{23} \\
b \cdot d_{11} + e \cdot d_{12} + h \cdot d_{13} & b \cdot d_{21} + e \cdot d_{22} + h \cdot d_{23} \\
c \cdot d_{11} + f \cdot d_{12} + i \cdot d_{13} & c \cdot d_{21} + f \cdot d_{22} + i \cdot d_{23} \\
\end{pmatrix}
\end{equation*}

Determinants & inverse

The determinant is computed recursively by finding the cofactors of the first row of the matrix, i.e., $\det(\mathbf{A}_{n \times n}) = \sum_{j=1}^n a_{1j}\, C_{1j}$, where the cofactors $C_{1j}$ involve the determinants of the $(n-1) \times (n-1)$ minors $\mathbf{A}_{1j}$. (See the vignette Evaluation of determinants for an explanation.)

The method is applicable to a matrix of any order, although beyond an order-3 matrix, the resulting expression gets very long. A couple of examples:

A
## \begin{pmatrix} 
##  1 &  0 \\ 
## -3 &  1 \\ 
## \end{pmatrix}
## [1] "1 \\cdot 1 - 0 \\cdot (-3)"
M
## \begin{pmatrix} 
## a & d & g \\ 
## b & e & h \\ 
## c & f & i \\ 
## \end{pmatrix}
## [1] "a \\cdot (e \\cdot i - h \\cdot f) - d \\cdot (b \\cdot i - h \\cdot c) + g \\cdot (b \\cdot f - e \\cdot c)"

The determinant of a matrix is a single number, so determinant() returns the expression that computes it as a character string in LaTeX notation, using \cdot (entered as \\cdot in R) for multiplication.

Typesetting the output from the last command produces

\begin{equation*}
a \cdot (e \cdot i - h \cdot f) - d \cdot (b \cdot i - h \cdot c) + g \cdot (b \cdot f - e \cdot c)
\end{equation*}

The inverse of a square matrix is computed from its determinant and adjoint matrix; for example:

solve(A)
## \begin{pmatrix}  
## 1 & 0 \\ 
## 3 & 1 \\ 
## \end{pmatrix}

Specifying the argument simplify = TRUE to solve() puts the reciprocal of the determinant in front of the adjoint matrix and returns a LaTeX expression (a character string) rather than a "latexMatrix" object; for example:

solve(M, simplify=TRUE)
## [1] "\\frac{1}{a \\cdot (e \\cdot i - h \\cdot f) - d \\cdot (b \\cdot i - h \\cdot c) + g \\cdot (b \\cdot f - e \\cdot c)} \n\\begin{pmatrix}  \ne \\cdot i - h \\cdot f    & -(d \\cdot i - g \\cdot f) & d \\cdot h - g \\cdot e    \\\\ \n-(b \\cdot i - h \\cdot c) & a \\cdot i - g \\cdot c    & -(a \\cdot h - g \\cdot b) \\\\ \nb \\cdot f - e \\cdot c    & -(a \\cdot f - d \\cdot c) & a \\cdot e - d \\cdot b    \\\\ \n\\end{pmatrix}\n"

which typesets as

\begin{equation*}
\frac{1}{a \cdot (e \cdot i - h \cdot f) - d \cdot (b \cdot i - h \cdot c) + g \cdot (b \cdot f - e \cdot c)}
\begin{pmatrix}
e \cdot i - h \cdot f    & -(d \cdot i - g \cdot f) & d \cdot h - g \cdot e    \\
-(b \cdot i - h \cdot c) & a \cdot i - g \cdot c    & -(a \cdot h - g \cdot b) \\
b \cdot f - e \cdot c    & -(a \cdot f - d \cdot c) & a \cdot e - d \cdot b    \\
\end{pmatrix}
\end{equation*}

We can also supply values for the elements of the matrix to obtain a numeric inverse:

MASS::fractions(as.double(solve(M), 
                          locals=c(a=1, b=0, c=-2, d=-4, e=7, f=-1, g=0, h=4, i=6)))
##      [,1]  [,2]  [,3] 
## [1,] 23/39  4/13 -8/39
## [2,] -4/39  1/13 -2/39
## [3,]  7/39  3/26  7/78
MASS::fractions(det(as.double(M, 
                              locals=c(a=1, b=0, c=-2, d=-4, e=7, f=-1, g=0, h=4, i=6))))
## [1] 78

Linear hypotheses

As an example of the more general use of these functions, consider the general linear hypothesis used to test hypotheses and contrasts in linear models. We consider a multivariate regression model $\mathbf{Y} = \mathbf{X} \mathbf{B} + \mathbf{E}$ with $q$ regressors $\mathbf{x}_0, \mathbf{x}_1, \dots, \mathbf{x}_q$ (including the constant $\mathbf{x}_0$ for the intercept) and $p$ responses, $\mathbf{y}_1, \mathbf{y}_2, \dots, \mathbf{y}_p$.

Suppose we want to test the hypothesis that a subset of rows (predictors) and/or columns (responses) simultaneously have null effects. This can be expressed in the general linear test,
$$\mathcal{H}_0 : \mathbf{C}_{h \times q} \, \mathbf{B}_{q \times p} = \mathbf{0}_{h \times p} \;,$$
where $\mathbf{C}$ is a full-rank $h \le q$ hypothesis matrix of constants that selects subsets or linear combinations (contrasts) of the coefficients in $\mathbf{B}$ to be tested in an $h$ degree-of-freedom hypothesis.

For example, for a multivariate regression model with three responses $y_1, y_2, y_3$ and three predictors $x_1, x_2, x_3$, the coefficients $\mathbf{B}$ are given by the following latexMatrix() expression, where several arguments are used to: (a) start the row indices at zero (zero.based); (b) make the column indices a subscript of $y$ (prefix.col); (c) insert a comma between row/column subscripts (comma).

(B <- latexMatrix('\\beta', ncol = 3, nrow=4, 
                 comma=TRUE, prefix.col = 'y_',
                 zero.based=c(TRUE, FALSE)))
\begin{pmatrix}
\beta_{0,y_{1}} & \beta_{0,y_{2}} & \beta_{0,y_{3}} \\
\beta_{1,y_{1}} & \beta_{1,y_{2}} & \beta_{1,y_{3}} \\
\beta_{2,y_{1}} & \beta_{2,y_{2}} & \beta_{2,y_{3}} \\
\beta_{3,y_{1}} & \beta_{3,y_{2}} & \beta_{3,y_{3}} \\
\end{pmatrix}

We can test the hypothesis that neither $x_2$ nor $x_3$ contributes at all to predicting the $y$s in terms of the hypothesis that the coefficients for the corresponding rows of $\mathbf{B}$ are zero. To do this, we specify a two-row $\mathbf{C}$ matrix that simply selects those rows:

(C <- latexMatrix(matrix(c(0, 1, 0, 0,
                           0, 0, 1, 0), nrow=2, byrow=TRUE), 
                 matrix = "bmatrix"))
\begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
\end{bmatrix}

Then, the hypothesis to be tested can be expressed as follows, using Eqn() to wrap a set of LaTeX expressions and calls to matlib functions.

B0 <- latexMatrix('\\beta', ncol = 3, nrow=2, comma=TRUE, prefix.col = 'y_')
Eqn("\\mathcal{H}_0 : \\mathbf{C} \\mathbf{B} & = ",
    C, B,
    Eqn_newline(), 
    '& =',
    B0,
    "= \\mathbf{0}_{(2 \\times 3)}", 
    align=TRUE)

\begin{align*}
\mathcal{H}_0 : \mathbf{C} \mathbf{B} & =
\begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
\end{bmatrix}
\begin{pmatrix}
\beta_{0,y_{1}} & \beta_{0,y_{2}} & \beta_{0,y_{3}} \\
\beta_{1,y_{1}} & \beta_{1,y_{2}} & \beta_{1,y_{3}} \\
\beta_{2,y_{1}} & \beta_{2,y_{2}} & \beta_{2,y_{3}} \\
\beta_{3,y_{1}} & \beta_{3,y_{2}} & \beta_{3,y_{3}} \\
\end{pmatrix} \\
& =
\begin{pmatrix}
\beta_{1,y_{1}} & \beta_{1,y_{2}} & \beta_{1,y_{3}} \\
\beta_{2,y_{1}} & \beta_{2,y_{2}} & \beta_{2,y_{3}} \\
\end{pmatrix}
= \mathbf{0}_{(2 \times 3)}
\end{align*}

In this example, note that the R objects C, B and B0 are the results of latexMatrix() calls, i.e., "latexMatrix" objects containing LaTeX expressions.

Partitioned matrices, indexing & binding

Matrix notation sometimes portrays matrices whose elements are themselves matrices and vectors (rather than scalars) in order to show a higher-level structure. Such matrices, called partitioned or block matrices, have arithmetic and algebraic properties similar to those of ordinary matrices.

For example, the code below represents a $4 \times 4$ matrix $\mathbf{M}$, partitioned into $2 \times 2$ blocks labeled $\mathbf{M}_{i,j}$.

M <- latexMatrix("m", 4, 4)
Mpart <- latexMatrix('\\mathbf{M}', nrow = 2, ncol = 2, comma = TRUE)
Eqn("\\mathbf{M} =", Mpart,
    " =", M)
## 
## \begin{equation*}
## \mathbf{M} =\begin{pmatrix} 
##   \mathbf{M}_{1,1} & \mathbf{M}_{1,2} \\ 
##   \mathbf{M}_{2,1} & \mathbf{M}_{2,2} \\ 
## \end{pmatrix}
##  =\begin{pmatrix} 
##   m_{11} & m_{12} & m_{13} & m_{14} \\ 
##   m_{21} & m_{22} & m_{23} & m_{24} \\ 
##   m_{31} & m_{32} & m_{33} & m_{34} \\ 
##   m_{41} & m_{42} & m_{43} & m_{44} \\ 
## \end{pmatrix}
## \end{equation*}

This typesets as:

\begin{equation*}
\mathbf{M} =
\begin{pmatrix}
\mathbf{M}_{1,1} & \mathbf{M}_{1,2} \\
\mathbf{M}_{2,1} & \mathbf{M}_{2,2} \\
\end{pmatrix}
=
\begin{pmatrix}
m_{11} & m_{12} & m_{13} & m_{14} \\
m_{21} & m_{22} & m_{23} & m_{24} \\
m_{31} & m_{32} & m_{33} & m_{34} \\
m_{41} & m_{42} & m_{43} & m_{44} \\
\end{pmatrix}
\end{equation*}

Just as rows and columns can be selected using X[rows, cols] indexing for ordinary matrices, the same operator can be used for LaTeX matrices, e.g., M[rows, cols]. The following extracts the 4 sub-matrices of M:

M11 <- M[1:2, 1:2] |> print()
## \begin{pmatrix}  
## m_{11} & m_{12} \\ 
## m_{21} & m_{22} \\ 
## \end{pmatrix}
M12 <- M[1:2, 3:4]
M21 <- M[3:4, 1:2]
M22 <- M[3:4, 3:4]

The operations of joining matrices by rows, with rbind(), and by columns, with cbind(), are also defined for "latexMatrix" objects. This code puts the 4 pieces of $\mathbf{M}$ back together:

rbind(
  cbind(M11, M12),
  cbind(M21, M22)
)
## \begin{pmatrix}  
## m_{11} & m_{12} & m_{13} & m_{14} \\ 
## m_{21} & m_{22} & m_{23} & m_{24} \\ 
## m_{31} & m_{32} & m_{33} & m_{34} \\ 
## m_{41} & m_{42} & m_{43} & m_{44} \\ 
## \end{pmatrix}

And, of course, you can also format the sub-matrices together using Eqn():

Eqn(M11, M12,
    Eqn_newline(),
    M21, M22,
    align = TRUE)

\begin{align*}
\begin{pmatrix}
m_{11} & m_{12} \\
m_{21} & m_{22} \\
\end{pmatrix}
\begin{pmatrix}
m_{13} & m_{14} \\
m_{23} & m_{24} \\
\end{pmatrix} \\
\begin{pmatrix}
m_{31} & m_{32} \\
m_{41} & m_{42} \\
\end{pmatrix}
\begin{pmatrix}
m_{33} & m_{34} \\
m_{43} & m_{44} \\
\end{pmatrix}
\end{align*}

Finally, the partition() function alters the print representation of a matrix using horizontal and vertical lines separating the sub-matrices. It does this by re-wrapping the matrix in a LaTeX \begin{array} ... \end{array} environment, using | in {c c | c c} for the vertical lines and \hline for horizontal lines. This may be the simplest way to portray partitioned matrices in writing.

partition(M, rows=2, columns=2)
$$\begin{pmatrix} \begin{array}{c c | c c} m_{11} & m_{12} & m_{13} & m_{14}\\ m_{21} & m_{22} & m_{23} & m_{24}\\ \hline m_{31} & m_{32} & m_{33} & m_{34}\\ m_{41} & m_{42} & m_{43} & m_{44}\\ \end{array} \end{pmatrix}$$

Note that partition() can show more than one horizontal or vertical partition line (or none at all):

partition(M, rows=c(1,3), columns=c(1,3))
$$\begin{pmatrix} \begin{array}{c | c c | c} m_{11} & m_{12} & m_{13} & m_{14}\\ \hline m_{21} & m_{22} & m_{23} & m_{24}\\ m_{31} & m_{32} & m_{33} & m_{34}\\ \hline m_{41} & m_{42} & m_{43} & m_{44}\\ \end{array} \end{pmatrix}$$

Using this notation, we can illustrate matrix arithmetic with partitioned matrices. For example, assuming the partitions of the matrices $\mathbf{C}$ and $\mathbf{D}$ are of the same size, their sum is just the sum of corresponding sub-matrices:

C <- latexMatrix("\\mathbf{C}", 2, 2)
D <- latexMatrix("\\mathbf{D}", 2, 2)
Eqn("\\mathbf{C} + \\mathbf{D} =",
    C, "+", D, "=", 
    C + D)

\begin{equation*}
\mathbf{C} + \mathbf{D} =
\begin{pmatrix}
\mathbf{C}_{11} & \mathbf{C}_{12} \\
\mathbf{C}_{21} & \mathbf{C}_{22} \\
\end{pmatrix}
+
\begin{pmatrix}
\mathbf{D}_{11} & \mathbf{D}_{12} \\
\mathbf{D}_{21} & \mathbf{D}_{22} \\
\end{pmatrix}
=
\begin{pmatrix}
\mathbf{C}_{11} + \mathbf{D}_{11} & \mathbf{C}_{12} + \mathbf{D}_{12} \\
\mathbf{C}_{21} + \mathbf{D}_{21} & \mathbf{C}_{22} + \mathbf{D}_{22} \\
\end{pmatrix}
\end{equation*}

Kronecker products

The Kronecker product of two matrices, $\mathbf{A}_{m \times n} \otimes \mathbf{B}_{p \times q}$, is the $mp \times nq$ block matrix consisting of each element $a_{ij}$ multiplied by $\mathbf{B}$. This has many uses in statistics, among these the nice result (Bock 1975; Sunwoo 1996) that the design matrix $\mathbf{X}$ in the linear ANOVA model for factors A, B, C, … can be generated as the Kronecker product of their contrast matrices $\mathbf{C}_A, \mathbf{C}_B, \mathbf{C}_C, \dots$, each preceded by the unit vector $\mathbf{1}$.

$$ \mathbf{X}_{ABC\dots} = [\mathbf{1} \mid \mathbf{C}_A] \;\otimes\; [\mathbf{1} \mid \mathbf{C}_B] \;\otimes\; [\mathbf{1} \mid \mathbf{C}_C] \;\otimes\; \dots $$

This is implemented in the %O% operator and the kronecker() function in the package. For example,

A <- matrix(1:4, nrow = 2) |> 
  latexMatrix() |> print()
\begin{pmatrix}
1 & 3 \\
2 & 4 \\
\end{pmatrix}
B <- matrix(5:8, nrow = 2) |> 
  latexMatrix() |> print()
\begin{pmatrix}
5 & 7 \\
6 & 8 \\
\end{pmatrix}
kronecker(A, B) |> partition(rows = 2, columns = 2)
$$\begin{pmatrix} \begin{array}{c c | c c} 1 \cdot 5 & 1 \cdot 7 & 3 \cdot 5 & 3 \cdot 7\\ 1 \cdot 6 & 1 \cdot 8 & 3 \cdot 6 & 3 \cdot 8\\ \hline 2 \cdot 5 & 2 \cdot 7 & 4 \cdot 5 & 4 \cdot 7\\ 2 \cdot 6 & 2 \cdot 8 & 4 \cdot 6 & 4 \cdot 8\\ \end{array} \end{pmatrix}$$

You can also use Eqn() to illustrate the definition of the Kronecker product more explicitly. In the following, KAB is the product in symbolic form; as.double() is used to evaluate the result numerically.

Bmat <- latexMatrix('\\mathbf{B}', ncol=1, nrow=1)
KABmat <- kronecker(A, Bmat)
KAB <- kronecker(A, B)

Eqn("\\mathbf{A} \\otimes \\mathbf{B} = &",
    KABmat,
    Eqn_newline(space = '1.5ex'), "= & ",
    KAB |> partition(rows = 2, columns = 2),
    Eqn_newline(space = '1.5ex'), "= & ",
    latexMatrix(as.double(KAB)) |> partition(rows = 2, columns = 2),
    align = TRUE)

$$\begin{align*} \mathbf{A} \otimes \mathbf{B} = &\begin{pmatrix} 1 \cdot \mathbf{B} & 3 \cdot \mathbf{B} \\ 2 \cdot \mathbf{B} & 4 \cdot \mathbf{B} \\ \end{pmatrix} \\[1.5ex] = & \begin{pmatrix} \begin{array}{c c | c c} 1 \cdot 5 & 1 \cdot 7 & 3 \cdot 5 & 3 \cdot 7\\ 1 \cdot 6 & 1 \cdot 8 & 3 \cdot 6 & 3 \cdot 8\\ \hline 2 \cdot 5 & 2 \cdot 7 & 4 \cdot 5 & 4 \cdot 7\\ 2 \cdot 6 & 2 \cdot 8 & 4 \cdot 6 & 4 \cdot 8\\ \end{array} \end{pmatrix} \\[1.5ex] = & \begin{pmatrix} \begin{array}{c c | c c} 5 & 7 & 15 & 21\\ 6 & 8 & 18 & 24\\ \hline 10 & 14 & 20 & 28\\ 12 & 16 & 24 & 32\\ \end{array} \end{pmatrix}\end{align*}$$

matrix2latex

The matrix2latex() function can also generate symbolic equations from numeric or character matrices. For numeric matrices, it can round the values or show results as fractions.

A <- matrix(1:12, nrow=3, ncol=4, byrow = TRUE) / 6
matrix2latex(A, fractions = TRUE, brackets = "b") |> Eqn()

\begin{equation*}
\left[
\begin{array}{llll}
1/6 & 1/3 & 1/2 & 2/3 \\
5/6 & 1 & 7/6 & 4/3 \\
3/2 & 5/3 & 11/6 & 2 \\
\end{array}
\right]
\end{equation*}

Say we want to show the matrix $[\mathbf{A} \mid \mathbf{b}]$ involved in the system of equations $\mathbf{A} \mathbf{x} = \mathbf{b}$. Create these as a character matrix and vector:

A <- matrix(paste0('a_', 1:9), 3, 3, byrow = TRUE) |> print()
##      [,1]  [,2]  [,3] 
## [1,] "a_1" "a_2" "a_3"
## [2,] "a_4" "a_5" "a_6"
## [3,] "a_7" "a_8" "a_9"
b <- paste0("\\beta_", 1:3) |> print()
## [1] "\\beta_1" "\\beta_2" "\\beta_3"

Then use matrix2latex() on cbind(A, b) and pipe the result to Eqn():
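A sketch of that call (the brackets = "b" argument is an assumption here, chosen to match the square brackets in the output below):

matrix2latex(cbind(A, b), brackets = "b") |> Eqn()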

\begin{equation*}
\left[
\begin{array}{llll}
a_1 & a_2 & a_3 & \beta_1 \\
a_4 & a_5 & a_6 & \beta_2 \\
a_7 & a_8 & a_9 & \beta_3 \\
\end{array}
\right]
\end{equation*}

All the R tricks for creating and modifying matrices can be used in this way.

showEqn

showEqn() is designed to show a system of linear equations, $\mathbf{A x} = \mathbf{b}$, written out individually as a set of equations. With the option latex = TRUE it writes these out in LaTeX form.

Here, we create a character matrix A containing the elements of a $3 \times 3$ matrix, whose elements are of the form a_{ij}, and two character vectors, b and x.

A <- matrix(paste0("a_{", outer(1:3, 1:3, FUN  = paste0), "}"), 
            nrow=3) |> print()
##      [,1]     [,2]     [,3]    
## [1,] "a_{11}" "a_{12}" "a_{13}"
## [2,] "a_{21}" "a_{22}" "a_{23}"
## [3,] "a_{31}" "a_{32}" "a_{33}"
b <- paste0("b_", 1:3)
x <- paste0("x", 1:3)

showEqn(..., latex = TRUE) produces the three equations in a single \begin{array} ... \end{array} environment.

showEqn(A, b, vars = x, latex=TRUE)

If this line were run in an R console, it would produce:

\begin{array}{lllllll}
a_{11} \cdot x_1 &+& a_{12} \cdot x_2 &+& a_{13} \cdot x_3  &=&  b_1 \\ 
a_{21} \cdot x_1 &+& a_{22} \cdot x_2 &+& a_{23} \cdot x_3  &=&  b_2 \\ 
a_{31} \cdot x_1 &+& a_{32} \cdot x_2 &+& a_{33} \cdot x_3  &=&  b_3 \\ 
\end{array}

Evaluating the above code in an unnumbered LaTeX math environment via Eqn() gives the desired result:

showEqn(A, b, vars = x, latex=TRUE) |> Eqn()

\begin{equation*}
\begin{array}{lllllll}
a_{11} \cdot x_1 &+& a_{12} \cdot x_2 &+& a_{13} \cdot x_3 &=& b_1 \\
a_{21} \cdot x_1 &+& a_{22} \cdot x_2 &+& a_{23} \cdot x_3 &=& b_2 \\
a_{31} \cdot x_1 &+& a_{32} \cdot x_2 &+& a_{33} \cdot x_3 &=& b_3 \\
\end{array}
\end{equation*}

References

Bock, R. Darrell. 1975. Multivariate Statistical Methods in Behavioral Research. New York: McGraw Hill.
Sunwoo, Hasik. 1996. “Simple Algorithms about Kronecker Products in the Linear Model.” Linear Algebra and Its Applications 237–238 (April): 351–58. https://doi.org/10.1016/0024-3795(95)00675-3.