d2l-pdl

D2L.ai examples done using Perl Data Language (pdl)


Linear Algebra

By now, we can load datasets into tensors and manipulate these tensors with basic mathematical operations. To start building sophisticated models, we will also need a few tools from linear algebra. This section offers a gentle introduction to the most essential concepts, starting from scalar arithmetic and ramping up to matrix multiplication.

Scalars

Most everyday mathematics consists of manipulating numbers one at a time. Formally, we call these values scalars. For example, the temperature in Orlando, Florida is a balmy 72 degrees Fahrenheit. If you wanted to convert the temperature to Celsius you would evaluate the expression $c = \frac{5}{9}(f - 32)$, setting $f$ to 72. In this equation, the values 5, 9, and 32 are constant scalars. The variables $c$ and $f$ in general represent unknown scalars.
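We can check this conversion in the PDL shell (a hypothetical session; the variable name $f is our own):

pdl> $f = pdl(72)
pdl> print 5/9 * ($f - 32)

This prints approximately 22.22, the temperature in degrees Celsius.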

We denote scalars by ordinary lowercase letters (e.g., $x$, $y$, and $z$) and the space of all (continuous) real-valued scalars by $\mathbb{R}$. For expedience, we will skip past rigorous definitions of spaces: just remember that the expression $x \in \mathbb{R}$ is a formal way to say that $x$ is a real-valued scalar. The symbol $\in$ (pronounced “in”) denotes membership in a set. For example, $x, y \in \{0, 1\}$ indicates that $x$ and $y$ are variables that can only take values 0 or 1.

Scalars are implemented as tensors that contain only one element. Below, we assign two scalars and perform the familiar addition, multiplication, division, and exponentiation operations.

pdl> $x = ones(1) * 3.0
pdl> $y = ones(1) * 2.0
pdl> print $x, $y
[3] [2]
pdl> print $x + $y, $x * $y, $x / $y, $x ** $y
[5] [6] [1.5] [9]

Vectors

For current purposes, you can think of a vector as a fixed-length array of scalars. As with their code counterparts, we call these scalars the elements of the vector (synonyms include entries and components). When vectors represent examples from real-world datasets, their values hold some real-world significance. For example, if we were training a model to predict the risk of a loan defaulting, we might associate each applicant with a vector whose components correspond to quantities like their income, length of employment, or number of previous defaults. If we were studying the risk of heart attack, each vector might represent a patient and its components might correspond to their most recent vital signs, cholesterol levels, minutes of exercise per day, etc. We denote vectors by bold lowercase letters (e.g., $\mathbf{x}$, $\mathbf{y}$, and $\mathbf{z}$).

Vectors are implemented as 1st-order tensors. In general, such tensors can have arbitrary lengths, subject to memory limitations. Caution: in Perl (and in PDL), as in most reasonable programming languages, vector indices start at 0 (zero-based indexing), whereas in linear algebra subscripts begin at 1 (one-based indexing); this document uses both conventions, each in its own context.

pdl> $x = sequence(3)
pdl> print $x
[0 1 2]

We can refer to an element of a vector by using a subscript. For example, $x_2$ denotes the second element of $\mathbf{x}$. Since $x_2$ is a scalar, we do not bold it. By default, we visualize vectors by stacking their elements vertically:

$$\mathbf{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}.$$

Here $x_1, \ldots, x_n$ are elements of the vector. Later on, we will distinguish between such column vectors and row vectors whose elements are stacked horizontally. Recall that we access a tensor’s elements via indexing.

pdl> print $x(2)
[2]

To indicate that a vector contains $n$ elements, we write $\mathbf{x} \in \mathbb{R}^n$. Formally, we call $n$ the dimensionality of the vector. In code, this corresponds to the tensor’s length, accessible via the length function in PDL. For a 1st-order tensor, the output of length is identical to that of dims, the generic function for retrieving dimensions. The return value of dims is a list that indicates a tensor’s length along each dimension. Tensors with just one dimension have shapes with just one element. We can also use the shape function to return the dimensions as a PDL object.

pdl> print $x->length
3
pdl> print $x->dims
3
pdl> print $x->shape
[3]

Oftentimes, the word “dimension” gets overloaded to mean both the number of axes and the length along a particular dimension. To avoid this confusion, we use order to refer to the number of axes and dimensionality exclusively to refer to the number of components.
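We can see this distinction in code (a short sketch using PDL’s ndims and dim methods; $M is our own name): the order is the number of dimensions, while the dimensionality along a given dimension is the length there.

pdl> $M = zeroes(4, 3)
pdl> print $M->ndims
2
pdl> print $M->dim(0)
4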

Matrices

Just as scalars are 0th-order tensors and vectors are 1st-order tensors, matrices are 2nd-order tensors. We denote matrices by bold capital letters (e.g., $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$), and represent them in code by tensors with two axes. The expression $\mathbf{A} \in \mathbb{R}^{m \times n}$ indicates that a matrix $\mathbf{A}$ contains $m \times n$ real-valued scalars, arranged as $m$ rows and $n$ columns. When $m = n$, we say that a matrix is square. Visually, we can illustrate any matrix as a table. To refer to an individual element, we subscript both the row and column indices, e.g., $a_{ij}$ is the value that belongs to $\mathbf{A}$’s $i$th row and $j$th column:

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}.$$

In code, we represent a matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ by a 2nd-order tensor with shape ($m$, $n$). We can convert any appropriately sized $m \times n$ tensor into an $m \times n$ matrix by passing the desired shape to reshape. Recall that PDL requires the order of the arguments to be swapped, since it uses column-major form:

pdl> $A = sequence(6)->reshape(2,3)
pdl> print $A
[
 [0 1]
 [2 3]
 [4 5]
]

Sometimes we want to flip the axes. When we exchange a matrix’s rows and columns, the result is called its transpose. Formally, we signify a matrix $\mathbf{A}$’s transpose by $\mathbf{A}^\top$, and if $\mathbf{B} = \mathbf{A}^\top$, then $b_{ij} = a_{ji}$ for all $i$ and $j$. Thus, the transpose of an $m \times n$ matrix is an $n \times m$ matrix:

$$\mathbf{A}^\top = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}.$$

In code, we can access any matrix’s transpose using the transpose function as shown below:

pdl> print $A->transpose
[
 [0 2 4]
 [1 3 5]
]

Symmetric matrices are the subset of square matrices that are equal to their own transposes: $\mathbf{A} = \mathbf{A}^\top$. The following matrix is symmetric:

pdl> $A = pdl([[1, 2, 3], [2, 0, 4], [3, 4, 5]])
pdl> print $A == $A->transpose

[
 [1 1 1]
 [1 1 1]
 [1 1 1]
]

Matrices are useful for representing datasets. Typically, rows correspond to individual records and columns correspond to distinct attributes.

Tensors

While you can go far in your machine learning journey with only scalars, vectors, and matrices, eventually you may need to work with higher-order tensors. Tensors give us a generic way of describing extensions to $n$th-order arrays. We call software objects of the tensor class “tensors” precisely because they too can have arbitrary numbers of axes. While it may be confusing to use the word tensor for both the mathematical object and its realization in code, our meaning should usually be clear from context. We denote general tensors by capital letters with a special font face (e.g., $\mathsf{X}$, $\mathsf{Y}$, and $\mathsf{Z}$) and their indexing mechanism (e.g., $x_{ijk}$ and $[\mathsf{X}]_{1, 2i-1, 3}$) follows naturally from that of matrices.

Tensors will become more important when we start working with images. Each image arrives as a 3rd-order tensor with axes corresponding to the height, width, and channel. At each spatial location, the intensities of each color (red, green, and blue) are stacked along the channel. Furthermore, a collection of images is represented in code by a 4th-order tensor, where distinct images are indexed along the first dimension. Higher-order tensors are constructed, as were vectors and matrices, by growing the number of shape components.
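For instance, a batch of sixteen 28×28 RGB images could be held in a 4th-order tensor (a hypothetical example; the ordering of the dimensions below is just one possible convention):

pdl> $batch = zeroes(3, 28, 28, 16)
pdl> print $batch->shape
[3 28 28 16]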

NOTE: In PDL, the order of the parameters to reshape is the reverse of that in the Python equivalent.

pdl> print sequence(24)->reshape(4,3,2)
[
 [
  [ 0  1  2  3]
  [ 4  5  6  7]
  [ 8  9 10 11]
 ]
 [
  [12 13 14 15]
  [16 17 18 19]
  [20 21 22 23]
 ]
]

Basic Properties of Tensor Arithmetic

Scalars, vectors, matrices, and higher-order tensors all have some handy properties. For example, elementwise operations produce outputs that have the same shape as their operands.

pdl> $A = sequence(6)->reshape(3,2)
### Assign a copy of $A to $B by allocating new memory
pdl> $B = $A->copy
pdl> print $A, $A+$B

[
 [0 1 2]
 [3 4 5]
]

[
 [ 0  2  4]
 [ 6  8 10]
]

The elementwise product of two matrices is called their Hadamard product (denoted $\odot$). We can spell out the entries of the Hadamard product of two matrices $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{m \times n}$:

$$\mathbf{A} \odot \mathbf{B} = \begin{bmatrix} a_{11} b_{11} & a_{12} b_{12} & \cdots & a_{1n} b_{1n} \\ a_{21} b_{21} & a_{22} b_{22} & \cdots & a_{2n} b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} b_{m1} & a_{m2} b_{m2} & \cdots & a_{mn} b_{mn} \end{bmatrix}.$$

pdl> print $A * $B

[
 [ 0  1  4]
 [ 9 16 25]
]

Adding or multiplying a scalar and a tensor produces a result with the same shape as the original tensor. Here, each element of the tensor is added to (or multiplied by) the scalar.

pdl> $a = 2
pdl> $X = sequence(24)->reshape(4,3,2)
pdl> print $a + $X, ($a * $X)->shape

[
 [
  [ 2  3  4  5]
  [ 6  7  8  9]
  [10 11 12 13]
 ]
 [
  [14 15 16 17]
  [18 19 20 21]
  [22 23 24 25]
 ]
]
 [4 3 2]

Reduction

Often, we wish to calculate the sum of a tensor’s elements. To express the sum of the elements in a vector $\mathbf{x}$ of length $n$, we write $\sum_{i=1}^{n} x_i$. There is a simple function for it:

pdl> $x = sequence(3)
pdl> print $x, $x->sum
[0 1 2] 3

To express sums over the elements of tensors of arbitrary shape, we simply sum over all its axes. For example, the sum of the elements of an $m \times n$ matrix $\mathbf{A}$ could be written $\sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}$.

pdl> print $A

[
 [0 1 2]
 [3 4 5]
]

pdl> print $A->shape, $A->sum
[3 2] 15

By default, invoking the sum function reduces a tensor along all of its axes, eventually producing a scalar. The sumover function instead reduces along a single dimension: the first one (dimension 0). To sum the elements along each row, we call sumover with no arguments. Since the input matrix reduces along dimension 0 to generate the output vector, this dimension is missing from the shape of the output.

pdl> print $A->sumover, $A->sumover->shape
[3 12] [2]

If we want to reduce along the columns, we first rearrange the dimensions with the mv function and then call sumover on the result. Here we move dimension 0 to position 1 and invoke sumover on the result.

pdl> print $A->mv(0,1)->sumover
[3 5 7]
## what the move looks like
pdl> print $A->mv(0,1)

[
 [0 3]
 [1 4]
 [2 5]
]

Another way is to use transpose with sumover.

pdl> print $A->transpose->sumover
[3 5 7]

Reducing a matrix along both rows and columns via summation is equivalent to summing up all the elements of the matrix.

pdl> print $A->sumover
[3 12]
pdl> print $A->sumover->sum
15

A related quantity is the mean, also called the average. We calculate the mean by dividing the sum by the total number of elements. Because computing the mean is so common, it gets a dedicated library function that works analogously to sum. In PDL this function is avg. Likewise, the function average calculates the mean along the first dimension (combine it with mv or transpose to reduce along other dimensions).

pdl> print $A->avg
2.5
pdl> print $A->average
[1 4]
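As a sanity check, the mean is the sum divided by the total element count, which the nelem method returns:

pdl> print $A->sum / $A->nelem
2.5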

Non-Reduction Sum

Sometimes it can be useful to keep the number of dimensions unchanged when invoking the function for calculating the sum or mean. This matters when we want to use the broadcast mechanism.

pdl> $A = pdl([[1, 2, 3], [2, 0, 4], [3, 4, 5]])
pdl> print $A
[
 [1 2 3]
 [2 0 4]
 [3 4 5]
]
pdl> print $A->sumover
[6 6 12]
pdl> $sumA = $A->sumover->transpose
pdl> print $sumA
[
 [ 6]
 [ 6]
 [12]
]

For instance, since $sumA keeps its two axes after summing each row, we can divide $A by $sumA with broadcasting to create a matrix where each row sums up to 1.

pdl> print $A/$sumA

[
 [       0.16666667        0.33333333               0.5]
 [       0.33333333                 0        0.66666667]
 [             0.25        0.33333333        0.41666667]
]

If we want to calculate the cumulative sum of elements of $A along some dimension, say dimension 0 (row by row), we can call the cumusumover function. By design, this function does not reduce the input tensor along any dimension.

pdl> print $A->cumusumover
[
 [ 1  3  6]
 [ 2  2  6]
 [ 3  7 12]
]

Dot Products

So far, we have only performed elementwise operations, sums, and averages. And if this was all we could do, linear algebra would not deserve its own section. Fortunately, this is where things get more interesting. One of the most fundamental operations is the dot product. Given two vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, their dot product $\mathbf{x}^\top \mathbf{y}$ (also known as the inner product, $\langle \mathbf{x}, \mathbf{y} \rangle$) is a sum over the products of the elements at the same position: $\mathbf{x}^\top \mathbf{y} = \sum_{i=1}^{d} x_i y_i$.


In PDL, the dot product of two vectors can be computed with the inner function, reflecting its other name, the inner product.

pdl> $x = sequence(3)
pdl> $y = ones(3)
pdl> print inner($x,$y)
3

Equivalently, we can calculate the dot product of two vectors by performing an elementwise multiplication followed by a sum:

pdl> print sum($x * $y)
3

Dot products are useful in a wide range of contexts. For example, given some set of values, denoted by a vector $\mathbf{x} \in \mathbb{R}^n$, and a set of weights, denoted by $\mathbf{w} \in \mathbb{R}^n$, the weighted sum of the values in $\mathbf{x}$ according to the weights $\mathbf{w}$ could be expressed as the dot product $\mathbf{x}^\top \mathbf{w}$. When the weights are nonnegative and sum to 1, i.e., $\sum_{i=1}^{n} w_i = 1$, the dot product expresses a weighted average. After normalizing two vectors to have unit length, the dot product expresses the cosine of the angle between them. Later in this section, we will formally introduce this notion of length.
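For instance, a weighted average can be expressed with inner (a small made-up example; the values and weights are our own):

pdl> $vals = pdl([10, 20, 30])
pdl> $w = pdl([0.2, 0.3, 0.5])
pdl> print inner($vals, $w)
23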

Matrix–Vector Products

Now that we know how to calculate dot products, we can begin to understand the product between an m×n matrix A and an n-dimensional vector x. To start off, we visualize our matrix in terms of its row vectors

$$\mathbf{A} = \begin{bmatrix} \mathbf{a}_1^\top \\ \mathbf{a}_2^\top \\ \vdots \\ \mathbf{a}_m^\top \end{bmatrix},$$

where each $\mathbf{a}_i^\top \in \mathbb{R}^n$ is a row vector representing the $i$th row of the matrix $\mathbf{A}$.

The matrix–vector product $\mathbf{A}\mathbf{x}$ is simply a column vector of length $m$, whose $i$th element is the dot product $\mathbf{a}_i^\top \mathbf{x}$:

$$\mathbf{A}\mathbf{x} = \begin{bmatrix} \mathbf{a}_1^\top \\ \mathbf{a}_2^\top \\ \vdots \\ \mathbf{a}_m^\top \end{bmatrix} \mathbf{x} = \begin{bmatrix} \mathbf{a}_1^\top \mathbf{x} \\ \mathbf{a}_2^\top \mathbf{x} \\ \vdots \\ \mathbf{a}_m^\top \mathbf{x} \end{bmatrix}.$$

We can think of multiplication with a matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ as a transformation that projects vectors from $\mathbb{R}^n$ to $\mathbb{R}^m$. These transformations are remarkably useful. For example, we can represent rotations as multiplications by certain square matrices. Matrix–vector products also describe the key calculation involved in computing the outputs of each layer in a neural network given the outputs from the previous layer.
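To make the rotation example concrete, a 90-degree counterclockwise rotation in the plane is a matrix–vector product (a hypothetical example; $R and $v are our own names, and inner is the dot-product function introduced above, applied row by row):

pdl> $R = pdl([[0, -1], [1, 0]])
pdl> $v = pdl([1, 0])
pdl> print inner($R, $v)
[0 1]

The point (1, 0) on the x-axis is mapped to (0, 1) on the y-axis.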

To express a matrix–vector product in code, we use the same inner function. The operation is inferred based on the type of the arguments. Note that the number of columns of $A (its length along dimension 0 in PDL) must be the same as the dimension of $x (its length).

pdl> $x = sequence(3)
pdl> print $x
[0 1 2]
pdl> $A = pdl([[1, 2, 3], [2, 0, 4], [3, 4, 5]])
pdl> print $A
[
 [1 2 3]
 [2 0 4]
 [3 4 5]
]
pdl> print $A->shape, $x->shape
[3 3] [3]
pdl> print inner($A, $x)
[8 8 14]

Matrix–Matrix Multiplication

Once you have gotten the hang of dot products and matrix–vector products, then matrix–matrix multiplication should be straightforward.

Say that we have two matrices $\mathbf{A} \in \mathbb{R}^{n \times k}$ and $\mathbf{B} \in \mathbb{R}^{k \times m}$:

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1k} \\ a_{21} & a_{22} & \cdots & a_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nk} \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1m} \\ b_{21} & b_{22} & \cdots & b_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ b_{k1} & b_{k2} & \cdots & b_{km} \end{bmatrix}.$$

Let $\mathbf{a}_i^\top \in \mathbb{R}^k$ denote the row vector representing the $i$th row of the matrix $\mathbf{A}$ and let $\mathbf{b}_j \in \mathbb{R}^k$ denote the column vector from the $j$th column of the matrix $\mathbf{B}$:

$$\mathbf{A} = \begin{bmatrix} \mathbf{a}_1^\top \\ \mathbf{a}_2^\top \\ \vdots \\ \mathbf{a}_n^\top \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} \mathbf{b}_1 & \mathbf{b}_2 & \cdots & \mathbf{b}_m \end{bmatrix}.$$

To form the matrix product $\mathbf{C} \in \mathbb{R}^{n \times m}$, we simply compute each element $c_{ij}$ as the dot product between the $i$th row of $\mathbf{A}$ and the $j$th column of $\mathbf{B}$, i.e., $\mathbf{a}_i^\top \mathbf{b}_j$:

$$\mathbf{C} = \mathbf{A}\mathbf{B} = \begin{bmatrix} \mathbf{a}_1^\top \\ \mathbf{a}_2^\top \\ \vdots \\ \mathbf{a}_n^\top \end{bmatrix} \begin{bmatrix} \mathbf{b}_1 & \mathbf{b}_2 & \cdots & \mathbf{b}_m \end{bmatrix} = \begin{bmatrix} \mathbf{a}_1^\top \mathbf{b}_1 & \mathbf{a}_1^\top \mathbf{b}_2 & \cdots & \mathbf{a}_1^\top \mathbf{b}_m \\ \mathbf{a}_2^\top \mathbf{b}_1 & \mathbf{a}_2^\top \mathbf{b}_2 & \cdots & \mathbf{a}_2^\top \mathbf{b}_m \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{a}_n^\top \mathbf{b}_1 & \mathbf{a}_n^\top \mathbf{b}_2 & \cdots & \mathbf{a}_n^\top \mathbf{b}_m \end{bmatrix}.$$

We can think of the matrix–matrix multiplication $\mathbf{A}\mathbf{B}$ as performing $m$ matrix–vector products or $m \times n$ dot products and stitching the results together to form an $n \times m$ matrix. In the following snippet, we perform matrix multiplication on $A and $B. Here, $A is a matrix with two rows and three columns, and $B is a matrix with three rows and four columns. After multiplication, we obtain a matrix with two rows and four columns. In PDL, this can be accomplished with the x operator or the matmult function.

pdl> $A = sequence(6)->reshape(3,2)
pdl> print $A
[
 [0 1 2]
 [3 4 5]
]
pdl> $B = ones(4,3)
pdl> print $B

[
 [1 1 1 1]
 [1 1 1 1]
 [1 1 1 1]
]
pdl> print matmult($A,$B)
[
 [ 3  3  3  3]
 [12 12 12 12]
]
pdl> print $A x $B
[
 [ 3  3  3  3]
 [12 12 12 12]
]

The term matrix–matrix multiplication is often simplified to matrix multiplication, and should not be confused with the Hadamard product.

Norms

Some of the most useful operators in linear algebra are norms. Informally, the norm of a vector tells us how big it is. For instance, the $\ell_2$ norm measures the (Euclidean) length of a vector. Here, we are employing a notion of size that concerns the magnitude of a vector’s components (not its dimensionality).

A norm is a function that maps a vector to a scalar and satisfies the following three properties:

  1. Given any vector $\mathbf{x}$, if we scale (all elements of) the vector by a scalar $\alpha \in \mathbb{R}$, its norm scales accordingly: $\|\alpha \mathbf{x}\| = |\alpha| \|\mathbf{x}\|$.
  2. For any vectors $\mathbf{x}$ and $\mathbf{y}$, norms satisfy the triangle inequality: $\|\mathbf{x} + \mathbf{y}\| \leq \|\mathbf{x}\| + \|\mathbf{y}\|$.
  3. The norm of a vector is nonnegative and it only vanishes if the vector is zero: $\|\mathbf{x}\| > 0$ for all $\mathbf{x} \neq \mathbf{0}$.

Many functions are valid norms and different norms encode different notions of size. The Euclidean norm that we all learned in elementary school geometry when calculating the hypotenuse of a right triangle is the square root of the sum of squares of a vector’s elements. Formally, this is called the $\ell_2$ norm and expressed as

$$\|\mathbf{x}\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}.$$

The method magnover calculates the $\ell_2$ norm in PDL. This is different from PDL’s norm function, which normalizes a vector to unit length. For matrices, you have to use the mnorm function available in the PDL::LinearAlgebra module; since mnorm works for vectors too, it may make sense to use it for both.

pdl> print pdl([3, -4])->magnover
5
pdl> use PDL::LinearAlgebra
pdl> print pdl([3, -4])->mnorm
5

The $\ell_1$ norm is also common and the associated measure is called the Manhattan distance. By definition, the $\ell_1$ norm sums the absolute values of a vector’s elements:

$$\|\mathbf{x}\|_1 = \sum_{i=1}^{n} |x_i|.$$

Compared to the $\ell_2$ norm, it is less sensitive to outliers. To compute the $\ell_1$ norm, we compose the absolute value with the sum operation.

pdl> print pdl([[3, -4]])->abs->sumover
[7]

Both the $\ell_2$ and $\ell_1$ norms are special cases of the more general $\ell_p$ norms:

$$\|\mathbf{x}\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}.$$
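From this definition we can compute, say, the $\ell_3$ norm by composing existing PDL functions (a sketch):

pdl> $x = pdl([3, -4])
pdl> print sum($x->abs ** 3) ** (1/3)

This prints the cube root of $27 + 64 = 91$, roughly 4.4979.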

In the case of matrices, matters are more complicated. After all, matrices can be viewed both as collections of individual entries and as objects that operate on vectors and transform them into other vectors. For instance, we can ask by how much longer the matrix–vector product $\mathbf{X}\mathbf{v}$ could be relative to $\mathbf{v}$. This line of thought leads to what is called the spectral norm. For now, we introduce the Frobenius norm, which is much easier to compute and defined as the square root of the sum of the squares of a matrix’s elements:

$$\|\mathbf{X}\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} x_{ij}^2}.$$

The Frobenius norm behaves as if it were an $\ell_2$ norm of a matrix-shaped vector. Invoking the following function will calculate the Frobenius norm of a matrix.

pdl> use PDL::LinearAlgebra
pdl> print ones(9,4)->mnorm
6

While we do not want to get too far ahead of ourselves, we already can plant some intuition about why these concepts are useful. In deep learning, we are often trying to solve optimization problems: maximize the probability assigned to observed data; maximize the revenue associated with a recommender model; minimize the distance between predictions and the ground truth observations; minimize the distance between representations of photos of the same person while maximizing the distance between representations of photos of different people. These distances, which constitute the objectives of deep learning algorithms, are often expressed as norms.

Discussion

In this section, we have reviewed all the linear algebra that you will need to understand a significant chunk of modern deep learning. There is a lot more to linear algebra, though, and much of it is useful for machine learning. For example, matrices can be decomposed into factors, and these decompositions can reveal low-dimensional structure in real-world datasets. There are entire subfields of machine learning that focus on using matrix decompositions and their generalizations to high-order tensors to discover structure in datasets and solve prediction problems. But this book focuses on deep learning. And we believe you will be more inclined to learn more mathematics once you have gotten your hands dirty applying machine learning to real datasets. So while we reserve the right to introduce more mathematics later on, we wrap up this section here.

If you are eager to learn more linear algebra, there are many excellent books and online resources. For a more advanced crash course, consider checking out Introduction to Linear Algebra by Strang (1993), Linear Algebra Review and Reference by Kolter (2008) and The Matrix Cookbook by Petersen & Pedersen (2008)(archive.org link to pdf).

To recap:

  * Scalars, vectors, matrices, and tensors have zero, one, two, and an arbitrary number of axes, respectively.
  * Tensors can be reduced along specific dimensions with functions such as sum, sumover, avg, and average.
  * The elementwise product of two matrices is the Hadamard product; by contrast, dot products, matrix–vector products, and matrix–matrix products are not elementwise operations and in general produce outputs with shapes different from those of their operands.
  * Norms, such as the $\ell_1$, $\ell_2$, and Frobenius norms, capture various notions of the magnitude of a vector or matrix.

Exercises

  1. Prove that the transpose of the transpose of a matrix is the matrix itself: $(\mathbf{A}^\top)^\top = \mathbf{A}$.
  2. Given two matrices $\mathbf{A}$ and $\mathbf{B}$, show that sum and transposition commute: $\mathbf{A}^\top + \mathbf{B}^\top = (\mathbf{A} + \mathbf{B})^\top$.
  3. Given any square matrix $\mathbf{A}$, is $\mathbf{A} + \mathbf{A}^\top$ always symmetric? Can you prove the result by using only the results of the previous two exercises?
  4. We defined the tensor X of shape (4, 3, 2) in this section. What is the output of length(X)? Write your answer without implementing any code, then check your answer using code.
  5. For a tensor X of arbitrary shape, does length(X) always correspond to the length of a certain dimension of X? What is that dimension?
  6. Run $A / $A->sum() and see what happens. Can you analyze the results?
  7. When traveling between two points in downtown Manhattan, what is the distance that you need to cover in terms of the coordinates, i.e., in terms of avenues and streets? Can you travel diagonally?
  8. Consider a tensor of shape (4, 3, 2). What are the shapes of the summation outputs along dimensions 0, 1, and 2?
  9. Feed a tensor with three or more dimensions to the mnorm function and observe its output. What does this function compute for tensors of arbitrary shape?
  10. Consider three large matrices, say $\mathbf{A} \in \mathbb{R}^{2^{10} \times 2^{16}}$, $\mathbf{B} \in \mathbb{R}^{2^{16} \times 2^{5}}$ and $\mathbf{C} \in \mathbb{R}^{2^{5} \times 2^{14}}$, initialized with Gaussian random variables. You want to compute the product $\mathbf{A}\mathbf{B}\mathbf{C}$. Is there any difference in memory footprint and speed, depending on whether you compute $(\mathbf{A}\mathbf{B})\mathbf{C}$ or $\mathbf{A}(\mathbf{B}\mathbf{C})$? Why?
  11. Consider three large matrices, say $\mathbf{A} \in \mathbb{R}^{2^{10} \times 2^{16}}$, $\mathbf{B} \in \mathbb{R}^{2^{16} \times 2^{5}}$ and $\mathbf{C} \in \mathbb{R}^{2^{5} \times 2^{16}}$. Is there any difference in speed depending on whether you compute $\mathbf{A}\mathbf{B}$ or $\mathbf{A}\mathbf{C}^\top$? Why? What changes if you initialize $\mathbf{C} = \mathbf{B}^\top$ without cloning memory? Why?
  12. Consider three matrices, say $\mathbf{A}, \mathbf{B}, \mathbf{C} \in \mathbb{R}^{100 \times 200}$. Construct a tensor with three axes by stacking $[\mathbf{A}, \mathbf{B}, \mathbf{C}]$. What is the dimensionality? Slice out the second coordinate of the third dimension to recover $\mathbf{B}$. Check that your answer is correct.