
    Lecture Notes 

    There will be reasonably full lecture notes for the module.  These will be released in 6–8 parts at intervals of 1–2 weeks.  The notes are heavily based on notes created by Peter Cameron, and I am indebted to him for making these available.  Each chapter will appear first as a draft.  We'll revise the notes as the module progresses, and a final version will appear when the relevant material has been covered.

    The lectures will provide some leisurely expansion of the more challenging parts of the syllabus, together with motivational material, examples, etc. 

    Relationship to Linear Algebra I

    • Particularly near the beginning of the module, we'll be reviewing material from Linear Algebra I, and we'll move briskly while doing this.
    • Sometimes we'll be expanding on material from Linear Algebra I, e.g., by supplying proofs.
    • Sometimes we'll be coming at familiar concepts from a different, often more abstract, direction.
    • Often we'll be exploring new territory.

    Format of the examination paper

    The rubric will be similar to last year's, i.e., "answer all five questions".

    Engagement monitoring 

    Engagement will be monitored in line with the School's Student Engagement Policy.

    Further reading

    If you would like to pursue this topic in further depth, I suggest reading Sheldon Axler, Linear Algebra Done Right (3rd edition), Springer-Verlag, 2015. ISBN 978-3-319-11079-0.

    Lecture log

    • Tuesday 27th September, 16:00–18:00. Administrative matters. Definitions: field, vector space. Terminology. Why this generality? Examples: vectors in R^2; more generally K^n for an arbitrary field K (verify a couple of the axioms for this example), functions S → R, polynomials of degree at most n − 1.  Definitions: what it means for a list of vectors to be linearly dependent, linearly independent, spanning, or to form a basis. Examples.  Span of a list of vectors.
    • Wednesday 28th September, 09:00–10:00. Finite dimensional vector space. Thinning out a dependent list of vectors (Lemma 1.12). Example.  Exchange Lemma (Lemma 1.13).
    • Tuesday 4th October, 16:00–18:00. Coordinate representation of a vector relative to a basis. Any n-dimensional vector space over a field K is isomorphic to (is essentially the same as) the vector space K^n. Translating between different bases, transition matrices and their properties (covered swiftly, as this is revision of material in Lin. Alg. I). Subspace of a vector space. Alternative characterisation as a subset closed under vector addition and scalar multiplication.
    • Wednesday 5th October, 09:00–10:00.  Examples of subspaces: the span of a set of vectors, intersection and sum of subspaces.  A non-example: union of subspaces. Relationship between the dimensions of U, W, U ∩ W and U + W (Theorem 1.24 in the draft notes).
    • Tuesday 11th October, 16:00–18:00.  The case when dim(U ∩ W) = 0 is of special interest.  In this case, every element of U + W is uniquely expressible as a sum of an element of U and an element of W. We call U + W the direct sum of U and W.  The direct sum of more than two subspaces, dimension of a direct sum of several subspaces. Matrix algebra (revision of Lin. Alg. I). Elementary row and column operations, and the corresponding elementary matrices. Row and column spaces, and row and column ranks. Lemma 2.9: Elementary operations preserve the row and column ranks (statement only).
    • Wednesday 12th October, 09:00–10:00. Proof of Lemma 2.9. Canonical form for equivalence. Theorem 2.10: Every matrix can be reduced using elementary row and column operations to a matrix in canonical form.  (The algorithm for accomplishing this reduction is left as a reading exercise.)  As its name suggests, this matrix is unique. Definition of rank.  Corollary: row rank, column rank and rank are equal.
    • Tuesday 18th October, 16:00–18:00.  Worked example of reduction to canonical form. For every matrix A there are invertible matrices P and Q such that PAQ is in the canonical form for equivalence. Worked example of the construction of P and Q. Definition of equivalence of matrices; equivalence is an equivalence relation. An n × n matrix is invertible iff it has rank n. Every invertible matrix is the product of elementary matrices. An invertible matrix can be reduced to the identity matrix by elementary row (or column) operations alone. Two matrices are equivalent iff they have the same rank. New topic: determinants. Sign of a permutation, examples. Leibniz formula for the determinant.
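    To make the reduction PAQ concrete, here is a minimal numerical sketch (my own illustration, not part of the official notes): a hand-picked rank-1 matrix A, with one elementary row matrix P and one elementary column matrix Q chosen so that PAQ is in the canonical form for equivalence. Python with numpy is assumed.

      import numpy as np

      # A hand-picked rank-1 example; reduce A to the canonical form for equivalence.
      A = np.array([[1., 2.],
                    [2., 4.]])

      # Elementary row operation R2 -> R2 - 2*R1, written as an elementary matrix P.
      P = np.array([[1., 0.],
                    [-2., 1.]])
      # Elementary column operation C2 -> C2 - 2*C1, written as an elementary matrix Q.
      Q = np.array([[1., -2.],
                    [0., 1.]])

      print(P @ A @ Q)                  # [[1. 0.], [0. 0.]]: an identity block of size rank(A)
      print(np.linalg.matrix_rank(A))   # 1, matching the single 1 in the canonical form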

    • Wednesday 19th October, 09:00–10:00.  Example: 3 × 3 determinant.  Three properties (D1)–(D3) that a function on square matrices might satisfy. (The eventual aim is to show that the determinant is the unique function satisfying (D1)–(D3).)  Proof that det(.), as defined by the Leibniz formula, satisfies (D1)–(D3). (To gain intuition, the proof for the (D2) part was preceded by a 3 × 3 example.)
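    As an unofficial supplement, the sketch below evaluates the Leibniz formula directly (summing sign(σ) times a product over each permutation σ) and compares the result with numpy's determinant; it also checks that swapping two rows flips the sign, one of the alternating-type behaviours behind (D1)–(D3) (the exact statements of (D1)–(D3) are in the notes). The 3 × 3 matrix is an arbitrary example of mine.

      import itertools
      import numpy as np

      def leibniz_det(A):
          """Determinant via the Leibniz formula: sum over permutations of sign * product."""
          n = len(A)
          total = 0.0
          for perm in itertools.permutations(range(n)):
              # sign(perm) = (-1)^(number of inversions)
              inversions = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
              prod = 1.0
              for i in range(n):
                  prod *= A[i][perm[i]]
              total += (-1) ** inversions * prod
          return total

      A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
      print(leibniz_det(A), np.linalg.det(A))   # both approximately 8.0

      B = A[[1, 0, 2], :]                       # swap the first two rows
      print(leibniz_det(B))                     # approximately -8.0: a row swap flips the sign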

    • Tuesday 25th October, 16:00–18:00. Theorem. There is a unique function on matrices satisfying (D1)–(D3); this function is det (statement only). First step: investigate the effect of elementary row operations on D, an arbitrary function satisfying (D1)–(D3). Finish with a case analysis according to whether A is invertible or not. Some corollaries: a square matrix A is invertible iff det(A) ≠ 0; effect of elementary row operations on the determinant; det(AB) = det(A)det(B). Everything we stated for rows can be stated for columns; det(A^T) = det(A). Definition: minor and cofactor. Laplace or cofactor expansion. (Proof of equivalence to the Leibniz formula omitted; see printed notes/Wikipedia/Q2 on assignment 5.) Example. Adjugate matrix. Theorem: A·Adj(A) = det(A)·I. (Note matrix multiplication on the left and scalar multiplication on the right!) Proof.
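    Here is a small unofficial check of the adjugate identity A·Adj(A) = det(A)·I, with the adjugate built as the transpose of the cofactor matrix (cofactors computed from minors, as in the lecture); the 3 × 3 example matrix is my own and numpy is assumed.

      import numpy as np

      def adjugate(A):
          """Adjugate = transpose of the cofactor matrix; cofactors come from minors."""
          n = A.shape[0]
          cof = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                  cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
          return cof.T

      A = np.array([[1., 2., 0.], [0., 1., 3.], [4., 0., 1.]])
      print(A @ adjugate(A))                # approximately det(A) * I, with det(A) = 25
      print(np.linalg.det(A) * np.eye(3))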

    • Wednesday 26th October, 09:00–10:00. Matrices with polynomial entries vs. polynomials with matrix coefficients. Characteristic polynomial p_A(x) of a matrix A. Example. Cayley-Hamilton Theorem: p_A(A) = O. Example. Proof. Short discussion on computing determinants in practice.
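    A quick unofficial numerical check of the Cayley-Hamilton theorem: numpy's poly routine returns the coefficients of the characteristic polynomial p_A(x) = det(xI − A) (monic, highest power first), and substituting the matrix A itself gives the zero matrix. The 2 × 2 example is my own.

      import numpy as np
      from numpy.linalg import matrix_power

      A = np.array([[2., 1.],
                    [0., 3.]])

      coeffs = np.poly(A)     # characteristic polynomial x^2 - 5x + 6  ->  [1., -5., 6.]
      print(coeffs)

      # Evaluate p_A(A); the constant term contributes a multiple of the identity matrix.
      p_of_A = sum(c * matrix_power(A, len(coeffs) - 1 - k) for k, c in enumerate(coeffs))
      print(p_of_A)           # the zero matrix, as Cayley-Hamilton predicts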

    • Tuesday 1st November, 16:00–18:00. Linear maps between vector spaces. Definition of linear map, image, kernel. The image and kernel of a linear map are subspaces. Rank-nullity Theorem. Proof. Representing a linear map relative to specified bases by a matrix. Proof that applying a linear map is equivalent to multiplying by this matrix. 
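    The Rank-nullity Theorem can be checked numerically on any matrix: for a map represented by an m × n matrix A, rank(A) + dim Ker(A) = n. In the unofficial sketch below (numpy assumed, example matrix mine), the nullity is read off from the number of negligible singular values of A.

      import numpy as np

      # A 3 x 4 matrix represents a linear map R^4 -> R^3.
      A = np.array([[1., 2., 0., 1.],
                    [0., 1., 1., 0.],
                    [1., 3., 1., 1.]])   # third row = first row + second row

      n_cols = A.shape[1]
      rank = np.linalg.matrix_rank(A)

      sigma = np.linalg.svd(A, compute_uv=False)
      nullity = n_cols - int(np.sum(sigma > 1e-10))

      print(rank, nullity, rank + nullity == n_cols)   # 2 2 True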

    • Wednesday 2nd November, 09:00–10:00. Definition of sum and product of linear maps. Relative to given bases, sum and product of linear maps correspond to sum and product of the corresponding matrices. Proof for the case of product. Example. Change of bases: how does the matrix representing a linear map change as the bases of the vector spaces change? Answer: the new matrix is related to the old by the relation of equivalence. Proof. Let α be a linear map V → W of rank r. There are bases for V and W relative to which α is represented by a matrix in the canonical form for equivalence (with r ones). Proof. Corollary: Let α be a linear map of rank r. Every matrix representing α has rank r.
    • Tuesday 15th November, 16:00–18:00. New topic: linear maps on a vector space. Definition of projection. For a projection π on V, it is the case that V is the direct sum of Im(π) and Ker(π). Proof. Converse. For several projections on V satisfying certain conditions, V is the direct sum of the images of the projections. Converse. Linear maps and matrices. Definition of similarity for matrices. Two matrices represent the same linear map (relative to different bases) if and only if they are similar.  Definitions: eigenvectors, eigenvalues, eigenspaces.
    • Wednesday 16th November, 11:00–12:00. Example. Definition: diagonalisable linear map. Lemma: a linear map on V is diagonalisable if and only if there is a basis of V consisting of eigenvectors. Example. Lemma: eigenvectors with distinct eigenvalues are linearly independent. Proof.
    • Tuesday 22nd November, 16:00–18:00.  Theorem: the following are equivalent for a linear map α on V: (a) α is diagonalisable, (b) V is the direct sum of eigenspaces, and (c) α is a linear combination of projections satisfying certain properties. Proof. Example. Proposition giving expressions for computing powers of matrices and, more generally, polynomials of matrices. Definitions of determinant, characteristic polynomial and minimal polynomial of a linear map. (A short computational sketch of diagonalisation and matrix powers appears after this log.)
    • Wednesday 23rd November, 09:00–10:00.  Demonstration that the above are well-defined. Example (missed out yesterday) of a linear map that is not diagonalisable. Proposition: the minimal polynomial divides the characteristic polynomial. Proof. Theorem: the following are equivalent: λ is an eigenvalue of α; λ is a root of the characteristic polynomial of α; λ is a root of the minimal polynomial of α. Proof.
    • Tuesday 29th November, 16:00–18:00. Theorem: α is diagonalisable if and only if its minimal polynomial is a product of distinct linear factors. (The proof is beyond the scope of the module.) Examples of matrices that are not diagonalisable, either because the field does not allow the characteristic polynomial to be decomposed into linear factors, or because the minimal polynomial has a repeated root. Brief treatment of the Jordan form. Theorem: over C, every matrix is similar to one in Jordan form. Definition of the trace of a matrix. Lemma: Tr(AB) = Tr(BA), and Tr(A) = Tr(B) whenever A and B are similar. Proof. Proposition: Let α be a linear map on a vector space V of dimension n. Then the coefficient of x^{n−1} in p_α(x) is −Tr(α), and the constant term of p_α(x) is (−1)^n det(α). If α is diagonalisable then the sum of the eigenvalues is Tr(α) and the product is det(α). Proof. (A numerical check of these trace facts appears in the sketches after this log.)
    • Wednesday 30th November, 09:00–10:00. New topic.  Definition: inner product, inner product space. (Restrict to vector spaces over the reals.) Lengths, angles. Cauchy-Schwarz inequality: (v·w)^2 ≤ (v·v)(w·w). Orthonormal basis. Nonzero pairwise orthogonal vectors are linearly independent.  Proof.   Every inner product space has an orthonormal basis.
    • Tuesday 6th December, 16:00–18:00. Proof of the theorem just stated using the Gram-Schmidt process. Example. Definition of adjoint linear map. Why is the definition sound? Explanation invoking the Riesz Representation Theorem. Proposition: If α is represented by matrix A relative to some orthonormal basis then the adjoint α* is represented by the transpose of A. Proof. Definition of self-adjoint and orthogonal maps. Corollary: α is self-adjoint iff A is symmetric, and α is orthogonal iff A^T A = A A^T = I. (A Gram-Schmidt sketch appears after this log.)
    • Wednesday 7th December, 09:00–10:00.  Theorem: The following are equivalent for a linear map α: (a) α is orthogonal, (b) α preserves the inner product, and (c) α maps any orthonormal basis to an orthonormal basis. Proof. Example: α is rotation clockwise by π/4.  The adjoint, α*, is rotation anticlockwise by π/4.  Note that α is an orthogonal map since α* = α^{-1}.  Note also that α preserves the inner product and maps orthonormal bases to orthonormal bases.  Corollary: If A represents an orthogonal linear map with respect to an orthonormal basis then the columns of A are themselves orthonormal.
    • Tuesday 13th December, 16:00–18:00. Definition: orthogonal complement. Proposition: V is the direct sum of U and the orthogonal complement of U. Definition: orthogonal projection. Proposition: correspondence between subspaces and orthogonal projections. Spectral Theorem.  Most of the proof.
    • Wednesday 14th December, 09:00–10:00. Completion of the proof. Alternative statement in terms of symmetric matrices. Example. Alternative statement in terms of projections. (A numerical illustration of the Spectral Theorem appears in the sketches after this log.)  End of the module.
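    Computational sketches

    The following short Python/numpy sketches are unofficial illustrations of some results from the log above; all example matrices are my own choices and nothing here is part of the assessed material. First, diagonalisation and powers of a matrix: the columns of P are eigenvectors, P^{-1}AP is diagonal, and A^k = P D^k P^{-1}.

      import numpy as np

      A = np.array([[4., 1.],
                    [2., 3.]])

      evals, P = np.linalg.eig(A)         # columns of P are eigenvectors of A
      D = np.linalg.inv(P) @ A @ P
      print(np.round(D, 10))              # diagonal, with the eigenvalues 5 and 2

      k = 5
      Ak = P @ np.diag(evals ** k) @ np.linalg.inv(P)
      print(np.allclose(Ak, np.linalg.matrix_power(A, k)))   # True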
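    Next, the trace facts: Tr(AB) = Tr(BA), similar matrices have equal trace, the coefficient of x^{n−1} in the characteristic polynomial is −Tr(A), and the constant term is (−1)^n det(A). Random matrices with a fixed seed stand in for arbitrary ones.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((3, 3))
      B = rng.standard_normal((3, 3))
      S = rng.standard_normal((3, 3))     # almost surely invertible

      print(np.isclose(np.trace(A @ B), np.trace(B @ A)))                  # True
      print(np.isclose(np.trace(np.linalg.inv(S) @ A @ S), np.trace(A)))   # True

      c = np.poly(A)      # monic characteristic polynomial, highest power first
      n = A.shape[0]
      print(np.isclose(c[1], -np.trace(A)))                                # True
      print(np.isclose(c[-1], (-1) ** n * np.linalg.det(A)))               # True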
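    Next, the Gram-Schmidt process: starting from a linearly independent list, subtract from each vector its components along the previously constructed vectors and normalise. The three starting vectors are an arbitrary basis of R^3 chosen for illustration.

      import numpy as np

      def gram_schmidt(vectors):
          """Classical Gram-Schmidt: returns an orthonormal list with the same span."""
          ortho = []
          for v in vectors:
              w = v.astype(float)
              for u in ortho:
                  w = w - np.dot(u, v) * u      # remove the component of v along u
              ortho.append(w / np.linalg.norm(w))
          return np.array(ortho)

      basis = [np.array([1., 1., 0.]), np.array([1., 0., 1.]), np.array([0., 1., 1.])]
      Q = gram_schmidt(basis)
      print(np.round(Q @ Q.T, 10))              # the 3 x 3 identity: the rows are orthonormal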
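    Finally, the Spectral Theorem in matrix form: a real symmetric matrix can be diagonalised by an orthogonal matrix. numpy's eigh is used here; the symmetric matrix is an arbitrary example.

      import numpy as np

      A = np.array([[2., 1., 0.],
                    [1., 2., 1.],
                    [0., 1., 2.]])           # real symmetric

      evals, Q = np.linalg.eigh(A)           # Q has orthonormal eigenvector columns

      print(np.allclose(Q.T @ Q, np.eye(3)))             # True: Q is orthogonal
      print(np.round(Q.T @ A @ Q, 10))                   # diagonal matrix of eigenvalues
      print(np.allclose(Q @ np.diag(evals) @ Q.T, A))    # True: A = Q D Q^T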
