Preface
Notes for the instructor
Chapter 1. Basic Notions
1. Vector spaces
1.1. Examples.
1.2. Matrix notation
Exercises.
2. Linear combinations, bases.
2.1. Generating and linearly independent systems
Exercises.
3. Linear Transformations. Matrix–vector multiplication
3.1. Examples.
3.2. Linear transformations F^n → F^m. Matrix–column multiplication.
3.3. Linear transformations and generating sets.
3.4. Conclusions.
Exercises.
4. Linear transformations as a vector space
5. Composition of linear transformations and matrix multiplication.
5.1. Definition of matrix multiplication.
5.2. Motivation: composition of linear transformations.
5.3. Properties of matrix multiplication.
5.4. Transposed matrices and multiplication.
5.5. Trace and matrix multiplication
Exercises.
6. Invertible transformations and matrices. Isomorphisms
6.1. Identity transformation and identity matrix.
6.2. Invertible transformations.
Examples.
6.2.1. Properties of the inverse transformation.
6.3. Isomorphism. Isomorphic spaces.
Examples
6.4. Invertibility and equations.
Exercises.
7. Subspaces.
Exercises.
8. Application to computer graphics.
8.1. 2-dimensional manipulation.
8.2. 3-dimensional graphics
Exercises.
Chapter 2. Systems of linear equations
1. Different faces of linear systems.
2. Solution of a linear system. Echelon and reduced echelon forms
2.1. Row operations.
2.1.1. Row operations and multiplication by elementary matrices
2.2. Row reduction.
2.2.1. An example of row reduction.
2.3. Echelon form
Exercises.
3. Analyzing the pivots.
3.1. Corollaries about linear independence and bases. Dimension
3.2. Corollaries about invertible matrices
Exercises.
4. Finding the inverse of A by row reduction.
An Example.
Exercises.
5. Dimension. Finite-dimensional spaces.
5.1. Completing a linearly independent system to a basis
5.2. Subspaces of finite dimensional spaces
Exercises.
6. General solution of a linear system.
Exercises.
7. Fundamental subspaces of a matrix. Rank.
7.1. Computing fundamental subspaces and rank.
7.2. Explanation of the computation of bases in the fundamental subspaces.
7.2.1. The null space Ker A.
7.2.2. The column space Ran A.
7.2.3. The row space Ran A^T.
7.3. The Rank Theorem. Dimensions of fundamental subspaces.
7.4. Completion of a linearly independent system to a basis
Exercises.
8. Representation of a linear transformation in arbitrary bases. Change of coordinates formula.
8.1. Coordinate vector.
8.2. Matrix of a linear transformation.
8.3. Change of coordinate matrix.
8.3.1. An example: change of coordinates from the standard basis
8.3.2. An example: going through the standard basis
8.4. Matrix of a transformation and change of coordinates.
8.5. Case of one basis: similar matrices
Exercises.
Chapter 3. Determinants
1. Introduction.
2. What properties the determinant should have.
2.1. Linearity in each argument.
2.2. Preservation under “column replacement”
2.3. Antisymmetry.
2.4. Normalization.
3. Constructing the determinant.
3.1. Basic properties.
3.2. Properties of the determinant deduced from the basic properties.
3.3. Determinants of diagonal and triangular matrices.
3.4. Computing the determinant.
3.5. Determinants of a transpose and of a product. Determinants of elementary matrices.
3.6. Summary of properties of the determinant.
Exercises.
4. Formal definition. Existence and uniqueness of the determinant.
Exercises.
5. Cofactor expansion.
5.1. Cofactor formula for the inverse matrix
5.2. Some applications of the cofactor formula for the inverse.
Exercises.
6. Minors and rank.
7. Review exercises for Chapter 3.
Chapter 4. Introduction to spectral theory (eigenvalues and eigenvectors)
1. Main definitions
1.1. Eigenvalues, eigenvectors, spectrum
1.2. Finding eigenvalues: characteristic polynomials
1.3. Finding characteristic polynomial and eigenvalues of an abstract operator
1.4. Complex vs real spaces
1.5. Multiplicities of eigenvalues
1.6. Trace and determinant.
1.7. Eigenvalues of a triangular matrix
Exercises.
2. Diagonalization.
2.1. Preliminaries
2.2. Some motivations: functions of operators.
2.3. The case of n distinct eigenvalues
2.4. Bases of subspaces (AKA direct sums of subspaces).
2.5. Criterion of diagonalizability
2.6. Real factorization
2.7. Some examples
2.7.1. Real eigenvalues
2.7.2. Complex eigenvalues
2.7.3. A non-diagonalizable matrix
Exercises.
Chapter 5. Inner product spaces
1. Inner product in R^n and C^n. Inner product spaces.
1.1. Inner product and norm in R^n.
1.2. Inner product and norm in C^n.
1.3. Inner product spaces.
1.3.1. Examples
1.4. Properties of inner product
1.5. Norm. Normed spaces
Exercises.
2. Orthogonality. Orthogonal and orthonormal bases.
2.1. Orthogonal and orthonormal bases.
Exercises.
3. Orthogonal projection and Gram-Schmidt orthogonalization
3.1. Gram-Schmidt orthogonalization algorithm
3.2. An example.
3.3. Orthogonal complement. Decomposition X = E ⊕ E^⊥
Exercises.
4. Least squares solution. Formula for the orthogonal projection
4.1. Least squares solution
4.1.1. Geometric approach.
4.1.2. Normal equation.
4.2. Formula for the orthogonal projection.
4.3. An example: line fitting
4.3.1. An example.
4.4. Other examples: curves and planes.
4.4.1. An example: curve fitting
4.4.2. Plane fitting
Exercises.
5. Adjoint of a linear transformation. Fundamental subspaces revisited.
5.1. Adjoint matrices and adjoint operators.
5.1.1. Uniqueness of the adjoint.
5.1.2. Adjoint transformation in abstract setting.
5.1.3. Useful formulas.
5.2. Relation between fundamental subspaces.
5.3. The “essential” part of a linear transformation
Exercises.
6. Isometries and unitary operators. Unitary and orthogonal matrices.
6.1. Main definitions
6.2. Examples
6.3. Properties of unitary operators
6.4. Unitary equivalent operators
Exercises.
7. Rigid motions in R^n
Exercises.
8. Complexification and decomplexification
8.1. Decomplexification
8.1.1. Decomplexification of a vector space
8.1.2. Decomplexification of an inner product
8.2. Complexification
8.3. Introducing complex structure to a real space
8.3.1. An elementary way to introduce a complex structure
8.3.2. From elementary to abstract construction of complex structure
8.3.3. An abstract construction of complex structure
8.3.4. The abstract construction via the elementary one
Exercises.
Chapter 6. Structure of operators in inner product spaces.
1. Upper triangular (Schur) representation of an operator.
Exercises.
2. Spectral theorem for self-adjoint and normal operators.
Exercises.
3. Polar and singular value decompositions.
3.1. Positive definite operators. Square roots
3.2. Modulus of an operator. Singular values.
3.3. Singular values. Schmidt decomposition.
3.4. Matrix representation of the Schmidt decomposition. Singular value decomposition.
3.4.1. From singular value decomposition to the polar decomposition
Exercises.
4. Applications of the singular value decomposition.
4.1. Image of the unit ball
4.2. Operator norm of a linear transformation
4.3. Condition number of a matrix
4.4. Effective rank of a matrix
4.5. Moore–Penrose (pseudo)inverse.
Exercises.
5. Structure of orthogonal matrices
6. Orientation
6.1. Motivation
6.2. Formal definition
6.3. Continuous transformations of bases and orientation
Exercises.
Chapter 7. Bilinear and quadratic forms
1. Main definition
1.1. Bilinear forms on R^n
1.2. Quadratic forms on R^n
1.3. Quadratic forms on C^n
Exercises.
2. Diagonalization of quadratic forms
2.1. Orthogonal diagonalization
2.2. Non-orthogonal diagonalization
2.2.1. Diagonalization by completion of squares
2.2.2. Diagonalization using row/column operations
Exercises.
3. Sylvester's Law of Inertia
4. Positive definite forms. Minimax characterization of eigenvalues and Sylvester's criterion of positivity
4.1. Sylvester's criterion of positivity
4.2. Minimax characterization of eigenvalues
4.3. Some remarks
Exercises.
5. Positive definite forms and inner products
Chapter 8. Dual spaces and tensors
1. Dual spaces
1.1. Linear functionals and the dual space. Change of coordinates in the dual space
1.1.1. Change of coordinates formula
1.1.2. A uniqueness theorem
1.2. Second dual
1.3. Dual, a.k.a. biorthogonal bases
1.3.1. Abstract non-orthogonal Fourier decomposition
1.4. Examples of dual systems
1.4.1. Taylor formula
1.4.2. Lagrange interpolation
Exercises.
2. Dual of an inner product space
2.1. Riesz representation theorem
2.2. Is an inner product space a dual to itself?
2.3. Biorthogonal systems and orthonormal bases
3. Adjoint (dual) transformations and transpose. Fundamental subspace revisited (once more)
3.1. Dual (adjoint) transformation
3.1.1. Dual transformation for the case A : F^n → F^m
3.1.2. Dual transformation in the abstract setting
3.1.3. A coordinate-free way to define the dual transformation
3.2. Annihilators and relations between fundamental subspaces
Exercises.
4. What is the difference between a space and its dual?
4.1. Isomorphisms between X and X'
4.2. An example: velocities (differential operators) and differential forms as vectors and linear functionals
4.2.1. Velocities as vectors
4.2.2. Differential forms as linear functionals (covectors)
4.2.3. Differential operators as vectors
4.3. The case of a real inner product space
4.3.1. Einstein notation, metric tensor
4.3.2. Covariant and contravariant coordinates. Lowering and raising the indices
4.4. Conclusions
Exercises.
5. Multilinear functions. Tensors
5.1. Multilinear functions
5.1.1. Multilinear functions form a vector space
5.1.2. Dimension of L(V1,V2,...,Vp;V)
5.2. Tensor Products
5.2.1. Lifting a multilinear function to a linear transformation on the tensor product
5.2.2. Dual of a tensor product
5.3. Covariant and contravariant tensors
5.3.1. Linear transformations as tensors
5.3.2. Polylinear transformations as tensors
Exercises.
6. Change of coordinates formula for tensors.
6.1. Coordinate representation of a tensor.
6.2. Change of coordinate formulas in Einstein notation
6.3. Change of coordinates formula for tensors
Chapter 9. Advanced spectral theory
1. Cayley–Hamilton Theorem
Exercises.
2. Spectral Mapping Theorem
2.1. Polynomials of operators
2.2. Spectral Mapping Theorem
Exercises.
3. Generalized eigenspaces. Geometric meaning of algebraic multiplicity
3.1. Invariant subspaces
3.2. Generalized eigenspaces.
3.3. Geometric meaning of algebraic multiplicity
3.4. An important application
4. Structure of nilpotent operators
4.1. Cycles of generalized eigenvectors
4.2. Jordan canonical form of a nilpotent operator
4.3. Dot diagrams. Uniqueness of the Jordan canonical form
4.4. Computing a Jordan canonical basis
5. Jordan decomposition theorem
5.1. Remarks about computing Jordan canonical basis
Index
Linear Algebra Done Wrong

Sergei Treil
Department of Mathematics, Brown University
Copyright © Sergei Treil, 2004, 2009, 2011, 2014, 2017

This book is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License; see https://creativecommons.org/licenses/by-nc-nd/3.0/

Additional details: You can use this book free of charge for non-commercial purposes, in particular for studying and/or teaching. You can print paper copies of the book or its parts using either a personal printer or professional printing services. Instructors teaching a class (or their institutions) can provide students with printed copies of the book and charge the fee to cover the cost of printing; however, the students should have an option to use the free electronic version.
Preface

The title of the book sounds a bit mysterious. Why should anyone read this book if it presents the subject in a wrong way? What is particularly done “wrong” in the book?

Before answering these questions, let me first describe the target audience of this text. This book appeared as lecture notes for the course “Honors Linear Algebra”. It is supposed to be a first linear algebra course for mathematically advanced students. It is intended for a student who, while not yet very familiar with abstract reasoning, is willing to study more rigorous mathematics than what is presented in a “cookbook style” calculus type course. Besides being a first course in linear algebra, it is also supposed to be a first course introducing a student to rigorous proof and formal definitions; in short, to the style of modern theoretical (abstract) mathematics. The target audience explains the very specific blend of elementary ideas and concrete examples, which are usually presented in introductory linear algebra texts, with more abstract definitions and constructions typical of advanced books.

Another specific feature of the book is that it is not written by or for an algebraist. So I tried to emphasize the topics that are important for analysis, geometry, probability, etc., and did not include some traditional topics. For example, I am only considering vector spaces over the fields of real or complex numbers. Linear spaces over other fields are not considered at all, since I feel the time required to introduce and explain abstract fields would be better spent on some more classical topics, which will be required in other disciplines. And later, when the students study general fields in an abstract algebra course, they will understand that many of the constructions studied in this book will also work for general fields.
Also, I treat only finite-dimensional spaces in this book, and a basis always means a finite basis. The reason is that it is impossible to say something non-trivial about infinite-dimensional spaces without introducing convergence, norms, completeness, etc., i.e. the basics of functional analysis. And this is definitely a subject for a separate course (text). So, I do not consider infinite Hamel bases here: they are not needed in most applications to analysis and geometry, and I feel they belong in an abstract algebra course.

Notes for the instructor. There are several details that distinguish this text from standard advanced linear algebra textbooks. The first concerns the definitions of bases and of linearly independent and generating sets. In the book I first define a basis as a system with the property that any vector admits a unique representation as a linear combination. Linear independence and the generating property then appear naturally as the two halves of the basis property, one being uniqueness and the other being existence of the representation.

The reason for this approach is that I feel the concept of a basis is a much more important notion than linear independence: in most applications we really do not care about linear independence, we need a system to be a basis. For example, when solving a homogeneous system, we are not just looking for linearly independent solutions, but for the correct number of linearly independent solutions, i.e. for a basis in the solution space. And it is easy to explain to students why bases are important: they allow us to introduce coordinates, and work with R^n (or C^n) instead of working with an abstract vector space. Furthermore, we need coordinates to perform computations using computers, and computers are well adapted to working with matrices. Also, I really do not know a simple motivation for the notion of linear independence.
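In symbols, the definition just described reads as follows (a minimal sketch; the names X, v_k, and \alpha_k are my choice of notation, not fixed by the book). A system v_1, v_2, ..., v_n in a vector space X is a basis if every vector v in X admits a unique representation

\[
  v \;=\; \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n \;=\; \sum_{k=1}^{n} \alpha_k v_k .
\]

Existence of such a representation for every v is precisely the statement that the system is generating; uniqueness of the coefficients \alpha_k is precisely linear independence.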
Another detail is that I introduce linear transformations before teaching how to solve linear systems. A disadvantage is that we do not prove until Chapter 2 that only a square matrix can be invertible, as well as some other important facts. However, having already defined linear transformations allows a more systematic presentation of row reduction. Also, I spend a lot of time (two sections) motivating matrix multiplication. I hope that I explained well why such a strange-looking rule of multiplication is, in fact, a very natural one, and we really do not have any choice here.

Many important facts about bases, linear transformations, etc., like the fact that any two bases in a vector space have the same number of vectors, are proved in Chapter 2 by counting pivots in the row reduction. While most of these facts have “coordinate-free” proofs, formally not involving Gaussian elimination, a careful analysis of the proofs reveals that Gaussian elimination and the counting of pivots do not disappear; they are just hidden in most of the proofs. So, instead of presenting very elegant (but not easy for a beginner to understand) “coordinate-free” proofs, which are typically presented in advanced linear algebra books, we use “row reduction” proofs, more common for the “calculus type” texts. The advantage here is that it is easy to see the common idea behind all the proofs, and such proofs are easier to understand and to remember for a reader who is not very mathematically sophisticated.

I also present in Section 8 of Chapter 2 a simple and easy to remember formalism for the change of basis formula.

Chapter 3 deals with determinants. I spend a lot of time presenting a motivation for the determinant, and only much later give the formal definitions. Determinants are introduced as a way to compute volumes. It is shown that if we allow signed volumes, to make the determinant linear in each column (and at that point students should be well aware that the linearity helps a lot, and that allowing negative volumes is a very small price to pay for it), and assume some very natural properties, then we do not have any choice and arrive at the classical definition of the determinant. I would like to emphasize that initially I do not postulate antisymmetry of the determinant; I deduce it from other very natural properties of volume.
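In symbols, the properties referred to here (following the list in Sections 2.1, 2.2 and 2.4 of Chapter 3; writing the determinant D as a function of the columns v_1, ..., v_n of the matrix is my notational choice) are linearity in each column,

\[
  D(v_1, \dots, \alpha u + \beta w, \dots, v_n)
    \;=\; \alpha\, D(v_1, \dots, u, \dots, v_n) + \beta\, D(v_1, \dots, w, \dots, v_n),
\]

together with preservation under column replacement and normalization,

\[
  D(\dots, v_j + \alpha v_k, \dots) \;=\; D(\dots, v_j, \dots) \quad (k \neq j),
  \qquad D(I) \;=\; 1 .
\]

Antisymmetry (the sign change when two columns are interchanged) is then deduced from these rather than postulated.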
Note that, while formally in Chapters 1–3 I was dealing mainly with real spaces, everything there holds for complex spaces, and moreover even for spaces over arbitrary fields.

Chapter 4 is an introduction to spectral theory, and that is where the complex space C^n naturally appears. It was formally defined in the beginning of the book, and the definition of a complex vector space was also given there, but before Chapter 4 the main object was the real space R^n. Now the appearance of complex eigenvalues shows that for spectral theory the most natural space is the complex space C^n, even if we are initially dealing with real matrices (operators in real spaces). The main accent here is on diagonalization, and the notion of a basis of eigenspaces is also introduced.

Chapter 5, dealing with inner product spaces, comes after spectral theory because I wanted to do both the complex and the real cases simultaneously, and spectral theory provides a strong motivation for complex spaces. Other than the motivation, Chapters 4 and 5 do not depend on each other, and an instructor may do Chapter 5 first.

Although I present the Jordan canonical form in Chapter 9, I usually do not have time to cover it during a one-semester course. I prefer to spend more time on topics discussed in Chapters 6 and 7, such as diagonalization of normal and self-adjoint operators, polar and singular value decompositions, the structure of orthogonal matrices and orientation, and the theory of quadratic forms. I feel that these topics are more important for applications than the Jordan canonical form, despite the definite beauty of the latter. However, I added Chapter 9 so the instructor may skip some of the topics in Chapters 6 and 7 and present the Jordan Decomposition Theorem instead.

I also included (new for 2009) Chapter 8, dealing with dual spaces and tensors. I feel that the material there, especially the sections about tensors, is a bit too advanced for a first year linear algebra course, but some topics (for example, change of coordinates in the dual space) can easily be included in the syllabus. And it can be used as an introduction to tensors in a more advanced course. Note that the results presented in this chapter are true for an arbitrary field.

I have tried to present the material in the book rather informally, preferring intuitive geometric reasoning to formal algebraic manipulations, so to a purist the book may seem insufficiently rigorous. Throughout the book I usually (when it does not lead to confusion) identify a linear transformation and its matrix. This allows for a simpler notation, and I feel that overemphasizing the difference between a transformation and its matrix may confuse an inexperienced student. Only when the difference is crucial, for example when analyzing how the matrix of a transformation changes under a change of basis, do I use a special notation to distinguish between a transformation and its matrix.
Contents

Preface iii

Chapter 1. Basic Notions 1
§1. Vector spaces 1
§2. Linear combinations, bases. 6
§3. Linear Transformations. Matrix–vector multiplication 12
§4. Linear transformations as a vector space 17
§5. Composition of linear transformations and matrix multiplication. 19
§6. Invertible transformations and matrices. Isomorphisms 24
§7. Subspaces. 30
§8. Application to computer graphics. 31

Chapter 2. Systems of linear equations 39
§1. Different faces of linear systems. 39
§2. Solution of a linear system. Echelon and reduced echelon forms 40
§3. Analyzing the pivots. 46
§4. Finding A^{-1} by row reduction. 52
§5. Dimension. Finite-dimensional spaces. 54
§6. General solution of a linear system. 56
§7. Fundamental subspaces of a matrix. Rank. 59
§8. Representation of a linear transformation in arbitrary bases. Change of coordinates formula. 69

Chapter 3. Determinants 75
§1. Introduction. 75
§2. What properties the determinant should have. 76
§3. Constructing the determinant. 78
§4. Formal definition. Existence and uniqueness of the determinant. 86
§5. Cofactor expansion. 90
§6. Minors and rank. 96
§7. Review exercises for Chapter 3. 96

Chapter 4. Introduction to spectral theory (eigenvalues and eigenvectors) 99
§1. Main definitions 100
§2. Diagonalization. 105

Chapter 5. Inner product spaces 117
§1. Inner product in R^n and C^n. Inner product spaces. 117
§2. Orthogonality. Orthogonal and orthonormal bases. 125
§3. Orthogonal projection and Gram-Schmidt orthogonalization 129
§4. Least squares solution. Formula for the orthogonal projection 136
§5. Adjoint of a linear transformation. Fundamental subspaces revisited. 142
§6. Isometries and unitary operators. Unitary and orthogonal matrices. 146
§7. Rigid motions in R^n 151
§8. Complexification and decomplexification 154

Chapter 6. Structure of operators in inner product spaces. 163
§1. Upper triangular (Schur) representation of an operator. 163
§2. Spectral theorem for self-adjoint and normal operators. 165
§3. Polar and singular value decompositions. 171
§4. Applications of the singular value decomposition. 179
§5. Structure of orthogonal matrices 187
§6. Orientation 193

Chapter 7. Bilinear and quadratic forms 197
§1. Main definition 197
§2. Diagonalization of quadratic forms 200
§3. Sylvester's Law of Inertia 206
§4. Positive definite forms. Minimax characterization of eigenvalues and Sylvester's criterion of positivity 208