Linear Algebra Notes
Notes on some linear algebra topics. Sometimes I find things are just stated
as being obvious, for example,
the dot product of two orthogonal vectors
is zero. Well... why is this? The result is pretty simple, but
nonetheless it took a little while to think it through properly.
Hence, these notes... they started off as my musings on the "why", even if the
"why" might be obvious to the rest of the world!
A lot of these notes are now just based on the amazing Khan Academy linear algebra tutorials. In fact, I'd say the vast majority are... they are literally just notes on Sal's lectures!
Another resource that is really worth watching is 3Blue1Brown's series "The Essence of Linear Algebra".
- What isn't a vector space?, Mathematics Stack Exchange.
- Things that are/aren't vector spaces, Dept. of Mathematics and Statistics, University of New Mexico.
- Vector Spaces, physics.miami.edu.
- Vector space, Wikipedia.
A vector is an element of a vector space. A vector space, also known as a linear space, is a set $V$ which has addition and scalar multiplication defined for it and satisfies the following for any vectors $\vec{u}$, $\vec{v}$, and $\vec{w}$ and scalars $c$ and $d$:
|1|Closed under addition:|$\vec{u} + \vec{v} \in V$|
|2|Addition is commutative:|$\vec{u} + \vec{v} = \vec{v} + \vec{u}$|
|3|Addition is associative:|$(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$|
|4|Additive identity (zero vector) exists:|$\vec{v} + \vec{0} = \vec{v}$|
|5|Additive inverse exists:|$\vec{v} + (-\vec{v}) = \vec{0}$|
|6|Closed under scalar multiply:|$c\vec{v} \in V$|
|7|Distributive over vector addition:|$c(\vec{u} + \vec{v}) = c\vec{u} + c\vec{v}$|
|8|Distributive over scalar addition:|$(c + d)\vec{v} = c\vec{v} + d\vec{v}$|
|9|Scalar multiplication is associative:|$c(d\vec{v}) = (cd)\vec{v}$|
|10|Multiplicative identity:|$1\vec{v} = \vec{v}$|
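These axioms are easy to sanity-check numerically for $\mathbb{R}^2$. Below is a minimal sketch using NumPy (the specific vectors and scalars are arbitrary choices of mine, not from the notes):

```python
import numpy as np

# Arbitrary vectors in R^2 and arbitrary scalars.
u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])
c, d = 4.0, -2.0

w = u + v  # rule 1: the sum is again a vector in R^2
s = c * u  # rule 6: so is a scalar multiple

# A few of the other rules, checked numerically:
assert np.allclose(u + v, v + u)                # addition commutes
assert np.allclose(c * (u + v), c * u + c * v)  # distributes over vector addition
assert np.allclose((c + d) * u, c * u + d * u)  # distributes over scalar addition
assert np.allclose(1 * u, u)                    # multiplicative identity
print(w, s)
```

Of course, a numerical check on one pair of vectors is not a proof, but it's a handy way to build intuition for what each rule is saying.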
First let's properly define a function. If $X$ and $Y$ are sets and we have $x \in X$ and $y \in Y$, we can form a set that consists of ordered pairs $(x, y)$. If it is the case that each $x$ appears in exactly one pair, then this set is a function and we write $f : X \to Y$. This is called being "single valued", i.e., each input to the function maps to an unambiguous output. Note, $f(x)$ is not a function, it is the value returned by the function: $f$ is the function that defines the mapping from elements in $X$ (the domain) to the elements in $Y$ (the range).
One example of a vector space is the set of real-valued functions of a real variable, defined on some domain $D \subseteq \mathbb{R}$. I.e., we're talking about $\{f \mid f : D \to \mathbb{R}\}$ where each $f$ is a function. In other words, the set of all real-valued functions, with the aforementioned restriction on the domain.
We've said that a vector is any element in a vector space, but usually we think of this in terms of, say, $\mathbb{R}^n$. We can also however, given the above, think of functions as vectors: the function-as-a-vector being the "vector" of ordered pairs that make up the mapping between domain and range. So in this case the set of functions is still like a set of vectors and, with scalar multiplication and vector addition defined appropriately (i.e., pointwise), the rules for a vector space still hold!
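To make this concrete, here's a small sketch (my own illustration, not from the lectures) of functions behaving like vectors, with addition and scalar multiplication defined pointwise:

```python
# Functions as "vectors": addition and scalar multiplication are
# defined pointwise, which is what makes the set of real-valued
# functions a vector space.
def add(f, g):
    return lambda x: f(x) + g(x)

def scale(c, f):
    return lambda x: c * f(x)

f = lambda x: x ** 2  # f(x) = x^2
g = lambda x: 3 * x   # g(x) = 3x

h = add(f, g)         # (f + g)(x) = x^2 + 3x
s = scale(2.0, f)     # (2f)(x)    = 2x^2

print(h(2.0))  # 4 + 6 = 10.0
print(s(3.0))  # 2 * 9 = 18.0
```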
What Isn't A Vector Space
It sounded a lot like everything was a vector space to me! So I did a quick bit of googling and found the paper "Things that are/aren't vector spaces". It does a really good job of showing how in maths we might build up a model of something and expect it to behave "normally" (i.e., obey the vector space axioms), when in fact it doesn't, in which case it becomes a lot harder to reason about. This is why vector spaces are nice... we can reason about them using logic and operations we are very familiar with and that feel intuitive.
A linear combination of vectors is one that only uses the addition and scalar multiplication of those vectors. So, given two vectors $\vec{a}$ and $\vec{b}$, expressions like $3\vec{a} + 2\vec{b}$ or $-\vec{a} + \tfrac{1}{2}\vec{b}$ are linear combinations. This can be generalised to say that, for any set of vectors $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}$, a linear combination of those vectors is $c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n$ for scalars $c_i$.
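A linear combination is straightforward to compute. The sketch below (arbitrary vectors and coefficients of my choosing) also shows that it is the same thing as a matrix-vector product where the vectors form the matrix's columns:

```python
import numpy as np

# An arbitrary set of vectors and coefficients.
vectors = [np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.0, 4.0])]
coeffs = [2.0, -1.0, 0.5]

# c1*v1 + c2*v2 + c3*v3, summed term by term.
combo = sum(c * v for c, v in zip(coeffs, vectors))
print(combo)  # [-1.  7.]

# Equivalently, as a matrix-vector product with the vectors as columns.
A = np.column_stack(vectors)
assert np.allclose(A @ np.array(coeffs), combo)
```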
The set of all linear combinations of a set of vectors is called the span of those vectors: $$\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n) = \{c_1\vec{v}_1 + \cdots + c_n\vec{v}_n \mid c_i \in \mathbb{R}\}$$ The span can also be referred to as the generating set of vectors for a subspace.
In the graph to the right there are two vectors $\vec{a}$ and $\vec{b}$. The dotted blue lines show all the possible scalings of the vectors. The vectors span those lines. The green lines show two particular scalings of $\vec{a}$ added to all the possible scalings of $\vec{b}$. One can use this to imagine that if we added all the possible scalings of $\vec{a}$ to all the possible scalings of $\vec{b}$ we would reach every coordinate in $\mathbb{R}^2$. Thus, we can say that the set of vectors $\{\vec{a}, \vec{b}\}$ spans $\mathbb{R}^2$, or $\operatorname{span}(\vec{a}, \vec{b}) = \mathbb{R}^2$.
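We can check this computationally: for two non-co-linear vectors (the values below are my own stand-ins, not the figure's), we can solve for the scalings that reach any chosen point:

```python
import numpy as np

# Two non-co-linear vectors (stand-in values, not the figure's).
a = np.array([2.0, 1.0])
b = np.array([-1.0, 1.0])
target = np.array([5.0, 7.0])  # an arbitrary point we want to reach

# Solve [a b] @ [c1, c2]^T = target for the scalings c1, c2.
A = np.column_stack([a, b])
c = np.linalg.solve(A, target)
print(c)  # [4. 3.] -> 4*a + 3*b lands exactly on the target

assert np.allclose(c[0] * a + c[1] * b, target)
```

Because this solve succeeds for any `target` (the matrix is invertible), the two vectors really do span $\mathbb{R}^2$.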
From this we may be tempted to say that any set of two 2d vectors spans $\mathbb{R}^2$, but we would be wrong. Take for example $\vec{v}_1 = (1, 1)$ and $\vec{v}_2 = (2, 2)$. These two vectors are co-linear. Because they span the same line, no combination of them can ever move off that line. Therefore they do not span $\mathbb{R}^2$.
Let's write the above out a little more thoroughly... The span of two co-linear vectors $\vec{a}$ and $\vec{b} = k\vec{a}$ is, by our above definition: $$\operatorname{span}(\vec{a}, \vec{b}) = \{c_1\vec{a} + c_2 k\vec{a} \mid c_1, c_2 \in \mathbb{R}\}$$ Which we can re-write as: $$\{(c_1 + c_2 k)\vec{a} \mid c_1, c_2 \in \mathbb{R}\}$$ ... which we can clearly see is just the set of all scalings of $\vec{a}$. Plainly, any co-linear set of 2d vectors will not span $\mathbb{R}^2$.
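The collapse is easy to see numerically. Taking $(1, 1)$ and $(2, 2)$ as a concrete co-linear pair, every random combination of them lands on the line $y = x$:

```python
import numpy as np

# A concrete co-linear pair: (2, 2) = 2 * (1, 1).
v1 = np.array([1.0, 1.0])
v2 = np.array([2.0, 2.0])

# Every combination c1*v1 + c2*v2 equals (c1 + 2*c2) * (1, 1),
# so both components are always equal: we never leave the line y = x.
rng = np.random.default_rng(0)
for _ in range(1000):
    c1, c2 = rng.standard_normal(2)
    combo = c1 * v1 + c2 * v2
    assert np.isclose(combo[0], combo[1])
print("every sampled combination lies on the line y = x")
```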
We saw above that a set of co-linear vectors does not span $\mathbb{R}^2$. The reason for this is that they are linearly dependent, which means that one or more vectors in the set are just linear combinations of other vectors in the set.
For example, the set of vectors $\{(2, 3), (7, 2), (9, 5)\}$ spans $\mathbb{R}^2$, but $(9, 5)$ is redundant because $(9, 5) = (2, 3) + (7, 2)$: we can remove it from the set of vectors and still have a set that spans $\mathbb{R}^2$. In other words, it adds no new information or directionality to our set.
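We can verify a redundancy like this directly. Using $(2, 3)$, $(7, 2)$, $(9, 5)$ as a concrete example of a redundant set, solving for the coefficients that express the third vector in terms of the first two gives an exact solution:

```python
import numpy as np

# A concrete redundant set: (9, 5) = (2, 3) + (7, 2).
v1 = np.array([2.0, 3.0])
v2 = np.array([7.0, 2.0])
v3 = np.array([9.0, 5.0])

# Solve [v1 v2] @ [a, b]^T = v3: if a solution exists, v3 is a linear
# combination of v1 and v2 and so adds no new direction.
coeffs = np.linalg.solve(np.column_stack([v1, v2]), v3)
print(coeffs)  # [1. 1.] -> v3 = 1*v1 + 1*v2
assert np.allclose(coeffs[0] * v1 + coeffs[1] * v2, v3)
```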
A set of vectors $\{\vec{v}_1, \ldots, \vec{v}_n\}$ will be said to be linearly dependent when: $$c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n = \vec{0} \quad \text{for some scalars } c_i \text{ not all zero}$$ Thus a set of vectors is linearly independent when the following is met: $$c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n = \vec{0} \implies c_1 = c_2 = \cdots = c_n = 0$$ Sure, this is a definition, but why does a non-trivial solution mean that the set is linearly dependent?
Think of vectors in $\mathbb{R}^2$. For any two vectors $\vec{a}$ and $\vec{b}$ that are not co-linear, there is no combination of those two vectors that can get you back to the origin unless they are both scaled by zero. The image to the left is trying to demonstrate that. If we scale $\vec{a}$ by $c_1$, no matter how small we make that scaling, we cannot find a scaling of $\vec{b}$ that when added to $c_1\vec{a}$ will get us back to the origin. Thus we cannot find a linear combination $c_1\vec{a} + c_2\vec{b} = \vec{0}$ where $c_1$ and/or $c_2$ does not equal zero.
This shows that neither vector is a linear combination of the other, because if it were, we could use a non-zero scaling of one, added to the other, to get us back to the origin.
We can also show that two non-zero vectors are linearly independent when their dot product is zero, i.e., when they are orthogonal. (Note the converse is not true: linearly independent vectors need not be orthogonal.) This is covered in a later section.
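A quick illustration of the direction that does hold (orthogonal non-zero vectors are independent) and of the failed converse (independent vectors need not be orthogonal), using vectors of my own choosing:

```python
import numpy as np

# Orthogonal, non-zero -> linearly independent.
u = np.array([3.0, 4.0])
v = np.array([-4.0, 3.0])
print(np.dot(u, v))  # 0.0: orthogonal
print(np.linalg.matrix_rank(np.column_stack([u, v])))  # 2: independent

# Independent but NOT orthogonal: the converse fails.
p = np.array([1.0, 0.0])
q = np.array([1.0, 1.0])
print(np.dot(p, q))  # 1.0: not orthogonal
print(np.linalg.matrix_rank(np.column_stack([p, q])))  # 2: still independent
```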
Another way of thinking about linear independence (LI) is that there is only one solution to the equation $c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n = \vec{0}$, namely the trivial solution $c_1 = c_2 = \cdots = c_n = 0$.
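Checking whether the only solution is the trivial one is equivalent to checking that the matrix whose columns are the vectors has full column rank. A small helper sketch (the function name is my own):

```python
import numpy as np

def is_linearly_independent(*vectors):
    """True when c1*v1 + ... + cn*vn = 0 only for c1 = ... = cn = 0,
    i.e. when the column matrix of the vectors has full column rank."""
    A = np.column_stack(vectors)
    return bool(np.linalg.matrix_rank(A) == len(vectors))

print(is_linearly_independent(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # True
print(is_linearly_independent(np.array([1.0, 1.0]), np.array([2.0, 2.0])))  # False
```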
If $V$ is a set of vectors in $\mathbb{R}^n$, then $V$ is a subspace of $\mathbb{R}^n$ if and only if:
- $V$ contains the zero vector: $\vec{0} \in V$
- $V$ is closed under scalar multiplication: $\vec{v} \in V, c \in \mathbb{R} \implies c\vec{v} \in V$
- $V$ is closed under addition: $\vec{u} \in V, \vec{v} \in V \implies \vec{u} + \vec{v} \in V$
What these rules mean is that, using linear operations, you can't do anything to "break" out of the subspace using vectors from that subspace. Hence "closed".
Surprising, at least for me, was that the set containing only the zero vector, $\{\vec{0}\}$, is actually a subspace of $\mathbb{R}^n$ because that set obeys all of the above rules.
A very useful fact is that a span is always a valid subspace. Take, for example, $U = \operatorname{span}(\vec{v}_1, \vec{v}_2, \vec{v}_3)$. We can show that it is a valid subspace by checking that it obeys the above 3 rules.
First, does it contain the zero vector? Recall that a span is the set of all linear combinations of the span vectors. So we know that the following linear combination is valid, and hence the span contains the zero vector: $$0\vec{v}_1 + 0\vec{v}_2 + 0\vec{v}_3 = \vec{0}$$ Second, is it closed under scalar multiplication? Because the span is the set of all linear combinations, for any set of scalars