Mathematics for Machine Learning - Day 15
Matrix Representation of Linear Mappings
Congratulations! Now you can't just write a basis without worrying whether it is ordered or not. You've reached a stage where things are getting more and more technical. From now on:
{% katex %}
\mathscr{B} = \{b_1, \dots, b_n\} \text{ is an unordered basis (a set)} \\
\text{and} \\
B = (b_1, \dots, b_n) \text{ is an ordered basis (a tuple)}
{% endkatex %}
For the topics discussed later on, it will matter whether the basis vectors are ordered or not.
Coordinates
Consider a vector space V and an ordered basis
{% katex %}
B = (b_1, \dots, b_n) \text{ of } V \\
\text{For any } x \in V
{% endkatex %}
We obtain a unique representation (linear combination)
{% katex %} x = \alpha_1 b_1 + \dots + \alpha_n b_n {% endkatex %}
of x with respect to B. Then:
{% katex %}
\alpha_1, \dots, \alpha_n \text{ are the coordinates of } x \text{ with respect to } B, \\
\text{and the vector } \left[\begin{array}{c}
\alpha_1 \\
\vdots \\
\alpha_n
\end{array}\right] \in \reals^n
{% endkatex %}
is the coordinate vector / representation of x with respect to the ordered basis B.
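To make this definition concrete, here's a minimal sketch (my own illustration, not from the book) of how to compute a coordinate vector numerically. The helper name `coordinates` is hypothetical; the idea is simply that stacking the basis vectors as columns of a matrix B turns the defining equation into the linear system B·α = x:

```python
import numpy as np

def coordinates(basis, x):
    """Return the coordinate vector of x w.r.t. an ordered basis.

    `basis` is a list of n linearly independent vectors in R^n.
    (Hypothetical helper for illustration, not from the book.)
    """
    B = np.column_stack(basis)     # basis vectors become the columns of B
    return np.linalg.solve(B, x)   # unique solution because B is invertible

# With the standard basis of R^2, the coordinates are just x itself.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(coordinates([e1, e2], np.array([2.0, 3.0])))  # [2. 3.]
```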
Coordinate vector?
Yup, a basis effectively defines a coordinate system, much like the Cartesian system.
In the Cartesian system, for instance, a vector
{% katex %} x \in \reals^2 {% endkatex %}
has a representation that tells us how to linearly combine
{% katex %} e_1 \text{ and } e_2 {% endkatex %}
to obtain x.
What's the difference?
Yeah, the Cartesian system is also a coordinate system, one built from two standard basis vectors that span the two-dimensional plane. But there's a twist: any basis of the vector space defines an equally valid coordinate representation.
Example:
{% katex %}
\text{A geometric vector } x \in \reals^2 \text{ with coordinates } \left[\begin{array}{c}
2 \\
3
\end{array}\right] \\
\text{with respect to the standard basis } (e_1, e_2) \text{ of } \reals^2
{% endkatex %}
This means we can write it as:
{% katex %} x = 2e_1 + 3e_2 {% endkatex %}
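To see that concretely, a tiny numpy check (my own illustration, not from the book):

```python
import numpy as np

e1, e2 = np.array([1, 0]), np.array([0, 1])
x = 2 * e1 + 3 * e2
print(x)  # [2 3] -- with the standard basis, the coordinates are x's entries
```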
However, we don't have to use the standard basis! Be creative; for example, we can use:
{% katex %}
b_1 = \left[\begin{array}{c}
1 \\
-1
\end{array}\right] \text{ and } b_2 = \left[\begin{array}{c}
1 \\
1
\end{array}\right] \\
\text{This will obtain the coordinates } \frac{1}{2} \left[\begin{array}{c}
-1 \\
5
\end{array}\right]
{% endkatex %}
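We can sanity-check that claim numerically. A minimal sketch, assuming numpy is available: stack b_1 and b_2 as columns and solve for the coefficients, just like before.

```python
import numpy as np

b1, b2 = np.array([1.0, -1.0]), np.array([1.0, 1.0])
B = np.column_stack([b1, b2])                    # [[ 1.,  1.], [-1.,  1.]]
alpha = np.linalg.solve(B, np.array([2.0, 3.0]))
print(alpha)                                     # [-0.5  2.5] == (1/2) * [-1, 5]

# Sanity check: recombining the basis vectors reproduces the original x.
print(alpha[0] * b1 + alpha[1] * b2)             # [2. 3.]
```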
Proof
What? How does flipping one of the values to a minus affect it that much? Then let me turn it into something that's easier to digest. We are looking for coefficients a and b such that
{% katex %} a b_1 + b b_2 = x = \left[\begin{array}{c} 2 \\ 3 \end{array}\right] {% endkatex %}
which, read row by row, gives the system:
{% katex %}
1a + 1b = 2 \\
-1a + 1b = 3
{% endkatex %}
Then we add the two equations together to eliminate a and find b:
{% katex %}
2b = 5 \\
\therefore b = \frac{5}{2} \\
\text{Substituting } b \text{ back into either of the two equations will obtain } a = -\frac{1}{2}
{% endkatex %}
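The same elimination can be double-checked symbolically; here's a small sketch assuming sympy is installed (my own check, not the book's):

```python
from sympy import Rational, solve, symbols

a, b = symbols("a b")
# The two equations above, written as expressions equal to zero.
solution = solve([a + b - 2, -a + b - 3], [a, b])
print(solution)  # {a: -1/2, b: 5/2}
assert solution[a] == Rational(-1, 2)
assert solution[b] == Rational(5, 2)
```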
Then, to get the author's version, we just pull the half out front!
{% katex %}
\left[\begin{array}{c}
-\frac{1}{2} \\
\frac{5}{2}
\end{array}\right] = \frac{1}{2}\left[\begin{array}{c}
-1 \\
5
\end{array}\right]
{% endkatex %}
See! It's the same value, just from a different perspective.
Acknowledgement
I can't overstate this: I'm truly grateful that this book was open-sourced for everyone. Many people will be able to learn and understand machine learning on a fundamental level. Whether changing careers, demystifying AI, or just learning in general, this book offers immense value, even for a fledgling composer such as myself. So, Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong, thank you for this book.
Sources:
Axler, S. (2015). Linear Algebra Done Right. Springer.
Deisenroth, M. P., Faisal, A. A., & Ong, C. S. (2020). Mathematics for Machine Learning. Cambridge University Press. https://mml-book.com