Mathematics for Machine Learning - Day 13

Pour Terra · 20 July, 2024

[Image: isomorphic vector meme]

Linear Mappings

Consider

$$
V, W \text{ are vector spaces over } \mathbb{R} \\
\text{A mapping } \Phi : V \to W \text{ preserves the vector space}
$$

If

$$
\Phi(x + y) = \Phi(x) + \Phi(y) \\
\Phi(\lambda x) = \lambda \Phi(x) \\
\forall x, y \in V \text{ and } \lambda \in \mathbb{R}
$$

What is a mapping? And why did you start with a formula?

Good question. It's because you might recognize this as the property of distributivity, just with a different symbol. And that's it!

It preserves the vector while still being able to scale it. And don't forget, with mathematical notation Φ can be anything so long as it doesn't violate the stated rules: a scalar, a vector, a matrix, even a set!... Actually, it can't be a set; I'd get slapped by a mathematician for trying to map a vector with a set.
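Here's a quick NumPy sketch (my own example, not from the book): taking Φ to be multiplication by a fixed matrix `A`, both properties hold numerically.

```python
import numpy as np

# Sketch: Phi(v) = A @ v, a map from R^3 to R^2 given by a fixed matrix A.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))

def phi(v):
    return A @ v

x = rng.standard_normal(3)
y = rng.standard_normal(3)
lam = 2.5

# Phi(x + y) == Phi(x) + Phi(y)
assert np.allclose(phi(x + y), phi(x) + phi(y))
# Phi(lambda * x) == lambda * Phi(x)
assert np.allclose(phi(lam * x), lam * phi(x))
```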

What use is a mapping?

I don't know :D but much like `map` in pandas (it's a Python library), a mapping can be an important tool, and I see similarities with this type of mapping; see the sketch below.
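This is only a loose analogy (my own toy example, nothing from the book): pandas' `Series.map` applies a function to every element, which is element-wise mapping rather than a vector space mapping.

```python
import pandas as pd

# pandas' map applies a function to each element of a Series.
s = pd.Series([1, 2, 3])
doubled = s.map(lambda v: 2 * v)
print(doubled.tolist())  # [2, 4, 6]
```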

Also, the mapping itself isn't the main topic, it's:

Linear mapping

A mapping is called a linear mapping if:

$$
\forall x, y \in V, \forall \lambda, \psi \in \mathbb{R} : \\
\Phi(\lambda x + \psi y) = \lambda \Phi(x) + \psi \Phi(y)
$$

That means a mapping is linear when mapping a linear combination of vectors gives the same result as taking that same linear combination of the mapped vectors. A linear mapping can also be called a linear transformation / vector space homomorphism.
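A tiny sanity check of the combined condition, again assuming Φ is multiplication by a matrix (a sketch, not the book's code):

```python
import numpy as np

# Sketch: check Phi(lam*x + psi*y) == lam*Phi(x) + psi*Phi(y) for a matrix map.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
phi = lambda v: A @ v

x, y = rng.standard_normal(3), rng.standard_normal(3)
lam, psi = -1.5, 0.75

assert np.allclose(phi(lam * x + psi * y), lam * phi(x) + psi * phi(y))
```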

Consider a mapping

$$
\text{A mapping } \Phi : V \to W \\
\text{where } V, W \text{ can be arbitrary sets}
$$

Φ will be called differently depending on which conditions it satisfies:

Injective

$$
\forall x, y \in V : \Phi(x) = \Phi(y) \implies x = y
$$

Surjective

$$
\Phi(V) = W
$$

This means that every element in W can be reached from V using Φ.

Bijective

$$
\Psi \circ \Phi(x) = x
$$

A bijective mapping fulfills both injective and surjective. It can be undone by applying the inverse of the previous mapping:

$$
\Psi : W \to V \\
\text{where } \Psi = \Phi^{-1}
$$
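Here's a small Python sketch of all three conditions with finite sets (my own toy example; the definitions above allow V and W to be arbitrary sets):

```python
# A bijection between two small finite sets, written as a dict.
V = {0, 1, 2}
W = {'a', 'b', 'c'}
phi = {0: 'a', 1: 'b', 2: 'c'}

injective = len(set(phi.values())) == len(phi)  # distinct inputs, distinct outputs
surjective = set(phi.values()) == W             # Phi(V) = W
bijective = injective and surjective

# The inverse mapping Psi = Phi^{-1} undoes Phi: Psi(Phi(x)) = x.
psi = {w: v for v, w in phi.items()}
assert bijective and all(psi[phi[v]] == v for v in V)
```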

Special cases of linear mapping

1. Isomorphism: $\Phi : V \to W$, linear and bijective
2. Endomorphism: $\Phi : V \to V$, linear
3. Automorphism: $\Phi : V \to V$, linear and bijective

Example

$$
\Phi : \mathbb{R}^2 \to \mathbb{C}, \quad \Phi(x) = x_1 + i x_2
$$

$$
\Phi \left( \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \right) = (x_1 + y_1) + i (x_2 + y_2) \\
= x_1 + i x_2 + y_1 + i y_2 = \Phi \left( \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \right) + \Phi \left( \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \right)
$$

On the other hand:

$$
\Phi \left( \lambda \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \right) = \lambda x_1 + i \lambda x_2 \\
= \lambda (x_1 + i x_2) = \lambda \Phi \left( \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \right)
$$
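The same computation done numerically (a sketch of the example above):

```python
import numpy as np

# Phi maps a vector in R^2 to the complex number x1 + i*x2.
def phi(v):
    return v[0] + 1j * v[1]

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])
lam = 4.0

assert np.isclose(phi(x + y), phi(x) + phi(y))  # additivity
assert np.isclose(phi(lam * x), lam * phi(x))   # homogeneity
```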

Finite-dimensional vector spaces

(From theorem 3.59 in Axler, 2015)

Finite-dimensional vector spaces V and W are isomorphic if and only if dim(V) = dim(W)

Intuition

This means that V and W are kind of the same thing, since they can be transformed into one another without incurring any loss.
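One concrete sketch of this (my own example): reshaping a 2×2 matrix into a length-4 vector is a linear bijection, so $\mathbb{R}^{2 \times 2}$ and $\mathbb{R}^4$ are isomorphic; both have dimension 4.

```python
import numpy as np

# Reshaping is a linear bijection between R^{2x2} and R^4: no loss either way.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

v = M.reshape(4)          # Phi: R^{2x2} -> R^4
M_back = v.reshape(2, 2)  # Phi^{-1}: R^4 -> R^{2x2}
assert np.array_equal(M, M_back)
```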

Consider the vector spaces V, W, X

For linear mappings:

$$
\Phi : V \to W \text{ and } \Psi : W \to X
$$

the composition is also a linear mapping:

$$
\Psi \circ \Phi : V \to X
$$
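If Φ and Ψ are given by matrices A and B (an assumption for the sketch), the composition is just the matrix product B A:

```python
import numpy as np

# Psi(Phi(v)) equals (B @ A) @ v -- itself a linear map from R^4 to R^2.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))  # Phi: R^4 -> R^3
B = rng.standard_normal((2, 3))  # Psi: R^3 -> R^2

v = rng.standard_normal(4)
assert np.allclose(B @ (A @ v), (B @ A) @ v)
```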

For isomorphism

If:

$$
\Phi : V \to W \text{ is an isomorphism}
$$

Then:

$$
\Phi^{-1} : W \to V \text{ is an isomorphism too}
$$
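A small sketch with an invertible matrix (my own example): the inverse matrix gives Φ⁻¹, and each map undoes the other.

```python
import numpy as np

# Phi(v) = A @ v is an isomorphism of R^2 when A is invertible.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])  # det = 1, so invertible
A_inv = np.linalg.inv(A)

v = np.array([3.0, -2.0])
assert np.allclose(A_inv @ (A @ v), v)  # Phi^{-1}(Phi(v)) = v
assert np.allclose(A @ (A_inv @ v), v)  # Phi(Phi^{-1}(v)) = v
```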

For linear mapping (2)

If:

$$
\Phi : V \to W, \quad \Psi : V \to W \text{ are linear}
$$

Then:

$$
\Phi + \Psi \text{ and } \lambda \Phi, \lambda \in \mathbb{R} \text{ are linear too}
$$
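One last sketch, again assuming matrix maps: Φ + Ψ corresponds to A + B and λΦ to λA, both matrices again, hence linear.

```python
import numpy as np

# (Phi + Psi)(x) = A @ x + B @ x = (A + B) @ x, and (lam*Phi)(x) = (lam*A) @ x.
rng = np.random.default_rng(3)
A, B = rng.standard_normal((2, 3)), rng.standard_normal((2, 3))
x = rng.standard_normal(3)
lam = 0.5

assert np.allclose((A + B) @ x, A @ x + B @ x)
assert np.allclose((lam * A) @ x, lam * (A @ x))
```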

Acknowledgement

I can't overstate this: I'm truly grateful that this book is open-sourced for everyone. Many people will be able to learn and understand machine learning on a fundamental level. Whether changing careers, demystifying AI, or just learning in general, this book offers immense value, even for a fledgling learner such as myself. So, Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong, thank you for this book.

Sources:

Axler, S. (2015). Linear Algebra Done Right. Springer.
Deisenroth, M. P., Faisal, A. A., & Ong, C. S. (2020). Mathematics for Machine Learning. Cambridge University Press. https://mml-book.com

Made with TERRA