
Linear Transformations will take you on a Trip Comparable to that of Magical Mushroom Sauce, And Perhaps cause More Lasting Damage

Long after I was supposed to “get it”, I finally came to understand matrices by looking at the above pictures. Staring and contemplating. I would come back to them week after week. This one is a stretch; this one is a shear; this one is a rotation. What’s the big F?

The thing is that mathematicians think about transforming an entire space at once. Any particular instance or experience must be of a point, but in order to conceive and prove statements about all varieties and possibilities, mathematicians think about “mappings of the entire possible space of objects”. (This is true in group theory as much as in linear algebra.)

So the change felt by individual ink-spots going from the original-F to the F-image would be the experience of an actual orbit in a dynamical system, of an actual feather blown by a bit of wind, an actual bullet passing through an actual heart, an actual droplet in the Mbezi River pulsing forward with the flow of time. But mathematicians consider the totality of possibilities all at once. That’s what “transforming the space” means.

\begin{pmatrix} a \rightsquigarrow a  & | &  a \rightsquigarrow b  & | &  a \rightsquigarrow c \\ \hline b \rightsquigarrow a  & | &  b \rightsquigarrow b  & | &  b \rightsquigarrow c \\ \hline c \rightsquigarrow a  & | &  c \rightsquigarrow b  & | &  c \rightsquigarrow c   \end{pmatrix}

What do the slots in the matrix mean? Combing from left to right across the rows of numbers often means “from”. Going from top to bottom along the columns often means “to”. This is true in Markov transition matrices for example, and those combing motions correspond with basic matrix multiplication.

So there’s a hint of causation to this matrix business. Rows are the “causes” and columns are the “effects”. Second row, fifth column is the causal contribution of input B to the resulting output E, and so on. But that’s not 100% correct; it’s just a whiff of a hint of a suggestion of a truth.
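To make the combing concrete, here is a minimal sketch in base R with a made-up two-state weather chain (sunny/rainy; the numbers are purely for illustration, not from the post):

# Rows = "from", columns = "to"; each row sums to 1.
P <- matrix(c(0.9, 0.1,    # from sunny: 90% stays sunny, 10% turns rainy
              0.5, 0.5),   # from rainy: 50% turns sunny, 50% stays rainy
            nrow = 2, byrow = TRUE,
            dimnames = list(from = c("sunny", "rainy"),
                            to   = c("sunny", "rainy")))

p0 <- c(sunny = 1, rainy = 0)   # today is definitely sunny
p0 %*% P                        # tomorrow: 0.9 sunny, 0.1 rainy
p0 %*% P %*% P                  # the day after: 0.86 sunny, 0.14 rainy

Combing across a row of P reads off where that state sends its probability; multiplying today’s row of probabilities by P is exactly that combing motion.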

The “domain and image” viewpoint in the pictures above (which come from Flanigan & Kazdan about halfway through) is a truer expression of the matrix concept.

  • [ [1, 0], [0, 1] ] maps the Mona Lisa to itself,
  • [ [.799, −.602], [.602, .799] ] has a determinant of 1 — does not change the amount of paint — and rotates the Mona Lisa by 37° counterclockwise,
  • [ [1, 0], [0, 2] ] stretches the image northward;
  • and so on.
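None of this needs special machinery, either. A minimal base-R sketch, where a handful of made-up points stands in for the ink-spots of the painting:

# A few ink-spots, one point per column: 2 rows (x, y), 4 points.
spots <- matrix(c(0, 0,
                  1, 0,
                  1, 2,
                  0, 2), nrow = 2)

ident   <- matrix(c(1, 0, 0, 1), nrow = 2)               # maps the picture to itself
rotate  <- matrix(c(.799, .602, -.602, .799), nrow = 2)  # ~37° counterclockwise, det ≈ 1
stretch <- matrix(c(1, 0, 0, 2), nrow = 2)               # doubles every northward coordinate

rotate  %*% spots   # every spot rotated at once: "transforming the space"
stretch %*% spots   # every spot pushed northward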

a shear mapping, which is linear

MATRICES IN WORDS

Matrices aren’t* just 2-D blocks of numbers — that’s a 2-array. Matrices are linear transformations. Because “matrix” comes with rules about how the numbers combine (inner product, outer product), a matrix is a verb whereas a 2-array, which can hold any kind of data with any or no rules attached to it, is a noun.

* (NB: Computer languages like R, Java, and SAGE/Python have their own definitions. They usually treat vector == list && matrix == 2-array.)

Linear transformations in 1-D are incredibly restricted. They’re just proportional relationships, like “Buy 1 more carton of eggs and it will cost an extra $2.17. Buy 2 more cartons of eggs and it will cost an extra $4.34. Buy 3 more cartons of eggs and it will cost an extra $6.51….”  Bo-ring.

In scary mathematical runes one writes:

\begin{matrix}  y \propto x  \\   \textit{---or---}  \\  y = \mathrm{const} \cdot x  \end{matrix}

And the property of linearity itself is written:

\begin{matrix} f(a \cdot x + b \cdot y) = a \cdot f(x) + b \cdot f(y) \end{matrix}

Or say: rescaling or adding first, it doesn’t matter which order.
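You can check that order-doesn’t-matter claim numerically with any matrix you like; a minimal sketch with a random 3×3 one:

set.seed(1)
M <- matrix(rnorm(9), nrow = 3)   # any linear map from 3-D to 3-D
x <- rnorm(3);  y <- rnorm(3)
a <- 2.5;       b <- -0.7

lhs <- M %*% (a * x + b * y)           # rescale-and-add first, then transform
rhs <- a * (M %*% x) + b * (M %*% y)   # transform first, then rescale-and-add
all.equal(lhs, rhs)                    # TRUE, up to floating-point wiggle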

 



“ADDING” “THINGS”

The matrix revolution does so much generalisation of this simple concept that it’s hard to imagine you’re still talking about the same thing. First of all, there’s the insight that mathematically abstract vectors, including vectors of generalised numbers, can represent just about anything. Anything that can be “added” together.

the Matrix Revolution ... I couldn't resist

And I put the word “added” in quotes because, as long as you define an operation that’s commutative, associative, and compatible with multiplication-by-a-scalar (scaling distributes over it), you get to call it “addition”! See the mathematical definition of a vector space.

  • The blues scale has a different notion of “addition” than the diatonic scale.
  • Something different happens when you add a spiteful remark to a pleased emotional state than when you add it to an angry emotional state.
  • Modular and noncommutative things can be “added”. Clock time, food recipes, chemicals in a reaction, and all kinds of freaky mathematical fauna fall under these categories.
  • Polynomials, knots, braids, semigroup elements, lattices, dynamical systems, networks, can be “added”. Or was that “multiplied”? Like, whatever.
  • Quantum states (in physics) can be “added”.
  • So “adding” is perhaps too specific a word—all we mean is “a two-input, one-output operation satisfying X, Y, Z”, where X, Y, Z are the properties from your elementary school textbook like identity, associativity, commutativity.

 So your imagination is usually the limiting reagent in defining “addition”.
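Clock time from the list above is small enough to write down. A minimal sketch (the function clock_add is mine, purely for illustration) showing that twelve-hour time has a perfectly good “addition” of its own:

# "Adding" hours on a 12-hour clock face, keeping answers in 1..12.
clock_add <- function(a, b) (a + b - 1) %% 12 + 1

clock_add(7, 8)                                                 # 7 o'clock plus 8 hours is 3 o'clock
clock_add(7, 8) == clock_add(8, 7)                              # commutative
clock_add(clock_add(3, 4), 5) == clock_add(3, clock_add(4, 5))  # associative
clock_add(9, 12)                                                # 12 acts as the identity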


But that’s just vectors. Matrices also add dimensionality. Linear transformations can be from and to any number of dimensions:

  • 1→7
  • 4→3
  • 1671 → 5
  • 18 → 188
  • and X→1 is a special case, the functional. Functionals comprise performance metrics, size measurements, your final grade in a class, statistical moments (kurtosis, skew, variance, mean) and other statistical metrics (Value-at-Risk, median), divergence (not gradient nor curl), risk metrics, the temperature at any point in the room, EBITDA, not function(x) { c( count(x), mean(x), median(x) ) }, and … I’ll do another article on functionals.
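Two of those cases, sketched in base R so the shapes are visible (the particular numbers are made up): a 4→3 map is literally a 3×4 block of numbers, and an X→1 functional is a single row.

# A 4 -> 3 linear map: 3 rows, 4 columns.
A <- matrix(1:12, nrow = 3)
x <- c(1, 0, 2, -1)         # a point in 4-D
A %*% x                     # its image in 3-D

# An X -> 1 functional: the mean of 4 numbers is a 1x4 row.
m <- matrix(rep(1/4, 4), nrow = 1)
m %*% x                     # same answer as mean(x)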

In contemplating these maps from dimensionality to dimensionality, it’s a blessing that the underlying equation is as simple as linear (proportional). When thinking about information leakage, multi-parameter cause & effect, sources & sinks in a many-equation dynamical system, images and preimages and dual spaces; when the objects being linearly transformed are systems of partial differential equations, being able to reduce the issue to mere multi-proportionalities is what makes the problems tractable at all.

So that’s why so much painstaking care is taken in abstract linear algebra to be absolutely precise — so that the applications which rely on compositions or repetitions or atlases or inversions of linear mappings will definitely go through.


 

Why would anyone care to learn matrices?

Understanding of matrices is the key difference between those who “get” higher maths and those who don’t. I’ve seen many grad students and professors reading up on linear algebra because they need it to understand some deep papers in their field. 

  • Linear transformations can be stitched together to create manifolds.
  • If you add Fourier | harmonic | spectral techniques + linear algebra, you get really trippy — yet informative — views on things. Like spectral mesh compressions of ponies.
  • The “linear basis” and “linear combination” metaphors extend far. For example, to eigenfaces or When Doves Cry Inside a Convex Hull.
  • You can’t understand slack vectors or optimisation without matrices.
  • JPEG, discrete wavelet transform, and video compression rely on linear algebra.
  • A matrix characterises a graph or flows on a graph. So that’s Facebook friends, water networks, internet traffic, ecosystems, Ising magnetism, Wassily Leontief’s vision of the economy, herd behaviour, network-effects in sales (“going viral”), and much, much more that you can understand — after you get over the matrix bar.
  • The expectation operator of statistics (“average”) is linear.
  • Dropping a variable from your statistical analysis is linear. Mathematicians call it “projection onto a lower-dimensional space” (second-to-last example at top).
  • Taking-the-derivative is linear. (The differential, a linear approximation of a could-be-nonlinear function, is the noun that results from doing the take-the-derivative verb.) 
  • The composition of two linear functions is linear. The sum of two linear functions is linear. From these it follows that long differential equations—consisting of chains of “zoom-in-to-infinity” (via “take-the-derivative”) and “do-a-proportional-transformation-there” then “zoom-back-out” … long, long chains of this—can amount in total to no more than a linear transformation.
  • If you line up several linear transformations with the proper homes and targets, you can make hard problems easy and impossible problems tractable. The more “advanced-mathematics” the space you’re considering, the more things become linear transformations.
  • That’s why linear operators are used in both quantum mechanical theory and practical things like building helicopters.
  • You can understand dynamical systems, attractors, and thereby understand love better through matrices.
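Two of the bullets above are small enough to write out. “Taking-the-derivative is linear” and “dropping a variable is projection” both literally become blocks of numbers; a minimal sketch:

# Differentiate cubics a + b*x + c*x^2 + d*x^3, stored as coefficient vectors (a, b, c, d).
D <- matrix(c(0, 1, 0, 0,
              0, 0, 2, 0,
              0, 0, 0, 3), nrow = 3, byrow = TRUE)
p <- c(5, -1, 4, 2)         # 5 - x + 4x^2 + 2x^3
D %*% p                     # (-1, 8, 6): the derivative -1 + 8x + 6x^2

# Dropping the third variable = projecting (x, y, z) down to (x, y).
P <- matrix(c(1, 0, 0,
              0, 1, 0), nrow = 2, byrow = TRUE)
P %*% c(3, 7, 99)           # (3, 7): the z-coordinate is gone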










The eigenvectors of a matrix summarise what it does.

  1. Think about a large, not-sparse matrix. A lot of computations are implied in that block of numbers. Some of those computations might overlap each other—2 steps forward, 1 step back, 3 steps left, 4 steps right … that kind of thing, but in 400 dimensions. The eigenvectors aim at the end result of it all.
     
  2. The eigenvectors lie along the same line before & after the linear transformation is applied. (& they are the only vectors that do so)

    For example, consider a shear of three-elevenths to the east per northward block, repeatedly applied to ℝ².

    [image: the shear applied to ℝ²; the blue arrow along (1 0) stays put while the red arrow is pushed over]
    In the above, eig_1 = \vec{blue} = \vec{(1  0)} and λ₁ = 1. (The red arrow is not an eigenvector because it shifted over.)

  3. The eigenvalues say how their eigenvectors scale during the transformation, and if they turn around.

    If λᵢ = 1.3 then |eigᵢ| grows by 30%.
    If λᵢ = −2 then eigᵢ doubles in length and points backwards. If λᵢ = 1 then |eigᵢ| stays the same. And so on. Above, λ₁ = 1 since eig_1 = \vec{blue} = \vec{(1  0)} stayed the same length.

    It’s nice to add that \det A = \prod_i \lambda_i and \operatorname{tr} A = \sum_i \lambda_i.
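You can check the “same line, scaled by λ” behaviour numerically. A minimal sketch with a made-up symmetric matrix (not the shear above, which only has the one eigendirection):

A <- matrix(c(2, 1,
              1, 2), nrow = 2, byrow = TRUE)
e <- eigen(A)
e$values            # 3 and 1
v <- e$vectors[, 1] # the eigenvector for lambda = 3, proportional to (1, 1)

A %*% v             # lands back on the same line ...
3 * v               # ... stretched by lambda_1 = 3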

For a long time I wrongly thought an eigenvector was, like, its own thing. But it’s not. Eigenvectors are a way of talking about a (linear) transform / operator. So eigenvectors are always the eigenvectors of some transform. Not their own thing.

Put another way: eigenvectors and eigenvalues are a short, universally comparable way of summarising a square matrix. Looking at just the eigenvalues (the spectrum) tells you more relevant detail about the matrix, faster, than trying to understand the entire block-of-numbers and how the parts of the block interrelate. Looking at the eigenvectors tells you where repeated applications of the transform will “leak” (if they leak at all).

To recap: the directions of the eigenvectors are unaffected by the matrix transform; they simplify the matrix transform; and the λ’s tell you how much the |eig|’s change under the transform.

Now a payoff.

Dynamical Systems make sense now.

If repeated applications of a matrix = a dynamical system, then the eigenvalues explain the system’s long-term behaviour.


I.e., they tell you whether and how the system stabilises, or … doesn’t stabilise.

Dynamical systems model interrelated systems like ecosystems, human relationships, or weather. They also unravel mutual causation.
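A minimal sketch of that payoff, with a made-up 2×2 system (nothing to do with any particular ecosystem or relationship): iterate the matrix and watch the state line up with the dominant eigenvector.

A <- matrix(c(0.9, 0.2,
              0.1, 0.8), nrow = 2, byrow = TRUE)
x <- c(1, 0)                     # some starting state
for (t in 1:50) x <- A %*% x     # run the system forward 50 steps

eigen(A)$values                  # 1.0 and 0.7: the 0.7-direction dies out
x / sqrt(sum(x^2))               # the long-run state ...
eigen(A)$vectors[, 1]            # ... lies along the dominant eigenvector (up to sign)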

What else can I do with eigenvectors?

Eigenvectors can help you understand:

  • helicopter stability
  • quantum particles (the Von Neumann formalism)
  • guided missiles
  • PageRank
  • the Fibonacci sequence (sketched just after this list)
  • your Facebook friend network
  • eigenfaces
  • lots of academic crap
  • graph theory
  • mathematical models of love
  • electrical circuits
  • JPEG compression
  • Markov processes
  • operators & spectra
  • weather
  • fluid dynamics
  • systems of ODE’s … well, they’re just continuous-time dynamical systems
  • principal components analysis in statistics
  • for example, principal components (eigenvectors of the correlation matrix, after a varimax rotation) were used to try to identify the dimensions of brand personality
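The Fibonacci item is small enough to do right here. The recurrence is a repeated 2×2 linear map, and its eigenvalues are the golden ratio and its conjugate, which is why Fibonacci numbers grow like φⁿ:

# Sends (F_n, F_{n-1}) to (F_{n+1}, F_n).
A <- matrix(c(1, 1,
              1, 0), nrow = 2, byrow = TRUE)

x <- c(1, 0)                  # (F_1, F_0)
for (n in 1:10) x <- A %*% x
x[1]                          # F_11 = 89

eigen(A)$values               # 1.618... and -0.618...: the golden ratio and its conjugate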

Plus, maybe you will have a cool idea or see something in your life differently if you understand eigenvectors intuitively.




No, really. The solutions of the Schrödinger Equation are harmonics, just like musical notes.

Quantum state of an electron in a hydrogen atom with n=6, l=4, and m=1 (spin doesn’t matter):
|6,4,1> Orbital Animation

This is an equal superposition of the |3,2,1> and |3,1,-1> eigenstates:
|3,2,1>+|3,1,-1> Orbital Animation

This is an equal superposition of the |3,2,2> and |3,1,-1> eigenstates:
|3,2,2>+|3,1,-1> Orbital Animation

This is an equal superposition of the |4,3,3> and |4,1,0> eigenstates:
|4,1,0>+|4,3,3> Orbital Animation


What is an eigenstate? It’s a convenient state to use as a basis. We get to decide which quantum states are “pure” and which “mixed”. There’s an easy way and a hard way; the easy way is to use eigenstates as the pure states.

More mathematically: the Schrödinger equation tells us what’s going on in an atom. The answers to the Schrödinger equation are complex and hard to compare. But phrasing the answers as combinations of eigenfunctions makes them comparable. One atom is 30% state A, 15% state B, 22% state C … and another atom is different percentages of the same states.

Just like vectors in 3-D space, where you can orient the axes differently — you can pick different directions for x, y, and z to point in. But now the vectors are abstract, representing states. Still addable so still vectors. Convex or linear combinations of those “pure” states describe the “mixed” states.
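Here’s a toy version of that “percentages of the same states” idea, with a small symmetric matrix standing in for the operator (it is nothing like the actual hydrogen Hamiltonian; the numbers are only there to make the decomposition visible):

H <- matrix(c(2, 1, 0,
              1, 2, 1,
              0, 1, 2), nrow = 3, byrow = TRUE)   # a made-up symmetric stand-in
basis <- eigen(H)$vectors                          # its eigenstates, one per column

psi <- c(1, 2, 2);  psi <- psi / sqrt(sum(psi^2))  # some normalised state
coef <- t(basis) %*% psi                           # coordinates of psi in the eigenbasis
round(coef^2, 2)                                   # the "30% state A, 15% state B, ..." numbers
sum(coef^2)                                        # and they sum to 1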



SOURCE: Atom in a Box