Posts tagged with **eigenvector**

The eigenvectors of a matrix **summarise** what it does.

- Think about a large, not-sparse matrix. A lot of computations are implied in that block of numbers. Some of those computations might overlap each other—2 steps forward, 1 step back, 3 steps left, 4 steps right … that kind of thing, but in 400 dimensions.
**The eigenvectors aim at the end result** of it all.

**The eigenvectors point in the same direction before & after** a linear transformation is applied. *(& they are the only vectors that do so)*

For example, consider a **shear** repeatedly applied to ℝ².

In the above, the arrows that keep their direction are eigenvectors. (The red arrow is not an eigenvector because it shifted over.) **The eigenvalues** say **how** their eigenvectors **scale** during the transformation, and if they turn around.

If **λᵢ = 1.3** then |**eig**ᵢ| grows by **30%**. If **λᵢ = −2** then **eig**ᵢ doubles in length and points backwards. If **λᵢ = 1** then |**eig**ᵢ| stays the same. And so on. Above, **λ₁ = 1** since **eig**₁ stayed the same length.

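The scaling rule above can be checked numerically. A minimal sketch with NumPy, using the shear from the example and a made-up diagonal matrix whose eigenvalue along [1, 0] is 1.3:

```python
import numpy as np

# The shear from the example above; [1, 0] is an eigenvector with λ = 1,
# so shearing leaves it completely alone
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([1.0, 0.0])
print(A @ v)                  # unchanged: same direction, same length

# A hypothetical matrix with eigenvalue 1.3 along [1, 0]:
# that eigenvector grows by 30% each time the matrix is applied
B = np.diag([1.3, 0.5])
w = np.array([1.0, 0.0])
ratio = np.linalg.norm(B @ w) / np.linalg.norm(w)
print(ratio)                  # 1.3
```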

For a long time I wrongly thought an eigenvector was, like, its own thing. But it’s not. Eigenvectors are a way of talking about a (linear) transform / operator. So eigenvectors are always *the eigenvectors of* some transform. Not their own thing.

Put another way: eigenvectors and eigenvalues are a short, universally comparable way of summarising a square matrix. Looking at just the eigenvalues (the spectrum) tells you more relevant detail about the matrix, faster, than trying to understand the entire block-of-numbers and how the parts of the block interrelate. Looking at the eigenvectors tells you where repeated applications of the transform will “leak” (if they leak at all).

To recap: eigenvectors keep their direction under the matrix transform; they simplify the matrix transform; and the **λ**'s tell you how much the |**eig**|’s change under the transform.

Now a payoff.

### Dynamical Systems make sense now.

If repeated applications of a matrix = a dynamical system, then the eigenvalues explain the system’s long-term behaviour.

I.e., they tell you whether and how the system stabilises, or … doesn’t stabilise.
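A minimal sketch of that claim, with a made-up 2×2 system: since every eigenvalue has absolute value below 1, every trajectory decays to zero, no matter the starting point.

```python
import numpy as np

# x_{t+1} = A @ x_t is a discrete-time dynamical system.
# Its long-term behaviour is read off the eigenvalues:
# all |λ| < 1 → every trajectory decays to 0;
# any |λ| > 1 → trajectories blow up along that eigenvector.
A = np.array([[0.6, 0.2],
              [0.2, 0.6]])        # eigenvalues 0.8 and 0.4 → stable
print(np.linalg.eigvals(A))

x = np.array([1.0, -3.0])         # arbitrary starting state
for _ in range(100):
    x = A @ x
print(np.linalg.norm(x))          # tiny: the system has stabilised at 0
```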

Dynamical systems model interrelated systems like ecosystems, human relationships, or weather. They also unravel mutual causation.

### What else can I do with eigenvectors?

Eigenvectors can help you understand:

- helicopter stability
- quantum particles (the Von Neumann formalism)
- guided missiles
- PageRank
- the Fibonacci sequence
- your Facebook friend network
- eigenfaces
- lots of academic crap
- graph theory
- mathematical models of love
- electrical circuits
- JPEG compression
- Markov processes
- operators & spectra
- weather
- fluid dynamics
- systems of ODEs … well, they’re just continuous-time dynamical systems
- principal components analysis in statistics
  - for example, principal components (eigenvalues after varimax rotation of the correlation matrix) were used to try to identify the dimensions of brand personality
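One item from the list above, sketched: the Fibonacci sequence is itself a dynamical system, and its growth rate is the dominant eigenvalue of the step matrix, which is the golden ratio φ ≈ 1.618.

```python
import numpy as np

# Fibonacci as a dynamical system: [F_{n+1}, F_n] = A @ [F_n, F_{n-1}].
# A's dominant eigenvalue is the golden ratio φ, so consecutive
# Fibonacci numbers grow by a factor of φ in the long run.
A = np.array([[1, 1],
              [1, 0]])
phi = (1 + 5 ** 0.5) / 2
print(max(np.linalg.eigvals(A)))  # ≈ 1.618… = φ

v = np.array([1, 1])              # [F_2, F_1]
for _ in range(30):
    v = A @ v                     # march the sequence forward
print(v[0] / v[1])                # ratio of consecutive Fibonaccis → φ
```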

Plus, maybe you will have a cool idea or see something in your life differently if you understand eigenvectors intuitively.

No, really. The solutions of the Schrödinger Equation are harmonics, just like musical notes.

**Quantum state** of an electron orbiting a hydrogen atom where **n=6**, **l=4**, and **m=1** (spin doesn’t matter):

This is an equal superposition of the **|3,2,1>** and **|3,1,-1>** eigenstates:

This is an equal superposition of the **|3,2,2>** and **|3,1,-1>** eigenstates:

This is an equal superposition of the **|4,3,3>** and **|4,1,0>** eigenstates:

What is an **eigenstate**? It’s a convenient state to use as a basis. We get to decide which quantum states are “pure” and which “mixed”. There’s an easy way and a hard way; the easy way is to use eigenstates as the pure states.

More mathematically: the Schrödinger equation tells us what’s going on in an atom. The answers to the Schrödinger equation are complex and hard to compare. But phrasing the answers as combinations of eigenfunctions makes them comparable. One atom is 30% state *A*, 15% state *B*, 22% state *C* … and another atom is different percentages of the same states.

Just like vectors in 3-D space, where you can orient the axes differently — you can pick different directions for x, y, and z to point in. But now the vectors are abstract, representing states. Still addable so still vectors. Convex or linear combinations of those “pure” states describe the “mixed” states.
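A minimal numerical sketch of that idea, using a made-up 3×3 symmetric matrix as a stand-in for an observable: take its orthonormal eigenvectors as the “pure” states, and read off what fraction of each pure state a given state is.

```python
import numpy as np

# A hypothetical Hermitian operator (symmetric, so eigh applies)
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(H)   # columns = orthonormal eigenstates

state = np.array([1.0, 0.0, 0.0])      # some normalised "mixed" state
coeffs = eigvecs.T @ state             # coordinates in the eigenbasis
weights = coeffs ** 2                  # "x% state A, y% state B, z% state C"
print(weights, weights.sum())          # fractions sum to 1
```

Because the eigenbasis is orthonormal, two different states decomposed this way become directly comparable, percentage by percentage.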

Related:

- eigenvectors
- eigenfaces
- eigenstates
- eigenbasis
- eigenfunctions
- eigenmodes
- eigendirections
- eigengraphs, and
- eigencombinations.

**SOURCE:** Atom in a Box