Posts tagged with differential equations

a smooth field of 1-vectors in 3-D

(Source: thievess)

A beautiful depiction of a 1-form by Robert Ghrist. You never thought understanding a 1→1-dimensional ODE (or a 1-D vector field) would be so easy!

What his drawing makes obvious is that pictures of Phase Space carry a totally different meaning than “up”, “down”, “left”, “right”. In this case up = more; down = less; left = before; and right = after. So it’s unhelpful to think about derivative = slope.

BTW, the reason that ƒ must have an odd number of fixed points follows from the “dissipative” assumption (“infinity repels”). If ƒ(−∞)→+∞, then the red line enters from the top-left. And if ƒ(+∞)→−∞, then the red line exits toward the bottom-right. So no matter how many wiggles, it must cross the axis an odd number of times. (Intermediate value theorem, from undergrad calculus / analysis.)
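The crossing-count argument can be checked numerically. Here's a minimal sketch with a made-up dissipative field ƒ(x) = x − x³ (my choice of example, not Ghrist's): since ƒ is positive far to the left and negative far to the right, the number of sign changes on a fine grid must be odd.

```python
# A hypothetical dissipative vector field: f(-inf) -> +inf, f(+inf) -> -inf.
def f(x):
    return x - x**3   # fixed points of x' = f(x) at -1, 0, +1

# Count sign changes of f on a grid; each one is a crossing the red curve
# must make, by the intermediate value theorem.  (Grid offset by 0.005 so
# no sample lands exactly on a zero.)
xs = [-5.0 + 0.005 + i * 0.01 for i in range(1000)]
crossings = sum(1 for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0)
print(crossings)  # 3 -- odd, as the dissipative assumption forces
```

Any dissipative ƒ you swap in (with zeros resolved by the grid) will give an odd count, however many wiggles it has.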

Found this via John D Cook.

(Source: math.upenn.edu)

### Proof that differential equations are real.

The shapes the salt is taking at different pitches are combinations of eigenfunctions of the Laplace operator.

(The Laplace operator $\Delta f = \sum_{i=1}^n {\partial^2 f \over \partial x_i^2}$ tells you the flux density of the gradient flow of a many-to-one function ƒ. As eigenvectors summarise a matrix operator, so do eigenfunctions summarise this differential operator.)
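The matrix/operator analogy can be made concrete: discretise the Laplace operator and its eigenvectors literally become sampled eigenfunctions. A minimal 1-D sketch (the vibrating plates in the video are the 2-D analogue), using NumPy:

```python
import numpy as np

# Discrete Laplacian on (0,1) with n interior points, Dirichlet boundary.
n = 100
h = 1.0 / (n + 1)
L = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2

# Eigenvectors of this matrix are sampled eigenfunctions sin(k*pi*x);
# eigh returns eigenvalues ascending, so the smallest |eigenvalue| modes
# sit at the end.
eigvals, eigvecs = np.linalg.eigh(L)
x = np.linspace(h, 1 - h, n)
mode = eigvecs[:, -3]                 # the k = 3 mode
exact = np.sin(3 * np.pi * x)
exact /= np.linalg.norm(exact)
# They agree up to sign:
err = min(np.linalg.norm(mode - exact), np.linalg.norm(mode + exact))
print(err)  # tiny
```

The zero sets of those eigenfunctions are the nodal lines, which is exactly where the salt collects at the matching pitch.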

Remember that sound is compression waves — air vibrating back and forth — so that pressure can push the salt (or is it sand?) around just like wind blows sand in the desert.

Notice the similarity to solutions of the Schrödinger PDE for the hydrogen atom.

When the universe sings itself, the probability waves of energy hit each other and form material shapes in the same way as the sand/salt in the video is doing. Except in 3-D, not 2-D. Everything is, like, waves, man.

To quote Dave Barry: I am not making this up. Science fact, not science fiction.

## Laplace Transform

The Laplace Transform is the continuous version of a power series.

Think of a power series
$\sum_n \mathrm{const}_n \cdot \mathrm{input}^n \ = \ f( \mathrm{ input } )$
$\sum_n \mathrm{const}_n \cdot \blacksquare^n \ = \ f(\blacksquare)$
as mapping a sequence of constants to a function.
$\begin{pmatrix} \mathrm{const}_1 \\ \mathrm{const}_2 \\ \mathrm{const}_3 \\ \vdots \end{pmatrix} \longmapsto f \in \mathcal{F}$
Well, it does, after all.
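That mapping is easy to spell out in code. A minimal sketch (names are mine): take a coefficient sequence and return the function it determines.

```python
# A power series as a map from a coefficient sequence to a function:
# (const_0, const_1, const_2, ...)  |->  f(x) = sum const_n * x^n
def series_to_function(coeffs):
    def f(x):
        return sum(c * x**n for n, c in enumerate(coeffs))
    return f

# e.g. the coefficients (1, 1, 1, ...), truncated at 50 terms,
# approximate the geometric series 1/(1-x) for |x| < 1:
g = series_to_function([1.0] * 50)
print(g(0.5))  # close to 2.0
```

Different coefficient vectors in, different functions out: that's the "sequence ↦ function" arrow made literal.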

Then turn the ∑ into an ∫. And turn the xᵏ into exp(k ln x) = exp(−ks), writing s = −ln x. Now you have the continuous version of the “spectrum” view that allows so many tortuous ODE’s to be solved in a flash. I wonder what the economic value of that formula is?
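The sum-becomes-integral step can be verified numerically. A crude sketch (my own throwaway integrator, not a library routine): the coefficient function a(t) = 1 — the continuous cousin of the all-ones coefficient sequence — should transform to 1/s.

```python
import math

# Continuous power series: integral over t of a(t) * x^t,
# with x^t = exp(t ln x) = exp(-s t) where s = -ln x  (so 0 < x < 1).
# That integral is the Laplace transform of a.
def laplace(a, s, T=50.0, steps=200_000):
    # crude trapezoidal integration on [0, T] -- a sketch, not a library
    h = T / steps
    total = 0.5 * (a(0.0) + a(T) * math.exp(-s * T))
    for i in range(1, steps):
        t = i * h
        total += a(t) * math.exp(-s * t)
    return total * h

# a(t) = 1 plays the role of the coefficient sequence (1, 1, 1, ...);
# its transform is 1/s, the continuous cousin of 1/(1-x).
print(laplace(lambda t: 1.0, 2.0))  # approx 0.5
```

The "spectrum" view is exactly this: a function of t re-read as a function of s, one coefficient per frequency instead of one per power.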

In addition to solving some ODE’s that occur in engineering applications, there is also wisdom to be had here. Thinking of functions as all being made up of the same components allows fair comparisons between them.

(If you really want to know what a power series is, read Roger Penrose’s book.

To summarise: a lot of functions can be approximated by summing weighted powers of the input variable, as an equally valid alternative to applying the function itself. For example, adding 1 + input¹ + 1/2 ⨯ input² + 1/2/3 ⨯ input³ + 1/2/3/4 ⨯ input⁴ and so on, eventually approximates e^input.)
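That running product of denominators (1/2, 1/2/3, 1/2/3/4, …) makes the series easy to sum term by term:

```python
import math

# Summing weighted powers of the input:
#   1 + input + input^2/2 + input^3/(2*3) + input^4/(2*3*4) + ...
# eventually approximates e^input.
def exp_series(x, terms=20):
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)   # next term: x^(n+1) / (n+1)!
    return total

print(exp_series(1.0))   # approx e = 2.71828...
print(math.exp(1.0))
```

Twenty terms already match `math.exp` to machine precision at input = 1, since the factorial denominators shrink the tail so fast.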