Posts tagged with differential equations

a smooth field of 1-vectors in 3-D

(Source: thievess)


A beautiful depiction of a 1-form by Robert Ghrist. You never thought understanding a 1→1-dimensional ODE (or a 1-D vector field) would be so easy!

What his drawing makes obvious is that pictures of phase space carry a totally different meaning than “up”, “down”, “left”, “right”. In this case up = more; down = less; left = before; right = after. So it’s unhelpful to think of derivative = slope.

BTW, the reason ƒ must have an odd number of fixed points follows from the “dissipative” assumption (“infinity repels”). If ƒ(−∞)→+∞, then the red line enters from the top-left. And if ƒ(+∞)→−∞, then the red line exits toward the bottom-right. So no matter how many wiggles, it must cross the axis an odd number of times. (That’s the intermediate value theorem from undergrad calculus/analysis.)
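To see it concretely, here’s a minimal sketch in R with a made-up dissipative field ƒ(x) = x − x³ (my example, not Ghrist’s drawing):

f <- function(x) x - x^3                  # f(-oo) -> +oo and f(+oo) -> -oo: "infinity repels"
curve(f, from=-2, to=2, col="red", lwd=2, xlab="state", ylab="f(state)")
abline(h=0, col="grey")                   # the axis the red line must cross
points(c(-1, 0, 1), rep(0, 3), pch=19)    # the fixed points: an odd number (three)

The graph comes in from the top-left, leaves toward the bottom-right, and so crosses the axis three times, an odd number, just as the argument says.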

Found this via John D. Cook.

(Source: math.upenn.edu)




Proof that differential equations are real.

The shapes the salt is taking at different pitches are combinations of eigenfunctions of the Laplace operator.

(The Laplace operator ∆ tells you the flux density of the gradient flow of a many-to-one function ƒ. As eigenvectors summarise a matrix operator, so do eigenfunctions summarise this differential operator.)
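Here’s a minimal sketch in R of where those shapes come from, assuming an idealised square plate with fixed edges (a real Chladni plate has free edges, so treat this as an analogy only):

x <- seq(0, 1, length.out=200)
m <- 3; n <- 5                      # mode numbers; any pair works
# eigenfunctions of the Laplacian on the unit square: sin(m*pi*x) * sin(n*pi*y);
# combining two that share an eigenvalue gives a Chladni-like pattern
u <- outer(sin(m*pi*x), sin(n*pi*x)) + outer(sin(n*pi*x), sin(m*pi*x))
contour(x, x, u, levels=0, drawlabels=FALSE)   # nodal lines: where u == 0

The salt gathers where the vibrating plate stands still, i.e. on the zero set (nodal lines) of the combined eigenfunction.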

Remember that sound is compression waves — air vibrating back and forth — so that pressure can push the salt (or is it sand?) around just like wind blows sand in the desert.

Notice the similarity to solutions of the Schrödinger PDE for the hydrogen atom.

When the universe sings itself, the probability waves of energy hit each other and form material shapes in the same way as the sand/salt in the video is doing. Except in 3-D, not 2-D. Everything is, like, waves, man.

To quote Dave Barry: I am not making this up. Science fact, not science fiction.




The universe is a song, singing itself.





The Laplace transform is the continuous version of a power series.

Think of a power series

∑ₙ constₙ ⋅ ■ⁿ = ƒ(■)

as mapping a sequence of constants to a function:

{ const₁, const₂, … } ↦ ƒ(x)

Well, it does, after all.

Then turn the ∑ into an ∫. And turn the xᵏ into exp(k ln x), which is e^(−ks) once you write s = −ln x. Now you have the continuous version of the “spectrum” view that allows so many tortuous ODEs to be solved in a flash. I wonder what the economic value of that formula is?
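Spelled out, the analogy runs like this (a sketch, with s = −ln x):

∑ₙ aₙ xⁿ   ⟶   ∫₀^∞ a(t) xᵗ dt  =  ∫₀^∞ a(t) e^(t ln x) dt  =  ∫₀^∞ a(t) e^(−st) dt

which is exactly the Laplace transform of the coefficient function a(t): the sequence of constants aₙ becomes a function a(t), and the “spectrum” variable s stands in for the base x.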

In addition to solving some ODEs that occur in engineering applications, there is also wisdom to be had here. Thinking of functions as all being made up of the same components allows fair comparisons between them.

eXp <- c(1, 1/2, 1/6, 1/2/3/4, 1/2/3/4/5, 1/2/3/4/5/6, 1/2/3/4/5/6/7, 1/2/3/4/5/6/7/8, 1/2/3/4/5/6/7/8/9, 1/2/3/4/5/6/7/8/9/10, 1/2/3/4/5/6/7/8/9/10/11)   # the Taylor coefficients 1/n! of exp, for n = 1..11
plot(eXp, xlab="exponent in the power series", ylab="value of constant", main="Spectrum of exp", log="y", cex.lab=1.1, cex.axis=.9, type="h", lwd=8, lend="butt", col="#333333")

(If you really want to know what a power series is, read Roger Penrose’s book.

To summarise: a lot of functions can be approximated by summing weighted powers of the input variable, as an equally valid alternative to applying the function itself. For example, adding 1 + input¹ + 1/2 ⨯ input² + 1/2/3 ⨯ input³ + 1/2/3/4 ⨯ input⁴ and so on, eventually approximates e^input.)
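And a quick check of that last claim in R (a minimal sketch; the n = 0 term is the leading 1):

x <- 1.5                       # any input will do
n <- 0:10
cumsum(x^n / factorial(n))     # partial sums of the power series...
exp(x)                         # ...creep up on the true value, 4.481689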