
Posts tagged with Taylor series

A jet can be thought of as the infinitesimal germ of a section of some bundle or of a map between spaces.

Jets are a coordinate-free version of … Taylor series.
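To make the slogan concrete (this is the standard textbook statement, for a smooth real-valued function of one variable): in a chosen coordinate, the k-jet of f at a point a carries exactly the data of its degree-k Taylor polynomial,

j^k_a f \;\longleftrightarrow\; f(a) + f'(a)(x-a) + \tfrac{f''(a)}{2!}(x-a)^2 + \cdots + \tfrac{f^{(k)}(a)}{k!}(x-a)^k

and the coordinate-free point is that changing charts mixes these coefficients together in a consistent way, so the jet itself does not depend on the chart.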

Michael Bächtold, David Corfield, Urs Schreiber

 

Pictorial glossary

Bundle:

[Figure: vector bundle construction]

Sections:
[Figure: zero section of a fibration]

Mapping between spaces:




Just playing with z² / (z² + 2z + 2)

g(z)=\frac{z^2}{z^2+2z+2}

on WolframAlpha. That’s Wikipedia’s example of a function with two poles (= two singularities = two infinities). Notice how “boring” the line-only pictures are compared to the 3-D ℂ→ℝ picture of the mapping (the one with the poles=holes). That’s why mathematicians say ℂ uncovers more of “what’s really going on”.
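If you want to poke at it off WolframAlpha too, here is a minimal numpy/matplotlib sketch (the grid bounds and the clipping value are arbitrary choices) that plots |g(z)| over a patch of ℂ and shows the two spikes at the poles:

    # A rough stand-in for the WolframAlpha picture: plot |g(z)| over a patch of ℂ
    # and watch it blow up at the two poles.
    import numpy as np
    import matplotlib.pyplot as plt

    x, y = np.meshgrid(np.linspace(-3, 1, 400), np.linspace(-2, 2, 400))
    z = x + 1j * y
    g = z**2 / (z**2 + 2*z + 2)

    plt.pcolormesh(x, y, np.minimum(np.abs(g), 10), shading="auto")  # clip so the poles don't wash out the colours
    plt.colorbar(label="|g(z)| (clipped at 10)")
    plt.title("Two poles of g(z) = z²/(z² + 2z + 2) at z = -1 ± i")
    plt.xlabel("Re z"); plt.ylabel("Im z")
    plt.show()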

As opposed to normal differentiability, ℂ-differentiability of a function implies:

  • infinite descent into derivatives is possible: a ℂ-differentiable function is automatically infinitely differentiable (no strict chain C¹ ⊃ C² ⊃ C³ ⊃ … ⊃ Cω like in the real case)

  • nice Green’s-theorem type shortcuts make many, many ways of doing something equivalent. (So you can take a complicated real-world situation and validly do easy computations to understand it, because a squibbledy path computes the same as a straight path.)
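Here is a small numerical illustration of that second point, assuming the integrand is holomorphic on the region the paths pass through (the function z², the endpoints, and the wiggle are arbitrary choices): the integral along a squibbledy path agrees with the integral along the straight path.

    # "Any path computes the same": integrate the holomorphic f(z) = z**2 from 0 to 1+1j
    # along a straight segment and along a wiggly detour -- both give (1+1j)**3 / 3.
    import numpy as np

    def path_integral(f, path, n=20001):
        # trapezoid approximation of the integral of f(z) dz along z(t), t in [0, 1]
        z = path(np.linspace(0.0, 1.0, n))
        return np.sum((f(z[:-1]) + f(z[1:])) / 2 * np.diff(z))

    f = lambda z: z**2
    straight = lambda t: (1 + 1j) * t                              # straight line 0 -> 1+i
    wiggly   = lambda t: (1 + 1j) * t + 0.3j * np.sin(np.pi * t)   # same endpoints, squibbledy middle

    print(path_integral(f, straight))   # ≈ (1+1j)**3 / 3
    print(path_integral(f, wiggly))     # ≈ the same number
    print((1 + 1j)**3 / 3)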
  

Pretty interesting to just change things around and see how the parts work.

  • The roots of the denominator are −1+i and −1−i (of course the conjugate of a root is always a root, since i and −i are indistinguishable)
  • you can see how the denominator twists
  • a fraction in ℂ maps lines to circles: under inversion (and Möbius maps generally), lines and circles get turned into one another (they are just flips of each other; see also projective geometry)
  • if you change the z^2/ to a z/ or a 1/ you can see that.
  • then the Wikipedia picture shows the poles (infinities) 
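A two-line check of that root/pole claim (numpy here is just a convenience; any root finder works):

    # Where the denominator of g twists down to zero -- the poles.
    import numpy as np

    poles = np.roots([1, 2, 2])                # roots of z² + 2z + 2
    print(poles)                               # [-1.+1.j  -1.-1.j]: a conjugate pair

    g = lambda z: z**2 / (z**2 + 2*z + 2)
    print(abs(g(poles[0] + 1e-6)))             # enormous: we are standing right next to a pole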

Complex ℂ→ℂ maps can be split into four parts: the input “real” ⊎ “imaginary”, and the output “real” ⊎ “imaginary”. Of course splitting them up like that hides the holistic truth of what’s going on, which comes from the perspective of a “twisted” plane where each element is written z = |z| · exp(i · arg z).
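A tiny cmath sketch of both views at one sample point (the point z = 2+i is an arbitrary choice):

    # Rectangular split (Re/Im of input and output) versus the polar, "twisted plane" view.
    import cmath

    z = 2 + 1j
    w = z**2 / (z**2 + 2*z + 2)             # one value of the map g above

    print(z.real, z.imag)                   # the input's "real" and "imaginary" parts
    print(w.real, w.imag)                   # the output's "real" and "imaginary" parts

    r, theta = abs(z), cmath.phase(z)       # modulus |z| and argument arg z
    print(r * cmath.exp(1j * theta))        # |z|·exp(i·arg z) reassembles z (up to rounding)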

a conformal map (angle-preserving map)

ℂ→ℂ mappings mess with my head…and I like it.










Vectors, concretely, are arrows, with a head and a tail. If two arrows share a tail, then you can measure the angle between them. The length of the arrow represents the magnitude of the vector.

The modern abstract view is much more interesting but let’s start at the beginning.

Force vectors

Originally vectors were conceived as a force applied at a point.

As in, “That lawn ain’t mowing itself, boy. Now you git over there and apply a continuous stream of vectors to that lawnmower, before I apply a high-magnitude vector to your bee-hind!”

Thanks Galileo, totally gonna get you back, man

The Galilean idea of splitting a point into its x-coordinate, y-coordinate and z-coordinate works with vectors as well. “Apply a force of 5 pounds in the x direction and 2 pounds in the y direction”, for instance.

Therefore, both points and vectors benefit from adding more dimensions to Galileo’s “coordinate system”. Add a w dimension, a q dimension, a ξ dimension — and it’s up to you to determine what those things can mean.

If a vector can be described as (5, 2, 0), then why not make a vector that’s (5, 2, 0, 1.1, 2.2, 19, 0, 0, 0, 3)? And so on.
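In code that is literally just carrying around a longer list of coordinates; a throwaway numpy sketch (all the numbers are made up):

    # Vectors of any dimension: same addition, same scaling, just more slots.
    import numpy as np

    v3  = np.array([5, 2, 0])                                    # the 3-D example above
    v10 = np.array([5, 2, 0, 1.1, 2.2, 19, 0, 0, 0, 3])          # the 10-D example above

    print(v3 + np.array([1, 1, 1]))     # coordinate-wise addition
    print(2.5 * v10)                    # scaling works in any dimension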

4th Dimension Plus

So that’s how you get to 4-D vectors, 13-D vectors, and 11,929-D vectors. But the really interesting stuff comes from considering ∞-dimensional vectors. That opens up functional space, and sooooo many things in life are functions.

(Interesting stuff also happens when you make vectors out of things that are not traditionally conceived to be “numbers”. Another post.)

Abstractions

In the most general sense, vectors are things that can be added together. The modern, abstract view includes as vectors:

  • songs (sound waves / compression waves)
  • spike-trains and heartbeats
  • security prices over time

Things you can do with vectors

Given two vectors, you should be able to take their outer product or their inner product.

The inner product allows you to measure the angle between two vectors. If the inner product makes sense, then the space you are playing in has geometry. (Not all spaces have geometry — some just have topology.)

And — this is weird — if the concept of angle applies, then the concept of length applies as well. Don’t ask me why; the symbols just work that way.
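Here is the trick the symbols are pulling, in a small numpy sketch for the ordinary dot product on ℝⁿ (the two vectors are arbitrary): the same inner product hands you both length and angle.

    # Length and angle both come out of the inner product ⟨·,·⟩:
    #   ‖v‖ = sqrt(⟨v, v⟩)   and   cos θ = ⟨u, v⟩ / (‖u‖ ‖v‖)
    import numpy as np

    u = np.array([5.0, 2.0, 0.0])
    v = np.array([1.0, 4.0, 2.0])

    length_u  = np.sqrt(u @ u)                                   # ‖u‖
    cos_theta = (u @ v) / (np.sqrt(u @ u) * np.sqrt(v @ v))
    print(length_u, np.degrees(np.arccos(cos_theta)))            # a length and an angle, from one gadget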

Magnitude

But the “length” of a song (one of my for-instances above) would not be something like 2:43. The magnitude of a song vector would be the total amount of energy in the sound wave (compression wave).

\| \text{song} \|^2 = \int |\text{compression wave}(t)|^2 \, dt

What is the angle between two songs, two spike-trains, two security prices? What is the angle between two heartbeats? It’s the correlation between them.
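That claim can be checked directly: after subtracting each signal’s mean, the cosine of the angle between the two sample vectors is exactly their Pearson correlation. A small sketch with made-up signals (the sine waves and noise level are arbitrary):

    # Angle between two signals = correlation between them (after mean-centering).
    import numpy as np

    t = np.linspace(0, 1, 500)
    a = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.randn(500)        # pretend heartbeat #1
    b = np.sin(2 * np.pi * 3 * t + 0.4) + 0.1 * np.random.randn(500)  # a shifted cousin

    a0, b0 = a - a.mean(), b - b.mean()
    cos_angle = (a0 @ b0) / (np.linalg.norm(a0) * np.linalg.norm(b0))
    print(cos_angle)                      # cosine of the angle between the centred vectors
    print(np.corrcoef(a, b)[0, 1])        # Pearson correlation: the same number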

Linear Algebra

Also, you can do linear algebra on vectors — provided they’re coming out of the same point. Some might say that the ability to do linear algebra on something is what makes a vector.

That can mean different things in different spaces — like maybe you’re superposing wave-forms, or maybe you’re converting bitmap images to JPEG. Or maybe you’re Photoshopping an existing JPEG. Oh, man, Photoshop is so math-y.

Shearing the Mona Lisa (linear algebra on an image — from the Wikipedia page on eigenvectors, one of which is the red arrow)
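The same shear, minus the pixels, in a few lines of numpy (the shear factor 0.5 is arbitrary): the horizontal direction is an eigenvector the shear leaves alone, which is why the red arrow in the figure keeps its direction.

    # A shear matrix slides points sideways in proportion to their height.
    import numpy as np

    S = np.array([[1.0, 0.5],      # x' = x + 0.5·y
                  [0.0, 1.0]])     # y' = y

    print(S @ np.array([0.0, 1.0]))    # a vertical vector gets tilted
    print(S @ np.array([1.0, 0.0]))    # the horizontal vector is untouched: an eigenvector

    vals, vecs = np.linalg.eig(S)
    print(vals)                        # both eigenvalues are 1
    print(vecs)                        # both columns lie (numerically) along the x-axis:
                                       # the shear has only one independent eigendirection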




"It’s easy to learn calculus and then forget what the point was.”
—Gilbert Strang





The chief triumph of differential calculus is this:

Any nonlinear function can be approximated by a linear function.

(OK…pretty much any nonlinear function.) That approximation is the differential, aka the tangent line, aka the best affine approximation.  It is valid only around a small area but that’s good enough. Because small areas can be put together to make big areas. And short lines can make nonlinear* curves.

In other words, zoom in on a function enough and it looks like a simple line. Even when the zoomed-out picture is shaky, wiggly, jumpy, scrawly, volatile, or intermittently-volatile-and-not-volatile:

[Chart: Fed funds rate history since 1990; data back to 1949 available at www.economagic.com]

Moreover, calculus says how far off those linear approximations are. So you know how tiny the straight, flat puzzle pieces should be to look like a curve when put together. That kind of advice is good enough to engineer with.
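A quick numerical version of that promise, using sin and its tangent line at a sample point (the point 0.7 and the step sizes are just illustrative choices): shrink the piece by 10× and the error shrinks by roughly 100×, which is the calculus error bound at work.

    # Tangent-line approximation error falls roughly like the square of the step size.
    import numpy as np

    f, df = np.sin, np.cos        # a nonlinear function and its derivative
    a = 0.7                       # where we build the tangent line

    for h in [0.1, 0.01, 0.001]:
        linear = f(a) + df(a) * h             # the linear (tangent-line) prediction at a + h
        error  = abs(f(a + h) - linear)
        print(h, error)                       # each 10x smaller h gives ~100x smaller error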

 

It’s surprising that you can break things down like that, because nonlinear functions can get really, really intricate. The world is, like, complicated.

So it’s reassuring to know that ideas that are built up from counting & grouping rocks on the ground, and drawing lines & circles in the sand, are in principle capable of describing ocean currents, architecture, finance, computers, mechanics, earthquakes, electronics, physics.


(OK, there are other reasons to be less optimistic.)


 

 

* What’s so terrible about nonlinear functions anyway? They’re not terrible, they’re terribly interesting. It’s just nearly impossible to generally, completely and totally solve nonlinear problems.

But lines are doable. You can project lines outward. You can solve systems of linear equations with the tap of a computer.  So if it’s possible to decompose nonlinear things into linear pieces, you’re money.
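For instance, this is the whole of “solve a system of linear equations with the tap of a computer” in numpy (the coefficients are made up):

    # Three equations, three unknowns, one call.
    import numpy as np

    A = np.array([[2.0, 1.0, -1.0],
                  [1.0, 3.0,  2.0],
                  [1.0, 0.0,  1.0]])
    b = np.array([8.0, 13.0, 4.0])

    x = np.linalg.solve(A, b)     # the unique solution of A·x = b
    print(x)
    print(A @ x - b)              # ≈ 0: the solution really satisfies the equations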

 

Two more findings from calculus.

  1. One can get closer to the nonlinear truth even faster by using polynomials. Put another way, the simple operations of + and ×, taught in elementary school, are good enough to do pretty much anything, so long as you do + and × enough times (a quick numerical sketch of this, and of the next point, follows this list). 

  2. One can also get arbitrarily truthy using trig functions. You may not remember sin & cos but they are dead simple. More later on the sexy things you can do with them (Fourier decomposition).
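A minimal numerical sketch of both claims (the target functions, interval, and number of terms are arbitrary choices): a Taylor polynomial built only from + and ×, and a short sum of sines creeping up on a square wave.

    import numpy as np

    x = np.linspace(-2.5, 2.5, 7)

    # 1. A polynomial made of + and × only: the degree-7 Taylor polynomial of sin.
    taylor = x - x**3/6 + x**5/120 - x**7/5040
    print(np.max(np.abs(np.sin(x) - taylor)))            # already within about 0.01 on this interval

    # 2. A short sum of sines: a Fourier-style partial sum for the square wave sign(x).
    partial = (4/np.pi) * sum(np.sin(k*x)/k for k in (1, 3, 5, 7, 9))
    print(np.round(np.sign(x) - partial, 2))             # modest errors, shrinking as more terms are added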