Posts tagged with function

One of my projects in life is to (i) become “fluent in mathematics” in the sense that my intuition should incorporate the objects and relationships of 20th-century mathematical discoveries, and (ii) share that feeling with people who are interested in doing the same in a shorter timeframe.

Inspired by the theory of Plato’s Republic that “philosopher kings” should learn Geometry—pure logic or the way any universe must necessarily work—and my belief that the shapes
[images: assorted shapes, including a covering drawn by Robert Ghrist, www.math.upenn.edu/~rghrist]

and feelings thereof operate on a pre-linguistic, pre-rational “gut feeling” level, this may be a worthwhile pursuit. The commercial application would come in the sense that, once you’re in a situation where you have to make big decisions, the only tools you have, in some sense, are who you have become. (Who knows if that would work—but hey, it might! At least one historical wise guy believed the decision-makers should prepare their minds with the shapes of ultimate logic in the universe—and the topologists have told us by now of many more shapes and relations.)

To that end I owe the interested a few more blogposts on:

  • automorphisms / homomorphisms
  • the logic of shape, the shape of logic
  • breadth of functions
  • "to equivalence-class"

which I think relate mathematical discoveries to unfamiliar ways of thinking.

 

Today I’ll talk about the breadth of functions.

If you remember Descartes’ concept of a function, it is merely an association that assigns each input exactly one output. “Associate” is about as blah and general and nothing a verb as I could come up with. How could it say anything worthwhile?

The breadth of functions-as-verbs, I think, comes from which codomains you choose to associate to which domains.

The biggest contrast I can come up with is between

  1. a function that associates a non-scalar domain to a ≥0 scalar codomain, and
  2. a function from a domain to itself.

If I impose further conditions on the second kind of function, it becomes an automorphism. The conditions are surjectivity (coveringness, ≥) and injectivity (one-to-one-ness, ≤); together they make the function invertible. Examples of invertible functions: the successor function and the square function; monotone and antitone functions (not over all of ℝ, just the domain you see, 0<x<1 ⊂ ℝ).

If I impose those two conditions then I’m talking about an isomorphism (bijection) from a space to itself, which I could also call “turning the abstract space over and around and inside out in my hands” — playing with the space. If I biject the space to another version of itself, I’m looking at the same thing in a different way.


Back to the first case, where I associate a ≥0 scalar (i.e., a “regular number” like 12.8) to an object of a complicated space, like

  • the space of possible neuron weightings;
  • the space of 2-person dynamical systems (like the “love equations”);
  • a space containing weird objects that twist in a way that’s easier to describe than to draw;
  • a space of possible things that could happen;
  • the space of paths through London that spend 90% of their time along the Thames;
  • the space of possible protein configurations;

then I could call that “assigning a size to the object”. Again I should add some more constraints to the mapping in order to really call it a “size assignment”. For example continuity, if reasonable—I would like similar things to have a similar size. Or the standard definition of a metric: dist(a,b) = dist(b,a); dist(x,x) = 0, with no other zeroes besides dist(self, self); and the triangle inequality dist(a,c) ≤ dist(a,b) + dist(b,c).
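To make the metric axioms concrete, here is a minimal sketch (mine, not from the original post) that checks symmetry, the zero condition, and the triangle inequality numerically for the taxicab distance on random points in the plane:

    # A minimal sketch: checking the metric axioms numerically for a candidate
    # "size of the difference" between objects -- here points in the plane
    # with the taxicab (L1) distance.
    import itertools
    import random

    def dist(a, b):
        """Taxicab distance between two points in the plane."""
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    points = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(30)]

    for a, b, c in itertools.product(points, repeat=3):
        assert dist(a, b) == dist(b, a)                       # symmetry
        assert dist(a, a) == 0                                # self-distance is zero
        assert dist(a, b) > 0 or a == b                       # no other zeroes
        assert dist(a, c) <= dist(a, b) + dist(b, c) + 1e-12  # triangle inequality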

The word “size” itself could have many meanings as well, such as:

  • volume
  • angle measure
  • likelihood
  • length/height
  • correlation
  • mass
  • how long an algorithm takes to run
  • how different from the typical an observation is
  • how skewed a statistical distribution is
  • (the inverse of) how far I go until my sampling method encounters the farthest-away next observation
  • surface area
    [image: bronchial anatomy]
  • density
  • number of tines (or “points” if you’re measuring a buck’s antlers)
  • how big of a suitcase you need to fit the thing in (L-∞ norm)

which would order objects differently (e.g., lungs have more surface area in less volume; fractals have more points but needn’t be large to have many points; a delicate sculpture could have small mass, small surface area, large height, and be hard to fit into a box; and osmium would look small but be very heavy—heavier than gold).


Let’s stay with the weighted-neurons example, because it’s evocative and because posets and graphs model a variety of things.


A mapping from graphs to graphs might be just to interchange certain wires for dots. So roads become cities and cities become roads. Weird, right? But mathematically these can be dual. I might also take an observation from depth-first versus breadth-first search from computer science (algorithm execution as trees) and apply it to a network-as-brain, if the tree-ness is sufficiently similar between the two and if trees are really a good metaphor after all for either algorithms or brains.
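As an illustration of swapping “wires for dots”, here is a minimal sketch (my own, with a made-up four-city road network) of the line-graph construction: each road becomes a node, and two such nodes are joined when the original roads share a city.

    # A minimal sketch of the "roads become cities" swap via the line graph.
    from itertools import combinations

    roads = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]  # hypothetical cities A-D

    line_graph_nodes = roads
    line_graph_edges = [
        (e, f) for e, f in combinations(roads, 2)
        if set(e) & set(f)          # the two roads meet at a common city
    ]

    print(line_graph_nodes)
    print(line_graph_edges)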

Brains sound like a wicked-hard space to think about. It’s a tightly connected (but not totally connected) network (graph theory); each node’s 3-D location may be important as well (voxels); and the signals propagate through time (dynamical).

More broadly, one hopes that theorems about automorphism groups on trees (like automorphism groups on T-shirts) could evoke interesting or useful thoughts about all the tree-like things and web-like things: be they social networks, roads, or brains.

 

So that’s one example of a pre-linguistic “shape” that’s evoked by 20th-century mathematics. Today I feel like I could do two: so how about To Equivalence-Class.

Probably due to the invention of set theory, mathematics offers a way of bunching all alike things together. This is something people have done since at least Aristotle; it’s basically like Aristotle’s categories.

  • The set of all librarians;
  • The set of all hats;
  • The set of all sciences;
  • Quine’s (extensional) definition of the number three as “the class of all sets with cardinality three”. (Don’t try the “intensional” definition or “What is it intrinsically that makes three, three? What does three really mean?” unless you’re trying to drive yourself insane to get out of capital punishment.)
  • The set of all cars;
  • The set of all cats;
  • The set of all computers;
    [image: a water computer]
  • The set of all even numbers;
  • The set of all planes oriented any way in 𝔸³
  • The set of all equal-area blobs in any plane 𝔸² that’s parallel to the one you’re talking about (but could be shifted anywhere within 𝔸³)
  • The set of all successful people;
  • The set of all companies that pay enough tax;
  • The set of all borrowers who will make at least three late payments during the life of their mortgage;
  • The set of all borrowers with between 1% and 5% chance of defaulting on their mortgage;
  • The set of all Extraverted Sensing Feeling Perceivers;
  • The set of all janitors within 5 years of retirement age, who have worked in the custodial services at some point during at least 15 of the last 25 years;
  • The set of all orchids;
  • The set of all ungulates;

The boundaries of some of these (Aristotelian, not Lawverean) categories may be fuzzy or vague

  • if you cut off a cat’s leg is it still a cat?
    What if you shave it? What if you replace the heart with a fish heart?
  • Is economics a science? Is cognitive science a science? Is mathematics a science? Is the particular idea you’re trying to get a grant for scientific?

and in fact membership in any of these equivalence classes could be part of a rhetorical contest. If you already have positive associations with “science”, then if I frame what I do as scientific then you will perceive it as e.g. precise, valuable, truthful, honourable, accurate, important, serious, valid, worthwhile, and so on. Scientists put Man on the Moon. Scientists cured polio. Scientists discovered Germ Theory. (But did “computer scientists” or “statisticians” or “Bayesian quantum communication” or “full professors” or “mathematical élite” or “string theorists” do those things? Yet they are classed together under the STEM label. Related: engineers, artisans, scientists, and intelligentsia in Leonardo da Vinci’s time.)

But even though it is an old thought-form, mathematicians have done such interesting things with the equivalence-class concept that it’s maybe worth connecting the mathematical type with the everyday type and seeing where it leads you.

Characteristic property of the quotient topology

What mathematics adds to the equivalence-class concept is the idea of “quotienting” to make a new equivalence-class. For example, if you take the set of integers you can quotient it by parity to get two classes: the even numbers and the odd numbers.
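A minimal sketch (mine) of that quotient in code: send each integer to its parity class and watch the whole of ℤ collapse into just two bins.

    # Quotienting the integers by parity: the quotient set has two members.
    from collections import defaultdict

    def parity_class(n):
        """Representative of n's equivalence class under 'same parity'."""
        return n % 2

    classes = defaultdict(list)
    for n in range(-5, 6):
        classes[parity_class(n)].append(n)

    print(dict(classes))   # two classes: the evens and the odds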


  • If you take a manifold and quotient it you get an orbifold—an example of which would be Dmitri Tymoczko’s mathematical model of Bach/Mozart/Western theory of harmonious musical chords.
  • If you take the real plane ℝ² and quotient it by ℤ² (ℤ being the integers) you get the torus 𝕋².
  • Likewise if you take ℝ and quotient it by the integers ℤ you get a circle.

  • If you take connected orientable topological surfaces S with genus g and p punctures, and quotient by the group of orientation-preserving diffeomorphisms of the surface, you get Riemann’s moduli space of deformations of complex structures on S. (I don’t understand that one but you can read about it in Introduction to Teichmüller theory, old and new by Athanase Papadopoulos. It’s meant to just suggest that there are many interesting things in moduli space, surgery theory, and other late-20th-century mathematics that use quotients.)
  • If you quotient the disk D² by its boundary ∂D² you get the globe S².
  • Klein bottles are quotients of the unit rectangle I²=[0,1]².


So equivalence-classing is something we engage in plenty in life and business. Whether it is

  • grouping individuals together for stereotypes (maybe based on the way they dress or talk or spell),
  • or arguing about what constitutes “science” and therefore should get the funding,
  • or about which borrowers should be classed together to create an MBS with certain default probabilities and covariances (correlations) with other things like the S&P.

Even any time one refers to a group of distinct people under one word—like “Southerners” or “NGO’s” or “small business owners”—that’s effectively creating an (Aristotelian) category and presuming certain properties hold—or hold approximately—for each member of the set.

[images: Gastner red/blue cartograms of US election results by area, by state and by county; a purple-blend county cartogram; and a map of average margins of presidential victory by state]

Of course there are valid and invalid ways of doing this—but before I started using the verb “to equivalence-class” to myself, I didn’t have as good a rhetoric for interrogating the people who want to generalise. Linking together the process of abstraction-from-experience—going from many particular observations of being cheated to a model of “untrustworthy person”—with the mathematical operations of

  • slicing off outliers,
  • quotienting along properties,
  • foliating,
  • considering subsets that are tamer than the vast freeness of generally-the-way-anything-can-be

—formed a new vocabulary that’s helpfully guided my thinking on that subject.

Ordine geometrico demonstrata!




We want to take theories and turn them over and over in our hands, turn the pants inside out and look at the sewing; hold them upside down; see things from every angle; and sometimes quotient or equivalence-class over some property, to consider a subset of cases for which a conclusion can be drawn (e.g., “all fair economic transactions” (non-exploitive?) or “all supply-demand curves such that how much you get paid is in proportion to how much you contributed” (how to define it? vary the S or the D and get a local proportionality of PS:TS? how to vary them?)).

Consider abstractly a set like {a, b, c, d}. There are 4! ways to rearrange the letters. Since sets are unordered, we could just as well call the set the quotient of all rearrangements of quadruples of once-and-only-once-used letters like (b, d, c, a).
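A minimal sketch (mine) of that collapse: generate all 4! orderings and quotient them by “same letters, any order”.

    # The set {a,b,c,d} as a quotient of its 24 ordered rearrangements.
    from itertools import permutations

    orderings = list(permutations(("a", "b", "c", "d")))
    print(len(orderings))              # 24 = 4! ordered quadruples

    equivalence_class = {frozenset(p) for p in orderings}
    print(len(equivalence_class))      # 1 -- they all collapse to the same set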

Descartes’ concept of a mapping is “to assign” (although it’s not specified who is doing the assigning; just some categorical/universal ellipsis of agency) members of one set to members of another set.

  • For example the Hash Map of programming.
    {
     '_why' => 'famous programmer',
     'North Dakota' => 'cold place',
     ... }
  • Or to round up ⌈num⌉: not injective because many decimals are written onto the same integer.


  • Or to “multiply by zero”, i.e. “erase” or “throw everything away”: every input is sent to the single output 0. (A sketch of these three mappings follows this list.)
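Here is a minimal sketch (my own, in Python rather than the Ruby-style hash above) of the three mappings: a hash map, the non-injective ceiling, and the everything-goes-to-zero map.

    # Three kinds of "assignment": a hash map, rounding up, multiplying by zero.
    import math

    hash_map = {"_why": "famous programmer", "North Dakota": "cold place"}

    decimals = [0.2, 0.7, 0.99, 1.0]
    print([math.ceil(x) for x in decimals])   # [1, 1, 1, 1] -- many inputs, one output
    print([0 * x for x in decimals])          # everything thrown away onto 0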

In this sense a bijection from the same domain to itself is simply a different—but equivalent—way of looking at the same thing. I could rename A=1, B=2, C=3, D=4, or rename A='Elsa', B='Baobab', C=√5, D='Hypatia', and end with the same conclusion or “same structure”. But beyond renamings we are also interested in different ways of fitting the puzzle pieces together. The green triangle of the wooden block puzzle could fit into the same hole in three rotations (or is it six? or infinitely many small left-or-right rotations?).


By considering all such mappings, dividing them up, focussing on the easier classes, classifying the types at all, and finding (or imposing) order or pattern on what seems too chaotic or hard to predict (viz., economics), more clarity, or at least less stupidity, might be found.

The hope isn’t completely without support either: Quine explained what a number is with an equivalence class of sets; Tymoczko described the space of musical chords with a quotient of a manifold; PDEs (read: practical engineering applications) are solved, or at least better understood geometrically, with bijections; Gauss added 1+2+3+…+99+100 in two easy steps rather than ninety-nine, using a bijection (pair k with 101−k); ….
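A minimal sketch (mine) of the Gauss bijection: pair k with 101 − k, so the hundred-term sum collapses into fifty identical pairs.

    # Gauss's trick as a bijection: k <-> 101 - k.
    n = 100
    pairs = [(k, n + 1 - k) for k in range(1, n // 2 + 1)]
    assert all(a + b == n + 1 for a, b in pairs)   # every pair sums to 101

    gauss_sum = (n // 2) * (n + 1)                 # 50 pairs of 101
    assert gauss_sum == sum(range(1, n + 1))       # matches the long way: 5050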

 

It’s hard for me to speak to why we want groups and what they are both at once. Today I felt more capable of writing what they are.

So this is the concept of sameness; let’s discuss just linear planes (or hyperplanes) and countable sets of individual things.

I’ll leave it up to you, or to me later, to enumerate the things from life or the physical world that “look like” these pure mathematical things, and are therefore amenable, by metaphor and application of proved results, to group theory.

But just as one motivating example: it doesn’t matter whether I call my coordinates in the mechanical world of physics (x,y,z) or (y,x,z). This is just a renaming or bijection from {1,2,3} onto itself.

Even more, I could orient the axes any way I want. As long as the three are mutually perpendicular, the origin can be anywhere (invariance under an affine mapping — we can equivalence-class those together) and the rotation of the 3-D system can be anything. Stand in front of the class as the teacher, upside down, oriented so that one of the dimensions helpfully disappears as you fly straight forward (or two dimensions disappear as you run straight forward on a flat road). This is an observation taken for granted by my 8th-grade physics teacher. But in the language of group theory it means we can equivalence-class over the special linear group of 3-by-3 matrices that leave volume the same; any rotation in 3-D is such a matrix.
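A minimal sketch (mine) of that equivalence-classing: a rotation about the z axis has determinant 1 (volume unchanged), and relabelling the axes is just a bijection of {1, 2, 3}.

    # Rotations preserve volume (det = 1); axis relabelling is a permutation.
    import numpy as np

    theta = 0.7                                # any rotation angle about the z axis
    R = np.array([
        [np.cos(theta), -np.sin(theta), 0.0],
        [np.sin(theta),  np.cos(theta), 0.0],
        [0.0,            0.0,           1.0],
    ])
    print(np.linalg.det(R))                    # ~1.0 -- volumes are unchanged

    P = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])  # swap the first two labels
    print(abs(np.linalg.det(P)))               # 1.0 -- volumes unchanged again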

Sameness-preserving Groups partition into:

  • permutation groups, or rearrangements of countable things, and
  • linear groups, or “trivial” “unimportant” “invariant” changes to continua (such as rescaling—if we added a “0” to the end of all your currency nothing would change)
  • conjunctions of smaller groups

The linear groups—get ready for it—can all be represented as matrices! This is why matrices are considered mathematically “important”. Because we have already conceived this huge logical primitive that (in part) explains the Universe (groups) — or at least allows us to quotient away large classes of phenomena — and it’s reducible to something that’s completely understood! Namely, matrices with entries coming from corpora (fields).

So if you can classify (bonus if human beings can understand the classification in intuitive ways) all the qualitatively different types of Matrices,


then you not only know where your engineering numerical computation is going, but you have understood something fundamental about the logical primitives of the Universe!

Aaaaaand, matrices can be computed on this fantastic invention called a computer!

 

unf




Just playing with

g(z) = z² / (z² + 2z + 2)

on WolframAlpha. That’s Wikipedia’s example of a function with two poles (= two singularities = two infinities). Notice how “boring” line-only pictures are compared to the 3-D ℂ→ℝ picture of the mapping (the one with the poles=holes). That’s why mathematicians say ℂ uncovers more of “what’s really going on”.
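A minimal sketch (mine, using numpy) of where those two poles sit and how |g| blows up as you approach one of them:

    # Locating the poles of g(z) = z^2 / (z^2 + 2z + 2) numerically.
    import numpy as np

    poles = np.roots([1, 2, 2])            # roots of the denominator z^2 + 2z + 2
    print(poles)                           # approximately -1+1j and -1-1j

    def g(z):
        return z**2 / (z**2 + 2*z + 2)

    for eps in (0.1, 0.01, 0.001):
        z = poles[0] + eps                 # step toward the pole
        print(abs(g(z)))                   # grows without bound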

As opposed to normal differentiability, ℂ-differentiability of a function implies:

  • infinite descent into derivatives is possible: every ℂ-differentiable function is infinitely differentiable, so there is no strict chain C¹ ⊃ C² ⊃ C³ ⊃ … ⊃ Cω like there is over ℝ

  • nice Green’s-theorem type shortcuts make many, many ways of doing something equivalent. (So you can take a complicated real-world situation and validly do easy computations to understand it, because a squibbledy path computes the same as a straight path.)
  

Pretty interesting to just change things around and see how the parts work.

  • The roots of the denominator are −1+i and −1−i (for a real-coefficient polynomial the conjugate of a root is always a root, since i and −i are indistinguishable)
  • you can see how the denominator twists
  • a fraction in ℂ space maps lines to circles, because lines and circles are turned inside out (they are just flips of each other: see also projective geometry)
  • if you change the z^2/ to a z/ or a 1/ you can see that.
  • then the Wikipedia picture shows the poles (infinities) 

Complex ℂ→ℂ maps can be split into four parts: the input “real” ⊎ “imaginary”, and the output “real” ⊎ “imaginary”. Of course splitting them up like that hides the holistic truth of what’s going on, which comes from the perspective of a “twisted” plane where the elements are z = |z| · exp(i · arg z).
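A minimal sketch (mine) of the same split in code: the real ⊎ imaginary view of one output of g, and the polar |z|·exp(i·arg z) view of the same point.

    # Rectangular vs polar views of one output of g.
    import cmath

    def g(z):
        return z**2 / (z**2 + 2*z + 2)

    w = g(1 + 2j)
    print(w.real, w.imag)                  # the "real + imaginary" split

    r, phi = abs(w), cmath.phase(w)
    print(r * cmath.exp(1j * phi))         # reassembled: the same w, polar viewpoint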

a conformal map (angle-preserving map)

ℂ→ℂ mappings mess with my head…and I like it.










One way to think about quantum operators is as Questions that are asked of a quantum system.

  • Identity operator = "Who are you?"
  • Energy operator = "How much do you weigh?"
  • "What is your spin along the z axis?”
  • and so on.
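A toy sketch (my own, not from the post) of asking a one-qubit state two of these questions via expectation values:

    # "Asking" a quantum state two questions: identity and spin along z.
    import numpy as np

    identity = np.eye(2)
    sigma_z = np.array([[1, 0], [0, -1]])     # spin-z operator (Pauli z)

    psi = np.array([1, 1]) / np.sqrt(2)       # equal superposition of up and down

    print(psi.conj() @ identity @ psi)        # ~1.0: "you are you" (normalisation)
    print(psi.conj() @ sigma_z @ psi)         # ~0.0: no net spin along z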







Statistical moments, letter values, and other verbs that are often just called “statistics” can be thought of in the same way: asking questions of a data set.


For example, after you run the (∑xᵢ)/n operation to get the mean happiness in Europe (2.0 out of 3.0) versus the mean happiness in the US (1.2 out of 2.0), you naturally would want to ask things like:

  • What about the least happy people? Are there more people answering near 0.0 in the US or Europe?
  • What’s the variance ∑(xᵢ − x̄)²/n?
  • What’s the skewness? (Blanchflower & Oswald’s data survey 45,000 Americans and 400,000 Europeans — enough degrees of freedom to meaningfully measure skew.)
  • What’s the conditional value-at-risk at the 10% level? (average of the bottom 10% unhappiness.)
  • Apply a smoothing kernel to pick up which country has more of the least-happy people, without choosing a particular cutoff. (And maybe a second kernel to deal with the different scales: should we assume US 1.0 = EUR 1.5? Or maybe count from the top, so US 1.8 = EUR 2.8?) A sketch of a few of these questions in code follows this list.
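Here is a sketch (mine, with made-up happiness scores on a 0-to-3 scale, since the Blanchflower & Oswald data aren’t reproduced here) of asking several of these questions of one dataset:

    # Several "questions" asked of the same data with standard operators.
    import numpy as np

    happiness = np.array([0.3, 1.1, 1.8, 2.0, 2.2, 2.4, 2.6, 2.7, 2.9, 3.0])

    mean = happiness.mean()                            # the sum/n question
    variance = happiness.var()                         # spread around the mean
    skew = np.mean((happiness - mean) ** 3) / happiness.std() ** 3   # lopsidedness

    cutoff = np.quantile(happiness, 0.10)
    cvar_10 = happiness[happiness <= cutoff].mean()    # average of the bottom 10%

    print(mean, variance, skew, cvar_10)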

Running these operators on the dataset will tell you an answer to one question, just like in English.

One difference is that classical statistical operators typically spit out two numbers in reply to your question: an answer, and a confidence level in that answer. The confidence in the answer is computed based on experimental assumptions by people with names like Pearson, Fisher, and Chisquare.







  • The shape of the continents depends on the global temperature. (Cold locks ice in polar caps.) Google “Morse theory”.
  • The price of housing always rises, until it doesn’t.
     
  • You develop a system of habits to discipline yourself; maxims for self-motivation; then the working world changes on you. Loyalty is no longer rewarded. Hard work is less valued than the ability to make PlentyOfFish.com.
  • For years the normal trading range of [insert spread, instrument, or security] is X, until one day sufficiently many (external) parameters shift. The market changes and you see a 20-sigma event. Heroes only.

  • Whoever coded your profile website (chi.mp, flavors.me, tumblr), wrote a route that takes a string as parameter. Entering the name isomorphismes into this function fetches this webdata. Entering your name fetches your webdata. All part of one and the same formula.
  • The Lotka-Volterra equations of a large ecosystem, dancing as the sliders shift around in their hypercube. Death and life hang in the balance. And it’s literally a balance. If the fulcrum moves so far that the lever hits the ground, a species will either become extinct or overpopulate the ecosystem (like an algal bloom)—either phase change being irreversible. (Er, at least anti-entropic.)
     
  • You think you know yourself, until you step into a new context—new country, new career, new city—and latent aspects of you become dominant.

    Who was I before? If I was her then and am this now, what is the underlying me?

    Self as a function of circumstance. Perhaps just as constant at root, but reactive; responsive; springy; primed for change.




This week I posted different viewpoints on The Self.

Particularly I’m interested in self as a function of inputs. Just as the size of the eyes a fly is born with is a function of the temperature of the eggs, so too many facets of ourselves are a function of the environment, other people’s behaviour toward us, game-theoretic strategy, incentives, and so on.

Other people’s theories of us can be seen as functions as well. (For example, a hiring manager’s view of employee performance may assume school quality or GPA to be positively related to human capital.)

  • Economics: I didn’t get to Jean Tirole’s theory of money-saving as bargains among multiple selves.
  • Psychology: Jim Townsend found that self-versus-other dichotomies can be expressed as a negatively curved metric space.
  • Personality: I’ve already written that the MBTI is too restrictive a theory of self. It maps from habits to [0,1]⁴.
  • Douglas Hofstadter's thoughts on the extension of the pronoun “we”. ‘We’ went to the moon, ‘we’ share a common ancestor with other primates, ‘we’ are overcrowding the planet, ‘we’ have a nice theory of quantum chromodynamics, ‘we’ do not know if ‘we’ are experiencing a simulation or actual reality, ‘we’ don’t really know what makes an economy grow.
  • Criminology: My criminal output is a function of the crime level in the neighbourhood I’m raised in. Except when it’s a function of strongly held beliefs.
  • Sociology: In contemporary OECD places, ‘we’ are coerced by our cultures to play roles. “There are” certain scripts — modifiable but still requisite or recommended in some sense; at the very least influential, even if only because benefits and rewards are socially tied to role performance.
  • The topic of cultural coercion … is something I’ll return to.
  • The concept of people-as-functions is one I want to return to later, in discussing history, economics, and a couple of different ways of talking about human behaviour mathematically.

I can think of several other mathematics-inspired questions about ourselves. The difference between habit and personality; the yogic metaphor of a river cutting deeper as related to habituation; choice & free will; Markovian and completely-the-opposite-of-Markovian choices (how constrained we are by our past choices); … and a lot more. But you know what, writing is hard. So I do only a little at a time.

Update, 25 September 2013: I’ve written more on this topic now:




Mathematics is the *Most* Different Language.

In Can we make mathematics intelligible?, R P Boas jokes:

There is a test for identifying some of the future professional mathematicians at an early age. These are the students who instantly comprehend a sentence beginning “Let X be an ordered quintuple (a, T, π, σ, 𝔅), where …”

I’ll try to explain what mathematicians mean when they write this way.

Letters

Think about a set containing the letters {A, B, C, D, E, F, G}. As written it’s equivalent to the set {F, C, E, G, A, D, B}, so the set doesn’t communicate the order information we know “should” go along with these letters. To express that, we should talk about the pair ( {A,B,C,D,E,F,G}, 𝓞) where 𝓞 is the ordering A < B < C < D < E < F < G.

Would it have been clearer if I’d pasted the definition of the ordering into the interior of the pair, instead of using 𝓞 as a shorthand? I’m not sure. Part of the way you have to learn to read mathematics papers is mentally substituting shorthands for definitions wherever they appear.

Since no one reading this has to look up the enumerative definition of the alphabet, let me just use the shorthand 𝓪 for the set containing each of the letters and 𝓞ʹ for the well-known ordering of the letters A < B < C … < X < Y < Z (remember, I already used 𝓞 so now I have to add a prime to differentiate this new, larger ordering). Now I can just write (𝓪, 𝓞ʹ) for the ordered alphabet.

The Next Letter

So what if I wanted to talk about “the letter after Q” ? Using the current pair (𝓪, 𝓞ʹ) this concept is undefined. In order to include “after” as a concept in the space I am developing, I need to expand the pair to a triple.

Now, how should I add in the concept of “after”? I could parsimoniously add only the +1 operation. But I may want to talk about “the fourth letter after Q” as well. Should that be four iterations of +1 (i.e., 𝔰∘𝔰∘𝔰∘𝔰 where 𝔰 is the successor function)? It will be annoying enough to write out a definition of 𝔰 that clearly states “C = B+1,   S = R+1,   ” and so on. Deary me. I wouldn’t want to have to enumerate that for +3, +13, and so on. I don’t have an infinite amount of either ink or patience.

I’ll leave it to function composition ∘ and just define the output (image) of the +1 operator 𝔰 as “The letter to the right of the input, under the ordering 𝓞ʹ.” That doesn’t sound formal enough to be correct, but I’ll stop there.

An Ordered Triple

Now “we have” a triple (𝓪, 𝓞ʹ, 𝔰) containing a set 𝓪, an order 𝓞ʹ, and a function 𝔰. This is still the alphabet we’re trying to talk about here, right? In fact I’m starting to doubt if even a triple is enough, because the triple doesn’t contain either < or ∘, and those are symbols I’ve used so they’d better belong to the universe.

So I’ll say the alphabet is defined as a quintuple (𝓪, 𝓞ʹ, 𝔰, <, ∘) containing a set, an order, a unary function 𝔰, and two binary operations < and ∘. Phew. Please tell me I’m done!

You know what, I just thought of something else. What about the letter before? Argh! It’s so simple (the alphabet) and yet so difficult (defining an appropriate k-tuple). Alrighty, I learned a shortcut in school for this: I’ll define an inverse operation 𝔰⁻¹. And everybody knows what I mean before I say it so I’ll just stop here.

Now, consider the alphabet. The alphabet is defined as a sextuple (𝓪, 𝓞ʹ, 𝔰, <, ∘, 𝔰⁻¹). Or maybe I should say it’s a triple? (𝓪, (𝓞ʹ, <), (𝔰, ∘, 𝔰⁻¹)). One has options.
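A minimal sketch (my own) of that tuple in code, with the successor and its inverse defined through the ordering rather than enumerated by hand:

    # The alphabet as a tuple: a bare set, an ordering, a successor, an inverse.
    import string

    letters = set(string.ascii_uppercase)                              # the unordered set
    ordering = {c: i for i, c in enumerate(string.ascii_uppercase)}    # the ordering

    def successor(c):                                                  # "+1", defined via the ordering
        return string.ascii_uppercase[ordering[c] + 1]

    def predecessor(c):                                                # the inverse operation
        return string.ascii_uppercase[ordering[c] - 1]

    alphabet = (letters, ordering, successor, predecessor)

    print(successor("Q"))                                              # 'R': the letter after Q
    print(successor(successor(successor(successor("Q")))))             # four iterations: 'U'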

I still haven’t captured everything we know about the alphabet. I couldn’t do Excel spreadsheet columns with Z < AA < AB < AC < … < AZ < BA < BB < BC < … BZ < …. Nor could I take account of cryptographic rules where Z loops around: Z+1=A, and A < B < C < … X < Y < Z < A (← a cyclic, non-wellfounded ordering). I didn’t include the Alphabet Song or the pronunciation of the letters (have you been saying “zed” or “zee”?), nor did I include vowel/consonant classification, rhyming info, or an IPA-style breakdown of the phonemes each letter can make (and in English there are many phonemes per letter). But I did include a bunch of known information about the alphabet in the logical universe.

So here’s the point of this example. Even to express a simple concept that everyone knows — the alphabet — as well as what are normally implicit mappings and relationships — you have to explicitly include those facts in a tuple to be logically complete.

Mathematics is a totally different language than English. It’s more different from English than is Mandarin, Pormpuraaw, Tagalog, Aymara, Farsi, or Pirahã. That means you can think different thoughts once you learn mathematics. You can fathom what was unfathomable. Conceive what was inconceivable. See what was invisible. It also means that learning to “speak” this way sounds very strange.




Logic, like mathematics, is regarded by many designers with suspicion. Much of it is based on various superstitions about the kind of force logic has in telling us what to do.

First of all, the word “logic” has some currency among designers as a reference to a particularly unpleasing and functionally unprofitable kind of formalism. The so-called logic of Jacques François Blondel or Vignola, for instance, referred to rules according to which the elements of architectural style could be combined. As rules they may be logical. But this gives them no special force unless there is also a legitimate relation between the system of logic and the needs and forces we accept in the real world.

Again, the cold visual “logic” of the steel-skeleton office building seems horribly constrained, and if we take it seriously as an intimation of what logic is likely to do, it is certain to frighten us away from analytical methods. But no one shape can any more be a consequence of the use of logic than any other, and it is nonsense to blame rigid physical form on the rigidity of logic.




The eigenvectors of a matrix summarise what it does.

  1. Think about a large, not-sparse matrix. A lot of computations are implied in that block of numbers. Some of those computations might overlap each other—2 steps forward, 1 step back, 3 steps left, 4 steps right … that kind of thing, but in 400 dimensions. The eigenvectors aim at the end result of it all.
     
  2. The eigenvectors point in the same direction before & after a linear transformation is applied. (& they are the only vectors that do so) 

    For example, consider a shear of three-elevenths to the east per northward block, repeatedly applied to ℝ².

    In the above, eig₁ = the blue arrow = (1, 0), which is unchanged by the shear. (The red arrow is not an eigenvector because it shifted over.)

  3. The eigenvalues say how their eigenvectors scale during the transformation, and if they turn around.

    If λᵢ = 1.3 then |eigᵢ| grows by 30%. If λᵢ = −2 then eigᵢ doubles in length and points backwards. If λᵢ = 1 then |eigᵢ| stays the same. And so on. Above, λ₁ = 1 since eig₁ = the blue arrow = (1, 0) stayed the same length.

    (A small numerical check of this shear example follows the list.)
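A small numerical check (mine) of the shear example: the matrix [[1, 3/11], [0, 1]] really does leave (1, 0) alone, with both eigenvalues equal to 1.

    # Eigen-decomposition of the three-elevenths shear.
    import numpy as np

    shear = np.array([[1.0, 3.0 / 11.0],
                      [0.0, 1.0]])

    eigenvalues, eigenvectors = np.linalg.eig(shear)
    print(eigenvalues)                  # both equal to 1 for a shear

    blue = np.array([1.0, 0.0])
    red = np.array([0.0, 1.0])
    print(shear @ blue)                 # [1. 0.] -- unchanged: an eigenvector
    print(shear @ red)                  # [0.27.. 1.] -- shifted over: not an eigenvector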

For a long time I wrongly thought an eigenvector was, like, its own thing. But it’s not. Eigenvectors are a way of talking about a (linear) transform / operator. So eigenvectors are always the eigenvectors of some transform. Not their own thing.

Put another way: eigenvectors and eigenvalues are a short, universally comparable way of summarising a square matrix. Looking at just the eigenvalues (the spectrum) tells you more relevant detail about the matrix, faster, than trying to understand the entire block-of-numbers and how the parts of the block interrelate. Looking at the eigenvectors tells you where repeated applications of the transform will “leak” (if they leak at all).

To recap: eigenvectors keep their direction under the matrix transform; they simplify the matrix transform; and the λ’s tell you how much the |eig|’s change under the transform.

Now a payoff.

Dynamical Systems make sense now.

If repeated applications of a matrix = a dynamical system, then the eigenvalues explain the system’s long-term behaviour.


I.e., they tell you whether and how the system stabilises, or … doesn’t stabilise.
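A minimal sketch (mine) of that connection: a 2-by-2 matrix whose spectral radius is below 1, iterated as a toy dynamical system, decays toward zero from any starting state.

    # Iterating a matrix as a toy dynamical system; eigenvalues predict the long run.
    import numpy as np

    A = np.array([[0.9, 0.2],
                  [0.1, 0.5]])

    print(np.abs(np.linalg.eigvals(A)).max())   # spectral radius < 1 -> stabilises

    state = np.array([3.0, -2.0])
    for _ in range(200):
        state = A @ state                        # one time-step of the system
    print(state)                                 # approximately [0, 0]: the system decays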

Dynamical systems model interrelated systems like ecosystems, human relationships, or weather. They also unravel mutual causation.

What else can I do with eigenvectors?

Eigenvectors can help you understand:

  • helicopter stability
  • quantum particles (the Von Neumann formalism)
  • guided missiles
  • PageRank
  • the Fibonacci sequence
  • your Facebook friend network
  • eigenfaces
  • lots of academic crap
  • graph theory
  • mathematical models of love
  • electrical circuits
  • JPEG compression
  • Markov processes
  • operators & spectra
  • weather
  • fluid dynamics
  • systems of ODE’s … well, they’re just continuous-time dynamical systems
  • principal components analysis in statistics
  • for example, principal components (eigenvectors of the correlation matrix, often after a varimax rotation) were used to try to identify the dimensions of brand personality

Plus, maybe you will have a cool idea or see something in your life differently if you understand eigenvectors intuitively.