Posts tagged with vocabulary

Topology is appropriate for qualitative rather than quantitative properties, since it deals with closeness rather than distance.

It is also appropriate where distances exist, but are ill-motivated.

These approaches have already been used successfully for analyzing:

  • physiological properties in diabetes patients
  • neural firing patterns in the visual cortex of macaques
  • dense regions in ℝ⁹ of 3×3 pixel patches from natural [black-and-white] images
  • screening for CO₂-adsorbent materials

Michi Johanssons (@michiexile)

(Source: blog.mikael.johanssons.org)




going the long way

What does it mean when mathematicians talk about a bijection or homomorphism?

Imagine you want to get from X to X′ but you don’t know how. Then you find a "different way of looking at the same thing" using ƒ: map the stuff with ƒ over to another space Y, take your journey over there in the image of ƒ, and then come back with ƒ⁻¹.

The fact that a bijection can show you something in a new way that suddenly makes the answer to the question obvious is the basis of the jokes on www.theproofistrivial.com.




In a given category the homomorphisms Hom ∋ ƒ preserve all the interesting properties. Linear maps, for example, barely change anything (as long as det ≠ 0): it’s like your government suddenly adding another zero to the end of all currency denominations, just a rescaling. Because they preserve most interesting properties, a linear mapping to another domain can be inverted back, so anything you discover over in the new domain (the image of ƒ) can be used on the original problem.

All of these fancy-sounding maps are linear:

  • Fourier transform
  • Laplace transform
  • taking the derivative
  • Box-Müller

They sound fancy because, whilst they leave things technically equivalent, the result looks very different to people. So we get to use intuition or insight that only works in, say, the spectral domain, and still technically be working on the same original problem.


Pipe the problem somewhere else, look at it from another angle, solve it there, unpipe your answer back to the original viewpoint/space.
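To make that concrete: convolving two signals is a pain in the original domain but trivial in the spectral domain. Here is a minimal numpy sketch of the pipe → solve → unpipe pattern (the two signals are made up purely for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, -1.0, 2.0, 0.0])

# the direct route: convolve in the original space, O(n^2)
direct = np.convolve(x, y)

# the long way round: pipe to the Fourier domain, multiply pointwise, pipe back
n = len(x) + len(y) - 1                      # pad so circular convolution equals linear convolution
X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)  # f: transform both signals
spectral = np.fft.irfft(X * Y, n)            # f-inverse: bring the answer home

assert np.allclose(direct, spectral)
```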

 

For example: the Gaussian (normal) cumulative distribution function is strictly monotone, hence injective (one-to-one), hence invertible.


By contrast the Gaussian probability distribution function (the “default” way of looking at a “normal Bell Curve”) fails the horizontal line test, hence is many-to-one, hence cannot be totally inverted.


So in this case, integrating once ∫[pdf] = cdf made the function “mathematically nicer” without changing its interesting qualities or altering its inherent nature.
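If you want to see the contrast, here is a tiny sketch (assuming scipy is installed): the CDF round-trips through its inverse, while the PDF cannot, because two different inputs share the same density.

```python
from scipy.stats import norm

x = 1.7

# the cdf is strictly monotone, so it has a true inverse (the quantile function, ppf)
assert abs(norm.ppf(norm.cdf(x)) - x) < 1e-9

# the pdf fails the horizontal line test: -x and x land on the same density value,
# so no single-valued inverse exists
assert abs(norm.pdf(-x) - norm.pdf(x)) < 1e-15
```

That invertibility is also why inverse-transform sampling works: push uniform draws through norm.ppf and out come normal draws.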

 

Or here’s an example from calc 101: u-substitution. You’re essentially saying “Instead of solving this integral, how about I solve a different one which is exactly equivalent?” The ƒ in the X → Y story above is the u-substitution itself. The “main verb” is doing the integral. U-substituters avoid doing the hard integral, go the long way, and end up doing something much easier.

Problem: integrate ∫ (8x⁷ − 6x²)/(x⁸ − 2x³ + 13587) dx

Clever person: How about instead I integrate ∫ (1/u) du?

Question asker: Huh?

Clever person: They’re equivalent, you see? Watch! (applies basis isomorphism φ: x ↦ u, as well as the chain rule for d∘φ: dx ↦ du) (gets the easier integral) (does the easier integral) (laughs) (transforms it back, φ⁻¹: u ↦ x) (laughs again)

Question asker: Um. (thinks) Unbelievable. That worked. You must be some kind of clever person.
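A computer algebra system makes the same move. A small sympy sketch (sympy assumed to be available), using the integral from the dialogue above:

```python
import sympy as sp

x = sp.symbols('x')
hard = (8*x**7 - 6*x**2) / (x**8 - 2*x**3 + 13587)

# behind the scenes this is the substitution u = x**8 - 2*x**3 + 13587,
# du = (8x^7 - 6x^2) dx, which leaves the easy integral of 1/u
print(sp.integrate(hard, x))   # log(x**8 - 2*x**3 + 13587)
```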

 

Or in physics—like tensors and Schrödinger solving and stuff.
[animation: |3,2,1> + |3,1,-1> orbital]
Physicists look for substitutions that make the computation they have to do more tractable. Try solving the Schrödinger PDE for hydrogen’s single electron (the 1s orbital) in xyz coordinates (a square grid), then try solving it in spherical coordinates (longitude & latitude on expanding shells). Since the natural symmetry of the 1s orbital is spherical, changing basis to spherical coordinates makes life much easier.

[image: polar coordinates "at sea" versus rectangular coordinates "in the city"]
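The change of basis itself is purely mechanical. A minimal numpy sketch (with a made-up point) of describing the same point two ways and round-tripping back; it doesn’t solve any PDE, it just shows that the change of description is invertible:

```python
import numpy as np

x, y, z = 1.0, 1.0, 1.0                 # the point itself never changes

r     = np.sqrt(x**2 + y**2 + z**2)     # radius
theta = np.arccos(z / r)                # polar angle from the +z axis
phi   = np.arctan2(y, x)                # azimuth in the x-y plane

# going back: same point, original description
back = (r * np.sin(theta) * np.cos(phi),
        r * np.sin(theta) * np.sin(phi),
        r * np.cos(theta))
assert np.allclose(back, (x, y, z))
```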

 

Likewise, one of the goals of tensor analysis is not to be tied to any particular basis: so long as the basis doesn’t trip over itself (its vectors stay linearly independent), you should be free to switch between bases to get different jobs done. Terry Tao talks about something like this under the keyword “spending symmetry”: if you use up your basis isomorphism, you need to give it back before you can use it again.
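Here is a tiny numpy sketch of "spending" a basis change and getting it back; the matrices are made up. The same linear map A, rewritten in a new basis B, is B⁻¹AB, and basis-free properties such as the eigenvalues never notice the switch:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # some linear map
B = np.array([[1.0, 1.0],
              [1.0, 2.0]])          # columns are the new basis; det(B) != 0, so it doesn't trip over itself

A_new = np.linalg.inv(B) @ A @ B    # the same map, described in the new basis

# eigenvalues are basis-free, so they survive the round trip untouched
assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(A_new)))
```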

"Going the long way" can be easier than trying to solve a problem directly.




When I was ten years old I used to keep a notebook of difficult words I had come across. The present I most wanted for Christmas was: The Dictionary of Difficult Words. And I still love exploring dark corners of the English language. A year or two ago I picked up a drill book and found there were quite a lot of “college level words” I didn’t know.

Some of these words I had an inkling of, or really knew outright (vixen = a female fox). But because I’m obsessive like this, I wrote down any word that I was strictly less than 100% certain about. Could I forget that ursine means bear-like? Under the stress of a test, perhaps yes.

Most interesting were words that I thought I knew, but didn’t. For example ponderous doesn’t mean something you think hard about: it means heavy. Factoids aren’t factitos and enormity ≠ size. Rush means to beat back, not to hurry, and natty is almost opposite to tatty. Whoa-za.

Here’s the list (may contain typos), sorted and uniqued with unix tools:

  • aberrant
  • abeyance
  • adage
  • adumbrate
  • advent
  • adventitious
  • advert (v.)
  • aerie (n.)
  • affable
  • agglomerate
  • agog
  • akimbo
  • alacrity
  • alimentary
  • allocate
  • alloy (v.)
  • allude
  • alluvial
  • aloft
  • amok
  • analgesic
  • angular
  • animadversion
  • animus
  • anneal
  • anodyne
  • antic
  • aphasia
  • aphorism
  • apiary
  • aplomb
  • apogee
  • apostasy
  • apostate
  • apothegm
  • apposite
  • apprise
  • appurtenance
  • arabesque
  • arcade
  • arroyo
  • aseptic
  • asperity
  • aspersion
  • aspersive
  • astringent
  • atavism
  • aureole
  • aver
  • avocation
  • avuncular
  • badinage
  • balk
  • beatify
  • bedizen





The growing popularity of the #negrospotting hashtag on twitter today has prompted some tweeters to utter improper English. Because I believe that in order for a productive discussion to be had, we all need to agree on the meanings of the words we use, I’m reaching out to correct the ungrammatical usage of “racist” by some twitterati, when the more proper “lacist” should be preferred.

Examples:

It’s actually called “lacist”, guys and girls and others.

 

I looked this up in my super-thick Volume I of the Oxford English Dictionary. I don’t have the online subscription or I would link but here is the official grammatical usage of the word “lacism”:

lacism (n.):

  1. The act of pointing out that someone belongs to a race, esp. if that race is not white, caucasian, or caucasoid.
  2. Counting, naming, photographing, or otherwise cataloguing members of a race, esp. if they are geospatially proximate to each other (e.g., in a neighbourhood; at a convention).
  3. Thinking or saying that anyone has a race when, at the same time, they are in a location.
  4. Drawing parallels between someone’s location and race.
  5. Saying, writing, or believing that a physically visible, proximate, and colocated group of humans does not include many members of a (usu. non-white) race.
  6. (Less commonly) Any mention of race.

Examples of usage:

Scenario 1

  • 1: I live in a neighbourhood that’s mostly black.
  • 2: Don’t say that!
  • 1: Why not? I have like maybe three non-black neighbours.
  • 2: Because, that’s lacist!

Scenario 2

  • 2: Do you go to church?
  • 1: Yep, every Sunday.
  • 2: Where at?
  • 1: First Purchase.
  • 2: Oh … is that in town?  I’ve never heard of it.
  • 1: It’s actually only six blocks down from your house.
  • 2: Which direction?
  • 1: East.
  • 2: Oh … I don’t really venture over that way very much.
  • 1: Yeah, well … yeah, I go there. It’s a mostly black church.
  • 2: Omigod, don’t say that!!!
  • 1: What do you mean? Why not?
  • 2: Because, that’s lacist!

Scenario 3

  • 1: I am a black mathematician.
  • 2: Ewww!
  • 1: What?
  • 2: You just brought up race! That’s lacist!

There are other examples in the OED, which is the official source of everything grammatical and the most bestest source of information about the English language. I won’t type the full etymology or historical occurrences but the first known usage of the word lacist in English was at the Battle of Hastings, when one soldier said to another:

  • 1: I think I just slew one of our own!
  • 2: (shouting over the din of battle) Wot?!
  • 1: Look at this man at the end of my spear! He looks like a Norman!
  • 2: Shhh!! Don’t use that word!!!!
  • 1: What do you mean? (turns to parry a blow, ripostes into the opponent’s midsection, jiggles the slumping body off of his blade, then swivels his head back toward #2)
  • 2: That’s lacist!

So it’s clear that the word has a noble and storied history. It’s believed to derive from the proto-Berber word for “candelabra”.

Also, obviously:

lacist (adj.):

  1. Something that exhibits lacism.
  2. A person who engages in lacism.




Real numbers are imaginary, and imaginary numbers are real.


[I]maginary numbers describe a physical state of something, so as much as a number can exist, these do. But … real numbers, [being ideal], are imaginary.

David Manheim

(I changed some parts that I don’t agree with but the phrasing and initiative are his.)

The “rational” numbers are ratios and the “counting” numbers are, um, what you get when you count. But “real” and “imaginary” numbers have nothing to do with reality or imagination (each is both real and ideal in the same sense).

 

How about we start referring to them this way?

  • ℝ = the complete numbers. ℝ is the Cauchy-completion of the rationals ℚ, meaning that ℝ fills in enough options so that any sequential pattern can dance wherever it wants and never needs to step its shoe on an element outside the system in order to fulfill its pattern.
  • Any field with √−1 adjoined becomes "twisting numbers". The name comes from the “twisting” feeling one gets when multiplying numbers in ℂ. For example 3exp{i 10°} • 5exp{i 20°} = 15exp{i 30°}: the factors spiral outwards as they multiply. Keep multiplying numbers off the real line and they keep twisting. Tristan Needham coined the word “amplitwist” for this in ℂ. (See the sketch just after this list.)
  • ℂ = the complete, twisting numbers. Since ℂ=ℝ adjoin √−1.
  • "Complete spiral numbers" sounds nice as well.

Just to give a few examples of other acceptable numbers systems:

  • ℚ adjoin √2
  • the algebraics
  • ℚ adjoin √[a+√[b+c]]
  • sets
  • DAGs
  • square matrices … with many kinds of stuff inside
  • special matrix families
  • certain polynomials (sequences) … taking many kinds of things (not just “regular numbers”) as the inputs
  • clock numbers (modulo numbers)
  • Archimedean fields and non-Archimedean fields
  • functions themselves … and the number of things that functions can represent boggles the mind. Especially when the range can be different than the domain. (Declarative sentences can have a codomain of truth value. Time series have a domain of an interval. Rotations of an object map the object to itself in a space. And more….)
  • And many, many more! Imagination is the limiting reagent here.





The word probability didn’t take on a likelihood-related meaning until maybe the 18th century. The original meaning was synonymous with “worthy of approval”.

Also interesting: Ian Hacking suggests that the creation of probability & chance concepts was intimately tied to equiprobable outcomes, like rolling fair dice. Since fair dice are harder to make than unfair dice, the games through most of history were played with raw dice, such as animal bones.

That jibes with my own experience of the linguistics of randomness. It’s easier to talk about “completely random” things — draws from the uniform distribution — than to call other probability distributions random. Unpredictable-outcomes-with-tendencies-or-bias (the other random variables) somehow don’t feel quite “random”.

(Then again, people use the word “random” to mean “crazy” or “arbitrary” as well as “uniformly random”.)

Similarly, I think, our shared intuition of the “expectation” concept is intimately involved with gambling and fairness notions. So maybe it’s no surprise that the St Petersburg paradox, early mathematical correspondences on probability, and L. J. Savage’s Dutch books philosophical formulation of probability rely on rational bettor gedankenexperiments.

That games of chance provide the Bayesian foundation for quantum communication is creepy. Or maybe the quantum communication theories provide a mirror at our own ways of thinking (as social animals) as we try to describe physical phenomena.

