Posts tagged with mathematics

Big, long cycle = trend.










(NB: Actually a weighted sum. But if you just normalise it (divide by the overall total) you’ll get a weighted average.)

  • The Economist's Which MBA? website scores MBA programmes on:

    faculty quality, student quality, student diversity, percentage who found jobs through the careers service, student assessment of career service, percentage in work three months after graduation, increase in salary, potential to network, internationalism of alumni, student rating of alumni effectiveness, and a few other metrics

    — and lets you adjust how important each of these factors is to you, determining your ranking of MBA programmes (using their data and methodology),
    rather than pretending there’s a universal or objective weighting of the factors (as the US News & World Report ranking of US undergrad schools does).
  • My friend made a spreadsheet of all the factors that determined what city she wants to move to.
    She scored each city on various factors, then assigned each of those factors an importance, multiplied and added, and got a total score for each city. (I don’t think the result is meaningful, because I don’t think the space is linear. But the exercise itself was fun and gave her a reason to do the research.)
  • In my car radio I have knobs for “treble” and “bass”, which weight particular functional forms more heavily than others.
  • When you do a Gaussian Blur in Photoshop (a Gaussian-kernel smooth in 2D), or smooth a time series against a Gaussian kernel, you’re (basically) covectoring against a Normal curve. In other words you weight the neighbours with heights of 2^(−distance²) = 1/2, 1/16, 1/512, …. (A sketch of this appears just after this list.)
    (I actually think of the Gaussian now as an optimal smoother, primarily, instead of as Bell Curve religion. But that’s a story for another time.)
  • The standard “regression beta”—the ordinary-least-squares minimisation problem—is to adjust a covector—the tilts of the various data columns = properties you’ve observed and quantified (plus a column of ones)—to match a straight-line fit up against whatever you’ve chosen as y. (A least-squares sketch appears below.)
    Linear mappings -- notice they’re ALL straight lines through the origin!
  • An artist in a coffee shop once told me he had found some great numerical parameters for the particular visual (like a Winamp style one) he was creating. He was clearly thinking about the parameter space as such, but the maximisation procedure he was following was probably not a mechanical one.
  • If polynomials are sequences of constants (like decimal numbers, except that instead of being limited to a largest digit of 9 in the hundreds place, the “digit” sitting on x² can be positive, negative, a fraction, whatever), then the constants you line up — whether they have some well-known name or pattern like a combinatorial sequence, Sheffer sequence, Schur polynomial, Taylor series, or have no name — are the covector. (This overstretches my simplification that covectors are averages. Here they really need to be sums.)
  • A client wanted customers to be able to browse his wares more easily in his online store. This boils down to bubbling up to the top what they want to see and sorting down what they don’t want to see. One idea he had was to give the customer a number of “sliders” and let them choose which aspects were important to them. So instead of sorting first by price, then sorting within that sort alphabetically, you would catalogue various properties of the stuff in your ecommerce storefront, multiply those by a fixed number chosen by the customer, add those subscores together to get a total score, and then sort on that total score. That way the list can be mixed. (The customer wants to penalise high prices and non-red dresses, but doesn’t want to see only $2 purse accessories that somehow got parsed by the computer as “dress”.)
    Another way to say this is he wanted to let customers define their own “scoring metric” and sort results based on that.
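
Here is the sketch promised in the Gaussian-blur bullet: a minimal, hypothetical example (numpy assumed; the helper names are made up) that weights each point’s neighbours by 2^(−distance²) and normalises, so the covector acts as a weighted average.

    import numpy as np

    def gaussian_covector(radius=3):
        """Weights 2**(-d**2) for offsets d = -radius..radius, normalised to sum to 1."""
        d = np.arange(-radius, radius + 1)
        w = 2.0 ** (-(d.astype(float) ** 2))   # ..., 1/512, 1/16, 1/2, 1, 1/2, 1/16, 1/512, ...
        return w / w.sum()                     # normalise: weighted sum -> weighted average

    def smooth(series, radius=3):
        """Apply the covector at every position: a moving weighted average of the neighbours."""
        return np.convolve(series, gaussian_covector(radius), mode="same")

    noisy = np.sin(np.linspace(0, 6, 200)) + 0.3 * np.random.randn(200)
    smoothed = smooth(noisy)   # each output point = the covector applied to a window of the series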

All of these are covectors.
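
And the least-squares sketch for the regression-beta bullet, again hypothetical: numpy’s lstsq does the squares-minimisation, the data are invented, and the fitted β is itself a covector you can apply to any row of observed properties.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                  # columns = properties you observed and quantified
    X1 = np.column_stack([np.ones(100), X])        # plus a column of ones (the intercept)
    y = 2 + X @ np.array([1.0, -0.5, 3.0]) + 0.1 * rng.normal(size=100)

    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)  # adjust the covector to best match y (least squares)
    y_hat = X1 @ beta                              # applying the covector row by row gives the fitted line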

In order to not get confused about the meaning of “parameter" versus "variable" — let me just use the concrete examples above. The weighting scheme on the MBA program is the covector and the observed  properties of each MBA program are the vector. Multiply the vector for a particular school and the covector (weighting scheme) you’ve chosen, and you get “your score” (a single number). Do this for each school and you can then sort the results to get “your ranking”.

If you changed the weighting scheme, you change the covector, i.e. you change the parameters. This is “moving in the dual space” and it outputs a different “your ranking”.
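
A minimal sketch of that multiply-and-sort, with made-up schools, made-up properties, and made-up weights:

    import numpy as np

    # Rows = schools, columns = observed properties (faculty quality, salary increase, networking, ...).
    schools = ["School A", "School B", "School C"]
    data = np.array([[9.0, 2.0, 3.0],
                     [4.0, 8.0, 6.0],
                     [6.0, 5.0, 9.0]])

    def ranking(weights):
        scores = data @ weights                          # vector · covector = "your score" per school
        return sorted(zip(schools, scores), key=lambda pair: -pair[1])

    print(ranking(np.array([0.7, 0.2, 0.1])))   # weight the first factor heavily: A, C, B
    print(ranking(np.array([0.1, 0.2, 0.7])))   # move in the dual space: C, B, A

Same data, different covector, different ranking.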

So the next time someone says to you "Canonically identify a vector space with its dual via g↦∫fg", that’s basically what they mean.

(By the way, this duality is also used in the reproducing kernel Hilbert space, a key part of machine learning.)





gradient descent on a 2-dimensional convex, quadratic cost function with condition number=100

  • adding momentum to the gradient speeds up convergence in these high-condition-number cases — still using gradient descent (which scales better than Newton-Raphson in high-D)
  • like adding momentum in an oscillating mechanical system that vibrates too much
  • heavy ball method (Polyak)

(Source: simons.berkeley.edu)
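
A minimal sketch of that comparison, assuming the 2-D convex quadratic cost ½·xᵀAx with condition number 100; the step sizes below are the textbook choices for this quadratic, not values taken from the animation.

    import numpy as np

    A = np.diag([1.0, 100.0])      # 2-D convex quadratic cost 0.5 * x.T @ A @ x, condition number = 100
    x0 = np.array([1.0, 1.0])

    def grad(x):
        return A @ x

    def gradient_descent(steps=200, lr=2 / 101):      # classic step size 2/(L + mu) for this quadratic
        x = x0.copy()
        for _ in range(steps):
            x = x - lr * grad(x)
        return x

    def heavy_ball(steps=200, lr=4 / (10 + 1) ** 2,   # Polyak: lr = 4 / (sqrt(L) + sqrt(mu))**2
                   momentum=(9 / 11) ** 2):           # beta = ((sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu)))**2
        x = x_prev = x0.copy()
        for _ in range(steps):
            x, x_prev = x - lr * grad(x) + momentum * (x - x_prev), x
        return x

    # Distance from the minimiser (the origin) after the same number of steps:
    print(np.linalg.norm(gradient_descent()), np.linalg.norm(heavy_ball()))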






Over the last century-and-a-half, mathematicians found every possible multiplication table.

The largest irreducible multiplication-table, dubbed the Monster Group, contains

808017424794512875886459904961710757005754368000000000
=
2⁴⁶×3²⁰×5⁹×7⁶×11²×13³×17×19×23×29×31×41×47×59×71

interlocking pieces.

That’s like the number of atoms in Jupiter.

Richard Borcherds

(modified by me)

(Source: ams.org)
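
A quick check of that arithmetic (Python integers are exact, so the prime factorisation can be compared with the decimal digit for digit):

    from math import prod

    factors = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
               17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1, 47: 1, 59: 1, 71: 1}
    order = prod(p ** e for p, e in factors.items())

    assert order == 808017424794512875886459904961710757005754368000000000
    print(order)   # about 8 × 10⁵³ interlocking pieces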




Three observations get you to a continuous formula for the median:

  1. min {a,b,c} = − max {−a, −b, −c}
  2. second-from-top {a,b,c,d,e} = max ( {a,b,c,d,e} without max{a,b,c,d,e} )
  3. max {a,b,c} ~ log_t (t^a + t^b + t^c ),   t→∞

Putting these three together you can make a continuous formula approximating the median. Just subtract off the ends until you get to the middle.

It’s ugly. But, now you have a way to view the sort operation—which is discontinuous—in a “smooth” way, even if the smudging/blurring is totally fabricated. You can take derivatives, if that’s something you want to do. I see it as being like q-series: wriggling out from the strictures so the fixed becomes fluid.
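
A minimal sketch for three numbers, using the identity median{a,b,c} = a+b+c − max − min as the “subtract off the ends” step, observation 3 as a smooth max, and observation 1 to turn that into a smooth min (the names and the choice t = 1000 are just for illustration):

    import numpy as np

    def soft_max(xs, t=1e3):
        # Observation 3: max{a,b,c} ~ log_t(t^a + t^b + t^c) as t grows.
        # Computed in shifted form so t**x doesn't overflow.
        xs = np.asarray(xs, dtype=float)
        lt, m = np.log(t), np.max(xs)
        return m + np.log(np.sum(np.exp((xs - m) * lt))) / lt

    def soft_min(xs, t=1e3):
        # Observation 1: min{a,b,c} = -max{-a, -b, -c}.
        return -soft_max(-np.asarray(xs, dtype=float), t)

    def soft_median3(a, b, c, t=1e3):
        # For three numbers, median = a + b + c - max - min ("subtract off the ends"),
        # so swapping in the smooth max and min gives a continuous, differentiable median.
        xs = [a, b, c]
        return a + b + c - soft_max(xs, t) - soft_min(xs, t)

    print(soft_median3(3.0, -1.0, 7.0))   # ≈ 3.0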




√((x²−1)(x²−k²)),      x, k ∈ ℂ

(actually just going over the unit circle, not all of ℂ)

edit: hey, are these showing up as moving GIFs for you?


(Source: math.berkeley.edu)













  • solid — the category FinSet, a sack of wheat, a bag of marbles; atoms; axiom of choice; individuation. The Urelemente or wheat-kernels are interchangeable although they’re technically distinct. Yet I can pick out just one and it has a mass.
  • liquid — continuity; probability mass; Lewis’ gunky line; Geoff Hellman; the pre-modern, “continuous” idea of water; Urs Schreiber; Yoshihiro Maruyama; John L Bell
  • gas — Lebesgue measure theory; sizing Wiener processes or other things in other “smooth” categories; here I mean again the pre-atomic vision of gas: in some sense it has constant mass, but it might be so de-pressurised that there’s not much in some sub-chamber, and the mass might even be so dispersed that not only can you not pick out atoms and expect them to have a size (so each point of probability density has “zero” chance of happening), but you might need a “significant pocket” of gas before you get the volume—and unlike liquid, the gas’ volume might confuse you without some “pressure”-like concept “squeezing” the stuff to constrain the notion of volume.




(x²−y²−1) · (x²−z²−1) · (y²−z²−1)   =   0

(Source: imaginary.org)










Double integrals ∫∫ƒ(x)dA are introduced as a “little teacher’s lie” in calculus. The “real story” requires “geometric algebra”, or “the logic of length-shape-volume relationships”. Keywords:

  • multilinear algebra
  • Grassmann algebra / Grassmannian
  • exterior calculus
  • Élie Cartan’s differential-forms approach to tensors

These equivalence-classes of blobs explain how

  • volumes (ahem—oriented volumes!)
  • areas (ahem—oriented areas!)
  • arrows (vectors)
  • numbers (scalars)

"should" interface with each other. That is, Clifford algebra or Grassman algebra or "exterior algebra" or "geometrical algebra" encodes how physical quantities with these dimensionalities do interface with each other.

(First the volumes are abstracted from their original context—then they can be “attached” to something else.)
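
A minimal illustration of those oriented areas and oriented volumes, using the determinant as the wedge of a full set of vectors (numpy assumed; this is only the top-grade case of the exterior algebra):

    import numpy as np

    # Oriented area of the parallelogram spanned by two 2-D vectors (u ∧ v):
    u, v = np.array([2.0, 0.0]), np.array([1.0, 3.0])
    print(np.linalg.det(np.column_stack([u, v])))   # +6.0
    print(np.linalg.det(np.column_stack([v, u])))   # -6.0: same blob, opposite orientation

    # Oriented volume of the parallelepiped spanned by three 3-D vectors (a ∧ b ∧ c):
    a, b, c = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([1.0, 1.0, 1.0])
    print(np.linalg.det(np.column_stack([a, b, c])))  # +1.0; swap any two vectors and the sign flips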

 

EDIT: user mrfractal points out that Clifford algebras can only have dimensions of 2, 4, 8, 16, … https://en.wikipedia.org/wiki/Clifford_algebra#Basis_and_dimension Yes, that’s right. This post is not totally correct. I let it fly out of the queue without editing it and it may contain other inaccuracies. I was trying to throw out a bunch of relevant keywords that go along with these motivating pictures, and relate it to equivalence-classing, one of my favourite themes within this blog. The text here is disjointed, unedited, and perhaps wrong in other ways. Mostly I just wanted to share the pictures; I’ll try to fix up the text some other time. Grazie.

(Source: arxiv.org)