Posts tagged with New York
Besides automorphisms, there’s another interesting kind of bijection. I’ll try to give you the feeling of bijecting between different domains (a kind of analogy) without requiring much prior knowledge; that’s part of getting the intuitive feel of mathematics with less work.
Like I said yesterday, a bijection is an invertible total mapping. It ≥ covers ↓ the target and ≤ injects ↑ one-to-one into the target. This is thinking of spaces as wholes—deductive thinking—rather than example-by-example thinking. (There’s a joke about an engineer and a mathematician, friends, who go to a talk about 47-dimensional geometry. After the talk the engineer tells the mathematician that he found it hard to visualise 47 dimensions, and asks how she did it. The mathematician replies “Oh, it’s easy. I simply considered the problem in arbitrary N dimensions and then set N = 47!” I used to be frustrated by this way of thinking, but after X years it finally makes sense and is better for some things.)
So, graphically, a bijection is surjective / covering / ≥ / ↓ and injective / one-to-one (not one-to-many) / ≤ / ↑.
This amounts to a mathematical way of saying two things are “the same”, when of course there are a lot of ways in which that could be meant. The equation x = x is the least interesting one, so “sameness” has to be broader than “literally the same”. The same like how? Bijection as a concept opens the door to ≥1 kinds of comparison.
That was a definition. Now on to the example which should give you the feeling of bijecting across domains and the feeling of payoff after you come up with an unintuitive bijection.
Let’s talk about an “ideal city” where the streets make a perfectly rectangular lattice. I’m standing at 53rd St & 140th Ave and I want to walk/bike/cab to 60th St & 147th Ave.
How many short ways can I take to get there?
The first abstraction I would make from real life to a drawing is to centre the data, a common theme in statistics; mathematically it’s like translating the origin. I can then ignore everything except the 7×7 block between me and my destination to the northeast.
(By the way, by “short” paths I mean not circling around any more than necessary. Obviously I could take infinitely more and more circuitous routes to the point of circling the Earth 10 times before I get there. But I’m trying not to go out of my way here.)
Now the problem looks smaller. Just go from bottom left corner to top right corner.
I drew one shortest path in red and two others in black. To me it would be boring to go north, north, north, north, north, north, north, east, east, east, east, east, east, east. But if I want to count all the ways of making snakey red-like paths then I should bracket the possibilities by those two black ones.
When I try to draw or mentally imagine all the snakey paths, I lose track—looking for patterns (like permute, then anti-inner-permute, but also pro-inner-anti-inner-inner-permute…these are words I make up to myself) that I probably could see if I understood the fundamental theorem of combinatorics, but I’ve never been able to fully see the additive pattern.
But, I know a shortcut. This is where the bijection comes in.
Every one of these paths is isomorphic to a rearrangement of the letters NNNNNNNEEEEEEE.
Every time I “flip” one of the corners in the picture—which is how I was creating new snakes in between the black brackets—that’s just like interchanging an N and an E. Of course! It’s so obvious in hindsight.
And now here’s the payoff. Rearrangements of strings of letters like AAABBBCCCCD are already a solved problem.
I explained how to count combinatorial rearrangements of letters here. It’s 1026 words long.
The way to get the following formula is to (1) derive a trick for over-counting, (2) over-count, and then (3) quotient using the same trick.
- since the rearrangements of AAAAAAABBBBBBB are isomorphic to the rearrangements of NNNNNNNEEEEEEE,
- and since the rearrangements of NNNNNNNEEEEEEE are isomorphic to the short paths I could take through the city to my destination,
the correct answer to my original question—how many short ways to go 7 blocks east and 7 blocks north—is 14! / (7! · 7!).
I asked the Berkeley Calculator the answer to that one and it told me 3432. Kind of glad I didn’t count those out by hand.
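If you’d rather check the arithmetic yourself, here’s a quick sketch in Python (my own addition, not part of the original post; `math.comb` plays the role of the Berkeley Calculator):

```python
from math import comb, factorial

# A short path is a choice of which 7 of the 14 steps are Norths
# (the remaining 7 must be Easts).
paths = comb(14, 7)
print(paths)  # 3432

# Same count via the over-count-then-quotient trick: 14! orderings of the
# letters, divided by 7! indistinguishable Ns and 7! indistinguishable Es.
assert paths == factorial(14) // (factorial(7) * factorial(7))
```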
So, the payoff came from (1) knowing some other solved problem and (2) bijecting my problem onto the one with the known solution method.
But does it work in New York? Even though NYC is kind of like a square lattice, there may be a huge building making some of the blocks not accessible.
Or maybe ∃ a “Central Park” where you can cut a diagonal path.
And things like “Broadway” that cut diagonally across the city.
And some dead ends in certain parts of the city. And places like the Flatiron Building where roads meet in a sharp V.
So my clever discovery doesn’t quite work in a non-square world.
However now I maybe also gave you a microcosm of mathematical modelling. The barriers and the shortcuts could be added to a computer program that counts the paths. We could keep adjusting things and adding more bits of reality and make the computer calculate the difference. But the “basic insight”, I feel, is lacking there. After all I could have written a computer program to permute the letters
NNNNNNNEEEEEEE or even just literally model the paths in the first place. (At least with such a small problem.) But then there would be no Eureka moment. I think it’s in this sort of way that mathematicians mean their world is more beautiful than the real one.
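Here’s what such a program might look like: a minimal sketch (my own, assuming “barriers” just means corners you can’t stand on). It counts the short paths by dynamic programming, then counts them again by literally choosing which of the 14 steps are Ns:

```python
from itertools import combinations

def count_paths(n, e, blocked=frozenset()):
    """Number of short paths from (0, 0) to (e, n), stepping only
    north (+y) or east (+x) and never standing on a blocked corner."""
    ways = {}
    for x in range(e + 1):
        for y in range(n + 1):
            if (x, y) in blocked:
                ways[(x, y)] = 0
            elif x == 0 and y == 0:
                ways[(x, y)] = 1
            else:
                ways[(x, y)] = ways.get((x - 1, y), 0) + ways.get((x, y - 1), 0)
    return ways[(e, n)]

print(count_paths(7, 7))  # 3432, matching the letter-rearrangement count

# The "permute the letters" version: a path is a choice of which 7 of the
# 14 steps are Ns, so just enumerate those choices.
print(sum(1 for _ in combinations(range(14), 7)))  # 3432

# A huge building at one corner knocks the count down:
print(count_paths(7, 7, blocked=frozenset({(3, 3)})))
```

The computer happily gives a number for any arrangement of barriers, but, as the post says, it hands you an answer without the Eureka.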
As mathematical modellers we inherit deep basic insights—like the Poisson process and the Gaussian as two limits of a binary branching process—and try to construct a convoluted sculpture using those profound insights as the basis. For example maybe I could stitch a bunch of square-lattice pieces together: say, two square lattices representing different boroughs, connected only by a single congested bridge. Since I solved the square lattice analytically, the computational extensions will be less mysterious to me if I build from the understood pieces. Unless I’m smart enough to figure out how to count triangles & multi-block industrial buildings & shortcuts & construction roadblocks, and to find an equally excellent insight into how the various discrepancies change the number at the end of my computation (rather than just reading it off and having an answer but no wisdom), I’m left using the excellent insight as a starting point and doing some dirty computations from there: no wisdom at all, no map, just scrapping in the wilderness, a lot of firepower and no idea how to use it. I might as well be spraying a tree with a shotgun instead of cutting the V with an axe and letting its weight do the work.
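The two-boroughs-and-a-bridge picture is actually one place the analytic insight survives intact (again, my own sketch of the hypothetical, not from the post): if every route must pass through the single bridge corner, each short path splits uniquely into a path to the bridge and a path from it, so the counts just multiply.

```python
from math import comb

# Two 7x7 square lattices joined at exactly one corner (the bridge).
# Every short path = (path across borough 1) then (path across borough 2),
# so the totals multiply instead of needing a fresh computation.
one_borough = comb(14, 7)         # 3432
print(one_borough * one_borough)  # 3432**2 = 11778624
```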
I remember as a child, walking in New York City. Of course I had no idea where we were going or why we were doing anything, I just knew I had to walk somewhere. I was tossing my stuffed animal in the air as we walked, for fun. It was a stuffed stegosaurus my mum had sewed for me. He was awesome. We crossed a street. I threw Steggy the stegosaurus up in the air. He went straight up instead of up-and-in-front-of-me. My mum was holding my hand, making me cross straight to the other kerb. We went forward and he went straight up and straight down onto the pavement behind me. I tried to turn around and pick him up. I saw a car coming and wanted to run back and pick Steggy up from the ground. I was afraid he would be run over.
In discussions with @gappy3000 and @groditi over the past few weeks I made an argument as to why economic parity should lead to faster economic growth than great inequality. As I googled and found blogular discussions about this recent paper, I couldn’t find anyone making the same points I did. So I’ll state them in case I’m the only person who thought of this (though how could that be?):
- From a supply-side perspective, a more-homogeneous population is easier to produce for.
- In planning a new business, less variation in customers makes it easier to answer the necessary question: “Who is the customer going to be?”
- For “blue chip” businesses, greater efficiencies of scale are possible in a broad market.
- Taking the example of housing (which constitutes the lion’s share of most people’s lifetime spending): it’s easier to build the same house 10⁹ times than to build 10⁶ different kinds of houses for 10⁹ different people.
- Positional goods should account for a larger share of the economy in more-unequal economies than in economies closer to income parity. Take for example rents in Manhattan.
- As William Vickrey argues, micro ≠ macro: whilst ↓ 10% of your income would make you 10% worse off, if everyone in Manhattan ↓ 10% of their income it might mostly ↓ the price of housing, since an auction is a better model for that market than a posted-offer.
- Yet the option for businesspeople to cash in on easy money by producing exclusively for the rich must crowd out other uses of capital. Why work hard to insulate 10⁹ homes owned by people without much dosh to hand over, when there’s so much more of a market base making stuff for the people who have plenty?
- ↓ economic profits to providing positional goods would decrease the producer surplus without a deadweight loss (since the trades would still take place but at a ↓ price), but the £10,000,000 that went to counter-bidding against other top-wealth people would instead be owned by lower-wealth people and go toward purchasing goods they previously couldn’t afford, incenting ↑ production levels.
- As an add-on: this wouldn’t show up in GDP, but utility per pound spent should ↑ if the incomes are more homogeneous because the people who most value things would get them rather than people who just have more to spend.
One final thought. Which do you think is more possible: to ↑ the output of someone making £2/day, or to ↑ the output of someone making £500/day? To me the person making £500/day is likelier to be near peak output than the person making £2/day. The person making £2/day is not efficiently plugged into a “system” or hooked up to capital such that her abilities are amplified the way the £500/day person’s are.
So according to me, economic growth should be easier to obtain by ↑ the productivity of the least productive individuals than by ↑ the productivity of the most productive individuals. This is consistent with the observation that growth rates are ↑ in developing countries (where ∃ more poor people) than in rich countries:
The counter-argument is about incentives. Lesser rewards make it less attractive to work all the way to the top, if indeed it’s work that takes you to the top.
- Pay is strongly related to negotiation power and less strongly to output.
- Motivation and utility are derived from rank as well as from absolute earnings/spending. To the extent this is true, if tax levies do not reorder wealth levels they leave those individuals’ utilities alone.
- Most well-paid people are replaceable. Convexities in pay are more common than convexities in talent.
- Elasticity of labour supply is very low at high income levels. These people either have a driven personality (easily bored if not working), a competitive personality (rank-based utility), or logarithmic utility in money (so linear reductions in money reduce utility concavely).
For these reasons and maybe a few more, when well-paid people threaten to work less if they are taxed more I either don’t believe them, or think that if they did quit then the world’s output would be very little affected.
I have a colleague whose wife appears to be remarkably materialistic.
He feels that he makes a good living so that she can enjoy fancy handbags and days at the spa; he wants kids but she isn’t convinced that they can afford them. (He makes [an] upper middle-class income; she works in the low-paying fashion industry.) Whenever he talks about her, he inevitably refers to “how women are” or “how New York women are” — his wife is not unusual in any way, she’s just like every woman. Also, did you know that women always blame their bad behavior on their hormones (PMS!) but men are not able to do so?
In case it’s not clear, this man is not some poor misunderstood statistical genius with a gift for generalization; he is simply a cretin.
I’ve been reading a lot of The Epicurean Dealmaker and Synthetic Assets blogs this weekend, as well as this article that says Wall Street’s compensation is never coming back (due to Dodd-Frank).
Two summary thoughts:
- Maybe the way to look at The Thing That Happened In 2008 And Its Aftermath is like this:
- Leverage creates credit. Credit is money. Leverage thus creates wealth.
- Imagine ∃ three counterparties each lending secured debt to each other (A lends to B, B lends to C, C lends to A). Each of them inflates its book value by marking levered assets at fair market value. They could (under this imagined theory) create more and more credit ad infinitum, as long as B is blind to A’s worth, C is blind to B’s worth, and A is blind to C’s worth.
- This could be called “manufactured” or falsidical credit.
- Part of the naughties’ expansion of credit facilitated the extension of credit to American consumers—turning their promise to “getcha back later” for that plasma TV into an asset—and bets on those assets could also be levered, so even more money was created out of belief (credere).
- (Bankers benefited from this because they were minting money.)
- Perhaps this is the way to see what’s happened: now all of the credit (money) that was falsidically minted is gone—gone from the bankers’ paychecks, gone from the dentists’ retirement accounts, not-gone from the hands of those who bought durable goods, not-gone from the memories of those who bought expensive meals, not-gone from the cash accounts of those who liquidated their stock holdings (for example maybe some people who converted stocks into bonds upon retiring in 2007), gone from the restaurateurs’ sales numbers, gone from the servers’ tip plates. The plasma TVs are still inside the McMansions and the houses are still there, but credit—belief—has dried up.
- All of the New Yorkers who sustained themselves by serving über-high-class clientèle — working in a high-class bar, escorting, personal assistants, postmodern chefs — are going to feel the pinch, if they haven’t already. It should become harder to pay rent in NYC working a food or hospitality job.
Over some period of time in the future that will have to mean people either moving even further outside Manhattan or moving back to where they came from. And that should eventually put downward price pressure on every aspect of NYC—rent, food, availability of fine cuisine, and so on.
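The circular-lending story above can be caricatured numerically. The following toy model is entirely my own illustration (not from the post, and not how any real balance sheet works): each counterparty re-lends a fraction r of the credit it receives, so total credit extended converges to initial / (1 - r), far outstripping the initial equity.

```python
def total_credit(initial, r, rounds):
    """Total credit extended after passing tranches around the ring,
    re-lending a fraction r at each hop."""
    extended = 0.0
    tranche = float(initial)
    for _ in range(rounds):
        extended += tranche
        tranche *= r  # only a fraction of each tranche is re-lent onward
    return extended

# With 100 of initial equity and r = 0.9, credit approaches 100 / 0.1 = 1000:
print(round(total_credit(100, 0.9, 200), 6))
```

When belief evaporates (r drops), the multiplier collapses the same way it grew, which is one crude way to read the post’s “credit has dried up”.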
Burning Spear by Sonic Youth, performed by Tune-Yards
I don’t like how internet culture makes fun of ugly people with mullets who shop at Wal-Mart and glorifies stylish, genius or otherwise superlative New Yorkers and Californians on Instagr.am (for iPhone owners only).
Ditto regarding the reification of wealthy angel investors saving the world through entrepreneurship and the degradation of the limp-willed failures who serve them food wearing a front-brimmed cap & collared shirt with logo.
Not that this matters to everyone, but I know which group of people Christ said would inherit the Earth.
Rednecks by Randy Newman