Posts tagged with elasticity

andrewmaclean says the “Spotify Model”—which we could also call the “newspaper model,” where “consumers” get something they want for free but are really the product which media outlets are selling to advertisers—is “inevitable”.


Why would this be a more logical way for the world to run than just paying for movies, music, television, journalism, comics, and T-shirts? I’m going to spitball together a slapdash explanation and ask if you can improve on it. Here’s my model:

  1. Three car dealerships each have a big marketing budget. (Why? See 4.)
  2. The only newspaper in town, charging $2/paper, was reaching 10% of the town—that’s the level of demand from readers willing to just buy the paper at that price.
  3. 999/1000 newspaper readers are not interested in buying a car. But the 1/1000 who has been thinking about buying one hasn’t decided which dealership to go to.
  4. Each car dealership stands to gain $20,000 from making the sale—and furthermore they’re in competition with each other. If the car-purchaser can be swayed to my dealership instead of yours, once they walk on the lot we have a 90% chance of selling them a car that day.
  5. So it’s worth spending quite a lot of money on ads to win that selling opportunity. At some level the monetary value of influencing 1-in-1000 customers to be more likely to walk onto my lot instead of yours, outweighs the revenue the newspaper was making from $2/pop reader payments.
  6. But why not take money from both sides? Surely two revenue streams (advertisers + readers) beat one? Not if, by selling the newspaper for $0 or even a negative price, you can double, triple, dectuple the circulation. If you can stuff the ads down people’s throats by slashing the price—or, equivalently, by finding people and putting the paper in their hands—then you can double, triple, dectuple the advertising revenue (so long as the car dealerships are willing to keep paying for more exposure, even if it’s crappier exposure).
  7. So in this story it all comes down to the fact that people don’t want to pay a lot for a newspaper but they will pay a lot for cars. So much more, in fact, that subscription revenue is dwarfed by even 0.1% of the value of influencing the big-ticket purchase decision.
  8. In other words, it’s because the demand for big-ticket items is not just one or two orders of magnitude higher than the demand for comics, movies, television episodes, songs, albums, and so on—it’s many orders of magnitude higher. Enough orders of magnitude that it more than makes up for the low fraction of interested buyers and the fact that your ad can only influence the customer, not control them.
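To make step 7 concrete, here is the arithmetic of the story with placeholder numbers. The town size and the 10% sway rate are invented for illustration; the rest are the story’s own assumptions:

```python
# Back-of-envelope arithmetic for the story above. The town size (10,000)
# and the ad "sway" rate (10%) are hypothetical; the other figures come
# from steps 2-4 of the model.

town = 10_000                 # hypothetical town size
paid_share = 0.10             # 10% will pay $2/paper (step 2)
price = 2.00

subscription_revenue = town * paid_share * price   # per issue

# Free circulation: say it now reaches the whole town (step 6's "dectuple").
readers_free = town
in_market = readers_free / 1000        # 1 in 1000 is car-shopping (step 3)
gross_margin_per_sale = 20_000         # step 4
close_rate = 0.90                      # step 4
sway = 0.10                            # invented: ad sways 10% of in-market readers

ad_value = in_market * sway * close_rate * gross_margin_per_sale

print(f"subscription revenue per issue: ${subscription_revenue:,.0f}")
print(f"value of ad influence per issue: ${ad_value:,.0f}")
```

Even with the modest invented sway rate, the advertising side is worth several times the subscription side—and it scales with circulation, which a $0 price maximizes.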

That’s my half-baked story. Care to critique or improve on it?

Tyler Cowen says that super hackers will benefit from improving computer technology and reap the high wages of the post-recession economy.

I’m sorry to say I too have used the lazy robo-programmers metaphor. That was uncareful non-thinking on my part.

Trying to be more logical, what should we really conclude from the assumption that observed ↑ growth in “computer stuff” will continue apace?


If you buy a loaf of bread from the supermarket both you and the supermarket (its shareholders, its employees, its bread suppliers) are made to some degree better off. How do I know? Because the supermarket offered the bread voluntarily and you accepted the offer voluntarily. Both of you must have been made better off, a little or a lot—or else you two wouldn’t have done the deal.

Economists have long been in love with this simple argument. They have since the eighteenth century taken the argument a crucial and dramatic step further: that is, they have deduced something from it, namely, Free trade is neat.

If each deal between you & the supermarket, and the supermarket & Smith, and Smith & Jones, and so forth is betterment-producing (a little or a lot: we’re not talking quantities here), then (note the “then”: we’re talking deduction here) free trade between the entire body of French people and the entire body of English people is betterment-producing. Therefore (note the “therefore”) free trade between any two groups is neat.

The economist notes that if all trades are voluntary they all have some gain. So free trade in all its forms is neat. For example, a law restricting who can get into the pharmacy business is a bad idea, not neat at all, because free trade is good, so non-free trade is bad. Protection of French workers is bad, because free trade is good. And so forth, to literally thousands of policy conclusions.

Deirdre McCloskey, Secret Sins of Economics

A wonderful essay. I’ll just add what I think are some common answers to common objections:


We start with data (how was it collected?) and the hope that we can compare them. We also start with a question which is of the form:

  • how much tax increase is associated with how much tax avoidance/tax evasion/country fleeing by the top 1%?
  • how much traffic does our website lose (gain) if we slow down (speed up) the load time?
  • how many of their soldiers do we kill for every soldier we lose?
  • how much do gun deaths [suicide | gang violence | rampaging multihomicide] decrease with 10,000 guns taken out of the population?
  • how much more fuel do you need to fly your commercial jet 1,000 metres higher in the sky?
  • how much famine [to whom] results when the price of low-protein wheat rises by $1?
  • how much vegetarian eating results when the price of beef rises by $5? (and again, distributionally: does it change preferentially for people with a certain culture or personal history, such as having learned vegetarian meals before, or having grown up unable to afford meat?) How much does the price of beef rise when the price of feed-corn rises by $1?
  • how much extra effort at work will result in how much higher bonus?
  • how many more hours of training will result in how much faster marathon time (or in how much better heart health)?
  • how much does society lose when a scientist moves to the financial sector?
  • how much does having a modern financial system raise GDP growth? (here, because the X is branchy and multidimensional, we won’t be able to interpolate in Tufte’s preferred sense)
  • how many gigatonnes of carbon per year does it take to raise the global temperature by how much?
  • how much does $1000 million spent funding basic science research yield us in 30 years?
  • how much will this MBA raise my annual income?
  • how much more money does a comparable White make than a comparable Black? (or a comparable Man than a comparable Woman?)
  • how much does a reduction in child mortality decrease fecundity? (if it actually does)

  • how much can I influence your behaviour by priming you prior to this psychological experiment?
  • how much higher/lower do Boys score than Girls on some assessment? (the answer is usually “low |β|, with low p”—in other words, “not very different, but because of the high volume of data, whatever we find comes with high statistical strength”)

bearing in mind that this response-magnitude may differ under varying circumstances. (Raising morning-beauty-prep time from 1 minute to 10 minutes will do more than raising it from 110 minutes to 120. There may also be interaction terms: you need both a petroleum engineering degree and to live in one of {Naija, Indonesia, Alaska, Kazakhstan, Saudi Arabia, Oman, Qatar} in order to see the income bump. And many of these questions have a time-factor, like the MBA and the climate ones.)
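The beauty-prep point in miniature. The curve below is entirely made up (a logarithm chosen for its diminishing returns); the point is only that the same ten-minute increase buys far more at the low end than at the high end:

```python
import math

# Hypothetical diminishing-returns curve: looks(minutes) = 10 * log(1 + minutes).
# The functional form and numbers are invented for illustration.
def looks(minutes):
    return 10 * math.log1p(minutes)

gain_early = looks(10) - looks(1)      # going from 1 to 10 minutes of prep
gain_late = looks(120) - looks(110)    # going from 110 to 120 minutes

print(round(gain_early, 1), round(gain_late, 1))
```

So a single β quoted without its “around current levels” qualifier can be off by an order of magnitude.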

building up a nonlinear function from linear parts

As Trygve Haavelmo put it: using reason alone we can probably figure out which direction each of these responses will go. But knowing just that raising the tax rate will drive away some number of rich doesn’t push the debate very far—if all you lose is a handful of symbolic Eduardo Saverins who were already on the cusp of fleeing the country, then bringing up the Laffer curve is chaff. But if the number turns out to be large then it’s really worth discussing.

In less polite terms: until we quantify what we’re debating about, you can spit bollocks all day long. Once the debate is quantified then the discussion should become way more intelligent, less derailing to irrelevant theoretically-possible-issues-which-are-not-really-worth-wasting-time-on.

So we change one variable over which we have control and measure how the interesting thing responds. Once we measure both we come to the regression stage where we try to make a statement of the form “A 30% increase in effort will result in a 10% increase in wage” or “5 extra minutes getting ready in the morning will make me look 5% better”. (You should agree from those examples that the same number won’t necessarily hold throughout the whole range. Like if I spend three hours getting ready the returns will have diminished from the returns on the first five minutes.)


Avoiding causal language, we say that a 10% increase in (your salary) is associated with a 30% increase in (your effort).


The two numbers that jump out of any regression table output (e.g., lm in R) are p and β.

  • β is the estimated size of the linear effect
  • p measures how confident we can be in that estimate—roughly, the chance of seeing a β this far from zero by luck alone if there were really no effect. (As in golf, a low p is better: more confident, more sure. Low p can also be stated as a high t.)

Be warned that regression tables spit out many, many numbers (the Durbin–Watson statistic, the F statistic, the Akaike Information Criterion, and more) specifically to flag potential problems with interpreting β and p naïvely. Here are pictures of the textbook situations where p and β can be interpreted in the straightforward way:

First, the standard cases where the regression analysis works as it should and how to read it is fairly obvious:
(NB: These are continuous variables rather than on/off switches or ordered categories. So instead of “followed the weight-loss regimen” versus “didn’t follow it”, someone quantified how much it was followed. Again, the actual measurements (how they were coded) get in the way of our gleeful playing with numbers.)


Second, the case I want to draw attention to: weak statistical significance (a high p) doesn’t necessarily mean nothing’s going on.


The code I used to generate these fake-data and plots.

If the regression measures a high β but low confidence (high p), that is still worth taking a look at. If regression picks up a wide gap in male-versus-female wages—let’s say double—but we’re not so confident (high p) in that exact figure because it’s sometimes 95%, sometimes 180%, sometimes 310%, we’ve still picked up a substantial effect.

The exact value of β would not be statistically significant or confidently precise, due to the high p—but this would still be a very significant finding. (Try the same with any of my other examples, or another quantitative-comparison scenario you think up. It’s either a serious opportunity or a serious problem that you’ve uncovered. It just needs further looking to see where the variation around “double” comes from.)

You can read elsewhere about how awful it is that p<.05 is the password for publishable science, for many reasons that require some statistical vocabulary. But I think the most intuitive problem is the one I just stated. If your Geiger counter flips out to ten times the deadly level of radiation, it doesn’t matter if it sometimes reads 8, sometimes 0, and sometimes 15—the point is, you need to be worried and get the h*** out of there. (Unless the machine is whacked—but you’d still be spooked, wouldn’t you?)
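The Geiger-counter point in numbers, using the readings invented above (in multiples of the deadly level):

```python
import statistics

# Noisy readings, each a multiple of the deadly radiation level.
readings = [8, 0, 15, 10, 12, 3]

mean = statistics.mean(readings)      # the "beta": about 8x the deadly level
spread = statistics.stdev(readings)   # huge relative noise -> a "high p"

print(f"mean = {mean}, spread = {spread:.1f}")
```

The spread is so large that no single reading is trustworthy—yet the average is unambiguously in run-for-your-life territory. High variance does not cancel a high mean.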


The scale of β is the all-important thing that we are after. Small differences in βs of variables that are important to your life can make a huge difference.

  • Think about getting a 3% raise (1.03) versus a 1% wage cut (.99).
  • Think about twelve in every 1000 births killing the mother versus four in every 1000.
  • Think about being 5 minutes late for the meeting versus 5 minutes early.
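The raise-versus-cut example compounds. A 20-year horizon is an arbitrary choice for illustration:

```python
# Small differences in a multiplicative beta compound over time.
# 1.03 and 0.99 are the 3% raise / 1% cut from the examples above;
# the 20-year horizon is arbitrary.
years = 20
raise_path = 1.03 ** years   # 3% raise every year
cut_path = 0.99 ** years     # 1% cut every year

print(round(raise_path, 2), round(cut_path, 2))
```

After 20 years the raise path has nearly doubled your wage while the cut path has shaved off almost a fifth—more than a factor of two between two βs that differ by only four percentage points.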

linear maps as multiplication
linear mappings -- notice they're ALL straight lines through the origin!

Order-of-magnitude differences (like 20 versus 2) are the difference between fly and dog; between life in the USA and near-famine; between oil tanker and gas pump; between Tibet’s altitude and Illinois’; between driving and walking; even the Black Death was only a tenth of an order of magnitude of reduction in human population.

Keeping in mind that calculus tells us nonlinear functions can be approximated in a local region by linear functions (unless the nonlinear function jumps), β is an acceptable measure of how the interesting thing responds around the current level of webspeed, or around the current level of taxation.

Linear response magnitudes can also be used to estimate global responses in a nonlinear function, but you will be quantifying something other than the local linear approximation.
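A minimal sketch of that local-versus-global distinction, using a square root as a stand-in for “the interesting thing” (the function and the numbers are invented for illustration):

```python
import math

# Local linear approximation: near x0, f(x) ~ f(x0) + beta * (x - x0),
# where beta = f'(x0). Here f(x) = sqrt(x), a made-up nonlinear response.
f = math.sqrt
x0 = 100.0
beta = 1 / (2 * math.sqrt(x0))      # derivative of sqrt at x0, i.e. 0.05

def approx(x):
    return f(x0) + beta * (x - x0)

near = (f(104), approx(104))   # the linear beta is excellent close to x0
far = (f(400), approx(400))    # and badly misleading far from x0

print(near, far)
```

Close to x0 the straight line and the curve agree to two decimal places; far from x0 the same β overshoots badly. That is the sense in which β quantifies “around current levels” and nothing more.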

Anscombe’s quartet: the four data sets are different, yet they have the same “line of best fit” as computed by ordinary least squares regression.

[I]n the late 1920’s and early 1930’s…. There were lots of deep thoughts [in economics], but a lack of quantitative results. … It is usually not of very great practical or even scientific interest to know whether the [causal] influence [of some factor] is positive or negative, if one does not know anything about the strength.

But much worse is the situation when an [outcome] is determined by many different factors at the same time, some factors working in one direction, others in the opposite directions. One could write long papers about so-called tendencies explaining how this … might work…. But what is the … total net effect of all the factors? This question cannot be answered without measures of … strength….

Trygve Haavelmo

Bank of Sweden pseudo-Dynamite Prize Laureate 1989, for work in econometrics

(Source: nobelprize.org)