As a result of a Usenet argument and general interest in the subject, I have recently become involved in the controversy over the Lott and Mustard paper. In addition to webbing John's responses to his critics, with appropriate links, I thought it would be worth adding some comments of my own. Here are some of them.
In his critique of the Lott and Mustard article, Stephen Teret objects that they did not allow for a variety of complications that might be relevant to the relationship they were measuring, and concludes that their results are worthless. Quite aside from whether his claims are true (Lott argues that they are not), it is interesting to see how his standards change when he is dealing with a study whose results he agrees with.
Take a look at the story describing a study done by Dr. Garen Wintemute on the relation between "junk" guns and crime. The study assembled a sample of 5,360 Californians between 21 and 25 years of age who legally bought handguns in 1988, obtained their criminal histories, and examined the relation between what kind of guns people bought and how likely they were subsequently to commit a crime.
According to an online description of the research, "Associations were assessed by relative risks adjusted for gender and race or ethnicity." It seems clear the study did not control for income. On average, poorer people are both more likely to be convicted of crimes and more likely to buy cheap guns if they are available; it does not follow that if cheap guns are banned, poor people will stop committing crimes. I predict with some confidence that if Wintemute redid his study looking at cars instead of guns, he would discover that cheap cars cause crime too.
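To make the omitted-variable point concrete, here is a minimal simulation, written in Python with made-up numbers (nothing below comes from Wintemute's data): income drives both the choice of a cheap gun and the chance of a later conviction, the gun itself has no effect at all, and the unadjusted relative risk still comes out well above one.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5_360  # same sample size as the study, purely for flavor

    # Hypothetical world: income is the confounder the study did not control for.
    income = rng.normal(size=n)

    # Poorer buyers are more likely to pick a cheap ("junk") gun...
    buys_cheap_gun = rng.random(n) < 1 / (1 + np.exp(2 * income))

    # ...and poverty, not the gun, raises the chance of a later conviction.
    convicted = rng.random(n) < 1 / (1 + np.exp(1.5 * income + 1.0))

    # Relative risk with no adjustment for income:
    rr = convicted[buys_cheap_gun].mean() / convicted[~buys_cheap_gun].mean()
    print(f"unadjusted relative risk: {rr:.2f}")  # well above 1, though the guns do nothing here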
Wintemute's work, judging by the online examples, is legitimate research, although done at (statistically speaking) a fairly primitive level and obviously intended for propagandistic purposes. The authors take much less care than Lott and Mustard do to control for relevant variables, check their results by rerunning the regressions under a variety of different assumptions, use all available data, report potential problems, and the like. Unlike Lott and Mustard, they have no direct data at all on the results of the policy they are arguing for (banning cheap handguns).
" "I would say that it's very good evidence that
those guns are problematic," said Stephen Teret, director of the
Center for Gun Policy and Research at Johns Hopkins University and a
good friend of Wintemute."
This is really the second round of the battle over statistics between economists on one side and sociologists, criminologists, et al. on the other; the first was fought over Isaac Ehrlich's work on the effect of the death penalty and, more generally, over the issue of deterrence. In each case, if you look at the history of the articles, you find the same pattern:
Time A: Statistical work is being done by non-economist non-statisticians, typically criminologists, sociologists, physicians, etc., and is relatively primitive: on the order of comparing average murder rates in all states with the death penalty to average murder rates in all states without, or selecting a small sample of counties, without offering any basis for choosing those instead of others, and reporting what happened in those counties after a shall-issue law was adopted, without controlling for any other variables. The results of this work, that the death penalty does not deter and that laws permitting concealed carry increase the murder rate, are routinely reported as scientific facts by people in the field.
Time B: An economist (Ehrlich, Lott) does a study an order of magnitude more sophisticated, using both time series and longitudinal data, controlling for relevant factors insofar as data are available, using techniques such as two-stage least squares to try to deal with problems created by unobservable or endogenous variables (see the sketch after this list), and so on. It produces a result that the people in the field don't like.
Time C: People in the field publish furious attacks on the economist and on his study. The attacks on the study, insofar as they are legitimate, are arguments showing that there are possible explanations for his results other than the one he gave, which is almost always true, to some degree, of statistical results. These attacks apply a standard of proof enormously higher than that applied to the Time A studies, which the same people happily accepted.
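For readers who have not met the technique, here is a rough sketch in Python of what two-stage least squares is for, using entirely invented numbers rather than anything from Lott's data: an unobserved local factor drives both the adoption of a shall-issue law and the crime rate, ordinary least squares therefore gets the law's effect badly wrong, and instrumenting for adoption recovers something close to the true value.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500  # county-year observations, purely illustrative

    # Hypothetical setup: adoption of the law is endogenous because an
    # unobserved local factor u affects both adoption and crime.
    u = rng.normal(size=n)
    instrument = rng.normal(size=n)  # shifts adoption but has no direct effect on crime
    law = 0.8 * instrument + 0.5 * u + rng.normal(size=n)
    crime = -0.3 * law + 1.0 * u + rng.normal(size=n)  # the true effect of the law is -0.3

    X = np.column_stack([np.ones(n), law])
    Z = np.column_stack([np.ones(n), instrument])

    # Ordinary least squares is biased because law is correlated with u.
    ols = np.linalg.lstsq(X, crime, rcond=None)[0]

    # Two-stage least squares:
    # stage 1: project the endogenous regressor onto the instrument
    law_hat = Z @ np.linalg.lstsq(Z, law, rcond=None)[0]
    # stage 2: regress the outcome on the fitted values
    tsls = np.linalg.lstsq(np.column_stack([np.ones(n), law_hat]), crime, rcond=None)[0]

    print(f"OLS estimate:  {ols[1]:+.2f}")   # dragged toward zero by the confounder
    print(f"2SLS estimate: {tsls[1]:+.2f}")  # should land near the true -0.3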
Back to my page of links relevant to the controversy