When the Heritage Foundation released that study showing immigration reform would cost American taxpayers a gajillion feptillion bazillion dollars, people were obviously going to pick it apart and reveal its flaws and tendentious assumptions, which they did. But today came something else interesting. Dylan Matthews read the dissertation written by one of the authors, Jason Richwine, in which Richwine writes that "The average IQ of immigrants in the United States is substantially lower than that of the white native population, and the difference is likely to persist over several generations." In order to deal with the problem, Richwine suggests IQ-testing everyone who wants to immigrate, and taking only the smart ones. As Matthews describes it, "Richwine's dissertation asserts that there are deep-set differentials in intelligence between races ... He writes, 'No one knows whether Hispanics will ever reach IQ parity with whites, but the prediction that new Hispanic immigrants will have low-IQ children and grandchildren is difficult to argue against.'" Well now.
So: does this provide even more reason to reject the Heritage study Richwine co-wrote? In other words, how much weight should we give to someone's repellent views on a topic when evaluating an empirical piece of work they produce? If you conclude that Richwine has bad intentions, can that be all you need to know to reject what he has to say about the costs of immigration reform?
My answers to those questions are: not really, not too much, and no. We already knew that the Heritage study was produced with the direct intention of contributing to the defeat of immigration reform. That wasn't a mystery, and I'm sure even Heritage itself would admit it. They're hardly alone; every day, think tanks and advocacy groups produce reports and studies that are intended to move public policy in their authors' favored direction. It's important to know where the authors are coming from, not because it allows you to dismiss them, but because it alerts you to where you should be on your guard.
It's perfectly possible for someone with a strong opinion about a policy issue, even a despicable opinion, to produce unimpeachable research on that issue. But when you're reading it, you need to examine how many assumptions they made and in what direction, and how complex the analysis is, because the more complicated it is, the greater the opportunities to put a thumb on the scale. Some kinds of data collection are simple and straightforward, and don't allow bias to creep in. For instance, let's say someone went through the Fortune 500 list of America's largest companies to see how many CEOs are women. The answer happens to be 20, or a whole 4 percent (a new record this year—way to go, ladies!). That fact is the same whether the person who counted was Gloria Steinem or Donald Trump. When you take the next step and ask why or what should be done about it, then your perspective matters.
In this case, perspective matters quite a bit. That isn't to say that Richwine and his co-author, Robert Rector, produced something markedly different from what anyone else who wanted to scuttle immigration reform would have produced, but their analysis has dozens of decisions embedded in it. For instance, they decided to assume that income mobility for immigrants is zero: if they have low education when they arrive, they'll have low income forever. They decided, to the chagrin of conservative supporters of reform, not to attempt any "dynamic scoring" to account for beneficial effects immigrants might have on the economy, but simply to count what they'll pay in taxes and what services they'll use. That's obviously a decision that profoundly affects the numbers they produce.
I don't mean to get too deep into this particular study, because it's so flawed it's barely worth talking about anymore. But it's a good lesson in how to approach these kinds of things. As a general rule, saying "Now that we know the biases of the person who produced this work, we can ignore it" is a bad idea. The work either has merit or it doesn't. But knowing those biases can tell you where you should look for flaws. This study is full of them, but the next one might not be.