In my review of Mark Penn's new book Microtrends, I got a bit of flak from pollsters who otherwise loved the piece because I widened the conclusion from Penn to polling. I wrote:
Pollsters occupy a uniquely powerful space in American political discourse: They bring science to elections. Armed with heaps of raw data, they elevate their opinions into something altogether weightier: Conclusions. When an organization sends out a press release saying the organization is right, it's ignored. When a pollster sends out a poll showing the electorate agrees, ears in Washington perk up.
The enterprise has always been dodgy. Populist pollsters reliably discover that the electorate thirsts for more populism. Conservative pollsters routinely discover a small government consensus pulsing at the heart of the body politic. When the libertarian Cato Institute commissioned a poll of the electorate, they found—shockingly—that the essential swing vote was made of libertarians. Remarkably, whenever a politician or self-interested institution releases a poll, the results show a symmetry between the attitudes of the pollster's employer and those of the voters. But Penn's book shines light on this phenomenon: If he is the pinnacle of his profession, then the profession uses numbers as a ruse—a superficial empiricism that obscures garden-variety hackery. And that's a trend worth worrying about.
Lots of pollsters protested that, sure, Mark Penn weights his numbers and doesn't show his work and has a bit of a bad reputation, but he's an isolated case. But take Celinda Lake's push-polling. Celinda Lake is not, so far as I know, a pollster with a bad reputation. Rather the opposite, in fact. Yet she's pushing out a "sobering" poll that shows support for Democratic congressional candidates drops after respondents are told, "Some people say [your Democratic incumbent] is a strong supporter of Hillary Clinton/Barack Obama and will support her/his liberal agenda of big government and higher taxes if she becomes president." And this poll is being reported as a problem for those candidates.
But that's moronic. If the Democrat is defined as a candidate of high taxes and an amorphous, evil-sounding "liberal agenda," they will do poorly. If they're defined in more positive ways, they'll do better. If the question were, "Some people say [your Democratic incumbent] is a strong supporter of Hillary Clinton/Barack Obama and will support her/his attempts to withdraw troops from Iraq and ensure every American has health care if he/she becomes president," support would increase.
This poll wanted a result. It got it. It also could have gotten the opposite result. This happens all the time. It just depends on who's paying, and what they want to show. It's certainly true that good polling can be, and often is, conducted, but far too much of it is of this type, and neither the polling industry nor the media polices these practices.