As you may have noticed, there's a big brouhaha brewing with respect to polls conducted by the Research 2000 firm for Daily Kos. Long story short, Markos Moulitsas, who fired the firm for alleged inaccuracy not long ago, is now alleging that some serious book-cooking may have transpired, after reviewing an analysis of some strange and hard-to-explain anomalies in R2K findings. Both sides have lawyered up, and the whole thing may be resolved in court, though R2K could do itself some good by releasing its raw data or at least responding with specificity to the allegations.
Though it's too early for anyone to start apologizing for reliance on R2K polls, I think I speak for most analysts in saying that we all sort of got in the habit recently of treating R2K's apparent pro-Democratic "house effect" as a counterweight to the apparent pro-Republican "house effect" of the Rasmussen firm. (I am not, repeat not, suggesting that Rasmussen is doing anything unethical, but there is clearly something about the firm's technique that tends to produce stronger Republican performance findings than other pollsters report, a concern exacerbated by Rasmussen's "flood the zone" domination of state polling.) In states where both firms released polls, we all kinda figured the truth lay somewhere in between.
One big exception was a much-cited (by me, among many others) R2K poll of Republicans which suggested that rank-and-file GOPers had some mighty strange views. But as Tom Jensen of PPP noted the other day, his firm did its own poll of Republicans and reached similar findings. In other words, even if a poll is marred by faulty methodology or worse, the conclusions it supports are not necessarily wrong.
And that leads me to the inevitable fallout from this furor: the perennial complaints that will soon be revived about reliance on polling data generally.
Without question, even in the best of circumstances, there are limits to the utility of polling data, and heavy reliance on any one poll or pollster is generally a mistake. But the answer to insufficient or faulty data is more data and better data, not a refusal to collect or look at it. And that's why any know-nothing overreaction to the R2K controversy could be its most damaging consequence.