Quick note on The Lancet which, in blogosphere time, may already be old and musty. When Megan McArdle questioned The Lancet study's validity (and I think she was the most credible and convincing of the skeptics), she did so on the back of an editor's note admitting that sampling error may have occurred because, among other things, many communities were too dangerous for return questioning, families with combatants could have hidden deaths, and many families possibly underreported infant deaths. Two provinces, due to a miscommunication, were left out of the sample, so no excess deaths were counted there at all. Some families may have been totally obliterated -- a bomb landing on their house, say -- and so wouldn't have reported any deaths. And migratory patterns between the population survey and the sampling could have over-represented high-mortality areas.
Reacting to all this, Megan says "if you can't take a good sample, which these guys pretty clearly couldn't, it doesn't matter how faithfully you run the regressions on the crap you managed to collect." Fair enough. Except just about all of these biases would undercount the death rate. It's not that they have a poor sample skewing the data in unpredictable ways, but that they have bias pointing in a discernible direction: down. If, like Megan, you believe the estimate is too high, that doesn't much help your case.
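If the direction-of-bias point seems abstract, here's a minimal sketch in Python. The parameters are entirely made up for illustration (nothing here comes from the actual study); it just simulates a survey in which the three distortions above all suppress reported deaths, and shows the measured rate landing below the true one:

```python
import random

random.seed(0)

# All parameters below are invented for illustration; none come from the study.
TRUE_DEATH_RATE = 0.05
N_HOUSEHOLDS = 100_000

true_deaths = 0
reported_deaths = 0
surveyed = 0

for _ in range(N_HOUSEHOLDS):
    died = random.random() < TRUE_DEATH_RATE
    true_deaths += died

    # Bias 1: a household wiped out entirely has no one left to answer.
    if died and random.random() < 0.10:
        continue

    # Bias 2: high-violence areas are likelier to be too dangerous to revisit,
    # so households there (which also die more often) drop out of the sample.
    if random.random() < (0.15 if died else 0.02):
        continue

    surveyed += 1

    # Bias 3: some families (combatant deaths, infant deaths) don't report.
    if died and random.random() < 0.15:
        died = False

    reported_deaths += died

print(f"true rate:     {true_deaths / N_HOUSEHOLDS:.4f}")
print(f"measured rate: {reported_deaths / surveyed:.4f}")
```

Every knob in that toy model pushes the measured rate below the true one; none pushes it above. The errors don't wash out, they stack in one direction.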
And if you really want to get deep in the weeds on this, The Lancet has a podcast with one of the study authors defending the methodology.
Update From Comments: On the other hand, SomeCallMeTim points to this guy, an actual expert, who figures the study probably offered a good "upper-bound" estimate: "Population-analysis sampling based techniques like this do tend to produce larger numbers than other analyses, but over the long term, while the sampling techniques tend to over-estimate, those higher numbers have tended to be quite a bit closer to the truth than the lower numbers generated by other techniques."