During the Senate Armed Services Committee hearing on DADT repeal, Republican Sens. Scott Brown and Jim Inhofe expressed concern about the survey's "low response rate" among service members. These concerns are unfounded.
As the report notes, the survey's 28 percent response rate is in line with typical response rates for military surveys:
The response rate for this survey, as a whole and by Service, was in-line with typical response rates for surveys within the Department of Defense. Since 2008, DMDC's Status of Forces Survey (SOFS) program, which features the most comparable methodology to the Service member and spouse surveys (web administration with postal and e-mail notifications and reminders), has seen response rates of 29–32% for Active Duty Service members, and 25–29% for Reservists.
Not only that, but the number of service members polled was massive -- 400,000 out of a force of 2.2 million. This was likely done for political reasons: national surveys of political opinion rely on far smaller samples to glean accurate assessments of public opinion, and beyond a certain point, adding more respondents does little to sharpen the picture. A survey of about a thousand people has a margin of error of around 3 percent; the DADT survey's margin of error is below 1 percent.
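As a rough illustration of those diminishing returns, here's a quick sketch using the textbook margin-of-error formula for a simple random sample. The actual survey used a more sophisticated weighted design, so these figures are approximate; the ~112,000 respondent figure is just 28 percent of the 400,000 surveyed.

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error at 95% confidence for a simple
    random sample of n respondents (p = 0.5 maximizes the error)."""
    return z * math.sqrt(p * (1 - p) / n)

# Roughly 28% of the 400,000 service members surveyed responded.
respondents = int(400_000 * 0.28)  # ~112,000

for n in (1_000, 10_000, respondents):
    print(f"n = {n:>7,}: margin of error = {margin_of_error(n):.2%}")

# Output:
# n =   1,000: margin of error = 3.10%
# n =  10,000: margin of error = 0.98%
# n = 112,000: margin of error = 0.29%
```

Note that going from 1,000 to 10,000 respondents only shrinks the margin of error by about two percentage points, and the next hundred thousand respondents buy less than one more.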
So it's surprising that politicians like Brown and Inhofe -- who presumably rely on polls of around a thousand respondents to gauge how their positions play with a nation of 300 million people, or how they're faring against rivals in their home states -- would be so concerned about the accuracy of the DADT survey, given its statistical overkill.
UPDATE: In the comments, Carl writes "Sorry Adam, but you don't quite have this right. Sample size and response rate are two entirely different quantities. The margin of error only takes into account the sample size. The response rate does not affect the margin of error, and there is no statistic by which to estimate how much the response rate has biased our estimates." That's true--it wasn't my intention to conflate the two. I was just trying to say that increasing sample size has diminishing returns, and a higher response rate doesn't necessarily mean your data is more accurate. Apologies if that was unclear.
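To make Carl's distinction concrete, here's a sketch under the same simple-random-sample assumptions as above, with hypothetical numbers: two surveys with identical respondent counts but very different response rates produce identical margins of error, because the formula only sees how many people answered. What it can't see is nonresponse bias -- whether the people who didn't answer differ systematically from those who did.

```python
import math

def margin_of_error(n_respondents, z=1.96, p=0.5):
    """Margin of error depends only on how many people answered,
    not on how many were asked."""
    return z * math.sqrt(p * (1 - p) / n_respondents)

# Two hypothetical surveys, each with 1,000 completed responses:
#   Survey A:  2,000 invited -> 50% response rate
#   Survey B: 10,000 invited -> 10% response rate
for invited, responded in ((2_000, 1_000), (10_000, 1_000)):
    rate = responded / invited
    moe = margin_of_error(responded)
    print(f"invited {invited:>6,}, response rate {rate:.0%}, "
          f"margin of error {moe:.2%}")

# Both surveys print a 3.10% margin of error; the response rate
# never enters the calculation. Any bias from who chose not to
# respond is invisible to this statistic.
```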