News that there were serious methodological flaws in the RAND health insurance study is actually very, very important. The RAND health insurance study remains the source for almost all speculation about how individuals react to different types of health insurance. When we say that higher co-pays make people cut care indiscriminately, we're going off of RAND's evidence. When some say that health outcomes weren't much better with no co-pays, they're going off of RAND's evidence. When HSA supporters say that higher co-pays didn't degrade health status at all, and thus we should cut insurance spending across the board, they're going off of RAND's evidence. The problem is, RAND's evidence may not have been very good:
“Of the various responses to cost sharing that were observed in the participants of the RAND HIE, by far the strongest and most dramatic was in the relative number of RAND participants who voluntarily dropped out of the study over the course of the experiment. Of the 1,294 adult participants who were randomly assigned to the free plan, 5 participants (0.4 percent) left the experiment voluntarily during the observation period, while of the 2,664 who were assigned to any of the cost-sharing plans, 179 participants (6.7 percent) voluntarily left the experiment. This represented a greater than sixteenfold increase in the percentage of dropouts, a difference that was highly significant and a magnitude of response that was nowhere else duplicated in the experiment.
“What explains this? The explanation that makes the most sense is that the dropouts were participants who had just been diagnosed with an illness that would require a costly hospital procedure. … If they dropped out, their coverage would automatically revert to their original insurance policies, which were likely to cover major medical expenses (such as hospitalizations) with no copayments … As a result of dropping out, these participants' inpatient stays (and associated health care spending) did not register in the experiment, and it appeared as if participants in the cost-sharing group had a lower rate of inpatient use. … the cost-sharing participants who remained exhibited a lower rate of inpatient use than free FFS participants, not because they were responding to the higher coinsurance rate by forgoing frivolous hospital care but instead because they did not need as much hospital care, since many of those who became ill and needed hospital care had already dropped out of the experiment before their hospitalization occurred. …”
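The dropout figures quoted above are easy to sanity-check from the reported counts alone (1,294 free-plan adults with 5 dropouts; 2,664 cost-sharing adults with 179 dropouts):

```python
# Back-of-the-envelope check of the dropout figures quoted above,
# using the participant counts reported for the RAND HIE.

free_n, free_dropouts = 1294, 5       # free plan
cost_n, cost_dropouts = 2664, 179     # any cost-sharing plan

free_rate = free_dropouts / free_n    # ~0.4 percent
cost_rate = cost_dropouts / cost_n    # ~6.7 percent

print(f"free-plan dropout rate:      {free_rate:.1%}")
print(f"cost-sharing dropout rate:   {cost_rate:.1%}")
print(f"ratio (cost-sharing / free): {cost_rate / free_rate:.1f}x")
```

The exact ratio comes out to roughly 17x, consistent with the "greater than sixteenfold" claim in the quoted passage.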
So when we say that higher co-pays didn't degrade health outcomes, it turns out that many of those facing a health crisis and assigned to the high co-pay group dropped out of the study. So we have no idea what their outcomes would've been. Sick individuals in the low co-pay group, by contrast, did not drop out of the study, and so the comparison between the two groups is deeply flawed.
For most folks, this probably seems really wonky and in-the-weeds. But it's almost impossible to overstate how much pull the RAND study has in health policy circles. Jason Furman's whole paper on cost sharing? Largely based on the RAND study. Robin Hanson's theories about slashing medical care in half? Largely based on the RAND study. And now it turns out that the RAND study has some compromised data. I don't know which co-pay subgroups saw the most folks dropping out, and I don't know how the data would change if they had stayed in. It's possible that the RAND study's actual conclusions are right (if, at this point, somewhat outdated). But it's no longer clear that we can simply assume that that's true.