Is the published economic evaluation literature WORTH the effort of systematically reviewing? I tend to agree with my colleague, Professor Stirling Bryan, who said, 'I do think that all the published economic evaluations should be gathered in one place somewhere. And then a match should be thrown on the pile.' So many published studies are either flawed, past their best-before date, or won’t generalise to your local decision-making context. With RCTs the meta-analysis approach assumes that if we take a systematic overview we can distil drops of pure wisdom. My colleague and I would contend that if we distilled published economic studies we would end up with drops of something, but we’re pretty sure it wouldn’t be wisdom.
You therefore have to question the basis of a recently published study attempting to answer the question “What is the value for money of medicines?” (Please read the original article in the Journal of Clinical Pharmacy and Therapeutics 2011, e-pub ahead of print on 4th August). Having said that, I enjoyed the paper more than I expected – the discussion mentions most of the criticisms I would have made, although I would have made more of them. It also doesn’t attempt overly complicated statistical analysis of its somewhat flawed data set.
Briefly, the author reviewed the Tufts Cost-Effectiveness Analysis Registry and extracted economic evaluations that were cost-utility analyses of medicines, published in English between 2000 and 2007, and that based their clinical data on European patients. Looking at the range of cost per QALY figures, the author concluded there was an 81% chance the overall cost-effectiveness was less than €50k, with cardiovascular medicines (median €5,167/QALY) and medicines for infectious diseases (median €5,643/QALY) looking particularly good value. (Figures are in 2008 euros, having been updated in line with household inflation and converted using exchange rates.)
Of course you will be thinking all sorts of things:
- €50k/QALY sounds high; what’s the probability of cost-effectiveness at a more realistic level like €20k/QALY? (answer: 58%, barely more than a 50:50 chance)
- What was the quality of the methods used in these studies? (answer: only 14% had a quality score from Tufts of 6 or 7 out of a maximum of 7)
- Who sponsored these studies? (answer: 63% were industry funded)
- Were the studies in any way representative of medicines actually used in practice? (answer: no analysis)
- Was there any analysis of the cost-effectiveness of non-medicines? (answer: no)
- Was there any allowance for publication bias? (answer: no)
But if you put your nose in the air and stopped reading you would miss a few gems.
First, we know the English government has expressed approval of including factors in economic evaluation beyond the usual health care / social care perspective. In Simoens’ sample he could compare the median cost per QALY for medicines using a societal perspective and an NHS perspective: they were virtually the same (€11,218 for societal, €11,558 for NHS). So the societal perspective makes less difference than you might think?
Second, as we might suspect, industry-funded studies gave lower cost per QALY figures than studies sponsored by other sources. What did surprise me was the size of the difference: medians of €9,518/QALY versus €21,756/QALY. We aren’t comparing studies of the same medicines, of course, but there does seem to be a case for suspecting publication bias, ‘optimism bias’ or both in industry-sponsored studies. Simoens notes 63% of studies were industry-sponsored, but in a further 18% funding was unclear or not declared, so the true figure could be even higher.
Thirdly, Simoens recorded the Tufts rating of the quality of the methods used in each study, from 7 (the best) to 1 (not the best). Studies rated 1 through 5 for quality had a median cost per QALY of €10,878, whereas those with a rating of 6 or 7 had a median cost per QALY of €31,954. That’s just under £28,000/QALY, and if we allow for a rise in prices between 2008 and 2011 that sounds very close to £30k/QALY (which may be unaffordable in the first place). Is another take on the data that if we restrict it to the most credible studies, medicines are barely cost-effective?
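For readers who want to check the back-of-the-envelope arithmetic above, here is a minimal sketch. The exchange rate and cumulative inflation figures are my own assumptions, chosen only to be consistent with the article’s “just under £28,000” and “close to £30k” statements; they are not official statistics.

```python
# Rough check of the currency and inflation arithmetic.
# The rates below are illustrative assumptions, not official figures.

eur_per_qaly_2008 = 31954  # median cost/QALY for quality-6-or-7 studies (2008 euros)

# Assumed 2008 EUR->GBP rate implied by the article's "just under £28,000"
eur_to_gbp_2008 = 0.876

gbp_2008 = eur_per_qaly_2008 * eur_to_gbp_2008  # roughly £28,000

# Assumed cumulative UK price inflation 2008-2011 (~10%, hypothetical)
inflation_2008_to_2011 = 1.10

gbp_2011 = gbp_2008 * inflation_2008_to_2011  # roughly £30,000

print(round(gbp_2008), round(gbp_2011))
```

On those assumptions the 2008 figure converts to just under £28,000/QALY and inflates to roughly £30,800/QALY by 2011, which is how a median can creep from “under the threshold” to “at or above it” without any change in the underlying studies.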
In summary, thanks to Steve Simoens for undertaking this work. I can’t agree with the main conclusions but I do think it raises some very interesting points for discussion.