Wednesday 14 September 2011

Why say no? 2. Cost per QALY too high

Reminder: these opinions are my own, not those of SMC.

From the previous post, you will know I am exploring the reasons why SMC said no over a 12-month period covering 2010-11.  I was surprised to find the number one reason for saying no was that the company declined to submit any evidence: that explained 21 of the 56 'not recommended' decisions (one further submission did not include an economics case, so that makes 22 if we restrict attention to cost-effectiveness).

What was the next biggest problem for the 35 remaining?  Again, I was surprised (I am easily shocked, as you can tell!) because in 16 cases the submission either included a ‘base case’ cost per QALY that exceeded £30,000 or included a cost-minimisation analysis that showed the new medicine was not the cheapest option.

The SMC has always made clear that £30k per QALY is NOT an absolute cut-off.  If cost-effectiveness is to be taken seriously then everyone involved has to have an idea of which levels are broadly acceptable and which are less so.  I think everyone involved in HTA committees would say this is not a simple “above this figure bad, below this figure good” situation.  SMC has made a statement about the sorts of factors it will also consider alongside ‘cost per QALY’.
Having said that other factors have a role to play, it might be expected that they will carry more weight when the cost per QALY is £31k than when it is £70k; in other words, there is likely to be a trade-off involved.
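
To make the arithmetic concrete, here is a minimal sketch (my own toy example in Python, not an SMC tool) of how a base-case cost per QALY is calculated and set against the £30k marker; all of the numbers are invented.

# Illustrative only: invented numbers, not from any real submission.
incremental_cost = 24_000.0   # extra lifetime cost of the new medicine vs comparator (GBP)
incremental_qalys = 0.6       # extra quality-adjusted life-years gained

icer = incremental_cost / incremental_qalys          # cost per QALY gained
print(f"Base-case ICER: £{icer:,.0f} per QALY")      # £40,000 per QALY

# £30k is not an absolute cut-off; the further above it the ICER sits,
# the more weight the other 'modifier' factors would have to carry.
threshold = 30_000.0
print(f"Excess over the £30k marker: £{icer - threshold:,.0f} per QALY")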

Of the 16 cases I mentioned, 14 were for cancer medicines, suggesting that companies were hoping the additional factors considered alongside cost per QALY would come into play.  On some occasions these factors did persuade SMC that a higher cost per QALY could be accepted.  I do not have a figure for this, but my personal view is that it happened in fewer than 14 cases over the year, so I would conclude that a submission strategy of presenting a high cost per QALY and hoping the modifiers will be applied worked in less than half of these cases during this period.

As a footnote, only one submission during this time presented a case where the new medicine was not the cheapest option in a cost-minimisation analysis.  Picking up on the last point above, there is a world of difference between being a few pennies more expensive than the competition and being thousands of pounds more expensive.  For example, if a new medicine seems non-inferior on clinical criteria but its total lifetime cost is £800 whereas the existing treatment’s is £799, then my inclination is to say that in economic terms it is ‘non-inferior’ as well.  My inclination would then be to let the case past and leave local formularies to decide which to use.
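
To put that informal rule in concrete terms, here is a minimal sketch; the tolerance figure is entirely my own invention – SMC publishes no such number.

# Sketch of an informal 'economic non-inferiority' rule for cost-minimisation cases.
# The costs and the tolerance are invented for illustration only.
def economically_non_inferior(new_cost, comparator_cost, tolerance=5.0):
    """Treat a clinically non-inferior medicine as acceptable on cost grounds
    if its lifetime cost is within a small margin of the cheapest comparator."""
    return new_cost - comparator_cost <= tolerance

print(economically_non_inferior(800.0, 799.0))     # True: pennies apart
print(economically_non_inferior(5_000.0, 799.0))   # False: thousands of pounds apart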

Why say no? 1. Non-submissions

Just the standard reminder before I start: these are my personal opinions and interpretations, not those of the SMC.

Once upon a time in a galaxy far, far away I wrote a Briefing Note for SMC members on the top 10 reasons why submissions fail.  While SMC’s formal evidence requirements haven’t changed since then, everyone involved will recognise that the ‘norms’ for expected evidence do move over time – our attitude to indirect comparisons being one example, and refinements in techniques for extrapolation being another.

So what would a more recent ‘top 10 reasons’ list look like?  I picked a recent 12-month period (covering 2010-2011) and looked at the SMC guidance.

Non-submissions
I was surprised to find that SMC had issued 56 pieces of ‘not recommended’ guidance in that period, as that is over 4 per month and the monthly meetings do not feel that negative.  However, when I found out that the number one reason for the ‘not recommended’ guidance was that the company had declined the opportunity to make a submission, that made more sense.  Non-submissions do not involve a discussion at the monthly meeting, so given that 21 of the 56 fitted that description, a total of 35 ‘not recommended’ decisions following an in-depth discussion (around 3 per month) feels about right.

But before we move on, why do companies not submit?  My guess would be that there are a variety of reasons.  One could be that the submission is for a new indication or licence extension that the company does not especially wish to pursue in terms of promotion, for whatever reason (e.g. the patent is about to expire).  Another possibility is that a company has assessed the case it can present, predicts a ‘not recommended’ decision, and decides not to commit to the costs involved.  A third possibility is that the indication covers only a few patients and the company feels the money would not be well spent on a submission, being happy to rely on local funding requests for individual patients.

In some situations, non-submissions are very efficient.  If prescribers in Scotland are not very interested in a medicine, why would the company incur costs and the SMC use up its time reviewing the case?  The key issue is whether the local NHS and prescribers have the information they need to do their job, and I can see no evidence they feel this is a problem in the vast majority of cases.

Do non-submissions indicate the SMC’s submission process is too difficult?  I don’t think they do – there is a basic requirement to put the company’s clinical effectiveness and cost-effectiveness evidence in the public domain.  Is that really too difficult to do?  It could be done more concisely, but why should some companies have to make a full submission when others do not?  If any submissions could be cut back then, in my own view, it should be those where we have another addition to a therapeutic class at similar cost; it’s not usually these medicines that are non-submissions.

One concern would be if a medicine for a rare condition is left to local funding request panels and they make different decisions across Scotland.  This would be understandable on one level, in that local health boards may have different spending priorities, but on another level it amounts to postcode-based access to medicines and would be difficult to defend.

Part of the answer is, of course, for companies to make a submission to SMC, who will then make an assessment of their evidence available to all concerned: this shared starting point for debate is likely (in my view) to reduce variation between local panels making decisions.

Thursday 1 September 2011

Nobel Prize?

I hope you, readers, are enjoying the blog.  Just in passing I note the nomination forms for the next Nobel Prize in Economics have just been sent out:
http://www.nobelprize.org/nobel_prizes/economics/nomination/
and I know I can count on you all to point the judges in the right direction.  Nuff said?

Are new medicines good value-for-money?

Is the published economic evaluation literature WORTH the effort of systematically reviewing?  I tend to agree with my colleague, Professor Stirling Bryan, who said, 'I do think that all the published economic evaluations should be gathered in one place somewhere.  And then a match should be thrown on the pile.'  So many published studies are either flawed, past their best-before date or won’t generalise to your local decision-making context.  With RCTs, the meta-analysis approach assumes that if we take a systematic overview we can distil drops of pure wisdom.  My colleague and I would contend that if we distilled published economic studies we would end up with drops of something, but we’re pretty sure it wouldn’t be wisdom.

You therefore have to question the basis of a recently published study attempting to answer the question “What is the value for money of medicines?”  (Please read the original article in the Journal of Clinical Pharmacy and Therapeutics 2011, e-pub ahead of print on 4th August.)  Having said that, I enjoyed the paper more than I expected – the discussion mentions most of the criticisms I would have made, although I would have made more of them.  It also doesn’t attempt overly complicated statistical analysis of its somewhat flawed data set.

Briefly, the author reviewed the Tufts Cost-Effectiveness Analysis Registry and extracted economic evaluations that were cost-utility analyses of medicines, published in English between 2000 and 2007, with clinical data based on European patients.  Looking at the range of cost per QALY figures, the author concluded there was an 81% chance that overall cost-effectiveness was below €50k, with cardiovascular medicines (median €5,167/QALY) and medicines for infectious diseases (median €5,643/QALY) looking particularly good value. (Figures are in 2008 euros, having been updated in line with household inflation and converted using exchange rates.)
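
As a rough reconstruction of the sort of calculation behind those headline figures (my own sketch in Python, not the paper’s method; the study records, inflation factors and exchange rate below are all invented assumptions):

import statistics

# Invented study records and adjustment factors, purely to illustrate the approach.
studies = [
    {"icer": 4_800, "currency": "EUR", "price_year": 2005},
    {"icer": 6_200, "currency": "GBP", "price_year": 2006},
    {"icer": 48_000, "currency": "EUR", "price_year": 2007},
]

inflation_to_2008 = {2005: 1.07, 2006: 1.05, 2007: 1.03}   # assumed household inflation to 2008
eur_per_unit_2008 = {"EUR": 1.00, "GBP": 1.26}             # assumed 2008 exchange rates

def icer_in_2008_euros(study):
    # Update to 2008 price levels, then convert to euros at the assumed 2008 rate.
    return (study["icer"] * inflation_to_2008[study["price_year"]]
            * eur_per_unit_2008[study["currency"]])

adjusted = [icer_in_2008_euros(s) for s in studies]
print(f"Median cost per QALY: €{statistics.median(adjusted):,.0f} (2008 prices)")

for threshold in (50_000, 20_000):
    share = sum(icer <= threshold for icer in adjusted) / len(adjusted)
    print(f"Share of studies at or below €{threshold:,}/QALY: {share:.0%}")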

Of course you will be thinking all sorts of things:
  • €50k/QALY sounds high, what’s the probability of cost-effectiveness at a more realistic level like €20k/QALY? (answer: 58%, barely more than a 50:50 chance)
  • What was the quality of the methods used in these studies? (answer: only 14% had a quality score from Tufts of 6 or 7 out of a maximum of 7)
  • Who sponsored these studies? (answer: 63% were industry funded)
  • Were the studies in any way representative of medicines actually used in practice? (answer: no analysis)
  • Was there any analysis of the cost-effectiveness of non-medicines?  (answer: no)
  • Was there any allowance for publication bias? (answer: no)

But if you put your nose in the air and stopped reading you would miss a few gems.

First, we know the English government has expressed approval of including factors in economic evaluation beyond the usual health care / social care perspective.  In Simoens’ sample he could compare the median cost per QALY for medicines using a societal perspective and an NHS perspective: they were virtually the same (€11,218 for societal, €11,558 for NHS).  So the societal perspective makes less difference than you might think?

Second, as we might suspect, industry-funded studies gave lower cost per QALY figures than studies sponsored by other sources.  What did surprise me was the size of the difference: medians of €9,518/QALY versus €21,756/QALY.  We aren’t comparing studies of the same medicines, of course, but there does seem to be a case for suspecting publication bias, ‘optimism bias’ or both in industry-sponsored studies.  Simoens notes 63% of studies were industry-sponsored, but in a further 18% the funding was unclear or not declared, so the true figure could be even higher.

Third, Simoens recorded the Tufts rating of the quality of the methods used in each study, from 7 (the best) to 1 (not the best).  Studies rated 1 to 5 for quality had a median cost per QALY of €10,878, whereas those rated 6 or 7 had a median cost per QALY of €31,954.  That’s just under £28,000/QALY, and if we allow for a rise in prices between 2008 and 2011 it sounds very close to £30k/QALY (which may be unaffordable in the first place).  Is another take on the data that, if we restrict it to the most credible studies, medicines are barely cost-effective?
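
For transparency, here is the back-of-envelope arithmetic behind that reading; the exchange rate and the inflation figure are my own assumptions, not taken from the paper.

# Rough check of 'just under £28,000, close to £30k by 2011'.
# The exchange rate and the inflation figure are assumptions for illustration.
icer_eur_2008 = 31_954             # median for studies rated 6 or 7, in 2008 euros
gbp_per_eur_2008 = 0.875           # assumed average 2008 exchange rate
uk_price_rise_2008_to_2011 = 0.10  # assumed cumulative inflation, 2008 to 2011

icer_gbp_2008 = icer_eur_2008 * gbp_per_eur_2008
icer_gbp_2011 = icer_gbp_2008 * (1 + uk_price_rise_2008_to_2011)
print(f"2008 prices: £{icer_gbp_2008:,.0f} per QALY")   # about £28,000
print(f"2011 prices: £{icer_gbp_2011:,.0f} per QALY")   # close to £30,000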

In summary, thanks to Steve Simoens for undertaking this work.  I can’t agree with the main conclusions but I do think it raises some very interesting points for discussion.

I just wanted to fit a Weibull

In my day, public information films were about the Green Cross Man and the fairy godmother saying "Learn to swim, young man, learn to swim!"  Times have changed and now children as young as three are being shown Nick "Oscar" Latimer's new production:

http://www.xtranormal.com/watch/12276585/i-just-wanted-to-fit-a-weibull
I am a massive fan of any way in which we economists can make our point other than through slabs of text, and I am delighted Nick gave permission to paste the link – thank you.  The conversation between the rabbit and the dog does stir uncomfortable memories of a few meetings with pharma companies though ...

For what it's worth, I endorse the rabbit's view - the economic model in an HTA submission should test out several different ways of extrapolating the data and present statistics such as goodness-of-fit measures, plus graphs showing the observed data against the fitted extrapolations for visual inspection.  I have seen submissions where companies have argued that only one form of extrapolation is appropriate, but in general I would say it is best to use the rabbit's approach in the base case and then make any 'special case' in the sensitivity analysis.
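
A minimal sketch of the rabbit's approach is below.  It uses the Python lifelines package on invented survival data; the dataset, the candidate distributions and the extrapolation horizon are my own assumptions, not anyone's real submission.

# Fit several parametric distributions to (invented) survival data, report a
# goodness-of-fit statistic, and plot the observed curve against the extrapolations.
import numpy as np
import matplotlib.pyplot as plt
from lifelines import (KaplanMeierFitter, WeibullFitter, ExponentialFitter,
                       LogNormalFitter, LogLogisticFitter)

rng = np.random.default_rng(1)
durations = rng.weibull(1.4, size=200) * 24    # follow-up times in months (invented)
observed = rng.random(200) < 0.7               # roughly 30% of patients censored

ax = plt.subplot(111)
KaplanMeierFitter().fit(durations, observed, label="Observed (Kaplan-Meier)") \
                   .plot_survival_function(ax=ax)

horizon = np.linspace(0.1, 120, 200)           # extrapolate well beyond the observed data
for model in (WeibullFitter(), ExponentialFitter(), LogNormalFitter(), LogLogisticFitter()):
    name = type(model).__name__.replace("Fitter", "")
    model.fit(durations, observed)
    ax.plot(horizon, model.survival_function_at_times(horizon), label=name)
    print(f"{name:12s} AIC = {model.AIC_:.1f}")  # lower AIC indicates better statistical fit

ax.set_xlabel("Months")
ax.set_ylabel("Survival probability")
ax.legend()
plt.show()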