Wednesday 14 September 2011

Why say no? 2. Cost per QALY too high

Reminder: these opinions are my own, not those of SMC.

From the previous post, you will know I am exploring reasons why SMC said no over a 12-month period covering 2010-11.  I was surprised to find the number one reason for saying no was that the company declined to submit any evidence; that explained 21 of the 56 'not recommended' decisions (one further submission did not include an economics case, so that makes 22 if we restrict attention to cost-effectiveness).

What was the next biggest problem for the remaining 35?  Again, I was surprised (I am easily shocked, as you can tell!) because in 16 cases the submission either included a ‘base case’ cost per QALY that exceeded £30,000 or presented a cost-minimisation analysis showing the new medicine was not the cheapest option.

The SMC has always made clear that £30k per QALY is NOT an absolute cut-off.  If cost-effectiveness is to be taken seriously then everyone involved has to have an idea of what levels are broadly acceptable and what levels are less acceptable.  I think everyone involved in HTA committees would say this is not a simple “above this figure bad, below this figure good” situation.  SMC has made a statement about the sorts of factors it will also consider alongside ‘cost per QALY’.
Having said that other factors have a role to play, it might be expected that they will carry more weight when the cost per QALY is £31k than when it is £70k; in other words, there is likely to be a trade-off involved.

Of the 16 cases I mentioned, 14 were for cancer medicines, suggesting that companies were hoping the additional factors to be considered alongside cost per QALY would come into play.  On some occasions these did persuade SMC that a higher cost per QALY could be accepted.  I do not have a figure for this, but my personal opinion is that it would be fewer than 14 cases in a year, so I would conclude that a submission strategy of presenting a high cost per QALY and hoping the modifiers will be applied worked in less than half of these cases during this period.

As a footnote, only one submission during this period presented a cost-minimisation analysis in which the new medicine was not the cheapest option.  Picking up on the last point above, there is a world of difference between being a few pennies more expensive than the competition and being thousands of pounds more expensive.  For example, if a new medicine seems non-inferior on clinical criteria but the total cost over a lifetime is £800 whereas with the existing treatment it is £799, then my inclination is to say that in economic terms it is ‘non-inferior’ as well.  My inclination would then be to let the case past and leave local formularies to decide which to use.

Why say no? 1. Non-submissions

Just the standard reminder before I start: these are my personal opinions and interpretations, not those of the SMC.

Once upon a time in a galaxy far, far away I wrote a Briefing Note for SMC members on the top 10 reasons why submissions fail.  While SMC’s formal evidence requirements haven’t changed since then, everyone involved will recognise that ‘norms’ for expected evidence do move over time – our attitude to indirect comparisons being one example and refinements in techniques for extrapolation being another.

So what would a more recent ‘top 10 reasons’ list look like?  I picked a recent 12 month period (covering 2010-2011) and looked at the SMC guidance. 

Non-submissions
I was surprised to find that SMC had issued 56 pieces of ‘not recommended’ guidance in that period, as that is over 4 per month and the monthly meetings do not feel that negative.  However, it made more sense when I found out that the number one reason for the ‘not recommended’ guidance was that the company had declined the opportunity to make a submission.  Non-submissions do not involve a discussion at the monthly meeting, so given that 21 of the 56 fitted that description, a total of 35 ‘not recommended’ decisions following an in-depth discussion (around 3 per month) feels about right.

But before we move on, why do companies not submit?  My guess would be there are a variety of reasons.  One could be that this is a new indication or licence extension that the company does not especially wish to pursue in terms of promotion, for whatever reason (e.g. the patent is about to expire).  Another possibility is that a company has assessed the case it can present, predicts a ‘not recommended’ decision, and decides not to commit to the costs involved.  A third possibility is that the indication covers only a few patients and the company feels the money would not be well spent on a submission, as it is happy to rely on local funding requests for individual patients.

In some situations, non-submissions are very efficient.  If prescribers in Scotland are not very interested in a medicine, why would the company incur costs and the SMC use up its time reviewing the case?  The key issue is whether the local NHS and prescribers have the information they need to do their job, and I can see no evidence they feel this is a problem in the vast majority of cases.

Do non-submissions indicate the SMC’s submission process is too difficult?  I don’t think they do – there is a basic requirement to put the company’s clinical effectiveness and cost-effectiveness evidence in the public domain.  Is that really too difficult to do?  It could be done more concisely, but why should some companies have to make a full submission when others do not?  If any submissions could be cut back then, in my own view, it should be the ones where we have another addition to a therapeutic class at similar cost; it’s not usually these medicines that are non-submissions.

One concern would be if a medicine for a rare condition is left to local funding request panels and they make different decisions across Scotland.  On one level this would be understandable, in that local health boards may have different spending priorities; on another level, postcode-based access to medicines would be difficult to defend.

Part of the answer is, of course, for companies to make a submission to SMC, who will then make an assessment of their evidence available to all concerned: this shared starting point for debate is likely (in my view) to reduce variation between local panels making decisions.

Thursday 1 September 2011

Nobel Prize?

I hope you, readers, are enjoying the blog.  Just in passing I note the nomination forms for the next Nobel Prize in Economics have just been sent out:
http://www.nobelprize.org/nobel_prizes/economics/nomination/
and I know I can count on you all to point the judges in the right direction.  Nuff said?

Are new medicines good value-for-money?

Is the published economic evaluation literature WORTH the effort of systematically reviewing?  I tend to agree with my colleague, Professor Stirling Bryan, who said, 'I do think that all the published economic evaluations should be gathered in one place somewhere.  And then a match should be thrown on the pile.'  So many published studies are either flawed, past their best-before date or won’t generalise to your local decision-making context.  With RCTs, the meta-analysis approach assumes that if we take a systematic overview we can distil drops of pure wisdom.  My colleague and I would contend that if we distilled published economic studies then we would end up with drops of something, but we’re pretty sure it wouldn’t be wisdom.

You therefore have to question the basis of a recently published study attempting to answer the question “What is the value for money of medicines?”  (Please read the original article in the Journal of Clinical Pharmacy and Therapeutics 2011, e-pub ahead of print on 4th August.)  Having said that, I enjoyed the paper more than I expected – the discussion mentions most of the criticisms I would have made, although I would have made more of them.  It also doesn’t attempt overly complicated statistical analysis of its somewhat flawed data set.

Briefly, the author (Simoens) reviewed the Tufts Cost-Effectiveness Analysis Registry and extracted economic evaluations of medicines that were cost-utility analyses, published in English between 2000 and 2007, and based their clinical data on European patients.  Looking at the range of cost per QALY figures, the author concluded there was an 81% chance the overall cost-effectiveness was less than €50k, with cardiovascular medicines (median €5,167/QALY) and medicines for infectious diseases (median €5,643/QALY) looking particularly good value. (Figures are in 2008 euros, having been updated in line with household inflation and converted using exchange rates.)
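
As an aside for the analytically minded, here is a minimal sketch of how a ‘probability of cost-effectiveness’ figure like the 81% can be read off a set of published estimates.  I am assuming (my assumption, not necessarily Simoens’ method) that it is simply the proportion of extracted cost-per-QALY estimates falling below the chosen threshold; the numbers below are invented purely for illustration.

# Sketch: proportion of published cost-per-QALY estimates below a threshold.
# Assumption (mine, not necessarily the paper's method): the 'probability of
# cost-effectiveness' is simply the share of estimates under the threshold.
# The figures are invented for illustration only.

cost_per_qaly_eur = [4800, 5200, 11500, 19900, 23000, 31950, 48000, 62000]

def prob_cost_effective(estimates, threshold_eur):
    """Proportion of estimates at or below the willingness-to-pay threshold."""
    return sum(1 for x in estimates if x <= threshold_eur) / len(estimates)

for threshold in (20000, 50000):
    p = prob_cost_effective(cost_per_qaly_eur, threshold)
    print(f"P(cost/QALY <= EUR {threshold:,}) = {p:.0%}")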

Of course you will be thinking all sorts of things:
  • €50k/QALY sounds high, what’s the probability of cost-effectiveness at a more realistic level like €20k/QALY? (answer: 58%, barely more than a 50:50 chance)
  • What was the quality of the methods used in these studies? (answer: only 14% had a quality score from Tufts of 6 or 7 out of a maximum of 7)
  • Who sponsored these studies? (answer: 63% were industry funded)
  • Were the studies in any way representative of medicines actually used in practice? (answer: no analysis)
  • Was there any analysis of the cost-effectiveness of non-medicines?  (answer: no)
  • Was there any allowance for publication bias? (answer: no)

But if you put your nose in the air and stopped reading you would miss a few gems.

First, we know the English government has expressed support for including factors in economic evaluation beyond the usual health care / social care perspective.  In Simoens’ sample he could compare the median cost per QALY for medicines using a societal perspective and an NHS perspective: they were virtually the same (€11,218 for societal, €11,558 for NHS).  So the societal perspective makes less difference than you might think?

Second, as we might suspect, industry-funded studies gave lower cost per QALY figures than studies sponsored by other sources.  What did surprise me was the size of the difference: medians of €9,518/QALY versus €21,756/QALY.  We aren’t comparing studies of the same medicines, of course, but there does seem to be a case for suspecting publication bias, ‘optimism bias’ or both in industry-sponsored studies.  Simoens notes 63% of studies were industry-sponsored, but in a further 18% funding was unclear or not declared, so the true figure could be even higher.

Third, Simoens recorded the Tufts rating of the quality of the methods used in each study, from 7 (the best) to 1 (not the best).  Studies rated 1 to 5 for quality had a median cost per QALY of €10,878, whereas those rated 6 or 7 had a median cost per QALY of €31,954.  That’s just under £28,000/QALY, and if we allow for a rise in prices between 2008 and 2011 it sounds very close to £30k/QALY (which may be unaffordable in the first place).  So is another take on the data that, if we restrict it to the most credible studies, medicines are barely cost-effective?

In summary, thanks to Steve Simoens for undertaking this work.  I can’t agree with the main conclusions but I do think it raises some very interesting points for discussion.

I just wanted to fit a Weibull

In my day, public information films were about the Green Cross Man and the fairy godmother saying "Learn to swim, young man, learn to swim!"  Times have changed and now children as young as three are being shown Nick "Oscar" Latimer's new production:

http://www.xtranormal.com/watch/12276585/i-just-wanted-to-fit-a-weibull
I am a massive fan of any way in which we economists can make our point other than through slabs of text and I am delighted Nick gave permission to paste the link – thank you.  The conversation between the rabbit and the dog does stir uncomfortable memories of a few meetings with pharma companies though ...

For what it's worth, I endorse the rabbit's view - the economic model in an HTA submission should test out several different ways of extrapolating the data and present statistics such as goodness-of-fit, plus graphs showing the observed data versus the plotted extrapolations for visual inspection.  I have seen submissions where companies have argued that only one form of extrapolation is appropriate, but in general I would say it is best to use the rabbit's approach in the base case and then make any 'special case' in the sensitivity analysis.
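
For anyone who wants to see what the rabbit's approach might look like in practice, here is a minimal sketch (in Python, with invented data) of fitting exponential and Weibull curves to right-censored survival data by maximum likelihood and comparing them on AIC as a goodness-of-fit statistic.  In a real submission you would try more distributions (log-normal, log-logistic, Gompertz and so on) and plot each fitted curve against the Kaplan-Meier data for visual inspection.

# Sketch: fit exponential and Weibull models to right-censored survival data
# by maximum likelihood and compare them with AIC.  Illustrative data only.
import numpy as np
from scipy.optimize import minimize

# times in months; event = 1 if death observed, 0 if censored (invented data)
t = np.array([2.0, 3.5, 5.0, 6.1, 7.4, 8.0, 9.3, 12.0, 15.5, 18.0])
event = np.array([1, 1, 1, 0, 1, 1, 0, 1, 0, 0])

def weibull_neg_loglik(params):
    """Negative log-likelihood for a Weibull with scale lam and shape k."""
    log_lam, log_k = params                  # optimise on the log scale so both stay positive
    lam, k = np.exp(log_lam), np.exp(log_k)
    log_h = np.log(k) - np.log(lam) + (k - 1) * (np.log(t) - np.log(lam))  # log hazard
    log_S = -(t / lam) ** k                                                 # log survival
    # events contribute log f = log h + log S; censored observations contribute log S
    return -np.sum(event * log_h + log_S)

def exponential_neg_loglik(params):
    """Exponential model: constant hazard, i.e. a Weibull with shape fixed at 1."""
    lam = np.exp(params[0])
    return -np.sum(event * (-np.log(lam)) - t / lam)

fits = {
    "Weibull":     minimize(weibull_neg_loglik, x0=[np.log(10.0), 0.0]),
    "Exponential": minimize(exponential_neg_loglik, x0=[np.log(10.0)]),
}
for name, res in fits.items():
    aic = 2 * len(res.x) + 2 * res.fun       # AIC = 2 * parameters - 2 * log-likelihood
    print(f"{name}: AIC = {aic:.1f}")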

Monday 29 August 2011

Composite endpoints in RCTs: what are they worth?

Composite endpoints in Phase 3 trials – doncha just love them?  Well I don’t.  As has been pointed out elsewhere, in terms of proving value in an economic evaluation, the first thing I want to do is pick the composite apart because I want to convert each bit into QALYs and savings to understand how that compares against the added cost.

An article has just been published that goes some way to illustrating this point:
Hong K-S, Ali L, Selco S, Fonarow G, Saver J.  Weighting components of composite end points in clinical trials: an approach using disability-adjusted life-years.  Stroke 2011; 42: 1722-1729.

You need to read the original article, but in brief they have focused on vascular endpoints and converted the common components into DALYs lost as follows:
7.63 DALYs lost per non-fatal stroke
5.14 DALYs lost per non-fatal MI
11.59 DALYs lost per vascular death
In DALY terms, therefore, if a non-fatal MI = 1 then a non-fatal stroke = 1.48 and vascular death = 2.25.

As a QALY-orientated economist my ideal would have been if they had used QALYs instead of DALYs, but I can understand that the DALY disease weightings are more accessible than disutilities in QALY studies.  If you intend to use these results you also need to understand the assumptions made – events happen at age 60, US life expectancy data, a 3% discount rate, an assumption that good health for older people is worth less than good health for younger people, and so on.  As I said, you have to read the article!

But with those gripes aside I think this is a fantastic illustration of the issue.  I’d like to take it one stage further because Hong and colleagues were thinking as clinicians and trying to produce a measure of health effect alone, whereas I would be interested in savings as well.  Suppose I work in a system that is willing to pay £20,000 per QALY, and let’s assume for present purposes that DALYs and QALYs are roughly equivalent.  Just to illustrate, let’s assume that the lifetime discounted costs of managing the events are as follows:
Non-fatal stroke £20,000
Non-fatal MI £4,000 without PCI, £10,000 with PCI
Vascular death £5,000
Then, at £20,000 per QALY, these savings are worth 1, 0.2 to 0.5, and 0.25 QALYs respectively.

Adding these back into Hong et al’s figures, we get:
Non-fatal stroke = 7.63 (health) + 1 (saving) = 8.63
Non-fatal MI = 5.14 (health) + either 0.2 or 0.5 (saving) = 5.34 to 5.64
Vascular death = 11.59 (health) + 0.25 (saving) = 11.84
Using the higher value of 5.64 for non-fatal MI and setting that to 1, the ratios are 1.53 (non-fatal stroke) and 2.1 (vascular death).
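
Purely to show the arithmetic, here is the calculation above as a small script.  The DALY figures are Hong et al’s; the lifetime costs and the £20,000 per QALY threshold are my illustrative assumptions, and I have used the ‘with PCI’ cost for the MI.

# Sketch of the back-of-envelope weighting above: Hong et al's DALY losses plus
# my illustrative lifetime costs converted to QALY-equivalents at a willingness
# to pay of £20,000 per QALY (treating DALYs and QALYs as roughly equivalent).
WTP_PER_QALY = 20000  # £, illustrative assumption

dalys_lost = {"non-fatal stroke": 7.63, "non-fatal MI": 5.14, "vascular death": 11.59}
lifetime_cost = {"non-fatal stroke": 20000, "non-fatal MI": 10000, "vascular death": 5000}  # £, MI with PCI

total_weight = {
    ev: dalys_lost[ev] + lifetime_cost[ev] / WTP_PER_QALY  # health loss + saving expressed in QALYs
    for ev in dalys_lost
}
mi_weight = total_weight["non-fatal MI"]
for ev, w in total_weight.items():
    print(f"{ev}: total = {w:.2f}, ratio vs non-fatal MI = {w / mi_weight:.2f}")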

I’m a little surprised, as my intuition would be that there is a bigger gulf between non-fatal MI on the one hand and non-fatal stroke and vascular death on the other.  I don’t perceive the disability consequence of a non-fatal MI to be anywhere near that of a stroke.  Vascular death also seems to me a major loss of DALYs or QALYs – losing years of life at 0.6 or 0.7 quality – whereas an MI might be the difference between a quality of life of 70% and 60% over a decade.

But that is to lose sight of the main point of this article which is to put in the public domain something to get this sort of debate started.  Thank you to Hong and colleagues!


Wednesday 24 August 2011

Utility values for diabetes

The search for consistency (and its desirability)

One of the emerging themes of this blog is the extent to which we can standardise aspects of producing HTA evidence, and a paper I have just read by Lung et al is an illustration:

Lung TW, Hayes AJ, Hayen A, Farmer A, Clarke PM.
Qual Life Res. 2011 Apr 7. [Epub ahead of print]

This team carried out a literature search (so thorough it has me worrying for their psychological stability) to identify studies in people with diabetes that used one of the QALY-compatible preference measures such as EQ-5D or an SF measure, or that used time trade-off or standard gamble.

They report huge ranges in the values obtained, from a humble 14 points for diabetes with no complications (i.e. lowest was 0.74, highest was 0.88) through to 48 points for stroke and end-stage renal disease (ESRD).  It’s obvious that stroke, for example, would depend on severity, and ESRD might depend on whether the person required dialysis and, if they did, whether it was hospital or home based.  However, if an HTA organisation accepts published values from the literature, say because a study used its preferred utility elicitation technique, it has handed a substantial element of choice to the people writing the HTA submission.  For ESRD in diabetes will we use a utility value of 0.33 citing source study A or 0.81 citing source study B?

Using stats techniques I am too dull to understand, they then carried out two analyses that I will pick out: a random-effects meta-analysis (MA) and a random-effects meta-regression (MR).

The MA gives a point estimate of the mean utility value across the studies but, just as importantly, it provides a 95% confidence interval (and sample size) ideal for use in sensitivity analysis.  For example, the diabetes with no complications state had values from individual studies ranging from 0.74 to 0.88, but in the MA the mean value was 0.81, 95% confidence interval 0.78 to 0.84.  Fantastic – as a reviewer of HTA submissions, that is so helpful.
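
For the curious, here is a rough sketch of how a pooled value and confidence interval of that sort can be produced, using the standard DerSimonian-Laird random-effects approach with invented study-level means and standard errors.  I am not claiming this is exactly the method Lung et al used, but it gives the flavour.

# Sketch: DerSimonian-Laird random-effects pooling of study-level utility values.
# The study means and standard errors are invented for illustration.
import numpy as np

means = np.array([0.74, 0.80, 0.82, 0.85, 0.88])   # mean utility, diabetes with no complications (invented)
se    = np.array([0.03, 0.02, 0.04, 0.02, 0.03])   # standard errors (invented)

w_fixed = 1 / se**2                                 # fixed-effect (inverse-variance) weights
mu_fixed = np.sum(w_fixed * means) / np.sum(w_fixed)
Q = np.sum(w_fixed * (means - mu_fixed) ** 2)       # Cochran's Q heterogeneity statistic
df = len(means) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))  # between-study variance

w_re = 1 / (se**2 + tau2)                           # random-effects weights
mu_re = np.sum(w_re * means) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"Pooled utility = {mu_re:.2f} (95% CI {mu_re - 1.96*se_re:.2f} to {mu_re + 1.96*se_re:.2f})")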

In the MR they analyse how much of the variation between the estimates in different studies can be explained by measured features such as the age of the patients, their sex, and the elicitation technique.  Older age and being female led to lower utilities (<<insert joke of your choice>>), and, of the elicitation techniques, TTO and SG (combined) gave higher values than EQ-5D which, in turn, gave higher values than HUI-3 and SF-6D (combined).
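
And, in the same hedged spirit, a meta-regression is essentially an inverse-variance-weighted regression of the study-level utility values on study characteristics.  Below is a minimal weighted least squares sketch with invented data, using mean age and whether EQ-5D was the instrument as the covariates.

# Sketch: weighted least squares meta-regression of utility on study covariates.
# Invented data: mean utility, its standard error, mean age, and instrument used.
import numpy as np

utility = np.array([0.74, 0.80, 0.82, 0.85, 0.88])
se      = np.array([0.03, 0.02, 0.04, 0.02, 0.03])
age     = np.array([68, 62, 65, 58, 55])            # mean age in each study (invented)
eq5d    = np.array([0, 1, 0, 1, 1])                 # 1 = EQ-5D, 0 = other instrument (invented)

X = np.column_stack([np.ones_like(utility), age, eq5d])  # intercept, age, instrument
w = 1 / se**2                                            # inverse-variance weights
# Weighted least squares: solve (X'WX) beta = X'Wy
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * utility))
print(f"Intercept {beta[0]:.3f}, per year of age {beta[1]:+.4f}, EQ-5D vs other {beta[2]:+.3f}")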

Tom Lung and team, take a bow, my grateful thanks.  The only other study like this I am aware of is in stroke:

Tengs TO, Lin TH.
Pharmacoeconomics. 2003;21(3):191-200.

Are other people aware of similar studies?  And how should we use them? 

Clearly, the ideal is still that companies measure quality of life in their trials in a way that is compatible with QALYs.  However, in this ‘second best’ situation, I think there is a strong case for making values from these meta-analyses the default settings for an HTA submission.  Of course I would be interested in listening to a company’s arguments for why this should not apply to its particular submission – for example, suppose it could be shown that for a particular treatment the ESRD experienced secondary to diabetes was always of a milder type than a utility value of 0.48 (from Lung et al’s meta-analysis) would imply.

But what we all need to get away from is a situation where an HTA submission can select between two hugely different utility values and cite a supporting reference with equal authority.  This should work for companies as well, as it gives them greater certainty when they are estimating the likely cost per QALY at an early stage of a product’s life, and it may save them some money on commissioning their own utility surveys.