
Tuesday 9 April 2013

Online comparative effectiveness research decision making tool

An online assessment tool aimed at helping payers evaluate comparative effectiveness research (CER) studies is now available. Developed by the CER Collaborative, a group comprising the Academy of Managed Care Pharmacy (AMCP), the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the National Pharmaceutical Council (NPC), the tool is intended to help payers assess the credibility and relevance of individual research studies when making formulary decisions.

Our first quick look at “Assessing the Evidence for Health Care Decision Makers” suggests that this is an accessible, easy-to-use tool. Registration is simple, entering study details is straightforward, and the assessment questions are clearly worded. Your account also stores all your past assessments, which makes it a useful personal resource.

Underpinning it all are sets of questions that allow you to assess the relevance and credibility of a research study. There are separate sets of questions for four types of study: modelling studies, network meta-analysis/indirect treatment comparison studies, prospective observational studies and retrospective observational database studies. For modelling studies, the four questions used to assess relevance are:

1.    Is the population relevant?
2.    Are any critical interventions missing?
3.    Are any relevant outcomes missing?
4.    Is the context (settings and circumstances) applicable?

Each of these also has a set of sub-questions. For example, “Is the population relevant?” is broken down into: Are the demographics similar? Are the risk factors similar? Are behaviors similar? Is the medical condition similar? Are co-morbidities similar?

The eleven questions used to assess the credibility of modelling studies are:
1.    Is external validation of the model sufficient to make its results credible for your decision?
2.    Is internal verification of the model sufficient to make its results credible for your decision?
3.    Does the model have sufficient face validity to make its results credible for your decision?
4.    Is the design of the model adequate for your decision problem?
5.    Are the data used in populating the model suitable for your decision problem?
6.    Were the analyses performed using the model adequate to inform your decision problem?
7.    Was there an adequate assessment of the effects of uncertainty?
8.    Was the reporting of the model adequate to inform your decision problem?
9.    Was the interpretation of results fair and balanced?
10.    Were there any potential conflicts of interest?
11.    If there were potential conflicts of interest, were steps taken to address these?

As with the relevance questions, each of these has further sub-questions. For example, the external validation question is accompanied by: Has the model been shown to accurately reproduce what was observed in the data used to create it? Has the model been shown to accurately estimate what actually happened in one or more separate studies? Has the model been shown to accurately forecast what eventually happens in reality?

Whilst designed as a tool to help with formulary decisions, it has clear overlaps with more general critical appraisal methods and could also be used as a CER teaching tool.

Photo credit: CarbonNYC via Flickr Creative Commons