Congratulations to our own Dr Marrissa Martyn-St James, who won the Best Oral Presentation Award at HTAi 2019 with her talk on Developing a Decision Tool for Selecting Approaches for Rapid Reviews. The details of the talk are below, and you can contact Marrissa at m.martyn-stjames@sheffield.ac.uk.
Authors: Abdullah Pandor, Eva Kaltenthaler, Marrissa Martyn-St James (presenter), Ruth Wong, Katy Cooper, Munya Dimairo, Alicia O’Cathain, Fiona Campbell, Andrew Booth
A modified online Delphi survey was conducted to reach consensus on items in the STARR (SelecTing Approaches for Rapid Reviews) decision tool for adapting review processes. Following pilot testing of the tool, a panel of 30 experts was recruited. Delphi consensus (≥70% agreement) on the 20 items in the final STARR decision tool was reached at Round 2.
Introduction:
Rapid reviews are of increasing importance within evidence synthesis and health technology assessment (HTA) due to the need for timely evidence to underpin the assessment of new technologies. Financial constraints have also contributed to the increase in rapid reviews. Many rapid review methods are available; however, there is little definitive guidance on which methods are most appropriate. We developed the STARR (SelecTing Approaches for Rapid Reviews) decision tool for adapting review processes through a modified Delphi approach. The STARR tool was initially developed around our work in rapid reviews.
Method:
A modified online Delphi survey was conducted in May 2018. The STARR tool covers four themes: interaction with commissioners/policy-makers, scoping and searching the evidence base, data extraction and synthesis methods, and reporting of rapid review methods. The Delphi included 8 questions on theme headers and descriptors plus 11 questions on items in the tool. Following independent pilot testing, a panel of 30 experts who had published rapid reviews or been involved in developing their methods was invited to complete an online questionnaire through an iterative process that included suggestions for changes. The importance of each item was rated on a scale of 1 (not important) to 9 (critically important). Consensus was defined as ≥70% of respondents rating an item ≥7. All responses were collected in Delphi Manager® software.
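To make the consensus rule concrete, here is a minimal illustrative sketch in Python of how it could be computed. This is our own sketch for clarity, not the authors' analysis (which was carried out in Delphi Manager® software); the function name and example ratings are hypothetical.

    def reaches_consensus(ratings, agreement=0.70, cutoff=7):
        """Return True if >=70% of respondents rated the item >=7 on the 1-9 scale."""
        if not ratings:
            return False
        agreeing = sum(1 for r in ratings if r >= cutoff)
        return agreeing / len(ratings) >= agreement

    # Hypothetical example: 24 respondents, 18 of whom rate the item 7 or higher.
    item_ratings = [9, 8, 7, 7, 8, 9, 7, 8, 7, 9, 8, 7,
                    7, 8, 9, 7, 8, 7, 5, 6, 4, 6, 5, 3]
    print(reaches_consensus(item_ratings))  # True: 18/24 = 0.75 >= 0.70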
Results:
Consensus was reached at Round 1 (30 respondents). However, following Round 1, one additional item on quality assessment was suggested and added, and suggestions were made to improve the clarity and understanding of the decision tool. Final consensus on the 20 STARR items was reached after Round 2 (24 respondents).
Conclusions:
Delphi consensus on the final STARR decision tool was reached after two rounds. Roll-out of the STARR decision tool is now planned to help review teams plan rapid reviews.