Heoro.com can save time and money in finding studies on the humanistic and economic burden of disease and economic models of interventions. A frequent question we were asked at ISPOR DC was how well heoro.com compares to systematic reviews conducted from scratch. Here, we investigate…
We developed heoro.com after years of replicating near-identical burden of illness searches and literature reviews for pharmaceutical clients, and of repeatedly finding the same data for colleagues who needed it for economic model parameterisation, PRO tool development or market access dossiers.
The problem with conducting such literature reviews from scratch is the time it takes to set up a disease- and study-type-specific search, and then to screen the thousands of abstracts such a search typically returns. Screening is often delegated to junior staff, whose relative inexperience means relevant studies are missed while less relevant ones are retrieved in full text, only to be excluded later by more experienced staff.
Heoro.com was designed the other way round: we ran a systematic search of PubMed from 2005 onwards for PRO studies, economic models and costs-resource use studies, across all diseases and interventions. Human experts with health economics and outcomes research training and clinical knowledge then indexed a training set of 10,000 of the 100,000+ abstracts the search identified. Each abstract is tagged to a detailed ontology of diseases, interventions and PRO instruments, and indexed by geographical location and study type:
- PRO studies: general, validation and utility studies
- Economic models: cost-effectiveness, cost-benefit, cost-utility and other types of economic models
- Costs-resource use: direct costs, indirect costs, resource use, treatment patterns and adherence
We used our training set of 10,000 indexed abstracts to train natural language processing software, which then indexed the remaining 90,000 abstracts with an average accuracy of 95%. Users of heoro.com can therefore instantly find relevant abstracts by searching for the disease, intervention, study type and geographical location of interest.
The specificity of our detailed ontology mapping means that users don’t have to wade through hundreds of irrelevant abstracts to find those they need. The accuracy of our software indexing means that our database is comprehensive, with users able to access all relevant abstracts from the 100,000 found by our initial search.
But how well does heoro.com compare with burden of illness reviews conducted from scratch? As we were at ISPOR DC, we decided to start there in evaluating our database.
We identified poster abstracts in the congress proceedings that reported targeted or systematic reviews of quality of life, costs or resource use, treatment adherence, or economic models of interventions, where the number of relevant papers was reported in the abstract. We then searched heoro.com for the same topics, to see how many relevant papers could be found from our database of indexed PubMed studies.
We found 30 relevant abstracts. Of these, 9 searched for studies within the 2005-2016 period covered by heoro.com; 8 of the 9 reported the number of studies included after abstract screening, and 4 also reported the number of abstracts found by their original search.
We then used heoro.com to identify studies on the same topic for each of these 8 reviews. The comparison is shown in the table below. For three of the reviews, heoro.com identified more studies than the original review, and for a further three it identified between 78% and 93% of the studies included in the original review; in the two examples where the percentage was below 90%, heoro.com fell short by only 1 or 2 papers.
Heoro.com performed less well in 2 of the 8 reviews, finding fewer than half the papers identified by the original reviews' searches of multiple databases. Even in these cases, however, heoro.com could still be helpful, surfacing some relevant papers in substantially less time than the original abstract screening would have taken: for one review, we estimate a saving of 277 hours of screening time while finding 43% of the papers that review included.
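To make the screening-time arithmetic concrete, here is one hypothetical set of inputs that would yield a saving of roughly the 277 hours mentioned above. The abstract counts and per-abstract screening time below are illustrative assumptions, not figures from the evaluation:

```python
MINUTES_PER_ABSTRACT = 5  # assumed average time to screen one abstract

def screening_hours(n_abstracts, minutes_per_abstract=MINUTES_PER_ABSTRACT):
    """Total screening time in hours for a set of abstracts."""
    return n_abstracts * minutes_per_abstract / 60

# Hypothetical: a de-novo search returning 3,500 abstracts, versus a
# pre-filtered heoro.com query returning 180 candidates to screen.
saved = screening_hours(3500) - screening_hours(180)
print(f"Estimated hours saved: {saved:.0f}")  # Estimated hours saved: 277
```

The real saving for any given review depends on how many abstracts its search returned and how quickly the team screens, so estimates like this should always state their assumptions.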