Commentary on USAID Position Paper: Cost-Effectiveness
In October 2024, USAID published its first Position Paper on Cost-Effectiveness, stating that the Agency’s vision is the widespread use of cost-effectiveness evidence during program design. We welcome the direction, and we commend the authors for a technically sound document. Here, we give suggestions for how the recommendations could be implemented and provide a few areas of caution.
The USAID position paper places great emphasis on impact evaluation as a means of determining the ‘effect’ side of a cost-effectiveness analysis. For instance, it states that ‘assessment of future cost-effectiveness of a possible intervention should be based on data which can credibly estimate impact, i.e., impact evaluations’ (page 7). An impact evaluation will, however, not be sufficient for measuring the overall effects of an intervention. First, there are risks of confounding, which can only be adequately minimized by using randomized trial or case-control study designs. Second, impact evaluations generally cover a relatively short time span, making them inadequate for cost-effectiveness analyses, where a lifetime horizon is recommended. Data from an impact evaluation can undoubtedly serve as important input parameters in the analysis, but they will not provide the full answer. Instead, decision-analytic models are unavoidable in cost-effectiveness analysis[1]. A model is needed for predicting final health impacts, for combining cost and effect estimates, for accounting for the different time periods of input values, and for adjusting for uncertainty. While economic evaluation guidelines are not specific about which type of modeling approach should be used, the need for modeling is clear[2].
The position paper does not mention that the relevance of cost-effectiveness analysis varies considerably between sectors. USAID investments benefit a wide range of economic and social sectors, including agriculture, health, gender, and education, but the robustness of cost-effectiveness methodologies is far from equal across these fields. Health is arguably the sector where cost-effectiveness analysis is most prominently used, with the number of economic evaluations of health interventions having grown exponentially over the past three decades[3]. This success is largely due to the development of robust methods for measuring quality-adjusted life years (QALYs), which enable researchers to compare intervention effects across distinct disease areas and age groups. Such refined outcome measures do not exist in other sectors, making cost-effectiveness analysis either more difficult or infeasible. Cost-benefit analysis, where impacts are measured in dollar values, is the methodology of choice in the agriculture and construction sectors. Sector-specific recommendations would be useful companions to the position paper.
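A small illustration of why QALYs enable the cross-sector comparisons described above. The numbers below are hypothetical; the point is that a quality-of-life gain and a survival gain land on the same scale:

```python
# Hypothetical illustration of the QALY as a common outcome scale.
def qalys(life_years, utility):
    """Quality-adjusted life years: life years weighted by a health utility in [0, 1]."""
    return life_years * utility

# Two made-up interventions in different disease areas:
# one improves quality of life at unchanged survival,
# the other extends survival at unchanged quality of life.
quality_gain = qalys(10, 0.9) - qalys(10, 0.7)
survival_gain = qalys(12, 0.8) - qalys(10, 0.8)

# Both gains are expressed in QALYs and can be compared directly.
print(quality_gain, survival_gain)
```

No comparably standardized utility weight exists for, say, an agricultural yield gain and a girls' education outcome, which is why dollar-valued cost-benefit analysis dominates in those sectors.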
The position paper recommends that USAID programs should seek to use existing cost-effectiveness evidence during the design process and that new impact evaluations, with cost analysis, should be prioritized primarily in cases where sufficient cost-effectiveness evidence does not yet exist (see Box 1, page 3). This recommendation can seem contradictory. First, the level of evidence used in the design phase should not limit the need for assessing the impact of the intervention implemented with USAID funding, since without measurement we cannot know to what extent its results align with those of other interventions. Second, cost-effectiveness evidence is inherently context-specific because most of the variables driving the result differ between settings. In particular, the costs of interventions vary due to different standards and different unit costs, such as for salaries, medicines, and energy consumption. While numerous ‘global’ cost-effectiveness studies have been published, decision makers rarely choose to rely on these. They value studies where data have been collected in their specific setting, enabling them to recognize implementation structures and data sources[4]. A systematic review of health policymakers’ perceptions of their use of evidence found that the most frequently mentioned facilitator of using research evidence in policymaking was ‘personal contact between researchers and policymakers’, and, correspondingly, the most frequently mentioned barrier was ‘absence of personal contact between researchers and policymakers’[5]. Hence, if the aim is that governments should take over the financial responsibility for interventions implemented by USAID, it would be important to collect local cost and outcome data alongside the intervention and use these as parameter values in a cost-effectiveness analysis.
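The sensitivity of cost estimates to local unit costs can be sketched numerically. In the hypothetical example below, the same program design (identical input quantities) is costed with two made-up sets of unit prices, standing in for a ‘global’ estimate versus locally collected data:

```python
# Hypothetical illustration: identical intervention quantities,
# re-costed with setting-specific unit costs. All figures invented.
quantities = {"nurse_hours": 500, "medicine_doses": 2000, "fuel_litres": 300}

unit_costs_global = {"nurse_hours": 12.0, "medicine_doses": 1.5, "fuel_litres": 1.2}
unit_costs_local = {"nurse_hours": 4.0, "medicine_doses": 1.1, "fuel_litres": 0.9}

def total_cost(quantities, unit_costs):
    """Sum of quantity x unit cost over all line items."""
    return sum(q * unit_costs[item] for item, q in quantities.items())

print(f"Global unit costs: {total_cost(quantities, unit_costs_global):.0f}")
print(f"Local unit costs:  {total_cost(quantities, unit_costs_local):.0f}")
```

Because salaries and other unit costs can differ severalfold between settings, the ‘cost’ side of a transferred cost-effectiveness ratio can be off by a similar factor, which is one reason decision makers prefer locally collected data.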
[1] Buxton, M. J., et al. (1997). Modelling in economic evaluation: an unavoidable fact of life. Health Economics, 6(3): 217–227.
[2] Griffiths, U. K., et al. (2016). Comparison of Economic Evaluation Methods Across Low-income, Middle-income and High-income Countries: What are the Differences and Why? Health Economics, 25(Suppl 1): 29–41.
[3] Center for the Evaluation of Value and Risk in Health. The Cost-Effectiveness Analysis Registry [Internet]. Boston: Institute for Clinical Research and Health Policy Studies, Tufts Medical Center. Available from: www.cearegistry.org (accessed 26 Nov 2024).
[4] Hoffmann, C., et al. (2002). Do health-care decision makers find economic evaluations useful? The findings of focus group research in UK health authorities. Value in Health, 5: 71–78.
[5] Innvaer, S., et al. (2002). Health policy-makers’ perceptions of their use of evidence: a systematic review. Journal of Health Services Research & Policy, 7: 239–244.