
Disentangling the messiness of natural experiments to evaluate public policy


Addiction, Volume 118 (9) – Sep 1, 2023



Publisher: Wiley
Copyright: © 2023 Society for the Study of Addiction
ISSN: 0965-2140
eISSN: 1360-0443
DOI: 10.1111/add.16265

Abstract

Comprehensive evaluations of public policy using natural experimental studies often produce mixed findings. Making sense of these to inform decision making requires a robust critique, synthesis and communication of all available evidence.

Evaluating natural experiments can be messy. This messiness can arise from multiple sources, including lack of researcher control over the intervention, unmeasured confounding and a poor or inappropriate counterfactual. Furthermore, it is commonplace for multiple studies to be undertaken that explore a similar research question but incorporate diverse study types (e.g. qualitative and quantitative), methodological designs (e.g. cross‐sectional and longitudinal), data sources (e.g. self‐report surveys and retail sales), populations of interest (e.g. general population and dependent drinkers), time‐periods and analytical approaches. These can produce a diverse and sometimes conflicting set of answers.

Such messiness is playing out in Scotland in relation to the evaluation of minimum unit pricing (MUP) and, specifically, its impact on alcohol consumption. In a recent Opinion and Debate article, Holmes [1] summarizes 12 studies that have explored whether the introduction of the policy in 2018 has led to the theorized reduction in consumption in Scotland overall and among population subgroups most likely to experience alcohol‐related harm, including men, harmful drinkers and those living in the

Journal: Addiction (Wiley)

Published: Sep 1, 2023

Keywords: Alcohol; consumption; evaluation; natural experiment; policy; pricing
