Synthetic Control Methods in Public Safety Research

Synthetic control methods provide a widely used framework for analyzing the impact of public safety interventions. In this study, Aaron Chalfin and Zubin Jelveh highlight the methodological challenges and potential biases inherent in these techniques. The paper discusses the importance of selecting appropriate comparison groups and the implications of software choices on treatment effect estimates. It is particularly relevant for researchers in criminology and public policy, offering insights into the application of synthetic controls in evaluating interventions. The findings emphasize the need for careful consideration of methodological choices to ensure reliable results.

Key Points

  • Explores the application of synthetic control methods in evaluating public safety interventions.
  • Analyzes the impact of software choices on treatment effect estimates in criminology research.
  • Discusses methodological challenges and biases associated with synthetic control techniques.
  • Highlights the importance of selecting appropriate comparison groups for accurate evaluations.
Perils and Pitfalls in the Use of Synthetic Control Methods to Study Public Safety Interventions
Aaron Chalfin
University of Pennsylvania and NBER
Zubin Jelveh
University of Maryland
This version: May 30, 2024
Abstract
The method of synthetic controls, pioneered by Abadie et al. (2010), has generated a paradigm shift in
the analysis of case studies. The method selects an appropriate synthetic comparison group by identify-
ing a weighted set of units that closely match the treated unit on the basis of pre-intervention levels and
trends. Since Abadie’s seminal paper, there has been a proliferation of research expanding and refining
the method and a corresponding litany of software packages that provide the means to estimate these
models. We show that there can be a shocking lack of correspondence between the estimates produced
by commonly used software packages. Even the seemingly innocent choice between using R or Stata
to estimate SCM can lead to a meaningful difference in estimated treatment effects. We demonstrate
this surprising finding, invoking a recent debate in criminological research concerning a paper on the
effects of “de-prosecution” by Hogan (2022) which has been criticized by Kaplan et al. (2022).
We are deeply indebted to John MacDonald for helpful comments on an earlier version of the draft of what later became this paper. All remaining errors are our own. Correspondence: Chalfin: achalfin@sas.upenn.edu; Jelveh: zjelveh@umd.edu.
1 Introduction
The method of synthetic controls, pioneered by Abadie et al. (2010), has led to a paradigm shift in the analysis of case studies: a research scenario in which there is a single treated unit and a large pool of potential comparison units to choose from. In evaluating the effects of a policy that is implemented in a single city or county (a common setting in criminal justice policy research), a key question is how to select a comparison group against which that city or county should be compared. In the past, researchers appealed to geographic proximity or baseline covariate overlap in order to motivate a comparison group. In other words, select a theoretically-motivated comparison group and then pray for something resembling parallel trends, the partially-testable core identifying assumption of differences-in-differences estimation.[1]
A considerable virtue of synthetic controls is that it dispenses with the need for prayer, providing a roadmap to select a comparison group for which pre-intervention trends are as closely matched as possible.[2]
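The weighted-average construction described above can be sketched in a few lines. The following example is illustrative only: it uses simulated data and a plain Frank-Wolfe loop, whereas the packages discussed in this paper solve the same program with more sophisticated optimizers and also match on covariates. The core idea is to find nonnegative donor weights summing to one that best reproduce the treated unit's pre-intervention path:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 10 pre-intervention periods of an outcome (say, a
# monthly crime rate) for one treated unit and 8 candidate donor units.
T0, J = 10, 8
Y_donors = rng.normal(50.0, 5.0, size=(T0, J))
Y_treated = Y_donors[:, :3].mean(axis=1) + rng.normal(0.0, 0.5, T0)

def fit_scm_weights(y, Y, iters=5000):
    """Minimize ||y - Y w||^2 over the simplex {w >= 0, sum(w) = 1}
    with a plain Frank-Wolfe loop (a sketch, not a production solver)."""
    n = Y.shape[1]
    w = np.full(n, 1.0 / n)                  # start from uniform weights
    for k in range(iters):
        grad = -2.0 * Y.T @ (y - Y @ w)      # gradient of the squared loss
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0             # best vertex of the simplex
        w += (2.0 / (k + 2.0)) * (s - w)     # standard step size 2/(k+2)
    return w

weights = fit_scm_weights(Y_treated, Y_donors)
synthetic_path = Y_donors @ weights          # the synthetic control's path
gap = Y_treated - synthetic_path             # pre-period fit residuals
print(np.round(weights, 3))
```

Because every iterate is a convex combination of simplex vertices, the weights remain nonnegative and sum to one by construction, which is exactly the constraint set of the original Abadie et al. (2010) program.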
The method is also notable for being data-driven, reducing the need for researcher discretion and therefore potentially offering a means of making case study research more reliable and less subject to the potentially devastating effects of selective reporting of results (Iyengar and Greenhouse, 1988; Ioannidis et al., 2014; Simonsohn et al., 2014) and p-hacking (Benjamin et al., 2018; Coker et al., 2021).
Due to its attractive qualities, its transparency, and its easy accessibility for applied researchers (thanks to off-the-shelf implementations for R, Stata and Python), SCM has become an increasingly popular method of causal inference in case study settings across the social sciences.[3]
Within criminology,
synthetic controls has been used to study the link between immigration and crime (Chalfin and Deza,
2020), the effects of police turnover (Mourtgos et al., 2022), police use of force (Goh, 2021), the impact of
death penalty moratoriums (Oliphant, 2022), the effect of labor market shifts on crime (Mitre-Becerril and
Chalfin, 2021), a variety of place-based interventions (Saunders et al., 2015; Robbins et al., 2017; Rydberg
et al., 2018; Piza et al., 2020; Lawrence et al., 2022; Buggs et al., 2022), prosecutorial reforms (Hogan, 2022;
Wu and McDowall, 2023; Zhou et al., 2023), marijuana liberalization (Wu and Cullenbine, 2022; Harper
and Jorgensen, 2023) and the effect of gun control policies (Donohue et al., 2019), among other topics.
Synthetic control methods have also taken root in related social science disciplines, including economics (Billmeier and Nannicini, 2013; Bohn et al., 2014; Grier and Maynard, 2016) and political science (Abadie et al., 2015; Kikuta, 2020; Gilens et al., 2021). We illustrate this observation in Figure 1, which plots changes in the number of synthetic control papers identified using a directed Google Scholar keyword search.[4] As is evident from the figure, use of the methodology in criminology has increased markedly during the last decade and has featured prominently in some of the discipline's most highly-cited journals.

[1] The parallel trends assumption is formally untestable, as it is a counterfactual assumption about what would have happened in the absence of the intervention. However, a test of pre-intervention trends provides some assurance that treated and comparison units were not experiencing different trends prior to the intervention.
[2] As is noted in a recent working paper by Pickett et al. (2022), it is not necessarily the case that minimizing pre-intervention differences between a treatment unit and its synthetic counterpart will minimize bias. Researchers could potentially overfit by matching on noise, a problem which is intended to be addressed by using penalized regression estimators like ridge regression (Ben-Michael et al., 2021; Abadie and L'Hour, 2021).
[3] The original paper by Abadie et al. (2010) has, to date, generated nearly 3,000 citations.
Alongside the increased use of SCM by applied researchers, there has been a corresponding proliferation of methodological research which has expanded and refined the methodology.
have addressed a number of important issues with SCM which can arise in applied settings. For example,
in some applications traditional SCM will not yield a sufficiently good pre-intervention match if the
treatment unit’s pre-intervention characteristics lie outside of the common support of the available
comparison units. When this happens, SCM is potentially biased due to the absence of common trends.
Bias corrected synthetic controls estimators proposed by Ben-Michael et al. (2021) and Abadie and
L’Hour (2021) offer principled approaches to constructing counterfactual estimates for the treated group
that extrapolate away from the pre-intervention characteristics of the control units. These approaches
are also intended to make the approach more robust and to guard against the problem of overfitting
where an analyst ends up matching on noise rather than true signal.
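The bias-correction idea can be illustrated with a small sketch. The code below uses simulated data and is a simplified rendering of the augmented-SCM idea in Ben-Michael et al. (2021), not the estimator any package actually implements: a ridge outcome model is fit on the donor pool and used to adjust a weighted donor average, so the counterfactual prediction can extrapolate beyond the donors' support.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: J donors with T0 pre-period outcomes each and one
# post-period outcome; the treated unit's pre-period path lies above
# every donor's path, i.e. outside the common support.
J, T0 = 15, 8
X = rng.normal(30.0, 3.0, size=(J, T0))            # donor pre-period paths
y_post = X.mean(axis=1) + rng.normal(0.0, 0.5, J)  # donor post outcomes
x_treat = X.max(axis=0) + 1.0                      # treated path, out of support

# Ridge outcome model fit on donors: predicts the post-period outcome
# from the pre-period path, and can extrapolate outside the donor support.
lam = 1.0
beta = np.linalg.solve(X.T @ X + lam * np.eye(T0), X.T @ y_post)

# Plain SCM-style weights (uniform here, purely for illustration).
w = np.full(J, 1.0 / J)

# Augmented prediction: weighted donor average, corrected by the outcome
# model for the imbalance between the treated path and the weighted
# donor path -- the de-biasing step.
y0_scm = w @ y_post
y0_aug = y0_scm + (x_treat - w @ X) @ beta
print(y0_scm, y0_aug)
```

Because the treated path sits above every donor, the uncorrected weighted average understates the counterfactual level; the model-based correction term moves the prediction in the direction of the unmatched imbalance.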
Alongside these methodological innovations are a growing number of software packages written for R, Stata and Python that provide applied researchers with the tools to estimate the newest synthetic controls models with relative ease. In addition to Synth, the original package written by the authors of Abadie et al. (2010) for both R and Stata, there is AugSynth, written for R, and allsynth, written for Stata, which implement the method of bias corrected synthetic controls proposed by Ben-Michael et al. (2021) and Abadie and L'Hour (2021), respectively. There is also the scpi package written for R which allows researchers to flexibly select a means of bias correcting and which uses a method proposed by Cattaneo et al. (2021) to capture uncertainty.[5]
Each of the packages offers a degree of flexibility, allowing
researchers to change some of the default settings in order to test the robustness of their estimates to
choices made during the research process. However, as we show, the default settings of these packages, as well as choices made by the packages' designers that cannot easily be changed by end users, can have critically important implications for the resulting estimates.
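A deliberately stylized numerical example can make this concrete. The sketch below (all numbers invented; no real package is being run) constructs two donor weightings with identical, perfect pre-period fit but opposite-signed estimated effects, illustrating how an optimizer's tie-breaking or a package's defaults can drive the result:

```python
import numpy as np

# Two donors share the same pre-period path but diverge after the
# intervention date (imagine two cities with parallel histories).
T0 = 6
pre_path = np.array([10.0, 11.0, 9.0, 10.5, 10.0, 11.5])
donor0 = np.concatenate([pre_path, [10.0, 10.0, 10.0, 10.0]])
donor1 = np.concatenate([pre_path, [14.0, 14.0, 14.0, 14.0]])
treated = np.concatenate([pre_path, [12.0, 12.0, 12.0, 12.0]])
Y = np.column_stack([donor0, donor1])

# Either corner of the weight simplex yields a pre-period loss of
# exactly zero, so two optimizers (or two package defaults) could
# defensibly return either one -- with very different conclusions.
for name, w in [("all weight on donor 0", np.array([1.0, 0.0])),
                ("all weight on donor 1", np.array([0.0, 1.0]))]:
    synth = Y @ w
    pre_loss = np.sum((treated[:T0] - synth[:T0]) ** 2)
    effect = np.mean(treated[T0:] - synth[T0:])
    print(f"{name}: pre-period loss {pre_loss:.1f}, estimated effect {effect:+.1f}")
```

Here the first weighting implies an estimated effect of +2.0 and the second an effect of -2.0, even though both fit the pre-intervention period perfectly; nothing in the pre-period data distinguishes them.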
While a full accounting of best practices in the implementation of SCM is beyond the scope of this paper (and is premature, as the literature continues to evolve), we identify several different and
[4] We collected annual results from Google Scholar using the search terms "synthetic control" and "criminology".
[5] This method recognizes that there are two sources of uncertainty in SCM estimates: one that is derived from uncertainty about the weights themselves and the other that arises from sampling variability.
FAQs: Synthetic Control Methods in Public Safety Research

What are synthetic control methods?
Synthetic control methods are statistical techniques used to evaluate the effects of interventions or policies by constructing a synthetic version of the treatment group from a weighted combination of control units. This approach allows researchers to create a counterfactual scenario, helping to isolate the impact of the intervention. The method is particularly useful in cases where randomized control trials are not feasible, such as in public policy evaluations. By ensuring that the synthetic control closely matches the treated unit's pre-intervention characteristics, researchers can derive more reliable estimates of treatment effects.
What challenges are associated with synthetic control methods?
One of the primary challenges with synthetic control methods is the potential for bias due to poor pre-intervention matches between the treatment and control groups. Additionally, the choice of software and specific implementation details can significantly affect the estimated treatment effects. Researchers must also navigate issues related to the selection of matching variables and the optimization of weights assigned to these variables. These challenges highlight the importance of methodological rigor and transparency in the application of synthetic controls to ensure valid conclusions.
How do software choices impact synthetic control estimates?
Different software packages for implementing synthetic control methods can yield varying results due to differences in default settings, optimization routines, and bias correction techniques. For example, the choice between using R or Stata can lead to substantial differences in estimated treatment effects. Additionally, some packages may optimize variable weights while others use uniform weights, affecting the balance of pre-intervention characteristics. Researchers must be aware of these discrepancies and consider running sensitivity analyses across multiple software implementations to validate their findings.
What is the significance of selecting comparison groups in synthetic controls?
Selecting appropriate comparison groups is crucial in synthetic control methods, as the quality of the match directly influences the validity of the estimated treatment effects. A well-chosen comparison group should closely resemble the treated unit in terms of pre-intervention characteristics and trends. If the synthetic control does not adequately reflect the treatment unit's context, the resulting estimates may be biased and misleading. This underscores the need for careful consideration and justification of the selection process in empirical applications of synthetic controls.
What insights does the paper provide for public safety researchers?
The paper offers valuable insights for public safety researchers by highlighting the methodological pitfalls and best practices in applying synthetic control methods. It emphasizes the importance of transparency in reporting the choices made during the analysis, including software selection and variable weighting. By addressing common challenges and biases, the authors provide a roadmap for researchers to enhance the reliability of their evaluations. This guidance is particularly relevant for those studying the effects of interventions in criminology and public policy contexts.
