A researcher focused on how we measure and understand program impacts in education, Catherine Asher, Ph.D.’21, recognizes that traditional evaluations of policy or program effectiveness don’t always tell us everything we want to know. Policies that worked in one context or with a particular group of students might not work with another. So, how do researchers design studies to capture the nuances of program impact?
As a student at HGSE, Asher found that the answer to this question was complex. The HGSE community challenged her thinking about policy and program evaluation in exciting ways, and over the course of her program, her understanding of how programs and policies are evaluated evolved significantly. She credits her cohort of fellow researchers with adding that nuance.
As the Ph.D. marshal, Asher has a chance to show her support for the research community at HGSE that has been so impactful in her own thinking.
“Our Ph.D. student community has been, hands down, my favorite part of this program. As much as I’ll miss the physical space of HGSE, I’m grateful for the ways that we’ve learned to adapt this year to stay connected,” she says, confident that the strong relationships and research community she’s built as a student have shown they can withstand distance and will continue to be a source of support.
Poised to begin a position at the University of Michigan’s Youth Policy Lab in August, Asher reflects on her work at HGSE and the ways in which education researchers can help develop more effective policy.
What got you interested in thinking about program and policy evaluation in education?
After I graduated from college, I worked for a year at a small nonprofit in Ghana. I was working with local, nonprofit school and community stakeholders to think about how the school could improve its services and help its students be successful. Through that process, I got really excited about the internal monitoring and evaluation work that we were doing. We were collecting data from students, parents, and teachers, and trying to understand what programming was effective and how we could improve our efforts to support students — that was really when I started to think about research as a way to improve educational practices and interventions.
In your dissertation, you focus on something called “treatment effect heterogeneity.” What is that and why does it matter for researchers?
Treatment effect heterogeneity is the idea that the effectiveness of a policy, program, or intervention can differ across contexts or across groups of students. What I do in my dissertation is think about how we can use creative and thoughtful experimental designs to help us learn from that variation in impacts. It’s been interesting to think about how my work, which is primarily focused on causal effects of specific interventions or policies, fits into a larger context of research about education. I hope my work speaks to the idea that we can’t just say something works or doesn’t; we can use research designs to provide insight into potential mechanisms or, at a minimum, more nuance that acknowledges differences in contexts and in populations.
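The idea can be made concrete with a small simulation. The numbers below are entirely hypothetical and not drawn from Asher's studies; the sketch simply shows how a pooled average effect can mask very different effects in two subgroups:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: two subgroups that respond differently
group = rng.integers(0, 2, n)    # 0 = subgroup A, 1 = subgroup B
treated = rng.integers(0, 2, n)  # random assignment to treatment

# True (simulated) effects differ by subgroup: +4 points for A, -2 for B
effect = np.where(group == 0, 4.0, -2.0)
outcome = 50 + effect * treated + rng.normal(0, 5, n)

# Pooled estimate averages the two effects together
ate = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Subgroup estimates recover the heterogeneity
for g in (0, 1):
    m = group == g
    sub = outcome[m & (treated == 1)].mean() - outcome[m & (treated == 0)].mean()
    print(f"subgroup {g}: estimated effect {sub:+.2f}")
print(f"pooled: estimated effect {ate:+.2f}")
```

In this toy example, the pooled estimate lands near +1, which would understate the benefit for one subgroup and conceal harm to the other; a design that plans for subgroup analysis avoids that blind spot.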
How does that play out in your dissertation?
In my dissertation, there are three distinct papers that look at this idea of treatment effect heterogeneity, each through a slightly different lens. In the first one, my co-author Ethan Scherer and I are interested in demonstrating that slight variations in a text message intervention program can have differential effects. The point is that if these message interventions are differently effective, you have to be really thoughtful about how you’re designing one.
In the second paper, I’m looking at how differences in what’s happening in a control group shape the understanding of the effectiveness of a policy, using the example of a publicly run district pre-K program. I show that when measuring the effectiveness of that district program, we get different answers depending on the region of the district we’re considering and what types of early childhood experiences the control group students in that community receive.
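This control-group point can also be sketched in a quick simulation. All outcome levels and take-up rates here are invented for illustration; the sketch shows how the same pre-K program can look very different when control students in one region mostly stay home while in another they mostly attend other preschools:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4_000

# Hypothetical outcome levels (test-score points):
# no preschool = 40, other community preschool = 47, district pre-K = 50
def simulate_region(p_other_preschool):
    treated = rng.integers(0, 2, n)
    # Control students take up whatever other care is available locally
    other = (rng.random(n) < p_other_preschool) & (treated == 0)
    base = np.where(treated == 1, 50.0, np.where(other, 47.0, 40.0))
    outcome = base + rng.normal(0, 5, n)
    return outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Region where few control students find another preschool
low_access = simulate_region(0.1)
# Region where most control students attend another program
high_access = simulate_region(0.9)
print(f"estimated pre-K effect, low-access region:  {low_access:.1f}")
print(f"estimated pre-K effect, high-access region: {high_access:.1f}")
```

The program itself is identical in both regions; only the counterfactual experience of the control group changes, and with it the estimated effect.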
The third paper is a methodological paper, which means it’s research about how to do quantitative research. In that study, I’m looking at an experimental design called a Sequential Multiple Assignment Randomized Trial (or SMART) that’s used to develop adaptive interventions. I show that analyzing these experiments requires making assumptions about the study population that we may not want to make, and that once we acknowledge that different treatments affect people differently, those analyses can be biased.
What’s the lasting impact you hope this work has?
The larger takeaway, and my whole research agenda, is that researchers need to think about how they design their studies so they take into account the possibility of treatment effect heterogeneity. In doing this, I hope education research will be able to provide useful information that will make interventions and policies more effective in the long run.
And what’s the impact that HGSE has left on you and your work?
It’s not necessarily a single experience but more a collection of individuals and conversations that will stick with me. My Ph.D. cohort has shaped my time at HGSE from our first semester — one of the things that’s been valuable to me about the Ph.D. Program at HGSE is the diversity of research interests in my cohort and the variety of types of meaningful evidence that they draw on. As scholars, my cohort has inspired me to think more critically about how I engage with research and what I consider to be rigorous. As people, they’re brilliant and kind and helpful. I feel really lucky that we have had such a big group to support each other through the ups and downs of this journey.