Anthony Biglan and Brian Flay
This is an exciting time in America. We are witnessing the first significant effort to comprehensively address concentrated poverty in a generation and numerous efforts to ensure that all young people develop successfully. Examples of these efforts include the Obama Administration’s Promise Neighborhood initiative (inspired by the success of the Harlem Children’s Zone), the Department of Education’s Race to the Top, and a National Prevention System that a federal interagency task force has been discussing.
There is solid evidence that these ambitious efforts can succeed. The recently released Institute of Medicine report on prevention identifies numerous evidence-based programs, policies, and practices that can ensure young people’s successful development. All of the proposed efforts will draw on this knowledge.
But these efforts could fail if they do not use the scientific tools that got us this far.
Traditionally, once a program’s value has been shown by one or two rigorous experimental evaluations, it is widely implemented without further evaluation. But such a practice is risky for at least three reasons.
First, it is well documented that a program’s benefits cannot be replicated unless the program is implemented with fidelity. If we do not measure fidelity and verify that benefits are being achieved, the quality, and thereby the impact, of our interventions will deteriorate.
Second, we cannot be sure that an intervention that worked for one population will work when it is tried in a different, and perhaps more challenging, environment. This is especially true when we first begin to implement evidence-based interventions in high-poverty neighborhoods where they have not been tried before.
Third and most important, if good science does not accompany these important efforts, we simply won’t know if they are working. Two things are paramount: good measures and good experimental evaluations.
We must abandon the traditional model of doing sporadic evaluations of social programs and replace it with a public health system for ensuring young people’s successful development. In this system, communities will routinely monitor youth wellbeing and will evaluate the impact of their policies, programs, and practices on outcomes for children and adolescents at each stage of development.
We now have the measures to know whether infants are thriving, young children are ready for elementary school, elementary school students are progressing at grade level, and whether adolescents are learning and developing the prosocial behaviors that will protect them from the major threats to adolescent wellbeing such as substance abuse, delinquency, depression, and risky sexual behavior. With these measures it is possible for every community to know how well their young people are doing, and to take steps to ensure their wellbeing when measurement shows that some are failing or falling behind.
This innovation would be no different from the system for managing the economy that was developed in the 1940s, or the one for managing infectious diseases that evolved over the last four hundred years of struggle to control epidemics.
Why shouldn’t we devote as much energy to assessing young people’s wellbeing as we do to measuring the health of the economy or the course of epidemics? The costs of our failures in childrearing are just as great as those involved in failures to manage the economy or control epidemics. For example, the economist Ted Miller estimated that adolescents who develop multiple problems cost this country more than $400 billion each year.
But even if we build an infrastructure for permanent measurement of young people’s wellbeing, our success will be limited if we don’t also use the experimental tools that have created the mountain of evidence showing that young people need nurturing environments in order to develop the skills, interests, and health habits needed to live productive and caring lives. We can reduce social and environmental risks and toxins while also promoting safe, nurturing, and health-enhancing environments.
Randomized trials are the “gold standard” for knowing whether an intervention has benefits. Every prescription medication that is approved by the FDA has gone through at least two randomized trials. By randomly assigning people (or schools, workplaces, or communities) to get or not get the intervention, we can be confident that any differences in outcomes between the two groups are due to the intervention. This is why we can be so confident that behavioral parenting skills programs help parents replace harsh and inconsistent discipline with patient and reinforcing nurturance of their children’s skills; this type of intervention has been tested in more than fifty randomized trials around the world, with all ages of children and every level of income.
Interventions must target specific developmental stages and contextual conditions to produce desirable effects. Rigorous repeated evaluations can then be conducted of each intervention approach.
Policymakers may blanch at the thought of doing randomized trials to test community interventions. But David Hawkins tells us that when the State of Washington lacked funds for prevention work in all of the communities that applied, they randomly chose the communities to be funded and, as a result, created a randomized trial of the impact of the intervention. Why shouldn’t we do something similar at the federal level?
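The Washington approach described above, randomizing which applicant communities receive scarce funding, doubles as a trial design: the unfunded applicants become a natural comparison group. A minimal sketch of such a funding lottery (the community names, count, and seed are hypothetical, not from the Washington program):

```python
import random

def fund_by_lottery(applicants, n_fundable, seed=2010):
    """Randomly choose which applicant communities receive funding.

    Because assignment is random, the unfunded applicants form a
    natural control group for evaluating the program's impact.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is auditable
    funded = rng.sample(applicants, n_fundable)
    control = [c for c in applicants if c not in funded]
    return funded, control

# Hypothetical example: ten communities apply, but funds cover only four.
applicants = [f"Community {i}" for i in range(1, 11)]
funded, control = fund_by_lottery(applicants, n_fundable=4)
```

Fixing the seed matters for policy use: the lottery can be rerun and verified by anyone, which protects the randomization from accusations of favoritism.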
If we do, we will know with some confidence whether the intervention is working. If it is not, it will prompt further research that will incrementally improve the quality of our interventions. If we don’t do this, then, when the political winds change, claims will be made that once again we fought the war on poverty and poverty won.
Randomized trials are not the only rigorous way of testing the effects of what we do. If you have a good system for measuring the outcomes and processes you are trying to affect in multiple neighborhoods or communities, you can intervene in one and see whether the things you targeted change there but not in the places where you didn’t intervene. If it looks like you are getting an effect, you can move on to the next community or neighborhood. This is called a Multiple Baseline Design (MBD).
The monitoring or surveillance needed for MBDs is also good for managing program implementation and quality control. Implementing comprehensive preventive interventions in whole communities is a complex and arduous process. You would be crazy to think it will go smoothly in every case, or even that we know how to ensure implementation when we begin. But if you accept that, and recognize that in the initial intervention you will need to learn from your mistakes and get better as you go, then you will have the makings of a multiple baseline design. Once you have some success in the first community, you can go on to others. You can use the first community to train people for work in subsequent communities. And you can use evidence from the results in the first one to garner support for more efforts.
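The staggered logic of a multiple baseline design can be made concrete with a toy analysis: each community is monitored continuously, the intervention begins at a different time in each, and the effect is credible when change appears only after each community’s own start point. A minimal sketch, with entirely hypothetical monitoring numbers:

```python
def mbd_effect_estimates(series, start_times):
    """Estimate intervention effects in a multiple baseline design.

    series: dict mapping community -> list of repeated outcome measurements
    start_times: dict mapping community -> index where its intervention began
    For each community, compare the mean outcome before and after its
    own (staggered) intervention start.
    """
    effects = {}
    for community, outcomes in series.items():
        t = start_times[community]
        baseline, post = outcomes[:t], outcomes[t:]
        effects[community] = (sum(post) / len(post)
                              - sum(baseline) / len(baseline))
    return effects

# Hypothetical data: rates of a problem behavior (lower is better).
series = {
    "North": [10, 11, 10, 6, 5, 5, 6, 5],  # intervention begins at t = 3
    "South": [9, 10, 10, 10, 9, 5, 4, 5],  # begins later, at t = 5
}
effects = mbd_effect_estimates(series, {"North": 3, "South": 5})
# Each community improves only after its own start time; that staggered
# pattern is what lets an MBD rule out some community-wide outside cause.
```

The design’s strength is visible in the second community: its outcomes stay flat while the first community improves, and change only when its own intervention begins.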
Careful experimental evaluation of social interventions is something new under the sun. Through all of human history we have tried to improve the human condition through guesswork and ideology, uninformed by empirical evidence. It is as though we have been climbing the sheer rock face of El Capitan in Yosemite Park with our bare hands—rising high occasionally, but inevitably slipping back.
But as the Institute of Medicine report documents, a growing number of careful experiments have allowed us to steadily—and at an accelerating pace—identify the programs, policies, and practices that can ensure that every young person develops successfully.
There is much work left to be done. If all the agencies of the federal government work together, we can have a new round of experimental evaluations that identify the most effective strategies for getting tested programs, policies, and practices widely disseminated, and that refine our interventions so that they are even more effective. To do otherwise would be like climbing halfway up El Capitan with our new scientific tools and then throwing them to the ground for the rest of our climb.