The Review Process

The review process began by establishing our inclusion and exclusion criteria. We then used several methods to identify potentially relevant classroom-based programs designed for use with a universal population of students.

In early 2009, we issued an initial call for nominations and identified potentially relevant programs. During 2011 and early 2012, we made additional outreach efforts to program developers and researchers.

At the same time we examined CASEL’s original program review, Safe and Sound, and other major literature reviews, national reports, and key publications. We also searched national databases including but not limited to:

  • The What Works Clearinghouse, administered by the Institute of Education Sciences (IES) of the U.S. Department of Education (http://ies.ed.gov/ncee/wwc/);
  • The National Registry of Evidence-Based Programs and Practices, administered by the Substance Abuse and Mental Health Services Administration (SAMHSA), an agency within the U.S. Department of Health and Human Services (http://nrepp.samhsa.gov); and
  • Blueprints for Violence Prevention Model and Promising Programs, administered by the Center for the Study and Prevention of Violence at the University of Colorado (http://www.colorado.edu/cspv/blueprints/).

All programs identified for possible inclusion were then examined in several different ways by teams of trained coders. If the program was classroom-based and designed for use with a universal population of students, we requested from program developers copies of all available published and unpublished outcome evaluations that would meet our criteria.

We then checked these reports against those we found through our own literature search procedures. Coders examined every outcome evaluation submitted by each
program.

We also surveyed program developers or their designated staff by e-mail about the training they offered for program implementation, following up by phone when necessary to clarify responses to certain questions. Our final sample consisted of 25 SELect programs.

When evaluations met our inclusion criteria and training and other implementation supports were available, we asked the programs to send us their materials. Graduate-level coders with extensive education and experience in social and emotional learning reviewed all program materials. The coders received more than 40 hours of training in the coding system from senior SEL researchers involved in the CASEL Guide development process.

For each review, coders scanned the complete set of program materials provided by the developers in order to
familiarize themselves with the overall organization and content of the program. Coders then completed an intensive content analysis of sample years of each program.

In most cases this involved reviewing the preschool, first-grade, and fourth-grade materials, provided those were representative of the program as a whole. Additional grades were reviewed as necessary.

Before the coders worked independently, they had to reach at least 85% agreement on all rating elements for a subset (20%) of the programs. Reliability was monitored throughout the process to maintain the same level of agreement (85%) on the remaining programs. Any disagreements in coding were resolved through discussion among the raters and supervising staff.
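The 85% criterion above can be illustrated with a simple percent-agreement calculation between two coders' ratings. The helper below is a hypothetical sketch for illustration only; it is not the reviewers' actual scoring tool, and the example ratings are invented.

```python
def percent_agreement(coder_a, coder_b):
    """Proportion of rating elements on which two coders gave
    identical ratings (hypothetical helper for illustration)."""
    if len(coder_a) != len(coder_b):
        raise ValueError("rating lists must be the same length")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Invented example: two coders rating ten elements for one program
ratings_a = [3, 2, 4, 4, 1, 3, 2, 4, 3, 2]
ratings_b = [3, 2, 4, 3, 1, 3, 2, 4, 3, 2]
print(percent_agreement(ratings_a, ratings_b))  # 0.9, above the 85% threshold
```

Percent agreement is the simplest reliability index; more conservative statistics (e.g., Cohen's kappa) also correct for chance agreement.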

To avoid conflicts of interest, no one having any financial relationship to any program was involved in reviewing the programs or in discussions about programs.

Additional information on our program ratings and a copy of our coding manuals are available on request.