- We began the review process by establishing our inclusion and exclusion criteria. We then used several methods to identify potentially relevant classroom-based programs designed for use with a universal population of students.
2009: Issued an initial call for nominations and identified potentially relevant programs.
2011/2012: Conducted additional outreach to program developers and researchers.
2011/2012: Examined CASEL’s original program review, Safe and Sound, along with other major literature reviews, national reports, and key publications, and searched national databases.
2012: Teams of trained coders examined all programs identified for possible inclusion in several different ways. If a program was classroom-based and designed for use with a universal population of students, we requested from its providers copies of all available published and unpublished outcome evaluations that would meet our criteria.
- We checked these reports against those we found through our own literature search procedures. Coders examined every outcome evaluation submitted by each program.
- We conducted an e-mail survey of program developers or their designated staff about the training they offered for program implementation, following up by phone when necessary to clarify responses. Our final sample consisted of 25 SELect programs.
- Graduate-level coders with extensive education and experience in social and emotional learning reviewed all program materials. The coders received more than 40 hours of training in the coding system from senior SEL researchers involved in the Guide development process.
- For each review, coders scanned the complete set of program materials provided by the developers to familiarize themselves with the program’s overall organization and content.
- Coders completed an intensive content analysis of sample years of each program. In most cases this involved reviewing the preschool, first-grade, and fourth-grade materials, provided those were found to be representative of the program as a whole. Additional grades were reviewed as necessary.
- Before working independently, coders had to reach at least 85% agreement on all rating elements for a subset (20%) of the programs. Reliability was monitored throughout the process to maintain that level of agreement (85%) on the remaining programs. Any disagreements in coding were resolved through discussion among the raters and supervising staff. To avoid conflicts of interest, no one with a financial relationship to any program was involved in reviewing the programs or in discussions about them.
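The 85% reliability threshold described above can be illustrated with a simple percent-agreement calculation. This is a minimal sketch using hypothetical ratings; the section does not specify which reliability statistic was actually computed, so straightforward percent agreement is assumed here.

```python
def percent_agreement(coder_a, coder_b):
    """Share of rating elements on which two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Rating lists must cover the same elements")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes from two coders on 20 rating elements for one program
a = [1, 0, 1, 1, 2, 0, 1, 1, 0, 2, 1, 1, 0, 1, 2, 1, 0, 1, 1, 2]
b = [1, 0, 1, 1, 2, 0, 1, 0, 0, 2, 1, 1, 0, 1, 2, 1, 0, 1, 0, 2]

agreement = percent_agreement(a, b)
print(f"{agreement:.0%}")  # 18 of 20 elements match -> 90%, above the 85% threshold
```

In this illustration the pair of coders would meet the threshold and could proceed to independent coding; a pair falling below 85% would continue calibration and discussion first.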