Background, Predictive Validity, and Reliability


The University of Chicago Consortium on Chicago School Research (UChicago CCSR) developed the 5Essentials framework and survey over the course of twenty years of research.

The framework itself was developed through a broad scan of the literature on the organizational factors that matter most for school improvement, along with active consultation with teachers and principals. The 5Essentials framework brought coherence to a wide array of divergent findings and theories about school improvement.

Although research established that each of the five concepts included in the framework showed relationships with student outcomes and school improvement, there was no existing way to measure the five concepts simultaneously. UChicago CCSR designed the survey to measure all 5Essentials at the same time, giving schools information about their organizational capacity across the entire framework. Many of the questions on the original 5Essentials survey were drawn from measures that had been validated in studies of schools across the country.

UChicago CCSR has refined its survey measurement tool over time, continuously re-assessing the system’s reliability and data quality.

Over the past two decades, the 5Essentials survey has been refined repeatedly to ensure accurate measurement and to reflect changing practices in schools. The surveys have been administered to students and teachers in Chicago since 1994 (11 total administrations as of 2013). After each administration, the survey measures reported to schools are evaluated for consistency, reliability, item fit, and data quality. UChicago CCSR employs a psychometrician, a survey methodologist, and quantitative social scientists to conduct these annual evaluations of the survey.

A construct, called a ‘measure’ on the survey report, is a group of questions measuring the same concept. Each construct that is reported to schools (e.g., Teacher-Parent Trust) is measured through responses to a number of survey questions. To assess the reliability and validity of each construct, each question that is part of the construct is evaluated on several dimensions. First, analysts examine whether people have responded to the question in ways that make sense, checking that there does not seem to have been confusion about what the question was asking and that the question captures the underlying concept it is supposed to measure. Second, the question should be able to differentiate respondents who feel differently about the construct, which would lead them to answer the question in systematically different ways. Finally, the group of questions used to create the construct should combine to produce a score that accurately measures the construct in the same way across all individuals and schools (i.e., the combination of the survey questions should provide a reliable measurement of the construct).

The score should be able to differentiate among individuals and among schools, showing which have higher or lower levels on that construct. Individual questions (items) or constructs (measures) that do not meet these criteria for fit or reliability are modified or discarded. Separate analyses are performed for schools serving different grade levels to ensure that the questions are appropriate for different types of schools. Measures of each construct are included only if they are sufficiently reliable at both the individual and school level, meaning they can reliably differentiate between people and schools. [1]
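The internal-consistency criterion described above is often summarized with a statistic such as Cronbach’s alpha, which asks how well a group of items hangs together as a single score. The sketch below is purely illustrative of that idea; it is not UChicago CCSR’s actual procedure, which may rely on more sophisticated item-response models, and the example data are invented.

```python
import numpy as np

def cronbach_alpha(responses):
    """Internal-consistency estimate for a set of items.

    responses: 2-D array-like, rows = respondents, columns = items
    (e.g., the questions making up one survey construct).
    """
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    # Sum of per-item variances vs. variance of the total score.
    item_var_sum = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_var_sum / total_var)

# Hypothetical example: five respondents answering three items
# from the same construct on a 1-4 agreement scale.
data = [[4, 4, 3],
        [3, 3, 3],
        [2, 2, 1],
        [4, 3, 4],
        [1, 2, 1]]
print(round(cronbach_alpha(data), 2))  # -> 0.93
```

A high value indicates the items move together across respondents, consistent with their measuring a single underlying construct; items that drag the statistic down are candidates for modification or removal.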

In addition to analyzing the questions that comprise the survey, the survey staff also examines individual responses and school aggregate responses for evidence of misreporting. Individuals who report in ways that are inconsistent, incomplete, or questionable are either removed or downweighted in school averages, depending on the nature and extent of the problem. Schools with very low response rates or with respondent numbers that exceed school populations and staffing levels do not receive survey reports.
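The school-level screening rules in this paragraph can be pictured as simple gating logic. The function and thresholds below are invented for illustration; the consortium’s actual cutoffs and downweighting rules are not published here.

```python
def school_is_reportable(n_respondents, n_eligible, min_response_rate=0.5):
    """Hypothetical screen for whether a school receives a survey report.

    n_respondents: number of completed surveys from the school.
    n_eligible: school population / staffing level for that survey.
    min_response_rate: illustrative threshold, not an actual cutoff.
    """
    if n_respondents > n_eligible:
        # More responses than eligible people suggests misreporting.
        return False
    if n_respondents / n_eligible < min_response_rate:
        # Too few responses to represent the school reliably.
        return False
    return True

print(school_is_reportable(12, 40))  # low response rate -> False
print(school_is_reportable(45, 40))  # exceeds population -> False
print(school_is_reportable(30, 40))  # -> True
```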

UChicago CCSR continuously checks survey content for validity and relevance.

The original validation of the 5Essentials survey was based on a 10-year study that used multiple years of survey data to show how the essential supports together were related to improvements in elementary schools in Chicago. This study culminated in the widely cited book, Organizing Schools for Improvement. This large-scale study examined the ways in which combinations of essential supports led to different levels of improvement for different student outcomes. A key finding from this study was that schools strong on at least three of the 5Essentials were 10 times more likely to improve student growth in test scores and 30 times less likely to stagnate than similar schools that were weak on these supports. They were also more likely to improve attendance. While different levels of the essentials were needed for improvement in different neighborhood contexts, the 5Essentials were related to school improvement for all types of schools. A review of school climate literature in the Review of Educational Research described UChicago CCSR’s multi-year, survey-based study as “some of the most important research that elucidates the relationship between school climate and school improvement efforts.”

As part of the continuing evaluation of survey content, researchers annually examine the degree to which each of the measures in the survey has been shown through research to predict student outcomes and school improvement (i.e., predictive validity) in a way that is unique from other measures on the survey. Those measures that have shown the strongest relationships to student outcomes are retained. This ongoing work deepens our understanding of how combinations of measures (i.e., essentials) relate to important student outcomes in different contexts.
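One common way to quantify a measure’s “unique” predictive contribution is incremental R²: how much the explained variance in an outcome rises when the measure is added to a model that already contains the other measures. The sketch below illustrates that idea only; it is not the consortium’s actual analysis, and the variable names are hypothetical.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    residuals = y - X1 @ beta
    ss_res = (residuals ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

def incremental_r2(other_measures, new_measure, outcome):
    """Unique predictive contribution of one measure beyond the others."""
    base = r_squared(other_measures, outcome)
    full = r_squared(np.column_stack([other_measures, new_measure]), outcome)
    return full - base

# Synthetic example: an outcome driven by two measures.
rng = np.random.default_rng(0)
measure_a = rng.normal(size=200)
measure_b = rng.normal(size=200)
outcome = measure_a + 2 * measure_b + rng.normal(scale=0.1, size=200)

# measure_b adds substantial explained variance beyond measure_a.
print(incremental_r2(measure_a.reshape(-1, 1), measure_b, outcome))
```

A measure whose incremental R² is consistently near zero is redundant with the rest of the survey, which is one way to operationalize the retention criterion described above.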

In addition, each year, the content of the survey is evaluated to see whether the concepts that are being measured are still meaningful given current practice. Researchers and practitioners also discuss whether new concepts should be included in the survey to account for changes in educational practice or new research evidence about school improvement. When the decision is made to add new concepts to the survey, the questions go through an extensive evaluation process. They are piloted with small groups of students or teachers, who also report verbally on how they interpreted each question. They are evaluated for reliability with large samples, and then they are re-evaluated for reliability and fit after the survey is administered.

As the survey database expands, UChicago CCSR and UChicago Impact will begin to explore how the 5Essentials relate to important academic outcomes in more diverse contexts.

The ever-growing number and variety of schools using the 5Essentials Survey present an unprecedented opportunity to provide and gather rigorous data on what matters most for school improvement nationwide. As more data are generated over time, UChicago CCSR and UChicago Impact plan to build on the robust, existing research and continue to explore ways in which individual 5Essentials concepts and measures are relevant across diverse school contexts and predictive of important student and school outcomes.