Process of Revision of the ECERS-3
Our revision process included consideration of current literature on child development, early childhood curriculum, and emerging classroom challenges, such as the appropriate use of technology, as well as health, safety, and facility recommendations. While the full list of resource materials used in this process is too lengthy for inclusion in this Introduction, several deserve special mention. With a greater emphasis on cognitive development, including language, mathematics, and science, the following were all pivotal in guiding our revisions: the National Association for the Education of Young Children's [NAEYC] revision of Developmentally Appropriate Practice (Copple & Bredekamp, 2009) and Developmentally Appropriate Practice: Focus on Preschoolers (Copple et al., 2013); the National Council of Teachers of Mathematics position statement, What Is Important in Early Childhood Mathematics? (2007); Mathematics Learning in Early Childhood (National Research Council, 2009); the joint position statement on technology and interactive media tools of NAEYC and the Fred Rogers Center for Early Learning and Children's Media at Saint Vincent College (2012); Preventing Reading Difficulties in Young Children (National Research Council, 1998); ASTM International's Standard Consumer Safety Performance Specification for Playground Equipment for Public Use (2014); Caring for Our Children: National Health and Safety Performance Standards (American Academy of Pediatrics, American Public Health Association, & National Resource Center for Health and Safety in Child Care and Early Education, 2011); and the U.S. Consumer Product Safety Commission's Public Playground Safety Handbook (2008).

A second major source of information guiding our revision process was a study of a large sample of classrooms assessed with the ECERS-R. Working with colleagues at the Frank Porter Graham Child Development Institute, we were able to examine in detail the functioning of the ECERS-R Indicators and Items. Gordon and colleagues (2013) noted some disordering in the placement of Indicators along the 1–7 continuum of quality, and found that some Indicators actually measured more than one domain of quality but were counted in only one domain, resulting in a loss of information. In the large data set we accumulated, we were able to identify the specific Indicators of concern and eventually arrive at an alternate set of subscales and a new scoring system for the ECERS-R. Based on this work, we adjusted the location of key Indicators in the ECERS-3, modified the Indicators themselves, and added new Indicators to improve the scaling. We also recommend that all Indicators in the ECERS-3 be scored, even when this is not necessary to determine a given Item's score.

A final and critical source of information for the revision of the scale was the close contact and open communication we have maintained with practitioners in the field, including classroom teaching staff, program directors, licensing agencies, technical assistance providers, college and other training faculty, and, in particular, our colleagues at ERSI, who provide training and reliability determination to users of the ERS materials across the United States. (See the Acknowledgements section for more details on the extensive support we have received from the field.)
The experience of the authors in observing programs first-hand, training observers, conducting research, and working with local, state, and national officials on Quality Rating and Improvement Systems has also had an impact on this edition.