Is your core literacy program effective? Is it meeting the needs of at least 80% of your students; that is, are at least 80% of your students meeting grade-level expectations or making sufficient progress to close the gap? I am sure that if someone came to your school, you would be able to answer these questions, even if it took a bit of time to gather the necessary data. If the answer to these questions is, “Not yet,” how would your team go about identifying why not, and what the next steps should be?
It can be tempting to blame the program. Maybe the program does not meet the needs of your particular student population, or maybe it is designed to provide depth of understanding and rich learning experiences that build on foundational skills your students have yet to master. But you know your school team spent time researching evidence-based core literacy programs and carefully selected one that they felt met your students’ needs. Perhaps the first question to ask is, “Are we implementing the program in its entirety, as it was meant to be implemented?” In other words, are we implementing with fidelity? If a program is not implemented with fidelity, it will not produce the expected outcomes.
Assessing fidelity of implementation is complicated. When I ask principals, teachers, and coaches how they know whether a program is being implemented as intended, they often have to rely on anecdotal evidence, perceptions, or self-report. How, then, do you measure fidelity of implementation?
Many programs now include fidelity measures. For some programs you may need to dig a bit, searching the research literature and online resources, to find an implementation guide. Other programs do not provide any fidelity measures at all, and it will be incumbent upon the district’s or building’s literacy leaders to create an implementation checklist. While this is not always a simple task, it is a critical one, and fortunately the National Implementation Research Network (NIRN) provides some guidance.
To create a fidelity checklist, NIRN suggests that educators first find or develop a description of the practice that “reflects principles, values, and expected outcomes.” Then time must be taken to analyze the program or practice to identify and understand its core, non-negotiable elements. The team then describes what each of those elements will look like and sound like in their classrooms. Only then can the team develop an implementation checklist to assess fidelity. The School Tool above links to a simple tool to guide such a discussion.
Once a fidelity tool is developed, NIRN identifies three qualities of effective fidelity assessment:
- Frequent assessment, with the stated mantra being “Every teacher every month”;
- Relevant assessment, with items explicitly drawn from the practice or program; and
- Actionable assessment, where teachers are provided a path forward for improvement when an element is missing.
What are some examples of fidelity tools? The October RSE-TASC Reporter describes how a fidelity measure for Self-Regulated Strategy Development (SRSD) might be developed. SRSD has six steps that are key to the practice, and the article explicitly describes each step so that a teacher can ensure all six components are included in instruction. Another example, this one provided by a program’s developers, is the Benchmarks of Quality (BoQ), a PBIS fidelity measure. The BoQ allows a PBIS team to assess their implementation of ten critical elements of PBIS using observable, measurable items.
Fidelity of implementation measures are the first step in program evaluation and are critical to student success. As McKenna, Flower, and Ciullo (2014) state: “It is imperative that practitioners measure fidelity to maximize instructional effectiveness and determine whether teacher practices are influencing student outcomes or if changes are needed.” If your core program is not meeting the needs of 80% or more of your students, you must first know whether the program is being implemented correctly. When fidelity is much lower than 100%, professional development resources can be better aligned to meet teachers’ needs. If fidelity is close to or at 100% and student outcomes are still not improving, it may be time to consider a change in program.
So, is your program effective? If not, why not? The only way to begin to truly know is with fidelity data.
McKenna, J. W., Flower, A., & Ciullo, S. (2014). Measuring fidelity to improve intervention effectiveness. Intervention in School and Clinic, 50(1), 15–21. https://doi.org/10.1177/1053451214532348