The use of data is not new to schools, teachers, administrators, state education agencies, or parents. Indeed, data has been used by school administrators and teachers since schooling began; however, never has data literacy been as important as it is in the wake of the No Child Left Behind Act (NCLB) of 2001. NCLB mandated that teachers systematically analyze data collected from standardized state- and national-level assessments and use the findings in instructional decision making, ushering in the era of “data-driven decision-making”. This continues as a priority in the Every Student Succeeds Act (ESSA), the successor to NCLB.
The underlying premise of data-driven decision-making is that educators collect and use data in a way that leads to improvements in instruction and student outcomes. Data is meant to be collected, organized, and analyzed, then combined with educators’ understanding and expertise to become actionable knowledge (Marsh et al., 2006). Individual educators and teams of educators then engage in a Plan-Do-Study-Act, or PDSA, cycle (see NIRN) where they plan and take action based on their understanding of the data and of effective instructional practices (Plan-Do), collect new data to assess the effectiveness of those actions (Study), and continue to analyze and act to improve instruction and student outcomes (Act). The cycle might focus on a range of important actions, from setting a school-wide goal and monitoring progress toward attainment of the goal, to assessing the impact of targeted support for low-performing students, to evaluating a new program or set of practices.
So how effective have we been as a field in implementing data-driven decision-making? A 2006 article from the RAND Corporation by Marsh, Pane, and Hamilton still resonates today. The authors note many challenges in engaging in the process:
- The quality and types of data used: We still tend to over-rely on statewide assessment test scores and interim progress tests based on those assessments. We are not as systematic in learning from other data sources like classroom-generated data (student work on classroom tests, assignments and homework); examination of classroom practices (learning walks and walk-throughs that include observations of instruction and materials and talking to students); and satisfaction and opinion data (teacher, student, family surveys).
- Easy and timely access to data: Because of our reliance on statewide and interim progress tests, teachers and teams often do not receive data on student performance until significant time has passed.
- Perceived quality of data: Many educators express doubts about the validity of these data sources, citing factors like low survey response rates, concerns about whether students take assessments seriously, and questions about whether tests measure meaningful aspects of the curriculum. These doubts erode buy-in and support for using the data.
- Training in use of data to make decisions about instruction: Many educational leaders have not received training and coaching in using data to make meaningful decisions. This leads to the frequent teacher complaint that they are “drowning in data” that yields no useful, actionable information.
- Curriculum pacing pressures: Because of curriculum pacing calendars, many teachers still feel they lack the flexibility to alter instruction when data analysis reveals problems.
- Lack of time to collect, analyze, synthesize and interpret data.
- Organizational culture and leadership: Effective implementation of data-driven decision-making can only occur in school cultures with well-established norms of openness and collaboration, and when school leaders have a strong vision for data use. If teachers do not see data use as an opportunity to reflect on and improve craft but as a “gotcha”, they will have little motivation to engage in the process.
So what do Marsh et al. recommend? Read the full report, as well as the Fitch Research Roundup in the School Tools section above, for a more in-depth discussion of effective strategies for data-driven decision-making, but here is a summary of some of their recommendations:
- Inventory your student outcome assessments to ensure they serve a clear purpose and provide useful information; stop using assessments that do not provide actionable information
- At the same time, collect and use other types of data, like satisfaction data and data on implementation of practices, to inform decision-making
- Ensure you are using data that can be accessed and analyzed immediately; assign individuals to filter data and help translate them into usable knowledge
- Invest time in identifying a repertoire of evidence-based instructional practices and interventions to use when a problem is detected
- Train staff in a problem-solving process that goes beyond analyzing data to taking action based on data; allocate adequate time for educators to study and think about the available data and to collaborate in interpreting data and developing next steps and actions
- Give teachers sufficient flexibility to alter instruction based on data analysis rather than following mandated curriculum pacing schedules
These are not easy fixes in our complex educational systems, but this RAND research brief suggests that these practices result in more engaged teachers and improved outcomes for students. If we start with that as our “Why”, we need to strive for these “Hows”.
Marsh, J. A., Pane, J. F., & Hamilton, L. S. (2006). Making sense of data-driven decision making in education. Santa Monica, CA: RAND Corporation. https://www.rand.org/pubs/occasional_papers/OP170.html