Action Step and Orientation

A5. Use assessment data to evaluate students' overall literacy performance.

This lesson focuses on analyzing and using data at the campus level.

Part 1 provides an overview of using assessment data to evaluate students' overall literacy performance and how you may integrate this practice into your literacy assessment plan.

Part 2 focuses on analyzing summative, or outcome, assessments to inform instructional decision making.

In Part 3, you will consider how your data analysis supports you in evaluating Tiers I, II, and III instruction and setting goals for the future.

To get started, download the Implementation Guide for this component and refer to the Action Step for this lesson. Examine the Implementation Indicators for each level of implementation and note the Sample Evidence listed at the bottom of the chart.

Part 1—Using Assessment Data to Evaluate Students' Overall Literacy Performance

This module explains how to create and maintain a balanced literacy assessment plan at your campus. Lessons 1–4 describe some of the key purposes for assessing students within a response to intervention (RTI) framework, and show how to evaluate the measures used at your campus to ensure alignment between the purpose of each assessment and how the data are used in your efforts to improve literacy outcomes for all students. The following purposes for assessing students and using data are presented:

  • To identify students at risk of literacy difficulties (Lesson 2)
  • To determine students' specific instructional needs (Lesson 3)
  • To monitor students' progress toward targeted literacy goals (Lesson 4)

All of these assessment activities are key in optimizing the RTI problem-solving process and engaging in effective educational decision making. Each supports you in making valid inferences about students' needs, allocating resources to meet those needs, and accurately monitoring students' response to targeted instruction. Now you will consider how to evaluate the overall impact of these ongoing processes by looking at student outcome data.

You might consider the following scenario as you begin thinking about this Action Step.

A group of medical researchers who focus on heart disease may examine patient data at the group level to explore trends in patients' health needs and the effects of treatment. For example, researchers may identify trends in patient needs that are not being adequately addressed. Further, they may compare groups of patients receiving different treatments, or combinations of treatments, to determine which treatments work best, as shown by improvement in patients' overall health.

Just as researchers examine the overall effects of different treatments for patients, effective educators and leaders examine how instructional programming and decision making affect students' overall learning and literacy development. Action Step A5 is about looking at student outcomes and considering ways to best use this type of data in your continuing efforts to improve literacy instruction at your campus. As with the other types of assessment activities, you will be identifying students' ongoing literacy needs and planning support accordingly; however, you will also be taking a step back to evaluate the efficacy of your instructional framework as a whole. Through this process, you will consider data that will help you answer "big-picture" questions such as these:

  • What areas of strength or need do the outcome data suggest we should address?
  • Have our instructional decisions made an impact? In core instruction? In Tiers II and III instruction?
  • What challenges do we continue to face in allocating resources and efficiently providing services that target students' needs? How do we address those challenges in the future?

In general, analyzing student outcomes is an activity within your campus's assessment cycle that is focused on instructional planning at the campus and/or district level. It includes collaborative analysis and discussion among instructional leaders, teachers, and specialists to address program-wide gaps in meeting students' language and literacy needs.

What types of assessment data are used for this purpose?

In grades 6–12, you use an array of tools or assessments to evaluate students' overall literacy performance. One of the most common types of assessment tools used for this purpose is a summative assessment. In broad terms, summative assessments measure whether students have learned what they were supposed to learn. Summative assessments are typically administered after instruction (as opposed to formative assessments, which are administered during instruction), and they are administered to all students. These assessments tell educators about students' skill mastery and are used to make educational decisions about resource allocation in response to how students have performed (Understanding Types of Assessment within an RTI Framework, Center on Response to Intervention).

Most often, summative assessments are administered at the end of the instructional year, and these assessments can be used to determine the effectiveness of instruction (Wixson & Valencia, 2011). Standardized, end-of-year (EOY) assessments are most commonly administered for this purpose. The State of Texas Assessments of Academic Readiness (STAAR) is the standardized assessment used in Texas as the basis for determining whether students have met grade-level expectations in literacy. However, in grades 6–12 (particularly grades in which students do not take state assessments), you may use additional tools to examine students' overall literacy performance. For example, in 6th grade, you may look at EOY screener data such as Aimsweb or the Texas Middle School Fluency Assessment (TMSFA) to evaluate overall student progress in literacy skills. By 11th and 12th grades, after students have passed the English Language Arts (ELA) End-of-Course (EOC) exam, their English language arts grades may serve as an adequate measure of literacy skills. Also, you may use curriculum-based district assessments to measure students' mastery of literacy skills.

While summative assessment data can be used to analyze the overall literacy skills of students at a school, formative progress monitoring data are generally not considered appropriate for analyzing student outcomes. However, you may look at aggregate formative progress monitoring data (for students receiving Tier II or Tier III services) as part of your evaluation of the implementation of the RTI process at your school. In combination with summative outcome measures, looking at trends in progress monitoring data for different student groups (e.g., by specific intervention, by specific need) can help evaluate the efficacy of targeted literacy instruction.
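To make this kind of aggregate analysis concrete, the sketch below (plain Python; the group names and scores are entirely hypothetical, not drawn from any real assessment) computes the average beginning-of-year to end-of-year gain for each intervention group — the sort of group-level trend summary a team might bring to a data meeting, as opposed to a judgment about any individual student.

```python
from collections import defaultdict

# Hypothetical beginning-of-year (BOY) and end-of-year (EOY) scores for
# students in two illustrative intervention groups.
records = [
    {"group": "Tier II fluency",   "boy": 110, "eoy": 142},
    {"group": "Tier II fluency",   "boy": 98,  "eoy": 125},
    {"group": "Tier III decoding", "boy": 74,  "eoy": 88},
    {"group": "Tier III decoding", "boy": 70,  "eoy": 79},
]

# Aggregate gains by intervention group to look for trends across groups.
gains = defaultdict(list)
for r in records:
    gains[r["group"]].append(r["eoy"] - r["boy"])

average_gain = {group: sum(g) / len(g) for group, g in gains.items()}
print(average_gain)  # {'Tier II fluency': 29.5, 'Tier III decoding': 11.5}
```

A real analysis would of course draw on your campus's actual progress monitoring system and far more students, but the structure — pool students by intervention, then compare group-level growth — is the same.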

Additionally, language proficiency data (for example, from the Texas English Language Proficiency Assessment System, or TELPAS) are important to analyze to determine if English learners have met yearly goals in English language development. Language development and literacy development in English are strongly related; you and your staff will want to look at data for both language and literacy development as you analyze literacy outcomes for English learners.

You and your team can use your Assessment Audit to guide your discussion about the instructional decisions made from the assessments administered at your site or campus. For each instrument used to measure student outcomes, discuss these questions:

  • What are the data from this instrument used for? In other words, what decisions will be made based on these data?
  • Are the criteria for this decision defined?
  • If yes, are the criteria communicated to teachers and other stakeholders?

You can refer to the third column of the Assessment Audit form for this discussion.

Previous lessons in this module explain the importance of looking at the validity and alignment of each source of data for the purposes at hand: identifying struggling students, determining specific needs, and monitoring progress toward goals. As you discuss these questions with your team in relation to Action Step A5, you may notice that you are looking at data through different lenses. The goals here are broader because you will be looking more globally at the effectiveness of instruction and intervention across the school and grade levels.

As you use data to determine the effectiveness of your overall program, you might consider what criteria would enable you to determine exactly how well an initiative or program is succeeding. Part of your discussion will also include deciding what other information you might need to effectively evaluate the implementation and outcomes of literacy instruction so that you can make solid decisions for next steps.

For example, when implementing a reading intervention class for students who have failed the STAAR test, it might not be sufficient to measure only outcomes for those students (i.e., success on EOC re-take). You might also consider what measures will allow you to know along the way if the intervention is being implemented effectively. If your measures along the way show the intervention is not effective for some students, you can consider the actions you and your team might take to enhance the intervention or to shift emphasis. In this way, you can see how it is important to use a two-pronged approach to evaluation: analyze outcome data to evaluate program effectiveness at the end of the year and examine data throughout the year to respond to how students are progressing.

Part 2—Analyzing Outcome Data to Inform Action

When used appropriately, summative assessments can support educators in analyzing their own instruction and targeting areas of improvement for practice. On a wider level, analyzing outcome data can inform school leaders of ways to improve their RTI framework, as this practice is focused on evaluating the effectiveness of a program, intervention, or the overall literacy curriculum (Ball & Christ, 2012).

You and your team are likely familiar with these steps for analyzing student data to inform action:

  • Identifying areas of strength and need in global student performance
  • Investigating possible causes for different outcomes for different students or groups of students
  • Engaging in critical dialogue with key campus staff to determine the best action steps that address identified areas of need

As part of your campus assessment system, you and your staff should already be engaging in these practices multiple times throughout the year. You can identify learning problems through screening and diagnostic assessments, and you can continuously engage in program evaluation and planning as you determine ways to address those learning problems and monitor progress toward set goals. Examining data from summative or outcome measures provides another opportunity for you to evaluate the educational decisions you have made, allowing you to look at trends across classrooms, grade levels, student populations, and even campuses within a district to better understand the effectiveness of the RTI framework as a whole.

The figure below provides some key discussion questions you and your team may use to engage staff in this type of analysis:

[Figure: Discussion questions for analyzing outcome data]

Part 3—Using Data to Inform Both Core (Tier I) and Supplemental (Tiers II and III) Instruction

Collecting and analyzing data on students' overall literacy performance is important to the evaluation of both core (Tier I) and supplemental (Tiers II and III) instruction. As part of your data-analysis process, you will want to examine what the data are telling you about instruction in all tiers.

Quality, evidence-based Tier I instruction is at the foundation of an effective RTI framework, and without it in place, it is likely that the need for Tiers II and III instruction will exceed available resources. Thus, using summative data to evaluate Tier I instruction should be considered a critical step in evaluating the RTI process at your campus. For example, if 40% of students at your campus are unsuccessful on the EOC STAAR assessment, you might consider the ways in which you can adjust Tier I instruction.

Likewise, it is important that the analysis of summative data be extended to the evaluation of Tiers II and III instruction, and you and your team will want to ensure that this activity is integrated in the decision-making process for students receiving additional support (McDougal, Graney, Wright, & Ardoin, 2010). Disaggregating outcome data for students receiving Tiers II and III interventions will help you identify trends within and across these groups. If there are significant numbers of students in Tier II or III not making adequate gains, you have reason to further evaluate the efficacy of that intervention instruction. In turn, if there are significant gains made by particular groups of students in specific interventions, you will want to identify why those practices were successful and coordinate resources to ensure those practices are implemented across intervention groups.
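As a minimal illustration of disaggregation, the sketch below (plain Python; the group labels, outcomes, and 70% cutoff are all hypothetical placeholders, not a recommended criterion) computes the share of students meeting the standard within each intervention group and flags groups whose results may warrant a closer look at the intervention itself.

```python
# Hypothetical outcome data: (intervention group, met standard on the summative assessment?)
outcomes = [
    ("Tier II comprehension", True),  ("Tier II comprehension", True),
    ("Tier II comprehension", False), ("Tier II comprehension", True),
    ("Tier III decoding", False),     ("Tier III decoding", True),
    ("Tier III decoding", False),     ("Tier III decoding", False),
]

THRESHOLD = 0.70  # illustrative cutoff for "adequate" group-level results

# Tally (students meeting standard, total students) for each group.
groups = {}
for group, met in outcomes:
    passed, total = groups.get(group, (0, 0))
    groups[group] = (passed + int(met), total + 1)

for group, (passed, total) in sorted(groups.items()):
    rate = passed / total
    flag = "review intervention" if rate < THRESHOLD else "on track"
    print(f"{group}: {passed}/{total} met standard ({rate:.0%}) - {flag}")
```

In practice, your team would set criteria appropriate to your campus and examine flagged groups further (fidelity of implementation, scheduling, student attendance) before concluding that the intervention itself is the problem.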

Once your team has analyzed patterns in the data and identified possible causes for performance gaps, you will want to extend this process to evaluate the instruction being delivered to students in all tiers. You may consider the following questions when evaluating Tiers I, II, and III instruction and addressing performance gaps:

[Figure: Guiding questions for evaluating Tiers I, II, and III instruction]

You may want to download a copy of the entire flow chart Analyzing Outcome Data to Inform Action: Questions to Guide the Process.

While analyzing outcome data is essential to understanding program effectiveness and making decisions for the next school year, it is also important to understand how effective a program is while it is being implemented. This is especially true for students in intervention programs. It may not be acceptable to wait until the end of the year to know if a yearlong intervention class is effective. You might want to know at several points throughout the year to what degree the intervention is working. In this case, you might consider ways to measure program effectiveness throughout the year. Many schools use district-created, curriculum-based assessments to measure Tier I program effectiveness. These assessments are closely aligned to the goals of Tier I instruction (i.e., mastery of state standards) and enable teachers to adjust their instruction when it has not been effective. This kind of assessment data should be shared with interventionists.

In addition, interventionists should consider what kinds of assessments they could administer to understand how effective the intervention is for most students. You may find it useful to look at progress monitoring data or use other assessments that are likewise closely tied to the goals of the intervention in determining the effectiveness of intervention instruction throughout the year. In this case, you are looking through the lens of program evaluation with a focus on the overall effectiveness of intervention instruction rather than looking at individual student responses to the instruction. Switching between these "lenses" may take place at the same progress monitoring intervals but allows you to look at the bigger picture of Tiers II and III instruction overall (The High School Tiered Interventions Initiative: Progress Monitoring, Center on Response to Intervention).

Within an RTI framework, intervention integrity is a critical factor to consider in the problem-solving process. This pertains to how well tiered instruction is delivered "with regard to content, quantity, and process" as part of your RTI program (Ball & Christ, 2012, p. 237). Another common term used for this characteristic is fidelity. Fidelity of the intervention program refers to the quality of its implementation and the degree to which practitioners "implement programs as intended by the program developers" (Using Fidelity to Enhance Program Implementation Within an RTI Framework, Center on Response to Intervention).

When determining the degree to which students have met goals, it is important to measure and evaluate the fidelity of instruction delivered to those students and identify any departures or modifications made to intervention instruction (e.g., modified pacing, inconsistency in time allotted or daily schedule). You and your team will need to determine how you can measure the fidelity of implementation and what impact any deviations have had on student learning, and then you can plan for ways to address any gaps in the delivery of instruction within your RTI framework.

As part of your instructional planning, you may consider ways to better measure fidelity of instruction to ensure that students are receiving high-quality, evidence-based intervention instruction before making decisions about students' future needs. There are a variety of tools that can help measure and monitor fidelity. The Monitoring Fidelity in RTI webinar, available on the Center on Response to Intervention site, is one helpful resource for understanding fidelity in RTI. The webinar also discusses specific tools that may help you monitor the integrity of intervention instruction.

In sum, as you engage in data analysis and critical discussion of these issues, you and your team will want to determine action steps to address any identified gaps. As part of the cycle of data-based decision making, you will want to ensure that instructional staff collaborate to set program-wide goals for the next year and create a plan to meet those goals. As the cycle continues, you will use your balanced assessment plan to determine the impact of your instructional plan and the degree to which goals are being met. This cycle of data analysis and decision making is part of the practice of creating, evaluating, and revising your data-informed plan for improving literacy instruction, as introduced in the Leadership module of this course.


TO LEARN MORE: Extensive research and resources are available on the topic of using data to inform educational reform efforts. These resources can provide guidance as you establish and maintain this process at your campus in a way that best aligns with the needs of educators and students in your particular context.

The Center on Response to Intervention's Implementation & Evaluation section provides information about RTI implementation, implementing RTI with fidelity, and evaluating your RTI framework. It contains a variety of guidance documents and training modules related to these topics.

The Regional Educational Laboratory (REL) Pacific has put together an excellent resource on logic models, “Logic Models: A Tool for Designing and Monitoring Program Evaluations.” Logic models can help educators understand the causes, effects, and systems surrounding programs (such as Tier II interventions or the implementation of a new Tier I curriculum) on their campuses. This introduction to logic models as a tool for designing program evaluations defines the major components of education programs—resources, activities, outputs, and short-, mid-, and long-term outcomes—and uses an example to demonstrate the relationships among them.

REL Pacific has also put together a brief, “Five Steps for Structuring Data-Informed Conversations and Action in Education,” which outlines guiding questions, suggests activities, and provides a framework and tools needed to support tough conversations about data. The guide also outlines five key steps in using data to make decisions about program effectiveness: setting the stage, examining the data, understanding the findings, developing an action plan, and monitoring progress and measuring success.

Brown and Doolittle's (2008) practitioner brief titled “A Cultural, Linguistic, and Ecological Approach to Response to Intervention with English Language Learners” provides guidance in ensuring that the RTI framework is implemented to meet the unique needs of English learners. Practitioners can find guiding questions to consider when evaluating intervention decisions made about English learners and the integrity of instruction in each tier. It is available on the Center on Response to Intervention site.

REL Mid-Atlantic conducted a webinar, Root Cause Analysis, which provides an outline of root cause analysis and several resources to help educators implement root cause analyses for problems that arise on their campuses. Root cause analysis can be a powerful method for educators to analyze data in new ways and identify the root causes of events. This type of analysis can help you avoid the pitfall of trying to solve the "wrong" problems and missing the root causes of the issues at hand.


NEXT STEPS: Depending on your progress in using assessment data to evaluate overall literacy performance, you may want to consider the following next steps:

  • Determine how data are being used to evaluate the effectiveness of programs as they are being implemented.
  • Discuss how outcome data are used to make instructional and programmatic decisions. You might use the third and fourth columns of the Assessment Audit to think through these issues.
  • Gather and review the intervention materials (e.g., teachers' guide, instructional manuals) to determine if interventions are being delivered as designed.
  • Determine which staff members have been trained in analyzing summative data and plan necessary professional development to address staff needs in this area.
  • Determine procedures for gathering and sharing additional data to support valid decisions for all students and student groups, such as English learners.


A5. Use assessment data to evaluate students' overall literacy performance.

With your site/campus-based leadership team, review your team’s self-assessed rating for Action Step A5 in the TSLP Implementation Status Ratings document and then respond to the four questions in the assignment.

TSLP Implementation Status Ratings 6-12

As you complete your assignment with your team, the following resources and information from this lesson’s content may be useful to you:

  • Refer to Part 1 for an overview of using assessment data to evaluate students' overall literacy performance.
  • Refer to Part 2 for information about analyzing summative data for instructional planning purposes.
  • Refer to Part 3 for information about using summative data to evaluate literacy instruction in Tiers I, II, and III.

Next Steps also contains suggestions that your campus may want to consider when you focus your efforts on this Action Step.

To record your responses, go to the Assignment template for this lesson and follow the instructions.


Follow instructions provided by your school or district.


Ball, C. R., & Christ, T. J. (2012). Supporting valid decision making: Uses and misuses of assessment data within the context of RTI. Psychology in the Schools, 49(3), 231–244.

Center on Response to Intervention. (n.d.). The high school tiered interventions initiative: Progress monitoring [Webinar]. Retrieved from

Center on Response to Intervention. (n.d.). Understanding types of assessment within an RTI framework [Webinar]. Retrieved from

Center on Response to Intervention. (n.d.). Using fidelity to enhance program implementation within an RTI framework [Webinar]. Retrieved from

Hosp, M. K., Hosp, J. L., & Howell, K. W. (2007). The ABCs of CBM: A practical guide to curriculum-based measurement. New York, NY: The Guilford Press.

McDougal, J. L., Graney, S. B., Wright, J. A., & Ardoin, S. P. (2010). RTI in practice: A practical guide to implementing effective evidence-based interventions in your school. Hoboken, NJ: Wiley & Sons.

Reed, D. K., Wexler, J., & Vaughn, S. (2013). RTI for reading at the secondary level: Recommended literacy practices and remaining questions. New York, NY: The Guilford Press.

Wixson, K. K., & Valencia, S. W. (2011). Assessment in RTI: What teachers and specialists need to know. The Reading Teacher, 64(6), 466–469.