Action Step and Orientation
A1. Create and maintain a literacy assessment plan.
The Assessment component of the Texas State Literacy Plan (TSLP) is closely linked to the Effective Instructional Framework component, which calls for implementation of a response to intervention (RTI) model for literacy instruction.
The Assessment component includes Action Steps focused on each of the important purposes for using data in the context of RTI implementation:
- to identify students experiencing or at risk for literacy difficulties;
- to determine students' specific instructional needs;
- to measure the response students have made to instruction and/or intervention; and
- to evaluate overall literacy achievement and the effectiveness of literacy instruction schoolwide.
Action Step A1 calls for schools to establish a literacy assessment plan that incorporates the effective collection and use of data for these purposes.
Part 1 of this lesson outlines the key elements for assessment, such as reliability and validity, which relate to each of the assessment purposes explored in the subsequent Action Steps.
In Part 2, you will examine the data that you are already collecting at your school and identify whether you have gaps or overlaps that you need to address in an articulated assessment plan.
To get started, download the Implementation Guide for this component and refer to the Action Step for this lesson. Review the Implementation Indicators for each level of implementation and note the Sample Evidence listed at the bottom of the chart.
Part 1—Key Elements of Assessment
Most schools in Texas are already conducting assessments and using data in various ways. The goal of this module is to help your team ensure you get the most out of these data to make important decisions about literacy instruction at your school.
In this lesson and throughout the Assessment module, you will be asked to reflect on and look critically at your assessment practices and to extend that discussion among your staff. This section outlines key elements of assessment to help you and your staff use the same language and vocabulary as you engage in this discussion.
In the realm of assessment, reliability is about consistency: Will the measure consistently produce similar results under similar circumstances? Those similar circumstances might be taking the same test again, being assessed by another examiner, being assessed with another method, or even answering similar questions on the same test. If the results are consistent across these circumstances, we consider the measure reliable.
A simple example of an assessment measure that could give reliable results is a scale. If you weigh yourself each morning after your shower, you may get a reliable (consistent) result, staying more or less the same or showing small, expected variations based on your recent activities.
Reliability can be calculated, and you will find reliability expressed as a decimal. The closer the decimal is to 1.0, the more reliable the assessment is for its prescribed purpose. Most experts believe that .80 or higher indicates strong reliability. Note that most schools and districts do not have the resources to test locally developed assessments for reliability. If reliability has not been evaluated, you cannot assume that the results from such assessments alone are a reliable source of information about students' skills and progress.
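To make the idea of a reliability coefficient concrete, here is a minimal sketch of one common approach, test-retest reliability, computed as the Pearson correlation between two administrations of the same assessment. The student scores below are invented for illustration only; real reliability studies use large, representative samples and often other methods (internal consistency, inter-rater agreement).

```python
# Hypothetical illustration: test-retest reliability as the Pearson
# correlation between two administrations of the same assessment.
# All scores below are invented for demonstration purposes.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# The same ten students tested twice with the same instrument.
first_administration = [85, 72, 90, 64, 78, 88, 70, 95, 60, 82]
second_administration = [83, 75, 92, 61, 80, 86, 73, 94, 63, 80]

r = pearson_r(first_administration, second_administration)
print(f"Test-retest reliability: {r:.2f}")
```

Because the two sets of scores track each other closely, the coefficient here lands well above the .80 benchmark; large, unexplained swings between administrations would pull it down.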
Keep in mind, though, that reliability alone does not guarantee a true measure. Results can be consistently wrong, as well as consistently right. That bathroom scale just might be off a few pounds. For this reason, scales used in high-stakes endeavors, such as the sale of precious metals or weight-governed sports like wrestling, are carefully calibrated. Literacy assessment data also need to be more than just reliable; they also need to be valid.
Validity is the degree to which the results of an assessment actually reflect what the test and the interpreters of the test intend to measure. Looking at validity means ruling out any variables that are not related to what you are trying to measure.
Reliability is a necessary condition for validity. If your bathroom scale showed great differences in your weight from day to day, this inconsistency would indicate that there is something distorting the real results, some outside variable (other than your weight) that is influencing the number on the scale. Therefore, those results would not be valid.
Reliability alone is not enough to ensure validity, however. There may be variables that consistently distort the results of the assessment to the same degree or in the same manner. Imagine you are asked to re-take the professional portion of your certification exam, but this time in Russian or some other language you do not know. Even if you take the exam multiple times, you are likely to have low results. Not understanding the test itself presents a significant (and consistent!) variable unrelated to what the test says it is measuring. That consistency indicates high reliability, but this is not a valid measure of your knowledge of the field of education.
An important point to keep in mind is that assessment data may be valid for one purpose but not for another. As stated above, validity is in relation to what the assessment is intended to measure. Throughout the Assessment component, the TSLP calls upon you to ask yourselves, "Are these data valid for the instructional decision we are considering?" The information in each lesson will help you answer that question for each type of assessment.
Formal and informal assessments
Formal assessments are those that are administered and scored using prescribed procedures and that have been examined for reliability and validity. Examples include state assessments such as the State of Texas Assessments of Academic Readiness (STAAR), end-of-course exams, and the Texas English Language Proficiency System (TELPAS) reading test. Commercially produced assessments that have been field tested for reliability and validity, and that are administered and interpreted in a standard way, would also be considered formal assessments.
Informal assessments are those that are not administered with standard procedures or that have not been examined for reliability and validity. These include teacher-created tests, including essay tests; informal reading inventories; observations of students at work; student presentations; and department and district tests that have not been rigorously field tested for reliability and validity.
Informal assessments are useful in many ways. The teacher discretion involved in informal assessments may allow for the assessment to be adapted or deepened in response to student performance. Whether in one-to-one conferences, observations of groups, or "ticket out" questions at the end of class, teachers frequently gather data informally to help them gauge student understanding and determine next steps in their classes. Informal assessments alone are not, however, appropriate as the main source of data to make instructional decisions called for in the response to intervention (RTI) model.
Norm-referenced versus criterion-referenced
These terms refer to the way that assessments are designed to be interpreted. One of these ways is to compare a student's performance with that of a representative sample of peers at the student's age and grade level. That comparison group is the "norm" in "norm-referenced." The SAT, ACT, and Graduate Record Examination (GRE) are examples of norm-referenced assessments. This type of test is best at answering the question "How well is this student developing this knowledge or skill compared to his or her peers?"
Criterion-referenced tests compare a student's performance not to that of other students in a cohort or group, but to an expected level of performance that is established ahead of time. A test with a predetermined passing standard, like a teacher certification exam, is criterion-referenced. This type of test attempts to answer the question "How well did this student master this content or these skills?"
Formative versus summative
Like other terms here, the terms "formative" and "summative" are linked to the purpose of the assessment and how the results are used. When teachers gather information during a lesson or a course in order to measure interim learning and to make instructional adjustments accordingly, this is formative assessment.
Summative assessments are given at specific times to measure the achievement of students at that time. The information is used to report individual student performance and can also be used to evaluate how groups of students performed in a given course. Low performance in particular areas of a summative assessment (or overall) may spur educators to look into ways to adjust future instruction or curricula for a course, but this would not typically impact the students who were assessed. With formative assessment, by contrast, teachers are still teaching the content that was assessed, so they can adjust instruction in time to address areas of need indicated by the data.
Formative assessments are often informal (without strict administration and interpretation procedures). Not all summative assessments are formal, however. Unit tests, midterms, and final exams that are typically developed at the classroom, grade, school, or district level may vary in the strictness of their administration and interpretation. What is more, these tests are rarely field tested to determine their reliability and validity and, therefore, should be understood as informal.
Part 2—Assessment Plans
If the review of these key elements of assessment reminds you of a class on research methods, it is for good reason. The RTI model asks educators to investigate their students' learning much as a scientist conducts a research project. In looking at assessment this way, these important questions for RTI implementation emerge:
- Are there students at risk for or currently experiencing difficulties in literacy whom we can identify right away?
- What are the specific causes of students' difficulties in literacy?
- How are students responding to literacy instruction and intervention?
- How did students do overall in literacy achievement, and what does this say about the effectiveness of literacy instruction at our school?
These questions correspond to Action Steps A2 through A5 of the Assessment component of the TSLP. In Action Step A1, your team is called upon to articulate an assessment plan that addresses each of these questions. This plan may be embedded in your data-informed plan for improving literacy instruction or a campus improvement plan. Action Step A1 calls upon schools to have an articulated plan for assessment, communicate that plan to stakeholders, and follow the plan consistently. You might use this assessment plan checklist based on the Indicators for Action Step A1.
Most schools conduct many different assessments and collect a large amount of data. One way to evaluate whether your assessment system meets the criteria outlined in the TSLP is to conduct an assessment audit. This audit will help you evaluate the degree to which your current assessment data collection adequately addresses the questions above, which correspond to the key instructional decisions required in an RTI framework.
Assessment Audit instructions
- Download the Assessment Audit form.
- Complete the Instrument column for each assessment purpose (Action Steps A2 through A5).
- Complete the Timeline column for each instrument as precisely as possible.
In Lessons 2–5, use the information provided to consider the use of data, alignment, and validity of each instrument for each assessment purpose (instructional decision). Use the two middle columns of the form to guide your discussion.
The first step in using the Assessment Audit form is to determine which assessments you currently use for the purpose of addressing each question. There are multiple spaces available for each assessment purpose because you may use more than one assessment. For example, the first assessment purpose is to identify students who may be at risk for literacy difficulties. Assessment for this purpose is frequently referred to as screening. You might use multiple instruments to screen students for reading difficulties; if so, you can list them separately under the A2 section of the audit form. This will allow you and your team to look carefully at each one as you work through Lessons 2–5.
As you complete the first column of the audit, you may notice overlaps, for example, where multiple assessments for a given purpose may not all be necessary or where the data from some of these assessments are not used or helpful in making the instructional decisions at hand. You may also find gaps where you have not yet identified assessment tools that provide information for one or more of the other purposes.
For this lesson, consider these questions with your team:
- Does your campus use a large number of assessments for any single purpose?
- Are there any purposes (the first column of the form) for which you are not currently assessing?
- Are there any assessment instruments currently in use for which the purpose is unclear?
These questions are a start to examining your assessment and data-use practices with regard to the RTI framework at your campus. Sorting out the issues that arise from an assessment audit is an endeavor that your team will work on for longer than just this lesson, perhaps even beyond the time frame of this module. The Assessment Audit form and the lessons in this module can be used to help your team begin this discussion, both within your team and across your campus.
TO LEARN MORE: To learn more about the role and types of assessment used for instructional decisions within an RTI framework, you may want to review the following sources:
The first lesson in the Effective Instructional Framework module, Lesson E1—Data to inform instruction, provides information on RTI, assessment plans, and the role of assessment in RTI.
The Center on Response to Intervention hosts a wide collection of webinars, articles, and resource documents for implementing RTI. These webinars provide an overview of RTI and the role of assessment.
NEXT STEPS: Depending on your progress in establishing and articulating an assessment plan at your campus, you may want to consider the following next steps:
- Identify and examine current assessment instruments and practices, such as beginning to complete the Assessment Audit as described in Part 2.
- Establish, review, or update your assessment plan, including data meeting schedules and procedures.
- Determine which staff members have been trained in administering and interpreting the various assessments used at your campus.
A1. Create and maintain a literacy assessment plan.
With your site/campus-based leadership team, review your team’s self-assessed rating for Action Step A1 in the TSLP Implementation Status Ratings document and then respond to the four questions in the assignment.
In completing your assignment with your team, the following resources and information from this lesson’s content may be useful to you:
- Refer to Part 1 for a review of some key elements of assessment.
- Refer to Part 2 for information about assessment plans and the Assessment Audit form.
Next Steps also contains suggestions that your campus may want to consider when you focus your efforts on this Action Step.
To record your responses, go to the Assignment template for this lesson and follow the instructions.
Burns, M. K., & Gibbons, K. (2012). Implementing response-to-intervention in elementary and secondary schools: Procedures to assure scientific-based practices (2nd ed.). New York, NY: Routledge.
Haager, D., Klingner, J., & Vaughn, S. (Eds.). (2007). Evidence-based reading practices for response to intervention. Baltimore, MD: Brookes Publishing.
McKenna, M. C., & Stahl, K. A. D. (2008). Assessment for reading instruction (2nd ed.). New York, NY: The Guilford Press.
Mellard, D. F., & Johnson, E. S. (2008). RTI: A practitioner's guide to implementing response to intervention. Thousand Oaks, CA: SAGE Publications.