Assessment: Outcomes, Screeners, Progress Monitoring, and Intervention/Data
Curated Resources Menu
Assessment is the process of sampling student performance in a construct of interest to make decisions. Assessment can be completed in many ways, including tests containing multiple items, summary measures of performance on a complex task (like the number of mathematics problems solved in two minutes), or ratings and/or counts of behavior. The common feature is that assessment is a purposeful and objective measure of some skill, characteristic, or proficiency. The purpose of assessment in education is to inform instruction and intervention, not to collect data for its own sake. Although people often think of assessments themselves as reliable or valid, it is the resulting data that are found to be reliable or not, and it is the consequent decisions that are validated. To result in valid decisions, assessments must be aligned with, and judged in the context of, the decisions being made. For example, assessments used to screen students should be designed and validated to identify students who might need additional help. Making other types of decisions with an assessment designed to screen will likely result in decisions based on measurement error.
Given that individual assessments typically address only one purpose, they should be contextualized within a purposeful assessment system: a set of related assessments, each administered and interpreted for its intended purpose. For example, in the figure below, all students are screened, and all receive core instruction. Students for whom screening data suggest achievement below expectations might then receive diagnostic/placement assessments to gather additional information about what each student still needs to learn and where to begin instruction. With data from formative evaluation, teachers can better support students by assessing what was learned, what needs to be retaught, and the extent to which differentiation of core instruction is meeting students’ needs. Students who require an intervention in addition to core instruction would receive additional diagnostic data to focus intervention efforts, and progress monitoring data would be critical to gauge whether they are benefiting from the intervention. If not, additional diagnostic/placement data may be warranted to further intensify or modify the intervention. Later, all students again participate in an outcome measure. In each case, a given assessment/test should be selected based on how well it addresses the specific purpose and how well it aligns with core instruction, intervention, the community being assessed, the culture and language of the person being assessed, and other assessments within the system.
The goal of the Evidence Advocacy Center’s Assessment: Outcomes, Screeners, Progress Monitoring, and Intervention/Data Team is to improve the outcomes of PreK–12 students by making scientific evidence and validated assessment practices more readily available to, and more widely used by, PreK–12 educators and other stakeholders, and in turn to direct needed instructional resources to each student they serve. The Assessment Team curates and shares resources from trusted organizations and individuals with anyone interested in improving educational outcomes through enhanced data-based decision making, so that instruction and intervention are based on student needs and contexts, assessment practices are equitable, and student outcomes are closely monitored. Resources are organized by educational purpose.
Have questions or need implementation support?
For a specialized menu of resources for your specific needs, and to discuss implementation guidance and support, please contact us.
Online Resources (additional considerations)
*Resource focuses on fairness and equity in assessment.
Policy Makers
We All Count DATA EQUITY FRAMEWORK – We All Count*
FCRR Assessment Overview Purposes of Assessment
Great Schools 4 Ways to Make Assessments More Equitable | Great Schools Partnership*
Joint Committee Open Access Files – THE STANDARDS FOR EDUCATIONAL AND PSYCHOLOGICAL TESTING
National Center on Educational Outcomes: https://nceo.info/
Center for Assessment: https://www.nciea.org/
NCME Library – NCME
NCME Equity in Assessment Webinar*
Institutes of Higher Education
NCME https://www.ncme.org/home
Buros Buros Center for Testing
Eastern Oregon University and The Reading League Sample Syllabus for Assessment Course
EducationWeek Webinars on Assessment & Testing
Johnson & Gatlin-Nash Evidence-Based Practices in the Assessment and Intervention of Language-Based Reading Difficulties Among African-American Learners*
National Institute for Learning Outcomes Assessment Assessment Resources
International Test Commission ITC Videos – Cultural Adaptations
Center on Measurement Justice: https://measurementjustice.org/
University of Pittsburgh Making Your Assessments More Equitable*
Parents, Teachers, and School District Leaders
CASEL Choosing and Using SEL Competency Assessments: What Schools and Districts Need to Know
IES Making Sense of Educational Assessment
IES Practice Guide Using Student Achievement Data
Data Quality Campaign What Is Student Data?
Data Quality Campaign How Data Empowers Parents
Reading League Florida Teaching Foundational Reading Skills: Data-Based Decision Making
Teachers Institute The Importance of Validity in Educational Assessments • Teachers Institute
Project Expert: Understanding Student Data Video
Books and Articles
Assessment Journals
Assessment for Effective Intervention: Sage Journals
Educational and Psychological Measurement: Sage Journals
Educational Assessment | Taylor & Francis Online
Educational Measurement: Issues and Practice – Wiley Online Library
Journal of Educational Measurement – Wiley Online Library
Journal of Psychoeducational Assessment
Books
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. American Educational Research Association.
Cipani, E. (2017). Functional behavioral assessment, diagnosis, and treatment: A complete system for education and mental health settings (3rd ed.). Springer.
Hougen, M. C., & Smartt, S. S. (2020). Fundamentals of literacy instruction & assessment, PreK–6 (2nd ed.). Paul H. Brookes Publishing.
Popham, W. J. (2019). Classroom assessment: What teachers need to know (9th ed.). Pearson.
Smartt, S. S., Glaser, D. R., & Hasbrouck, J. (2023). Next STEPS in literacy instruction: Connecting assessments to effective interventions (2nd ed.). Paul H. Brookes Publishing.
von der Embse, N. P., Eklund, K., & Kilgus, S. P. (2022). Conducting behavioral and social-emotional assessments in MTSS: Screening to intervene. Routledge.
Wiliam, D. (2017). Embedded formative assessment: Strategies for classroom formative assessment that drives student engagement and learning. Solution Tree.
Ysseldyke, J., Chaparro, E. A., & VanDerHeyden, A. M. (2023). Assessment in special and inclusive education (14th ed.). PRO-ED.
What question does this answer?
Who is at risk in a broad area of performance and thus a candidate for additional services and supports?
Definition
- “Screening [involves] the use of brief assessments with a group of individuals to identify individuals who are at risk [for a particular condition or status] or may be at risk of future difficulties” (Kettler, Glover, Albers, & Feeney-Kettler, 2014).
- Additionally, general screening is a broadscale, relatively low-fidelity, efficient assessment of domain-level performance rather than skill-level assessment. General screening measures reflect performance across individual skills and, as a result, provide a composite picture of individual student performance and the effectiveness of instruction.
- General screening measures are designed to quickly and universally assess the current status of all individuals in a population and to identify those who, after further assessment, may be found not to be meeting current expectations and are therefore candidates for supplemental intervention.
- General screening measures have become commonplace in multi-tiered systems of support and other early and differentiated intervention models for academic achievement, behavioral competence, and a variety of other educational and public health areas of focus.
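The identification step described above is, at its core, a simple decision rule applied to everyone. The following is a purely illustrative sketch: the measure, cut score, and student scores are hypothetical and not drawn from any published screening tool.

```python
# Illustrative only: flag students whose universal screening scores fall
# below a hypothetical benchmark cut score. Flagged students would receive
# further assessment before any placement decision, as described above.

def flag_for_follow_up(scores, cut_score):
    """Return an alphabetized list of students scoring below the cut score."""
    return sorted(name for name, score in scores.items() if score < cut_score)

# Hypothetical fall oral reading fluency scores (words correct per minute)
fall_orf = {"Student A": 42, "Student B": 71, "Student C": 38, "Student D": 90}

print(flag_for_follow_up(fall_orf, cut_score=52))  # → ['Student A', 'Student C']
```

In practice, a cut score must be paired with evidence of classification accuracy (e.g., sensitivity and specificity); the NCII tools charts listed below review vetted measures on exactly these criteria.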
For whom?
All students participating in core instruction, typically applied universally (i.e., all students in a class, grade, school, or district)
Frequency
Typically, especially in most MTSS models, three times per year (i.e., Fall, Winter, Spring). More generally, often enough to detect changes in individual student development, achievement, or status.
Type of Measure
General outcome measure
Standardized achievement tests
Standardized rating scales
Online Resources (additional considerations)
NCII Academic Screening Tools Chart
NCII Behavior Screening Tools Chart
Reading Science Academy Should You Love CATS? (Computer-adaptive Assessments)
IES Considerations for Adopting Computer-Adaptive Assessments
IRIS Universal Screening Components
National Center on Improving Literacy (NCIL) Assessment Resources: Literacy Screening
Books and Articles
Clemens, N. H., Keller-Margulis, M. A., Scholten, T., & Yoon, M. (2016). Screening assessment within a multi-tiered system of support: Current practices, advances, and next steps. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of multi-tiered systems of support (pp. 187–213). Springer.
Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52(3), 219–232.
Ketterlin-Geller, L. R., Shivraj, P., Basaraba, D., & Schielack, J. (2019). Universal screening for algebra readiness in middle school: Why, what, and does it work? Investigations in Mathematics Learning, 11(2), 120–133.
Kettler, R. J., Glover, T. A., Albers, C. A., & Feeney-Kettler, K. A. (2014). An introduction to screening in educational settings. In R. J. Kettler, T. A. Glover, C. A. Albers, & K. A. Feeney-Kettler (Eds.), Universal screening in educational settings: Identification, implications, and interpretation (pp. 3–16). American Psychological Association.
Lewis, T. J., Sugai, G., & Colvin, C. (1998). Reducing problem behavior through a school-wide system of effective behavioral support: Investigation of a school-wide social skills training program and contextual interventions. School Psychology Review, 27, 446–459.
McConnell, S. R., Bradfield, T. A., & Wackerle-Hollman, A. K. (2014). Early childhood literacy screening. In R. Kettler, T. Glover, C. Albers, & K. A. Feeney-Kettler (Eds.), Universal screening in educational settings: Identification, implications, and interpretation (pp. 141–170). American Psychological Association.
Romer, N., von der Embse, N., Eklund, K., Kilgus, S., Perales, K., Splett, J. W., Suldo, S., & Wheeler, D. (2020). Best practices in social, emotional, and behavioral screening: An implementation guide (Version 2.0). https://www.smhcollaborative.org/universalscreening
Walker, H. M., Severson, H., Stiller, B., Williams, G., Haring, N., Shinn, M., & Todis, B. (1987). Systematic screening of pupils in the elementary age range at risk for behavior disorders: Development and trial testing of a multiple gating model. 1–57.
Walker, H. M., Small, J. W., Severson, H. H., Seeley, J. R., & Feil, E. G. (2014). Multiple-gating approaches in universal screening within school and community settings. In R. J. Kettler, T. A. Glover, C. A. Albers, & K. A. Feeney-Kettler (Eds.), Universal screening in educational settings: Evidence-based decision making for schools. (pp. 47–75). American Psychological Association. https://doi.org/10.1037/14316-003
Yeatman, J. D., Tang, K. A., Donnelly, P. M., Yablonski, M., Ramamurthy, M., Karipidis, I. I., Caffarra, S., Takada, M. E., Kanopka, K., Ben-Shachar, M., & Domingue, B. W. (2021). Rapid online assessment of reading ability. Scientific Reports, 11(1), 6396. (Assessment offered by Stanford University’s Reading & Dyslexia Research Program)
What question does this answer?
How should I change my teaching or behavior plan to improve student outcomes?
Definition
- All those activities undertaken by teachers, and/or by their students, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged (Black & Wiliam, 1998, pp. 7–8).
- Samples of the entire range of outcomes associated with a curriculum or instructional program, over a long period, to assess student mastery of those skills (Bloom et al., 1971).
- Formative assessment is a systematic process to continuously gather evidence and provide feedback about learning while instruction is under way. The feedback identifies the gap between a student’s current level of learning and a desired learning goal (Sadler, 1989).
- An ongoing assessment-to-feedback loop in which data both suggest current student functioning and dictate future instructional activities (Burns, 2010).
For whom?
All students
Frequency?
Continuously
Type of Measures
Subskill mastery measure
Informal assessments of student learning
Observation or ratings of behavior
Online Resources (additional considerations)
NWEA What is formative assessment? – Teach. Learn. Grow.
Renaissance Learning Understanding formative evaluation and its critical steps for successful learning
Visible Learning Formative evaluation Details
National Council on Measurement in Education Formative Assessment Modules – Professional Learning
Books and Articles
Burns, M. K. (2010, March). Formative evaluation in school psychology: Fully informing the instructional process. School Psychology Forum (Vol. 4, No. 1).
Kingston, N., & Nash, B. (2011). Formative assessment: A meta-analysis and a call for research. Educational Measurement: Issues and Practice, 30(4), 28–37.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74. https://assess.ucr.edu/sites/default/files/2019-02/blackwiliam_1998.pdf
Bloom, B. S., Hastings, J. T., & Madaus, G. F. (1971). Handbook on formative and summative evaluation of student learning. McGraw-Hill.
Heritage, M., Kim, J., Vendlinski, T., & Herman, J. (2009). From evidence to action: A seamless process in formative assessment? Educational Measurement: Issues and Practice, 28(3), 24–31. https://doi.org/10.1111/j.1745-3992.2009.00151.x
Stiggins, R. (2009). Assessment for learning in upper elementary grades. Phi Delta Kappan, 90(6), 419–421. https://doi.org/10.1177/003172170909000608
Ketterlin-Geller, L.R., Powell, S., Chard, D., & Perry, L. (2019). Teaching math in middle school: Using MTSS to meet all students’ needs. Brookes Publishing.
What question does this answer?
What are my student’s specific strengths, prior knowledge, and areas of skill development not yet mastered that can help inform instruction or intervention?
What are the root causes of my student’s behavior challenges that are amenable to change?
Definition
- Measures of specific knowledge structures to determine the extent to which knowledge, skills, or strategies in a particular domain have been mastered (Sun & Suzuki, 2013).
- A diagnostic assessment is a tool teachers can use to collect information about a student’s strengths and weaknesses in a skill area. These assessments can be formal (e.g., standardized achievement test) or informal (e.g., work samples). IRIS
- Processes used by teachers and students during instruction to obtain feedback that is used to adjust instruction to improve student learning (National Center on Educational Outcomes).
Placement tests are diagnostic assessments that are specific to a particular curriculum or program and are designed to determine where to start in that curriculum or program.
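One informal way to generate the diagnostic data described above is a simple error analysis tally across work samples. The sketch below is purely illustrative; the error codes and samples are hypothetical, not from any published protocol.

```python
# Illustrative only: tally error types noted while reviewing student work
# samples, to suggest which skill deficit to target first in intervention.
# Error codes here are hypothetical examples for subtraction work.

from collections import Counter

def tally_errors(observations):
    """Count each error type across a set of work-sample observations."""
    return Counter(err for errs in observations for err in errs)

samples = [
    ["regrouping"],                 # sample 1: regrouping error
    ["regrouping", "fact_error"],   # sample 2: two error types noted
    [],                             # sample 3: no errors
    ["regrouping"],                 # sample 4: regrouping error again
]

print(tally_errors(samples).most_common(1))  # → [('regrouping', 3)]
```

A consistent pattern like this would point toward targeting regrouping, which would then be confirmed with a focused subskill mastery measure.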
For whom?
Any student who needs additional support and for whom precisely targeted intervention will promote achievement.
Type of Measure
Subskill mastery measure
Diagnostic interviews (math focused)
Error analysis
Observations or ratings of behavior
Frequency
Periodic, and whenever data are needed to better understand student needs.
Online Resources (additional considerations)
NCII Diagnostic Data
PaTTAN MTSS Mathematics Overview
NCII Examples of Diagnostic Assessments
NCII Using FBA for Diagnostic Assessment
SpringMath Skill Sequence by Grade
Books and Articles
Bejar (1984) Educational diagnostic assessment. Journal of Educational Measurement
CORE Learning Assessing Reading: Multiple Measures, 2nd Edition – Professional Learning & Support | Literacy, Math, Curriculum
Burns, M. K., & Parker, D. C. (2014). Curriculum-based assessment for instructional design: Using data to individualize instruction. Guilford Press. – Sample Chapter: Curriculum-Based Assessment for Instructional Design
Sun, Y., & Suzuki, M. (2013). Diagnostic assessment for improving teaching practice. International Journal of Information and Education Technology, 3(6), 607.
Ketterlin-Geller, L. R., & Yovanoff, P. (2009). Diagnostic assessments in mathematics to support instructional decision making. Practical Assessment, Research, and Evaluation, 14(1), 19.
Wall, D., Clapham, C., & Alderson, J. C. (1994). Evaluating a placement test. Language Testing, 11(3), 321-344.
What question does this answer?
Is this student improving?
Is this improvement occurring at an acceptable rate?
Is this intervention effective?
Definition
- The ongoing, frequent collection and use of formal data in order to (1) assess students’ performance, (2) quantify a student’s rate of improvement or responsiveness to instruction or intervention, and (3) evaluate the effectiveness of instruction and intervention using valid and reliable measures. Educators use measures that are appropriate for the student’s grade and/or skill level. (MTSS4Success.org)
- A scientifically based practice that teachers can use to evaluate the effectiveness of their instruction for individual students or their entire class (National Center on Student Progress Monitoring). NC on Student Progress Monitoring
- The method by which teachers or other school personnel determine if students are benefitting appropriately from the typical (e.g., grade level, locally determined, etc.) instructional program, identify students who are not making adequate progress, and help guide the construction of effective intervention programs for students who are not profiting from typical instruction (Fuchs & Stecker, 2003). RTI Action Network on PM
- It is also beneficial to monitor student progress with a subskill mastery measure (e.g., nonsense word fluency for phonics interventions) to supplement the general outcome measure (GOM) and determine whether the skill is being learned but not yet generalized to the GOM. Progress monitoring toward end-of-year goals is done with a GOM (e.g., CBM passage oral reading), while progress monitoring toward short-term specific goals is done with subskill mastery measures (e.g., nonsense words). Both provide data on student progress while answering different questions: GOMs answer “Is the student improving overall?” while subskill mastery measures answer “Is the student improving on this specific skill?”
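The rate-of-improvement judgment implied in these definitions is often made by fitting a trend line to the weekly scores. As an illustrative sketch only (the scores, goal, and timeline below are hypothetical), the observed slope can be compared with the growth rate the goal implies:

```python
# Illustrative only: estimate a student's rate of improvement (ROI) as the
# ordinary least-squares slope of weekly scores, then compare it with the
# growth rate implied by the goal. All numbers are hypothetical.

def ols_slope(weeks, scores):
    """Least-squares slope of scores regressed on week numbers."""
    n = len(weeks)
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    den = sum((x - mean_x) ** 2 for x in weeks)
    return num / den

weeks = [1, 2, 3, 4, 5, 6]
wcpm = [40, 42, 45, 44, 48, 51]   # hypothetical words correct per minute

roi = ols_slope(weeks, wcpm)      # observed growth per week (about 2.06)
goal_roi = (60 - 40) / 10         # gain of 20 words needed over 10 weeks

print(f"Observed ROI: {roi:.2f}/week; needed: {goal_roi:.2f}/week")
```

In practice the slope’s standard error matters as much as the slope itself, so decisions should not hinge on a handful of data points (see Christ, 2006, below).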
For whom?
Any student receiving additional support for whom information is needed to assess the efficacy of intervention and to provide direction for continuing or revising the current plan of support.
Frequency
Regularly (e.g., once per week to monthly)
Type of Measure
General outcome measure
Subskill mastery measures of the specific skill targeted by intervention
Online Resources (additional considerations)
NCII Academic PM Tools Chart
NCII Behavior PM Tools Chart
Research Institute on Progress Monitoring Overview and Research Reports
IRIS Center Progress Monitoring in RTI Module
IRIS Center Information Brief – Progress Monitoring: Mastery Measurement vs. General Outcome Measurement.
AIR Center on Multi-Tiered System of Supports AIR MTSS page on PM
RTI Action Network Progress Monitoring Within a Response-to-Intervention Model
University of Connecticut Direct Behavior Ratings
Books and Articles
Hosp, Hosp, & Howell The ABCs of CBM: Second Edition: A Practical Guide to Curriculum-Based Measurement
Deno & Mirkin (1977) – Data-Based Program Modification Manual
Christ, T. J. (2006). Short-term estimates of growth using curriculum-based measurement of oral reading fluency: Estimating standard error of the slope to construct confidence intervals. School Psychology Review, 35(1), 128-133. https://doi.org/10.1080/02796015.2006.12088006
Christ, T. J., Zopluoglu, C., Monaghen, B. D., & Van Norman, E. R. (2013). Curriculum-based measurement of oral reading: Multi-study evaluation of schedule, duration, and dataset quality on progress monitoring outcomes. Journal of School Psychology, 51(1), 19-57. https://doi.org/10.1016/j.jsp.2012.11.001
Deno (1985) Curriculum-Based Measurement: The Emerging Alternative – Stanley L. Deno, 1985
Deno et al. (2008) Developing a school-wide progress-monitoring system
Foegen, A., Jiban, C., & Deno, S. (2007). Progress monitoring measures in mathematics: A review of the literature. The Journal of Special Education, 41(2), 121-139. https://doi.org/10.1177/00224669070410020101
Harkin, B., Webb, T. L., Chang, B. P. I., Prestwich, A., Conner, M., Kellar, I., Benn, Y., & Sheeran, P. (2016). Does monitoring goal progress promote goal attainment? A meta-analysis of the experimental evidence. Psychological Bulletin, 142(2), 198–229. https://doi.org/10.1037/bul0000025
Miller, F. G., Crovello, N. J., & Chafouleas, S. M. (2017). Progress monitoring the effects of daily report cards across elementary and secondary settings using Direct Behavior Rating: Single Item Scales. Assessment for Effective Intervention, 43(1), 34-47. https://doi.org/10.1177/1534508417691019
Moulton, S., von der Embse, N., Kilgus, S., & Drymond, M. (2019). Building a better behavior progress monitoring tool using maximally efficient items. School Psychology, 34(6), 695–705. https://doi.org/10.1037/spq0000334
Stecker, P. M., Lembke, E. S., & Foegen, A. (2008). Using progress-monitoring data to improve instructional decision making. Preventing School Failure: Alternative Education for Children and Youth, 52(2), 48-58. https://doi.org/10.3200/PSFL.52.2.48-58
What question does this answer?
Did the instruction or behavior program work?
Did students achieve expected outcomes?
Definition
The collection of data after instruction occurs to make judgments about the instruction such as “grading, certification, evaluation of progress, or research on effectiveness” (Bloom et al., 1971, p. 117).
Evaluating student learning at the conclusion of a defined instructional period (e.g., end of a unit, course, semester, or school year) to (a) assess if students learned what they were supposed to learn, (b) evaluate the program and measure progress toward goals, and (c) assign scores or grades. (Glossary of Education Reform – https://www.edglossary.org/summative-assessment/)
For whom?
All students participating in core instruction
Frequency
At the end of instruction
Type of Measure
General outcome measure
Standardized measures of achievement
Online Resources (additional considerations)
Poorvu Center for Teaching and Learning Formative and Summative Assessments
Center for Research on Learning and Teaching Frequently Asked Questions About Student Ratings: Summary of Research Findings
HMH What Is the Purpose of Summative Assessment in Education? | HMH
Center for Advancement of Teaching Excellence Summative Assessments
Books and Articles
Dixson, D. D., & Worrell, F. C. (2016). Formative and summative assessment in the classroom. Theory into Practice, 55(2), 153-159.
Dolin, J., Black, P., Harlen, W., & Tiberghien, A. (2018). Exploring relations between formative and summative assessment. In J. Dolin & R. Evans (Eds.), Transforming assessment: Contributions from science education research (Vol. 4). Springer. https://doi.org/10.1007/978-3-319-63248-3_3
Interested in a custom menu of evidence-aligned resources for your organization?
EAC works closely with state departments of education, districts and schools, educator preparation programs, policy makers, advocacy organizations, and parent and family advocates to create menus that align with their specific goals and initiatives.