Curriculum-Based Measurement

Curriculum-based measurement, or CBM, is also referred to as a general outcome measure (GOM) of a student's performance in either basic skills or content knowledge. CBM began in the mid-1970s with research headed by Stan Deno at the University of Minnesota. Over the course of 10 years, this work led to the establishment of measurement systems in reading, writing, and spelling that (a) were easy to construct, (b) were brief to administer and score, (c) had technical adequacy (reliability and various types of validity evidence for use in making educational decisions), and (d) provided alternate forms so that time-series data could be collected on student progress. This focus on the three language arts areas was eventually expanded to include mathematics, though the technical research in mathematics continues to lag behind that published in the language arts areas. An even later development was the application of CBM to middle and secondary content areas: Espin and colleagues at the University of Minnesota developed a line of research addressing vocabulary and comprehension (using the maze task), and Tindal and colleagues at the University of Oregon developed a line of research on concept-based teaching and learning.

Early research on CBM quickly moved from monitoring student progress to its use in screening, normative decision-making, and finally benchmarking. Indeed, with the implementation of the No Child Left Behind Act in 2001, and its focus on large-scale testing and accountability, CBM has become increasingly important as a form of standardized measurement that is highly related to, and relevant for understanding, students' progress toward and achievement of state standards.

Probably the key feature of CBM is its accessibility for classroom application and implementation. It was designed to provide an experimental analysis of the effects of interventions, which include both instruction and curriculum. This design goal exposed one of the most important conundrums in CBM research: to evaluate the effects of a curriculum, a measurement system needs to provide an independent 'audit' and not be biased toward only that which is taught. The early struggles in this arena framed the difference as mastery monitoring versus experimental analysis. In mastery monitoring, measurement is embedded in the curriculum itself, which forces the metric to be the number (and rate) of curricular units traversed in learning. Experimental analysis, by contrast, relies on metrics such as oral reading fluency (words read correctly per minute) and correct word or letter sequences per minute (in writing or spelling), both of which can serve as GOMs. In mathematics, the metric is often digits correct per minute. Note that CBM metrics are typically rate-based in order to capture 'automaticity' in learning basic skills.
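As an illustrative sketch of the rate-based scoring described above (the function and probe values here are hypothetical examples, not part of any published CBM system), converting a raw correct count from a timed probe into a per-minute rate is a simple proportion:

```python
def rate_per_minute(correct_count: int, seconds: int) -> float:
    """Convert a raw correct count from a timed probe into a per-minute rate."""
    return correct_count * 60.0 / seconds

# Hypothetical example: 52 words read correctly on a 1-minute oral reading probe
wcpm = rate_per_minute(52, 60)    # 52.0 words read correctly per minute

# Hypothetical example: 18 correct digits on a 2-minute math computation probe
dcpm = rate_per_minute(18, 120)   # 9.0 digits correct per minute
```

Because the metric is a rate rather than a raw score, probes of different durations yield comparable numbers, which is what allows time-series data to be charted across alternate forms.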

The most recent advancements of CBM have occurred in three areas. First, CBM has been applied to students with low-incidence disabilities. This work is best represented by Zigmond in the Pennsylvania Alternate Assessment and by Tindal in the Oregon and Alaska Alternate Assessments. The second advancement is the use of generalizability theory with CBM, best represented by the work of John Hintze, in which the focus is partitioning the error term into components such as time, grade, setting, and task. Finally, Yovanoff, Tindal, and colleagues at the University of Oregon have applied Item Response Theory (IRT) to the development of statistically calibrated equivalent forms in their progress monitoring system.
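As a hedged illustration of the IRT approach to calibrated equivalent forms (this is the generic one-parameter Rasch model with invented item difficulties, not the specific model or parameters used by Yovanoff and Tindal), the probability of a correct response depends only on the difference between student ability and item difficulty, so forms whose calibrated items yield comparable expected scores can be treated as statistically equivalent:

```python
import math

def rasch_p_correct(theta: float, b: float) -> float:
    """Rasch (1PL) probability that a student with ability theta (in logits)
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical calibrated item difficulties (logits) for two alternate forms
form_a = [-1.0, 0.0, 1.0]
form_b = [-0.5, 0.0, 0.5]

# Expected raw score for a student of average ability (theta = 0) on each form
expected_a = sum(rasch_p_correct(0.0, b) for b in form_a)
expected_b = sum(rasch_p_correct(0.0, b) for b in form_b)
```

Calibrating all items onto a common logit scale is what lets a progress monitoring system generate many alternate forms whose scores remain comparable over time.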