Using the CSEM to Compare Scale Scores and Achievement Levels

On any test, an individual's scores would be expected to vary if it were somehow possible to administer the same test over and over again. For example, students may perform differently because of how they feel on the day of the test, or they may be especially lucky or unlucky when guessing on items they do not know. This random variation in individual scores is quantified with a statistic of measurement precision called the conditional standard error of measurement (CSEM). CSEMs are available in CERS and in the student data files.

Given a student's observed score, if the student were to take the test over and over again, the student would be expected to score within plus or minus one CSEM of the observed score about 68 percent of the time. This idea is expressed as follows:

“A student’s score is best interpreted when recognizing that the student’s knowledge and skills fall within a score range and not just a precise number. For example, 2300 (+/-10) indicates a score range between 2290 and 2310.”
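The quoted example can be sketched in a few lines of code. This is a minimal illustration, not part of CERS or any reporting system; the function name and the 2300 (+/-10) values are taken from the example above, and the check of the 68 percent figure assumes a normal measurement-error model, under which one standard error on either side of the observed score covers roughly 68 percent of the distribution.

```python
from statistics import NormalDist

def score_range(scale_score: int, csem: int) -> tuple[int, int]:
    """Return the plus-or-minus one-CSEM band around an observed scale score."""
    return (scale_score - csem, scale_score + csem)

# The example quoted above: 2300 (+/-10) indicates a range of 2290 to 2310.
low, high = score_range(2300, 10)
print(f"{low}-{high}")  # 2290-2310

# The "about 68 percent" figure is the probability mass within one
# standard error under a normal measurement-error model.
coverage = NormalDist().cdf(1) - NormalDist().cdf(-1)
print(round(coverage, 4))  # 0.6827
```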

A CSEM is calculated for each reported content-area assessment a student takes. In the current reports, the average CSEM at each scale score point is provided.