Term | Definition |
---|---|
Achievement Level | A category of performance based on students’ scaled scores on the ICA and summative assessment. The four achievement levels indicate progress toward meeting the expectation of content mastery and college and career readiness: Level 4: Standard Exceeded; Level 3: Standard Met; Level 2: Standard Nearly Met; Level 1: Standard Not Met. |
Adjusted Form Summative Blueprint | A version of the Smarter Balanced summative assessments offered since the 2020–21 school year. The test assesses the same mathematics and English language arts/literacy content as in previous years, but with fewer questions on the computer adaptive segment of the test. There are no changes to the performance tasks. Results help provide school and state leaders with key information to advance learning and support equitable outcomes for students. Because the adjusted form includes fewer items, Smarter Balanced advises that claim data for individual students not be reported. |
Average Scale Score | Information about the average performance of students in a defined group for the tested grade and subject. |
Claim | A summary statement about the knowledge and skills students are expected to demonstrate on the assessment related to a particular aspect of the Common Core State Standards (CCSS). The Smarter Balanced Summative Assessment for ELA includes claims in reading; listening and speaking; writing; and research/inquiry. For mathematics, the claims are concepts and procedures; problem solving and modeling & data analysis; and communicating reasoning. |
Common Core State Standards (CCSS) | A set of standards created by a national council of state education leaders and adopted by most states in 2010. The standards describe what students should know and be able to do in mathematics and ELA in each grade K–12. |
Confidence Interval | A calculated range around the student’s scale score on the IAB, equal to 1.5 times the standard error of measurement. |
Correctness | Value arrived at by dividing the student’s score on an item by the maximum score possible for that item. |
Depth of Knowledge (DOK) | A four-level framework developed to describe the conceptual complexity of curricular activities and assessment tasks (not to be confused with difficulty). |
Difficulty (Item Difficulty) | The rating of an item as easy, moderate, or difficult, based on the proportion of students in a field-test reference group who answered the item correctly. See page 18 for the definitions of the item difficulty categories. |
Domain | Larger groups of related standards in the mathematics CCSS (e.g., Numbers and Operations—Fractions). |
Error Band | A student’s test score can vary if the test is taken several times. The error band is the level of uncertainty around a student score. The error band represents a score range that the student’s score would likely fall within if the student took the test multiple times before any additional instruction or learning occurs. |
Exemplar | An example of a response that would earn full credit. |
IAB | An Interim Assessment Block measures a portion of the material taught at each grade level, such as fractions. A Focused IAB (FIAB) measures a more limited portion of material taught at grade level, such as add and subtract with equivalent fractions. |
ICA | Interim Comprehensive Assessments measure the same content as the summative assessments. |
Key and Distractor Analysis | An item analysis that displays the percentage of students who selected the correct response option(s) (Key) and incorrect response options (Distractors). |
Performance Standard | A reference point for understanding how students are performing in relation to a standard. Meeting the standard means meeting the expectation of the content area. Performance standards are categorized by scale score; the scale score cuts associated with each performance level are publicly available in the Technical Manual. |
Reference Population | The group of students whose responses were used to classify an item’s difficulty. The reference population for an item consists of all the students who took the test the year the item was field-tested: either the Spring 2014 Field Test or a subsequent summative assessment that included embedded field-test items. These students’ responses to test items were used to classify each item into one of three difficulty categories: easy, moderate, or difficult. |
Reporting Category | A category of performance based on students’ scaled scores on the IABs. The three reporting categories are: Above Standard, Near Standard, and Below Standard. |
Rubric | A scoring guide for evaluating the quality of student responses, which describes the performance expectations for each test item. |
Scale Score/Student Score | The score, ranging from 2000 to 3000, based on student results on a Smarter Balanced assessment. Smarter Balanced uses a single vertical scale across all tested grades. |
Standard Error of Measurement | The statistical uncertainty around a student’s true scale score, acknowledging the difference between the estimated scale score and the true scale score. A student’s score may be affected by several factors, such as the sample of questions included on the test, the student’s mental or emotional state during testing, or the conditions under which the student took the test. |
Standard Error of the Mean | A statistical measure of how accurately a sample mean represents the population mean, calculated using the standard deviation. A sample mean deviates from the actual mean of the population; this deviation is the standard error of the mean. |
Status | An indication of how the IAB was administered, including whether the test was a standardized or non-standardized administration, and whether the test was completed or partially complete. |
Target | Describes the expectations of what will be assessed by the items and tasks within each claim. Also known as an assessment target. |
Writing Trait Scores | Measures of the following writing proficiencies: Purpose/Organization, Evidence/Elaboration, and Conventions. |
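
A few of the terms above (Correctness, Confidence Interval, Standard Error of the Mean) reduce to simple arithmetic. The following sketch illustrates those calculations in Python; all scores and function names are illustrative, not part of any Smarter Balanced reporting system.

```python
import math

def correctness(student_score: float, max_score: float) -> float:
    """Correctness: the student's score divided by the maximum score possible."""
    return student_score / max_score

def confidence_interval(scale_score: float, sem: float) -> tuple[float, float]:
    """Confidence interval around an IAB scale score:
    1.5 times the standard error of measurement (SEM) on either side."""
    return (scale_score - 1.5 * sem, scale_score + 1.5 * sem)

def standard_error_of_mean(scores: list[float]) -> float:
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    n = len(scores)
    mean = sum(scores) / n
    # Sample standard deviation (n - 1 in the denominator).
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
    return sd / math.sqrt(n)

# Illustrative values only: a 3-out-of-4 item, and a scale score of 2500
# with an SEM of 20 points.
print(correctness(3, 4))              # 0.75
print(confidence_interval(2500, 20))  # (2470.0, 2530.0)
```

Note that the error band reported for a student score is the same kind of range: a span around the estimated scale score reflecting the standard error of measurement.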