Development of Next-Generation Assessments: Technical Challenges and Solutions in the Transition to Common Core State Standards (CCSS)

Thursday, June 20, 2013: 8:30 AM-10:00 AM
Maryland 3-4 (Gaylord National Resort and Convention Center)
Presentations
  • Technical Challenge in Transition_Scaling (2).pdf (516.7 kB)
  • CCSSO_2013_Linking2.pdf (3.0 MB)
  • 2013 CCSSO_Presentation_y&d.pdf (386.5 kB)
  • Bumps in the road to the CCSS - Viger (NCSA, 2013).pdf (904.2 kB)
  • Content Strands:
    1. Transitioning assessment systems
    2. Improving data analysis, use, and reporting
    ABSTRACT:
    This session presents important psychometric challenges and solutions in state assessment practice during the transition to the CCSS and next-generation assessments. To align state assessments with the CCSS, many elements of state assessment programs may face technical issues during the transition period before full implementation, such as changes in curriculum and classroom instruction, new or revised test blueprints, innovative item types and item development, the application of technology in assessment, and the implementation of online testing. These changes have significant implications for appropriately measuring and reporting students’ performance and progress, and for providing sufficient evidence of the reliability and validity of the transitional assessments. Researchers from multiple states and testing companies explore these issues and share their perspectives and practical approaches to the following technical issues.

    The first three presentations focus on the challenges and practical approaches related to changes in test blueprints and the comparability of scores between the transitional and previous assessments. During the transition, states have revised their test blueprints or developed new ones to specify the content domains and their corresponding proportions. With these blueprint changes, two major potential threats to the meaningful interpretation of test scores arise during the transition. First, construct-irrelevant variance may exist when the “test contains excess reliable variance that is irrelevant to the interpreted construct” (Messick, 1989, p. 34). Second, construct underrepresentation may occur when test content does not reflect relevant knowledge or under-samples the achievement domain. From a psychometric perspective, we may treat the transitional assessment as a new assessment that is independent of previous assessments; under that view, trends in student performance and progress would not be reported across the transitional and previous assessments. From a policy perspective, however, trend comparisons may be important for understanding how students are performing during CCSS implementation. Practical psychometric approaches and options are therefore needed. The first presentation comprehensively delineates technical issues during the assessment transition: it presents the elements affected by blueprint changes, corresponding linking methods and approaches, and the advantages and limitations of each approach, and it recommends options suited to various situations. The second presentation focuses on the practical linking methods and options that one state will use in equating its 2013 transitional assessments, and it discusses the analysis methods that will be used to examine the linkages between the transitional and previous assessments so that trends in student and school performance and progress are reported appropriately. The third presentation focuses on content and construct shift across grades; it explores the impact of varying degrees of content shift on the test construct and examines how much shift can be tolerated while maintaining construct validity.
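    The abstract does not name the specific linking methods the second presentation will use, so the sketch below is only a minimal illustration of one common option: a linear mean-sigma transformation that places scores from the transitional form onto the scale of the previous assessment. The function name, the assumption of comparable groups, and the example data are all hypothetical.

    ```python
    import numpy as np

    def mean_sigma_linking(new_scores, old_scores):
        """Place transitional-test scores on the prior assessment's scale
        with a linear mean-sigma transformation.

        Illustrative only: assumes the two score distributions come from
        comparable groups (e.g., a common population or equivalent-groups
        design); a common-item design would require different estimates.
        """
        new_scores = np.asarray(new_scores, dtype=float)
        old_scores = np.asarray(old_scores, dtype=float)

        # Slope and intercept that match the means and standard deviations
        # of the two score distributions.
        slope = old_scores.std(ddof=1) / new_scores.std(ddof=1)
        intercept = old_scores.mean() - slope * new_scores.mean()

        return slope * new_scores + intercept

    # Example with hypothetical data: link transitional raw scores to the prior scale.
    rng = np.random.default_rng(0)
    old = rng.normal(500, 50, size=2000)   # hypothetical prior-assessment scale scores
    new = rng.normal(48, 9, size=2000)     # hypothetical transitional raw scores
    linked = mean_sigma_linking(new, old)
    print(linked.mean(), linked.std(ddof=1))  # approximately matches the old mean and SD
    ```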

    In addition, two presenters will address psychometric challenges and approaches in applying innovative item types in transitional or interim assessments, such as machine-scored constructed-response items, evidence-based design items, and text-based writing. These two presentations will discuss and answer the following questions: Can we use conventional methods to analyze these new item types and provide sufficient evidence of reliability and validity? What approaches and methods may we need to supplement traditional item analysis?
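    For context on the “conventional methods” these questions refer to, the sketch below shows classical item analysis statistics, item difficulty as proportion correct and corrected item-total point-biserial discrimination, for dichotomously scored items with hypothetical response data. It is an illustration only, not the presenters’ actual procedure; polytomously scored, machine-scored constructed-response items would need the supplementary approaches the session discusses.

    ```python
    import numpy as np

    def classical_item_analysis(responses):
        """Conventional item analysis for dichotomously scored (0/1) items.

        responses: 2-D array, rows = examinees, columns = items.
        Returns item difficulty (p-values) and corrected item-total
        point-biserial discrimination for each item.
        """
        responses = np.asarray(responses, dtype=float)
        n_items = responses.shape[1]
        total = responses.sum(axis=1)

        difficulty = responses.mean(axis=0)              # proportion correct per item
        discrimination = np.empty(n_items)
        for j in range(n_items):
            rest = total - responses[:, j]               # total score excluding the item itself
            discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]

        return difficulty, discrimination

    # Example with hypothetical response data (500 examinees, 10 items).
    rng = np.random.default_rng(1)
    data = (rng.random((500, 10)) < rng.uniform(0.3, 0.9, size=10)).astype(int)
    p, rpb = classical_item_analysis(data)
    print(np.round(p, 2))
    print(np.round(rpb, 2))
    ```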

    All presentations are based on state assessment practice, and the presenters will offer perspectives grounded in different states’ transition experiences. Empirical data analyses, the methods used, results, and lessons learned will be shared in the session.

    Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed.). New York: American Council on Education.
