- Transitioning assessment systems
- Implementing state and federal programs and policy
These six sessions will explore experiences and challenges associated with developing and scoring a grade 4 computer-based writing pilot for the National Assessment of Educational Progress (NAEP). Initial concerns, including 4th-grade students' ability to compose on a computer and states' varying levels of preparedness for online assessment, made the decision to administer the 2012 computer-based pilot a complex one. Presenters will discuss task and rubric development; cognitive lab trials; students' keyboarding action data; scoring results and student background data; and pilot score distributions within and across genres. They will also examine gaps among subgroups, discuss student responses, and consider future trends and assessments.
1) The Context and Impetus for a New National Writing Assessment: This session will describe the new NAEP Writing Framework, the innovative context it created for the assessment, and some of the challenges, opportunities, and tensions encountered in the transition from paper-and-pencil to computer-based assessment. Based on background data and cognitive laboratory studies, NAEP ultimately administered tasks to students under two conditions: three tasks timed at 20 minutes each and two tasks timed at 30 minutes each, to determine which condition functioned best and captured sufficient data to guide future operational assessments. The perspectives and priorities of NCES, the participating states, and the writing community will also be discussed.
2) Design and Development: Challenges and Innovations: This session will begin with a discussion of lessons learned from developing multimedia tasks at grades 8 and 12 and of how these lessons were applied to the development of computer-based tasks for grade 4. Presenters will describe how developers determined which stimuli would be suitable and engaging for grade 4 students, and will present examples of stimuli, including PowerPoint-style "pre-writing" presentations and animated story starters. Additionally, presenters will discuss how scoring criteria were revised to better match expectations for younger students working on computer. Finally, to provide a richer context for the decision to administer a NAEP grade 4 computer-based pilot assessment, observations from cognitive laboratories about grade 4 students' keyboarding skills and responses to tasks will be shared.
3) The Technology of Transition: This session will share best practices related to NAEP’s implementation of technology-delivered assessments and aims to foster a continuing dialogue that adds value to the field of technology-delivered assessments. NAEP conducted a number of usability studies and cognitive labs that informed the design of the assessment system. NAEP’s existing administration policies and procedures were modified to accommodate the migration from paper-and-pencil to computer-based assessments and to leverage the universal design elements of the NAEP assessment system. Administering assessments on computer afforded the collection of metrics from observable data captured by the assessment system; these data provide a wealth of information that can be used in reporting assessment results. The NAEP assessment system has been enhanced by leveraging user feedback to validate design changes and usability findings. This process has been repeated with each development cycle and has fed continuous improvement of the assessment system and items. The goal of these efforts is to reduce the cognitive load on users, improve the overall quality of the products, and inform the body of knowledge on assessment system design.
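The observable "action" data described above can be pictured as a timestamped event log accumulated while a student composes. The following is a minimal, hypothetical sketch only; the event names, schema, and metrics are assumptions for illustration and are not NAEP's actual system:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActionEvent:
    """One observable action captured during composition (hypothetical schema)."""
    timestamp: float   # seconds since task start
    action: str        # e.g. "keypress", "backspace", "spellcheck"
    detail: str = ""   # e.g. the character typed

@dataclass
class ActionLog:
    """Accumulates events and summarizes simple process metrics."""
    events: List[ActionEvent] = field(default_factory=list)

    def record(self, timestamp: float, action: str, detail: str = "") -> None:
        self.events.append(ActionEvent(timestamp, action, detail))

    def count(self, action: str) -> int:
        return sum(1 for e in self.events if e.action == action)

    def summary(self) -> dict:
        return {
            "total_events": len(self.events),
            "backspaces": self.count("backspace"),
            "spellcheck_uses": self.count("spellcheck"),
        }

# Illustration: a student types "teh", backspaces twice, retypes "he",
# then runs spell check.
log = ActionLog()
for t, a, d in [(0.5, "keypress", "t"), (0.7, "keypress", "e"),
                (0.9, "keypress", "h"), (1.4, "backspace", ""),
                (1.6, "backspace", ""), (2.0, "keypress", "h"),
                (2.2, "keypress", "e"), (5.0, "spellcheck", "")]:
    log.record(t, a, d)
print(log.summary())  # {'total_events': 8, 'backspaces': 2, 'spellcheck_uses': 1}
```

Aggregating such logs per response is what makes process metrics (revision behavior, tool use) available alongside the scored text itself.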
4) Scaffolding, Organization and Preparation for Scoring: This session will describe the planning necessary to ensure successful scoring of responses in the two timing conditions, including the organization and composition of scoring teams, modifications to scorer training practices that improved scorer agreement, and viewing of multimedia stimuli with scorers prior to scoring. Additionally, the presenter will describe how lessons learned during scoring of grades 8 and 12 computer-based responses were applied to grade 4 scoring.
5) Examining the Results of Scoring: Qualitative and quantitative results from the two timing conditions will be shared, including student responses, score distributions within each of the three writing purposes, grade 4 students' responses to background questions about computer use, performance data for various student subgroups, and student "action" data (what students did as they composed, for example, using spell check and backspacing).
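A comparison of score distributions across timing conditions, as described above, can be sketched as follows. The scale, scores, and condition labels here are invented for illustration only and are not actual NAEP pilot results:

```python
from collections import Counter

def score_distribution(scores, scale=range(1, 7)):
    """Proportion of responses at each score point (a 1-6 holistic scale is assumed)."""
    counts = Counter(scores)
    n = len(scores)
    return {point: counts.get(point, 0) / n for point in scale}

# Invented example scores; real analyses would use the pilot's scored responses.
condition_20min = [2, 3, 3, 4, 3, 2, 4, 3]   # three 20-minute tasks
condition_30min = [3, 3, 4, 4, 3, 4, 5, 3]   # two 30-minute tasks

dist_20 = score_distribution(condition_20min)
dist_30 = score_distribution(condition_30min)
mean_20 = sum(condition_20min) / len(condition_20min)
mean_30 = sum(condition_30min) / len(condition_30min)
print(dist_20)                # proportions at each score point, 20-minute condition
print(mean_20, mean_30)       # 3.0 3.625
```

The same per-score-point breakdown can be computed within each writing purpose or student subgroup to examine the gaps the presenters describe.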
6) NAEP Grade 4 Computer-Based Writing Assessment: Findings, Implications and Future Directions: Ultimately, the NAEP grade 4 computer-based pilot was deemed a success. This session will discuss the nature of that success and will share key insights that could be instructive for other large-scale assessment developers working to measure the writing skills of younger students on computer. The presenter will also discuss performance differences observed across the three purposes for writing that were measured and differences in performance that resulted from the two timing conditions, and will share observations about the current state of grade 4 students' keyboarding skills and how these posed challenges for scoring.