- Evaluating and supporting educators
The session will focus on the critical issue of how best to evaluate the validity of reformed educator evaluation systems. State leaders have been racing to develop these new teacher evaluation systems, but they are working in an empirical vacuum, with an extremely thin research base to inform their designs. Many practitioners are rightfully asking, “How will we know if the system is working and not leading to negative effects?” This session, drawing on work from two states, will present theories of action and interpretative arguments that have been developed to structure the validity evaluations.
A theory of action outlines the intended components of the system, while clearly specifying the connections among these components. Most importantly, a theory of action must specify the hypothesized mechanisms or processes for bringing about intended goals. In the case of educator evaluation systems, the theory of action should describe how the well-articulated goals will be achieved as a result of the proposed evaluation system. There has been some discussion about the interplay of theories of action and interpretative arguments (e.g., Bennett, 2010; Marion & Perie, 2010), and while this session will touch on these theoretical issues, the major focus will be to illustrate how states and districts can use these heuristics to organize their investigations to evaluate the validity of educator evaluation programs.
The first presentation will provide a brief introduction to validity evaluation and will discuss the specific context of educator evaluation. This presentation will serve to frame the remaining discussions. Specifically, it will summarize the current literature on the relationship between theories of action and interpretative arguments (Kane, 2006). This literature has largely focused on assessment programs, and while there is nothing simple about evaluating test validity, educator evaluations are significantly more complex than assessment systems. Therefore, this presentation will also address the unique features of educator evaluation and how this context impacts validity investigations.
The second and third presentations will discuss the theories of action and overarching validity evaluation plans for two different state systems. These frameworks, while constructed from a similar orientation, are operationalized differently because of differences in state values and in the specific educator evaluation systems. Each of the presenters will walk through the development of the validity evaluation plan, but will also discuss how each state prioritized the research studies and data collection strategies to begin the work. All comprehensive validity evaluation plans enumerate more studies than could be completed in state contexts within any reasonable time frame and budget. Therefore, one of the most challenging aspects of designing validity evaluations is figuring out how to prioritize the studies in a way that serves state needs while building toward a comprehensive evaluation of the state program.
The fourth presentation will focus on the specific validity requirements of Student Learning Objectives (SLOs). SLOs are content- and grade/course-specific measurable learning objectives that can be used to document student learning over a defined period of time. The use of SLOs for documenting educators’ contributions to student learning is common to both states’ evaluation systems. In addition to describing an approach for evaluating SLOs, this presentation will also provide an example of how the general frameworks described in the second and third presentations must be expanded in terms of both scope and detail to serve as a foundation for evaluating SLOs.
Finally, two discussants, one from each state, will help ground the session in the world of state policy and political realities. They will engage the audience so that attendees come away with clearly articulated approaches to the challenges associated with evaluating these complex personnel evaluation systems.