The number of computer science classrooms is growing (evidenced by what the College Board calls the most successful launch of any course in AP history, with almost 104,000 students taking the exam; Virginia’s requirement that all districts teach CS; and the growing number of states allowing CS to “count” toward graduation). This growth brings one important change for STEM evaluators: our unit of analysis is shifting from the student to the classroom as we begin to work on projects affecting dozens of classrooms. Much of our prior focus has been on measuring student success in programs affecting only a handful of classes; now programs reach enough classrooms that adding classroom implementation to our theories of change has become imperative.
Fortunately, Century, Rudnick, & Freeman (2010) offer a framework for evaluating classroom implementation that has been widely used in science and math evaluations (e.g., Hecht, Chval et al., 2012). We present their framework below while acknowledging that CS classrooms are different enough from math and science classrooms that CS evaluators and researchers will need to determine how best to measure each component of the framework. Guzdial describes some of these differences (https://cacm.acm.org/blogs/blog-cacm/224105-learning-computer-science-is-different-than-learning-other-stem-disciplines/fulltext).
This is where you come in: We ask that you respond by explaining your own approach to measuring each element in the framework. Our goal is to build a comprehensive framework that CS education evaluators and researchers may use to build valid, replicable, useful studies of classroom implementation efforts.
The framework focuses on measuring curriculum implementation and consists of two broad areas: structural components and instructional components. The structural components are defined as curriculum developers’ “intentions about the design and organization of the intervention.” The two structural components are:
- Structural procedural. We consider this to be what teachers plan and do, and this can include time spent on instruction, the order of lessons, materials used in each lesson, lesson preparation, readings, assessments, and instructional delivery formats required by the intervention.
- Structural educative. We consider this to be what teachers know, and this can include both teacher background knowledge and specific knowledge needed to enact the intervention.
The instructional components are defined as “developers’ intentions about the participants.” This includes teachers’ and students’ behaviors and interactions when enacting the curriculum. The two instructional components are:
- Instructional pedagogical. We consider this to be how teachers interact, and this can include pedagogical actions such as facilitating student engagement with the content; facilitating student engagement with each other; and facilitating student autonomy, risk taking, and interest.
- Student engagement. We consider this to be how and in what ways students engage, and this can include students engaging with each other, with the content, and with instructional materials and activities.
We present the definitions of these four components in the hope of more clearly defining, and possibly expanding, the approaches and measures that evaluators and researchers use for evaluating CS classroom interventions. Knowing these, we might help each other build our practices and capacities in measuring classroom implementation, and perhaps even build a collective taxonomy of approaches/instruments that a wide range of evaluators and educators can use when applying this model. Our hope is that this will make it easier for our CS evaluation community to incorporate classroom implementation into our program theories of change.