Although there is not one “best” evaluation design (see the NSF program evaluation guide), high-quality program evaluation tends to include the following four activities (adapted from the CDC Evaluation Framework): program description, implementation measurement, outcome measurement, and intentional work to enable the use of findings. Based on our CSONIC Needs Assessment Results, on average, both experienced and inexperienced computer science education evaluators need additional support in the following four domains:
Measuring Student Success
Measuring student success in computer science programs is complex, as it is in any educational initiative. Students are products of their environment, their larger ecological context, and the individual characteristics they bring to that context. Still, recent research on academic achievement, and on computer science specifically, suggests a few important considerations when developing your outcome measurement plan:
- Make sure that short-term, medium-term, and long-term outcomes are identified and that they relate specifically to the strategies or activities offered. Logic modeling early in evaluation planning is especially important here.
- Students’ intentions to persist (to the next high school course, to post-secondary education, and to a computing career) are good predictors of their longevity in computer science.
- Non-cognitive factors, or skills beyond intelligence or academic content knowledge, have been shown to positively impact student performance in school. These non-cognitive factors include a range of specific skills grouped into these five categories: academic behaviors, perseverance, academic mindsets, social skills, and learning strategies. It may be important to consider these non-cognitive factors given the design of computer science programs.
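To make the five non-cognitive categories above concrete, here is a minimal sketch of how survey responses might be rolled up into category-level scores. The item-to-category assignments and the 1–5 Likert scale are illustrative assumptions, not a validated instrument:

```python
# Hypothetical scoring of a Likert-style (1-5) survey of non-cognitive factors.
# The five categories mirror those named above; which items belong to which
# category is invented for illustration.

CATEGORIES = {
    "academic_behaviors": ["q1", "q2"],
    "perseverance": ["q3", "q4"],
    "academic_mindsets": ["q5"],
    "social_skills": ["q6"],
    "learning_strategies": ["q7"],
}

def category_scores(responses: dict) -> dict:
    """Average the 1-5 item responses within each category."""
    scores = {}
    for category, items in CATEGORIES.items():
        values = [responses[i] for i in items if i in responses]
        scores[category] = sum(values) / len(values) if values else None
    return scores

student = {"q1": 4, "q2": 5, "q3": 3, "q4": 4, "q5": 5, "q6": 4, "q7": 2}
scores = category_scores(student)
```

Averaging within categories (rather than reporting single items) is one common way to reduce measurement noise, though a real instrument would also need reliability and validity evidence.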
Measuring Program Implementation
Contemporary evaluation practice demands that educational initiatives measure program implementation. This information can reveal whether the program was delivered as intended and whether there was sufficient program quality and dosage to produce the intended effects; it can distinguish implementation failure from theory failure when a program is not working; and ultimately it gives us information to drive continuous program improvement.
According to Jeanne Century and colleagues’ conceptual framework for fidelity of implementation, there are four implementation components that should be measured: quality of curricular materials, educator content knowledge, quality of service delivery, and student engagement. For example, one way to think about a program’s “quality of service delivery” is to measure whether teachers are offering “high-impact” teaching strategies as they deliver the program or intervention to students.
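A minimal sketch of how the four implementation components above could be summarized for a site is shown below. The 0–1 scoring scale, equal weighting of components, and the 0.7 adequacy threshold are all illustrative assumptions:

```python
# Sketch: summarizing fidelity of implementation across the four components
# named above (after Century and colleagues). Scores, weights, and the
# threshold are invented for illustration.

COMPONENTS = [
    "curricular_materials_quality",
    "educator_content_knowledge",
    "service_delivery_quality",
    "student_engagement",
]

def fidelity_summary(observations: dict, threshold: float = 0.7):
    """Return the unweighted mean fidelity score and any components
    falling below the adequacy threshold."""
    overall = sum(observations[c] for c in COMPONENTS) / len(COMPONENTS)
    low = [c for c in COMPONENTS if observations[c] < threshold]
    return overall, low

site = {
    "curricular_materials_quality": 0.90,
    "educator_content_knowledge": 0.80,
    "service_delivery_quality": 0.60,  # flags a possible implementation gap
    "student_engagement": 0.85,
}
overall, low = fidelity_summary(site)
```

Flagging low-scoring components this way is one simple route to the implementation-failure versus theory-failure distinction: a program with weak service delivery has not yet given its theory a fair test.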
Regardless of the implementation component, however, shared measurement and conceptual understanding of implementation will be increasingly important as evaluation science progresses in computer science education.
Ensuring Evaluation Use
Evaluation findings should be used locally to improve programs so that participants’ outcomes can be maximized. Evaluation findings should also be used globally, by aggregating evaluation reports to determine the overall effectiveness of policy-level funding initiatives. Using common metrics will be important if evaluation projects are to be combined to inform the effectiveness of computer science initiatives.
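As a sketch of why common metrics matter for aggregation, the snippet below pools a shared metric across sites, weighted by sample size. The site values are invented, and a real synthesis would also account for variance across sites:

```python
# Sketch: pooling a common metric (e.g., a mean "intent to persist" score)
# across program sites, weighted by each site's sample size. All numbers
# are invented for illustration.

def pooled_mean(sites):
    """Sample-size-weighted mean of a shared metric across sites."""
    total_n = sum(n for _, n in sites)
    return sum(mean * n for mean, n in sites) / total_n

sites = [(3.8, 120), (4.1, 80), (3.5, 50)]  # (site mean, n students)
overall = pooled_mean(sites)
```

This kind of pooling is only meaningful when every site used the same instrument and scale, which is exactly the argument for common metrics.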
Describing the Program
Logic models and theories of change are a critical part of the evaluation process. Logic models illustrate how program activities should lead to expected outcomes, specify short-term and long-term outcomes, and help situate the findings in the broader research literature. Learning how to create logic models and use them to benefit your evaluative inquiry is one way to improve evaluation and research in computer science.
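One way to make a logic model actionable in an evaluation is to capture it as a simple data structure so that each measured outcome can be traced back to the activity meant to produce it. The activities and outcomes below are invented examples, not drawn from any particular program:

```python
# Sketch: a logic model as a mapping from activities to short-, medium-,
# and long-term outcomes. All entries are hypothetical.

LOGIC_MODEL = {
    "after_school_coding_club": {
        "short_term": ["increased interest in CS"],
        "medium_term": ["enrollment in next CS course"],
        "long_term": ["persistence into a computing career pathway"],
    },
    "teacher_pd_workshops": {
        "short_term": ["improved educator content knowledge"],
        "medium_term": ["higher quality of service delivery"],
        "long_term": ["improved student outcomes"],
    },
}

def activities_for_outcome(outcome: str):
    """Trace an outcome back to the activities expected to produce it."""
    return [activity for activity, terms in LOGIC_MODEL.items()
            if any(outcome in outcomes for outcomes in terms.values())]
```

Keeping the model in this form makes it easy to check, before data collection begins, that every outcome in the measurement plan is linked to at least one activity.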