Evaluation 2015: Exemplary Evaluations in a Multicultural World


Four Approaches to Measuring STEM Education Innovations: Moving Toward Standardization and Large Data Sets

Session Number: 2275
Track: STEM Education and Training
Session Type: Multipaper
Tags: comprehensive community-wide initiatives, Data Sharing, Evaluative Thinking, science and technology
Session Chair: Jason Ravitz [Google]
Presenter 1: Wendy DuBow [National Center for Women & Information Technology]
Presenter 3: Elizabeth B. Bizot [Computing Research Association]
Presenter 4: Tom McKlin [The Findings Group]
Presentation 3 Additional Author: Jane G. Stout [Director - Computing Research Association]
Presentation 4 Additional Author: Shelly Engelman [Lead Evaluator - SageFox Consulting Group]
Time: Nov 14, 2015 (08:00 AM - 09:30 AM)
Room: Grand Suite 5

Abstract 1 Title: A Social Cognitive Career Theory Survey for Computing

Presentation Abstract 1: In 2012, my colleagues and I developed a Social Cognitive Career Theory-based attitudes survey for use with secondary students. The instrument grew out of social science research on gender and computing, which seeks to understand the barriers to female participation in computing and to study ways to attract and retain more girls and women in the field. It measures five constructs related to computing: interest, self-efficacy, outcome expectations, perceived social supports and barriers, and intent to persist in the field. The instrument was field-tested with a mixed-gender, multi-state adolescent sample of 350 students and has since been used with high school and college-aged populations, both as a point-in-time survey and as a pre-/post-survey. It has been psychometrically tested and emerges as a validated instrument for measuring these five constructs, which are believed to lead toward the pursuit of computing as a field of study or career.
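As a minimal illustrative sketch of the internal-consistency checks involved in that kind of psychometric testing (not drawn from the session materials; the file name and item names below are assumptions), Cronbach's alpha might be computed for each construct's items as follows:

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Internal-consistency reliability for one construct's Likert items."""
        k = items.shape[1]                          # number of items in the scale
        item_vars = items.var(axis=0, ddof=1)       # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)   # variance of scale totals
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical wide-format responses: one column per survey item.
    responses = pd.read_csv("attitude_survey.csv")  # assumed file name
    constructs = {                                  # assumed item names
        "interest": ["int_1", "int_2", "int_3"],
        "self_efficacy": ["se_1", "se_2", "se_3"],
    }
    for name, cols in constructs.items():
        print(name, round(cronbach_alpha(responses[cols].dropna()), 2))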

Abstract 2 Title: Moving beyond attitudes: Measuring a range of outcomes in undergraduate STEM education
Presentation Abstract 2: Authentic scientific research experiences and internships are becoming increasingly common in undergraduate education. My colleagues and I have conducted over a decade of research on the outcomes and processes of undergraduate education. This work has culminated in the development of the Undergraduate Research Student Self-Assessment (URSSA) survey. The instrument measures many relevant constructs in undergraduate STEM education, including attitudes toward science, scientific behaviors and skills, and cognitive outcomes such as understanding the process of research and gaining a deeper understanding of the discipline. The instrument has been psychometrically tested and validated on a large population of Research Experiences for Undergraduates (REU) students, but may also have applicability for research-based course experiences or other experiential scientific education. While attitudinal measures are quite common in STEM education, the URSSA has the advantage of also measuring knowledge, skills, and scientific behaviors.

Abstract 3 Title: A Survey Infrastructure for Comparing Outcomes for Postsecondary Program Participants and Nonparticipants
Presentation Abstract 3: In 2011, the Computing Research Association's Center for Evaluating the Research Pipeline (CERP) was created to answer the "Compared to what?" question. Several higher education broadening-participation programs had data on participant outcomes but no way to compare those outcomes to outcomes for nonparticipants. CERP's approach is to survey students from roughly 100 academic departments nationwide; the survey results allow multiple intervention programs to compare their participants' outcomes to those of similar nonparticipants and also support basic research on underrepresentation issues. Annual surveys reach large samples of students (~4,000 undergraduate; ~2,000 graduate) and faculty (~300), and assess their experiences, attitudes, and beliefs, such as self-efficacy, sense of belonging, perception of department climate, career goals, and intent to continue in computing. Follow-up surveys track students and faculty to assess whether participation in intervention programs, compared to non-participation, is associated with change in experiences and intentions and with actual persistence in computing.
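A minimal sketch of the participant-versus-nonparticipant comparison this infrastructure enables (the file name and column names are assumptions, not CERP's actual data format) might look like:

    import pandas as pd
    from scipy import stats

    # Hypothetical pooled national survey data; column names are assumed.
    df = pd.read_csv("pooled_student_survey.csv")
    participants = df.loc[df["program_participant"] == 1, "intent_to_persist"].dropna()
    comparison = df.loc[df["program_participant"] == 0, "intent_to_persist"].dropna()

    # Welch's t-test: do participants and similar nonparticipants differ
    # in mean intent to persist in computing?
    t, p = stats.ttest_ind(participants, comparison, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.3f}")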

Abstract 4 Title: Measuring Attitudes: Lowering Survey Burden and Increasing Analytic Capacity

Presentation Abstract 4: This paper offers an approach to surveying student participants in STEM programs that 1) attends to current survey scales that principal investigators hope to affect; 2) provides both predictor and outcome variables; 3) reduces survey burden; and 4) controls for at least one form of bias. First, we recommend several survey scales and invite participation in an effort to curate a body of instruments. Second, we demonstrate a survey structure that allows evaluators to capture predictor and outcome variables in one instrument. Third, we combine the above with a retrospective pre/post survey approach, allowing evaluators to measure change from pre to post alongside the regression analyses afforded by including both predictor and outcome variables. Finally, the retrospective approach, while not applicable in every situation, controls for response-shift bias: a shift in students' pre-treatment self-ratings caused by their altered perceptions or definitions of the construct being measured.
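As an illustrative sketch only (the file name and variable names are assumptions), retrospective pre/post gains and a predictor-outcome regression of the kind described above might be analyzed as follows:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical single-administration survey that captures a retrospective
    # "pre" rating, a "post" rating, and a predictor scale in one instrument.
    df = pd.read_csv("retro_prepost_survey.csv")    # assumed file name
    df["gain"] = df["post_self_efficacy"] - df["retro_pre_self_efficacy"]

    # Regress change on a predictor from the same instrument (e.g., perceived
    # social support), controlling for the retrospective pre-rating.
    model = smf.ols("gain ~ social_support + retro_pre_self_efficacy", data=df).fit()
    print(model.summary())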
Audience Level: None

Session Abstract: 

In this session, you will be introduced to four nationally field-tested approaches to evaluating STEM education interventions: three validated instruments and a thoughtful method for retrospective surveying that incorporates predictor variables. After the four descriptions of the instruments and approaches, the chair will facilitate an interactive discussion covering possible uses and adaptations of these instruments, as well as the benefits and hurdles of standardizing methods for measuring common outcomes and building comparative datasets. We will discuss the evaluators' dilemma of striking a balance between customized assessments that target the specific outcomes a program intends to influence and assessments that target more general outcomes but have the advantage of using validated instruments.
