Peter Zhang

Assistant Professor
College of Engineering
Fall 2024
19-867: Decision Analytics for Business and Policy (14-week course)
Research Question(s):
- To what extent does using generative AI to complete an assignment impact students’ ability to engage in critical thinking?
- To what extent does generative AI tool use impact equity across various levels of technical preparation?
Zhang delivered scaffolded instruction on how to leverage genAI (ChatGPT) to solve decision-analytic scenarios. During this instruction, students learned about prompt engineering, fine-tuning existing AI models, and using a group of genAI agents to perform a specific data analysis task.
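To make the prompt-engineering portion of this instruction concrete, below is a minimal, hypothetical sketch of the kind of structured prompt a student might assemble for a decision-analytic scenario. The scenario text, prompt fields, and the call_model placeholder are illustrative assumptions, not materials from the course.

# Hypothetical sketch of prompt engineering for a decision-analytic task.
# The scenario, prompt structure, and call_model() placeholder are
# illustrative assumptions; they are not course materials.

PROMPT_TEMPLATE = """You are a decision-analytics assistant.

Task: {task}

Data description:
{data_description}

Constraints:
{constraints}

Respond with:
1. The modeling approach you would use and why.
2. Step-by-step Python pseudocode for the analysis.
3. Key assumptions and how sensitive the recommendation is to them.
"""

def build_prompt(task: str, data_description: str, constraints: str) -> str:
    """Assemble a structured prompt with an explicit role, task, and output format."""
    return PROMPT_TEMPLATE.format(
        task=task,
        data_description=data_description,
        constraints=constraints,
    )

def call_model(prompt: str) -> str:
    """Placeholder for a call to a genAI model (e.g., ChatGPT).
    Swap in a real API client here; returning a stub keeps the sketch runnable."""
    return f"[model response to {len(prompt)} characters of prompt]"

if __name__ == "__main__":
    prompt = build_prompt(
        task="Recommend weekly staffing levels for a call center.",
        data_description="12 months of hourly call volumes with day-of-week and holiday flags.",
        constraints="Answer 90% of calls within 60 seconds; budget cap of 40 FTEs.",
    )
    print(call_model(prompt))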
Study Design: At the start of the semester, students completed a survey about their technical background. Zhang then provided the same classroom instruction to all students on modeling frameworks and technical topics, such as contextual prediction and optimization under uncertainty. Students then self-selected into one of two study conditions. In one condition, students completed a prediction and optimization homework assignment with the assistance of genAI. In the other condition, students completed the same homework assignment without using genAI. Almost all students then worked in pairs, each consisting of one student from each condition, in which they shared their individual work and jointly analyzed both sets of solutions; each pair submitted a single file as their homework submission. Later in the semester, students completed three assessments that included items on prediction and optimization (quiz 1, midterm, quiz 2), all without using genAI.
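For readers unfamiliar with these technical topics, the following minimal sketch illustrates one generic contextual-prediction-then-optimization workflow: a newsvendor-style ordering decision solved by sample-average approximation. The data, cost parameters, and model are invented for illustration and do not reproduce the actual homework assignment.

# Illustrative predict-then-optimize sketch (NOT the course's actual assignment):
# 1) fit a contextual prediction model for demand, then 2) choose an order
# quantity that minimizes expected newsvendor cost under residual uncertainty.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: demand depends on one contextual feature plus noise.
n = 200
x = rng.uniform(0, 10, size=n)               # context (e.g., promotion intensity)
demand = 20 + 3.0 * x + rng.normal(0, 5, n)  # true demand process (unknown in practice)

# Step 1: contextual prediction via least squares.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, demand, rcond=None)
residuals = demand - X @ beta                # empirical model of the remaining uncertainty

# Step 2: optimization under uncertainty for a new context x0.
# Newsvendor costs: underage (lost margin) vs. overage (holding/salvage loss).
c_under, c_over = 4.0, 1.0
x0 = 7.5
point_forecast = beta[0] + beta[1] * x0

# Sample-average approximation: evaluate candidate order quantities against
# the forecast plus resampled residuals, and keep the cost-minimizing quantity.
scenarios = point_forecast + rng.choice(residuals, size=1000, replace=True)
candidates = np.linspace(scenarios.min(), scenarios.max(), 200)
costs = [
    np.mean(c_under * np.maximum(scenarios - q, 0) + c_over * np.maximum(q - scenarios, 0))
    for q in candidates
]
q_star = candidates[int(np.argmin(costs))]

print(f"Point forecast for x0={x0}: {point_forecast:.1f}")
print(f"Cost-optimal order quantity: {q_star:.1f}")  # exceeds the forecast because underage costs more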
Sample size: Self-Selected Treatment (29 students); Self-Selected Control (26 students)
Data Sources:
- Rubric scores for students’ performance on prediction and optimization questions on 2 in-class quizzes and a midterm
- A survey of students’ prior experience with the technical concepts and self-efficacy regarding their data analytic skills and their ability to use genAI to complete analytic tasks.
Findings:
- RQ1: The self-selected use of genAI on a homework assignment did not impact students’ performance over time on subsequent assessments of prediction and optimization. For the optimization skill, students improved over time regardless of their homework condition.
Figure 1. For the prediction questions on the assessments, there was no interaction showing an impact of self-selected genAI use, F(2, 106) = 1.59, p = .21, no main effect of time, F(2, 106) = .75, p = .47, and no main effect of condition, F(1, 53) = 2.33, p = .13. Error bars are 95% confidence intervals for the means.
Figure 2. For the optimization questions on the assessments, there was no interaction showing an impact of self-selected genAI use, F(2, 106) = 1.01, p = .37, no main effect of condition, F(1, 53) = 2.34, p = .13, and a main effect of time, F(2, 106) = 7.84, p < .001, ηp² = .13. Post-hoc tests showed that students significantly improved from quiz 1 to the midterm (p < .001). No other differences were significant. Error bars are 95% confidence intervals for the means. An illustrative sketch of this type of analysis appears below, after the findings.
- RQ2: Neither students’ choice to use genAI nor their learning outcomes differed by level of technical preparedness, as measured by a composite self-report score of educational background.
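The statistics reported with Figures 1 and 2 are consistent with a 2 (condition: genAI vs. no genAI, between-subjects) × 3 (assessment time, within-subjects) mixed ANOVA. The report does not state what software was used; the sketch below shows one way such an analysis could be run in Python with the pingouin package, using synthetic data and invented column names (student_id, condition, time, score) as stand-ins for the rubric scores.

# One possible way to run the 2 (condition) x 3 (time) mixed ANOVA described above.
# The study's actual software and data layout are not stated; pingouin and the
# synthetic long-format data below are assumptions for illustration only.
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic long-format data: one row per student per assessment.
rng = np.random.default_rng(1)
students = [f"s{i:02d}" for i in range(55)]          # 29 + 26 students
conditions = ["genAI"] * 29 + ["no genAI"] * 26
rows = []
for sid, cond in zip(students, conditions):
    for t in ["quiz1", "midterm", "quiz2"]:
        rows.append({"student_id": sid, "condition": cond,
                     "time": t, "score": rng.normal(75, 10)})
df = pd.DataFrame(rows)

# Mixed ANOVA: condition is between-subjects, time is within-subjects;
# effsize="np2" reports partial eta squared, as given for the time effect.
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="student_id", between="condition", effsize="np2")
print(aov.round(3))

# Bonferroni-corrected post-hoc comparisons across assessment times
# (pairwise_tests is called pairwise_ttests in older pingouin versions).
posthoc = pg.pairwise_tests(data=df, dv="score", within="time",
                            subject="student_id", between="condition",
                            padjust="bonf")
print(posthoc.round(3))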
Eberly Center’s Takeaways:
- RQ1 & RQ2: Students’ self-selection of whether or not to use genAI did not differ by their technical preparedness. There was no evidence that either self-selected genAI use or technical preparedness impacted students’ later performance on the optimization or prediction skills. This could reflect the small scope of the genAI intervention, sufficient instructional support for students’ learning, or peer-to-peer learning when pairs debriefed their homework together; alternatively, it could represent a true null impact of genAI for these skills.