Brandon Bodily

Assistant Teaching Professor
College of Engineering
Fall 2024
49-101 Introduction to Engineering Design, Innovation, and Entrepreneurship
Research Question(s):
- To what extent does the type of feedback interaction (peer vs. generative AI) impact the quality of revisions of interview protocols?
- To what extent does the type of feedback interaction impact the development of self-efficacy for interviewing skills?
- What are student attitudes about receiving feedback when role playing with a peer versus generative AI?
Intervention: Bodily provided students with suggestions and tips for how to engage with the genAI tool (Copilot) while preparing for an interview with a key stakeholder. Students then interacted with the tool to conduct a practice interview and elicit feedback on their interview protocols. Students next updated their interview protocols and engaged in real-life interviews as part of the engineering design process.
Study Design: Bodily delivered the same classroom instruction on interview protocol development to all students. All students then crafted an initial draft of an interview protocol. Bodily randomly assigned each student to engage with genAI, as described above, or to leverage peers to receive feedback on their protocols. Then, during the same class meeting, students practiced their interviewing skills by role playing an interview using their revised protocol either with a peer or with the genAI tool, depending on the study condition assigned. All students could revise their protocol after receiving feedback and role playing, before conducting the actual interview.
Sample size: Treatment (15 students); Control (18 students)
Data Sources:
- Rubric scores for the quality of students’ draft and final versions of their interview protocol as scored by one of two coders who did not know the students’ study condition
- Surveys of students’ self-efficacy regarding their development of an interview protocol and interviewing skills
- Student survey reflections on the feedback and role playing session
Findings:
- RQ1: The type of feedback interaction (genAI or peer) did not affect the quality of students’ interview protocol revisions.
Figure 1. The rubric-scored (max 25) quality of interview protocol revisions was not affected by study condition (condition x draft: F(1,31) = 1.11, p = .30). Error bars are 95% confidence intervals for the means.
- RQ2: While students significantly grew in their self-efficacy for interviewing skills across the course, the type of feedback interaction (genAI or peer) did not affect the degree of this change.
Figure 2. Students’ growth in self-efficacy did not significantly differ depending on whether they interacted with and received feedback from a peer or from genAI to guide their interview protocol revisions (condition x time: F(1,30) = .91, p = .35). Self-efficacy did not significantly differ between conditions (F(1,30) = .41, p = .53). Error bars are 95% confidence intervals for the means.
- RQ3: Students interacting with and receiving feedback from genAI reported the feedback to be significantly more “useful” and the session to be less “awkward” than did students interacting with a peer. The two groups did not differ in how useful they found the role playing session, how comfortable they were, or how effective they believed their revised protocol would be at eliciting information from their interviewee.

Figure 3. Students interacting with genAI reported their feedback to be significantly more useful (M = 5.47, SD = .74) than did students interacting with a peer (M = 4.56, SD = 1.38) (t(26.92) = -2.29, p = .02, g = .80). Error bars are 95% confidence intervals for the means.

Figure 4. Students interacting with genAI reported the experience to be significantly less awkward (M = 1.20, SD = .41) than did students interacting with a peer (M = 2.44, SD = 1.38) (t(20.57) = -3.63, p = .002, g = 1.17). Error bars are 95% confidence intervals for the means.
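For readers who want to check the reported statistics, the Welch's t-test and Hedges' g values above can be approximately reproduced from the summary statistics alone. The sketch below is illustrative, not the authors' analysis code; the exact small-sample bias correction the authors applied to g is an assumption, so the recomputed g may differ slightly from the reported value due to rounding and correction choice.

```python
from math import sqrt

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom from summary stats."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / sqrt(v1 + v2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Cohen's d with the common small-sample bias correction (assumed here)."""
    pooled = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

# Figure 4 summary stats: genAI (n = 15) vs. peer (n = 18)
t, df = welch_t(1.20, 0.41, 15, 2.44, 1.38, 18)
g = hedges_g(1.20, 0.41, 15, 2.44, 1.38, 18)
# t ≈ -3.63 and df ≈ 20.5, close to the reported t(20.57) = -3.63
```

Small discrepancies from the published figures (e.g., df of 20.5 vs. 20.57) are expected because the means and SDs are rounded to two decimals.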
Eberly Center’s Takeaways:
- RQ1, RQ2, & RQ3: There was no evidence that genAI affected either the development of students’ self-efficacy for interviewing skills or the quality of their interview protocol revisions. Possible explanations include a misalignment between the evaluation rubric and the task instructions students received, limitations in coder accuracy, the short (one-day) window between draft and revision, and/or high scores on initial drafts that left limited room to show improvement. Nevertheless, students reported finding genAI feedback more useful, and the session less awkward, than feedback from a peer. While this could suggest that students prefer a feedback interaction with genAI over one with a peer, role playing with a live human may be more authentic practice for the interviews that students will ultimately conduct. Additionally, although students perceived genAI’s feedback as more useful, we lack a direct measure of whether the genAI feedback was in fact better than, or even different from, that of peers. GenAI may nonetheless provide a viable feedback mechanism for developing an interview protocol when peer feedback is not readily available.