Feedback and Engagement on an Introductory Programming Module

(2201.01240)
Published Jan 4, 2022 in cs.CY

Abstract

We ran a study on engagement and achievement in a first-year undergraduate programming module that used an online learning environment containing tasks that generate automated feedback. Students could also access human feedback in traditional labs. We gathered quantitative data on engagement and achievement, which allowed us to split the cohort into six groups. We then interviewed students after the end of the module to produce qualitative data on perceptions of what feedback is, how useful it is, the uses made of it, and how it bears on engagement. A general finding was that human and automated feedback are different but complementary. However, feedback needs differ by group. Our findings imply (1) that a blended human-automated feedback approach improves engagement, and (2) that this approach needs to be differentiated according to the type of student. We discuss implications for the design of feedback for programming modules.
