Abstract

Massive Open Online Courses (MOOCs) have been used by students as a low-cost, low-touch educational credential in a variety of fields. Understanding the grading mechanisms behind course assignments is important for evaluating MOOC credentials. A common approach to grading free-response assignments, particularly those that are not easy to grade programmatically, is massive-scale peer review. These approaches are difficult to assess because the responses typically require human evaluation. Here we link data from public code repositories on GitHub with course grades from a large MOOC to study the dynamics of massive-scale peer review, which has important implications for understanding how difficult-to-grade assignments are evaluated. Because the research was not hypothesis-driven, we describe the results in an exploratory framework. We find three distinct clusters of repeated peer-review submissions and use these clusters to study how grades change in response to changes in code submissions. Our exploration also leads to an important observation: massive-scale peer-review scores are highly variable, increase on average with repeated submissions, and change in ways that are not closely tied to the code changes that form the basis for the resubmissions.
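To make the clustering step concrete, the sketch below shows one way repeated peer-review submissions could be grouped by their score trajectories. It is a hypothetical illustration, not the paper's actual pipeline: the synthetic data, the feature choices (first score, final score, number of resubmissions), and the use of k-means are assumptions; only the choice of three clusters echoes the finding reported in the abstract.

```python
# Hypothetical sketch: clustering repeated peer-review submissions by their
# score trajectories. All data here is synthetic; the real study links public
# GitHub repositories to course grade records, which are not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic per-learner features for repeated submissions:
# [first score, final score, number of resubmissions]
n_learners = 300
first_score = rng.uniform(40, 100, n_learners)
score_change = rng.normal(5, 15, n_learners)   # scores rise on average, with high variance
final_score = np.clip(first_score + score_change, 0, 100)
n_resubmissions = rng.integers(1, 6, n_learners)

X = np.column_stack([first_score, final_score, n_resubmissions])

# Standardize features so no single dimension dominates the distance metric.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# The abstract reports three distinct clusters, so k=3 is used here.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_std)

for k in range(3):
    members = X[kmeans.labels_ == k]
    print(
        f"cluster {k}: n={len(members)}, "
        f"mean first score={members[:, 0].mean():.1f}, "
        f"mean final score={members[:, 1].mean():.1f}"
    )
```

In a real analysis of this kind, the features would come from joining graded submission records with the commit history of each learner's repository, so that score changes between submissions can be compared against the code changes made between them.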
