Abstract

In higher education courses, peer assessment activities are common for keeping students engaged during presentations. Defining precisely how students assess the work of others requires careful consideration. Asking students for numeric grades is the most common method. However, students tend to assign high grades to most projects, so aggregating peer assessments results in all projects receiving nearly the same grade. Moreover, students might strategically assign low grades to the projects of others so that their own projects stand out. Asking students to order all projects from best to worst imposes a high cognitive load, as studies have shown that people find it difficult to order more than a handful of items. To address these issues, we propose a novel peer rating model consisting of (a) an algorithm that elicits student assessments and (b) a protocol for aggregating grades to produce a single order. The algorithm asks students to evaluate projects and answer pairwise comparison queries, which are then aggregated into a ranking over the projects. An application based on our model was deployed and tested in a university course and showed promising results, including fewer ties between alternatives and a significant reduction in the communication load on students. These results indicate that the model provides a simple, accurate, and efficient approach to peer review.
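The abstract does not specify the aggregation protocol, but the core idea of turning pairwise comparison answers into a single ranking can be sketched with a simple win-counting (Copeland-style) rule. The function name, data shapes, and scoring rule below are illustrative assumptions, not the paper's actual method.

```python
from collections import defaultdict

def aggregate_pairwise(comparisons, projects):
    """Aggregate pairwise comparisons into a ranking by counting wins.

    `comparisons` is a list of (winner, loser) pairs collected from
    students' answers to pairwise queries. This Copeland-style rule is
    only a sketch of one possible aggregation protocol.
    """
    wins = defaultdict(int)
    for winner, _loser in comparisons:
        wins[winner] += 1
    # Rank projects by number of pairwise wins, best first.
    return sorted(projects, key=lambda p: wins[p], reverse=True)

# Example: pairwise judgments from several students over projects A, B, C.
votes = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
ranking = aggregate_pairwise(votes, ["A", "B", "C"])
print(ranking)  # A has the most pairwise wins and is ranked first
```

Note that ties in win counts can still occur; a full protocol would need a tie-breaking rule (e.g., falling back on the numeric evaluations the algorithm also elicits).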
