
Abstract

Peer review is the primary means of quality control in academia; as the outcome of a peer review process, program and area chairs make an acceptance decision for each paper based on the review reports and scores it received. The quality of scientific work is multi-faceted; coupled with the subjectivity of reviewing, this makes final decision making difficult and time-consuming. To support this final step of peer review, we formalize it as a paper ranking problem. We introduce a novel, multi-faceted generic evaluation framework for ranking submissions based on peer reviews that takes into account effectiveness, efficiency, and fairness. We propose a preference learning perspective on the task that considers both review texts and scores to alleviate the inevitable bias and noise in reviews. Our experiments on peer review data from the ACL 2018 conference demonstrate the superiority of our preference-learning-based approach over baselines and prior work, while highlighting the importance of using both review texts and scores to rank submissions.
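The abstract describes learning to rank submissions from pairwise preferences derived from reviews. As a rough illustration only (the paper's actual model, features, and data are not shown here), the following sketch fits a linear scorer on hypothetical per-paper features, such as a mean review score and a text-derived positivity proxy, using a Bradley-Terry-style pairwise objective; all names and numbers are made up for the example.

```python
import numpy as np

# Hypothetical sketch of pairwise preference learning for paper ranking.
# Each paper i has a feature vector x_i (e.g., mean review score and a
# text-positivity proxy). We fit a linear scorer w so that
# sigmoid(w . (x_i - x_j)) models "paper i preferred over paper j".
# This is a generic Bradley-Terry-style setup, not the paper's exact model.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_pairwise(features, prefs, lr=0.1, epochs=500):
    """features: (n_papers, d) array; prefs: list of (i, j), i preferred over j."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=features.shape[1])
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for i, j in prefs:
            diff = features[i] - features[j]
            p = sigmoid(w @ diff)        # modeled P(i beats j)
            grad += (p - 1.0) * diff     # gradient of -log p w.r.t. w
        w -= lr * grad / len(prefs)      # average-gradient descent step
    return w

# Toy data: 4 papers, features = [mean review score, text-positivity proxy].
X = np.array([[4.5, 0.8],
              [3.0, 0.4],
              [4.0, 0.9],
              [2.0, 0.1]])
# Hypothetical pairwise preferences: paper 0 over 1, 2 over 1, etc.
prefs = [(0, 1), (2, 1), (0, 3), (2, 3), (1, 3)]

w = fit_pairwise(X, prefs)
scores = X @ w
ranking = np.argsort(-scores)  # indices of papers, best first
```

Ranking by the learned scores, rather than by raw review scores alone, is what lets text-derived signals contribute alongside numeric scores, which is the combination the abstract argues for.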

