Finetuning Transformer Models to Build ASAG System

(2109.12300)
Published Sep 25, 2021 in cs.CL and cs.AI

Abstract

Research into systems for automatically grading student answers to quiz and exam questions in educational settings has been ongoing since 1966. Over the years, the problem has been divided into several categories; among them, the grading of text answers splits into short answer grading and essay grading. The goal of this work was to develop an ML-based short answer grading system. I therefore built a system that fine-tunes a RoBERTa-Large model pretrained on the STS benchmark dataset, and I also created an interface to demonstrate the production readiness of the system. I evaluated its performance on the Mohler extended dataset and the SciEntsBank dataset. The system achieved a Pearson correlation of 0.82 and an RMSE of 0.7 on the Mohler dataset, beating the SOTA performance on this dataset (a correlation of 0.805 and an RMSE of 0.793). Additionally, it achieved a Pearson correlation of 0.79 and an RMSE of 0.56 on the SciEntsBank dataset, which further confirms the robustness of the system. Two observations from these experiments: a batch size of 1 produced better results than a batch size of 16 or 32, and Huber loss performed well as the loss function for this regression task. The system was tried and tested on train and validation splits with various random seeds, and it was tuned to achieve a minimum correlation of 0.76 and a maximum RMSE of 0.15 (out of 1) on any dataset.
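The abstract outlines the training recipe: a RoBERTa-Large cross-encoder pretrained on the STS benchmark, fine-tuned as a regressor over (reference answer, student answer) pairs using Huber loss and a batch size of 1. The snippet below is a minimal sketch of that setup, not the author's code: the checkpoint name `cross-encoder/stsb-roberta-large`, the toy data, the learning rate, and the epoch count are assumptions, while the batch size of 1 and the Huber loss come from the abstract.

```python
# Hypothetical sketch of the fine-tuning setup described in the abstract.
# Assumed (not from the paper): the checkpoint name, toy data, learning
# rate, and epoch count. Stated in the abstract: batch size 1, Huber loss.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# A RoBERTa-Large cross-encoder already trained on the STS-B benchmark
# (assumed stand-in for the paper's STS-pretrained model).
checkpoint = "cross-encoder/stsb-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=1)

# Each training example pairs the reference answer with a student answer
# plus a numeric grade (toy data; grades may need rescaling to the
# checkpoint's output range).
train_pairs = [
    ("A variable stores a value.", "It holds data for later use.", 4.5),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = torch.nn.HuberLoss()  # the abstract reports Huber loss worked well

model.train()
for epoch in range(3):
    for ref, student, grade in train_pairs:  # batch size 1, per the abstract
        enc = tokenizer(ref, student, truncation=True, return_tensors="pt")
        pred = model(**enc).logits.squeeze(-1)       # scalar grade prediction
        loss = loss_fn(pred, torch.tensor([grade]))  # regression against grade
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

At evaluation time, the reported metrics would correspond to computing `scipy.stats.pearsonr` and the root mean squared error between the predicted and gold grades on the held-out split.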
