Potentials of ChatGPT for Annotating Vaccine Related Tweets (2312.12016v1)
Abstract: This study evaluates ChatGPT's performance in annotating vaccine-related Arabic tweets by comparing its annotations with human annotations. A dataset of 2,100 tweets representing various factors contributing to vaccine hesitancy was examined. Two domain experts annotated the data, with a third resolving conflicts. ChatGPT was then employed to annotate the same dataset using a specific prompt for each factor. The ChatGPT annotations were evaluated under zero-shot, one-shot, and few-shot settings, achieving average accuracies of 82.14%, 83.85%, and 85.57%, respectively. Precision averaged around 86%, indicating relatively few false positives. Average recall ranged from 0.74 to 0.80, and F1-scores from 0.65 to 0.93. The AUC for zero-shot, one-shot, and few-shot learning was 0.79, 0.80, and 0.83, respectively. In cases of ambiguity, both the human annotators and ChatGPT faced challenges. These findings suggest that ChatGPT holds promise as a tool for annotating vaccine-related tweets.
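The paper does not reproduce its prompt templates, so the sketch below is only one plausible way to run this kind of zero-/one-/few-shot annotation with the OpenAI chat API and score it against adjudicated human labels. The model name, prompt wording, label set, and all helper names are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical sketch of few-shot tweet annotation with the OpenAI chat API.
# Prompt wording, label set, and model choice are illustrative assumptions,
# not the paper's actual configuration.
from openai import OpenAI
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FACTOR = "distrust of vaccine safety"  # one hesitancy factor per prompt (assumed)

# In-context examples: an empty list gives zero-shot, one pair gives one-shot,
# and several pairs give few-shot prompting.
FEW_SHOT_EXAMPLES = [
    ("Tweet expressing doubt about vaccine side effects ...", "yes"),
    ("Tweet unrelated to vaccine safety ...", "no"),
]

def annotate(tweet: str) -> str:
    """Ask the model whether a tweet reflects the target hesitancy factor."""
    messages = [{"role": "system",
                 "content": f"You label Arabic tweets. Answer only 'yes' or 'no': "
                            f"does the tweet express {FACTOR}?"}]
    for example_tweet, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": example_tweet})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": tweet})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; the paper only says "ChatGPT"
        messages=messages,
        temperature=0,          # deterministic labels for reproducibility
    )
    return response.choices[0].message.content.strip().lower()

# Compare model labels with the adjudicated human annotations.
tweets = ["...", "..."]   # the 2,100-tweet dataset would go here
human = ["yes", "no"]     # gold labels from the two experts plus the adjudicator
model = [annotate(t) for t in tweets]
to_int = lambda labels: [1 if y == "yes" else 0 for y in labels]
y_true, y_pred = to_int(human), to_int(model)
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
```

Note that computing an AUC as reported in the abstract would additionally require a confidence score per label (e.g., from token probabilities) rather than the hard yes/no answers shown here.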