PuoBERTa: Training and evaluation of a curated language model for Setswana (2310.09141v2)

Published 13 Oct 2023 in cs.CL

Abstract: Natural language processing (NLP) has made significant progress for well-resourced languages such as English but has lagged behind for low-resource languages like Setswana. This paper addresses this gap by presenting PuoBERTa, a customised masked language model trained specifically for Setswana. We cover how we collected, curated, and prepared diverse monolingual texts to generate a high-quality corpus for PuoBERTa's training. Building upon previous efforts in creating monolingual resources for Setswana, we evaluated PuoBERTa across several NLP tasks, including part-of-speech (POS) tagging, named entity recognition (NER), and news categorisation. Additionally, we introduced a new Setswana news categorisation dataset and provided the initial benchmarks using PuoBERTa. Our work demonstrates the efficacy of PuoBERTa in fostering NLP capabilities for understudied languages like Setswana and paves the way for future research directions.
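
For readers who want to try the model, the sketch below shows how a RoBERTa-style masked language model such as PuoBERTa could be queried with the Hugging Face transformers library. The checkpoint identifier "dsfsi/PuoBERTa" and the Setswana prompt are assumptions for illustration and may differ from the actual released artefacts.

# Minimal sketch: fill-mask inference with a PuoBERTa-style checkpoint.
# The model identifier "dsfsi/PuoBERTa" is an assumption; substitute the
# checkpoint name actually published by the authors if it differs.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dsfsi/PuoBERTa")

# Predict the masked Setswana token (RoBERTa-style models use "<mask>").
for prediction in fill_mask("Ke rata go bala <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))

In the same spirit, the checkpoint could be fine-tuned with a token-classification head for POS tagging and NER, or a sequence-classification head for news categorisation, matching the downstream tasks evaluated in the paper.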

Authors (5)
  1. Vukosi Marivate (47 papers)
  2. Valencia Wagner (3 papers)
  3. Richard Lastrucci (2 papers)
  4. Isheanesu Dzingirai (2 papers)
  5. Moseli Mots'oehli (8 papers)
Citations (5)
