
Discrete and Soft Prompting for Multilingual Models (2109.03630v1)

Published 8 Sep 2021 in cs.CL

Abstract: It has been shown for English that discrete and soft prompting perform strongly in few-shot learning with pretrained language models (PLMs). In this paper, we show that discrete and soft prompting perform better than finetuning in multilingual settings: crosslingual transfer and in-language training of multilingual natural language inference. For example, with 48 English training examples, finetuning obtains 33.74% accuracy in crosslingual transfer, barely surpassing the majority baseline (33.33%). In contrast, discrete and soft prompting outperform finetuning, achieving 36.43% and 38.79% respectively. We also demonstrate good performance of prompting when using training data in multiple languages other than English.
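To make the contrast between soft prompting and finetuning concrete, here is a minimal, self-contained sketch of the soft-prompting idea: trainable "soft prompt" vectors are prepended to the input embeddings while the PLM's weights stay frozen. All names, sizes, and the tiny toy encoder below are illustrative assumptions, not the paper's actual models or hyperparameters.

```python
import torch
import torch.nn as nn

class SoftPromptClassifier(nn.Module):
    """Toy sketch of soft prompting: only the prompt vectors (and a small
    classification head) are trained; the encoder itself is frozen.
    Sizes here are arbitrary, not the paper's setup."""

    def __init__(self, vocab_size=100, d_model=16, prompt_len=4, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Trainable soft prompt: prompt_len continuous "pseudo-token" vectors.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=1,
        )
        # n_classes=3 mirrors the three NLI labels (entail/neutral/contradict).
        self.classifier = nn.Linear(d_model, n_classes)
        # Freeze the pretrained components; only the prompt and head train.
        for p in self.embed.parameters():
            p.requires_grad = False
        for p in self.encoder.parameters():
            p.requires_grad = False

    def forward(self, token_ids):
        tok = self.embed(token_ids)  # (batch, seq, d_model)
        prompt = self.soft_prompt.unsqueeze(0).expand(token_ids.size(0), -1, -1)
        x = torch.cat([prompt, tok], dim=1)  # prepend prompt: (batch, P+seq, d)
        h = self.encoder(x)
        return self.classifier(h.mean(dim=1))  # mean-pooled logits

model = SoftPromptClassifier()
logits = model(torch.randint(0, 100, (2, 8)))
print(logits.shape)  # (2, 3): one label distribution per example
```

In crosslingual transfer, the frozen multilingual encoder's shared representations carry over, so tuning only the small prompt on a few English examples can generalize to other languages better than updating all weights.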

Authors (2)
  1. Mengjie Zhao (35 papers)
  2. Hinrich Schütze (250 papers)
Citations (68)
