Positional Description Matters for Transformers Arithmetic (2311.14737v1)
Abstract: Transformers, central to the successes of modern Natural Language Processing, often falter on arithmetic tasks despite their vast capabilities, which paradoxically include remarkable coding abilities. We observe that a crucial challenge is their naive reliance on positional information to solve arithmetic problems with a small number of digits, leading to poor performance on larger numbers. Herein, we delve deeper into the role of positional encoding and propose several ways to fix the issue, either by modifying the positional encoding directly or by modifying the representation of the arithmetic task to leverage standard positional encoding differently. We investigate the value of these modifications on three tasks: (i) classical multiplication, (ii) length extrapolation in addition, and (iii) addition in a natural-language context. For (i), we train a small model on a small dataset (100M parameters and 300k samples) that shows remarkable aptitude at direct (no scratchpad) 15-digit multiplication and is essentially perfect up to 12 digits, whereas standard training in this setting yields a model that fails at 4-digit multiplication. For the addition experiments, we use a mere 120k samples to demonstrate, for (ii), extrapolation from training on 10-digit numbers to testing on 12-digit numbers, where standard training shows no extrapolation, and, for (iii), almost perfect accuracy up to 5 digits, where standard training is correct only up to 3 digits (which amounts to memorization with a 120k-sample training set).
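The abstract does not spell out the specific modifications. As one illustrative possibility only, not necessarily the method used in the paper, the sketch below shows a learned absolute positional embedding whose start index is randomized per training sequence, a common way to keep a model from tying a digit's role to one fixed absolute position. The class name `RandomOffsetPositionalEmbedding` and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class RandomOffsetPositionalEmbedding(nn.Module):
    """Learned absolute positional embedding with a random start offset.

    During training, each sequence's position indices are shifted by a
    random per-sequence offset, so the model cannot bind digit identity
    to a fixed absolute position. At evaluation time, positions start at 0.
    """

    def __init__(self, max_positions: int, d_model: int):
        super().__init__()
        self.max_positions = max_positions
        self.embed = nn.Embedding(max_positions, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) token embeddings
        batch, seq_len, _ = x.shape
        positions = torch.arange(seq_len, device=x.device)
        if self.training:
            # One random offset per sequence; keep indices within the table.
            max_offset = self.max_positions - seq_len
            offsets = torch.randint(0, max_offset + 1, (batch, 1), device=x.device)
            positions = positions.unsqueeze(0) + offsets      # (batch, seq_len)
        else:
            positions = positions.unsqueeze(0).expand(batch, seq_len)
        return x + self.embed(positions)


# Example: 120 position slots, model width 64, batch of two 10-token sequences.
pe = RandomOffsetPositionalEmbedding(max_positions=120, d_model=64)
tokens = torch.randn(2, 10, 64)
out_train = pe(tokens)    # training mode: random offsets applied
pe.eval()
out_eval = pe(tokens)     # eval mode: positions 0..9
```

The same effect can alternatively be obtained on the data side, e.g. by reformatting the arithmetic task (padding or annotating digit positions) so that a standard positional encoding is used differently; the paper investigates both kinds of change.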