
What do character-level models learn about morphology? The case of dependency parsing (1808.09180v1)

Published 28 Aug 2018 in cs.CL

Abstract: When parsing morphologically-rich languages with neural models, it is beneficial to model input at the character level, and it has been claimed that this is because character-level models learn morphology. We test these claims by comparing character-level models to an oracle with access to explicit morphological analysis on twelve languages with varying morphological typologies. Our results highlight many strengths of character-level models, but also show that they are poor at disambiguating some words, particularly in the face of case syncretism. We then demonstrate that explicitly modeling morphological case improves our best model, showing that character-level models can benefit from targeted forms of explicit morphological modeling.
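As background to the abstract, a character-level model builds each word's vector from its characters rather than looking the word up in a fixed embedding table, so morphologically related and unseen words still receive representations. A minimal sketch of that idea (not the paper's actual char-LSTM; the fixed random character vectors and mean pooling here are illustrative assumptions):

```python
import random

DIM = 8          # dimensionality of each character vector (illustrative)
random.seed(0)   # fixed seed so character vectors are reproducible
_char_vecs = {}  # lazily populated character -> vector table

def char_vec(ch):
    # Assign each character a fixed random vector on first use.
    # A real model would learn these parameters during training.
    if ch not in _char_vecs:
        _char_vecs[ch] = [random.uniform(-1.0, 1.0) for _ in range(DIM)]
    return _char_vecs[ch]

def word_vec(word):
    # Mean-pool the character vectors into one word representation,
    # so any word, seen or unseen, gets a vector from its spelling.
    vecs = [char_vec(ch) for ch in word]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]
```

Because related forms such as "casa" and "casas" share most characters, their composed vectors are similar but not identical, which is the property that helps with rich morphology; the paper's point is that such models nonetheless struggle when distinct morphological analyses share the same surface form (case syncretism).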

Authors (3)
  1. Clara Vania (16 papers)
  2. Andreas Grivas (5 papers)
  3. Adam Lopez (29 papers)
Citations (24)
