Analyzing the Limitations of Cross-lingual Word Embedding Mappings (1906.05407v1)

Published 12 Jun 2019 in cs.CL and cs.LG

Abstract: Recent research in cross-lingual word embeddings has almost exclusively focused on offline methods, which independently train word embeddings in different languages and map them to a shared space through linear transformations. While several authors have questioned the underlying isomorphism assumption, which states that word embeddings in different languages have approximately the same structure, it is not clear whether this is an inherent limitation of mapping approaches or a more general issue when learning cross-lingual embeddings. To answer this question, we experiment with parallel corpora, which allow us to compare offline mapping to an extension of skip-gram that jointly learns both embedding spaces. We observe that, under these ideal conditions, joint learning yields more isomorphic embeddings, is less sensitive to hubness, and obtains stronger results in bilingual lexicon induction. We thus conclude that current mapping methods do have strong limitations, calling for further research to jointly learn cross-lingual embeddings with a weaker cross-lingual signal.
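
For readers unfamiliar with the offline setup that the abstract contrasts with joint learning, below is a minimal sketch of the typical mapping pipeline: independently trained embeddings are aligned with an orthogonal linear map (Procrustes) learned from a seed dictionary, and a bilingual lexicon is then induced by nearest-neighbor retrieval in the shared space. The function names, preprocessing choices, and random data are illustrative assumptions, not the paper's exact method; plain nearest-neighbor retrieval is shown even though it is exposed to the hubness issue the abstract mentions.

```python
import numpy as np

def normalize(M):
    """Length-normalize rows (a common preprocessing step for mapping methods)."""
    return M / np.linalg.norm(M, axis=1, keepdims=True)

def learn_orthogonal_map(X_dict, Y_dict):
    """Orthogonal Procrustes: find orthogonal W minimizing ||X_dict W - Y_dict||_F.

    X_dict, Y_dict: (n, d) source/target embeddings for a seed dictionary,
    with row i of X_dict translating to row i of Y_dict.
    """
    U, _, Vt = np.linalg.svd(X_dict.T @ Y_dict)
    return U @ Vt  # d x d orthogonal mapping matrix

def induce_lexicon(X_src, Y_tgt, W, k=1):
    """Bilingual lexicon induction by nearest-neighbor retrieval in the mapped space.

    With length-normalized rows and an orthogonal W, the dot product below
    equals cosine similarity; returns the k nearest target indices per source word.
    """
    sims = (X_src @ W) @ Y_tgt.T
    return np.argsort(-sims, axis=1)[:, :k]

if __name__ == "__main__":
    # Illustrative sizes and random vectors, standing in for monolingual embeddings.
    rng = np.random.default_rng(0)
    d, n_dict, n_src, n_tgt = 300, 5000, 20000, 20000
    X = normalize(rng.standard_normal((n_src, d)))
    Y = normalize(rng.standard_normal((n_tgt, d)))
    # Simulated seed dictionary: the same indices play the role of word pairs.
    dict_idx = rng.choice(min(n_src, n_tgt), size=n_dict, replace=False)
    W = learn_orthogonal_map(X[dict_idx], Y[dict_idx])
    translations = induce_lexicon(X[:100], Y, W, k=1)
```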

Authors (5)
  1. Aitor Ormazabal (10 papers)
  2. Mikel Artetxe (52 papers)
  3. Gorka Labaka (15 papers)
  4. Aitor Soroa (29 papers)
  5. Eneko Agirre (53 papers)
Citations (61)
