Language Model Metrics and Procrustes Analysis for Improved Vector Transformation of NLP Embeddings (2106.02490v1)

Published 4 Jun 2021 in cs.CL and cs.NE

Abstract: Artificial neural networks are mathematical models at their core. This truism presents some fundamental difficulty when networks are tasked with Natural Language Processing. A key problem lies in measuring the similarity or distance among vectors in NLP embedding space, since the mathematical concept of distance does not always agree with the linguistic concept. We suggest that the best way to measure linguistic distance among vectors is by employing the language model (LM) that created them. We introduce Language Model Distance (LMD) for measuring accuracy of vector transformations based on the Distributional Hypothesis (LMD Accuracy). We show the efficacy of this metric by applying it to a simple neural network learning the Procrustes algorithm for bilingual word mapping.
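The abstract refers to the Procrustes algorithm for bilingual word mapping. As background, the classic orthogonal Procrustes problem has a closed-form SVD solution; the sketch below illustrates that standard solution on synthetic data (the toy matrices and variable names are illustrative assumptions, not the paper's actual setup or datasets):

```python
import numpy as np

# Orthogonal Procrustes for bilingual word mapping (background sketch):
# given source embeddings X and target embeddings Y for a seed dictionary
# (both n x d), find the orthogonal W minimizing ||XW - Y||_F.
# Closed form: W = U V^T, where U S V^T is the SVD of X^T Y.

rng = np.random.default_rng(0)
n, d = 100, 50
X = rng.standard_normal((n, d))                        # "source language" vectors (toy)
R_true, _ = np.linalg.qr(rng.standard_normal((d, d)))  # hidden orthogonal map
Y = X @ R_true                                         # "target language" vectors (toy)

U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt                                             # closed-form orthogonal solution

print(np.allclose(X @ W, Y, atol=1e-8))                # recovers the hidden map
```

In the paper's setting, a neural network is trained to learn such a mapping, and LMD Accuracy is proposed for evaluating how well the transformed vectors align linguistically rather than just geometrically.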

Authors (2)
  1. Thomas Conley (2 papers)
  2. Jugal Kalita (64 papers)
Citations (1)
