Large Language Models on Lexical Semantic Change Detection: An Evaluation (2312.06002v1)

Published 10 Dec 2023 in cs.CL

Abstract: Lexical Semantic Change Detection stands out as one of the few areas where LLMs have not been extensively applied. Traditional methods such as PPMI and SGNS remain prevalent in research, alongside newer BERT-based approaches. Although LLMs now cover a wide range of natural language processing domains, the literature on their application to this specific task is notably scarce. In this work, we seek to bridge that gap by introducing LLMs into the domain of Lexical Semantic Change Detection. We present novel prompting solutions and a comprehensive evaluation spanning all three generations of LLMs, contributing to the exploration of LLMs in this research area.
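To make the contrast in the abstract concrete, here is a minimal sketch of the PPMI baseline it mentions: count-based vectors with positive PMI weighting, compared across two time slices by cosine distance. The toy corpora, window size, and target word are illustrative assumptions, not the paper's setup, and real pipelines (SGNS in particular) additionally align the two vector spaces before comparing.

```python
# Minimal PPMI-based semantic change scoring over two time-sliced corpora.
# Illustrative sketch only; corpora, window size, and target word are assumptions.
from collections import Counter
import math

def ppmi_vectors(sentences, window=2):
    """Build sparse PPMI vectors {word: {context: weight}} from tokenized sentences."""
    word_counts = Counter()
    pair_counts = Counter()
    total_pairs = 0
    for tokens in sentences:
        word_counts.update(tokens)
        for i, w in enumerate(tokens):
            # Symmetric context window around position i.
            contexts = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            for c in contexts:
                pair_counts[(w, c)] += 1
                total_pairs += 1
    total_words = sum(word_counts.values())
    vectors = {}
    for (w, c), n_wc in pair_counts.items():
        pmi = math.log((n_wc / total_pairs) /
                       ((word_counts[w] / total_words) * (word_counts[c] / total_words)))
        if pmi > 0:  # keep positive values only (the "P" in PPMI)
            vectors.setdefault(w, {})[c] = pmi
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors given as dicts."""
    num = sum(u[k] * v[k] for k in set(u) & set(v))
    den = (math.sqrt(sum(x * x for x in u.values())) *
           math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

# Toy corpora for two periods; the change score is 1 - cosine similarity
# between the target word's PPMI vectors (context words act as shared dimensions).
corpus_t1 = [["the", "cell", "held", "a", "single", "prisoner"]]
corpus_t2 = [["she", "answered", "her", "cell", "phone", "quickly"]]
v1, v2 = ppmi_vectors(corpus_t1), ppmi_vectors(corpus_t2)
score = 1 - cosine(v1.get("cell", {}), v2.get("cell", {}))
print(f"semantic change score for 'cell': {score:.3f}")
```

The paper's own prompts are not reproduced on this page, so the template below is a purely hypothetical illustration of what a zero-shot prompting approach to the same task might look like, not the authors' method:

```python
# Hypothetical zero-shot prompt for LLM-based change detection, in the spirit
# of the "prompting solutions" the abstract mentions; purely illustrative.
PROMPT = (
    "Below are two uses of the word '{word}', one from each time period.\n"
    "Period 1: {old_sentence}\n"
    "Period 2: {new_sentence}\n"
    "Is '{word}' used with the same meaning in both sentences? Answer yes or no."
)
print(PROMPT.format(word="cell",
                    old_sentence="The cell held a single prisoner.",
                    new_sentence="She answered her cell on the train."))
```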

