Transformer based neural networks for emotion recognition in conversations (2405.11222v1)

Published 18 May 2024 in cs.CL

Abstract: This paper outlines the approach of the ISDS-NLP team in SemEval 2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF). For Subtask 1, we obtained a weighted F1 score of 0.43 and placed 12th on the leaderboard. We investigate two distinct approaches: Masked Language Modeling (MLM) and Causal Language Modeling (CLM). For MLM, we employ pre-trained BERT-like models in a multilingual setting, fine-tuning them with a classifier to predict emotions. Experiments with varying input lengths, classifier architectures, and fine-tuning strategies demonstrate the effectiveness of this approach. Additionally, we utilize Mistral 7B Instruct V0.2, a state-of-the-art model, applying zero-shot and few-shot prompting techniques. Our findings indicate that while Mistral shows promise, MLMs currently outperform it in sentence-level emotion classification.
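
The abstract sketches two pipelines but gives no implementation detail, so the following is a minimal, hypothetical sketch of the MLM approach: fine-tuning a multilingual BERT-like encoder with a classification head to predict per-utterance emotions. The checkpoint (xlm-roberta-base), the Ekman-style label set, the toy data, and the hyperparameters are all illustrative assumptions, not the authors' exact setup.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Ekman-style label set (an assumption; the shared task defines the real labels).
LABELS = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"]

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(LABELS)
)

# Toy (utterance, emotion) pairs standing in for the EDiReF training data.
train_data = [
    ("I can't believe you did that!", "anger"),
    ("That's wonderful news, congratulations!", "joy"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for text, emotion in train_data:
    inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")
    inputs["labels"] = torch.tensor([LABELS.index(emotion)])
    loss = model(**inputs).loss  # cross-entropy over the emotion classes
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: pick the highest-scoring emotion for a new utterance.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("Why would you say that?", return_tensors="pt")).logits
print(LABELS[int(logits.argmax(dim=-1))])

For the CLM side, a zero-shot request to Mistral 7B Instruct V0.2 might look like the prompt below; the exact prompt wording and few-shot examples the team used are not given in the abstract.

# Hypothetical zero-shot prompt in Mistral's instruction format.
prompt = (
    "[INST] Classify the emotion of the following utterance as one of: "
    + ", ".join(LABELS) + ".\n"
    'Utterance: "Why would you say that?"\n'
    "Answer with a single word. [/INST]"
)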

References (12)
  1. Language models are few-shot learners.
  2. Unsupervised cross-lingual representation learning at scale. CoRR, abs/1911.02116.
  3. GoEmotions: A dataset of fine-grained emotions. In 58th Annual Meeting of the Association for Computational Linguistics (ACL).
  4. BERT: Pre-training of deep bidirectional transformers for language understanding.
  5. Paul Ekman. 1992. An argument for basic emotions. Cognition and Emotion, 6(3–4):169–200.
  6. Mistral 7B. arXiv preprint arXiv:2310.06825.
  7. Mixtral of experts.
  8. SOLAR 10.7B: Scaling large language models with simple yet effective depth up-scaling.
  9. SemEval 2024 Task 10: Emotion discovery and reasoning its flip in conversation (EDiReF). In Proceedings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics.
  10. Discovering emotion and reasoning its flip in multi-party conversations using masked memory network and transformer.
  11. Discovering emotion and reasoning its flip in multi-party conversations using masked memory network and transformer. Knowledge-Based Systems, 240:108112.
  12. How to fine-tune BERT for text classification?