
CatMemo at the FinLLM Challenge Task: Fine-Tuning Large Language Models using Data Fusion in Financial Applications (2407.01953v1)

Published 2 Jul 2024 in cs.CE, cs.AI, cs.LG, and q-fin.CP

Abstract: The integration of LLMs into financial analysis has garnered significant attention in the NLP community. This paper presents our solution to the IJCAI-2024 FinLLM challenge, investigating the capabilities of LLMs within three critical areas of financial tasks: financial classification, financial text summarization, and single stock trading. We adopted Llama3-8B and Mistral-7B as base models, fine-tuning them through Parameter-Efficient Fine-Tuning (PEFT) and Low-Rank Adaptation (LoRA) approaches. To enhance model performance, we combined the datasets from Task 1 and Task 2 for data fusion. Our approach aims to tackle these tasks in a comprehensive and integrated manner, showcasing LLMs' capacity to address complex financial tasks with improved accuracy and decision-making capabilities.
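
A rough sketch of the pipeline the abstract describes (fusing the Task 1 and Task 2 training data, then fine-tuning a Llama3-8B or Mistral-7B base model with LoRA adapters) is shown below. It is a minimal illustration assuming the Hugging Face transformers, peft, and datasets libraries; the dataset file names, the instruction/output field names, and all hyperparameters are placeholders rather than the authors' reported configuration.

from datasets import load_dataset, concatenate_datasets
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Base model is a placeholder; the paper uses Llama3-8B and Mistral-7B.
base_model = "meta-llama/Meta-Llama-3-8B"

tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Data fusion: pool the Task 1 (classification) and Task 2 (summarization)
# training sets into one instruction-tuning corpus. File and field names
# here are hypothetical.
task1 = load_dataset("json", data_files="task1_classification.jsonl")["train"]
task2 = load_dataset("json", data_files="task2_summarization.jsonl")["train"]
fused = concatenate_datasets([task1, task2]).shuffle(seed=42)

def tokenize(example):
    text = example["instruction"] + "\n" + example["output"]
    return tokenizer(text, truncation=True, max_length=1024)

fused = fused.map(tokenize, remove_columns=fused.column_names)

# Parameter-efficient fine-tuning: attach LoRA adapters to the attention
# projections; rank, alpha, and target modules are illustrative defaults.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="catmemo-lora",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=2e-4,
    ),
    train_dataset=fused,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("catmemo-lora-adapter")  # saves only the adapter weights

Note that the abstract does not say how the two task datasets are balanced or reweighted; the concatenate-and-shuffle step above is simply the most direct reading of "data fusion".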
