FedJudge: Federated Legal Large Language Model (2309.08173v3)

Published 15 Sep 2023 in cs.CL

Abstract: LLMs have gained prominence in the field of Legal Intelligence, offering potential applications in assisting legal professionals and laymen. However, the centralized training of these Legal LLMs raises data privacy concerns, as legal data is distributed among various institutions and contains sensitive individual information. This paper addresses this challenge by exploring the integration of Legal LLMs with Federated Learning (FL) methodologies. By employing FL, Legal LLMs can be fine-tuned locally on devices or clients, and their parameters are aggregated and redistributed by a central server, ensuring data privacy without directly sharing raw data. However, computation and communication overheads hinder the full fine-tuning of LLMs under the FL setting. Moreover, the distribution shift of legal data reduces the effectiveness of FL methods. To this end, we propose the first Federated Legal LLM (FedJudge) framework, which fine-tunes Legal LLMs efficiently and effectively. Specifically, FedJudge utilizes parameter-efficient fine-tuning methods to update only a few additional parameters during FL training. In addition, we explore continual learning methods to preserve the global model's important parameters when training local clients, mitigating the problem of data distribution shifts. Extensive experimental results on three real-world datasets clearly validate the effectiveness of FedJudge. Code is released at https://github.com/yuelinan/FedJudge.
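
The abstract's core recipe, parameter-efficient fine-tuning on each client plus server-side aggregation of only the adapter weights, can be illustrated with a short sketch. Below is a minimal Python/PyTorch sketch assuming LoRA adapters whose parameter names contain "lora_"; the function names and the name-matching heuristic are illustrative, not taken from the FedJudge codebase.

    from typing import Dict, List

    import torch

    def is_lora_param(name: str) -> bool:
        # Heuristic (assumption): parameters whose names contain "lora_"
        # are the small set of trainable adapter weights; the backbone
        # stays frozen and is never communicated.
        return "lora_" in name

    def aggregate_lora(client_states: List[Dict[str, torch.Tensor]],
                       client_sizes: List[int]) -> Dict[str, torch.Tensor]:
        # FedAvg restricted to adapter weights: average each LoRA tensor
        # across clients, weighted by local dataset size. Communicating
        # only these few parameters keeps the per-round cost small.
        total = sum(client_sizes)
        global_state: Dict[str, torch.Tensor] = {}
        for name in client_states[0]:
            if not is_lora_param(name):
                continue
            global_state[name] = sum(
                (n / total) * state[name]
                for state, n in zip(client_states, client_sizes)
            )
        return global_state

The server would then broadcast global_state back to the clients, which load it into their adapters before the next local training round.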
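
The abstract also mentions continual learning methods that preserve the global model's important parameters during local training, to counter the distribution shift across clients. Below is a minimal sketch of that kind of constraint, assuming an EWC-style quadratic penalty that anchors each local adapter parameter to its global counterpart; the importance weights and the penalty form are assumptions in the spirit of such regularization, not the paper's exact objective.

    from typing import Dict

    import torch

    def regularized_loss(task_loss: torch.Tensor,
                         local_params: Dict[str, torch.Tensor],
                         global_params: Dict[str, torch.Tensor],
                         importance: Dict[str, float],
                         lam: float = 0.1) -> torch.Tensor:
        # Local objective = task loss + a weighted quadratic penalty that
        # discourages parameters the global model deems important from
        # drifting during local fine-tuning (hypothetical EWC-style term).
        penalty = task_loss.new_zeros(())
        for name, p in local_params.items():
            g = global_params[name].detach()  # anchor; no gradient flows here
            w = importance.get(name, 1.0)     # per-parameter importance weight
            penalty = penalty + (w * (p - g).pow(2)).sum()
        return task_loss + lam * penalty

A larger lam keeps clients closer to the global model at the cost of local adaptation; lam = 0 recovers plain local fine-tuning.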
