
PDSS: A Privacy-Preserving Framework for Step-by-Step Distillation of Large Language Models (2406.12403v1)

Published 18 Jun 2024 in cs.CL and cs.AI

Abstract: In the context of real-world applications, leveraging LLMs for domain-specific tasks often faces two major challenges: domain-specific knowledge privacy and constrained resources. To address these issues, we propose PDSS, a privacy-preserving framework for step-by-step distillation of LLMs. PDSS works on a server-client architecture, wherein the client transmits perturbed prompts to the server's LLM for rationale generation. The generated rationales are then decoded by the client and used to enrich the training of a task-specific small language model (SLM) within a multi-task learning paradigm. PDSS introduces two privacy protection strategies: the Exponential Mechanism Strategy and the Encoder-Decoder Strategy, balancing prompt privacy and rationale usability. Experiments demonstrate the effectiveness of PDSS in various text generation tasks, enabling the training of task-specific SLMs with enhanced performance while prioritizing data privacy protection.
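The abstract does not spell out how the Exponential Mechanism Strategy perturbs prompts, but the exponential mechanism itself is a standard differential-privacy primitive: sample an output with probability proportional to exp(ε·u/(2Δu)), where u is a utility score and Δu its sensitivity. Below is a minimal sketch of one common instantiation, token-level replacement with a similarity-based utility. The `similarity` function, the toy vocabulary, and the per-token independent sampling are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def exponential_mechanism(candidates, utilities, epsilon, sensitivity=1.0, rng=None):
    """Sample one candidate with probability proportional to
    exp(epsilon * utility / (2 * sensitivity)) -- the standard exponential mechanism."""
    rng = rng or np.random.default_rng()
    scores = np.asarray(utilities, dtype=float) * epsilon / (2.0 * sensitivity)
    # Subtract the max before exponentiating for numerical stability.
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

def perturb_prompt(tokens, vocab, similarity, epsilon):
    """Replace each prompt token independently via the exponential mechanism,
    using a similarity score in [0, 1] as the utility (sensitivity 1)."""
    return [
        exponential_mechanism(vocab, [similarity(tok, c) for c in vocab], epsilon)
        for tok in tokens
    ]

# Toy usage: a hypothetical utility that favors the original token and synonyms.
vocab = ["diagnosis", "condition", "illness", "weather", "finance"]
synonyms = {"diagnosis": {"condition", "illness"}}

def similarity(a, b):
    if a == b:
        return 1.0
    return 0.5 if b in synonyms.get(a, set()) else 0.0

print(perturb_prompt(["diagnosis"], vocab, similarity, epsilon=2.0))
```

A higher epsilon concentrates probability on high-utility tokens (weaker privacy, better rationale usability), while a lower epsilon pushes the choice toward uniform (stronger privacy). This is exactly the privacy/usability trade-off the abstract attributes to PDSS's perturbation strategies.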

Authors (8)
  1. Tao Fan (19 papers)
  2. Yan Kang (49 papers)
  3. Weijing Chen (5 papers)
  4. Hanlin Gu (33 papers)
  5. Yuanfeng Song (27 papers)
  6. Lixin Fan (77 papers)
  7. Kai Chen (512 papers)
  8. Qiang Yang (202 papers)

