
Abstract

With the rapid advancement of large models and mobile edge computing, transfer learning, particularly through fine-tuning, has become crucial for adapting models to downstream tasks. Traditionally, this requires users to share their data with model owners for fine-tuning, which is not only costly but also raises significant privacy concerns. Furthermore, fine-tuning large-scale models is computationally intensive and often impractical for many users. To tackle these challenges, we introduce a system that combines offsite-tuning with physical-layer security, providing local data owners with a lightweight adapter and a compressed emulator. Data owners fine-tune the adapter locally and securely send it back to the model owner through a confidential channel for integration, ensuring privacy and conserving resources. Our paper focuses on optimizing the allocation of computational resources among the data owners and the large-model owner deployed at the edge, as well as the compression ratio of the adapters. We incorporate a secrecy uplink channel to maximize the defined utility while minimizing system costs such as energy consumption and delay. The optimization uses the Dinkelbach algorithm, fractional programming, successive convex approximation, and alternating optimization. Experiments demonstrate our algorithm's superiority over existing methods.
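The abstract couples the Dinkelbach algorithm with fractional programming to trade off utility against system cost. As a rough, illustrative sketch of that building block only (not the paper's actual problem), the Python snippet below maximizes a ratio utility(x)/cost(x) over an allocation vector x via Dinkelbach iterations; the `utility`, `cost`, and `dinkelbach` functions are hypothetical stand-ins, not definitions taken from the paper.

```python
# Illustrative sketch of Dinkelbach's method for a fractional objective
# utility(x) / cost(x). The utility and cost functions are hypothetical
# placeholders (e.g., a log-style utility and an energy/delay-style cost).
import numpy as np
from scipy.optimize import minimize

def utility(x):
    # Hypothetical concave utility of the allocation vector x.
    return np.sum(np.log1p(x))

def cost(x):
    # Hypothetical strictly positive system cost (energy/delay proxy).
    return 1.0 + np.sum(0.5 * x ** 2)

def dinkelbach(x0, tol=1e-6, max_iter=50):
    """Maximize utility(x)/cost(x) over x >= 0 via Dinkelbach iterations."""
    x = np.asarray(x0, dtype=float)
    lam = utility(x) / cost(x)  # initial ratio estimate
    for _ in range(max_iter):
        # Inner subproblem: maximize utility(x) - lam * cost(x)
        # (solved here as a generic bounded minimization of the negative).
        res = minimize(lambda z: -(utility(z) - lam * cost(z)),
                       x, bounds=[(0.0, None)] * x.size)
        x = res.x
        f_val = utility(x) - lam * cost(x)
        lam = utility(x) / cost(x)  # update the ratio estimate
        if abs(f_val) < tol:        # Dinkelbach convergence criterion
            break
    return x, lam

if __name__ == "__main__":
    alloc, ratio = dinkelbach(np.ones(4))
    print("allocation:", np.round(alloc, 3), "utility/cost:", round(ratio, 4))
```

In the paper's setting, the inner subproblem would instead be the (non-convex) joint allocation over compute, compression ratio, and secrecy-rate variables, handled with successive convex approximation and alternating optimization rather than an off-the-shelf solver.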
