
Security and Privacy Preserving Deep Learning

(2006.12698)
Published Jun 23, 2020 in cs.CR, cs.AI, and cs.LG

Abstract

Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend, since the success of deep learning techniques is directly proportional to the amount of data available for training. The massive data collection that deep learning requires presents obvious privacy issues. Users' personal, highly sensitive data, such as photos and voice recordings, are kept indefinitely by the companies that collect them; users can neither delete the data nor restrict the purposes for which they are used. Data privacy has therefore become a pressing concern for governments and companies. It also gives rise to an interesting challenge: on the one hand, we keep pushing for high-quality models and accessible data, while on the other, we need to keep data safe from both intentional and accidental leakage. The more personal the data, the more restricted its use, which means that some of the most important social problems cannot be addressed with machine learning because researchers lack access to suitable training data. By learning how to do machine learning in a way that protects privacy, we can make a real difference on many of these problems, such as curing disease. Deep neural networks are susceptible to various inference attacks because they memorize information about their training data. In this chapter, we introduce differential privacy, which ensures that various kinds of statistical analyses do not compromise privacy, and federated learning, which trains a machine learning model on data to which we do not have direct access.
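To make the two techniques named at the end of the abstract concrete, here is a minimal sketch, not taken from the paper: the Laplace mechanism, a standard way to release a statistic with epsilon-differential privacy, and federated averaging (FedAvg), where a server combines locally trained model weights without ever seeing raw client data. It assumes NumPy; the function names `laplace_mechanism` and `federated_average` are illustrative, not from any particular library.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a statistic with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: the server receives only weight vectors from
    clients, never their training data, and averages them weighted by
    each client's dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Example: privately release a count (sensitivity 1 for a counting query),
# then aggregate three clients' model weights.
private_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
weights = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.4])]
global_weights = federated_average(weights, client_sizes=[100, 50, 150])
```

Smaller epsilon values add more noise and give stronger privacy; in practice both ideas are combined, e.g. by adding calibrated noise to the client updates before federated aggregation.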
