Measuring Information Leakage in Non-stochastic Brute-Force Guessing (2107.01113v1)
Abstract: This paper proposes an operational measure of non-stochastic information leakage to formalize privacy against a brute-force guessing adversary. Information is measured by the non-probabilistic uncertainty of uncertain variables, the non-stochastic counterparts of random variables. For private data $X$ related to released data $Y$, the non-stochastic brute-force leakage is measured by the complexity of an adversary exhaustively checking all possibilities of a private attribute $U$ of $X$, where complexity is the number of trials needed to successfully guess $U$. Maximizing this leakage over all possible private attributes $U$ yields the maximal (i.e., worst-case) non-stochastic brute-force guessing leakage. This quantity is proved to be fully determined by the minimal non-stochastic uncertainty of $X$ given $Y$, which also determines the worst-case attribute $U$, i.e., the attribute at highest privacy risk if $Y$ is disclosed. The maximal non-stochastic brute-force guessing leakage is shown to be proportional to the non-stochastic identifiability of $X$ given $Y$ and to upper-bound the existing maximin information, which quantifies the information leakage when an adversary must perfectly guess $U$ in one shot via $Y$. Experiments demonstrate the tradeoff between the maximal non-stochastic brute-force guessing leakage and data utility (measured by the maximum quantization error) and illustrate the relationship between maximin information and stochastic one-shot maximal leakage.
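To make the brute-force quantity concrete, below is a minimal Python sketch under a toy model: an uncertain variable is represented as a finite relation of joint $(x, y)$ outcomes, the conditional range of $X$ given an observed $y$ is the set of $x$ values consistent with that $y$, and leakage is taken as the log-ratio of worst-case trial counts before and after observing the most revealing $y$. The relation `joint`, the variable names, and the log-ratio formula are illustrative assumptions distilled from the abstract, not the paper's exact definitions.

```python
from math import log2

# Toy relation between private data X and released data Y, given as a
# finite set of joint (x, y) outcomes. Purely illustrative values.
joint = {
    (0, 'a'), (1, 'a'),           # observing y = 'a' leaves two candidates for x
    (2, 'b'), (3, 'b'), (4, 'b'), # observing y = 'b' leaves three candidates
}

range_x = {x for x, _ in joint}
range_y = {y for _, y in joint}

# Conditional range of X given each y: the candidates an adversary must
# still exhaust by brute force after seeing y.
cond = {y: {x for x, yy in joint if yy == y} for y in range_y}

# Worst-case number of guessing trials: without Y the adversary checks
# every element of the range of X; with Y, only the conditional range.
trials_without_y = len(range_x)
trials_with_y = max(len(s) for s in cond.values())  # least informative y

# The abstract ties the maximal leakage to the *minimal* non-stochastic
# uncertainty of X given Y, i.e., the smallest conditional range. One
# plausible reading (an assumption here) is a log-ratio in bits:
min_cond = min(len(s) for s in cond.values())       # most revealing y
leakage = log2(trials_without_y) - log2(min_cond)

print(f"|X| = {trials_without_y}, worst-y trials = {trials_with_y}, "
      f"min_y |X|y| = {min_cond}")
print(f"non-stochastic brute-force leakage ~ {leakage:.2f} bits")
```

On this toy relation the sketch reports roughly 1.32 bits: five candidates shrink to two under the most revealing observation, which is exactly the minimal-conditional-uncertainty quantity the abstract says determines the worst-case attribute.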