This introductory lecture presents the background necessary to understand side-channel attacks. We will describe the different building blocks of a physical system and explain how an adversary may leverage their properties to recover sensitive information processed by the system under attack. We will then explain what is at stake for the adversary and the different steps required to successfully extract secret data. A practical demonstration will illustrate the key aspects covered during the lecture.
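To give a concrete flavour of the attack flow described above, here is a minimal sketch, not taken from the lecture material, of a correlation-based key recovery on simulated power traces; it assumes a simple Hamming-weight leakage model, and all names and parameters (key byte, trace count, noise level) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def hamming_weight(x):
    """Number of set bits in each byte (a common leakage model)."""
    return np.unpackbits(np.atleast_1d(x).astype(np.uint8)[:, None], axis=1).sum(axis=1)

SECRET_KEY = 0x3C      # the byte the simulated device processes
N_TRACES = 2000        # number of observed executions
NOISE = 1.0            # std. dev. of the measurement noise

# Simulate the physical system: each execution leaks the Hamming weight
# of (plaintext XOR key), blurred by Gaussian measurement noise.
plaintexts = rng.integers(0, 256, N_TRACES, dtype=np.uint8)
traces = hamming_weight(plaintexts ^ SECRET_KEY) + rng.normal(0, NOISE, N_TRACES)

# Attack: for every key guess, predict the leakage and correlate it with
# the measured traces; the correct guess maximizes the correlation.
scores = [abs(np.corrcoef(hamming_weight(plaintexts ^ g), traces)[0, 1])
          for g in range(256)]
print(f"recovered key byte: {int(np.argmax(scores)):#04x}")  # expected 0x3c
```

With enough traces, the correlation for the correct key guess dominates all wrong guesses even in the presence of measurement noise.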
Adversarial attacks aim to deceive ML systems into making wrong decisions, for instance by learning information about a classifier, by directly modifying the model, or by causing inputs to be misclassified. Adversarial ML [1,2] studies these attacks and the defenses built against them. Feeding adversarial examples to an ML system is a particularly sophisticated and powerful attack, in which specially crafted or modified inputs are submitted to the system with the intent of being misclassified as legitimate, as in misclassification attacks [2] and the adversarial classifier reverse engineering learning problem [3]. Another class of adversarial attacks aims to infer membership [4,5]: the adversary's goal is to decide whether a given data sample was included in the training dataset of the targeted ML model.
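As a rough illustration of the membership-inference threat model, the sketch below implements the simplest confidence-threshold heuristic (a baseline, not the shadow-model or likelihood-ratio attacks of [4,5]): an overfitted model tends to be more confident on its training points, so thresholding its prediction confidence already separates members from non-members. The dataset, model, and threshold are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data: one half plays the role of the training set ("members"),
# the other half is never shown to the model ("non-members").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately overfitted target model, trained only on the member half.
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_mem, y_mem)

# Attack: claim "member" whenever the top-class confidence exceeds a threshold.
threshold = 0.9
tpr = (target.predict_proba(X_mem).max(axis=1) > threshold).mean()   # members flagged
fpr = (target.predict_proba(X_non).max(axis=1) > threshold).mean()   # non-members flagged
print(f"attack true-positive rate: {tpr:.2f}, false-positive rate: {fpr:.2f}")
```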
A common defense that can be tailored to counter each of these types of adversarial attacks is offered by differential privacy (DP) [6], a stochastic measure of privacy now used in conjunction with ML algorithms to guarantee the privacy of individual users while handling large datasets. DP has furthermore been used to develop practical methods for protecting private user data at the moment it is provided to the ML system. In this setting, a differentially private mechanism aims to maintain the accuracy of the ML model without compromising the privacy of individual participants. A mechanism is said to be differentially private if its output distribution remains essentially unchanged when any single participant submits or removes their personal information from the statistical dataset.
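For concreteness, a mechanism M is ε-differentially private if, for any two datasets D and D' differing in one record and any set of outputs S, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D') ∈ S] [6]. The snippet below is a minimal sketch of the Laplace mechanism from [6] applied to a counting query of sensitivity 1; the dataset, query, and ε value are illustrative assumptions.

```python
import numpy as np

def laplace_count(data, predicate, epsilon=1.0, rng=None):
    """Release a counting query under epsilon-differential privacy.

    Adding or removing one record changes the true count by at most 1
    (sensitivity 1), so Laplace noise of scale 1/epsilon keeps the output
    distributions of neighbouring datasets within a factor exp(epsilon).
    """
    rng = rng or np.random.default_rng()
    true_count = sum(predicate(x) for x in data)
    return true_count + rng.laplace(scale=1.0 / epsilon)

ages = [23, 35, 41, 29, 62, 57, 33]
# Noisy answer to "how many participants are 40 or older?" with epsilon = 0.5.
print(laplace_count(ages, lambda a: a >= 40, epsilon=0.5))
```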
This tutorial delivers an extensive summary of the theory of DP and its properties, together with examples of its use in practice to shield a selection of ML algorithms from a range of adversarial attacks.
[1] A.D. Joseph, B. Nelson, B.I.P. Rubinstein, and J.D. Tygar, “Adversarial Machine Learning”, Cambridge University Press, 2018.
[2] J. Giraldo, A.A. Cardenas, M. Kantarcıoğlu, and J. Katz, “Adversarial Classification under Differential Privacy”, Network and Distributed System Security Symposium (NDSS), Feb. 2020.
[3] D. Lowd and C. Meek, “Adversarial Learning”, Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery in Data Mining (KDD '05), 2005.
[4] N. Carlini, S. Chien, M. Nasr, S. Song, A. Terzis, and F. Tramèr, “Membership Inference Attacks from First Principles”, IEEE Symposium on Security and Privacy (SP), 2022.
[5] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership Inference Attacks against Machine Learning Models”, IEEE Symposium on Security and Privacy (SP), 2017.
[6] C. Dwork, “Differential Privacy”, Automata, Languages and Programming, pp. 1-12, 2006.
During this presentation, Sonia and Cesar will give a general introduction to differential privacy and its applicability to a variety of applications. The second part of the presentation will focus on enforcing differential privacy for decentralized private computations in untrusted environments. Indeed, ensuring accurate and private computation in decentralized settings is challenging when no central party is trusted, communication must remain efficient, and adversaries may collude or deviate from the protocol. Existing approaches often suffer from high communication overhead or degrade in accuracy when participants drop out or behave maliciously.
This talk addresses these challenges by presenting decentralized mechanisms that achieve differential privacy with near-centralized accuracy, low communication cost, and strong robustness to dropouts and adversarial behavior.
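As a hedged sketch of one standard idea in this space (not the speakers' actual protocol), the code below lets each client add a small share of Gaussian noise locally and hide its value behind pairwise additive masks, so an untrusted aggregator only learns a noisy sum whose total noise matches what a trusted central curator would have added. All values and parameters are illustrative; real secure-aggregation protocols derive the masks cryptographically and handle dropouts.

```python
import numpy as np

rng = np.random.default_rng(0)
values = np.array([3.0, 7.5, 1.2, 4.4, 6.1])  # one private value per client
n = len(values)
sigma_total = 2.0                              # noise a trusted curator would add

# Each client adds N(0, sigma_total^2 / n): the aggregated noise then has
# variance sigma_total^2, matching the centralized Gaussian mechanism.
noisy = values + rng.normal(0, sigma_total / np.sqrt(n), size=n)

# Pairwise additive masks cancel in the sum, hiding individual reports from
# the aggregator (a toy stand-in for cryptographic secure aggregation).
masks = np.zeros(n)
for i in range(n):
    for j in range(i + 1, n):
        m = rng.normal(0, 10.0)
        masks[i] += m
        masks[j] -= m

reports = noisy + masks                        # what each client actually sends
print("aggregated noisy sum:", reports.sum())  # masks cancel, only DP noise remains
```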