
Lecturers

The program is still under construction.


Josep Domingo-Ferrer (Monday afternoon)

Bio:

Josep Domingo-Ferrer (Fellow, IEEE and Distinguished Scientist, ACM) received BSc-MSc and PhD degrees in computer science (Autonomous University of Barcelona), a BSc-MSc in mathematics (UNED) and an MA in philosophy (U. Paris Nanterre). He is a distinguished full professor of computer science and an ICREA-Acadèmia research professor at Universitat Rovira i Virgili, Tarragona, Catalonia, where he also leads CYBERCAT (Center for Cybersecurity Research of Catalonia). He is currently also affiliated as an invited professor with LAAS-CNRS, Toulouse, France. His research interests include data privacy, data security, trustworthy machine learning, and ethics in IT. 

More details: https://crises-deim.urv.cat/jdomingo

Contact him at josep.domingo@urv.cat

Title: Privacy and security in machine learning: attacks and defenses

Abstract:

In this talk, I will review privacy and security attacks against conventional machine learning, and I will discuss defenses and the conflict between defending privacy and security in decentralized ML. The usefulness of differential privacy as a privacy defense will be examined. I will also touch on some myths regarding privacy attacks against conventional ML. Some hints on how all this can apply to generative ML will be given as well.

Cédric Lefebvre (Tuesday morning)

Bio:

Dr. Cédric Lefebvre is head of AI and cybersecurity at Custocy. He received his PhD in 2021, specializing in cybersecurity and privacy. His primary research focused on practical applications of homomorphic encryption, particularly for genetic data. After completing his PhD, he joined Custocy to work on detecting network anomalies using AI models, identifying the key features for detecting attacks, and enhancing the understanding of detection mechanisms.

Abstract:

AI-based network anomaly detection: I will present a brief state of the art of techniques for detecting anomalies in network traffic using AI. I will then show that the datasets used in academic work are of poor quality and that state-of-the-art techniques do not work in real-world conditions. We will discuss how to solve this and share some ideas based on concrete cases. To conclude, I will present our work at Custocy (a French Network Detection and Response solution).

Grégory Blanc (Tuesday morning)

Bio:
Coming soon

Abstract:
Coming soon

Malika Izabachène (Tuesday afternoon)

Bio:

Malika Izabachène's research focuses on improving homomorphic encryption techniques and integrating them into protocols that guarantee user confidentiality. After postdoctoral positions dedicated to analyzing the security of cryptographic protocols, such as electronic voting and electronic money systems, she has worked in industry and at CEA. Her doctoral thesis, defended in 2009, deals with the analysis of anonymous authentication protocols.

Abstract:

Fully Homomorphic Encryption (FHE) is an advanced encryption technique that allows computation over encrypted data without requiring decryption. Data confidentiality is thus guaranteed even when the data are processed, in encrypted form, by a third party such as an online service.
 
This lecture gives an introduction to homomorphic encryption techniques and illustrates their application to artificial intelligence (AI). We will show how these methods allow statistical processing or medical analysis of sensitive data while keeping the data confidential.
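Purely as an illustrative sketch of the additive homomorphic property that underlies such schemes, here is a toy Paillier-style construction in Python (tiny, insecure parameters chosen only for readability; this is not an example from the lecture):

```python
# Toy Paillier-style additively homomorphic encryption: multiplying two ciphertexts
# yields an encryption of the sum of the plaintexts. Parameters are tiny and insecure,
# for illustration only.
import random
from math import gcd

def keygen(p=293, q=433):                           # toy primes
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                            # valid because we use g = n + 1
    return n, (lam, mu)

def encrypt(n, m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(n, sk, c):
    lam, mu = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

n, sk = keygen()
c1, c2 = encrypt(n, 17), encrypt(n, 25)
print(decrypt(n, sk, (c1 * c2) % (n * n)))          # prints 42: the sum, computed on ciphertexts
```

Fully homomorphic schemes go further and also support multiplications over ciphertexts, which is what makes arbitrary encrypted computations, such as the statistical and medical analyses mentioned above, possible.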

Gilles Tredan and Philippe Leleux (Wednesday morning)

Gilles Tredan Bio:

Since November 2011, I have been working as a full-time researcher in the TSF (now TRUST) group. I obtained a PhD degree from the University of Rennes 1 in November 2009. From January 2010 to September 2011, I worked as a postdoc in the FG INET group in Berlin. I defended my Habilitation à Diriger des Recherches in June 2019.

My current focus is on algorithmic (adversarial) transparency: how can we infer properties of remote (online) algorithms? Which properties can be inferred at a reasonable cost? Can such approaches be used by societies to challenge tech giants over the control of our digital existences?

Philippe Leleux Bio:

Since September 2023, Philippe Leleux has been an associate professor in the TRUST team at LAAS-CNRS, a research laboratory specialized in systems analysis and architecture. He began his career as a research engineer in bioinformatics at INRAE from 2014 to 2016. He then joined CERFACS, where he worked in the field of high-performance computing and completed his PhD in 2021, jointly awarded by INP Toulouse and FAU Erlangen-Nürnberg. His doctoral work earned him the Léopold Escande prize. In 2022, he worked as a postdoctoral researcher at the French National Institute of Health and Medical Research (INSERM).

The current focus of his research is on 1) the application of artificial intelligence for safety, mainly intrusion detection in cybersecurity as well as decision-support tools for pilots; and 2) the safety of artificial intelligence methods, mainly to improve the "trust" in models applied for diagnostics in medical applications.

Title: Adversarial examples: 10 years of worst case

Abstract:

In machine learning, adversarial examples are inputs designed by an adversary to fool a target classifier. More precisely, these examples are crafted by adding imperceptible noise to originally well-classified inputs in order to change the resulting classification. Introduced a decade ago (circa 2014), they have generated a wide spectrum of research that ranges from very practical questions (can I fool an autonomous vehicle?) to more fundamental ones (are there classifiers that resist adversarial inputs, and at what cost?). This talk is a humble introduction to these various topics by a non-expert.
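For readers who want a concrete picture, the sketch below shows the classic fast gradient sign method on a placeholder PyTorch classifier; the model, input, label and perturbation budget are made up for the illustration and are not taken from the talk.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases the loss,
# using a placeholder (untrained) classifier and a stand-in "image".
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder classifier
x = torch.rand(1, 1, 28, 28)           # a stand-in input
y = torch.tensor([3])                  # its (assumed) correct label
epsilon = 0.1                          # perturbation budget

x.requires_grad_(True)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# Fast Gradient Sign Method: one step along the sign of the input gradient.
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print(model(x).argmax(1), model(x_adv).argmax(1))  # the prediction may flip
```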

Gabriel Zaid (Thursday morning)

Bio:

Gabriel is a cryptography and machine learning engineer at Thales ITSEF (Information Technology Security Evaluation Facility) in Toulouse. He assesses the security of physical devices and systems embedding cryptographic primitives. His research covers several practical aspects of cryptography and its application to embedded systems. He particularly cares about side-channel attacks and the use of machine learning, not only to carry them out successfully, but also to gain a better understanding of the physical security of embedded AI.

Abstract:

This introductory lecture aims at presenting the background necessary to understand side-channel attacks. We will present the different building blocks of a physical system and explain how an adversary may leverage its properties in order to recover sensitive information processed by the system under attack. We will then explain the stakes for an adversary and the different steps to implement in order to successfully extract secret data. A practical demonstration will illustrate the key aspects mentioned during this lecture.
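To make the idea more concrete, here is a heavily simplified, simulated correlation-style attack: leakage is modeled as the Hamming weight of an S-box output plus noise, and the key byte is recovered by correlating the measurements with predictions for each key guess. This is a generic textbook illustration, not an example of the evaluations performed at Thales ITSEF.

```python
# Toy correlation-style side-channel attack on simulated traces (illustration only).
import random
import numpy as np

rng = random.Random(0)
SBOX = list(range(256)); rng.shuffle(SBOX)      # stand-in nonlinear S-box
SECRET_KEY = 0x3A

def hamming_weight(v):
    return bin(v).count("1")

# Simulate "power measurements": leakage ~ HW of the S-box output, plus Gaussian noise.
plaintexts = np.random.randint(0, 256, size=2000)
traces = np.array([hamming_weight(SBOX[p ^ SECRET_KEY]) for p in plaintexts], dtype=float)
traces += np.random.normal(0, 1.0, size=traces.shape)

# Attack: correlate the measured leakage with the predicted leakage for every key guess.
scores = []
for guess in range(256):
    predicted = np.array([hamming_weight(SBOX[p ^ guess]) for p in plaintexts], dtype=float)
    scores.append(abs(np.corrcoef(traces, predicted)[0, 1]))

print(hex(int(np.argmax(scores))))              # recovers 0x3a with high probability
```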

Ayşe Ünsal (Thursday afternoon)

Bio:
Dr. Ayşe Ünsal has been a research fellow at the Digital Security Department of EURECOM since September 2020. Her current position is funded by the ANR Frame XG project AIRSEC (Air Interface Security for 6G) and the European project TRUMAN (Trustworthy Human-Centric AI). Previously, she worked as a post-doctoral researcher on coded caching at the Communication Systems Department of EURECOM (September 2017 to February 2019), where she had also received her Ph.D. degree from Télécom ParisTech, with a specialization in Electronics and Communication, in November 2014. Her Ph.D. was funded by an FP7 European project (248993-LOLA) focused on information-theoretic characterizations of transmission strategies that lower latency in wireless communications and on explaining the technical basis behind them. After obtaining her degree, she worked as a post-doctoral researcher and lecturer at Paderborn University, Germany, and at the CITI Lab of INRIA/INSA Lyon.

Abstract:
Adversarial attacks aim to deceive ML systems by leading them to make wrong decisions, for instance by learning necessary information about a classifier, by directly modifying the model, or by causing inputs to be misclassified. Adversarial ML [1,2] studies these attacks and the defenses created against them. Introducing adversarial examples to ML systems is a specific type of sophisticated and powerful attack, where additional and sometimes specially crafted or modified inputs are provided to the system with the intent of being misclassified by the model as legitimate, as in the case of misclassification attacks [2] and the adversarial classifier reverse engineering learning problem [3]. Another class of adversarial attacks is constructed to infer membership [4,5], where the adversary's goal is to decide whether a given data sample was included in the training dataset of the targeted ML model.
A common solution that may be tailored to counter each of these different types of adversarial attacks is offered by differential privacy (DP) [6], which is a stochastic measure of privacy and is now used in conjunction with ML algorithms to guarantee the privacy of individual users while handling large datasets. DP has furthermore been used to develop practical methods for protecting private user data at the moment users provide information to the ML system. In this case, the use of a differentially private mechanism aims to maintain the accuracy of the ML model without incurring a cost to the privacy of individual participants. A mechanism is said to be differentially private if its output distribution remains essentially unaltered when any single participant decides to submit or remove their personal information from the statistical dataset.
This tutorial delivers an extensive summary of the theory of DP and its properties, along with examples of its use in practice to shield a chosen set of ML algorithms from a number of different adversarial attacks.
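As a minimal sketch of the kind of mechanism covered in the tutorial, the example below answers a counting query with Laplace noise calibrated to a sensitivity of 1; the dataset, predicate and privacy budget epsilon are invented for the illustration.

```python
# Minimal Laplace mechanism for an epsilon-differentially-private counting query.
import numpy as np

def dp_count(dataset, predicate, epsilon):
    true_count = sum(predicate(x) for x in dataset)
    sensitivity = 1                    # adding or removing one record changes the count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 52, 61, 33]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))   # noisy answer to "how many are 40+?"
```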

[1] A. D. Joseph, B. Nelson, B. I. P. Rubinstein, and J. D. Tygar, "Adversarial Machine Learning", Cambridge University Press, 2018.
[2] J. Giraldo, A. A. Cardenas, M. Kantarcıoğlu, and J. Katz, "Adversarial Classification under Differential Privacy", Network and Distributed System Security Symposium (NDSS), February 2020.
[3] D. Lowd and C. Meek, "Adversarial Learning", Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery in Data Mining (KDD '05), 2005.
[4] N. Carlini, S. Chien, M. Nasr, S. Song, A. Terzis, and F. Tramèr, "Membership Inference Attacks from First Principles", IEEE Symposium on Security and Privacy (SP), 2022.
[5] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, "Membership Inference Attacks against Machine Learning Models", IEEE Symposium on Security and Privacy (SP), 2017.
[6] C. Dwork, "Differential Privacy", Automata, Languages and Programming, pp. 1-12, 2006.

Sonia Ben Mokhtar and César Sabater (Thursday afternoon)

Bio:

Sonia Ben Mokhtar is a CNRS research director at the LIRIS laboratory, Lyon, France, and the head of the Distributed Systems and Information Retrieval group (DRIM). She received her PhD in 2007 from Université Pierre et Marie Curie before spending two years at University College London (UK). Her research focuses on the design of resilient and privacy-preserving distributed systems. Sonia has co-authored 80+ papers in peer-reviewed conferences and journals, has served on the editorial board of IEEE Transactions on Dependable and Secure Computing, and has co-chaired major conferences in the field of distributed systems (e.g., ACM Middleware, IEEE DSN). She has also served as chair of ACM SIGOPS France and as co-chair of GDR RSD, a national academic network of researchers in distributed systems and networks.

César Sabater is a postdoctoral researcher in the DRIM Team at INSA Lyon. His research focuses on privacy-preserving and secure algorithms for processing sensitive data, with an emphasis on differential privacy, robustness against active adversaries, and efficient decentralized computation. He is particularly interested in the empirical assessment of privacy guarantees and resilience to inference and poisoning attacks in decentralized machine learning. He completed his PhD at INRIA-Lille, where he investigated privacy-accuracy-communication trade-offs using differential privacy, secure multi-party computation, and zero-knowledge proofs.

Abstract:

During this presentation, Sonia and César will give a general introduction to differential privacy and its applicability to various applications. A second part of the presentation will focus on enforcing differential privacy for decentralised private computations in untrusted environments. Indeed, ensuring accurate and private computation in decentralized settings is challenging when no central party is trusted, communication must remain efficient, and adversaries may collude or deviate from the protocol. Existing approaches often suffer from high communication overhead or degrade in accuracy when participants drop out or behave maliciously.
This talk addresses these challenges by presenting decentralized mechanisms that achieve differential privacy with near-centralized accuracy, low communication cost, and strong robustness to dropouts and adversarial behavior.
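A minimal sketch of one classic ingredient behind such mechanisms, under the assumption that users exchange pairwise cancelling masks (a generic illustration, not the speakers' specific protocol): individual reports look random, yet the masks cancel in the aggregate, so the sum stays accurate.

```python
# Decentralized aggregation sketch: pairwise cancelling masks plus small local noise.
import numpy as np

rng = np.random.default_rng(0)
values = np.array([4.0, 7.0, 1.0, 9.0])           # each user's private value
n = len(values)

# Every pair (i, j) with i < j shares a secret random mask; i adds it, j subtracts it.
masks = np.triu(rng.normal(0, 10.0, (n, n)), k=1)
pairwise = masks.sum(axis=1) - masks.sum(axis=0)  # masks cancel in the global sum

# Small local noise; summed over honest users it is meant to provide the
# differential-privacy guarantee (a simplifying assumption in this sketch).
local_noise = rng.normal(0, 0.5, n)
reports = values + pairwise + local_noise         # what each user publishes

print(reports)                                    # individually, reports look random
print(reports.sum(), values.sum())                # but the sum is close to the true total
```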

Nathalie Aussenac-Gilles (Friday morning)

Bio:

Nathalie Aussenac-Gilles has been a CNRS research fellow at IRIT, the computer science laboratory in Toulouse since 1991 and a research director since 2010. She chaired several national research groups associated with the French AI Association and the National Research Group in AI. Within IRIT, she co-founded the MELODI team in 2011, coordinated activities in big data, AI and data processing, and co-chaired the AI department from 2012 to 2025. Her research focuses on knowledge engineering and semantic web technologies, methods and models for building terminologies, ontologies and knowledge graphs, especially from text and data. She has proposed algorithms for extracting semantic relations from texts, annotating texts with concepts, integrating heterogeneous data based on ontologies, and several FAIR metadata models for open data. She has participated in more than 20 national and European collaborative projects, including the Starlight project dedicated to ethical AI solutions for European legal agencies, where she contributed to the use of Natural Language Processing and knowledge graphs for authorship identification in social networks.

Title: Natural Language Processing and Semantics for Cybersecurity: challenges and approaches to deal with social network data

Abstract:

In this talk, I will first review some of the challenges raised by cybersecurity that require natural language processing or document processing. In the second part of the talk, I will go into more detail about the case of data and text coming from social networks. I will present state-of-the-art techniques that deal with some of the main tasks related to this kind of data: authorship identification, fake news recognition, personal network identification, etc. I will also discuss the difficulty of dealing with such data while complying with ethics and current regulations on personal data and AI.

Mario Laurent (Friday morning)

Bio:
Mario Laurent is a researcher in linguistics at Toulouse 1 Capitole University in France. He received his PhD in 2017 from Université Blaise Pascal, Clermont-Ferrand, where he carried out multidisciplinary research on dyslexia. In particular, he studied the cognitive origins of dyslexia, the psycho-linguistic understanding of learning disorders and the educational problems of dyslexic children. He used his knowledge of natural language processing to program a text comprehension aid. Mario then joined two successive European projects as a post-doctoral fellow, working on the theme of racist hate speech on social networks. His latest project, from 2023 to 2025, funded by the Institut Cybersécurité Occitanie, aimed to better understand how discursive strategies of concealment and manipulation are used by radical groups on social networks to disseminate racist or conspiracy-oriented discourse to other users.

Abstract:
In this presentation, we will take a step aside to focus on the social science and humanities aspects of the analysis of online malicious behavior. We will look at the problem of hate speech on social networks, focusing on two aspects. First, the use of automatic generation tools (AI tools), which make it increasingly easy to create content designed to reach and destabilize a large audience. Second, the challenges and difficulties of moderating such speech on social media, especially with regard to increasingly sophisticated concealment strategies that are embedded in the cultural codes of these platforms. We will finally ask whether, given our observations, it is possible to create effective automatic moderation tools.
