Lecturers

Philippe Leleux (Monday afternoon)
Bio: Since September 2023, Philippe Leleux has been an associate professor in the TRUST team at LAAS-CNRS, a research laboratory specialized in systems analysis and architecture. He began his career as a research engineer in bioinformatics at INRAE from 2014 to 2016. He then joined CERFACS, where he worked in the field of high-performance computing and completed his PhD in 2021, jointly awarded by INP Toulouse and FAU Erlangen-Nürnberg. His doctoral work earned him the Léopold Escande prize. In 2022, he worked as a postdoctoral researcher at the French National Institute of Health and Medical Research (INSERM). His current research focuses on 1) the application of artificial intelligence to safety, mainly intrusion detection in cybersecurity as well as decision-support tools for pilots, and 2) the safety of artificial intelligence methods, mainly improving the "trust" of models applied to diagnostics in medical applications.
Title: Introduction to Artificial Intelligence [Slides]

Josep Domingo-Ferrer (Monday afternoon)
Bio: Josep Domingo-Ferrer (IEEE Fellow and ACM Distinguished Scientist) received BSc-MSc and PhD degrees in computer science (Autonomous University of Barcelona), a BSc-MSc in mathematics (UNED), and an MA in philosophy (U. Paris Nanterre). He is a distinguished full professor of computer science and an ICREA-Acadèmia research professor at Universitat Rovira i Virgili, Tarragona, Catalonia, where he also leads CYBERCAT (Center for Cybersecurity Research of Catalonia). He is currently also affiliated as an invited professor with LAAS-CNRS, Toulouse, France. His research interests include data privacy, data security, trustworthy machine learning, and ethics in IT. More details: https://crises-deim.urv.cat/jdomingo. Contact him at josep.domingo@urv.cat.
Title: Privacy and security in machine learning: attacks and defenses [Slides]
Abstract: In this talk, I will review privacy and security attacks against conventional machine learning, and I will discuss defenses and the conflict between defending privacy and security in decentralized ML. The usefulness of differential privacy as a privacy defense will be examined. I will also touch on some myths regarding privacy attacks against conventional ML. Some hints on how all this can apply to generative ML will be given as well.
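To make the notion of a privacy attack against a trained model concrete, here is a minimal loss-threshold membership inference sketch (not part of the lecture material): the attacker guesses that samples on which the target model has low loss were part of its training set. The dataset, model, and attack score are illustrative assumptions.

```python
# Minimal loss-threshold membership inference sketch (illustrative only).
# Assumption: the attacker can query per-sample predicted probabilities.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# A model that memorizes its training data leaks more membership information.
target = RandomForestClassifier(n_estimators=200, random_state=0)
target.fit(X_train, y_train)

def per_sample_loss(model, X, y):
    """Per-sample cross-entropy of the true label under the target model."""
    proba = np.clip(model.predict_proba(X), 1e-12, 1.0)
    return -np.log(proba[np.arange(len(y)), y])

loss_members = per_sample_loss(target, X_train, y_train)    # true members
loss_nonmembers = per_sample_loss(target, X_test, y_test)   # non-members

# Attack score: lower loss => more likely to be a training member.
scores = np.concatenate([-loss_members, -loss_nonmembers])
labels = np.concatenate([np.ones_like(loss_members), np.zeros_like(loss_nonmembers)])
print("Membership inference AUC:", roc_auc_score(labels, scores))
```

An AUC noticeably above 0.5 means membership can be guessed better than chance; defenses such as differential privacy aim to push this back towards 0.5.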
Cédric Lefebvre and Grégory Blanc (Tuesday morning)
Cédric Lefebvre
Bio: Dr. Cédric Lefebvre is head of AI and cybersecurity at Custocy. He received his PhD in 2021, specializing in cybersecurity and privacy. His primary research focused on practical applications of homomorphic encryption, particularly for genetic data. After completing his PhD, he joined Custocy to work on detecting network anomalies using AI models, identifying the key features for detecting attacks, and enhancing the understanding of detection mechanisms.
Grégory Blanc
Bio: Grégory received a PhD in computer security from the Nara Institute of Science and Technology (Japan) in 2012, in the field of analyzing malicious scripts in web browsers. He then joined the SAMOVAR laboratory (Télécom SudParis) as a postdoctoral researcher and contributed to setting up and steering European collaborative projects such as NECOMA (a European-Japanese project). Since 2015, he has been an associate professor in security and networks at Télécom SudParis, where he coordinates the specialization in systems and network security. He currently coordinates the ANR GRIFIN project, in which he explores the contributions of machine learning to making the security loop autonomic in order to improve the resilience of future networks: monitoring, detection, countermeasure selection, and deployment of security policies.
Title: ML-based Network Intrusion Detection: what can be done in practice? [Slides]
Abstract: In this lecture, after introducing what a network intrusion is, the object we want to detect, and some preliminaries on neural networks, we briefly review some state-of-the-art techniques to detect anomalies, which is by far the most practical and attractive case for ML-based network intrusion detection. This leads us to establish that the performance of ML-based detectors depends on the (quality of the) data, which is yet to be properly evaluated. From there, we discuss a number of approaches, both theoretical and practical, to solve these issues and make ML-based network intrusion detection more reliable. Then, we inspect shortcomings of models in the face of adversarial attacks and how to test them. The last part will feature practical exercises and a presentation of Custocy, a French Network Detection and Response solution.
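As a concrete illustration of anomaly-based detection on network data (not the lecture's own material), the sketch below trains an Isolation Forest on synthetic flow features. The feature set and distributions are made-up assumptions; real NIDS pipelines rely on much richer, carefully curated datasets, which is precisely the data-quality issue the lecture discusses.

```python
# Toy anomaly-based network intrusion detection sketch (illustrative only).
# Assumption: each flow is summarized by three made-up features:
# duration (s), bytes sent, and number of distinct destination ports.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "benign" flows: short, small, few destination ports.
benign = np.column_stack([
    rng.exponential(1.0, 1000),        # duration
    rng.exponential(2000.0, 1000),     # bytes
    rng.poisson(2, 1000),              # distinct destination ports
])

# Synthetic "scan-like" flows: many destination ports, little data.
scan = np.column_stack([
    rng.exponential(0.5, 20),
    rng.exponential(200.0, 20),
    rng.poisson(150, 20),
])

# Fit only on (assumed clean) benign traffic, then score new flows.
detector = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
detector.fit(benign)

test = np.vstack([benign[:50], scan])
pred = detector.predict(test)          # +1 = normal, -1 = anomaly
print("Flagged as anomalous:", int((pred == -1).sum()), "of", len(test))
```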
Malika Izabachène (Tuesday afternoon)
Bio: Malika Izabachène conducts research on improving homomorphic encryption techniques and integrating them into protocols that guarantee user confidentiality. After postdocs dedicated to analyzing the security of cryptographic protocols, such as electronic voting and electronic money systems, she has worked in industry and at CEA. Her doctoral thesis, defended in 2009, dealt with the analysis of anonymous authentication protocols.
Abstract: Fully Homomorphic Encryption (FHE) is an advanced encryption technique that allows computation over encrypted data without requiring decryption. Data confidentiality is thus guaranteed even when the data are processed in encrypted form by a third party, such as an online service. This lecture gives an introduction to homomorphic encryption techniques and illustrates their application to artificial intelligence (AI). We will show how these methods allow statistical processing or medical analysis over sensitive data while keeping the data confidential.
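To give a feel for what "computing over encrypted data" means, here is a self-contained toy of the additively homomorphic Paillier scheme (not FHE, and not part of the lecture material): multiplying two ciphertexts, without ever decrypting, yields an encryption of the sum of the plaintexts. The tiny key size is purely illustrative and offers no security.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption (illustrative only).
# WARNING: tiny, insecure parameters chosen so the example runs instantly.
import math
import random

p, q = 1009, 1013                     # toy primes; real keys use ~1024-bit primes
n = p * q
n2 = n * n
g = n + 1                             # standard choice of generator
lam = math.lcm(p - 1, q - 1)          # Carmichael's lambda(n)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse used during decryption

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: E(m1) * E(m2) mod n^2 decrypts to m1 + m2.
c1, c2 = encrypt(123), encrypt(456)
assert decrypt((c1 * c2) % n2) == 123 + 456
print("Sum recovered from ciphertexts only:", decrypt((c1 * c2) % n2))
```

Paillier only supports additions on ciphertexts; fully homomorphic schemes such as those covered in the lecture (e.g., BFV, CKKS, TFHE) also support multiplications, which is what makes them "fully" homomorphic and suitable for AI workloads.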
Gilles Tredan and Philippe Leleux (Wednesday morning)
Gilles Tredan
Bio: Since November 2011, I have worked as a full-time researcher in the TSF (now TRUST) group. I obtained a PhD degree from the University of Rennes 1 in November 2009. From January 2010 to September 2011, I worked as a postdoc in the FG INET group, Berlin. I defended my Habilitation à Diriger des Recherches in June 2019.
Philippe Leleux
Bio: See Monday afternoon above.
Title: Adversarial examples: 10 years of worst case [Slides] [Hands on]
Abstract: In machine learning, adversarial examples are inputs designed by an adversary to fool a target classifier. More precisely, these examples are crafted by adding imperceptible noise to originally well-classified inputs in order to change the resulting classification. Introduced a decade ago (circa 2014), they generated a wide spectrum of research that ranges from very practical questions (can I fool an autonomous vehicle?) to more fundamental ones (are there classifiers that resist adversarial inputs, and at what cost?). This talk is a humble introduction to these various topics by a non-expert.
Hands on: Ever wondered how a few pixels can fool a deep neural network? In this hands-on session, you'll craft adversarial attacks like FGSM and PGD on image models, visualize how tiny perturbations can cause big misclassifications, and measure the fallout on model performance. Then, we'll see if countermeasures are possible, with adversarial training and a look at its cost in robustness vs. accuracy. Come learn how to outsmart (or secure) the machine (see the FGSM sketch after this section).
Closing remarks (to be precisely determined)
Abstract: This session will close the morning session on adversarial examples. We will discuss some limitations of adversarial defenses and practical applications of adversarial examples for model watermarking. Finally, we'll quickly discuss the transposition of adversarial examples from classifiers to the realm of large language models.
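For readers who want a preview of the hands-on exercise, the sketch below implements the Fast Gradient Sign Method (FGSM) in PyTorch against a small, untrained toy model; the model, input, label, and epsilon are placeholder assumptions, not the session's actual notebook.

```python
# Minimal FGSM adversarial example sketch in PyTorch (illustrative only).
# Assumption: a toy untrained CNN and a random "image"; the hands-on session
# uses its own models and datasets.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(                      # stand-in image classifier
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 28 * 28, 10),
)
model.eval()

x = torch.rand(1, 1, 28, 28)                # toy input in [0, 1]
y = torch.tensor([3])                       # assumed true label
epsilon = 0.1                               # L-infinity perturbation budget

# 1) Compute the loss gradient with respect to the input.
x.requires_grad_(True)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# 2) Take one signed gradient step and clip back to the valid pixel range.
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
    print("max pixel change:      ", (x_adv - x).abs().max().item())
```

With an untrained toy model the label flip is not guaranteed, but the perturbation is bounded by epsilon in every pixel; PGD, also covered in the hands-on session, is essentially this signed step iterated several times with projection back into the epsilon ball.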
Gabriel Zaid (Thursday morning)
Bio: Gabriel is a cryptography and machine learning engineer at Thales ITSEF (Information Technology Security Evaluation Facility) in Toulouse. He assesses the security of physical devices and systems embedding cryptographic primitives. His research covers several practical aspects of cryptography and its application to embedded systems. He is particularly interested in side-channel attacks and the use of machine learning, not only to carry them out, but also to gain a better understanding of the physical security of embedded AI.
Title: Side-Channel Attacks [Slides]
Abstract: This introductory lecture aims at presenting the background necessary to understand side-channel attacks. We will present the different building blocks of a physical system and explain how an adversary may leverage its properties in order to recover sensitive information processed by the system under attack. We will then explain the stakes for an adversary and the different steps to implement in order to successfully extract secret data. A practical demonstration will be shown to illustrate the key aspects mentioned during this lecture.
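As a toy illustration of how a side channel leaks secrets (unrelated to the lecture's demonstration, which targets physical devices and power or electromagnetic measurements), the sketch below simulates a timing attack on an early-exit string comparison: the time taken before the first mismatch lets the attacker recover a PIN digit by digit.

```python
# Toy timing side-channel: recovering a PIN from an early-exit comparison.
# Illustrative only; real side-channel attacks use noisy physical traces and
# statistics, as covered in the lecture.
SECRET_PIN = "4721"                       # secret held by the simulated "device"

def insecure_check(guess):
    """Early-exit comparison. Returns (accepted, work), where 'work' stands in
    for measurable execution time: one unit per character compared, plus one
    extra unit on the success branch (modeling, e.g., an unlock action)."""
    work = 0
    for g, s in zip(guess, SECRET_PIN):
        work += 1
        if g != s:
            return False, work
    return True, work + 1

recovered = ""
for position in range(len(SECRET_PIN)):
    best_digit, best_work = None, -1
    for digit in "0123456789":
        # Pad with arbitrary digits so the guess has the right length.
        candidate = recovered + digit + "0" * (len(SECRET_PIN) - position - 1)
        _, work = insecure_check(candidate)
        if work > best_work:              # longer run time => prefix is correct
            best_digit, best_work = digit, work
    recovered += best_digit

print("Recovered PIN:", recovered)        # prints 4721
```

The countermeasure is the same in spirit as for hardware: make the observable cost independent of the secret (constant-time comparison), or add enough noise that the attacker needs impractically many measurements.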
Ayşe Ünsal (Thursday afternoon)
Bio: Dr. Ayşe Ünsal has been a research fellow at the Digital Security Department of EURECOM since September 2020. Her current position is funded by the ANR Frame XG project AIRSEC (Air Interface Security for 6G) and the European project TRUMAN (Trustworthy Human-Centric AI). Previously, she worked as a postdoctoral researcher on coded caching at the Communication Systems Department of EURECOM (September 2017 to February 2019), where she had also received her PhD degree from Télécom ParisTech in November 2014, with a specialization in electronics and communication. Her PhD was funded by an FP7 European project (248993-LOLA) and focused on information-theoretic characterizations of transmission strategies to lower latency in wireless communications and the technical basis behind them. After obtaining her degree, she worked as a postdoctoral researcher and lecturer at Paderborn University, Germany, and at the CITI Lab of INRIA/INSA Lyon.
Title: Differential Privacy as a Defense Mechanism: Concepts and Properties [Slides]
Abstract: Adversarial attacks aim to deceive ML systems by leading them to make wrong decisions, for instance by learning necessary information about a classifier, by directly modifying the model, or by causing inputs to be misclassified. Adversarial ML [1,2] studies these attacks and the defenses created against them. Introducing adversarial examples to ML systems is a specific type of sophisticated and powerful attack, where additional and sometimes specially crafted or modified inputs are provided to the system with the intent of being misclassified by the model as legitimate, as in the case of misclassification attacks [2] and the adversarial classifier reverse engineering learning problem [3]. Another class of adversarial attacks is constructed to infer membership [4-5], where the adversary's goal is to decide whether a given data sample was included in the training dataset of the targeted ML model.
[1] A.D. Joseph, B. Nelson, B.I.P. Rubinstein, and J.D. Tygar, "Adversarial Machine Learning", Cambridge University Press, 2018.

César Sabater (Thursday afternoon)
Bio: César Sabater is a postdoctoral researcher in the DRIM team at INSA Lyon. His research focuses on privacy-preserving and secure algorithms for processing sensitive data, with an emphasis on differential privacy, robustness against active adversaries, and efficient decentralized computation. He is particularly interested in the empirical assessment of privacy guarantees and resilience to inference and poisoning attacks in decentralized machine learning. He completed his PhD at INRIA-Lille, where he investigated privacy-accuracy-communication trade-offs using differential privacy, secure multi-party computation, and zero-knowledge proofs.
Title: Decentralized Algorithms with Differential Privacy [Slides]
Abstract: During this presentation, César will give a generic introduction to differential privacy and its applicability to various applications. A second part of the presentation will focus on enforcing differential privacy for decentralized private computations in untrusted environments. Indeed, ensuring accurate and private computation in decentralized settings is challenging when no central party is trusted, communication must remain efficient, and adversaries may collude or deviate from the protocol. Existing approaches often suffer from high communication overhead or degrade in accuracy when participants drop out or behave maliciously.
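To make the differential privacy discussions concrete, here is a minimal sketch of the standard Laplace mechanism (a textbook construction, not taken from either talk): a counting query has sensitivity 1, so adding Laplace noise of scale 1/epsilon yields epsilon-differential privacy. The dataset, predicate, and epsilon values are illustrative assumptions.

```python
# Minimal Laplace mechanism sketch for an epsilon-DP counting query
# (illustrative only; data and epsilon values are arbitrary).
import numpy as np

rng = np.random.default_rng(0)

ages = rng.integers(18, 90, size=1000)       # toy "sensitive" dataset

def dp_count(predicate, data, epsilon):
    """epsilon-DP count: adding or removing one record changes the count by at
    most 1 (sensitivity 1), so Laplace noise of scale 1/epsilon suffices."""
    true_count = int(np.sum(predicate(data)))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

for eps in (0.1, 1.0, 10.0):
    noisy = dp_count(lambda d: d >= 65, ages, eps)
    print(f"epsilon={eps:>4}: noisy count of people aged 65+ = {noisy:.1f}")

print("true count:", int(np.sum(ages >= 65)))
```

Smaller epsilon means more noise and stronger privacy; the decentralized setting discussed in the talk asks how to obtain comparable guarantees when no central, trusted party adds the noise.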
Nathalie Aussenac-Gilles (Friday morning)
Bio: Nathalie Aussenac-Gilles has been a CNRS research fellow at IRIT, the computer science laboratory in Toulouse, since 1991 and a research director since 2010. She chaired several national research groups associated with the French AI Association and the National Research Group in AI. Within IRIT, she co-founded the MELODI team in 2011, coordinated activities in big data, AI and data processing, and co-chaired the AI department from 2012 to 2025. Her research focuses on knowledge engineering and semantic web technologies, and on methods and models for building terminologies, ontologies and knowledge graphs, especially from text and data. She has proposed algorithms for extracting semantic relations from texts, annotating texts with concepts, and integrating heterogeneous data based on ontologies, as well as several FAIR metadata models for open data. She has participated in more than 20 national and European collaborative projects, including the Starlight project dedicated to ethical AI solutions for European legal agencies, where she contributed to the use of natural language processing and knowledge graphs for authorship identification in social networks.
Title: Natural Language Processing and Semantics for Cybersecurity: challenges and approaches to deal with social network data
Abstract: In this talk, I will first review some of the challenges raised by cybersecurity that require natural language processing or document processing. In the second part of the talk, I will go into more detail about the case of data and text coming from social networks. I will present state-of-the-art techniques that deal with some of the main tasks related to this kind of data: authorship identification, fake news recognition, personal network identification, etc. I will also mention the difficulty of dealing with such data while complying with ethics and current regulations on personal data and AI.

Mario Laurent (Friday morning)
Bio: Mario Laurent is a researcher in linguistics at Toulouse 1 Capitole University in France. He received his PhD in 2017 from Université Blaise Pascal, Clermont-Ferrand, where he carried out multidisciplinary research on dyslexia. In particular, he studied the cognitive origins of dyslexia, the psycholinguistic understanding of learning disorders, and the educational problems of dyslexic children. He used his knowledge of natural language processing to program a text comprehension aid. Mario then joined two successive European projects as a postdoctoral fellow, working on the theme of racist hate speech on social networks. His latest project, from 2023 to 2025, funded by the Institut Cybersécurité Occitanie, aimed to better understand how discursive strategies of concealment and manipulation are used by radical groups on social networks to disseminate racist or conspiracy-oriented discourse to other users.
Abstract: In this presentation, we will take a step aside to focus on the social science and humanities aspects of the analysis of online malicious behavior. We will look at the problem of hate speech on social networks, focusing on two of its facets. First, the use of automatic generation tools (AI tools), which make it increasingly easy to create content to reach and destabilize a large audience. Second, the challenges and difficulties of moderating such speech on social media, especially in view of increasingly sophisticated concealment strategies that are embedded in the cultural codes of these platforms. We will finally ask whether it is possible to create effective automatic moderation tools, given these observations.
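As a small illustration of one of the NLP tasks mentioned above, authorship identification (not material from either talk), the sketch below shows a classic character n-gram baseline with scikit-learn; the tiny hand-written corpus is a placeholder, and realistic studies require far larger, ethically collected datasets and must comply with the regulations on personal data mentioned in the abstracts.

```python
# Character n-gram baseline for authorship identification (illustrative only).
# Assumption: a tiny hand-written corpus of labelled posts; real experiments
# need much more data and careful, ethics-compliant collection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "lol ok see u there, gonna be late tho",
    "u coming or not?? im already here lol",
    "I must say, the committee's decision strikes me as rather hasty.",
    "One would hope the committee reconsiders; the evidence is rather thin.",
    "bring snacks lol, last time u forgot",
    "Rather disappointing, I must say, though hardly surprising.",
]
authors = ["A", "A", "B", "B", "A", "B"]

# Character 2-4-grams capture stylistic habits (abbreviations, punctuation).
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(posts, authors)

unseen = ["im here lol where r u", "I must confess the argument is rather weak."]
print(dict(zip(unseen, clf.predict(unseen))))
```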