Lecturers

The program is still under construction.
Josep Domingo-Ferrer (Monday afternoon)

Bio: Josep Domingo-Ferrer (Fellow, IEEE, and Distinguished Scientist, ACM) received BSc-MSc and PhD degrees in computer science (Autonomous University of Barcelona), a BSc-MSc in mathematics (UNED), and an MA in philosophy (U. Paris Nanterre). He is a distinguished full professor of computer science and an ICREA-Acadèmia research professor at Universitat Rovira i Virgili, Tarragona, Catalonia, where he also leads CYBERCAT (Center for Cybersecurity Research of Catalonia). He is currently also affiliated as an invited professor with LAAS-CNRS, Toulouse, France. His research interests include data privacy, data security, trustworthy machine learning, and ethics in IT. More details: https://crises-deim.urv.cat/jdomingo. Contact him at josep.domingo@urv.cat

Title: Privacy and security in machine learning: attacks and defenses

Abstract: In this talk, I will review privacy and security attacks against conventional machine learning, and I will discuss defenses and the conflict between defending privacy and security in decentralized ML. The usefulness of differential privacy as a privacy defense will be examined. I will also touch on some myths regarding privacy attacks against conventional ML. Some hints on how all this can apply to generative ML will be given as well.

Cédric Lefebvre (Tuesday morning)

Bio: Dr. Cédric Lefebvre is head of AI and cybersecurity at Custocy. He received his PhD in 2021, specializing in cybersecurity and privacy. His primary research focused on practical applications of homomorphic encryption, particularly for genetic data. After completing his PhD, he joined Custocy to work on detecting network anomalies using AI models, identifying the key features for detecting attacks, and enhancing the understanding of detection mechanisms.
Abstract: AI-based network anomaly detection. I will present a brief state of the art of techniques for detecting anomalies in network traffic using AI. I will then show that the datasets used in academic work are seriously flawed, and that state-of-the-art techniques consequently fail in real-world conditions. We will discuss how to solve this and share some ideas based on concrete cases. To conclude, I will present our work at Custocy (a French Network Detection and Response solution).
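As a small, self-contained illustration of one classical AI technique for flagging anomalous flows, here is a sketch using scikit-learn's Isolation Forest over made-up per-flow features; the features, numbers, and model choice are illustrative assumptions, not Custocy's pipeline.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Hypothetical per-flow features: duration (s), bytes sent, packet count.
    # 500 "normal" flows plus two extreme outliers standing in for attacks.
    normal = rng.normal(loc=[1.0, 50_000, 40], scale=[0.3, 8_000, 10], size=(500, 3))
    attacks = np.array([[0.01, 900_000, 3], [12.0, 100, 2_000]])
    flows = np.vstack([normal, attacks])

    # An Isolation Forest isolates anomalies with few random splits and
    # labels them -1; contamination sets the expected outlier fraction.
    model = IsolationForest(contamination=0.01, random_state=0).fit(flows)
    print(model.predict(flows)[-2:])   # expected: [-1 -1], both attacks flagged

The toy example also hints at the abstract's warning: the model only flags what is rare relative to its training data, so detectors tuned on synthetic academic datasets may not transfer to real traffic.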
Grégory Blanc (Tuesday morning)

Bio:

Abstract:

Malika Izabachène (Tuesday afternoon)

Bio: Malika Izabachène's research focuses on improving homomorphic encryption techniques and on their integration into protocols that guarantee user confidentiality. After postdoctoral positions devoted to analyzing the security of cryptographic protocols, such as electronic voting and electronic money systems, she has worked in industry and at CEA. Her doctoral thesis, defended in 2009, deals with the analysis of anonymous authentication protocols.

Abstract: Fully Homomorphic Encryption (FHE) is an advanced encryption technique that allows computation over encrypted data without requiring decryption. Data confidentiality is thus guaranteed even when the data is processed in encrypted form by a third party, such as an online service.
This lecture gives an introduction to homomorphic encryption techniques and illustrates their application to artificial intelligence (AI). We will show how these methods enable statistical processing or medical analysis over sensitive data while keeping that data confidential.
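As a first taste of computing on ciphertexts, here is a minimal sketch of the Paillier cryptosystem, which is only additively homomorphic (full FHE schemes, the subject of the lecture, are considerably more involved); the tiny hard-coded primes are an illustrative assumption, as real keys use moduli of thousands of bits.

    from math import gcd
    import random

    # Toy Paillier key generation with insecurely small primes (demo only).
    p, q = 10007, 10009
    n = p * q
    n2 = n * n
    g = n + 1
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

    def L(u):
        return (u - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

    def encrypt(m):
        r = random.randrange(1, n)        # fresh randomness per ciphertext
        while gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (L(pow(c, lam, n2)) * mu) % n

    # The homomorphic property: multiplying two ciphertexts adds the
    # underlying plaintexts, without ever decrypting them.
    c1, c2 = encrypt(20), encrypt(22)
    assert decrypt((c1 * c2) % n2) == 42

A third party holding only c1 and c2 can thus compute an encryption of their sum; the schemes presented in the lecture extend this idea to much richer computations.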
Gilles Tredan and Philippe Leleux (Wednesday morning)

Gilles Tredan Bio: Since November 2011, I have worked as a full-time researcher in the TSF (now TRUST) group. I obtained a PhD degree from the University of Rennes 1 in November 2009. From January 2010 to September 2011, I worked as a postdoc in the FG Inet group in Berlin. I defended my Habilitation à Diriger des Recherches in June 2019.

Philippe Leleux Bio: Since September 2023, Philippe Leleux has been an associate professor in the TRUST team at LAAS-CNRS, a research laboratory specialized in systems analysis and architecture. He began his career as a research engineer in bioinformatics at INRAE from 2014 to 2016. He then joined CERFACS, where he worked in the field of high-performance computing and completed his PhD in 2021, jointly awarded by INP Toulouse and FAU Erlangen-Nürnberg. His doctoral work earned him the Léopold Escande prize. In 2022, he worked as a postdoctoral researcher at the French National Institute of Health and Medical Research (INSERM). His research currently focuses on 1) the application of artificial intelligence for safety, mainly intrusion detection in cybersecurity as well as decision-support tools for pilots; and 2) the safety of artificial intelligence methods, mainly to improve the "trust" in models applied for diagnostics in medical applications.

Title: Adversarial examples: 10 years of worst case

Abstract: In machine learning, adversarial examples are inputs designed by an adversary to fool a target classifier. More precisely, these examples are crafted by adding imperceptible noise to originally well-classified inputs in order to change the resulting classification. Introduced a decade ago (circa 2014), they generated a wide spectrum of research that ranges from very practical questions (can I fool an autonomous vehicle?) to more fundamental ones (are there classifiers that resist adversarial inputs, and at what cost?). This talk is a humble introduction to these various topics by a non-expert.
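To make the "imperceptible noise" concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the earliest attacks in this literature, applied to a hypothetical logistic-regression classifier; the weights, bias, and input point are made-up assumptions for illustration.

    import numpy as np

    # Hypothetical "trained" logistic-regression classifier.
    w = np.array([2.0, -3.0])   # made-up weights
    b = 0.5                     # made-up bias

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def predict(x):
        return sigmoid(w @ x + b)   # P(class = 1 | x)

    x = np.array([1.0, 0.4])    # correctly classified input
    y = 1.0                     # its true label
    print(predict(x))           # ~0.79 -> class 1

    # For logistic regression, the gradient of the cross-entropy loss
    # with respect to the INPUT is (p - y) * w.
    grad_x = (predict(x) - y) * w

    # FGSM: take one step of size eps in the sign of that gradient.
    # (On images eps is tiny, hence "imperceptible"; it is exaggerated
    # here so the flip shows up on a two-feature toy example.)
    eps = 0.35
    x_adv = x + eps * np.sign(grad_x)
    print(predict(x_adv))       # ~0.39 -> classification flipped

Defenses studied in this research area, such as adversarial training, essentially try to make this one-step perturbation fail, typically at some cost in accuracy on clean inputs.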
Gabriel Zaid (Thursday morning)

Bio: Gabriel is a cryptography and machine learning engineer at Thales ITSEF (Information Technology Security Evaluation Facility) in Toulouse. He assesses the security of physical devices and systems embedding cryptographic primitives. His research covers several practical aspects of cryptography and its application to embedded systems. He particularly cares about side-channel attacks and the use of machine learning, not only to carry them out successfully, but also to gain a better understanding of the physical security of embedded AI.

Abstract: This introductory lecture aims at presenting the background necessary to understand side-channel attacks. We will present the different building blocks of a physical system and explain how an adversary may leverage its properties in order to recover sensitive information processed by the system under attack. We will then explain the stakes for an adversary, and the different steps to implement in order to successfully extract secret data. A practical demonstration will illustrate the key aspects mentioned during this lecture.

Ayşe Ünsal (Thursday afternoon)

Bio:

Abstract:

Sonia Ben Mokhtar (Thursday afternoon)

Bio: Sonia Ben Mokhtar is a CNRS research director at the LIRIS laboratory, Lyon, France, and the head of the distributed systems and information retrieval group (DRIM). She received her PhD in 2007 from Université Pierre et Marie Curie before spending two years at University College London (UK). Her research focuses on the design of resilient and privacy-preserving distributed systems. Sonia has co-authored 80+ papers in peer-reviewed conferences and journals, has served on the editorial board of IEEE Transactions on Dependable and Secure Computing, and has co-chaired major conferences in the field of distributed systems (e.g., ACM Middleware, IEEE DSN). Sonia has served as chair of ACM SIGOPS France and as co-chair of GDR RSD, a national academic network of researchers in distributed systems and networks.

César Sabater is a postdoctoral researcher in the DRIM team at INSA Lyon. His research focuses on privacy-preserving and secure algorithms for processing sensitive data, with an emphasis on differential privacy, robustness against active adversaries, and efficient decentralized computation. He is particularly interested in the empirical assessment of privacy guarantees and resilience to inference and poisoning attacks in decentralized machine learning. He completed his PhD at INRIA Lille, where he investigated privacy-accuracy-communication trade-offs using differential privacy, secure multi-party computation, and zero-knowledge proofs.

Abstract: During this presentation, Sonia and César will give a generic introduction to differential privacy and its applicability to various applications. A second part of the presentation will focus on enforcing differential privacy for decentralized private computations in untrusted environments. Indeed, ensuring accurate and private computation in decentralized settings is challenging when no central party is trusted, communication must remain efficient, and adversaries may collude or deviate from the protocol. Existing approaches often suffer from high communication overhead or degrade in accuracy when participants drop out or behave maliciously.
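Before the decentralized setting, the central notion can be illustrated in a few lines with the Laplace mechanism, the textbook way to release a numeric query result under epsilon-differential privacy; the query and numbers below are made up for illustration.

    import numpy as np

    rng = np.random.default_rng()

    def laplace_mechanism(true_value, sensitivity, epsilon):
        # 'sensitivity' is the most one individual can change the query
        # result (1 for a counting query); smaller epsilon means more
        # noise and stronger privacy.
        return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Example: privately release a count of 1000 people with epsilon = 0.5.
    print(laplace_mechanism(1000, sensitivity=1, epsilon=0.5))

In the decentralized protocols discussed in the second part, no single party is trusted to hold the exact value and add this noise, which is precisely what makes the problem hard.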
Nathalie Aussenac-Gilles (Friday morning)

Bio: Nathalie Aussenac-Gilles has been a CNRS research fellow at IRIT, the computer science laboratory in Toulouse, since 1991 and a research director since 2010. She chaired several national research groups associated with the French AI Association and the National Research Group in AI. Within IRIT, she co-founded the MELODI team in 2011, coordinated activities in big data, AI and data processing, and co-chaired the AI department from 2012 to 2025. Her research focuses on knowledge engineering and semantic web technologies, and on methods and models for building terminologies, ontologies and knowledge graphs, especially from text and data. She has proposed algorithms for extracting semantic relations from texts, annotating texts with concepts, and integrating heterogeneous data based on ontologies, as well as several FAIR metadata models for open data. She has participated in more than 20 national and European collaborative projects, including the Starlight project dedicated to ethical AI solutions for European legal agencies, where she contributed to the use of natural language processing and knowledge graphs for authorship identification in social networks.

Title: Natural Language Processing and Semantics for Cybersecurity: challenges and approaches to deal with social network data

Abstract: In this talk, I will first review some of the challenges raised by cybersecurity that require natural language processing or document processing. In the second part of the talk, I will go into more detail about the case of data and text coming from social networks. I will present state-of-the-art techniques that deal with some of the main tasks related to this kind of data: authorship identification, fake news recognition, personal network identification, etc. I will also discuss the difficulty of handling such data in keeping with ethics and with current regulations on personal data and AI.

Mario Laurent (Friday morning)

Bio:

Abstract: