P1 Program

Data Privacy in Machine Learning

Program Description

This program aims to develop algorithms that ensure individual privacy without unduly reducing model utility. As machine learning relies on increasingly large and sensitive datasets, and as regulatory scrutiny of data handling and public concern over data rights grow in the EU, robust privacy guarantees are essential.

The program is organized around two complementary themes. The first theme develops privacy-preserving learning algorithms, including differential privacy and secure multi-party computation. The work focuses on algorithms that provide formal privacy guarantees while maintaining model utility.
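As a concrete illustration of the first theme, a minimal sketch of a differentially private counting query using the Laplace mechanism is shown below. This is a generic textbook construction, not the program's specific method; the function name and parameters are illustrative.

```python
import math
import random

def dp_count(data, predicate, epsilon: float) -> float:
    """Return an epsilon-differentially private count.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so adding Laplace noise with
    scale 1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for x in data if predicate(x))
    # Sample Laplace(0, 1/epsilon) noise via the inverse-CDF transform.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

For example, `dp_count(ages, lambda a: a >= 65, epsilon=1.0)` returns a noisy count of seniors; smaller `epsilon` means stronger privacy but more noise, which is exactly the privacy-utility trade-off the program studies.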

The second theme focuses on data control and removal methods, such as machine unlearning, to allow contributors to withdraw or modify their data without prohibitive computational cost.
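One way to make withdrawal cheap, sketched below under illustrative names, is sharded training in the spirit of SISA-style exact unlearning: data is split into disjoint shards, a sub-model is trained per shard, and deleting a record only requires retraining the shard that contained it. The "training" here is a stand-in (the shard mean); this is not the program's actual method.

```python
class ShardedModel:
    """Toy exact unlearning via sharding.

    Predictions aggregate per-shard sub-models, so forgetting one
    record retrains a single shard instead of the full dataset.
    """

    def __init__(self, data, num_shards: int):
        # Disjoint shards by round-robin assignment.
        self.shards = [list(data[i::num_shards]) for i in range(num_shards)]
        self.models = [self._train(s) for s in self.shards]

    @staticmethod
    def _train(shard):
        # Stand-in "training": the shard mean. Any learner would do.
        return sum(shard) / len(shard) if shard else 0.0

    def predict(self) -> float:
        # Aggregate sub-models by averaging.
        return sum(self.models) / len(self.models)

    def forget(self, value) -> None:
        # Remove the record, then retrain only its shard.
        for i, shard in enumerate(self.shards):
            if value in shard:
                shard.remove(value)
                self.models[i] = self._train(shard)
                return
        raise KeyError(value)
```

The design choice is the usual one: more shards make deletion cheaper but can reduce per-shard model quality, another instance of the cost-utility trade-off described above.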

The vision is to create a cohesive research community bridging theoretical foundations, system implementation, and empirical evaluation in privacy-preserving machine learning.

Program Directors

Program Members

Boel Nelson

University of Copenhagen

Carolin Christin Heinzler

University of Copenhagen

Chris Schwiegelshohn

Aarhus University

Daniele Dell'Aglio

Aalborg University

Hannah Keller

Aarhus University

Johanna Düngler

University of Copenhagen

Lukas Retschmeier

University of Copenhagen

Martin Aumüller

IT University of Copenhagen

Nirupam Gupta

University of Copenhagen

Quentin Emmanuel Hillebrand

University of Tokyo, University of Copenhagen

Sia Susanne Sejer

University of Copenhagen

Teresa Anna Steiner

University of Southern Denmark