The first Machine Unlearning Challenge

Deep learning has recently made remarkable progress in a wide range of applications, from the generation of realistic images to language models that can converse like humans. While this advancement is fascinating, the use of large-scale neural network models requires caution. In line with Google’s AI principles, we strive to develop responsible AI technologies by understanding and mitigating potential risks, such as amplifying unfair biases, and by safeguarding user privacy.

Completely erasing the impact and traceability of data that has been requested for deletion is a challenging task. Besides deleting the data from the database where it is stored, its influence must also be removed from other trained machine learning models, including fine-tuned ones. Furthermore, recent research [1, 2] has shown that in some cases it is possible to estimate with high accuracy whether a machine learning model was trained on a particular example. These Membership Inference Attacks (MIAs) raise privacy concerns, since they imply that even if an individual’s data is erased from the database, it may still be possible to infer whether their data was used to train a model.
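To make the idea behind MIAs concrete, here is a minimal sketch in Python/NumPy. It is an illustrative simplification, not the attacks referenced in [1, 2]: it simply guesses that an example was a training member when the model’s loss on it is unusually low, with the function name and threshold rule chosen only for illustration.

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold=None):
    """Toy membership inference: examples with unusually low loss under the
    target model are guessed to have been part of its training set.

    member_losses / nonmember_losses: 1-D arrays of per-example losses.
    Returns the attack's accuracy at separating members from non-members.
    """
    if threshold is None:
        # Calibrate the threshold on the pooled losses (illustrative choice).
        threshold = np.median(np.concatenate([member_losses, nonmember_losses]))
    # Predict "member" when the loss falls below the threshold.
    correct = (member_losses < threshold).sum() + (nonmember_losses >= threshold).sum()
    return correct / (len(member_losses) + len(nonmember_losses))

# Synthetic example: members tend to have lower loss, so the attack beats chance.
rng = np.random.default_rng(0)
members = rng.exponential(scale=0.5, size=1000)     # lower losses
nonmembers = rng.exponential(scale=1.5, size=1000)  # higher losses
print(f"attack accuracy: {loss_threshold_mia(members, nonmembers):.2f}")
```

An attack accuracy well above 50% on such a toy setup indicates that the model leaks information about which examples it was trained on.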

Considering the above, machine unlearning is an expanding subfield of machine learning that aims to remove the influence of a specific subset of training examples (the "forget set") from a trained model. An ideal unlearning algorithm eliminates the influence of those examples while preserving other desirable properties, such as accuracy on the rest of the training set and generalization to held-out examples. One direct approach is to start from the trained model and adjust it so that the influence of the data to be forgotten is removed, as sketched below.
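As a minimal sketch of this idea, the PyTorch snippet below starts from the trained model and fine-tunes it on the retain set only. This is one simple baseline sometimes used for approximate unlearning, not a complete or recommended solution; the function name, data loaders, and hyperparameters are placeholders.

```python
import copy
import torch

def finetune_unlearn(trained_model, retain_loader, epochs=1, lr=1e-3, device="cpu"):
    """Naive unlearning baseline: copy the trained model and fine-tune it on
    the retain set, hoping the influence of the forget set fades.
    (Illustrative sketch only; hyperparameters are placeholders.)"""
    model = copy.deepcopy(trained_model).to(device)
    model.train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for inputs, targets in retain_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
    return model  # the "unlearned" model
```

Note that such fine-tuning gives no guarantee that the forget set’s influence is actually removed; that is precisely the gap that better unlearning algorithms and evaluations aim to close.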

We are excited to announce our first Machine Unlearning Challenge, organized in partnership with a diverse group of academic and industrial experts. The challenge addresses a practical scenario in which, after training, a specific subset of the training images must be forgotten to protect the privacy or rights of the individuals concerned. Submissions will be automatically evaluated in terms of both model utility and forgetting quality. We hope this challenge will advance the state of the art in machine unlearning and foster the development of effective, efficient, and ethical unlearning algorithms.

Applications of machine unlearning

Image source: Google AI.

Machine unlearning has applications beyond protecting user privacy. For example, it can be used to remove incorrect or outdated information from trained models (e.g., due to labeling errors or changes in the environment) or to eliminate harmful, biased, or outlying data.

Machine unlearning is related to other areas of machine learning such as differential privacy, lifelong learning, and fairness. Differential privacy aims to ensure that no particular training example has too large an influence on the trained model. In contrast, the objective of machine unlearning in this challenge is to forget a specific subset of training examples after training, while maintaining the utility of the model.

Lifelong learning research aims to design models that can continuously learn new tasks while retaining previously acquired skills. As work on unlearning progresses, it may also open avenues for improving fairness in models, by removing data that encodes unfair biases or leads to disparate treatment of different demographic or age groups.

Challenges of machine unlearning

The problem of machine unlearning is complex and multifaceted because it involves several conflicting objectives: forgetting the requested data, maintaining the model’s utility (e.g., accuracy on retained and held-out data), and efficiency. Consequently, existing unlearning algorithms make different trade-offs. For example, fully retraining the model without the forget set achieves forgetting while preserving utility, but it is computationally expensive, whereas cheaper approximate methods that adjust the trained model’s weights gain efficiency at some cost to utility or forgetting quality.

Moreover, the evaluation of unlearning algorithms in the literature has so far been highly inconsistent. Some works report the accuracy of the unlearned model on the forget-set examples, others measure the distance from a fully retrained model, and still others use the error rate of Membership Inference Attacks (MIAs) as a metric for forgetting quality.

We believe that this inconsistency in evaluation metrics and the lack of standardized protocols present a substantial barrier to progress in the field: they make it difficult to directly compare different approaches in the literature.

They also make it hard to identify the relative strengths and weaknesses of different approaches, and to pinpoint open challenges and opportunities for developing better algorithms. To address the problem of inconsistent evaluation and to foster the latest advancements in machine unlearning, we have organized the first Machine Unlearning Challenge in collaboration with a broad group of academic and industry researchers.

Announcing the first Machine Unlearning Challenge

We are excited to announce the first Machine Unlearning Challenge as part of the NeurIPS 2023 competition track. The objective of the challenge is twofold. First, by creating a unified and standardized evaluation metric for unlearning, we aim to compare the strengths and weaknesses of different algorithms side by side. Second, by opening the challenge to everyone, we hope to foster the development of new solutions and shed light on open challenges and opportunities.

The competition will be hosted on Kaggle and run from mid-July to September 2023. As part of the challenge, we are announcing the availability of an initial dataset, which provides a foundation for participants to build and evaluate their machine unlearning models.

The competition focuses on a realistic scenario in which an age predictor has been trained on facial images and, after training, a specific subset of the training images must be forgotten to protect the privacy or rights of the individuals concerned. To this end, we will provide participants with a synthetic dataset of faces (shown below) and also use several real-world face datasets for evaluation. Participants will submit code that takes as input the trained predictor together with the forget and retain sets, and outputs the weights of a predictor that behaves as if it had never seen the forget set; a rough sketch of this interface is given below. We will evaluate submissions based on both forgetting quality and model utility. We will also enforce a hard cutoff that rejects unlearning algorithms whose running time exceeds a fixed fraction of the time needed to retrain the model from scratch. A valuable outcome of this competition will be to characterize the trade-offs of different unlearning algorithms.
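The exact submission format is defined on the competition’s Kaggle page rather than spelled out here. Purely as a non-authoritative illustration of the shape such code might take, a submission could resemble the following; the function name `unlearning` and its argument list are assumptions for illustration, not the official API.

```python
import copy

def unlearning(trained_model, retain_loader, forget_loader, val_loader):
    """Illustrative shape of a challenge-style submission: take the trained
    predictor plus the retain/forget splits and return an unlearned model.
    (The signature here is assumed for illustration only.)"""
    unlearned_model = copy.deepcopy(trained_model)
    # ... participant's unlearning logic goes here, e.g. fine-tuning on
    # retain_loader, adjusting weights using forget_loader, or both ...
    return unlearned_model
```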

To assess forgetting quality, we will employ Membership Inference Attack (MIA) tools such as LiRA. MIAs were originally developed in the privacy and security literature to estimate which examples were part of a model’s training set. If unlearning is successful, the unlearned model should contain no trace of the forgotten examples, rendering MIAs ineffective in this case: an attack should not be able to reliably infer that the forget set was part of the original training set. Additionally, we will use statistical tests to compare how different the distribution of unlearned models (produced by a particular unlearning algorithm) is from the distribution of models retrained from scratch without the forget set. For an ideal unlearning algorithm, the two distributions should be indistinguishable.
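As a toy illustration of such a distributional comparison, the sketch below applies a two-sample Kolmogorov-Smirnov test to per-example statistics (e.g., losses or confidences on the forget set) gathered from unlearned and retrained models. The KS test, the function name, and the inputs here are illustrative stand-ins, not the challenge’s actual evaluation metric.

```python
import numpy as np
from scipy.stats import ks_2samp

def compare_to_retrained(unlearned_stats, retrained_stats, alpha=0.05):
    """Toy forgetting check: compare the distribution of a per-example
    statistic (e.g., loss on the forget set) from models produced by the
    unlearning algorithm against models retrained without the forget set.

    Returns the KS statistic, the p-value, and whether the two samples
    look indistinguishable at significance level alpha.
    """
    result = ks_2samp(unlearned_stats, retrained_stats)
    indistinguishable = result.pvalue > alpha  # fail to reject "same distribution"
    return result.statistic, result.pvalue, indistinguishable

# Example usage with synthetic numbers (for illustration only).
rng = np.random.default_rng(1)
unlearned = rng.normal(loc=1.0, scale=0.3, size=500)
retrained = rng.normal(loc=1.0, scale=0.3, size=500)
print(compare_to_retrained(unlearned, retrained))
```

Under this toy criterion, an unlearning algorithm looks good when the test cannot distinguish its models from retrained ones, mirroring the intuition that an ideal unlearned model should be indistinguishable from one that never saw the forget set.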

Conclusion

Machine unlearning is a powerful tool that has the potential to address a number of open problems in machine learning. As research in this area continues, we look forward to seeing new, more effective, efficient, and responsible approaches. We are thrilled to have the opportunity to generate interest in the field through this challenge, and we are eager to share our insights and findings with the community.
