
Forgetting Fast in Recommender Systems

Users of a recommender system may want part of their data to be deleted, not only from the data repository but also from the underlying machine learning model, for privacy or utility reasons. Such right-to-be-forgotten requests could be fulfilled by simply retraining the recommendation model from scratch, but that would be too slow and too expensive in practice. In this paper, we investigate fast machine unlearning techniques for recommender systems that can remove the effect of a small amount of training data from the recommendation model without incurring the full cost of retraining. A natural idea for speeding this process up is to fine-tune the current recommendation model on the remaining training data instead of starting from a random initialization. This warm-start strategy indeed works for neural recommendation models trained with standard first-order optimizers such as AdamW. However, we found that even greater acceleration can be achieved by employing second-order (Newton or quasi-Newton) optimization methods instead. To overcome the prohibitively high computational cost of second-order optimizers, we propose AltEraser, a new recommendation unlearning approach that divides the unlearning optimization problem into many small, tractable sub-problems. Extensive experiments on three real-world recommendation datasets show that AltEraser achieves promising results in terms of consistency (forgetting thoroughness), accuracy (recommendation effectiveness), and efficiency (unlearning speed). To our knowledge, this work represents the first attempt at fast approximate machine unlearning for state-of-the-art neural recommendation models.
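The warm-start baseline described in the abstract amounts to ordinary fine-tuning of the already-trained model on the retained data. A minimal PyTorch sketch of that idea follows; `model`, `retained_loader`, and `loss_fn` are hypothetical placeholders for a generic neural recommender and its training setup, not artifacts from the paper.

    import torch

    def warm_start_finetune(model, retained_loader, loss_fn, n_epochs=1, lr=1e-3):
        """Unlearn by fine-tuning the *current* model on the retained data
        (all interactions except those to be forgotten), instead of
        retraining from a random initialization."""
        opt = torch.optim.AdamW(model.parameters(), lr=lr)  # standard 1st-order optimizer
        model.train()
        for _ in range(n_epochs):
            for users, items, labels in retained_loader:
                opt.zero_grad()
                loss = loss_fn(model(users, items), labels)
                loss.backward()
                opt.step()
        return model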
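The abstract does not spell out how AltEraser decomposes the unlearning objective. As a rough illustration of the "many small, tractable sub-problems" idea, the sketch below applies it to a plain matrix-factorization model: warm-started from the current factors and fit only to the retained interactions, each user (or item) vector becomes a small quadratic sub-problem that a single closed-form Newton step solves exactly. The paper itself targets neural recommendation models, so this is an analogy under simplifying assumptions, not the authors' method; all names are hypothetical.

    import numpy as np

    def alternating_unlearn(U, V, R, keep_mask, n_sweeps=2, reg=0.1):
        """U: (n_users, k) and V: (n_items, k) factors of the current model
        (warm start); R: (n_users, n_items) ratings; keep_mask: True for
        interactions that remain after the deletion request."""
        k = U.shape[1]
        for _ in range(n_sweeps):
            # Each user sub-problem is a k x k ridge regression; solving the
            # normal equations is an exact (2nd-order) Newton step.
            for u in range(U.shape[0]):
                idx = np.where(keep_mask[u])[0]
                if idx.size == 0:
                    continue
                Vu = V[idx]
                A = Vu.T @ Vu + reg * np.eye(k)  # Hessian of the sub-problem
                U[u] = np.linalg.solve(A, Vu.T @ R[u, idx])
            # Symmetric sweep over the item sub-problems.
            for i in range(V.shape[0]):
                idx = np.where(keep_mask[:, i])[0]
                if idx.size == 0:
                    continue
                Ui = U[idx]
                A = Ui.T @ Ui + reg * np.eye(k)
                V[i] = np.linalg.solve(A, Ui.T @ R[idx, i])
        return U, V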
