Personalized Elastic Embedding Learning for On-Device Recommendation

To address privacy concerns and reduce network latency, there is a recent trend of compressing cumbersome recommendation models trained on the cloud and deploying compact recommender models on resource-limited devices for real-time recommendation. Existing solutions generally overlook device heterogeneity and user heterogeneity: they require devices with the same budget to share the same model and assume the available device resources (e.g., memory) are constant, neither of which is reflective of reality. Considering device and user heterogeneity as well as dynamic resource constraints, this paper proposes a Personalized Elastic Embedding Learning framework (PEEL) for on-device recommendation. PEEL generates Personalized Elastic Embeddings (PEEs) for devices with various memory budgets in a once-for-all manner, adapts to new or dynamic budgets, and addresses user preference diversity by assigning personalized embeddings to different groups of users. Specifically, it pretrains a global embedding table on collected user-item interaction instances and clusters users into groups. It then refines the embedding tables with local interaction instances within each group. PEEs are generated from the group-wise embedding blocks and their weights, which indicate the contribution of each embedding block to the local recommendation performance. Given a memory budget, PEEL efficiently generates a PEE by selecting the embedding blocks with the largest weights, making it adaptable to dynamic memory budgets on devices. Furthermore, a diversity-driven regularizer encourages the expressiveness of embedding blocks, and a controller optimizes the weights. Extensive experiments on two public datasets show that PEEL yields superior performance on devices with heterogeneous and dynamic memory budgets.
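The budget-constrained selection step described above can be illustrated with a minimal sketch: given learned block weights and per-block memory costs, greedily keep the highest-weight embedding blocks that fit the device budget. The function name select_blocks, the uniform 4 KB block size, and the specific weight values are hypothetical illustrations, not part of the paper.

```python
import numpy as np

def select_blocks(weights, block_bytes, budget_bytes):
    """Sketch of budget-constrained elastic-embedding selection.

    weights      -- learned importance of each embedding block
                    (higher means it contributes more to local
                    recommendation performance)
    block_bytes  -- memory cost of each block, in bytes
    budget_bytes -- the device's current memory budget

    Returns the indices of the selected blocks.
    """
    order = np.argsort(weights)[::-1]  # blocks by descending weight
    chosen, used = [], 0
    for i in order:
        if used + block_bytes[i] <= budget_bytes:
            chosen.append(i)
            used += block_bytes[i]
    return chosen

# Hypothetical example: 6 blocks of 4 KB each, 16 KB device budget.
weights = np.array([0.9, 0.1, 0.7, 0.3, 0.8, 0.2])
blocks = np.full(6, 4096)
print(select_blocks(weights, blocks, 16384))  # -> [0, 4, 2, 3]
```

Because selection only ranks precomputed weights, a new or shrinking budget can be served by rerunning this cheap step on-device, without retraining, which is what makes the once-for-all generation adaptable to dynamic memory constraints.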
