no code implementations • 25 Mar 2024 • Eli Chien, Haoyu Wang, Ziang Chen, Pan Li
Our approach achieves a similar utility under the same privacy constraint while using $2\%$ and $10\%$ of the gradient computations compared with the state-of-the-art gradient-based approximate unlearning methods for mini-batch and full-batch settings, respectively.
1 code implementation • 23 Feb 2024 • Jin Yao, Eli Chien, Minxin Du, Xinyao Niu, Tianhao Wang, Zezhou Cheng, Xiang Yue
This study investigates the concept of the 'right to be forgotten' within the context of large language models (LLMs).
no code implementations • 18 Jan 2024 • Eli Chien, Haoyu Wang, Ziang Chen, Pan Li
We propose Langevin unlearning, an unlearning framework based on noisy gradient descent with privacy guarantees for approximate unlearning problems.
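The core mechanism, as the abstract describes it, is noisy gradient descent: Gaussian noise added at every step is what makes the final parameters insensitive to any single training point, so unlearning can proceed by continuing training on the remaining data instead of retraining from scratch. A minimal NumPy sketch of that idea follows; the logistic-regression objective, hyperparameters, and function names are illustrative, not the paper's implementation.

```python
import numpy as np

def noisy_gd(X, y, steps=100, lr=0.1, sigma=0.05, lam=0.01, w=None, seed=0):
    """Noisy gradient descent on a ridge-regularized logistic loss.
    The per-step Gaussian noise is what underlies the privacy-style
    unlearning guarantee in the Langevin view (sketch only)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d) if w is None else w.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        grad = X.T @ (p - y) / n + lam * w      # logistic gradient + ridge term
        w -= lr * (grad + sigma * rng.standard_normal(d))
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
y = (X[:, 0] > 0).astype(float)
w_full = noisy_gd(X, y)
# "Unlearn" the first point: drop it and run a few more noisy steps
# starting from w_full, instead of retraining from scratch.
w_unlearned = noisy_gd(X[1:], y[1:], steps=10, w=w_full, seed=2)
```

The short continuation run is where the claimed savings in gradient computations come from: a handful of extra steps versus a full retraining budget.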
1 code implementation • 28 Oct 2023 • Zheyuan Liu, Guangyao Dou, Yijun Tian, Chunhui Zhang, Eli Chien, Ziwei Zhu
Exploring the full spectrum of trade-offs between privacy, model utility, and runtime efficiency is critical for practical unlearning scenarios.
no code implementations • 24 Oct 2023 • Rongzhe Wei, Eleonora Kreačić, Haoyu Wang, Haoteng Yin, Eli Chien, Vamsi K. Potluru, Pan Li
Privacy concerns have led to a surge in the creation of synthetic datasets, with diffusion models emerging as a promising avenue.
2 code implementations • 14 Aug 2023 • Saurav Prakash, Jin Sima, Chao Pan, Eli Chien, Olgica Milenkovic
Third, we compute the complexity of the convex hulls in hyperbolic spaces to assess the extent of data leakage; at the same time, to limit the communication cost of transmitting the hulls, we propose a new quantization method for the Poincaré disc coupled with Reed-Solomon-like encoding.
no code implementations • 31 May 2023 • Che-Ping Tsai, Jiong Zhang, Eli Chien, Hsiang-Fu Yu, Cho-Jui Hsieh, Pradeep Ravikumar
We introduce a novel class of sample-based explanations, termed high-dimensional representers, that explain the predictions of a regularized high-dimensional model in terms of importance weights for each training sample.
1 code implementation • 21 May 2023 • Eli Chien, Jiong Zhang, Cho-Jui Hsieh, Jyun-Yu Jiang, Wei-Cheng Chang, Olgica Milenkovic, Hsiang-Fu Yu
Unlike most existing XMC frameworks that treat labels and input instances as featureless indicators and independent entries, PINA extracts information from the label metadata and the correlations among training instances.
1 code implementation • 6 Nov 2022 • Chao Pan, Eli Chien, Olgica Milenkovic
As the demand for user privacy grows, controlled data removal (machine unlearning) is becoming an important feature of machine learning models for data-sensitive Web applications such as social networks and recommender systems.
1 code implementation • 18 Jun 2022 • Eli Chien, Chao Pan, Olgica Milenkovic
For example, when unlearning $20\%$ of the nodes on the Cora dataset, our approach suffers only a $0.1\%$ loss in test accuracy while offering a $4$-fold speed-up compared to complete retraining.
1 code implementation • 19 May 2022 • Eli Chien, Puoya Tabaghi, Olgica Milenkovic
Furthermore, it is currently not known how to choose the most suitable approximation objective for noisy fitting.
1 code implementation • 7 Mar 2022 • Chao Pan, Eli Chien, Puoya Tabaghi, Jianhao Peng, Olgica Milenkovic
The excellent performance of the Poincaré second-order and strategic perceptrons shows that the proposed framework can be extended to general machine learning problems in hyperbolic spaces.
4 code implementations • ICLR 2022 • Eli Chien, Wei-Cheng Chang, Cho-Jui Hsieh, Hsiang-Fu Yu, Jiong Zhang, Olgica Milenkovic, Inderjit S Dhillon
We also provide a theoretical analysis that justifies the use of XMC over link prediction and motivates integrating XR-Transformers, a powerful method for solving XMC problems, into the GIANT framework.
Ranked #2 on Node Property Prediction on ogbn-papers100M
1 code implementation • 8 Sep 2021 • Eli Chien, Chao Pan, Puoya Tabaghi, Olgica Milenkovic
For hierarchical data, the space of choice is a hyperbolic space since it guarantees low-distortion embeddings for tree-like structures.
1 code implementation • ICLR 2022 • Eli Chien, Chao Pan, Jianhao Peng, Olgica Milenkovic
We propose AllSet, a new hypergraph neural network paradigm that provides a highly general framework for (hyper)graph neural networks and, for the first time, implements hypergraph neural network layers as compositions of two multiset functions that can be efficiently learned for each task and dataset.
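The "two multiset functions" in the AllSet description are a vertex-to-hyperedge aggregation followed by a hyperedge-to-vertex aggregation, each a learnable permutation-invariant map. Below is a minimal NumPy sketch of one such layer, realizing each multiset function DeepSets-style as an MLP over a sum; the weight shapes and names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def mlp(x, W1, W2):
    """Tiny two-layer ReLU MLP used as the learnable map."""
    return np.maximum(x @ W1, 0) @ W2

def allset_layer(X, hyperedges, W):
    """One AllSet-style layer (sketch): two multiset functions.
    X: (n, d) node features; hyperedges: list of sets of node indices."""
    # Multiset function 1, vertex -> hyperedge: pool member-node features.
    E = np.array([mlp(X[list(nodes)].sum(0), W['phi_v'], W['rho_v'])
                  for nodes in hyperedges])
    # Multiset function 2, hyperedge -> vertex: pool incident hyperedges.
    out = []
    for v in range(X.shape[0]):
        inc = [E[e] for e, nodes in enumerate(hyperedges) if v in nodes]
        agg = np.sum(inc, axis=0) if inc else np.zeros(E.shape[1])
        out.append(mlp(agg, W['phi_e'], W['rho_e']))
    return np.array(out)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
hyperedges = [{0, 1, 2}, {2, 3}]
shapes = {'phi_v': (8, 16), 'rho_v': (16, 8),
          'phi_e': (8, 16), 'rho_e': (16, 8)}
W = {k: 0.1 * rng.standard_normal(s) for k, s in shapes.items()}
H = allset_layer(X, hyperedges, W)  # updated node features, shape (4, 8)
```

Because each pooling step is a sum over an unordered set, the layer is invariant to how nodes within a hyperedge are enumerated, which is the property the multiset formulation buys.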
1 code implementation • 19 Feb 2021 • Puoya Tabaghi, Chao Pan, Eli Chien, Jianhao Peng, Olgica Milenkovic
The results show that classification in low-dimensional product space forms for scRNA-seq data offers, on average, a performance improvement of $\sim15\%$ when compared to that in Euclidean spaces of the same dimension.
1 code implementation • ICLR 2021 • Eli Chien, Jianhao Peng, Pan Li, Olgica Milenkovic
We address these issues by introducing a new Generalized PageRank (GPR) GNN architecture that adaptively learns the GPR weights so as to jointly optimize node feature and topological information extraction, regardless of the extent to which the node labels are homophilic or heterophilic.
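The GPR mechanism described above amounts to a learned, weighted combination of multi-hop propagations of the hidden features. A minimal NumPy sketch of that propagation step follows, with fixed weights for illustration (in GPR-GNN the $\gamma_k$ are learned and may be negative, which is what lets it adapt to heterophilic labels); the normalization and names are standard assumptions, not lifted from the paper's code.

```python
import numpy as np

def gpr_propagate(A, H, gammas):
    """Generalized PageRank propagation (sketch): Z = sum_k gamma_k * A_hat^k H,
    where A_hat is the symmetrically normalized adjacency with self-loops."""
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(1)
    A_hat = A_hat / np.sqrt(np.outer(deg, deg))  # D^{-1/2} (A+I) D^{-1/2}
    Z, P = gammas[0] * H, H
    for g in gammas[1:]:
        P = A_hat @ P           # one more propagation hop
        Z = Z + g * P           # accumulate with its GPR weight
    return Z

# Toy 3-node path graph with identity features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3)
Z = gpr_propagate(A, H, [0.5, 0.3, 0.2])
```

With `gammas = [1.0]` the layer reduces to the raw features, so the hop depth and weighting are entirely controlled by the GPR coefficients.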
no code implementations • 14 Jun 2020 • Eli Chien, Olgica Milenkovic, Angelia Nedich
Here we introduce the first known approach to support estimation in the presence of sampling artifacts and errors, where each sample is assumed to arise from a Poisson repeat channel that simultaneously captures repetitions and deletions of samples.
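The Poisson repeat channel itself is simple to state: each input sample is emitted a Poisson-distributed number of times, so a draw of zero deletes it and a draw above one duplicates it. A small simulation sketch (illustrative only, not from the paper):

```python
import numpy as np

def poisson_repeat_channel(samples, lam, seed=0):
    """Pass samples through a Poisson repeat channel (sketch):
    each sample is emitted Poisson(lam) times, so 0 draws delete it
    and draws > 1 duplicate it."""
    rng = np.random.default_rng(seed)
    out = []
    for s in samples:
        out.extend([s] * rng.poisson(lam))
    return out

observed = poisson_repeat_channel(list("abcde"), lam=1.5)
```

Support estimation then asks how many distinct input symbols there were, given only the duplicated-and-thinned output.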
no code implementations • 15 Nov 2019 • Eli Chien, Antonia Maria Tulino, Jaime Llorca
Galhotra et al. recently proposed a motif-counting algorithm for unsupervised community detection in the geometric block model that is provably near-optimal.
no code implementations • 8 Nov 2019 • Anuththari Gamage, Eli Chien, Jianhao Peng, Olgica Milenkovic
Generative models are successful at retaining pairwise associations in the underlying networks but often fail to capture higher-order connectivity patterns known as network motifs.
no code implementations • 20 Oct 2019 • Eli Chien, Pan Li, Olgica Milenkovic
We describe the first known mean-field study of landing probabilities for random walks on hypergraphs.
no code implementations • NeurIPS 2019 • Pan Li, Eli Chien, Olgica Milenkovic
Landing probabilities (LP) of random walks (RW) over graphs encode rich information regarding graph topology.
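Landing probabilities are concrete to compute: starting from a point mass on a seed node, repeatedly apply the random-walk transition matrix and record the distribution after each step. A minimal NumPy sketch (notation and names are illustrative, not the paper's):

```python
import numpy as np

def landing_probabilities(A, seed, K):
    """K-step landing probabilities of a simple random walk (sketch):
    x_k = P^T x_{k-1}, with P the row-stochastic transition matrix
    and x_0 a point mass on the seed node."""
    P = A / A.sum(1, keepdims=True)   # row-normalize adjacency
    x = np.zeros(A.shape[0])
    x[seed] = 1.0
    lps = [x]
    for _ in range(K):
        x = P.T @ x                   # distribution after one more step
        lps.append(x)
    return np.array(lps)              # shape (K+1, n); each row sums to 1

# Toy 3-node path graph 0 - 1 - 2, walk started at node 0.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
lps = landing_probabilities(A, seed=0, K=3)
```

PageRank-style diffusions are weighted sums of exactly these per-step vectors, which is why landing probabilities encode so much topological information.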