no code implementations • NAACL (TextGraphs) 2021 • Parishad BehnamGhader, Hossein Zakerinia, Mahdieh Soleymani Baghshah
Pre-trained models like Bidirectional Encoder Representations from Transformers (BERT) have recently made a big leap forward in Natural Language Processing (NLP) tasks.
no code implementations • 6 Feb 2024 • Hossein Zakerinia, Amin Behjati, Christoph H. Lampert
We introduce a new framework for studying meta-learning methods using PAC-Bayesian theory.
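For orientation only, here is the classical single-task PAC-Bayesian bound (Maurer's form, for losses bounded in [0, 1]) that frameworks of this kind generalize to the meta-learning setting; this is standard background, not the paper's own bound:

```latex
% Classical PAC-Bayesian bound (Maurer's tightening of McAllester's bound).
% Background only: the paper's meta-learning bounds are not reproduced here.
% Fix a prior P over hypotheses before seeing the n i.i.d. training
% examples. Then with probability at least 1 - \delta, simultaneously for
% all posteriors Q:
\[
  \mathbb{E}_{h \sim Q}\!\left[ L(h) \right]
  \;\le\;
  \mathbb{E}_{h \sim Q}\!\left[ \hat{L}(h) \right]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\frac{2\sqrt{n}}{\delta}}{2n}}
\]
% L(h) is the expected loss of hypothesis h, \hat{L}(h) its empirical loss
% on the n samples, and KL(Q || P) the Kullback-Leibler divergence.
```

In PAC-Bayesian meta-learning, the prior P is itself learned from previously observed tasks, and a second, environment-level bound controls the cost of learning it.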
1 code implementation • 8 Jun 2023 • Jonathan Scott, Hossein Zakerinia, Christoph H. Lampert
We present PeFLL, a new personalized federated learning algorithm that improves over the state of the art in three respects: 1) it produces more accurate models, especially in the low-data regime, not only for clients present during its training phase but also for clients that emerge later; 2) it reduces on-client computation and client-server communication by providing future clients with ready-to-use personalized models that require no additional fine-tuning or optimization; 3) it comes with theoretical guarantees that establish generalization from the observed clients to future ones.
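As a concrete illustration of the mechanism the abstract describes (a server-side network hands a new client a ready-to-use personalized model), here is a minimal PyTorch sketch of an embedding-network-plus-hypernetwork design; all module names, dimensions, and the linear-classifier target are illustrative assumptions, not PeFLL's actual architecture:

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a server-side embedding network summarizes a
# client's data into a descriptor, and a hypernetwork maps that descriptor
# to the weights of a small personalized model. Sizes and class names are
# assumptions, not PeFLL's actual architecture.

IN_DIM, HID, N_CLASSES, EMB_DIM = 32, 64, 10, 16

class EmbeddingNet(nn.Module):
    """Maps a batch of client examples to a single client descriptor."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(IN_DIM, HID), nn.ReLU(),
                                 nn.Linear(HID, EMB_DIM))
    def forward(self, x):                  # x: (batch, IN_DIM)
        return self.enc(x).mean(dim=0)     # average-pool to one descriptor

class HyperNet(nn.Module):
    """Maps a client descriptor to the parameters of a linear classifier."""
    def __init__(self):
        super().__init__()
        self.out = nn.Linear(EMB_DIM, IN_DIM * N_CLASSES + N_CLASSES)
    def forward(self, z):
        theta = self.out(z)
        W = theta[: IN_DIM * N_CLASSES].view(N_CLASSES, IN_DIM)
        b = theta[IN_DIM * N_CLASSES :]
        return W, b

# A new client supplies a descriptor and receives a ready-to-use
# personalized model -- no client-side fine-tuning loop is needed.
embed, hyper = EmbeddingNet(), HyperNet()
client_data = torch.randn(100, IN_DIM)     # stand-in for one client's data
W, b = hyper(embed(client_data))
logits = client_data @ W.T + b             # personalized predictions
```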
no code implementations • 20 Jun 2022 • Hossein Zakerinia, Shayan Talaei, Giorgi Nadiradze, Dan Alistarh
Federated Learning (FL) enables large-scale distributed training of machine learning models, while still allowing individual nodes to maintain data locally.
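To make the setting concrete, here is a minimal synchronous FedAvg-style round in PyTorch: each node trains on data that never leaves it, and only model parameters are exchanged. This is the generic FL baseline for illustration, not the specific protocol studied in the paper:

```python
import copy
import torch
import torch.nn as nn

# Generic synchronous FedAvg round, shown to illustrate the FL setting:
# each node runs local SGD on its private data, and only model weights
# are communicated. Model and data shapes are toy assumptions.

def local_update(global_model, data, targets, lr=0.1, steps=5):
    model = copy.deepcopy(global_model)    # node starts from global weights
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):                 # local SGD on private data
        opt.zero_grad()
        loss_fn(model(data), targets).backward()
        opt.step()
    return model.state_dict()              # only weights leave the node

def fedavg_round(global_model, client_batches):
    states = [local_update(global_model, x, y) for x, y in client_batches]
    avg = {k: torch.stack([s[k] for s in states]).mean(dim=0)
           for k in states[0]}             # parameter-wise average
    global_model.load_state_dict(avg)
    return global_model

# Toy usage: three nodes, each with private data that never leaves them.
model = nn.Linear(8, 3)
clients = [(torch.randn(20, 8), torch.randint(0, 3, (20,)))
           for _ in range(3)]
model = fedavg_round(model, clients)
```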