1 code implementation • 11 May 2024 • Özlem Tuğfe Demir, Lianet Méndez-Monsanto, Nicola Bastianello, Emma Fitzgerald, Gilles Callebaut
Furthermore, we show that adopting a more distributed cell-free massive MIMO (CF-mMIMO) architecture is necessary to meet the data rate requirements.
no code implementations • 26 Mar 2024 • Nicola Bastianello, Changxin Liu, Karl H. Johansson
In this paper we propose the federated private local training algorithm (Fed-PLT) to overcome two challenges in federated learning: (i) expensive communications and (ii) privacy preservation.
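The communication-saving idea behind local training can be illustrated with a generic federated averaging loop: agents run several local gradient epochs between rounds, so fewer rounds (and hence fewer communications) are needed. This is an illustrative baseline sketch, not the Fed-PLT algorithm itself; the least-squares model, synthetic data, and the `epochs` and `lr` parameters are assumptions.

```python
import random

def local_training(w, data, epochs, lr):
    """Run several local gradient steps on one agent's private data.

    Extra local computation between rounds is the standard way to
    trade communication for computation in federated learning.
    """
    for _ in range(epochs):
        # gradient of (1/2) * sum((a*w - b)^2) over this agent's samples
        g = sum(a * (a * w - b) for a, b in data)
        w -= lr * g
    return w

def federated_round(w_global, agents, epochs=5, lr=0.01):
    """One round: broadcast the global model, train locally, average."""
    local_models = [local_training(w_global, data, epochs, lr)
                    for data in agents]
    return sum(local_models) / len(local_models)

# Synthetic setup (assumed): each agent holds noisy samples of b = 2*a,
# so the agents should jointly recover w close to 2.
random.seed(0)
agents = [[(a, 2 * a + random.gauss(0, 0.1)) for a in (1.0, 2.0, 3.0)]
          for _ in range(4)]

w = 0.0
for _ in range(20):
    w = federated_round(w, agents)
print(round(w, 2))  # should land close to the true slope 2
```

Note that raw local models are still exchanged here; privacy-preserving variants additionally protect these updates, which is the second challenge the paper addresses.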
no code implementations • 1 Sep 2023 • Nicola Bastianello, Diego Deplano, Mauro Franceschelli, Karl H. Johansson
The recent deployment of multi-agent networks has enabled the distributed solution of learning problems, where agents cooperate to train a global model without sharing their local, private data.
no code implementations • 13 Jul 2023 • Nicola Bastianello, Apostolos I. Rikos, Karl H. Johansson
Online distributed learning refers to the process of training learning models on distributed data sources.
1 code implementation • 27 May 2021 • Nicola Bastianello, Andrea Simonetto, Emiliano Dall'Anese
This paper presents a new regularization approach -- termed OpReg-Boost -- to boost the convergence and lessen the asymptotic error of online optimization and learning algorithms.
2 code implementations • 12 Nov 2020 • Nicola Bastianello
Then it discusses the different components of the framework and their use for modeling and solving time-varying optimization problems.
no code implementations • 24 Apr 2020 • Nicola Bastianello, Ruggero Carli, Andrea Simonetto
In this paper, we focus on the solution of online optimization problems that often arise in signal processing and machine learning, in which we have access to streaming sources of data.
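The streaming setting can be sketched with a minimal online gradient loop: at each time step a new cost arrives and the iterate takes one gradient step, tracking a moving optimum rather than converging to a fixed one. This is an illustrative sketch, not the paper's method; the drifting target `b_t` and the step size are assumptions.

```python
import math

def online_gradient_step(w, a, b, lr=0.1):
    """One gradient step on the instantaneous loss
    f_t(w) = (1/2) * (a*w - b)^2 revealed at time t."""
    return w - lr * a * (a * w - b)

# Streaming data (assumed): the target drifts slowly over time, so the
# optimizer tracks a moving optimum with a bounded lag.
w = 0.0
errors = []
for t in range(200):
    b_t = math.sin(0.01 * t)   # slowly time-varying target
    a_t = 1.0
    w = online_gradient_step(w, a_t, b_t)
    errors.append(abs(w - b_t))

# the tracking error settles to a small steady-state value
print(max(errors[-50:]))
```

The residual tracking error scales with how fast the problem drifts relative to the step size, which is the asymptotic error that online optimization methods aim to reduce.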