no code implementations • 26 Mar 2024 • Khac-Hoang Ngo, Johan Östman, Giuseppe Durisi, Alexandre Graell i Amat
In this paper, we study the privacy implications of secure aggregation (SecAgg) by treating it as a local differential privacy (LDP) mechanism for each local update.
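As background (this is the textbook definition, not the paper's specific guarantee for SecAgg): a randomized mechanism $M$ satisfies $\varepsilon$-LDP if, for every pair of inputs $x$, $x'$ and every measurable output set $\mathcal{O}$,

$$
% textbook \varepsilon-LDP definition (background only)
\Pr[M(x) \in \mathcal{O}] \le e^{\varepsilon}\,\Pr[M(x') \in \mathcal{O}].
$$

The smaller $\varepsilon$, the less the output reveals about any single input.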
no code implementations • 26 Mar 2024 • Lise Aabel, Sven Jacobsson, Mikael Coldrey, Frida Olofsson, Giuseppe Durisi, Christian Fager
The central unit (CU) is connected to multiple single-antenna remote radio heads (RRHs) via optical fibers, over which a binary radio-frequency (RF) waveform is transmitted.
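A common way to represent an analog RF waveform with a binary sequence is sigma-delta modulation; the snippet does not say which modulator the paper uses, so the following is only a minimal first-order sketch in Python, with all parameters illustrative:

    import numpy as np

    def sigma_delta_1bit(x):
        # First-order sigma-delta modulator: integrate the error between
        # input and previous output, then quantize to +/-1. The low-pass
        # content of the binary output tracks x (assumed to lie in [-1, 1]).
        y = np.empty_like(x)
        acc = 0.0
        prev = 0.0
        for n, xn in enumerate(x):
            acc += xn - prev
            prev = 1.0 if acc >= 0.0 else -1.0
            y[n] = prev
        return y

    # Demo: a heavily oversampled tone survives 1-bit encoding.
    osr = 64                                   # oversampling ratio (illustrative)
    t = np.arange(4096)
    x = 0.5 * np.sin(2 * np.pi * t / (8 * osr))
    y = sigma_delta_1bit(x)
    lp = np.convolve(y, np.ones(osr) / osr, mode="same")  # crude low-pass filter
    print(f"rms error after low-pass filtering: {np.sqrt(np.mean((lp - x) ** 2)):.3f}")

Because the quantization noise is pushed to high frequencies, a low-pass filter (here a crude moving average) recovers the oversampled tone from the $\pm 1$ sequence.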
no code implementations • 8 Sep 2023 • Fredrik Hellström, Giuseppe Durisi, Benjamin Guedj, Maxim Raginsky
Over the past decades, the PAC-Bayesian approach has been established as a flexible framework for analyzing the generalization capabilities of machine-learning algorithms and for designing new ones.
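For reference, a classical bound of this kind (in the McAllester-Maurer form, stated here as background, not as this paper's result) says that, with probability at least $1-\delta$ over the draw of $n$ training samples, simultaneously for all posteriors $Q$,

$$
% classical PAC-Bayesian bound (background)
\mathbb{E}_{h\sim Q}[L(h)] \le \mathbb{E}_{h\sim Q}[\hat{L}(h)] + \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln(2\sqrt{n}/\delta)}{2n}},
$$

where $P$ is a prior fixed before seeing the data, $\hat{L}$ is the empirical risk, and $L$ is the population risk.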
no code implementations • 12 Oct 2022 • Fredrik Hellström, Giuseppe Durisi
Furthermore, using the evaluated conditional mutual information (CMI), we derive a samplewise, average version of Seeger's PAC-Bayesian bound, in which the convex function is the binary KL divergence.
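Here, the binary KL divergence between $p, q \in [0, 1]$ is the standard quantity

$$
% binary KL divergence appearing in Seeger-type bounds
d(p\,\|\,q) = p\ln\frac{p}{q} + (1-p)\ln\frac{1-p}{1-q},
$$

which Seeger-type bounds use to relate the empirical risk to the population risk.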
no code implementations • 12 Oct 2022 • Fredrik Hellström, Giuseppe Durisi
Recent work has established that the conditional mutual information (CMI) framework of Steinke and Zakynthinou (2020) is expressive enough to capture generalization guarantees in terms of algorithmic stability, VC dimension, and related complexity measures for conventional learning (Harutyunyan et al., 2021; Haghifam et al., 2021).
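For reference, in the Steinke-Zakynthinou setup one draws a supersample $\tilde{Z} \in \mathcal{Z}^{n\times 2}$ of $2n$ i.i.d. examples together with uniform selection bits $S \in \{0,1\}^n$ that pick one example from each pair to form the training set $\tilde{Z}_S$; the CMI of a learning algorithm $A$ is then

$$
% CMI of Steinke and Zakynthinou (2020)
\mathrm{CMI}(A) = I\big(A(\tilde{Z}_S);\, S \,\big|\, \tilde{Z}\big),
$$

the information the learned hypothesis leaks about which examples were used for training, given the supersample.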
no code implementations • 23 Jul 2021 • Oscar Castañeda, Seyed Hadi Mirfarshbafan, Shahaboddin Ghajari, Alyosha Molnar, Sven Jacobsson, Giuseppe Durisi, Christoph Studer
All-digital basestation (BS) architectures for millimeter-wave (mmWave) massive multi-user multiple-input multiple-output (MU-MIMO), which equip each radio-frequency chain with dedicated data converters, have advantages in spectral efficiency, flexibility, and baseband-processing simplicity over hybrid analog-digital solutions.
no code implementations • 24 Dec 2020 • Sina Rezaei Aghdam, Sven Jacobsson, Ulf Gustavsson, Giuseppe Durisi, Christoph Studer, Thomas Eriksson
By studying the spatial characteristics of the distortion, we demonstrate that conventional linear precoding techniques steer nonlinear distortions towards the users.
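To see why this happens, here is a minimal numerical sketch (not the paper's simulation setup): a single-user maximum-ratio precoder followed by a memoryless third-order power-amplifier model, with the antenna count and nonlinearity coefficient chosen purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    M, N = 64, 10000      # antennas, symbols (illustrative)
    a3 = 0.1              # third-order nonlinearity coefficient (assumed)

    # i.i.d. Rayleigh channel and unit-power data symbols
    h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

    # Maximum-ratio transmission: the linear precoder points at the user.
    w = h.conj() / np.linalg.norm(h)
    x = np.outer(w, s)                      # M x N transmit signal

    # Memoryless third-order PA model per antenna; d is the distortion alone.
    d = -a3 * np.abs(x) ** 2 * x

    # Distortion power received in the user's direction ...
    g_user = np.mean(np.abs(h @ d) ** 2)
    # ... versus in a random direction of the same norm.
    u = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    u *= np.linalg.norm(h) / np.linalg.norm(u)
    g_rand = np.mean(np.abs(u @ d) ** 2)

    print(f"distortion gain toward user vs. random direction: {g_user / g_rand:.1f}x")

The ratio grows with the number of antennas: because the distortion at each antenna is correlated with the precoded signal, it is beamformed toward the user rather than radiated isotropically.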
Information Theory • Signal Processing
no code implementations • 4 Nov 2020 • Sharu Theresa Jose, Osvaldo Simeone, Giuseppe Durisi
In this paper, we introduce the problem of transfer meta-learning, in which, during meta-testing, tasks are drawn from a target task environment that may differ from the source task environment observed during meta-training.
no code implementations • 22 Oct 2020 • Fredrik Hellström, Giuseppe Durisi
If the conditional information density is bounded uniformly in the size $n$ of the training set, our bounds decay as $1/n$.
no code implementations • 21 Oct 2020 • Arezou Rezazadeh, Sharu Theresa Jose, Giuseppe Durisi, Osvaldo Simeone
Meta-learning optimizes an inductive bias, typically in the form of the hyperparameters of a base-learning algorithm, by observing data from a finite number of related tasks.
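Schematically (an illustrative formulation, not necessarily the paper's exact one), a meta-learner that observes datasets $D_1, \dots, D_T$ from $T$ related tasks selects the hyperparameter

$$
% schematic meta-learning objective (illustrative)
\hat{u} = \arg\min_{u}\, \frac{1}{T}\sum_{t=1}^{T} \hat{L}_t\big(A_u(D_t)\big),
$$

where $A_u$ denotes the base-learning algorithm run with hyperparameter $u$ and $\hat{L}_t$ is an empirical loss on task $t$.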
no code implementations • 28 Sep 2020 • Fredrik Hellström, Giuseppe Durisi
If the conditional information density is bounded uniformly in the size $n$ of the training set, our bounds decay as $1/n$, which is referred to as a fast rate.
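Schematically, with $C$ denoting an information or complexity term and constants suppressed, the two regimes are

$$
% slow rate vs. fast rate (schematic)
\mathrm{gen} \lesssim \sqrt{\frac{C}{n}} \;\;\text{(slow rate)} \qquad \text{versus} \qquad \mathrm{gen} \lesssim \frac{C}{n} \;\;\text{(fast rate)}.
$$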
no code implementations • 8 Sep 2020 • Oscar Castañeda, Sven Jacobsson, Giuseppe Durisi, Tom Goldstein, Christoph Studer
All-digital basestation (BS) architectures enable superior spectral efficiency compared to hybrid solutions in massive multi-user MIMO systems.
no code implementations • 16 May 2020 • Fredrik Hellström, Giuseppe Durisi
We present a general approach, based on exponential inequalities, to derive bounds on the generalization error of randomized learning algorithms.
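The recipe behind this approach is standard (a schematic, with $\mathrm{gen}(W,S)$ denoting the generalization error of the hypothesis $W$ trained on the data $S$ and $\psi$ a suitable normalizing function; the paper's actual inequalities involve information densities): if

$$
% exponential inequality plus Markov's inequality (schematic recipe)
\mathbb{E}\Big[e^{\lambda\,\mathrm{gen}(W,S) - \psi(\lambda)}\Big] \le 1
\quad\text{then}\quad
\Pr\Big[\mathrm{gen}(W,S) \ge \tfrac{\psi(\lambda) + \ln(1/\delta)}{\lambda}\Big] \le \delta
$$

for any fixed $\lambda > 0$, by Markov's inequality applied to the exponentiated generalization error.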
no code implementations • 20 Apr 2020 • Fredrik Hellström, Giuseppe Durisi
Our approach can be used to obtain bounds on the average generalization error as well as on its tail probabilities, both for the case in which a new hypothesis is randomly generated every time the algorithm is used, as often assumed in the probably approximately correct (PAC)-Bayesian literature, and for the single-draw case, in which the hypothesis is extracted only once.
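Concretely, the two guarantees bound different quantities:

$$
% average generalization error vs. single-draw tail (schematic)
\mathbb{E}_{W,S}\big[L(W) - \hat{L}(W,S)\big]
\qquad\text{versus}\qquad
\Pr_{W,S}\big[L(W) - \hat{L}(W,S) \ge \epsilon\big],
$$

where in the single-draw case the bound must hold for the one hypothesis actually produced, which is the guarantee most relevant when a trained model is deployed as-is.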