Search Results for author: Paolo Di Lorenzo

Found 29 papers, 8 papers with code

Opportunistic Information-Bottleneck for Goal-oriented Feature Extraction and Communication

no code implementations14 Apr 2024 Francesco Binucci, Paolo Banelli, Paolo Di Lorenzo, Sergio Barbarossa

This approach is particularly useful whenever a device needs to transmit data (or features) to a server that has to carry out an inference task: it provides a principled way to extract the features most relevant to that task, while trading off the size of the transmitted feature vector, inference accuracy, and complexity.
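
For reference, the trade-off described above is rooted in the classical Information Bottleneck objective; the schematic form below is the textbook formulation, not necessarily the exact constrained problem solved in the paper.

```latex
% Classical IB: X = observed data, Z = transmitted features, Y = inference target
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta \, I(Z;Y), \qquad \beta \ge 0
```

Larger values of β favor the relevance of Z for the goal Y, while smaller values favor compression of the transmitted features.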

Dynamic Relative Representations for Goal-Oriented Semantic Communications

no code implementations25 Mar 2024 Simone Fiorellino, Claudio Battiloro, Emilio Calvanese Strinati, Paolo Di Lorenzo

This paper presents a novel framework for goal-oriented semantic communication, leveraging relative representations to mitigate semantic mismatches via latent space alignment.
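
For concreteness, the core relative-representation construction (re-encoding each latent vector by its similarities to a fixed set of anchor samples) can be sketched as follows; the anchor choice and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def relative_representation(z, anchors):
    """Map absolute latents z (n, d) to relative coordinates (n, k):
    cosine similarity of each latent vector to k anchor latents."""
    z_n = z / np.linalg.norm(z, axis=1, keepdims=True)
    a_n = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return z_n @ a_n.T  # (n, k) relative representation

# Toy usage: latents from different encoders become comparable once both
# are expressed relative to the same anchors.
rng = np.random.default_rng(0)
z_device = rng.normal(size=(32, 64))   # latents from the device-side encoder
anchors = z_device[:10]                # anchor subset (illustrative choice)
rel = relative_representation(z_device, anchors)
print(rel.shape)  # (32, 10)
```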

Control Aspects for Using RIS in Latency-Constrained Mobile Edge Computing

1 code implementation19 Dec 2023 Fabio Saggese, Victor Croisfelt, Francesca Costanzo, Junya Shiraishi, Radosław Kotaba, Paolo Di Lorenzo, Petar Popovski

This paper investigates the role and the impact of control operations for dynamic mobile edge computing (MEC) empowered by Reconfigurable Intelligent Surfaces (RISs), in which multiple devices offload their computation tasks to an access point (AP) equipped with an edge server (ES), with the help of the RIS.

Edge-computing

Enabling Edge Artificial Intelligence via Goal-oriented Deep Neural Network Splitting

no code implementations6 Dec 2023 Francesco Binucci, Mattia Merluzzi, Paolo Banelli, Emilio Calvanese Strinati, Paolo Di Lorenzo

In this work, we explore the opportunity of DNN splitting at the edge of 6G wireless networks to enable low energy cooperative inference with target delay and accuracy with a goal-oriented perspective.
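
A minimal sketch of the split-computing idea underlying DNN splitting is given below: the head runs on the device and the tail at the edge server. The toy model, layer sizes, and split index are placeholders; the paper's contribution is the goal-oriented choice of the split (and of the resources), which is not modeled here.

```python
import torch
import torch.nn as nn

# Toy DNN; in practice this would be the full inference model.
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),   # on-device layers ("head")
    nn.Linear(256, 64), nn.ReLU(),    # <-- candidate split point
    nn.Linear(64, 10),                # edge-server layers ("tail")
)

split = 4  # split index (illustrative; the paper optimizes this choice)
head, tail = model[:split], model[split:]

x = torch.randn(1, 128)   # raw input acquired by the device
features = head(x)        # intermediate features sent over the wireless link
logits = tail(features)   # inference completed at the edge server
print(features.shape, logits.shape)
```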

Learning Multi-Frequency Partial Correlation Graphs

1 code implementation27 Nov 2023 Gabriele D'Acunto, Paolo Di Lorenzo, Francesco Bonchi, Stefania Sardellitti, Sergio Barbarossa

Despite the large research effort devoted to learning dependencies between time series, the state of the art still faces a major limitation: existing methods learn partial correlations but fail to discriminate across distinct frequency bands.

Time Series
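
A rough numerical sketch of a frequency-resolved partial-correlation (partial coherence) estimate, obtained from the inverse cross-spectral density, is shown below; this is a standard baseline pipeline, not the learning method proposed in the paper.

```python
import numpy as np
from scipy.signal import csd

def partial_coherence(X, fs=1.0, nperseg=256):
    """Frequency-resolved partial correlation (partial coherence) between
    the rows of X (N time series), via the inverse cross-spectral density."""
    N = X.shape[0]
    freqs, _ = csd(X[0], X[0], fs=fs, nperseg=nperseg)
    S = np.zeros((len(freqs), N, N), dtype=complex)
    for i in range(N):
        for j in range(N):
            _, S[:, i, j] = csd(X[i], X[j], fs=fs, nperseg=nperseg)
    P = np.zeros_like(S)
    for k in range(len(freqs)):
        K = np.linalg.inv(S[k])           # precision (inverse CSD) at frequency k
        d = np.sqrt(np.real(np.diag(K)))
        P[k] = -K / np.outer(d, d)        # normalized partial coherency
    return freqs, np.abs(P)

# Band-specific graph: average the partial coherence over a frequency band.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4096))
freqs, P = partial_coherence(X)
band = (freqs >= 0.1) & (freqs < 0.2)
A_band = P[band].mean(axis=0)   # off-diagonal entries form the band-specific graph
```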

Goal-oriented Communications for the IoT: System Design and Adaptive Resource Optimization

no code implementations21 Oct 2023 Paolo Di Lorenzo, Mattia Merluzzi, Francesco Binucci, Claudio Battiloro, Paolo Banelli, Emilio Calvanese Strinati, Sergio Barbarossa

Internet of Things (IoT) applications combine sensing, wireless communication, intelligence, and actuation, enabling the interaction among heterogeneous devices that collect and process considerable amounts of data.

Federated Learning

Generalized Simplicial Attention Neural Networks

1 code implementation5 Sep 2023 Claudio Battiloro, Lucia Testa, Lorenzo Giusti, Stefania Sardellitti, Paolo Di Lorenzo, Sergio Barbarossa

The aim of this work is to introduce Generalized Simplicial Attention Neural Networks (GSANs), i.e., novel neural architectures designed to process data defined on simplicial complexes using masked self-attentional layers.

Graph Classification, Imputation +1
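
As a rough illustration of the masked self-attention mechanism mentioned above, a single-head NumPy sketch restricted to a fixed neighborhood mask follows; actual GSANs operate on upper and lower simplicial neighborhoods and on signals of multiple orders, which this toy omits.

```python
import numpy as np

def masked_self_attention(X, W_q, W_k, W_v, mask):
    """Single-head self-attention restricted to a neighborhood mask.
    X: (n, d) features on n simplices; mask: (n, n) boolean adjacency."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[1])
    scores = np.where(mask, scores, -np.inf)      # attend only to neighbors
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    att = np.exp(scores)
    att /= att.sum(axis=1, keepdims=True)
    return att @ V

rng = np.random.default_rng(0)
n, d = 6, 8
X = rng.normal(size=(n, d))
W = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
mask = np.eye(n, dtype=bool) | (rng.random((n, n)) < 0.4)  # toy neighborhood
out = masked_self_attention(X, *W, mask)
```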

From Latent Graph to Latent Topology Inference: Differentiable Cell Complex Module

no code implementations25 May 2023 Claudio Battiloro, Indro Spinelli, Lev Telyatnikov, Michael Bronstein, Simone Scardapane, Paolo Di Lorenzo

Latent Graph Inference (LGI) relaxed the reliance of Graph Neural Networks (GNNs) on a given graph topology by dynamically learning it.

Lyapunov-Driven Deep Reinforcement Learning for Edge Inference Empowered by Reconfigurable Intelligent Surfaces

no code implementations18 May 2023 Kyriakos Stylianopoulos, Mattia Merluzzi, Paolo Di Lorenzo, George C. Alexandropoulos

In this paper, we propose a novel algorithm for energy-efficient, low-latency, accurate inference at the wireless edge, in the context of 6G networks endowed with reconfigurable intelligent surfaces (RISs).

Data Compression, Edge Classification +1
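
The "Lyapunov-driven" ingredient usually refers to the drift-plus-penalty method from stochastic network optimization; a schematic per-slot objective is given below in its generic form (not the paper's exact formulation).

```latex
% Generic drift-plus-penalty: Theta(t) collects (virtual) queues, p(t) is the penalty (e.g., energy)
\min_{\text{decisions at slot } t} \;
V \, \mathbb{E}\{ p(t) \mid \Theta(t) \}
\;+\;
\mathbb{E}\{ L(\Theta(t+1)) - L(\Theta(t)) \mid \Theta(t) \},
\qquad
L(\Theta(t)) = \tfrac{1}{2} \sum_i Q_i^2(t)
```

The parameter V trades the average penalty (e.g., energy) against the backlog of the virtual queues Q_i(t) that encode latency and accuracy constraints.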

Multi-user Goal-oriented Communications with Energy-efficient Edge Resource Management

no code implementations3 May 2023 Francesco Binucci, Paolo Banelli, Paolo Di Lorenzo, Sergio Barbarossa

A common challenge in running inference tasks from remote is to extract and transmit only the features that are most significant for the inference task.

Management

Tangent Bundle Convolutional Learning: from Manifolds to Cellular Sheaves and Back

no code implementations20 Mar 2023 Claudio Battiloro, Zhiyang Wang, Hans Riess, Paolo Di Lorenzo, Alejandro Ribeiro

We define tangent bundle filters and tangent bundle neural networks (TNNs) based on this convolution operation, which are novel continuous architectures operating on tangent bundle signals, i.e., vector fields over the manifolds.
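
Schematically, a tangent bundle filter is a function of the Connection Laplacian Δ applied to a tangent vector field F; one common parameterization uses the heat semigroup, as sketched below (the exact parameterization and discretization in the paper may differ).

```latex
% Schematic tangent bundle filter: Delta = Connection Laplacian, F = tangent vector field
h(\Delta) F \;=\; \int_0^{\infty} \tilde h(t) \, e^{-t\Delta} F \, dt
\;\approx\; \sum_{k=0}^{K-1} h_k \, e^{-k\Delta} F
```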

Topological Signal Processing over Weighted Simplicial Complexes

no code implementations16 Feb 2023 Claudio Battiloro, Stefania Sardellitti, Sergio Barbarossa, Paolo Di Lorenzo

Weighing the topological domain over which data can be represented and analysed is a key strategy in many signal processing and machine learning applications, enabling the extraction and exploitation of meaningful data features and their (higher order) relationships.

Tangent Bundle Filters and Neural Networks: from Manifolds to Cellular Sheaves and Back

no code implementations26 Oct 2022 Claudio Battiloro, Zhiyang Wang, Hans Riess, Paolo Di Lorenzo, Alejandro Ribeiro

In this work we introduce a convolution operation over the tangent bundle of Riemannian manifolds exploiting the Connection Laplacian operator.

Denoising

Topological Slepians: Maximally Localized Representations of Signals over Simplicial Complexes

1 code implementation26 Oct 2022 Claudio Battiloro, Paolo Di Lorenzo, Sergio Barbarossa

This paper introduces topological Slepians, i.e., a novel class of signals defined over topological spaces (e.g., simplicial complexes) that are maximally concentrated on the topological domain (e.g., over a set of nodes, edges, triangles, etc.).

Denoising
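
A minimal numerical sketch of the Slepian-style design, i.e., finding bandlimited signals whose energy is maximally concentrated on a chosen subset of simplices, is given below; the toy Laplacian, band, and subset are placeholders rather than the paper's exact operators.

```python
import numpy as np

def topological_slepians(L, band, subset, num=3):
    """L: (n, n) Hodge-type Laplacian for the signal order of interest;
    band: indices of the spectral band (columns of the eigenbasis);
    subset: boolean mask (n,) of the simplices to concentrate energy on."""
    _, U = np.linalg.eigh(L)
    B = U[:, band]                        # bandlimiting basis
    D = np.diag(subset.astype(float))     # concentration (selection) operator
    C = B.T @ D @ B                       # concentration problem within the band
    mu, V = np.linalg.eigh(C)
    order = np.argsort(mu)[::-1]          # most concentrated vectors first
    return B @ V[:, order[:num]], mu[order[:num]]

# Toy example on a random Laplacian
rng = np.random.default_rng(0)
A = rng.random((10, 10)); W = (A + A.T) / 2
L = np.diag(W.sum(1)) - W
subset = np.zeros(10, dtype=bool); subset[:4] = True
slepians, concentration = topological_slepians(L, band=np.arange(5), subset=subset)
```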

Pooling Strategies for Simplicial Convolutional Networks

1 code implementation11 Oct 2022 Domenico Mattia Cinque, Claudio Battiloro, Paolo Di Lorenzo

The goal of this paper is to introduce pooling strategies for simplicial convolutional neural networks.

Graph Classification

Cell Attention Networks

1 code implementation16 Sep 2022 Lorenzo Giusti, Claudio Battiloro, Lucia Testa, Paolo Di Lorenzo, Stefania Sardellitti, Sergio Barbarossa

In this paper, we introduce Cell Attention Networks (CANs), a neural architecture operating on data defined over the vertices of a graph, representing the graph as the 1-skeleton of a cell complex introduced to capture higher order interactions.

Graph Attention, Graph Classification +1

Multiscale Causal Structure Learning

no code implementations16 Jul 2022 Gabriele D'Acunto, Paolo Di Lorenzo, Sergio Barbarossa

The inference of causal structures from observed data plays a key role in unveiling the underlying dynamics of the system.

Computational Efficiency, Time Series +1

Energy-Efficient Classification at the Wireless Edge with Reliability Guarantees

no code implementations21 Apr 2022 Mattia Merluzzi, Claudio Battiloro, Paolo Di Lorenzo, Emilio Calvanese Strinati

Learning at the edge is a challenging task from several perspectives, since data must be collected by end devices (e.g., sensors), possibly pre-processed (e.g., data compression), and finally processed remotely to output the result of training and/or inference phases.

Data Compression, Image Classification

Goal-Oriented Communication for Edge Learning based on the Information Bottleneck

no code implementations25 Feb 2022 Francesco Pezone, Sergio Barbarossa, Paolo Di Lorenzo

The IB principle is used to design the encoder in order to find an optimal balance between representation complexity and relevance of the encoded data with respect to the goal.

Image Classification, Stochastic Optimization

Dynamic Edge Computing empowered by Reconfigurable Intelligent Surfaces

no code implementations21 Dec 2021 Paolo Di Lorenzo, Mattia Merluzzi, Emilio Calvanese Strinati, Sergio Barbarossa

In this paper, we propose a novel algorithm for energy-efficient, low-latency dynamic mobile edge computing (MEC), in the context of beyond 5G networks endowed with Reconfigurable Intelligent Surfaces (RISs).

Edge-computing, Stochastic Optimization

Discontinuous Computation Offloading for Energy-Efficient Mobile Edge Computing

no code implementations8 Aug 2020 Mattia Merluzzi, Nicola di Pietro, Paolo Di Lorenzo, Emilio Calvanese Strinati, Sergio Barbarossa

We propose a novel strategy for energy-efficient dynamic computation offloading, in the context of edge-computing-aided beyond 5G networks.

Edge-computing, Stochastic Optimization

Distributed Training of Graph Convolutional Networks

no code implementations13 Jul 2020 Simone Scardapane, Indro Spinelli, Paolo Di Lorenzo

After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents.

Distributed Optimization
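
To make the distributed inference step concrete, the toy sketch below computes one GCN layer with the node set partitioned across two agents; the partitioning and the (implicit) exchange of boundary-node features are illustrative, not the paper's exact protocol.

```python
import numpy as np

def gcn_layer_local(A_hat_rows, X_needed, W):
    """One GCN layer for the nodes an agent owns:
    A_hat_rows: rows of the normalized adjacency for the owned nodes,
    X_needed: features of all nodes those rows touch (local + received)."""
    return np.maximum(A_hat_rows @ X_needed @ W, 0.0)   # ReLU(A_hat X W)

rng = np.random.default_rng(0)
n, d, h = 6, 4, 3
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0)
A_hat = A + np.eye(n)
deg = A_hat.sum(1)
A_hat = A_hat / np.sqrt(np.outer(deg, deg))             # symmetric normalization
X, W = rng.normal(size=(n, d)), rng.normal(size=(d, h))

own = {0: [0, 1, 2], 1: [3, 4, 5]}      # node partition across two agents
outs = {}
for agent, nodes in own.items():
    # Each agent computes its block; in a real deployment it would only
    # receive the boundary-node features it is missing from the other agent.
    outs[agent] = gcn_layer_local(A_hat[nodes], X, W)
X_next = np.vstack([outs[0], outs[1]])   # assembled global layer output
```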

Adaptive Graph Signal Processing: Algorithms and Optimal Sampling Strategies

no code implementations12 Sep 2017 Paolo Di Lorenzo, Paolo Banelli, Elvin Isufi, Sergio Barbarossa, Geert Leus

Numerical simulations carried out over both synthetic and real data illustrate the good performance of the proposed sampling and reconstruction strategies for (possibly distributed) adaptive learning of signals defined over graphs.

Graph Sampling

Stochastic Training of Neural Networks via Successive Convex Approximations

1 code implementation15 Jun 2017 Simone Scardapane, Paolo Di Lorenzo

Additionally, we show how the algorithm can be easily parallelized over multiple computational units without hindering its performance.
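
The successive-convex-approximation step named in the title generally alternates between minimizing a strongly convex surrogate of the (nonconvex) training loss and taking a damped step toward its solution; the generic iteration is sketched below (the paper's specific surrogate and parallelization are not reproduced here).

```latex
% Generic (stochastic) SCA iteration: xi^(n) is the mini-batch at iteration n
\hat{w}^{(n)} = \arg\min_{w} \; \tilde f\big(w;\, w^{(n)}, \xi^{(n)}\big),
\qquad
w^{(n+1)} = w^{(n)} + \gamma^{(n)} \big( \hat{w}^{(n)} - w^{(n)} \big),
\quad \gamma^{(n)} \in (0, 1]
```

Here \tilde f is a strongly convex surrogate of the loss built around the current iterate w^{(n)}, and \gamma^{(n)} is a diminishing step size.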

A Framework for Parallel and Distributed Training of Neural Networks

1 code implementation24 Oct 2016 Simone Scardapane, Paolo Di Lorenzo

The aim of this paper is to develop a general framework for training neural networks (NNs) in a distributed environment, where training data is partitioned over a set of agents that communicate with each other through a sparse, possibly time-varying, connectivity pattern.

Adaptive Least Mean Squares Estimation of Graph Signals

no code implementations18 Feb 2016 Paolo Di Lorenzo, Sergio Barbarossa, Paolo Banelli, Stefania Sardellitti

The aim of this paper is to propose a least mean squares (LMS) strategy for adaptive estimation of signals defined over graphs.

Graph Sampling
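
A compact sketch of an LMS-type update for a bandlimited graph signal observed on a sampled subset of nodes is shown below; the projector and sampling notation follow the standard graph-LMS formulation, and the toy graph and parameters are illustrative.

```python
import numpy as np

def graph_lms(y_stream, U_F, sample_mask, mu=0.5):
    """LMS-style adaptive estimation of a bandlimited graph signal.
    y_stream: iterable of noisy observations (length-N vectors, nonzero on samples);
    U_F: (N, |F|) GFT basis of the signal band; sample_mask: boolean (N,)."""
    B = U_F @ U_F.conj().T                  # projector onto the bandlimited subspace
    D = np.diag(sample_mask.astype(float))  # node-sampling operator
    x_hat = np.zeros(U_F.shape[0])
    for y in y_stream:
        x_hat = x_hat + mu * B @ D @ (y - x_hat)   # LMS correction on sampled nodes
    return x_hat

# Toy usage on a random graph Laplacian
rng = np.random.default_rng(0)
N = 20
W = (rng.random((N, N)) < 0.2).astype(float)
W = np.maximum(W, W.T); np.fill_diagonal(W, 0)
L = np.diag(W.sum(1)) - W
_, U = np.linalg.eigh(L)
U_F = U[:, :5]                                   # 5-dimensional signal band
x_true = U_F @ rng.normal(size=5)
mask = rng.random(N) < 0.6                       # observe ~60% of the nodes
stream = (mask * (x_true + 0.05 * rng.normal(size=N)) for _ in range(200))
x_hat = graph_lms(stream, U_F, mask)
```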
