1 code implementation • 15 Sep 2023 • Richard Cornelius Suwandi, Zhidi Lin, Feng Yin, Zhiguo Wang, Sergios Theodoridis
This paper presents a novel GP linear multiple kernel (LMK) and a generic sparsity-aware distributed learning framework to optimize the hyper-parameters.
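The entry above describes a GP whose covariance is a linear multiple kernel, i.e. a non-negative combination of fixed sub-kernels whose weights are the hyper-parameters to be learned. A minimal sketch of the idea (assuming RBF sub-kernels with fixed lengthscales; the weights are given here, whereas the paper's sparsity-aware learner would drive many of them to zero):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale):
    """Squared-exponential sub-kernel with a fixed lengthscale."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def lmk_gram(X1, X2, thetas, lengthscales):
    """Linear multiple kernel: a non-negative combination of sub-kernels."""
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for theta, ls in zip(thetas, lengthscales):
        K += theta * rbf_kernel(X1, X2, ls)
    return K

def gp_predict(X_train, y_train, X_test, thetas, lengthscales, noise=1e-2):
    """Standard GP regression mean using the LMK Gram matrix."""
    K = lmk_gram(X_train, X_train, thetas, lengthscales)
    K += noise * np.eye(len(X_train))
    K_star = lmk_gram(X_test, X_train, thetas, lengthscales)
    return K_star @ np.linalg.solve(K, y_train)
```

In the distributed setting of the paper, agents would additionally agree on a shared set of weights; the sketch shows only the single-node kernel construction.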
2 code implementations • 3 Sep 2023 • Zhidi Lin, Juan Maroñas, Ying Li, Feng Yin, Sergios Theodoridis
The Gaussian process state-space model (GPSSM) has attracted extensive attention for modeling complex nonlinear dynamical systems.
no code implementations • 1 Jun 2023 • Sarthak Yadav, Sergios Theodoridis, Lars Kai Hansen, Zheng-Hua Tan
In this work, we propose a Multi-Window Masked Autoencoder (MW-MAE) fitted with a novel Multi-Window Multi-Head Attention (MW-MHA) module that facilitates the modelling of local-global interactions in every decoder transformer block through attention heads of several distinct local and global windows.
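The multi-window idea can be sketched as attention heads restricted to windows of different widths, so that small windows capture local structure and wide ones global context. This is a simplified, hypothetical single-layer version (identity projections, no learned weights), not the paper's full MW-MAE decoder block:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_window_attention(X, window_sizes):
    """One attention head per window size over a sequence X of shape (T, d).

    Each head may only attend to positions within +/- w of the query
    position; head outputs are concatenated along the feature axis.
    """
    T, d = X.shape
    pos = np.arange(T)
    heads = []
    for w in window_sizes:
        mask = np.abs(pos[:, None] - pos[None, :]) <= w
        scores = (X @ X.T) / np.sqrt(d)
        scores = np.where(mask, scores, -np.inf)  # block out-of-window keys
        heads.append(softmax(scores, axis=-1) @ X)
    return np.concatenate(heads, axis=-1)  # (T, d * num_heads)
```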
no code implementations • 28 May 2022 • Lei Cheng, Feng Yin, Sergios Theodoridis, Sotirios Chatzis, Tsung-Hui Chang
However, a comeback of Bayesian methods is taking place, shedding new light on the design of deep neural networks, establishing firm links with Bayesian models, and inspiring new paths for unsupervised learning, such as Bayesian tensor decomposition.
1 code implementation • 5 Dec 2021 • Konstantinos P. Panousis, Sotirios Chatzis, Sergios Theodoridis
This work explores the potency of stochastic competition-based activations, namely Stochastic Local Winner-Takes-All (LWTA), against powerful (gradient-based) white-box and black-box adversarial attacks; we especially focus on Adversarial Training settings.
Ranked #2 on Adversarial Robustness on CIFAR-10
no code implementations • 15 Sep 2021 • Sergis Nicolaou, Lambros Mavrides, Georgina Tryfou, Kyriakos Tolias, Konstantinos Panousis, Sotirios Chatzis, Sergios Theodoridis
Speech is the most common way humans express their feelings, and sentiment analysis is the use of tools such as natural language processing and computational algorithms to identify the polarity of these feelings.
no code implementations • 4 Jan 2021 • Konstantinos P. Panousis, Sotirios Chatzis, Antonios Alexos, Sergios Theodoridis
The main operating principle of the introduced units lies on stochastic arguments, as the network performs posterior sampling over competing units to select the winner.
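A minimal sketch of this operating principle: units are grouped into small competing blocks, a single winner per block is *sampled* from a softmax over the linear responses (rather than taken as the argmax), and the losers are zeroed out. This is a simplified illustration, not the paper's full variational treatment:

```python
import numpy as np

def stochastic_lwta(h, block_size, rng):
    """Stochastic Local Winner-Takes-All activation over blocks of units."""
    n = h.shape[-1]
    assert n % block_size == 0
    blocks = h.reshape(-1, n // block_size, block_size)
    # Softmax over each block gives the sampling probabilities of the winner.
    p = np.exp(blocks - blocks.max(-1, keepdims=True))
    p /= p.sum(-1, keepdims=True)
    out = np.zeros_like(blocks)
    for i in range(blocks.shape[0]):
        for j in range(blocks.shape[1]):
            k = rng.choice(block_size, p=p[i, j])  # sample the winner
            out[i, j, k] = blocks[i, j, k]         # losers stay at zero
    return out.reshape(h.shape)
```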
no code implementations • 5 Sep 2020 • Lei Cheng, Zhongtao Chen, Qingjiang Shi, Yik-Chung Wu, Sergios Theodoridis
However, the optimal determination of a tensor rank is known to be a non-deterministic polynomial-time hard (NP-hard) task.
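The practical consequence of this hardness is that the CP (PARAFAC) rank is usually found by fitting the decomposition at several candidate ranks and comparing fits, which is exactly the cost that automatic (e.g. Bayesian) rank determination aims to avoid. A plain alternating-least-squares sketch for a fixed candidate rank (a generic baseline, not the paper's method):

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product of U (I, r) and V (J, r) -> (I*J, r)."""
    r = U.shape[1]
    return (U[:, None, :] * V[None, :, :]).reshape(-1, r)

def cp_als(T, rank, iters=100, seed=0):
    """CP decomposition of a 3-way tensor via ALS; returns the relative
    reconstruction error for the given candidate rank."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    B = rng.normal(size=(J, rank))
    C = rng.normal(size=(K, rank))
    T1 = T.reshape(I, J * K)                      # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)   # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(iters):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
    return np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```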
no code implementations • 12 May 2020 • Christos Chatzichristos, Eleftherios Kofidis, Lieven De Lathauwer, Sergios Theodoridis, Sabine Van Huffel
The fusion methods reported so far ignore the underlying multi-way nature of the data in at least one of the modalities and/or rely on very strong assumptions about the relation of the two datasets.
no code implementations • 8 Mar 2020 • Feng Yin, Zhidi Lin, Yue Xu, Qinglei Kong, Deshi Li, Sergios Theodoridis, Shuguang Cui
In this overview paper, cooperative localization and location-data processing based on data-driven learning models are considered, in line with emerging machine learning and big data methods.
no code implementations • 13 Feb 2020 • Konstantinos P. Panousis, Sotirios Chatzis, Sergios Theodoridis
Hidden Markov Models (HMMs) comprise a powerful generative approach for modeling sequential data and time-series in general.
no code implementations • 21 Apr 2019 • Feng Yin, Lishuo Pan, Xinwei He, Tianshi Chen, Sergios Theodoridis, Zhi-Quan Luo
Gaussian processes (GPs) for machine learning have been studied systematically over the past two decades and are by now widely used in a diverse range of applications.
no code implementations • 1 Aug 2018 • Kai Chen, Yijue Dai, Feng Yin, Elena Marchiori, Sergios Theodoridis
Then, we propose a novel SM kernel with a dependency structure (SMD) by using cross-convolution between the SM components.
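For reference, the SM components being combined are those of the standard Spectral Mixture kernel, where each component is a Gaussian in the spectral domain. The sketch below shows only the independent-component baseline for 1-D inputs; the paper's SMD kernel additionally introduces cross-convolution terms between components:

```python
import numpy as np

def sm_kernel(tau, weights, means, scales):
    """Spectral Mixture kernel on the lag tau:
    k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 v_q) * cos(2 pi mu_q tau),
    where mu_q is the spectral mean and v_q the spectral variance."""
    tau = np.asarray(tau, dtype=float)
    k = np.zeros_like(tau)
    for w, mu, v in zip(weights, means, scales):
        k += w * np.exp(-2.0 * np.pi ** 2 * tau ** 2 * v) * np.cos(2.0 * np.pi * mu * tau)
    return k
```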
1 code implementation • 19 May 2018 • Konstantinos P. Panousis, Sotirios Chatzis, Sergios Theodoridis
To this end, we revisit deep networks that comprise competing linear units, as opposed to nonlinear units that do not entail any form of (local) competition.
no code implementations • 20 Apr 2018 • Youngjoo Seo, Manuel Morante, Yannis Kopsinis, Sergios Theodoridis
In this paper, we propose a novel unsupervised learning method for learning brain dynamics, using a deep learning architecture named residual D-net.
1 code implementation • 5 Feb 2018 • Manuel Morante, Yannis Kopsinis, Sergios Theodoridis, Athanassios Protopapas
The new method allows the incorporation of a priori knowledge associated both with the experimental design and with available brain atlases.
no code implementations • 23 Mar 2017 • Pantelis Bouboulis, Symeon Chouvardas, Sergios Theodoridis
To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented.
no code implementations • 11 Oct 2016 • Manuel Morante Moreno, Yannis Kopsinis, Eleftherios Kofidis, Christos Chatzichristos, Sergios Theodoridis
Extracting information from functional magnetic resonance imaging (fMRI) data has been a major area of research for more than two decades.
no code implementations • 15 Jul 2016 • Christos Chatzichristos, Eleftherios Kofidis, Giannis Kopsinis, Sergios Theodoridis
The growing use of neuroimaging technologies generates a massive amount of biomedical data that exhibit high dimensionality.
no code implementations • 12 Jun 2016 • Pantelis Bouboulis, Spyridon Pougkakiotis, Sergios Theodoridis
We present a new framework for online least-squares algorithms for nonlinear modeling in reproducing kernel Hilbert spaces (RKHS).
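A classical instance of online least squares in an RKHS is the kernel LMS recursion, where each new sample contributes a kernel centre whose coefficient is the step size times the prediction error. A minimal sketch (every sample is kept; a practical variant from the literature would sparsify the dictionary), shown as a generic baseline rather than the paper's specific protocol:

```python
import numpy as np

class KLMS:
    """Kernel Least-Mean-Squares: f(x) = sum_i a_i k(c_i, x), built online."""

    def __init__(self, eta=0.5, gamma=5.0):
        self.eta, self.gamma = eta, gamma
        self.centres, self.coeffs = [], []

    def _k(self, a, b):
        """Gaussian kernel between two input vectors."""
        return np.exp(-self.gamma * np.sum((a - b) ** 2))

    def predict(self, x):
        return sum(c * self._k(ctr, x) for ctr, c in zip(self.centres, self.coeffs))

    def update(self, x, y):
        """One online step: store x as a centre weighted by eta * error."""
        err = y - self.predict(x)
        self.centres.append(np.asarray(x, dtype=float))
        self.coeffs.append(self.eta * err)
        return err
```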
no code implementations • 4 Jan 2016 • George Papageorgiou, Pantelis Bouboulis, Sergios Theodoridis
Finally, the proposed robust estimation framework is applied to the task of image denoising, and its enhanced performance in the presence of outliers is demonstrated.
no code implementations • 9 Mar 2013 • Pantelis Bouboulis, Sergios Theodoridis, Charalampos Mavroforakis, Leoni Dalla
The method exploits the notion of widely linear estimation to model the input-output relation for complex-valued data and considers two cases: (a) the complex data are split into their real and imaginary parts and a typical real kernel is employed to map the data to a complexified feature space, and (b) a pure complex kernel is used to directly map the data to the induced complex feature space.
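The two cases can be sketched as two kernel evaluations: case (a) applies a real Gaussian kernel to the stacked real and imaginary parts, while case (b) uses a complex Gaussian kernel (shown here as one common choice of pure complex kernel, standing in for the paper's construction):

```python
import numpy as np

def complexified_gauss_kernel(z1, z2, gamma=1.0):
    """Case (a): map z to [Re z, Im z] and apply a real Gaussian kernel."""
    r1 = np.concatenate([z1.real, z1.imag])
    r2 = np.concatenate([z2.real, z2.imag])
    return np.exp(-gamma * np.sum((r1 - r2) ** 2))

def pure_complex_gauss_kernel(z1, z2, gamma=1.0):
    """Case (b): a complex Gaussian kernel evaluated directly on the
    complex data; note it is complex-valued in general."""
    d = z1 - np.conj(z2)
    return np.exp(-gamma * np.sum(d * d))
```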