no code implementations • 2 May 2023 • Bhanu Angam, Alessandro Beretta, Eli de Poorter, Matthieu Duvinage, Daniel Peralta
We also compare the performance of the forecast before and after the COVID-19 pandemic.
no code implementations • 28 Feb 2023 • Germán González-Almagro, Daniel Peralta, Eli de Poorter, José-Ramón Cano, Salvador García
To remedy this, this study presents the background of constrained clustering in detail and provides a novel ranked taxonomy of the types of constraints that can be used in constrained clustering.
no code implementations • 21 Dec 2022 • Christoph Bergmeir, Frits de Nijs, Abishek Sriramulu, Mahdi Abolghasemi, Richard Bean, John Betts, Quang Bui, Nam Trong Dinh, Nils Einecke, Rasul Esmaeilbeigi, Scott Ferraro, Priya Galketiya, Evgenii Genov, Robert Glasgow, Rakshitha Godahewa, Yanfei Kang, Steffen Limmer, Luis Magdalena, Pablo Montero-Manso, Daniel Peralta, Yogesh Pipada Sunil Kumar, Alejandro Rosales-Pérez, Julian Ruddick, Akylas Stratigakos, Peter Stuckey, Guido Tack, Isaac Triguero, Rui Yuan
As both forecasting and optimization are difficult problems in their own right, relatively little research has been done in this area.
no code implementations • 4 Oct 2022 • Oliver Urs Lenz, Daniel Peralta, Chris Cornelis
We propose polar encoding, a representation of categorical and numerical $[0, 1]$-valued attributes with missing values to be used in a classification context.
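The abstract does not spell out the encoding itself, so the following is only a minimal sketch of one plausible variant, assuming a $[0, 1]$-valued attribute $x$ maps to the pair $(x, 1-x)$, a categorical value to a one-hot vector, and a missing value to the all-zero vector in either case, so that missingness is representable without imputation. The function names are hypothetical, not from the paper.

```python
from typing import List, Optional


def polar_encode_numeric(x: Optional[float]) -> List[float]:
    """Encode a [0, 1]-valued attribute as a two-component vector.

    Hypothetical variant: x -> (x, 1 - x); a missing value (None) -> (0, 0).
    """
    if x is None:
        return [0.0, 0.0]
    return [x, 1.0 - x]


def polar_encode_categorical(value: Optional[str], categories: List[str]) -> List[float]:
    """One-hot-style encoding where a missing value becomes the zero vector."""
    vec = [0.0] * len(categories)
    if value is not None:
        vec[categories.index(value)] = 1.0
    return vec


# A row with a missing numeric attribute and a present categorical one:
row = polar_encode_numeric(None) + polar_encode_categorical("b", ["a", "b", "c"])
```

Under this sketch, missing and observed values live in the same vector space, so a downstream classifier needs no separate missing-value handling.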
no code implementations • 28 Jun 2022 • Oliver Urs Lenz, Daniel Peralta, Chris Cornelis
Imputation allows datasets to be used with algorithms that cannot handle missing values by themselves.
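As a generic illustration of the idea (not the specific imputation strategies studied in the paper), mean imputation fills each missing entry of a column with the mean of its observed values:

```python
import math
from statistics import mean
from typing import List


def mean_impute(column: List[float]) -> List[float]:
    """Replace NaN entries with the mean of the observed values.

    A minimal example of imputation in general; the paper compares
    more elaborate strategies than this.
    """
    observed = [v for v in column if not math.isnan(v)]
    fill = mean(observed)
    return [fill if math.isnan(v) else v for v in column]
```

After this step, the column contains no missing values, so any standard learning algorithm can consume it.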
no code implementations • 4 Feb 2021 • Oliver Urs Lenz, Daniel Peralta, Chris Cornelis
The hyperparameters of SVM and LOF have to be optimised through cross-validation, while NND, LNND and ALP allow an efficient form of leave-one-out validation and the reuse of a single nearest-neighbour query.
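The efficiency argument for nearest-neighbour-based descriptors can be sketched as follows (a hypothetical brute-force illustration, not the paper's implementation): the nearest neighbour of a training point within its own training set is the point itself at distance zero, so a single (k+1)-nearest-neighbour query per point, with the self-match dropped, yields all leave-one-out nearest-neighbour distances at once.

```python
from typing import List


def loo_nn_distances(points: List[List[float]], k: int = 1) -> List[List[float]]:
    """Leave-one-out distances from each point to its k nearest other points.

    One sorted distance query per point suffices: the first entry is the
    self-match at distance 0, which is discarded. Brute-force search is
    used here purely for illustration.
    """
    def dist(a: List[float], b: List[float]) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    out = []
    for p in points:
        ds = sorted(dist(p, q) for q in points)  # includes self at distance 0
        out.append(ds[1:k + 1])  # drop the self-match
    return out
```

The same query results can then be reused to evaluate every hyperparameter setting up to k, rather than re-running cross-validation per setting.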
no code implementations • 26 Jan 2021 • Oliver Urs Lenz, Daniel Peralta, Chris Cornelis
One-class classification is a challenging subfield of machine learning in which so-called data descriptors are used to predict membership of a class based solely on positive examples of that class, without any counter-examples.
no code implementations • 21 Mar 2017 • Daniel Peralta, Isaac Triguero, Salvador García, Yvan Saeys, Jose M. Benitez, Francisco Herrera
In our experiments, convolutional neural networks yielded better accuracy and penetration rate than state-of-the-art classifiers based on explicit feature extraction.