Search Results for author: Anne-Laure Boulesteix

Found 11 papers, 6 papers with code

Position Paper: Rethinking Empirical Research in Machine Learning: Addressing Epistemic and Methodological Challenges of Experimentation

no code implementations • 3 May 2024 • Moritz Herrmann, F. Julian D. Lange, Katharina Eggensperger, Giuseppe Casalicchio, Marcel Wever, Matthias Feurer, David Rügamer, Eyke Hüllermeier, Anne-Laure Boulesteix, Bernd Bischl

We warn against a common but incomplete understanding of empirical research in machine learning (ML) that leads to non-replicable results, makes findings unreliable, and threatens to undermine progress in the field.

Position

Evaluating machine learning models in non-standard settings: An overview and new findings

no code implementations • 23 Oct 2023 • Roman Hornung, Malte Nalenz, Lennart Schneider, Andreas Bender, Ludwig Bothmann, Bernd Bischl, Thomas Augustin, Anne-Laure Boulesteix

Our findings corroborate the concern that standard resampling methods often yield biased GE estimates in non-standard settings, underscoring the importance of tailored GE estimation.
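One "non-standard setting" where standard resampling biases generalization-error (GE) estimates is clustered data, e.g. repeated measurements per patient. A minimal sketch of the issue, assuming scikit-learn (this is an illustrative example, not the paper's own code): naive K-fold CV can place observations from the same cluster in both training and test folds, while group-aware splitting keeps each cluster whole.

```python
# Illustrative sketch: group-aware CV for clustered data.
# With GroupKFold, no cluster (group) straddles the train/test boundary,
# avoiding the leakage that biases naive K-fold GE estimates.
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 3))
y = rng.integers(0, 2, size=12)
groups = np.repeat([0, 1, 2, 3], 3)  # 4 clusters of 3 observations each

leakage_free = True
for train_idx, test_idx in GroupKFold(n_splits=4).split(X, y, groups):
    # A cluster must never appear on both sides of the split.
    if set(groups[train_idx]) & set(groups[test_idx]):
        leakage_free = False
```

The paper's point is that such tailored GE estimation is needed whenever the i.i.d. assumption behind plain resampling breaks down.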

Prediction approaches for partly missing multi-omics covariate data: A literature review and an empirical comparison study

1 code implementation • 8 Feb 2023 • Roman Hornung, Frederik Ludwigs, Jonas Hagenberg, Anne-Laure Boulesteix

Frequently, however, the different omics data types are not available for all patients, either in the training data or in the test data, i.e. the data to which automatic prediction rules are to be applied.
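The missingness described here is block-wise: a patient may lack an entire omics data type rather than scattered single values. A small sketch of such a pattern (block names and widths are illustrative, not from the paper):

```python
# Illustrative sketch of block-wise missingness in multi-omics data:
# patients 4-5 lack the whole (hypothetical) methylation block.
import numpy as np

n_patients = 6
blocks = {"clinical": 4, "gene_expr": 5, "methylation": 3}  # block widths (illustrative)

rng = np.random.default_rng(0)
data = {name: rng.normal(size=(n_patients, width)) for name, width in blocks.items()}
data["methylation"][4:, :] = np.nan  # entire omics block missing for two patients

# Block-wise missingness: a row is either fully observed or fully NaN per block.
missing_block = np.isnan(data["methylation"]).all(axis=1)
```

Prediction methods for this setting must handle whole missing blocks, which rules out many element-wise imputation approaches.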

Essential guidelines for computational method benchmarking

1 code implementation • 3 Dec 2018 • Lukas M. Weber, Wouter Saelens, Robrecht Cannoodt, Charlotte Soneson, Alexander Hapfelmeier, Paul Gardner, Anne-Laure Boulesteix, Yvan Saeys, Mark D. Robinson

In computational biology and other sciences, researchers are frequently faced with a choice between several computational methods for performing data analyses.

Benchmarking

Hyperparameters and Tuning Strategies for Random Forest

1 code implementation • 10 Apr 2018 • Philipp Probst, Marvin Wright, Anne-Laure Boulesteix

In a benchmark study on several datasets, we compare the prediction performance and runtime of tuneRanger with other tuning implementations in R and RF with default hyperparameters.
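tuneRanger itself is an R package; as a rough Python stand-in (names and parameter grids below are illustrative scikit-learn analogues, not the paper's setup), the comparison of a tuned random forest against defaults on both performance and runtime looks like this:

```python
# Illustrative sketch: compare a default-hyperparameter RF with a small
# tuned search on both CV accuracy and wall-clock runtime.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=300, n_features=15, random_state=0)

t0 = time.perf_counter()
default_score = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=3).mean()
default_time = time.perf_counter() - t0

t0 = time.perf_counter()
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"min_samples_leaf": [1, 5], "max_features": ["sqrt", 0.5]},  # illustrative grid
    cv=3,
).fit(X, y)
tuned_time = time.perf_counter() - t0

results = {"default": (default_score, default_time),
           "tuned": (search.best_score_, tuned_time)}
```

Tuning fits many more models than a single default run, so any accuracy gain is bought with extra runtime, which is exactly the trade-off the benchmark study quantifies.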

Tunability: Importance of Hyperparameters of Machine Learning Algorithms

2 code implementations • 26 Feb 2018 • Philipp Probst, Bernd Bischl, Anne-Laure Boulesteix

Firstly, we formalize the problem of tuning from a statistical point of view, define data-based defaults and suggest general measures quantifying the tunability of hyperparameters of algorithms.
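The core idea can be sketched numerically: the tunability of a hyperparameter is the performance gained by tuning it away from its default, everything else held fixed. This is a simplified illustration, not the paper's exact estimator; the candidate values are arbitrary, and the default is included among them so the gain is never negative.

```python
# Illustrative sketch of "tunability": gain from tuning one hyperparameter
# (max_features of a random forest) relative to its default value.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def cv_accuracy(**params):
    clf = RandomForestClassifier(n_estimators=50, random_state=0, **params)
    return cross_val_score(clf, X, y, cv=3).mean()

default_score = cv_accuracy()                 # defaults, incl. max_features="sqrt"
candidates = ["sqrt", 0.25, 0.5, 1.0]         # includes the default itself
best_score = max(cv_accuracy(max_features=f) for f in candidates)
tunability = best_score - default_score       # >= 0 since the default is a candidate
```

Averaging such gains across many datasets is what lets the paper rank hyperparameters by how much tuning them actually pays off.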

Benchmarking • BIG-bench Machine Learning

To tune or not to tune the number of trees in random forest?

1 code implementation • 16 May 2017 • Philipp Probst, Anne-Laure Boulesteix

The number of trees T in the random forest (RF) algorithm for supervised learning has to be set by the user.
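A common diagnostic for this question, sketched here with scikit-learn (an illustrative analogue, not the paper's experiments): grow a single forest incrementally with `warm_start` and track the out-of-bag (OOB) error as T increases. The error typically stabilizes rather than turning upward, suggesting that beyond a point more trees mainly cost computation.

```python
# Illustrative sketch: OOB error of one random forest grown incrementally
# to T = 25, 100, 400 trees via warm_start.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=1)
clf = RandomForestClassifier(warm_start=True, oob_score=True, random_state=1)

oob_errors = {}
for t in (25, 100, 400):
    clf.set_params(n_estimators=t)  # warm_start: only the new trees are fitted
    clf.fit(X, y)
    oob_errors[t] = 1 - clf.oob_score_
```

Inspecting `oob_errors` for a plateau is a cheap way to pick T without a full tuning loop.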

General Classification
