Cross-Cluster Weighted Forests

17 May 2021  ·  Maya Ramchandran, Rajarshi Mukherjee, Giovanni Parmigiani

Adapting machine learning algorithms to better handle clustering or batch effects within training data sets is important across a wide variety of biological applications. This article considers the effect of ensembling Random Forest learners trained on clusters within a single data set with heterogeneity in the distribution of the features. We find that constructing ensembles of forests trained on clusters determined by algorithms such as k-means results in significant improvements in accuracy and generalizability over the traditional Random Forest algorithm. We denote our novel approach the Cross-Cluster Weighted Forest, and examine its robustness to various data-generating scenarios and outcome models. Furthermore, we explore the influence of the data-partitioning and ensemble weighting strategies on the benefits of our method over the existing paradigm. Finally, we apply our approach to cancer molecular profiling and gene expression data sets that are naturally divisible into clusters and illustrate that our approach outperforms the classic Random Forest. Code and supplementary material are available at https://github.com/m-ramchandran/cross-cluster.
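The core procedure described in the abstract can be sketched in a few steps: partition the training data into clusters (e.g., with k-means), fit a separate Random Forest on each cluster, and combine the forests' predictions with ensemble weights. The Python sketch below, built on scikit-learn, is illustrative only; the inverse-error weighting used here is a simplified stand-in for the weighting strategies examined in the paper, and the function name and parameters are hypothetical. See the linked repository for the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

def cross_cluster_weighted_forest(X, y, k=5, n_trees=100, random_state=0):
    """Illustrative sketch: partition the training set with k-means, fit one
    forest per cluster, then weight each forest by how well it predicts the
    points in the other clusters (a simplified weighting, not the paper's)."""
    labels = KMeans(n_clusters=k, random_state=random_state).fit_predict(X)

    # One Random Forest per cluster.
    forests = []
    for c in range(k):
        rf = RandomForestRegressor(n_estimators=n_trees, random_state=random_state)
        rf.fit(X[labels == c], y[labels == c])
        forests.append(rf)

    # Weight each forest by inverse mean squared error on out-of-cluster points.
    weights = np.zeros(k)
    for c, rf in enumerate(forests):
        mask = labels != c
        err = np.mean((rf.predict(X[mask]) - y[mask]) ** 2)
        weights[c] = 1.0 / (err + 1e-12)
    weights /= weights.sum()

    def predict(X_new):
        # Weighted average of the per-cluster forest predictions.
        preds = np.column_stack([rf.predict(X_new) for rf in forests])
        return preds @ weights

    return predict
```

As a usage example (assuming X_train, y_train, X_test are NumPy arrays), `predict = cross_cluster_weighted_forest(X_train, y_train, k=5)` followed by `y_hat = predict(X_test)` yields predictions from the weighted ensemble of cluster-specific forests.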
