no code implementations • 19 Oct 2023 • Danny Wood, Theodore Papamarkou, Matt Benatan, Richard Allmendinger
In particular, by adapting permutation feature importance, partial dependence plots, and individual conditional expectation plots, we show that these methods yield novel insights into model behaviour and can measure a feature's impact on both the entropy of the predictive distribution and the log-likelihood of the ground-truth labels under that distribution.
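The core idea — permutation importance scored against predictive entropy rather than accuracy — can be sketched in a few lines of numpy. This is not the paper's code; the function names and signature (`predict_proba`, `permutation_entropy_importance`) are illustrative assumptions: permute one feature column, and record how the mean entropy of the predictive distribution shifts.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of each row of an (n_samples, n_classes) probability matrix."""
    return -np.sum(probs * np.log(np.clip(probs, 1e-12, None)), axis=1)

def permutation_entropy_importance(predict_proba, X, n_repeats=10, rng=None):
    """For each feature, the mean change in predictive entropy when that
    feature's column is permuted (hypothetical helper, not the paper's API)."""
    rng = np.random.default_rng(rng)
    base = predictive_entropy(predict_proba(X)).mean()
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break this feature's association
            deltas.append(predictive_entropy(predict_proba(Xp)).mean() - base)
        importances[j] = np.mean(deltas)
    return importances
```

A positive score means permuting the feature makes the model's predictions less certain on average; a near-zero score means the feature does not drive the model's confidence.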
1 code implementation • 16 Jul 2023 • Adam Perrett, Danny Wood, Gavin Brown
This work presents a novel algorithm for transforming a neural network into a spline representation.
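The starting observation behind such a transformation is that a ReLU network computes a continuous piecewise-linear function, i.e. a linear spline. Below is a minimal sketch of that fact for a one-hidden-layer network with scalar input — the helper names (`relu_net`, `spline_knots`) are assumptions for illustration, not the paper's algorithm: each hidden unit switches on or off at x = -b/w, so those points are the spline's knots and the function is exactly linear between them.

```python
import numpy as np

def relu_net(x, W1, b1, w2, b2):
    """One-hidden-layer ReLU network R -> R (a continuous piecewise-linear map).

    x: (n,) inputs; W1, b1: (h,) hidden weights/biases; w2: (h,) output weights.
    """
    return np.maximum(W1 * x[:, None] + b1, 0.0) @ w2 + b2

def spline_knots(W1, b1):
    """Knot locations of the equivalent linear spline: each hidden unit
    changes activation state where its pre-activation crosses zero, x = -b/w."""
    mask = W1 != 0
    return np.sort(-b1[mask] / W1[mask])
```

Between consecutive knots every unit's activation pattern is fixed, so the network restricted to that interval is an affine function; collecting the knots and the per-interval slopes and intercepts yields the spline representation.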
1 code implementation • 10 Jan 2023 • Danny Wood, Tingting Mu, Andrew Webb, Henry Reeve, Mikel Luján, Gavin Brown
We present a theory of ensemble diversity, explaining what diversity is and how it arises across a wide range of supervised learning scenarios.
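A concrete special case that a general theory of diversity must recover is the classical ambiguity decomposition for averaging ensembles under squared loss (due to Krogh and Vedelsby): ensemble error = average member error minus diversity, where diversity is the spread of the members around the ensemble prediction. The numpy sketch below verifies that identity numerically; it is background for the entry above, not the paper's own code.

```python
import numpy as np

def ambiguity_decomposition(member_preds, y):
    """Squared-loss ambiguity decomposition for an averaging ensemble.

    member_preds: (n_members, n_points) predictions; y: (n_points,) targets.
    Returns (ensemble error, average member error, diversity), which satisfy
    ensemble error = average member error - diversity.
    """
    ens = member_preds.mean(axis=0)
    avg_err = np.mean((member_preds - y) ** 2)      # mean over members and points
    diversity = np.mean((member_preds - ens) ** 2)  # spread around the ensemble
    ens_err = np.mean((ens - y) ** 2)
    return ens_err, avg_err, diversity
```

Because diversity is non-negative, the averaging ensemble is never worse than its average member under squared loss — the question the theory addresses is what plays the role of "diversity" for other losses and combiners.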
no code implementations • 26 Apr 2022 • Danny Wood, Tingting Mu, Gavin Brown
We introduce a novel bias-variance decomposition for a range of strictly convex margin losses, including the logistic loss (minimized by the classic LogitBoost algorithm), as well as the squared margin loss and canonical boosting loss.
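For context, the decomposition being generalised is the textbook squared-loss one: the error of a model averaged over training sets splits exactly into a bias term (centroid model vs. target) plus a variance term (individual models vs. centroid). The sketch below checks that identity on simulated predictions; extending it to logistic and other strictly convex margin losses is the paper's contribution and is not reproduced here.

```python
import numpy as np

def bias_variance(preds, y):
    """Classical squared-loss bias-variance decomposition.

    preds: (n_models, n_points) predictions from models trained on different
    datasets (here, simulated); y: (n_points,) targets. Returns
    (expected error, bias^2, variance) with expected error = bias^2 + variance.
    """
    centroid = preds.mean(axis=0)                  # the 'average' model
    bias_sq = np.mean((centroid - y) ** 2)
    variance = np.mean((preds - centroid) ** 2)
    expected_err = np.mean((preds - y) ** 2)
    return expected_err, bias_sq, variance
```

The identity holds because the cross term between (model − centroid) and (centroid − target) vanishes when averaged over models; for margin losses the squared distances are replaced by loss-specific divergences and the arithmetic mean by a loss-specific centroid.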