no code implementations • 1 May 2024 • Yuta Nakahara
We apply this algorithm to a clustering task in machine learning.
no code implementations • 9 Feb 2024 • Keito Tajima, Naoki Ichijo, Yuta Nakahara, Toshiyasu Matsushima
Bagging constructs decision trees independently, without evaluating the performance of their combination, and averages their predictions afterward.
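The bagging procedure described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: each "tree" is reduced to a hypothetical one-threshold stump fit on a bootstrap resample, and the ensemble simply averages the stumps' outputs with no evaluation of the combined model during construction.

```python
import random
import statistics

def fit_stump(xs, ys):
    """Fit a one-split stump: threshold at mean(x), mean(y) on each side."""
    t = statistics.mean(xs)
    left = [y for x, y in zip(xs, ys) if x <= t] or ys
    right = [y for x, y in zip(xs, ys) if x > t] or ys
    return t, statistics.mean(left), statistics.mean(right)

def predict_stump(stump, x):
    t, lo, hi = stump
    return lo if x <= t else hi

def bagging_fit(xs, ys, n_trees=25, seed=0):
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        # each stump sees an independent bootstrap resample of the data
        idx = [rng.randrange(len(xs)) for _ in xs]
        stumps.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return stumps

def bagging_predict(stumps, x):
    # predictions are averaged only after all stumps are built
    return statistics.mean(predict_stump(s, x) for s in stumps)

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
stumps = bagging_fit(xs, ys)
print(bagging_predict(stumps, 0.5), bagging_predict(stumps, 4.5))
```

Because no combination performance is checked during fitting, variance reduction comes only from averaging over the bootstrap randomness.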
no code implementations • 9 Feb 2024 • Ryota Maniwa, Naoki Ichijo, Yuta Nakahara, Toshiyasu Matsushima
Thus, ensembles of meta-trees are expected to improve predictive performance more effectively than a single meta-tree; however, no previous studies have constructed multiple meta-trees via boosting.
no code implementations • 12 Jun 2023 • Yuta Nakahara, Shota Saito, Naoki Ichijo, Koki Kazama, Toshiyasu Matsushima
In the field of decision trees, most previous studies have difficulty ensuring the statistical optimality of predictions for new data and suffer from overfitting, because trees are usually used only to represent the prediction functions constructed from given data.
no code implementations • 17 Mar 2023 • Yuta Nakahara, Toshiyasu Matsushima
Previously, we proposed a probabilistic data generation model represented by an unobservable tree and a sequential updating method to calculate a posterior distribution over a set of trees.
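The sequential posterior updating described above can be illustrated generically. This sketch does not use the paper's tree model: the candidate "trees" are replaced by hypothetical Bernoulli models with different head probabilities, and Bayes' rule is applied after each observation to maintain a posterior over the candidate set.

```python
def sequential_posterior(models, prior, data):
    """Sequentially update a posterior over a finite set of candidate models.

    models: per-model probability of observing 1 (a stand-in likelihood).
    prior:  initial weights over the models.
    data:   binary observations processed one at a time.
    """
    post = list(prior)
    for x in data:
        # multiply each weight by that model's likelihood of the observation
        post = [p * (m if x == 1 else 1.0 - m) for p, m in zip(post, models)]
        z = sum(post)
        post = [p / z for p in post]   # renormalize after each observation
    return post

models = [0.2, 0.5, 0.8]   # hypothetical candidate models
prior = [1 / 3] * 3
data = [1, 1, 0, 1, 1, 1]
print(sequential_posterior(models, prior, data))
```

The same update pattern applies when the hypotheses are trees: only the likelihood term changes, while the sequential renormalization step is identical.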
no code implementations • 26 Jan 2022 • Ryohei Oka, Yuta Nakahara, Toshiyasu Matsushima
When the basis is unknown, the number of candidate bases grows exponentially with the signal size.
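The exponential growth can be seen with a toy count. Assuming, purely for illustration, that each of the n signal coordinates can independently use one of two basis functions (this per-coordinate setup is not taken from the paper), the candidate count is 2**n:

```python
def num_candidate_bases(n, choices_per_coordinate=2):
    """Count candidate bases when each coordinate picks independently."""
    return choices_per_coordinate ** n

# candidate count doubles with every added coordinate
for n in (4, 8, 16):
    print(n, num_candidate_bases(n))
```

Even at n = 16 there are already 65,536 candidates, so exhaustive search over bases quickly becomes infeasible.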
no code implementations • 24 Jan 2022 • Yuta Nakahara, Shota Saito, Akira Kamatsuka, Toshiyasu Matsushima
The hierarchical and recursive expressive capability of rooted trees is applicable to represent statistical models in various areas, such as data compression, image processing, and machine learning.
no code implementations • 27 Sep 2021 • Yuta Nakahara, Shota Saito, Akira Kamatsuka, Toshiyasu Matsushima
Its parametric representation is suitable for calculating properties of our distribution, such as the mode, expectation, and posterior distribution, using recursive functions.
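A recursive computation of this kind can be sketched on a simple stand-in model. Assume, for illustration only, a distribution over binary rooted trees in which each node splits with probability g until a maximum depth (the parameters g and max_depth are assumptions, not the paper's parametrization); the expected number of leaves then follows a recursion that mirrors the tree structure.

```python
def expected_leaves(g, depth, max_depth):
    """Expected leaf count of a random binary tree that splits w.p. g."""
    if depth == max_depth:
        return 1.0                     # forced leaf at maximum depth
    # with prob 1-g this node is a leaf; otherwise it splits into
    # two independent subtrees one level deeper
    return (1.0 - g) * 1.0 + g * 2.0 * expected_leaves(g, depth + 1, max_depth)

print(expected_leaves(0.5, 0, 2))
```

The mode and posterior can be computed by the same pattern: a local quantity at each node combined with recursive calls on the children.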
no code implementations • 24 Sep 2020 • Yasushi Esaki, Yuta Nakahara, Toshiyasu Matsushima
We propose two new criteria to understand the advantage of deepening neural networks.