no code implementations • 21 Feb 2023 • Tristan Cinquin, Tammo Rukat, Philipp Schmidt, Martin Wistuba, Artur Bekasov
Variational inference is often used to implement Bayesian neural networks, but it is difficult to apply to gradient boosting machines (GBMs), because the decision trees used as weak learners are non-differentiable.
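To see why non-differentiability is the obstacle, consider a toy illustration (not the paper's method): a decision stump's prediction is piecewise constant in its split threshold, so the loss gradient that standard reparameterisation-based variational inference relies on is zero almost everywhere. The function `stump_prediction` and the data below are hypothetical, purely for illustration.

```python
import numpy as np

# Toy illustration: a hard tree split is piecewise constant in its threshold,
# so the gradient of any loss w.r.t. the threshold vanishes almost everywhere.
def stump_prediction(x, threshold, left_value=0.0, right_value=1.0):
    # Hard split: non-differentiable at x == threshold, flat elsewhere.
    return np.where(x < threshold, left_value, right_value)

x = np.array([0.2, 0.4, 0.6, 0.8])
y = np.array([0.0, 0.0, 1.0, 1.0])

def loss(threshold):
    return np.mean((stump_prediction(x, threshold) - y) ** 2)

# Finite-difference "gradient" of the loss w.r.t. the threshold is zero
# for every threshold that does not sit exactly on a data point.
eps = 1e-4
for t in [0.3, 0.5, 0.7]:
    grad = (loss(t + eps) - loss(t - eps)) / (2 * eps)
    print(f"threshold={t:.1f}  loss={loss(t):.3f}  finite-diff grad={grad:.3f}")
```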
no code implementations • 28 Jun 2019 • Tammo Rukat, Christopher Yau
We build upon probabilistic models for Boolean matrix and Boolean tensor factorisation that have recently been shown to solve these problems with unprecedented accuracy and to enable posterior inference to scale to billions of observations.
1 code implementation • ICML 2018 • Tammo Rukat, Chris Holmes, Christopher Yau
Boolean tensor decomposition approximates data of multi-way binary relationships as a product of interpretable low-rank binary factors, following the rules of Boolean algebra.
1 code implementation • 11 May 2018 • Tammo Rukat, Chris C. Holmes, Christopher Yau
Boolean tensor decomposition approximates data of multi-way binary relationships as a product of interpretable low-rank binary factors, following the rules of Boolean algebra.
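As a rough sketch of the forward map only (not the paper's inference procedure), the three-way Boolean decomposition sets x_ijk = OR_l (u_il AND v_jl AND w_kl); the factor sizes and random factors below are illustrative assumptions.

```python
import numpy as np

# Toy Boolean tensor (CP-style) decomposition: x_ijk = OR_l (u_il AND v_jl AND w_kl).
# This only reconstructs a tensor from given binary factors; it does not infer them.
rng = np.random.default_rng(0)

n, m, p, rank = 4, 5, 3, 2
U = rng.random((n, rank)) < 0.5    # binary factor for mode 1
V = rng.random((m, rank)) < 0.5    # binary factor for mode 2
W = rng.random((p, rank)) < 0.5    # binary factor for mode 3

# OR over the latent dimension of element-wise ANDs of the three factors.
X = np.any(
    U[:, None, None, :] & V[None, :, None, :] & W[None, None, :, :],
    axis=-1,
)

print(X.shape, X.astype(int).sum())
```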
no code implementations • 30 Nov 2017 • Tammo Rukat, Dustin Lange, Cédric Archambeau
Learning attribute applicability of products in the Amazon catalog (e.g., predicting that a shoe should have a value for size, but not for battery-type) at scale is a challenge.
no code implementations • ICML 2017 • Tammo Rukat, Chris C. Holmes, Michalis K. Titsias, Christopher Yau
Boolean matrix factorisation aims to decompose a binary data matrix into an approximate Boolean product of two low-rank binary matrices: one containing meaningful patterns, the other quantifying how the observations can be expressed as a combination of these patterns.
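A minimal sketch of the Boolean product underlying this model (not the posterior inference developed in the paper): x_ij = OR_l (u_il AND v_lj), with the small random factors below as illustrative assumptions.

```python
import numpy as np

# Toy Boolean matrix factorisation: X ≈ U ∘ V, where x_ij = OR_l (u_il AND v_lj).
# This shows the Boolean product only; the binary factors are sampled, not learned.
rng = np.random.default_rng(0)

n, d, rank = 6, 5, 2                 # observations, features, latent rank
U = rng.random((n, rank)) < 0.5      # which patterns each observation uses
V = rng.random((rank, d)) < 0.5      # which features each pattern switches on

# Boolean product: logical OR over the rank dimension of logical ANDs.
X = np.any(U[:, :, None] & V[None, :, :], axis=1)

print(X.astype(int))
```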
no code implementations • 7 Jun 2016 • Tammo Rukat, Adam Baker, Andrew Quinn, Mark Woolrich
Functional brain networks exhibit dynamics on the sub-second temporal scale and are often assumed to embody the physiological substrate of cognitive processes.