1 code implementation • 5 Dec 2023 • Michael Igorevich Ivanitskiy, Alex F. Spies, Tilman Räuker, Guillaume Corlouer, Chris Mathwin, Lucia Quirke, Can Rager, Rusheb Shah, Dan Valentine, Cecilia Diniz Behn, Katsumi Inoue, Samy Wu Fung
Transformer models underpin many recent advances in practical machine learning applications, yet understanding their internal behavior remains an open challenge for researchers.
1 code implementation • 19 Sep 2023 • Michael Igorevich Ivanitskiy, Rusheb Shah, Alex F. Spies, Tilman Räuker, Dan Valentine, Can Rager, Lucia Quirke, Chris Mathwin, Guillaume Corlouer, Cecilia Diniz Behn, Samy Wu Fung
Understanding how machine learning models respond to distributional shifts is a key research challenge.
no code implementations • 15 Jul 2022 • Alex F. Spies, Alessandra Russo, Murray Shanahan
We investigate the composability of soft rules learned by relational neural architectures when operating over object-centric (slot-based) representations, under a variety of sparsity-inducing constraints.