Search Results for author: Jake Mendel

Found 3 papers, 1 paper with code

The Local Interaction Basis: Identifying Computationally-Relevant and Sparsely Interacting Features in Neural Networks

1 code implementation17 May 2024 Lucius Bushnaq, Stefan Heimersheim, Nicholas Goldowsky-Dill, Dan Braun, Jake Mendel, Kaarel Hänni, Avery Griffin, Jörn Stöhler, Magdalena Wache, Marius Hobbhahn

We present a novel interpretability method that transforms the activations of the network into a new basis - the Local Interaction Basis (LIB).
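The core operation described above, re-expressing a layer's activations in a different basis, can be sketched as a simple change of basis. This is a minimal illustration only: the actual LIB transformation is derived from the network's computation and interactions, whereas here a random orthogonal matrix stands in as a hypothetical basis change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Activations of one layer for a batch of 5 inputs, 4 neurons each.
acts = rng.normal(size=(5, 4))

# Hypothetical invertible transformation into a new basis
# (a random orthogonal matrix here, NOT the LIB construction).
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))

acts_new_basis = acts @ Q         # express activations in the new basis
acts_recovered = acts_new_basis @ Q.T  # orthogonal, so Q.T inverts Q

assert np.allclose(acts, acts_recovered)
```

Because the transformation is invertible, no information about the network's behavior is lost; the hope is only that the new coordinates are more interpretable.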

Using Degeneracy in the Loss Landscape for Mechanistic Interpretability

no code implementations17 May 2024 Lucius Bushnaq, Jake Mendel, Stefan Heimersheim, Dan Braun, Nicholas Goldowsky-Dill, Kaarel Hänni, Cindy Wu, Marius Hobbhahn

We propose that if we can represent a neural network in a way that is invariant to reparameterizations that exploit the degeneracies, then this representation is likely to be more interpretable, and we provide some evidence that such a representation is likely to have sparser interactions.

Learning Theory
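A concrete example of the kind of degeneracy the abstract refers to is the rescaling symmetry of ReLU networks: scaling a hidden unit's input weights by α > 0 and its output weights by 1/α changes the parameters but not the function. The sketch below demonstrates this with a tiny two-layer network; it illustrates one well-known degeneracy, not the paper's representation method itself.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# A tiny two-layer ReLU network: y = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)

y = W2 @ relu(W1 @ x)

# Rescaling degeneracy: for alpha > 0, relu(alpha * z) = alpha * relu(z),
# so scaling hidden unit 0's input weights by alpha and its output
# weights by 1/alpha leaves the network's output unchanged.
alpha = 3.7
W1_scaled, W2_scaled = W1.copy(), W2.copy()
W1_scaled[0, :] *= alpha
W2_scaled[:, 0] /= alpha

y_scaled = W2_scaled @ relu(W1_scaled @ x)
assert np.allclose(y, y_scaled)
```

A representation that is invariant to such reparameterizations would assign both weight settings the same description, which is the property the paper argues should aid interpretability.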

Dynamical versus Bayesian Phase Transitions in a Toy Model of Superposition

no code implementations10 Oct 2023 Zhongtian Chen, Edmund Lau, Jake Mendel, Susan Wei, Daniel Murfet

We investigate phase transitions in a Toy Model of Superposition (TMS) using Singular Learning Theory (SLT).

Learning Theory
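For context, the Toy Model of Superposition studied here follows the standard setup of compressing more features than there are hidden dimensions through a tied-weight autoencoder, x̂ = ReLU(WᵀWx + b). The sketch below shows that forward pass only (untrained, with assumed shapes), not the paper's SLT analysis of its phase transitions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

n_features, n_hidden = 6, 2   # more features than hidden dimensions
W = 0.5 * rng.normal(size=(n_hidden, n_features))
b = np.zeros(n_features)

x = relu(rng.normal(size=n_features))  # nonnegative input features
h = W @ x                              # compress into n_hidden dims
x_hat = relu(W.T @ h + b)              # tied-weight reconstruction

assert h.shape == (n_hidden,)
assert x_hat.shape == (n_features,)
```

Because n_features > n_hidden, the model must represent features in "superposition", and how it does so shifts qualitatively at the phase transitions the paper investigates.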
