In the original narrative of deep learning, each neuron builds progressively more abstract, meaningful features by composing features in the preceding layer. In recent years, there has been some skepticism of this view, but what happens if you take it really seriously?

InceptionV1 is a classic vision model with around 10,000 unique neurons: a large number, but still on a scale that a group effort could attack. What if you simply go through the model, neuron by neuron, trying to understand each one and the connections between them? The circuits collaboration aims to find out.

The natural unit of publication for investigating circuits seems to be short papers on individual circuits or small families of features. Compared to a normal machine learning paper, this is a small and unusual scope. To facilitate exploration of this direction, Distill is inviting a "thread" of short articles on circuits, interspersed with critical commentary by experts in adjacent fields. The thread will be a living document, with new articles added over time, organized through an open Slack channel (#circuits in the Distill Slack). Content in this thread should be seen as early-stage exploratory research.
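To give a rough sense of the scale involved, the sketch below enumerates the convolutional channels of an InceptionV1-style network, which is one common way to arrive at a count on the order of 10,000 "neurons." It uses torchvision's GoogLeNet purely as a stand-in for the original InceptionV1 weights; the exact total depends on which layers (e.g., auxiliary classifiers) you choose to include.

```python
# Sketch: counting convolutional channels ("neurons" in the channel sense)
# of an InceptionV1-style network. torchvision's GoogLeNet is used here as a
# stand-in for the original InceptionV1; exact totals depend on which layers
# you decide to count.
import torch
import torchvision.models as models

model = models.googlenet(weights=None)  # untrained GoogLeNet / InceptionV1 architecture

total = 0
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        total += module.out_channels
        print(f"{name}: {module.out_channels} channels")

print(f"Total conv channels: {total}")
```

Going "neuron by neuron" in the circuits sense then amounts to studying each of these channels (via feature visualization, dataset examples, and the weights connecting it to channels in adjacent layers) rather than treating the count as the object of interest.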
