Scalable Adaptive Computation for Iterative Generation

22 Dec 2022 · Allan Jabri, David Fleet, Ting Chen

Natural data is redundant, yet predominant architectures tile computation uniformly across their input and output spaces. We propose Recurrent Interface Networks (RINs), an attention-based architecture that decouples its core computation from the dimensionality of the data, enabling adaptive computation for more scalable generation of high-dimensional data. RINs focus the bulk of computation (i.e. global self-attention) on a set of latent tokens, using cross-attention to read and write (i.e. route) information between latent and data tokens. Stacking RIN blocks allows bottom-up (data to latent) and top-down (latent to data) feedback, leading to deeper and more expressive routing. While this routing introduces challenges, it is less problematic in recurrent computation settings where the task (and routing problem) changes gradually, such as iterative generation with diffusion models. We show how to leverage recurrence by conditioning the latent tokens at each forward pass of the reverse diffusion process with those from prior computation, i.e. latent self-conditioning. RINs yield state-of-the-art pixel diffusion models for image and video generation, scaling to 1024×1024 images without cascades or guidance, while being domain-agnostic and up to 10× more efficient than 2D and 3D U-Nets.
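The following is a minimal sketch, not the authors' implementation, of the read-compute-write routing described above, assuming PyTorch. Names such as `RINBlock`, `read`, `compute`, and `write` are illustrative, and the toy usage at the end only gestures at latent self-conditioning (warm-starting latents from the previous denoising step); the paper's full model additionally includes time conditioning, tokenization, learned latent initializations, and normalization details omitted here.

```python
import torch
import torch.nn as nn

class RINBlock(nn.Module):
    """One read-compute-write block (hypothetical simplification): cross-attention
    routes information between a small set of latent tokens and the much larger
    set of data tokens."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.read = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.compute = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.write = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, latents, data):
        # Read: latents cross-attend to data tokens (bottom-up routing).
        latents = latents + self.read(latents, data, data, need_weights=False)[0]
        # Compute: global self-attention only over the latent tokens, so the
        # bulk of computation is decoupled from data dimensionality.
        latents = latents + self.compute(latents, latents, latents, need_weights=False)[0]
        latents = latents + self.mlp(latents)
        # Write: data tokens cross-attend to latents (top-down routing).
        data = data + self.write(data, latents, latents, need_weights=False)[0]
        return latents, data


# Toy usage: 64 latent tokens route information for 4096 data tokens (e.g. a
# tokenized 64x64 image). `prev_latents` stands in for latent self-conditioning:
# at each reverse-diffusion step the latents would be warm-started from the
# previous step's latents (zeros at the first step).
batch, n_data, n_latent, dim = 2, 4096, 64, 256
blocks = nn.ModuleList([RINBlock(dim) for _ in range(4)])
data = torch.randn(batch, n_data, dim)
prev_latents = torch.zeros(batch, n_latent, dim)
latents = torch.randn(batch, n_latent, dim) + prev_latents.detach()
for block in blocks:
    latents, data = block(latents, data)
```

Because self-attention runs only over the small latent set, stacking more blocks deepens the routing without the quadratic cost in the number of data tokens that a uniform attention architecture would incur.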


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Generation | ImageNet 128x128 | RIN | FID | 2.75 | #5 |
| Image Generation | ImageNet 128x128 | RIN | IS | 144.1 | #4 |
| Image Generation | ImageNet 256x256 | RIN | FID | 4.51 | #33 |
| Image Generation | ImageNet 64x64 | RIN | Inception Score | 66.5 | #2 |
| Image Generation | ImageNet 64x64 | RIN | FID | 1.23 | #1 |
| Video Prediction | Kinetics-600 (12 frames, 64x64) | RIN (400 steps) | FVD | 11.5 | #5 |
| Video Prediction | Kinetics-600 (12 frames, 64x64) | RIN (400 steps) | IS | 17.7 | #1 |
| Video Prediction | Kinetics-600 (12 frames, 64x64) | RIN (1000 steps) | FVD | 10.8 | #4 |
| Video Prediction | Kinetics-600 (12 frames, 64x64) | RIN (1000 steps) | IS | 17.7 | #1 |
