Representation Recycling for Streaming Video Analysis

28 Apr 2022 · Can Ufuk Ertenli, Ramazan Gokberk Cinbis, Emre Akbas

We present StreamDEQ, a method that infers frame-wise representations on videos with minimal per-frame computation. In the absence of ad-hoc solutions, conventional deep networks perform feature extraction from scratch at each frame. We instead aim to build streaming recognition models that natively exploit the temporal smoothness between consecutive video frames. We observe that the recently emerging implicit layer models provide a convenient foundation for such models, as they define representations as the fixed points of shallow networks, which must be estimated using iterative methods. Our main insight is to distribute the inference iterations over the temporal axis, using the most recent representation as the starting point at each frame. This scheme effectively recycles the recent inference computations and greatly reduces the required processing time. Through extensive experimental analysis, we show that StreamDEQ recovers near-optimal representations within a few frames and maintains an up-to-date representation throughout the video. Our experiments on video semantic segmentation, video object detection, and human pose estimation in videos show that StreamDEQ achieves on-par accuracy with the baseline while being 2-4x faster.
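The recycling scheme lends itself to a short sketch. Below is a minimal, illustrative implementation, not the authors' code: `f` stands for the shallow implicit layer whose fixed point defines the representation, and the loop warm-starts each frame's fixed-point iteration from the previous frame's estimate. All names (`stream_deq`, `f`, `z0`, `num_iters`) are assumptions made for illustration.

```python
import torch

@torch.no_grad()
def stream_deq(frames, f, z0, num_iters=4):
    """Streaming fixed-point inference sketch.

    frames    -- iterable of per-frame inputs x_t
    f         -- shallow layer whose fixed point defines the
                 representation: z* = f(z*, x)
    z0        -- initial representation estimate for the first frame
    num_iters -- fixed-point iterations spent on each frame
    """
    z = z0
    for x in frames:
        # Recycle the most recent representation as the starting point,
        # then refine it with only a few iterations. Because consecutive
        # frames are similar, the warm start keeps z close to the
        # current frame's fixed point.
        for _ in range(num_iters):
            z = f(z, x)
        yield z

# Toy demo: f is a contraction whose fixed point is z* = x, so the
# streamed estimate should track the slowly changing input.
f = lambda z, x: 0.5 * z + 0.5 * x
frames = [torch.full((1, 4), float(t)) for t in range(5)]
for z in stream_deq(frames, f, torch.zeros(1, 4), num_iters=2):
    print(z[0, 0].item())
```

In the paper, the iteration would be applied to an implicit-layer network and could use a more sophisticated fixed-point solver; the plain loop above only illustrates how inference iterations are distributed over the temporal axis.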


Results

Task                   Dataset         Model                      mIoU (Rank)   FPS (Rank)
Semantic Segmentation  Cityscapes val  StreamDEQ (8 iterations)   78.2 (#55)    1.1 (#4)
Semantic Segmentation  Cityscapes val  StreamDEQ (4 iterations)   71.5 (#76)    1.9 (#3)
Semantic Segmentation  Cityscapes val  StreamDEQ (2 iterations)   57.9 (#85)    2.9 (#2)
Semantic Segmentation  Cityscapes val  StreamDEQ (1 iteration)    45.5 (#86)    4.3 (#1)
