Cost Function Unrolling in Unsupervised Optical Flow

30 Nov 2020 · Gal Lifshitz, Dan Raviv

Steepest descent algorithms, which are commonly used in deep learning, use the gradient as the descent direction, either as-is or after a direction shift using preconditioning. In many scenarios, calculating the gradient is numerically hard due to complex or non-differentiable cost functions, especially near singular points. In this work we focus on differentiating the Total Variation (TV) semi-norm commonly used in unsupervised cost functions. Specifically, we derive a differentiable proxy to the hard L1 smoothness constraint through a novel iterative scheme, which we refer to as Cost Unrolling. By producing more accurate gradients during training, our method enables finer predictions from a given DNN model through improved convergence, without modifying its architecture or increasing computational complexity. We demonstrate our method on the unsupervised optical flow task. Replacing the L1 smoothness constraint with our unrolled cost during the training of a well-known baseline, we report improved results on both the MPI Sintel and KITTI 2015 unsupervised optical flow benchmarks. In particular, EPE is reduced by up to 15.82% on occluded pixels, where the smoothness constraint is dominant, enabling the detection of much sharper motion edges.
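To make the idea concrete, below is a minimal PyTorch sketch of how a differentiable proxy to the L1 (TV) smoothness cost could be unrolled via an ADMM-style splitting. The function names (unrolled_tv_smoothness, soft_threshold), the exact update rule, and the hyperparameters rho and num_iters are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch: unrolled ADMM-style proxy for the L1 / Total Variation
# smoothness cost on a predicted optical flow field (assumed shape (B, 2, H, W)).
import torch

def spatial_gradients(flow):
    """Forward differences of the flow field along x and y."""
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    return dx, dy

def soft_threshold(x, tau):
    """Proximal operator of the L1 norm (soft shrinkage)."""
    return torch.sign(x) * torch.clamp(torch.abs(x) - tau, min=0.0)

def unrolled_tv_smoothness(flow, rho=1.0, num_iters=3):
    """Differentiable proxy to sum |grad(flow)|.

    Instead of penalizing the flow gradients with a hard L1 norm (whose
    gradient is singular at zero), we unroll a few ADMM-like steps that
    couple the gradients to soft-thresholded auxiliary variables and
    penalize the remaining quadratic coupling, which is smooth everywhere.
    """
    loss = 0.0
    for g in spatial_gradients(flow):
        z = torch.zeros_like(g)  # auxiliary (split) variable
        u = torch.zeros_like(g)  # scaled dual variable
        for _ in range(num_iters):
            z = soft_threshold(g + u, 1.0 / rho)  # L1 proximal step
            u = u + g - z                         # dual ascent step
        # quadratic coupling + L1 on the auxiliary variable
        loss = loss + (0.5 * rho * (g - z + u) ** 2).mean() + z.abs().mean()
    return loss

# Illustrative usage during unsupervised training (names are placeholders):
# flow = model(img1, img2)
# total_loss = photometric_loss + lambda_smooth * unrolled_tv_smoothness(flow)
```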

Task                     Dataset       Model         Metric                   Value   Global Rank
Optical Flow Estimation  KITTI 2015    UnrolledCost  Fl-all                   10.81   #14
Optical Flow Estimation  Sintel-clean  UnrolledCost  Average End-Point Error  4.69    #25
Optical Flow Estimation  Sintel-final  UnrolledCost  Average End-Point Error  5.8     #25

Methods


No methods listed for this paper.