Scalable Nonlinear Learning with Adaptive Polynomial Expansions

Can we effectively learn a nonlinear representation in time comparable to linear learning? We describe a new algorithm that explicitly and adaptively expands higher-order interaction features over base linear representations. The algorithm is designed for extreme computational efficiency, and an extensive experimental study shows that its ability to trade off computation for prediction quality compares very favorably against strong baselines.
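To make the core idea concrete, here is a minimal sketch of adaptive interaction-feature expansion: fit a linear model on the base features, use the learned weights to pick a small set of promising features, add their pairwise products as new features, and refit. This is an illustrative toy, not the paper's algorithm; the function names (`sgd_linear`, `expand_interactions`), the top-k-by-weight heuristic, and all hyperparameters are assumptions made for this example.

```python
# Illustrative sketch only: adaptively expand pairwise interaction features
# over a base linear representation, guided by the weights of a linear fit.
# All names and heuristics here are assumptions, not the paper's method.
import numpy as np

def sgd_linear(X, y, epochs=20, lr=0.02):
    """Plain SGD for squared loss on a linear model."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in np.random.permutation(n):
            err = X[i] @ w - y[i]
            w -= lr * err * X[i]
    return w

def expand_interactions(X, w, k=3):
    """Append products of the k base features with the largest |weight|."""
    top = np.argsort(-np.abs(w))[:k]
    pairs = [(a, b) for ai, a in enumerate(top) for b in top[ai:]]
    new_cols = [X[:, a] * X[:, b] for a, b in pairs]
    return np.column_stack([X] + new_cols)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = X[:, 0] + X[:, 1] + 2 * X[:, 0] * X[:, 1]  # target with a true interaction
    w = sgd_linear(X, y)                    # stage 1: linear fit on base features
    X2 = expand_interactions(X, w, k=3)     # stage 2: expand promising interactions
    w2 = sgd_linear(X2, y)                  # stage 3: refit on expanded features
    print("linear MSE:  ", np.mean((X @ w - y) ** 2))
    print("expanded MSE:", np.mean((X2 @ w2 - y) ** 2))
```

In this toy setting the linear fit cannot capture the interaction term, while the expanded model recovers it almost exactly, which is the kind of computation/prediction tradeoff the abstract refers to: only a small, adaptively chosen set of interactions is ever materialized.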
