LRN: Limitless Routing Networks for Effective Multi-task Learning

29 Sep 2021 · Ryan Wickman, Xiaofei Zhang, Weizi Li

Multi-task learning (MTL) is the problem of learning multiple tasks simultaneously, typically by sharing model parameters across tasks. The shared representation yields generalized, task-invariant parameters and helps in learning tasks with sparse data. However, unforeseen task interference can cause one task to improve at the expense of another. A recent paradigm designed to tackle such problems is the routing network, which builds a neural network architecture from a set of modules conditioned on the input instance, the task, and the previous outputs of other modules. This approach carries several constraints, so we propose the Limitless Routing Network (LRN), which removes these constraints through a transformer-based router and a reevaluation of the state and action space. We also provide a simple solution to the module collapse problem and demonstrate superior accuracy on several MTL benchmarks compared to the original routing network.
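For intuition, below is a minimal sketch of the routing-network idea with a transformer-based router, written in PyTorch. The module pool size, routing depth, state encoding, and greedy module selection are illustrative assumptions, not the paper's design; how the router is actually trained (routing networks are typically trained with RL-style signals) is omitted.

```python
# Illustrative sketch of a routing network with a transformer-based router.
# Not the paper's implementation: module count, depth, and the router's
# state encoding are assumptions made for this example.
import torch
import torch.nn as nn


class TransformerRouter(nn.Module):
    """Scores candidate modules from a sequence of state tokens
    (task embedding, current representation, previous module outputs)."""

    def __init__(self, d_model: int, num_modules: int, num_tasks: int):
        super().__init__()
        self.task_embed = nn.Embedding(num_tasks, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_modules)

    def forward(self, task_id, state_tokens):
        # state_tokens: (batch, seq, d_model) -- representation + prior outputs
        task_tok = self.task_embed(task_id).unsqueeze(1)   # (batch, 1, d)
        tokens = torch.cat([task_tok, state_tokens], dim=1)
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])                    # module logits


class RoutingNetwork(nn.Module):
    """Composes a per-instance architecture by repeatedly picking a module."""

    def __init__(self, d_model=64, num_modules=8, num_tasks=3, depth=3):
        super().__init__()
        self.module_pool = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU())
            for _ in range(num_modules))
        self.router = TransformerRouter(d_model, num_modules, num_tasks)
        self.depth = depth

    def forward(self, x, task_id):
        history = [x.unsqueeze(1)]              # routing "state" tokens
        h = x
        for _ in range(self.depth):
            logits = self.router(task_id, torch.cat(history, dim=1))
            choice = logits.argmax(dim=-1)      # hard, greedy routing (no router training here)
            # Apply the chosen module to each example in the batch.
            h = torch.stack([self.module_pool[int(c)](h[i])
                             for i, c in enumerate(choice)])
            history.append(h.unsqueeze(1))
        return h


if __name__ == "__main__":
    net = RoutingNetwork()
    x = torch.randn(4, 64)
    task_id = torch.tensor([0, 1, 2, 0])
    print(net(x, task_id).shape)  # torch.Size([4, 64])
```

The key design point the sketch illustrates is that the router conditions its module choice on the task identity, the current representation, and the outputs of previously selected modules, so the composed architecture can differ per input instance and per task.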
