
Analysis and Prediction of NLP Models Via Task Embeddings

Task embeddings are low-dimensional representations that are trained to capture task properties. In this paper, we propose MetaEval, a collection of 101 NLP tasks. We fit a single transformer to all MetaEval tasks jointly while conditioning it on learned embeddings. The resulting task embeddings enable a novel analysis of the space of tasks. We then show that task aspects can be mapped to task embeddings for new tasks without using any annotated examples. Predicted embeddings can modulate the encoder for zero-shot inference and outperform a zero-shot baseline on GLUE tasks. The provided multitask setup can function as a benchmark for future transfer learning research.
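The central mechanism of the abstract, fitting one shared transformer to many tasks while conditioning it on a learned per-task embedding, can be illustrated with a minimal PyTorch sketch. This is not the paper's exact architecture: the model names, dimensions, and the simple prepend-the-task-embedding-as-a-token conditioning below are all illustrative assumptions (the paper may use a different modulation scheme).

```python
import torch
import torch.nn as nn

class TaskConditionedEncoder(nn.Module):
    """Shared encoder conditioned on a learned per-task embedding.

    Illustrative sketch: the low-dimensional task embedding is projected
    into model space and prepended to the token sequence, so every
    attention layer can condition on the task identity.
    """

    def __init__(self, num_tasks, vocab_size=30522, d_model=256,
                 task_dim=32, num_layers=4, num_heads=8):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # One low-dimensional embedding per task, learned jointly
        # with the shared encoder weights.
        self.task_emb = nn.Embedding(num_tasks, task_dim)
        self.task_proj = nn.Linear(task_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, input_ids, task_ids):
        tokens = self.token_emb(input_ids)                 # (B, T, D)
        task = self.task_proj(self.task_emb(task_ids))     # (B, D)
        x = torch.cat([task.unsqueeze(1), tokens], dim=1)  # prepend task "token"
        return self.encoder(x)[:, 0]                       # pooled representation

# Joint multitask step: each batch carries its task id, so the shared
# transformer and the task embeddings are optimized together.
model = TaskConditionedEncoder(num_tasks=101)
input_ids = torch.randint(0, 30522, (8, 16))
task_ids = torch.randint(0, 101, (8,))
pooled = model(input_ids, task_ids)
print(pooled.shape)  # torch.Size([8, 256])
```

Under this setup, zero-shot transfer amounts to supplying a *predicted* task embedding (mapped from task aspects) in place of a learned row of `task_emb`, without updating any encoder weights.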
