Retrieval-Augmented Generation (RAG)

Retrieval-Augmented Generation, or RAG, is a language generation model that combines pre-trained parametric and non-parametric memory. Specifically, the parametric memory is a pre-trained seq2seq model, and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. For a query $x$, Maximum Inner Product Search (MIPS) is used to find the top-$K$ documents $z_i$. For the final prediction $y$, the model treats $z$ as a latent variable and marginalizes over the seq2seq predictions given the different documents.

Source: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
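
The marginalization can be written out explicitly. In the paper's RAG-Sequence formulation, the retriever $p_\eta(z \mid x)$ scores documents and the seq2seq generator $p_\theta(y \mid x, z)$ produces the output, so the top-$K$ approximation is:

$$
p(y \mid x) \;\approx\; \sum_{z \,\in\, \text{top-}K\left(p_\eta(\cdot \mid x)\right)} p_\eta(z \mid x)\, p_\theta(y \mid x, z)
$$

Below is a minimal sketch of the retrieve-then-marginalize step, assuming a toy in-memory index and a placeholder likelihood in place of the real DPR retriever and BART generator; all names here (`doc_index`, `mips_top_k`, `generator_loglik`) are hypothetical stand-ins, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the dense Wikipedia index: one unit vector per document.
doc_index = rng.standard_normal((10_000, 128))
doc_index /= np.linalg.norm(doc_index, axis=1, keepdims=True)

def mips_top_k(query_vec, index, k=5):
    """Exact Maximum Inner Product Search: top-k documents by dot product.
    (At Wikipedia scale the paper uses an approximate index, e.g. FAISS.)"""
    scores = index @ query_vec
    top = np.argpartition(-scores, k)[:k]   # unordered top-k indices
    top = top[np.argsort(-scores[top])]     # sort by score, descending
    return top, scores[top]

def generator_loglik(doc_id, query_vec):
    """Hypothetical placeholder for log p_theta(y | x, z); in RAG this is
    the pre-trained seq2seq model conditioned on the retrieved document."""
    return -np.abs(doc_index[doc_id] @ query_vec - 1.0)

query = rng.standard_normal(128)            # encoded query x
ids, scores = mips_top_k(query, doc_index, k=5)

# Treat the document z as a latent variable: softmax the retrieval scores
# into p_eta(z | x), then marginalize p_theta(y | x, z) over the top-k docs.
p_z = np.exp(scores - scores.max())
p_z /= p_z.sum()
p_y = float(sum(pz * np.exp(generator_loglik(i, query))
                for pz, i in zip(p_z, ids)))
print(f"top-{len(ids)} docs: {ids.tolist()}, marginal p(y | x) ~ {p_y:.4f}")
```

In the actual model the retrieval distribution is itself a softmax over the inner products, $p_\eta(z \mid x) \propto \exp\!\left(d(z)^\top q(x)\right)$, which is what the softmax over MIPS scores above mimics.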

Tasks


Task                             Papers    Share
Retrieval                           131   32.03%
Question Answering                   53   12.96%
Language Modelling                   27    6.60%
Information Retrieval                21    5.13%
Large Language Model                 19    4.65%
Open-Domain Question Answering       12    2.93%
Text Generation                      12    2.93%
Benchmarking                          8    1.96%
Sentence                              6    1.47%

Categories

Transformers