Search Results for author: Yapei Chang

Found 4 papers, 4 papers with code

FABLES: Evaluating faithfulness and content selection in book-length summarization

3 code implementations • 1 Apr 2024 • Yekyung Kim, Yapei Chang, Marzena Karpinska, Aparna Garimella, Varun Manjunatha, Kyle Lo, Tanya Goyal, Mohit Iyyer

While LLM-based auto-raters have proven reliable for factuality and coherence in other settings, we implement several LLM raters of faithfulness and find that none correlates strongly with human annotations, especially with regard to detecting unfaithful claims.

Long-Context Understanding

BooookScore: A systematic exploration of book-length summarization in the era of LLMs

2 code implementations • 1 Oct 2023 • Yapei Chang, Kyle Lo, Tanya Goyal, Mohit Iyyer

We find that closed-source LLMs such as GPT-4 and Claude 2 produce summaries with higher BooookScore than those generated by open-source models.

RankGen: Improving Text Generation with Large Ranking Models

1 code implementation • 19 May 2022 • Kalpesh Krishna, Yapei Chang, John Wieting, Mohit Iyyer

Given an input sequence (or prefix), modern language models often assign high probabilities to output sequences that are repetitive, incoherent, or irrelevant to the prefix; as such, model-generated text also contains such artifacts.

Contrastive Learning • Language Modelling +2

RELIC: Retrieving Evidence for Literary Claims

1 code implementation • ACL 2022 • Katherine Thai, Yapei Chang, Kalpesh Krishna, Mohit Iyyer

Humanities scholars commonly provide evidence for claims that they make about a work of literature (e.g., a novel) in the form of quotations from the work.

Information Retrieval • Retrieval +2
