no code implementations • JEP/TALN/RECITAL 2022 • Bingzhi Li, Guillaume Wisniewski, Benoît Crabbé
This work addresses the question of where the syntactic information encoded in transformer representations is localized.
no code implementations • ACL 2022 • Bingzhi Li, Guillaume Wisniewski, Benoit Crabbé
This work addresses the question of the localization of the syntactic information encoded in transformer representations.
1 code implementation • 23 Oct 2023 • Bingzhi Li, Lucia Donatelli, Alexander Koller, Tal Linzen, Yuekun Yao, Najoung Kim
The goal of compositional generalization benchmarks is to evaluate how well models generalize to new complex linguistic expressions.
1 code implementation • 8 Dec 2022 • Bingzhi Li, Guillaume Wisniewski, Benoît Crabbé
Long-distance agreement, which provides evidence for syntactic structure, is increasingly used to assess the syntactic generalization abilities of neural language models.
no code implementations • EMNLP 2021 • Bingzhi Li, Guillaume Wisniewski, Benoit Crabbé
Many recent works have argued that the unsupervised sentence representations of neural networks encode syntactic information, based on the observation that neural language models are able to predict the agreement between a verb and its subject.
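The agreement-prediction paradigm mentioned above is usually run on minimal pairs: a model "predicts agreement" if it assigns a higher score to the grammatical sentence than to a variant that differs only in verb number. A minimal sketch of that evaluation criterion, using a hand-set toy scorer in place of a real neural language model (the function names and scores are illustrative, not from the papers):

```python
import math
from typing import Dict

def sentence_logprob(sentence: str, logprobs: Dict[str, float]) -> float:
    """Sum per-token log-probabilities; unseen tokens get a small floor value."""
    return sum(logprobs.get(tok, math.log(1e-6)) for tok in sentence.split())

def prefers_grammatical(grammatical: str, ungrammatical: str,
                        logprobs: Dict[str, float]) -> bool:
    """The evaluation criterion: does the 'model' score the grammatical
    sentence above its minimally different, ungrammatical counterpart?"""
    return (sentence_logprob(grammatical, logprobs)
            > sentence_logprob(ungrammatical, logprobs))

# Toy stand-in for a language model: in real work these scores come from
# an LSTM or transformer LM; here they are hand-set to keep the demo runnable.
toy_logprobs = {"the": -1.0, "keys": -3.0, "to": -1.5,
                "cabinet": -4.0, "are": -2.0, "is": -2.5}

print(prefers_grammatical("the keys to the cabinet are",
                          "the keys to the cabinet is",
                          toy_logprobs))  # True with these toy scores
```

The attractor noun ("cabinet", singular) sitting between subject and verb is what makes such pairs a test of structure rather than of linear proximity.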
1 code implementation • EACL 2021 • Bingzhi Li, Guillaume Wisniewski
We evaluate the ability of BERT embeddings to represent tense information, taking French and Chinese as a case study.
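Evaluations of this kind are typically run as probing experiments: the embeddings are frozen and a small linear classifier is trained to predict the property (here, tense), with its accuracy read as evidence that the property is linearly decodable. A hedged, self-contained sketch of that setup, substituting synthetic 16-dimensional vectors with a planted tense direction for real BERT embeddings (all names and data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n = 16, 400

# Synthetic "embeddings": tense (0 = past, 1 = present) shifts the vectors
# along the first axis; real experiments would extract vectors from BERT.
labels = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, dim)) + np.outer(labels * 4.0 - 2.0, np.eye(dim)[0])

def train_linear_probe(X, y, lr=0.1, epochs=200):
    """Logistic-regression probe trained with plain gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        grad = p - y
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Train on the first 300 examples, evaluate on the held-out 100.
w, b = train_linear_probe(X[:300], labels[:300])
acc = ((X[300:] @ w + b > 0).astype(int) == labels[300:]).mean()
print(f"probe accuracy: {acc:.2f}")  # high accuracy -> tense is linearly decodable
```

Keeping the probe linear and small is the standard precaution: a high-capacity probe could learn the property itself rather than reveal what the embeddings already encode.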