no code implementations • JEP/TALN/RECITAL 2022 • Bingzhi Li, Guillaume Wisniewski, Benoît Crabbé
This work addresses the question of localizing the syntactic information encoded in transformer representations.
1 code implementation • 8 Dec 2022 • Bingzhi Li, Guillaume Wisniewski, Benoît Crabbé
Long-distance agreement, a phenomenon that provides evidence for syntactic structure, is increasingly used to assess the syntactic generalization of neural language models.
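As a hedged illustration of this evaluation paradigm (the sentences, scorer, and function names below are hypothetical, not taken from the paper), a long-distance agreement item asks whether a model prefers the verb form that agrees in number with a distant subject, despite intervening distractor nouns:

```python
# Hypothetical minimal long-distance agreement test item: the plural subject
# "keys" is separated from the verb by singular distractors ("man", "cabinet"),
# so a model must track the syntactic subject, not the nearest noun.
prefix = "The keys that the man near the cabinet"
correct, wrong = "are", "is"

def prefers_correct(score):
    """score: callable mapping (prefix, verb) -> a model's score for that verb.

    Returns True when the model assigns a higher score to the agreeing verb.
    """
    return score(prefix, correct) > score(prefix, wrong)

# Toy scorer standing in for a real language model (illustrative only).
toy = lambda p, v: 1.0 if v == "are" else 0.0
print(prefers_correct(toy))  # True
```

Agreement accuracy over many such minimal pairs is then reported as the model's syntactic generalization score.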
2 code implementations • Findings (ACL) 2022 • Nathanaël Beau, Benoît Crabbé
Considering the seq2seq architecture of TranX for natural language to code translation, we identify four key components: grammatical constraints, lexical preprocessing, input representations, and copy mechanisms.
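A hedged sketch of the last of those components, a pointer-style copy mechanism that mixes a generation distribution with a copy distribution over source tokens. All names, shapes, and the mixing scheme here are illustrative assumptions, not TranX internals:

```python
import numpy as np

def copy_mechanism(p_vocab, copy_scores, src_to_vocab, p_gen):
    """Mix a vocabulary distribution with a copy distribution over source tokens.

    p_vocab:      (V,) generation distribution over the target vocabulary
    copy_scores:  (S,) unnormalized attention scores over the S source tokens
    src_to_vocab: (S,) vocabulary index of each source token
    p_gen:        scalar in [0, 1], probability of generating vs. copying
    """
    # softmax over source positions gives the copy distribution
    p_copy_src = np.exp(copy_scores - copy_scores.max())
    p_copy_src /= p_copy_src.sum()
    # scatter copy mass onto vocabulary slots (repeated tokens accumulate)
    p_copy = np.zeros_like(p_vocab)
    np.add.at(p_copy, src_to_vocab, p_copy_src)
    return p_gen * p_vocab + (1 - p_gen) * p_copy

V, S = 6, 3
p_vocab = np.full(V, 1 / V)
out = copy_mechanism(p_vocab, np.array([0.1, 2.0, 0.3]), np.array([4, 4, 5]), 0.5)
```

The result remains a valid probability distribution over the vocabulary, with extra mass on identifiers that appear verbatim in the source utterance.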
Ranked #2 on Code Generation on Django
no code implementations • 6 Jan 2021 • Yair Lakretz, Théo Desbordes, Jean-Rémi King, Benoît Crabbé, Maxime Oquab, Stanislas Dehaene
Finally, probing the internal states of the model during the processing of sentences with nested tree structures, we found a complex encoding of grammatical agreement information (e.g., grammatical number), in which all the information for multiple nouns was carried by a single unit.
7 code implementations • LREC 2020 • Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab
Language models have become a key step in achieving state-of-the-art results on many different Natural Language Processing (NLP) tasks.
Ranked #1 on Natural Language Inference on XNLI French
1 code implementation • TACL 2019 • Maximin Coavoux, Benoît Crabbé, Shay B. Cohen
Lexicalized parsing models are based on the assumptions that (i) constituents are organized around a lexical head and (ii) bilexical statistics are crucial to resolving ambiguities.