no code implementations • EMNLP (LaTeCHCLfL, CLFL, LaTeCH) 2021 • Zuoyu Tian, Dylan Jarrett, Juan Escalona Torres, Patricia Amaral
The results demonstrate that our test sets are capable of measuring the quality of vector space models and can provide a holistic view of the model’s ability to capture syntactic and semantic information.
no code implementations • EMNLP (LaTeCHCLfL, CLFL, LaTeCH) 2021 • Zuoyu Tian, Sandra Kübler
In this study, we investigate language change in Chinese Biji by using a classification task: classifying Ancient Chinese texts by time period.
no code implementations • LREC (LAW) 2022 • Ludovic Mompelat, Zuoyu Tian, Amanda Kessler, Matthew Luettgen, Aaryana Rajanala, Sandra Kübler, Michelle Seelig
Conspiracy theories have found a new channel on the internet and spread by bringing together like-minded people, thus functioning as an echo chamber.
1 code implementation • 23 Oct 2022 • Jian Zhu, Zuoyu Tian, Yadong Liu, Cong Zhang, Chia-wen Lo
Inducing semantic representations directly from speech signals is a highly challenging task but has many useful applications in speech mining and spoken language understanding.
1 code implementation • Findings (ACL) 2021 • Hai Hu, He Zhou, Zuoyu Tian, Yiwen Zhang, Yina Ma, Yanting Li, Yixin Nie, Kyle Richardson
These results, however, come with important caveats: cross-lingual models often perform best when trained on a mixture of English and high-quality monolingual NLI data (OCNLI), and are often hindered by automatically translated resources (XNLI-zh).
no code implementations • LREC 2020 • Zuoyu Tian, Sandra Kübler
In this study, we investigate the use of Brown clustering for offensive language detection.
3 code implementations • COLING 2020 • Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, Zhenzhong Lan
The advent of natural language understanding (NLU) benchmarks for English, such as GLUE and SuperGLUE, allows new NLU models to be evaluated across a diverse set of tasks.
no code implementations • WS 2019 • Hai Hu, Wen Li, He Zhou, Zuoyu Tian, Yiwen Zhang, Liang Zou
This paper describes the IUCL system in the VarDial 2019 evaluation campaign for the task of discriminating between the Mainland and Taiwan varieties of Mandarin Chinese.
1 code implementation • SEMEVAL 2019 • Jian Zhu, Zuoyu Tian, Sandra Kübler
This paper describes the UM-IU@LING's system for the SemEval 2019 Task 6: OffensEval.