no code implementations • 8 May 2024 • Cheng Song, Lu Lu, Zhen Ke, Long Gao, Shuai Ding
In this paper, we propose a contrastive learning framework utilizing selective strong augmentation (SSA) for self-supervised gait-based emotion representation, which aims to derive effective representations from limited labeled gait data.
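The paper's selective strong augmentation (SSA) is specific to gait data, but the contrastive objective such frameworks typically optimize can be sketched generically. Below is a minimal NT-Xent (normalized temperature-scaled cross entropy) contrastive loss in NumPy; the function name, temperature value, and the assumption that the two inputs are embeddings of two augmented views of the same samples are illustrative, not the paper's implementation.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Minimal NT-Xent contrastive loss (illustrative sketch).

    z1, z2: (N, D) arrays of embeddings for two augmented views of the
    same N samples; row i of z1 and row i of z2 form a positive pair.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize rows
    sim = z @ z.T / temperature                         # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                      # mask self-similarity
    n = len(z1)
    # positive for row i is row (i + N) mod 2N
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    row_max = sim.max(axis=1, keepdims=True)            # for numerical stability
    logits = sim - row_max
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()
```

The loss pulls the two views of each sample together while pushing apart all other pairs in the batch, which is the mechanism a selective augmentation strategy would feed stronger positive pairs into.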
no code implementations • NAACL 2021 • Zhen Ke, Liang Shi, Songtao Sun, Erli Meng, Bin Wang, Xipeng Qiu
Recent research shows that pre-trained models (PTMs) are beneficial to Chinese Word Segmentation (CWS).
no code implementations • 13 Apr 2020 • Zhen Ke, Liang Shi, Erli Meng, Bin Wang, Xipeng Qiu, Xuanjing Huang
Besides, the pre-trained BERT language model has also been introduced into the MCCWS task within a multi-task learning framework.