no code implementations • LREC 2016 • Timothy Wong, Claire Li, Sam Lam, Billy Chiu, Qin Lu, Minglei Li, Dan Xiong, Roy Shing Yu, Vincent T. Y. Ng
This paper reports our work on building a Cantonese Speech-to-Text (STT) system with a syllable-based acoustic model.
1 code implementation • 19 Sep 2023 • Yalun Wang, Shidong Chen, Huicong Bian, Weixiao Li, Qin Lu
To ascertain the effectiveness of our approach, we show that our SANet model achieves competitive results on the real-time CamVid and Cityscapes benchmarks.
no code implementations • 14 Sep 2023 • Fei Dou, Jin Ye, Geng Yuan, Qin Lu, Wei Niu, Haijian Sun, Le Guan, Guoyu Lu, Gengchen Mai, Ninghao Liu, Jin Lu, Zhengliang Liu, Zihao Wu, Chenjiao Tan, Shaochen Xu, Xianqiao Wang, Guoming Li, Lilong Chai, Sheng Li, Jin Sun, Hongyue Sun, Yunli Shao, Changying Li, Tianming Liu, WenZhan Song
Artificial General Intelligence (AGI), possessing the capacity to comprehend, learn, and execute tasks with human cognitive abilities, engenders significant anticipation and intrigue across scientific, commercial, and societal arenas.
no code implementations • 14 Jun 2023 • Saleh Soltan, Andy Rosenbaum, Tobias Falke, Qin Lu, Anna Rumshisky, Wael Hamza
(2) Conversely, using an encoder to warm-start seq2seq training, we show that by unfreezing the encoder partway through training, we can match task performance of a from-scratch seq2seq model.
no code implementations • 10 Jun 2022 • Konstantinos D. Polyzos, Qin Lu, Georgios B. Giannakis
Labeled data can be expensive to acquire in several application domains, including medical imaging, robotics, and computer vision.
no code implementations • 27 May 2022 • Qin Lu, Konstantinos D. Polyzos, Bingcong Li, Georgios B. Giannakis
Tests on synthetic functions and real-world applications showcase the merits of the proposed method.
no code implementations • 1 Dec 2021 • Qin Lu, Georgios B. Giannakis
Value function approximation is a crucial module for policy evaluation in reinforcement learning when the state space is large or continuous.
no code implementations • 13 Oct 2021 • Qin Lu, Georgios V. Karanikolas, Georgios B. Giannakis
Belonging to the family of Bayesian nonparametrics, Gaussian process (GP) based approaches have well-documented merits not only in learning over a rich class of nonlinear functions, but also in quantifying the associated uncertainty.
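The uncertainty quantification mentioned above can be illustrated with a minimal Gaussian process regression sketch (this is a generic illustration with toy data, not the authors' ensemble method): the GP posterior at a test input yields both a mean prediction and a variance, and the variance grows away from the training data.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential (RBF) kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

X = np.array([-2.0, 0.0, 1.5])   # training inputs
y = np.sin(X)                    # noise-free observations
Xs = np.array([0.5, 3.0])        # test inputs: one near the data, one far

K = rbf(X, X) + 1e-8 * np.eye(len(X))  # jitter for numerical stability
Ks = rbf(Xs, X)
Kss = rbf(Xs, Xs)

# Standard GP posterior: mean and covariance conditioned on (X, y).
mean = Ks @ np.linalg.solve(K, y)
cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
var = np.diag(cov)

# The variance at x = 3.0 (far from the data) exceeds that at x = 0.5.
```

The predictive variance is exactly the "associated uncertainty" the snippet refers to: it is small where observations constrain the function and reverts to the prior far from the data.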
no code implementations • SEMEVAL 2021 • Rong Xiang, Jinghang Gu, Emmanuele Chersoni, Wenjie Li, Qin Lu, Chu-Ren Huang
In this contribution, we describe the system presented by the PolyU CBS-Comp Team at Task 1 of SemEval 2021, where the goal was the estimation of the complexity of words in a given sentence context.
no code implementations • Joint Conference on Lexical and Computational Semantics 2020 • Emmanuele Chersoni, Rong Xiang, Qin Lu, Chu-Ren Huang
Our experiments focused on cross-lingual word embeddings, in order to predict modality association scores by training on a high-resource language and testing on a low-resource one.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Rong Xiang, Mingyu Wan, Qi Su, Chu-Ren Huang, Qin Lu
The Mandarin Alphabetical Word (MAW) is an indispensable component of Modern Chinese that demonstrates unique code-mixing idiosyncrasies arising from language contact.
no code implementations • LREC 2020 • Rong Xiang, Yunfei Long, Mingyu Wan, Jinghang Gu, Qin Lu, Chu-Ren Huang
Deep neural network models have played a critical role in sentiment analysis with promising results in the recent decade.
no code implementations • LREC 2020 • Rong Xiang, Xuefeng Gao, Yunfei Long, Anran Li, Emmanuele Chersoni, Qin Lu, Chu-Ren Huang
Automatic Chinese irony detection is a challenging task, and it has a strong impact on linguistic research.
no code implementations • WS 2019 • Wenhao Ying, Rong Xiang, Qin Lu
Deep learning based general language models have achieved state-of-the-art results in many popular tasks such as sentiment analysis and question answering.
Ranked #2 on Emotion Classification on SemEval 2018 Task 1E-c
no code implementations • WS 2018 • Rong Xiang, Yunfei Long, Qin Lu, Dan Xiong, I-Hsuan Chen
Then the representation of the major text is learned through an LSTM model, whereas that of the minor text is learned by a separate CNN model.
no code implementations • WS 2018 • Yunfei Long, Mingyu Ma, Qin Lu, Rong Xiang, Chu-Ren Huang
In this work, we propose a dual user and product memory network (DUPMN) model to learn user profiles and product reviews using separate memory networks.
Ranked #6 on Sentiment Analysis on User and product information
no code implementations • IJCNLP 2017 • Yunfei Long, Qin Lu, Rong Xiang, Minglei Li, Chu-Ren Huang
This paper proposes a novel method to incorporate speaker profiles into an attention based LSTM model for fake news detection.
no code implementations • IJCNLP 2017 • Minglei Li, Qin Lu, Yunfei Long
In this paper, we investigate the effectiveness of different affective lexicons through sentiment analysis of phrases.
no code implementations • EMNLP 2017 • Yunfei Long, Qin Lu, Rong Xiang, Minglei Li, Chu-Ren Huang
Evaluations show the CBA based method outperforms the state-of-the-art local context based attention methods significantly.
no code implementations • EMNLP 2017 • Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Qin Lu, Jiachen Du
Emotion cause extraction aims to identify the reasons behind a certain emotion expressed in text.
no code implementations • 18 Aug 2017 • Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Qin Lu, Jiachen Du
Emotion cause extraction aims to identify the reasons behind a certain emotion expressed in text.
Ranked #8 on Emotion Cause Extraction on ECE
no code implementations • CONLL 2017 • I-Hsuan Chen, Yunfei Long, Qin Lu, Chu-Ren Huang
We propose a set of syntactic conditions crucial to event structures to improve the model based on the classification of radical groups.
no code implementations • 30 Mar 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we claim that vector cosine, which is generally considered among the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by an unsupervised measure that calculates the extent of the intersection among the most mutually dependent contexts of the target words.
no code implementations • 29 Mar 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we describe ROOT13, a supervised system for the classification of hypernyms, co-hyponyms and random words.
1 code implementation • LREC 2016 • Enrico Santus, Alessandro Lenci, Tin-Shing Chiu, Qin Lu, Chu-Ren Huang
When the classification is binary, ROOT9 achieves the following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%, hypernyms-random 91.8% vs. 64.1%, and co-hyponyms-random 97.8% vs. 79.4%.
no code implementations • LREC 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we claim that Vector Cosine, which is generally considered one of the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by a completely unsupervised measure that evaluates the extent of the intersection among the most associated contexts of two target words, weighting such intersection according to the rank of the shared contexts in the dependency ranked lists.
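A rank-weighted context-intersection measure of the kind described above can be sketched as follows (a minimal illustration with toy data; the function name, ranked lists, and exact weighting scheme are assumptions for exposition, not the paper's precise formulation): each word is represented by its contexts ranked from most to least associated, and similarity sums a rank-based weight over the contexts shared by the two top-N lists.

```python
def rank_weighted_intersection(contexts_a, contexts_b, top_n=100):
    """contexts_a / contexts_b: context words for each target word,
    ordered from most to least associated. Shared contexts that rank
    highly in both lists contribute more to the score."""
    rank_a = {c: r + 1 for r, c in enumerate(contexts_a[:top_n])}
    rank_b = {c: r + 1 for r, c in enumerate(contexts_b[:top_n])}
    shared = set(rank_a) & set(rank_b)
    # Weight each shared context by the inverse of its average rank.
    return sum(1.0 / ((rank_a[c] + rank_b[c]) / 2.0) for c in shared)

# Toy example: two related words sharing several top contexts.
dog_ctx = ["bark", "leash", "pet", "tail", "walk"]
cat_ctx = ["purr", "pet", "tail", "leash", "whisker"]
score = rank_weighted_intersection(dog_ctx, cat_ctx)
```

Unlike vector cosine, which compares full weighted vectors, this measure looks only at the overlap of the most salient contexts, which is what lets it remain fully unsupervised while exploiting the rank information in the dependency-ranked lists.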
no code implementations • LREC 2012 • Hongzhi Xu, Helen Kai-yun Chen, Chu-Ren Huang, Qin Lu, Dingxu Shi, Tin-Shing Chiu
We adopt a corpus-informed approach to example sentence selection for the construction of a reference grammar.