1 code implementation • 12 Oct 2023 • Rui Yang, Li Fang, Yi Zhou
We found that (1) without fine-tuning, LLMs can further improve the quality of entity text descriptions.
1 code implementation • 3 Feb 2023 • Li Fang, Qingyu Chen, Chih-Hsuan Wei, Zhiyong Lu, Kai Wang
We thoroughly evaluated the performance of Bioformer as well as existing biomedical BERT models, including BioBERT and PubMedBERT, on 15 benchmark datasets covering four biomedical NLP tasks: named entity recognition, relation extraction, question answering, and document classification.
no code implementations • 20 Apr 2022 • Qingyu Chen, Alexis Allot, Robert Leaman, Rezarta Islamaj Doğan, Jingcheng Du, Li Fang, Kai Wang, Shuo Xu, Yuefu Zhang, Parsa Bagherzadeh, Sabine Bergler, Aakash Bhatnagar, Nidhir Bhavsar, Yung-Chun Chang, Sheng-Jie Lin, Wentai Tang, Hongtong Zhang, Ilija Tavchioski, Senja Pollak, Shubo Tian, Jinfeng Zhang, Yulia Otmakhova, Antonio Jimeno Yepes, Hang Dong, Honghan Wu, Richard Dufour, Yanis Labrak, Niladri Chatterjee, Kushagri Tandon, Fréjus Laleye, Loïc Rakotoson, Emmanuele Chersoni, Jinghang Gu, Annemarie Friedrich, Subhash Chandra Pujari, Mariia Chizhikova, Naveen Sivadasan, Zhiyong Lu
To close the gap, we organized the BioCreative LitCovid track to call for a community effort to tackle automated topic annotation for COVID-19 literature.
no code implementations • 14 Apr 2022 • Li Fang, Kai Wang
Our results show that Bioformer outperforms BioBERT and PubMedBERT in this task.
1 code implementation • 27 Oct 2021 • Theodore Jiang, Li Fang, Kai Wang
In this study, we introduce MutFormer, a transformer-based model for the prediction of deleterious missense mutations, which uses reference and mutated protein sequences from the human genome as the primary features.
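The snippet above says MutFormer takes reference and mutated protein sequences as its primary features. As a minimal sketch of how such a paired input might be built (the function name, windowing scheme, and 0-based position convention are assumptions for illustration, not the authors' code):

```python
def build_mutation_input(ref_seq, pos, alt_aa, window=8):
    """Pair a reference protein window with its mutated counterpart.

    ref_seq: reference amino-acid string; pos: 0-based position of the
    missense mutation; alt_aa: substituted residue. Returns two strings
    (reference window, mutated window) that a sequence model could
    tokenize side by side.
    """
    # Apply the single-residue substitution to obtain the mutated sequence.
    mut_seq = ref_seq[:pos] + alt_aa + ref_seq[pos + 1:]
    # Clip a symmetric window around the mutation site.
    lo, hi = max(0, pos - window), min(len(ref_seq), pos + window + 1)
    return ref_seq[lo:hi], mut_seq[lo:hi]
```

The two windows differ only at the mutated residue, so a transformer attending over both can focus on the local sequence context of the substitution.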
no code implementations • 2 Dec 2019 • Ziming Liu, Guangyu Gao, Lin Sun, Li Fang
In this paper, beyond the top-down combination of information for shallow layers, we propose a novel network called the Image Pyramid Guidance Network (IPG-Net) to ensure that both spatial information and semantic information are abundant at each layer.
no code implementations • 1 Apr 2019 • Jinguang Sun, Wanli Wang, Xian Wei, Li Fang, Xiaoliang Tang, Yusheng Xu, Hui Yu, Wei Yao
The high dimensionality of hyperspectral images often results in the degradation of clustering performance.