no code implementations • 18 Apr 2024 • Chao Jin, Zili Zhang, Xuanlin Jiang, Fangyue Liu, Xin Liu, Xuanzhe Liu, Xin Jin
We implement RAGCache and evaluate it on vLLM, a state-of-the-art LLM inference system, and Faiss, a state-of-the-art vector database.
1 code implementation • 3 Dec 2023 • Jin Liu, Huaibo Huang, Chao Jin, Ran He
Face stylization refers to the transformation of a face into a specific portrait style.
no code implementations • 7 Sep 2023 • Zehua Ren, Yongheng Sun, Miaomiao Wang, Yuying Feng, Xianjun Li, Chao Jin, Jian Yang, Chunfeng Lian, Fan Wang
In this paper, we propose to leverage the idea of counterfactual reasoning coupled with the auxiliary task of brain tissue segmentation to learn fine-grained positional and morphological representations of PWMLs for accurate localization and segmentation.
no code implementations • 23 May 2023 • Xiaolong Chen, Xin Qi, Chunguang Su, Yuan He, Zhijun Wang, Kunxiang Sun, Chao Jin, Weilong Chen, Shuhui Liu, Xiaoying Zhao, Duanyang Jia, Man Yi
To validate the effectiveness of our method, two different typical beam control tasks were performed on the China Accelerator Facility for Superheavy Elements (CAFe II) and a light particle injector (LPI), respectively.
no code implementations • 8 Mar 2022 • Lianlian Jiang, Yuexuan Wang, Wenyi Zheng, Chao Jin, Zengxiang Li, Sin G. Teo
In this work, we propose a new approach, LSTMSPLIT, that uses SL architecture with an LSTM network to classify time-series data with multiple clients.
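The split-learning (SL) structure behind this approach can be illustrated with a tiny pure-Python sketch (hypothetical names; a weighted sum stands in for the client-side LSTM): the client computes up to a cut layer and sends only that activation to the server, which finishes the forward pass, so raw time-series data never leaves the client.

```python
# Illustrative split-learning forward pass (not the LSTMSPLIT code).
# Only the cut-layer activation crosses the client/server boundary.

def client_forward(series, w_client):
    """Client-side layers up to the cut layer (a weighted sum here,
    standing in for the LSTM portion of the model)."""
    return sum(x * w for x, w in zip(series, w_client))

def server_forward(activation, w_server, threshold=0.0):
    """Server-side layers: map the cut-layer activation to a class label."""
    score = activation * w_server
    return 1 if score > threshold else 0

series = [0.2, 0.5, 0.1]  # local time-series window, stays on the client
activation = client_forward(series, w_client=[1.0, 1.0, 1.0])
label = server_forward(activation, w_server=2.0, threshold=1.0)
assert label == 1  # 0.8 * 2.0 = 1.6 > 1.0
```

In training, gradients flow back across the same cut; the privacy appeal is that each client shares activations and gradients at the cut layer rather than its data.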
no code implementations • 10 Jan 2022 • Jing Du, ShiLiang Pu, Qinbo Dong, Chao Jin, Xin Qi, Dian Gu, Ru Wu, Hongwei Zhou
Although modern automatic speech recognition (ASR) systems can achieve high performance, they may produce errors that degrade the reading experience and harm downstream tasks.
no code implementations • 6 Feb 2021 • Yuxiao Lu, Jie Lin, Chao Jin, Zhe Wang, Min Wu, Khin Mi Mi Aung, XiaoLi Li
Despite the faster HECNN inference, the mainstream packing schemes Dense Packing (DensePack) and Convolution Packing (ConvPack) introduce expensive rotation overhead, which prolongs the inference latency of HECNN for deeper and wider CNN architectures.
no code implementations • 25 Oct 2018 • Jason L. Deglint, Chao Jin, Alexander Wong
This high level of accuracy was achieved using a deep residual convolutional neural network that learns the optimal combination of spectral and morphological features.
no code implementations • 3 May 2018 • Jason L. Deglint, Chao Jin, Angela Chao, Alexander Wong
A number of morphological and spectral fluorescence features are then extracted from the isolated micro-organism imaging data and used to train neural network classification models designed to identify which of the six algae types a given isolated micro-organism belongs to.
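The extract-features-then-classify setup can be sketched with a toy nearest-centroid classifier over concatenated feature vectors (purely illustrative: the type labels and feature values are made up, and the paper trains neural networks, not centroids).

```python
# Toy nearest-centroid classifier over concatenated morphological and
# spectral feature vectors -- an illustration of the pipeline shape,
# not the paper's neural-network models.

def centroid(vectors):
    """Per-dimension mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(features, centroids):
    """Assign the class whose centroid is nearest to the feature vector."""
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))

# Two hypothetical algae types; features = [morphological..., spectral...]
train = {
    "type_a": [[1.0, 0.2, 5.0], [1.2, 0.1, 4.8]],
    "type_b": [[3.0, 1.5, 1.0], [2.8, 1.6, 1.2]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}
assert classify([1.1, 0.15, 4.9], centroids) == "type_a"
```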