no code implementations • 30 May 2024 • Ke Yi, Yuhui Xu, Heng Chang, Chen Tang, Yuan Meng, Tong Zhang, Jia Li
Large Language Models (LLMs) have advanced rapidly but face significant memory demands.
no code implementations • 23 Apr 2024 • Peiwen Li, Xin Wang, Zeyang Zhang, Yuan Meng, Fang Shen, Yue Li, Jialong Wang, Yang Li, Wenwu Zhu
In the field of Artificial Intelligence for Information Technology Operations, causal discovery is pivotal for constructing operation and maintenance graphs, facilitating downstream industrial tasks such as root cause analysis.
no code implementations • 15 Apr 2024 • Haojun Sun, Chen Tang, Zhi Wang, Yuan Meng, Jingyan Jiang, Xinzhu Ma, Wenwu Zhu
Diffusion models have emerged as preeminent contenders in the realm of generative models.
no code implementations • 8 Apr 2024 • Qun Li, Yuan Meng, Chen Tang, Jiacheng Jiang, Zhi Wang
Quantization is a promising technique for reducing the bit-width of deep models to improve their runtime performance and storage efficiency, and has thus become a fundamental step in deployment.
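To make the bit-width reduction concrete, here is a minimal sketch of symmetric uniform quantization (not the method of this paper; the function name and values are illustrative):

```python
def quantize_uniform(weights, bits=8):
    """Symmetric uniform quantization of a list of floats to `bits`-bit integer codes."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for signed 8-bit
    scale = max(abs(w) for w in weights) / qmax  # map the largest magnitude to qmax
    codes = [round(w / scale) for w in weights]  # integer codes stored at low bit-width
    return codes, scale                          # dequantize as code * scale

codes, scale = quantize_uniform([0.5, -1.0, 0.25], bits=8)
deq = [c * scale for c in codes]  # reconstruction error is at most scale/2 per weight
```

Each weight is then stored as a small integer plus one shared scale, which is the source of the memory and runtime savings.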
no code implementations • 3 Jan 2024 • Chen Tang, Yuan Meng, Jiacheng Jiang, Shuzhao Xie, Rongwei Lu, Xinzhu Ma, Zhi Wang, Wenwu Zhu
Conversely, mixed-precision quantization (MPQ) is advocated to compress the model effectively by allocating heterogeneous bit-width for layers.
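A toy sketch of the bit-allocation idea behind MPQ (the greedy policy, layer names, and sensitivity scores below are hypothetical, not the paper's method): more sensitive layers receive wider bit-widths while an average-bit budget is respected.

```python
def allocate_bits(sensitivities, budget_avg_bits, choices=(2, 4, 8)):
    """Toy greedy allocator: start every layer at the lowest bit-width, then
    upgrade layers from most to least sensitive while the average bit-width
    stays within the budget."""
    n = len(sensitivities)
    bits = {name: choices[0] for name in sensitivities}
    for name in sorted(sensitivities, key=sensitivities.get, reverse=True):
        for b in choices[1:]:  # try progressively wider bit-widths
            trial = dict(bits, **{name: b})
            if sum(trial.values()) / n <= budget_avg_bits:
                bits[name] = b
    return bits

# Hypothetical per-layer sensitivity scores, average budget of 4 bits:
policy = allocate_bits({"conv1": 0.9, "conv2": 0.1, "fc": 0.5}, budget_avg_bits=4.0)
```

Real MPQ methods replace the greedy rule with learned or search-based policies, but the output has the same shape: a heterogeneous bit-width per layer.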
no code implementations • ICCV 2023 • Xinzhu Ma, Yongtao Wang, Yinmin Zhang, Zhiyi Xia, Yuan Meng, Zhihui Wang, Haojie Li, Wanli Ouyang
In this work, we build a modular-designed codebase, formulate strong training recipes, design an error diagnosis toolbox, and discuss current methods for image-based 3D object detection.
no code implementations • 13 Sep 2023 • Samuel Wiggins, Yuan Meng, Rajgopal Kannan, Viktor Prasanna
Multi-Agent Reinforcement Learning (MARL) has achieved significant success in large-scale AI systems and big-data applications such as smart grids and surveillance.
1 code implementation • 10 Sep 2023 • Yuan Meng, Xuhao Pan, Jun Chang, Yue Wang
Our experiments on the public Gendered Ambiguous Pronouns (GAP) dataset show that, with supervised learning of the syntactic dependency graph and without fine-tuning the entire BERT, we increased the F1-score of the previous best model (RGCN-with-BERT) from 80.3% to 82.5%, and the F1-score of single BERT embeddings from 78.5% to 82.5%.
1 code implementation • 23 May 2023 • Zhenshan Bing, Yuan Meng, Yuqi Yun, Hang Su, Xiaojie Su, Kai Huang, Alois Knoll
Generative model-based deep clustering frameworks excel in classifying complex data, but are limited in handling dynamic and complex features because they require prior knowledge of the number of clusters.
1 code implementation • 22 Feb 2023 • Songlin Zhai, Weiqing Wang, YuanFang Li, Yuan Meng
Specifically, the inherited feature originates from "parent" nodes and is weighted by an inheritance factor.
no code implementations • 14 Feb 2023 • Chen Tang, Kai Ouyang, Zenghao Chai, Yunpeng Bai, Yuan Meng, Zhi Wang, Wenwu Zhu
This general and dataset-independent property allows us to search for the MPQ policy on a rather small-scale proxy dataset; the resulting policy can then be directly used to quantize a model trained on a large-scale dataset.
no code implementations • 15 Aug 2022 • Xinzhu Ma, Yuan Meng, Yinmin Zhang, Lei Bai, Jun Hou, Shuai Yi, Wanli Ouyang
We hope this work can provide insights for the image-based 3D detection community under a semi-supervised setting.
no code implementations • 9 Sep 2019 • Yuan Meng
We use a generative model with a latent variable to capture the relationship between the unobserved confounders and the observed variables (the tested variable and the proxy variables).