no code implementations • 6 Dec 2023 • Weitang Liu, Ying Wai Li, Tianle Wang, Yi-Zhuang You, Jingbo Shang
We propose a novel model-centric evaluation framework, OmniInput, to evaluate the quality of an AI/ML model's predictions on all possible inputs (including human-unrecognizable ones), which is crucial for AI safety and reliability.
1 code implementation • 21 May 2023 • Tianle Wang, Zihan Wang, Weitang Liu, Jingbo Shang
State-of-the-art weakly supervised text classification methods, while significantly reducing the required human supervision, still require that supervision to cover all the classes of interest.
no code implementations • 19 Feb 2023 • Weitang Liu, Ying-Wai Li, Yi-Zhuang You, Jingbo Shang
We first draw the connection between the output distribution of a NN and the density of states (DOS) of a physical system.
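A toy illustration of that analogy, with an entirely made-up "network" and input range (not from the paper): uniformly sampling the input domain and histogramming the network's scalar output estimates the output distribution, the counterpart of a density of states g(E) that counts how much input volume maps to each output value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a fixed random linear map followed by tanh,
# reduced to a single scalar output per input.
W = rng.normal(size=(8, 2))
def net(x):
    return np.tanh(x @ W).sum(axis=-1)

# Estimate the output distribution over the input domain by uniform
# sampling -- the analogue of a density of states g(E).
x = rng.uniform(-1.0, 1.0, size=(100_000, 8))
hist, edges = np.histogram(net(x), bins=50, density=True)
```

With `density=True` the histogram integrates to one, so `hist` can be read directly as an estimated density over output values.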
1 code implementation • NeurIPS 2021 • Haoran Wang, Weitang Liu, Alex Bocchieri, Yixuan Li
Our results show consistent improvement over previous methods that are based on the maximum-valued scores, which fail to capture joint information from multiple labels.
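A hedged sketch of the "aggregate across labels instead of taking the maximum score" idea: assuming each label of a multi-label classifier has its own binary logit, one can sum per-label free energies, log(1 + e^{f_i}), over all labels rather than keep only the single highest score. The function name here is illustrative, not taken from the paper.

```python
import numpy as np

def joint_energy(logits):
    """Sum of label-wise free energies for multi-label logits.

    Each label contributes log(1 + e^{f_i}) -- a binary logsumexp
    against the implicit zero logit -- so evidence from *all*
    positively activated labels accumulates, unlike a max-score
    rule that sees only the strongest label.
    """
    z = np.asarray(logits, dtype=float)
    return np.logaddexp(0.0, z).sum(axis=-1)
```

For example, inputs that activate three labels and inputs that activate one share the same maximum score, but the summed score separates them.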
no code implementations • 8 Aug 2021 • Yaobin Xu, Weitang Liu, Zhongyi Jiang, Zixuan Xu, Tingyun Mao, Lili Chen, Mingwei Zhou
In this paper, we propose a Multi-adaptive Spatiotemporal-flow Graph Neural Network (MAF-GNN) for traffic speed forecasting.
no code implementations • 1 Jan 2021 • Haoran Wang, Weitang Liu, Alex Bocchieri, Yixuan Li
Our results show consistent improvement over previous methods that are based on the maximum-valued scores, which fail to capture joint information from multiple labels.
5 code implementations • NeurIPS 2020 • Weitang Liu, XiaoYun Wang, John D. Owens, Yixuan Li
We propose a unified framework for OOD detection that uses an energy score.
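The energy score is a temperature-scaled negative logsumexp of the classifier's logits; inputs with low (very negative) energy are treated as in-distribution, high energy as OOD. A minimal NumPy sketch (function name and temperature default are my own):

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """E(x) = -T * logsumexp(f(x) / T) over the class logits.

    Confident, in-distribution inputs yield a large logsumexp and
    hence a low energy; flat logits yield a higher energy.
    """
    z = np.asarray(logits, dtype=float) / temperature
    m = z.max(axis=-1, keepdims=True)  # shift for numerical stability
    lse = m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1))
    return -temperature * lse
```

Thresholding this scalar then gives a simple OOD detector: flag an input as out-of-distribution when its energy exceeds a chosen cutoff.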
3 code implementations • COLING 2020 • Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, Zhenzhong Lan
The advent of natural language understanding (NLU) benchmarks for English, such as GLUE and SuperGLUE, allows new NLU models to be evaluated across a diverse set of tasks.
3 code implementations • 13 Jan 2020 • Liang Xu, Yu Tong, Qianqian Dong, Yixuan Liao, Cong Yu, Yin Tian, Weitang Liu, Lu Li, Caiquan Liu, Xuanwei Zhang
In this paper, we introduce the NER dataset from CLUE organization (CLUENER2020), a well-defined fine-grained dataset for named entity recognition in Chinese.
no code implementations • 21 Nov 2019 • Weitang Liu, Lifeng Wei, James Sharpnack, John D. Owens
In this paper, we propose a novel architecture that iteratively discovers and segments out the objects of a scene based on the image reconstruction quality.
4 code implementations • 8 Jul 2019 • Zelin Dai, Weitang Liu, Guanhua Zhan
Multiple sequence-to-sequence models were used to build an end-to-end multi-turn proactive dialogue generation agent, with the aid of data augmentation techniques and variant encoder-decoder structure designs.
3 code implementations • ICLR 2019 • Kamyar Azizzadenesheli, Brandon Yang, Weitang Liu, Zachary C. Lipton, Animashree Anandkumar
We deploy this model and propose generative adversarial tree search (GATS), a deep RL algorithm that learns the environment model and runs Monte Carlo tree search (MCTS) on the learned model for planning.
no code implementations • 20 May 2018 • Weitang Liu, Emad Barsoum, John D. Owens
Our model can learn and derive the coordinates of the digits better than its convolutional counterpart, which lacks a routing-by-agreement algorithm, and also performs well when tested on the multi-digit Moving MNIST and KTH datasets.