no code implementations • 1 Jun 2023 • Hejie Cui, Rongmei Lin, Nasser Zalmout, Chenwei Zhang, Jingbo Shang, Carl Yang, Xian Li
Information extraction, e.g., attribute value extraction, has been extensively studied and formulated based only on text.
no code implementations • 14 Sep 2022 • Rongmei Lin, Yonghui Xiao, Tien-Ju Yang, Ding Zhao, Li Xiong, Giovanni Motta, Françoise Beaufays
Automatic Speech Recognition models require large amounts of speech data for training, and collecting such data often raises privacy concerns.
no code implementations • 8 Jun 2021 • Rongmei Lin, Xiang He, Jie Feng, Nasser Zalmout, Yan Liang, Li Xiong, Xin Luna Dong
Understanding product attributes plays an important role in improving the online shopping experience for customers and is an integral part of constructing a product knowledge graph.
1 code implementation • 2 Mar 2021 • Weiyang Liu, Rongmei Lin, Zhen Liu, Li Xiong, Bernhard Schölkopf, Adrian Weller
Due to their over-parameterized nature, neural networks are a powerful tool for nonlinear function approximation.
no code implementations • 1 Jan 2021 • Rongmei Lin, Hanjun Dai, Li Xiong, Wei Wei
We propose a generative fairness teaching framework that provides a model with not only real samples but also synthesized samples to compensate for data biases during training.
1 code implementation • CVPR 2021 • Weiyang Liu, Rongmei Lin, Zhen Liu, James M. Rehg, Liam Paull, Li Xiong, Le Song, Adrian Weller
The inductive bias of a neural network is largely determined by the architecture and the training algorithm.
1 code implementation • CVPR 2020 • Rongmei Lin, Weiyang Liu, Zhen Liu, Chen Feng, Zhiding Yu, James M. Rehg, Li Xiong, Le Song
Inspired by the Thomson problem in physics where the distribution of multiple propelling electrons on a unit sphere can be modeled via minimizing some potential energy, hyperspherical energy minimization has demonstrated its potential in regularizing neural networks and improving their generalization power.
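Concretely, the hyperspherical energy referred to above is the sum of pairwise inverse-power potentials between normalized weight vectors, by direct analogy with the electron potential in the Thomson problem (the exponent s is a hyperparameter of the regularizer):

```latex
E_s = \sum_{i \ne j} \left\| \hat{w}_i - \hat{w}_j \right\|^{-s},
\qquad \hat{w}_i = \frac{w_i}{\|w_i\|}, \quad s > 0 .
```

Minimizing E_s pushes neuron directions apart on the unit hypersphere, which is the sense in which it reduces redundancy among neurons.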
4 code implementations • NeurIPS 2018 • Weiyang Liu, Rongmei Lin, Zhen Liu, Lixin Liu, Zhiding Yu, Bo Dai, Le Song
In light of this intuition, we reduce the redundancy regularization problem to generic energy minimization, and propose a minimum hyperspherical energy (MHE) objective as generic regularization for neural networks.
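To make the MHE objective concrete, here is a minimal numpy sketch of the pairwise-energy computation: project each neuron's weight vector onto the unit sphere, then sum inverse-power potentials over all pairs. The function name `mhe_energy` and the default exponent `s=2` are illustrative choices, not the paper's reference implementation.

```python
import numpy as np

def mhe_energy(W, s=2):
    """Hyperspherical energy of the rows of W (one neuron per row).

    Sketch of the MHE idea: normalize each weight vector onto the unit
    sphere, then sum pairwise inverse-power (Riesz) potentials, as in
    the Thomson problem. Name and exponent choice are illustrative.
    """
    # Project every neuron's weight vector onto the unit hypersphere.
    W_hat = W / np.linalg.norm(W, axis=1, keepdims=True)
    n = W_hat.shape[0]
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(W_hat[i] - W_hat[j])
            energy += d ** (-s)  # potential between neurons i and j
    return energy
```

Used as an additive penalty on the training loss, this term decreases as neuron directions spread apart: two antipodal unit vectors yield a lower energy than two orthogonal ones.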
no code implementations • 22 May 2018 • Ziming Zhang, Rongmei Lin, Alan Sullivan
In this paper we propose novel Deformable Part Networks (DPNs) to learn pose-invariant representations for 2D object recognition.
1 code implementation • CVPR 2018 • Weiyang Liu, Zhen Liu, Zhiding Yu, Bo Dai, Rongmei Lin, Yisen Wang, James M. Rehg, Le Song
Inner product-based convolution has been a central component of convolutional neural networks (CNNs) and the key to learning visual representations.
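The "inner product" view of convolution can be made explicit with a short sketch: valid 2-D convolution is just the kernel's inner product with each local patch of the input. The function name is illustrative and this is the plain operation, not the paper's modified operator.

```python
import numpy as np

def conv2d_inner_product(image, kernel):
    """2-D valid cross-correlation computed patch-by-patch.

    Illustrates that standard convolution layers compute a sliding
    inner product <w, x> between the kernel w and each local patch x.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + kh, c:c + kw]
            out[r, c] = np.sum(patch * kernel)  # inner product <w, x>
    return out
```

Note that each output entry factors as ||w||·||x||·cos θ, so the response entangles the magnitudes of kernel and patch with the angle between them; decoupling these factors is the kind of modification this line of work studies.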
no code implementations • 15 Nov 2015 • Weiyang Liu, Rongmei Lin, Meng Yang
We propose a robust elastic net (REN) model for high-dimensional sparse regression and give its performance guarantees (both the statistical error bound and the optimization bound).
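For orientation, here is a sketch of the plain elastic net that the robust variant builds on: an L1 plus squared-L2 penalized least-squares problem, solved here by proximal gradient descent (ISTA). The solver choice, function name, and parameter names are illustrative; the robust model itself adds machinery beyond this.

```python
import numpy as np

def elastic_net_ista(X, y, lam1=0.1, lam2=0.1, step=None, iters=500):
    """Elastic net regression via proximal gradient (ISTA).

    Minimizes (1/2)||y - X b||^2 + lam1*||b||_1 + (lam2/2)*||b||^2.
    A sketch of the standard elastic net, not the robust (REN) model.
    """
    n, p = X.shape
    if step is None:
        # Lipschitz constant of the smooth part: sigma_max(X)^2 + lam2.
        step = 1.0 / (np.linalg.norm(X, 2) ** 2 + lam2)
    b = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ b - y) + lam2 * b          # smooth-part gradient
        z = b - step * grad
        # Soft-thresholding: proximal operator of the L1 penalty.
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)
    return b
```

On clean, well-conditioned data with small penalties this recovers a sparse coefficient vector; the robust formulation is aimed at keeping such guarantees under corrupted high-dimensional data.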
no code implementations • 14 Nov 2015 • Weiyang Liu, Zhiding Yu, Yandong Wen, Rongmei Lin, Meng Yang
Sparse coding with dictionary learning (DL) has shown excellent classification performance.
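The sparse-coding step underlying such classifiers can be sketched with a greedy matching-pursuit routine: approximate a signal with a few atoms of a (unit-norm-column) dictionary. This is a generic illustration of sparse coding, not the paper's method, and the names are illustrative.

```python
import numpy as np

def matching_pursuit(x, D, k=3):
    """Greedy sparse coding: approximate x with k atoms of dictionary D.

    D must have unit-norm columns (atoms). Returns a sparse coefficient
    vector z such that D @ z approximates x.
    """
    residual = x.astype(float).copy()
    z = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        corrs = D.T @ residual
        j = np.argmax(np.abs(corrs))
        z[j] += corrs[j]
        residual -= corrs[j] * D[:, j]
    return z
```

In dictionary-learning classification, the dictionary itself is learned from training data (often per class), and a test signal is labeled using its sparse code or per-class reconstruction residual.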