1 code implementation • 22 Feb 2024 • Zhuoyan Xu, Zhenmei Shi, Junyi Wei, Fangzhou Mu, Yin Li, Yingyu Liang
An emerging solution with recent success in vision and NLP involves finetuning a foundation model on a selection of relevant tasks, before its adaptation to a target task with limited labeled samples.
no code implementations • 12 Feb 2024 • Jiuxiang Gu, Chenyang Li, Yingyu Liang, Zhenmei Shi, Zhao Song, Tianyi Zhou
Our research presents a thorough analytical characterization of the features learned by stylized one-hidden-layer neural networks and one-layer Transformers in addressing this task.
1 code implementation • NeurIPS 2023 • Yiyou Sun, Zhenmei Shi, Yixuan Li
Open-world semi-supervised learning aims at inferring both known and novel classes in unlabeled data, by harnessing prior knowledge from a labeled set with known classes.
1 code implementation • 9 Aug 2023 • Yiyou Sun, Zhenmei Shi, Yingyu Liang, Yixuan Li
This paper bridges the gap by providing an analytical framework to formalize and investigate when and how known classes can help discover novel classes.
1 code implementation • 13 Mar 2023 • Zhenmei Shi, Yifei Ming, Ying Fan, Frederic Sala, Yingyu Liang
In this paper, we propose a simple and effective regularization method based on the nuclear norm of the learned features for domain generalization.
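As a rough illustration only (not the authors' implementation), a nuclear-norm penalty on a batch of learned features can be sketched as follows; the function name and the use of NumPy here are assumptions for the sketch:

```python
import numpy as np

def nuclear_norm_penalty(features: np.ndarray) -> float:
    """Nuclear norm (sum of singular values) of a (batch, dim) feature matrix.

    Adding this term to the training loss penalizes high-rank feature
    matrices, encouraging lower-rank (more structured) representations.
    """
    return float(np.linalg.norm(features, ord="nuc"))

# Example: the 3x3 identity has singular values (1, 1, 1), so the penalty is 3.
penalty = nuclear_norm_penalty(np.eye(3))
```

In practice such a penalty would be scaled by a coefficient and added to the task loss during training.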
1 code implementation • 28 Feb 2023 • Zhenmei Shi, Jiefeng Chen, Kunyang Li, Jayaram Raghuram, Xi Wu, Yingyu Liang, Somesh Jha
Pre-training representations (a.k.a. foundation models) has recently become a prevalent learning paradigm, where one first pre-trains a representation using large-scale unlabeled data, and then learns simple predictors on top of the representation using small labeled data from the downstream tasks.
no code implementations • ICLR 2022 • Zhenmei Shi, Junyi Wei, Yingyu Liang
These results provide theoretical evidence that feature learning in neural networks depends strongly on the input structure and leads to superior performance.
1 code implementation • 6 Oct 2021 • Mehmet F. Demirel, Shengchao Liu, Siddhant Garg, Zhenmei Shi, Yingyu Liang
Our experiments demonstrate the strong performance of AWARE in graph-level prediction tasks in the standard setting in the domains of molecular property prediction and social networks.
1 code implementation • 2 Feb 2021 • Zhenmei Shi, Fuhao Shi, Wei-Sheng Lai, Chia-Kai Liang, Yingyu Liang
We present a deep neural network (DNN) that uses both sensor data (gyroscope) and image content (optical flow) to stabilize videos through unsupervised learning.
no code implementations • 4 Aug 2019 • Zhaoyang Yang, Zhenmei Shi, Xiaoyong Shen, Yu-Wing Tai
The proposed SF-Net extracts features in a structured manner and gradually encodes information at the frame level, the gloss level, and the sentence level into the feature representation.
no code implementations • 2 Aug 2019 • Zhenmei Shi, Haoyang Fang, Yu-Wing Tai, Chi-Keung Tang
Our Dual Augmented Memory Network (DAWN) is unique in remembering both target and background, and using an improved attention LSTM memory to guide the focus on memorized features.