no code implementations • 17 Apr 2024 • Mohammad Shiri, Monalika Padma Reddy, Jiangwen Sun
We present a novel approach, Supervised Contrastive Vision Transformer (SupCon-ViT), for improving the classification of invasive ductal carcinoma in terms of accuracy and generalization by combining the complementary strengths of transfer learning, i.e., a pre-trained vision transformer, and supervised contrastive learning.
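The core of supervised contrastive learning is a loss that pulls together embeddings of samples sharing a class label and pushes apart the rest. A minimal sketch of such a SupCon-style loss is below; the function name, temperature value, and NumPy formulation are ours for illustration, not the paper's implementation.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Sketch of a supervised contrastive loss: for each anchor, the
    positives are all other samples with the same label; the loss is
    the negative mean log-probability of the positives under a
    softmax over similarities to all other samples."""
    # L2-normalize embeddings so similarities are cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature                        # pairwise sims / T
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    pos = (labels[:, None] == labels[None, :]) & ~eye  # same class, not self
    sim = np.where(eye, -np.inf, sim)                  # exclude self-pairs
    # log-softmax over all non-self samples
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1)
    counts = pos.sum(axis=1)
    valid = counts > 0                                 # anchors with positives
    return -(per_anchor[valid] / counts[valid]).mean()
```

When embeddings of the same class collapse onto a shared direction, the loss approaches its minimum for the given batch composition, which is the behavior the contrastive objective rewards.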
no code implementations • 24 Sep 2022 • Mohammad Shiri, Jiangwen Sun
There have been methods developed to address such negative transfer in other domains, such as computer vision.
1 code implementation • 14 Feb 2018 • Chao Shang, Qinqing Liu, Ko-Shin Chen, Jiangwen Sun, Jin Lu, Jin-Feng Yi, Jinbo Bi
The proposed GCN model, which we call edge attention-based multi-relational GCN (EAGCN), jointly learns attention weights and node features in graph convolution.
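As a rough sketch of the idea, a multi-relational graph convolution can weight each relation's adjacency matrix by a learned attention score before aggregating messages. The version below attends at the relation level only; the actual EAGCN learns finer-grained, per-edge attentions, and all names here are ours.

```python
import numpy as np

def edge_attention_gcn_layer(X, A_rels, W, att_scores):
    """One simplified edge-attention multi-relational GCN layer.
    X: (n, d) node features; A_rels: list of (n, n) adjacency matrices,
    one per relation/edge type; W: (d, k) weight matrix; att_scores:
    one raw score per relation, softmax-normalized into attentions."""
    a = np.exp(att_scores - att_scores.max())
    a = a / a.sum()                                  # softmax over relations
    # attention-weighted sum of per-relation neighborhood aggregations
    H = sum(a_r * (A_r @ X) for a_r, A_r in zip(a, A_rels))
    return np.maximum(H @ W, 0.0)                    # linear map + ReLU
```

Because the attention scores are part of the layer's parameters, they are learned jointly with the node-feature transform `W`, mirroring the joint learning described above.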
1 code implementation • 22 Aug 2017 • Chao Shang, Aaron Palmer, Jiangwen Sun, Ko-Shin Chen, Jin Lu, Jinbo Bi
In particular, when certain samples are missing an entire view of the data, this creates the missing-view problem.
no code implementations • NeurIPS 2016 • Jin Lu, Guannan Liang, Jiangwen Sun, Jinbo Bi
We prove that when the side features can span the latent feature space of the matrix to be recovered, the number of observed entries needed for an exact recovery is $O(\log N)$ where $N$ is the size of the matrix.
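The claim can be illustrated with a toy inductive matrix completion experiment: when the matrix is generated from side features that span its latent space, each observed entry is a linear measurement of a small core matrix, so far fewer than $N^2$ entries suffice for exact recovery. The setup below is our own demonstration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 50, 5                          # matrix size N, side-feature dim d
X = rng.normal(size=(N, d))           # row side features
Y = rng.normal(size=(N, d))           # column side features
W_true = rng.normal(size=(d, d))
M = X @ W_true @ Y.T                  # matrix to recover, rank <= d

# observe only m entries, with m ~ d^2 << N^2
m = 4 * d * d
rows = rng.integers(0, N, m)
cols = rng.integers(0, N, m)

# each entry is linear in W: M[i, j] = x_i^T W y_j = kron(x_i, y_j) . vec(W)
Phi = np.stack([np.kron(X[i], Y[j]) for i, j in zip(rows, cols)])
w_hat, *_ = np.linalg.lstsq(Phi, M[rows, cols], rcond=None)
M_hat = X @ w_hat.reshape(d, d) @ Y.T
```

With generic Gaussian side features, the $m \times d^2$ measurement matrix has full column rank, so least squares recovers the core matrix and hence all $N^2$ entries exactly from only $m = 100$ observations.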
no code implementations • NeurIPS 2014 • Xin Wang, Jinbo Bi, Shipeng Yu, Jiangwen Sun
We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers.
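For context, the joint-regularization methods referred to above typically take the following form (notation ours, written here in the standard $\ell_{2,1}$ case that couples tasks through shared feature selection):

```latex
\min_{W = [\,w_1, \dots, w_T\,]} \; \sum_{t=1}^{T} \mathcal{L}_t(w_t) \;+\; \lambda \, \|W\|_{2,1},
\qquad
\|W\|_{2,1} \;=\; \sum_{j=1}^{d} \Big( \sum_{t=1}^{T} W_{jt}^2 \Big)^{1/2},
```

where $w_t$ are the per-task parameter vectors stacked into $W$ and the row-wise norm encourages all tasks to rely on a common subset of features; the result cited here generalizes the admissible regularizer beyond this specific form.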