Search Results for author: Zhengqi Gao

Found 10 papers, 5 papers with code

On the Theory of Cross-Modality Distillation with Contrastive Learning

no code implementations 6 May 2024 Hangyu Lin, Chen Liu, Chengming Xu, Zhengqi Gao, Yanwei Fu, Yuan YAO

For instance, one typically aims to minimize the L2 distance or contrastive loss between the learned features of pairs of samples in the source (e.g. image) and the target (e.g. sketch) modalities.

Contrastive Learning
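The snippet above describes aligning paired source and target features with either an L2 distance or a contrastive loss. A minimal sketch of such an alignment objective is given below, assuming paired feature batches; the symmetric InfoNCE form used here is a common choice and is not necessarily the exact loss analyzed in the paper.

```python
import torch
import torch.nn.functional as F

def cross_modal_alignment_loss(z_src, z_tgt, temperature=0.1, use_contrastive=True):
    """Align paired features from two modalities (e.g. image and sketch).

    z_src, z_tgt: (batch, dim) feature batches where row i of each is a pair.
    """
    if not use_contrastive:
        # Plain L2 distance between paired features.
        return F.mse_loss(z_src, z_tgt)

    # InfoNCE-style contrastive loss: matching pairs are positives,
    # all other cross-modal combinations in the batch are negatives.
    z_src = F.normalize(z_src, dim=-1)
    z_tgt = F.normalize(z_tgt, dim=-1)
    logits = z_src @ z_tgt.t() / temperature          # (batch, batch) similarities
    labels = torch.arange(z_src.size(0), device=z_src.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```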

KirchhoffNet: A Scalable Ultra Fast Analog Neural Network

no code implementations 24 Oct 2023 Zhengqi Gao, Fan-Keng Sun, Ron Rohrer, Duane S. Boning

Essentially, KirchhoffNet is an analog circuit that can function as a neural network, utilizing its initial node voltages as the neural network input and the node voltages at a specific time point as the output.
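The abstract describes a network whose node voltages evolve in time, with the initial voltages as the input and the voltages at a fixed time as the output. The sketch below illustrates that input/output convention under the assumption that the node dynamics follow a learned ODE dv/dt = f(v) integrated with forward Euler; the actual circuit equations and solver used by KirchhoffNet may differ.

```python
import torch
import torch.nn as nn

class NodeDynamicsSketch(nn.Module):
    """Toy node-voltage dynamics dv/dt = f(v); a stand-in for circuit equations."""
    def __init__(self, num_nodes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(num_nodes, num_nodes), nn.Tanh(),
                                 nn.Linear(num_nodes, num_nodes))

    def forward(self, v):
        return self.net(v)

def integrate_node_voltages(dynamics, v0, t_end=1.0, steps=20):
    """Forward-Euler integration: input = initial node voltages v0,
    output = node voltages at time t_end."""
    v, dt = v0, t_end / steps
    for _ in range(steps):
        v = v + dt * dynamics(v)
    return v

# Example: an 8-node "circuit", batch of 4 inputs.
dyn = NodeDynamicsSketch(num_nodes=8)
v_out = integrate_node_voltages(dyn, torch.randn(4, 8))
```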

NeurOLight: A Physics-Agnostic Neural Operator Enabling Parametric Photonic Device Simulation

1 code implementation 19 Sep 2022 Jiaqi Gu, Zhengqi Gao, Chenghao Feng, Hanqing Zhu, Ray T. Chen, Duane S. Boning, David Z. Pan

In this work, for the first time, a physics-agnostic neural operator-based framework, dubbed NeurOLight, is proposed to learn a family of frequency-domain Maxwell PDEs for ultra-fast parametric photonic device simulation.
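The entry describes a neural operator that learns a family of frequency-domain Maxwell PDEs. As one generic neural-operator building block, a spectral (Fourier) convolution layer is sketched below; this is a common choice for such operators and is not claimed to be NeurOLight's actual architecture.

```python
import torch
import torch.nn as nn

class SpectralConv1dSketch(nn.Module):
    """Generic 1-D spectral convolution: mix channels on the lowest Fourier modes."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                      # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x, dim=-1)       # go to the frequency domain
        out_ft = torch.zeros_like(x_ft)
        out_ft[..., :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[..., :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1), dim=-1)

# Example: a field with 16 channels on a 64-point grid.
layer = SpectralConv1dSketch(channels=16, modes=8)
y = layer(torch.randn(2, 16, 64))
```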

A Simple Data Mixing Prior for Improving Self-Supervised Learning

1 code implementation CVPR 2022 Sucheng Ren, Huiyu Wang, Zhengqi Gao, Shengfeng He, Alan Yuille, Yuyin Zhou, Cihang Xie

More notably, our SDMP is the first method that successfully leverages data mixing to improve (rather than hurt) the performance of Vision Transformers in the self-supervised setting.

Representation Learning Self-Supervised Learning
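SDMP's contribution is making data mixing help, rather than hurt, self-supervised Vision Transformers. For reference, a generic mixup-style mixing step is sketched below; how the mixed samples enter SDMP's self-supervised objective is not reproduced here.

```python
import torch

def mixup_batch(x, alpha=1.0):
    """Generic mixup-style data mixing: blend each sample with a shuffled
    partner and return the mixing coefficient, so a downstream (self-supervised)
    loss can treat the mixed image as a partial positive of both sources."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    return x_mixed, perm, lam

# Example: a batch of 8 RGB images at 224x224.
images = torch.randn(8, 3, 224, 224)
mixed, partner_idx, lam = mixup_batch(images)
```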

The Modality Focusing Hypothesis: Towards Understanding Crossmodal Knowledge Distillation

2 code implementations 13 Jun 2022 Zihui Xue, Zhengqi Gao, Sucheng Ren, Hang Zhao

Crossmodal knowledge distillation (KD) extends traditional knowledge distillation to the area of multimodal learning and demonstrates great success in various applications.

Knowledge Distillation Transfer Learning

Training-Free Robust Multimodal Learning via Sample-Wise Jacobian Regularization

no code implementations 5 Apr 2022 Zhengqi Gao, Sucheng Ren, Zihui Xue, Siting Li, Hang Zhao

Multimodal fusion emerges as an appealing technique to improve model performances on many tasks.

Co-advise: Cross Inductive Bias Distillation

no code implementations CVPR 2022 Sucheng Ren, Zhengqi Gao, Tianyu Hua, Zihui Xue, Yonglong Tian, Shengfeng He, Hang Zhao

Transformers have recently been adapted from the natural language processing community as a promising substitute for convolution-based neural networks in visual learning tasks.

Inductive Bias

Multimodal Knowledge Expansion

1 code implementation ICCV 2021 Zihui Xue, Sucheng Ren, Zhengqi Gao, Hang Zhao

The popularity of multimodal sensors and the accessibility of the Internet have brought us a massive amount of unlabeled multimodal data.

Denoising Knowledge Distillation +1

Projection based Active Gaussian Process Regression for Pareto Front Modeling

no code implementations 20 Jan 2020 Zhengqi Gao, Jun Tao, Yangfeng Su, Dian Zhou, Xuan Zeng

A novel projection-based active Gaussian process regression (P-aGPR) method is proposed for efficient Pareto front (PF) modeling.

Active Learning Decision Making +2
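The snippet names an active Gaussian process regression method for Pareto front modeling. A minimal GP-regression-with-active-sampling loop is sketched below (query the candidate with the largest predictive standard deviation); the projection scheme specific to P-aGPR is not reproduced, and the 1-D objective is a hypothetical stand-in.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def toy_front(x):
    # Hypothetical 1-D stand-in for one Pareto-front coordinate.
    return 1.0 - np.sqrt(x)

rng = np.random.default_rng(0)
X_pool = np.linspace(0.0, 1.0, 200).reshape(-1, 1)      # candidate inputs
X_train = rng.uniform(0.0, 1.0, size=(3, 1))             # small initial design
y_train = toy_front(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
for _ in range(10):                                       # active-learning loop
    gp.fit(X_train, y_train)
    _, std = gp.predict(X_pool, return_std=True)
    x_next = X_pool[np.argmax(std)].reshape(1, 1)         # most uncertain point
    X_train = np.vstack([X_train, x_next])
    y_train = np.append(y_train, toy_front(x_next).ravel())
```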
