no code implementations • ECCV 2020 • Zixuan Jiang, Keren Zhu, Mingjie Liu, Jiaqi Gu, David Z. Pan
In this work, we formulate the decision problem for reversible operators with training time as the objective function and memory usage as the constraint.
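Such a formulation can be illustrated with a tiny selection problem: for each operator, making it reversible saves activation memory but costs recomputation time, and we minimize total extra time under a memory budget. The per-operator numbers below are made up for illustration and are not from the paper.

```python
from itertools import product

# Toy instance: (extra_time_if_reversible, memory_saved_if_reversible)
# per operator. All numbers are illustrative placeholders.
ops = [(5.0, 40), (3.0, 25), (7.0, 60), (2.0, 10)]
baseline_memory = 120    # activation memory with no reversible operators
budget = 60              # peak-memory constraint

best = None              # (total_extra_time, selection)
for choice in product([0, 1], repeat=len(ops)):
    saved = sum(m for (t, m), c in zip(ops, choice) if c)
    if baseline_memory - saved > budget:
        continue         # violates the memory constraint
    extra = sum(t for (t, m), c in zip(ops, choice) if c)
    if best is None or extra < best[0]:
        best = (extra, choice)

extra_time, selection = best
```

Exhaustive enumeration works here only because the instance is tiny; the point is the objective/constraint structure, not the solver.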
1 code implementation • 31 May 2023 • Jiaqi Gu, Hanqing Zhu, Chenghao Feng, Zixuan Jiang, Ray T. Chen, David Z. Pan
The programmable MOMMI leverages the intrinsic light propagation principle, providing a single-device programmable matrix unit beyond the conventional computing paradigm of one multiply-accumulate (MAC) operation per device.
1 code implementation • NeurIPS 2023 • Zixuan Jiang, Jiaqi Gu, Hanqing Zhu, David Z. Pan
Experiments demonstrate that we can reduce the training and inference time of Pre-LN Transformers by 1% to 10%.
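For reference, the Pre-LN structure being accelerated normalizes the input *before* each sublayer and adds the residual afterwards. A minimal NumPy sketch of one such residual sublayer (the linear sublayer is a placeholder, not the paper's model):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the last (feature) dimension.
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def pre_ln_block(x, sublayer):
    # Pre-LN: normalize before the sublayer, then add the residual.
    return x + sublayer(layer_norm(x))

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4, 8))       # (batch, sequence, features)
W = rng.normal(size=(8, 8)) * 0.1    # placeholder sublayer weights
y = pre_ln_block(x, lambda h: h @ W)
```

A useful property visible here: the residual path carries `x` through unchanged, which is what makes Pre-LN training stable at depth.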
no code implementations • ICCV 2023 • Cheng Fu, Hanxian Huang, Zixuan Jiang, Yun Ni, Lifeng Nai, Gang Wu, Liqun Cheng, Yanqi Zhou, Sheng Li, Andrew Li, Jishen Zhao
One promising way to accelerate transformer training is to reuse small pretrained models to initialize the transformer, as their existing representation power facilitates faster model convergence.
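One well-known way to reuse a small pretrained model is function-preserving width growth in the style of Net2Net: duplicate a hidden unit and halve its outgoing weights so the larger network computes exactly the same function at initialization. This sketch is illustrative and is not claimed to be the method of the paper above.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(6, 4))   # small model: 6 inputs -> 4 hidden units
W2 = rng.normal(size=(4, 3))   # 4 hidden units -> 3 outputs

def grow_hidden(W1, W2, j):
    # Duplicate hidden unit j; split its outgoing weights in half so the
    # grown network computes the same function (Net2Net-style growth).
    W1_new = np.concatenate([W1, W1[:, j:j + 1]], axis=1)
    W2_new = np.concatenate([W2, W2[j:j + 1, :]], axis=0)
    W2_new[j, :] *= 0.5
    W2_new[-1, :] *= 0.5
    return W1_new, W2_new

x = rng.normal(size=(5, 6))
y_small = np.maximum(x @ W1, 0) @ W2      # ReLU MLP, small width
W1b, W2b = grow_hidden(W1, W2, j=2)
y_big = np.maximum(x @ W1b, 0) @ W2b      # grown width, same outputs
```

Because the duplicated unit produces the same activation as the original, splitting its outgoing weights keeps every output identical, so training resumes from the small model's representation.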
no code implementations • 24 Nov 2022 • Mengting Lan, Xiaogang Xiong, Zixuan Jiang, Yunjiang Lou
Regarded as the third generation of neural networks, event-driven Spiking Neural Networks (SNNs), combined with bio-plausible local learning rules, are promising candidates for building low-power neuromorphic hardware.
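The event-driven unit underlying SNNs is the leaky integrate-and-fire (LIF) neuron: the membrane potential leaks over time, integrates incoming spikes, and emits a spike and resets when it crosses a threshold. A minimal sketch with illustrative parameter values:

```python
def lif(spikes_in, tau=0.9, threshold=1.0):
    # Leaky integrate-and-fire neuron; tau is the leak factor per step.
    v, spikes_out = 0.0, []
    for s in spikes_in:
        v = tau * v + s          # leaky integration of input current
        if v >= threshold:       # fire and reset on threshold crossing
            spikes_out.append(1)
            v = 0.0
        else:
            spikes_out.append(0)
    return spikes_out

out = lif([0.6, 0.6, 0.0, 0.6, 0.6])  # -> [0, 1, 0, 0, 1]
```

A single sub-threshold input produces no output event, which is why SNN hardware can stay idle (and low-power) between spikes.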
1 code implementation • 30 Jul 2022 • Zixuan Jiang, Jiaqi Gu, Mingjie Liu, David Z. Pan
In this work, we delve into the gradient matching method from a comprehensive perspective and answer the critical questions of what, how, and where to match.
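The core gradient-matching objective can be sketched in a few lines: compute the gradient a model produces on real data and on synthetic data, and measure how well they align, here via cosine distance on a linear least-squares model. Data, weights, and the choice of model are random placeholders, not the paper's setup.

```python
import numpy as np

def grad(W, X, y):
    # Gradient of 0.5 * ||XW - y||^2 with respect to W.
    return X.T @ (X @ W - y)

def cosine_distance(g1, g2):
    g1, g2 = g1.ravel(), g2.ravel()
    return 1 - g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 1))
X_real, y_real = rng.normal(size=(32, 4)), rng.normal(size=(32, 1))
X_syn, y_syn = rng.normal(size=(8, 4)), rng.normal(size=(8, 1))

# Matching objective: make the synthetic-data gradient point the same
# way as the real-data gradient (in practice, minimized over X_syn).
d = cosine_distance(grad(W, X_real, y_real), grad(W, X_syn, y_syn))
```

In dataset condensation this distance is what gets minimized with respect to the synthetic data; the "what, how, and where" questions concern which gradients to match, with which distance, and at which layers or training steps.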
no code implementations • 15 Dec 2021 • Hanqing Zhu, Jiaqi Gu, Chenghao Feng, Mingjie Liu, Zixuan Jiang, Ray T. Chen, David Z. Pan
With the recent advances in optical phase change material (PCM), photonic in-memory neurocomputing has demonstrated its superiority in optical neural network (ONN) designs with near-zero static power consumption, time-of-light latency, and compact footprint.
1 code implementation • NeurIPS 2021 • Jiaqi Gu, Hanqing Zhu, Chenghao Feng, Zixuan Jiang, Ray T. Chen, David Z. Pan
In this work, we propose a closed-loop ONN on-chip learning framework L2ight to enable scalable ONN mapping and efficient in-situ learning.
no code implementations • 6 Sep 2021 • Zixuan Jiang, Ebrahim Songhori, Shen Wang, Anna Goldie, Azalia Mirhoseini, Joe Jiang, Young-Joon Lee, David Z. Pan
In physical design, human designers typically place macros via trial and error, a process that can be formulated as a Markov decision process.
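A toy version of that MDP: the state is the set of already-placed macros, an action places the next macro on a free grid cell, and the terminal reward is negative total wirelength. The grid, netlist, and greedy policy below are illustrative assumptions, not the paper's formulation.

```python
GRID = [(x, y) for x in range(3) for y in range(3)]
NETS = [(0, 1), (1, 2), (0, 2)]      # two-pin nets between macro indices

def wirelength(pos):
    # Manhattan wirelength over nets whose macros are both placed.
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in NETS if a in pos and b in pos)

def greedy_episode(n_macros=3):
    state = {}                       # MDP state: macro index -> grid cell
    for m in range(n_macros):        # one action per step: place macro m
        free = [c for c in GRID if c not in state.values()]
        state[m] = min(free, key=lambda c: wirelength({**state, m: c}))
    return state, -wirelength(state) # terminal reward

placement, reward = greedy_episode()
```

A learned policy would replace the greedy `min` here; the sequential place-one-macro-per-step structure is what makes the problem Markovian.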
1 code implementation • 25 Aug 2021 • Jiaqi Gu, Hanqing Zhu, Chenghao Feng, Mingjie Liu, Zixuan Jiang, Ray T. Chen, David Z. Pan
Deep neural networks (DNNs) have shown superior performance in a variety of tasks.
no code implementations • 1 Apr 2021 • Zixuan Jiang, Jiaqi Gu, Mingjie Liu, Keren Zhu, David Z. Pan
Machine learning frameworks adopt iterative optimizers to train neural networks.
1 code implementation • 4 Dec 2020 • Shubham Rai, Walter Lau Neto, Yukio Miyasaka, Xinpei Zhang, Mingfei Yu, Qingyang Yi, Masahiro Fujita, Guilherme B. Manske, Matheus F. Pontes, Leomar S. da Rosa Junior, Marilton S. de Aguiar, Paulo F. Butzen, Po-Chun Chien, Yu-Shan Huang, Hoa-Ren Wang, Jie-Hong R. Jiang, Jiaqi Gu, Zheng Zhao, Zixuan Jiang, David Z. Pan, Brunno A. de Abreu, Isac de Souza Campos, Augusto Berndt, Cristina Meinhardt, Jonata T. Carvalho, Mateus Grellert, Sergio Bampi, Aditya Lohana, Akash Kumar, Wei Zeng, Azadeh Davoodi, Rasit O. Topaloglu, Yuan Zhou, Jordan Dotzel, Yichi Zhang, Hanyu Wang, Zhiru Zhang, Valerio Tenace, Pierre-Emmanuel Gaillardon, Alan Mishchenko, Satrajit Chatterjee
If the function is incompletely specified, the implementation needs to be correct only on the care set.
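Concretely: a spec fixes outputs only on the care set, inputs outside it are don't-cares, so a simpler circuit only has to agree on the listed patterns. The function and candidate below are made-up examples.

```python
# Incompletely specified 3-input Boolean function: required outputs are
# given only on the care set; all other input patterns are don't-cares.
care_set = {
    (0, 0, 0): 0,
    (0, 1, 1): 1,
    (1, 0, 1): 1,
    (1, 1, 0): 0,
}

def candidate(a, b, c):
    # Very simple candidate implementation: just output the last input.
    return c

# Verification only quantifies over the care set, not all 8 patterns.
valid = all(candidate(*x) == y for x, y in care_set.items())
```

The don't-care freedom is exactly what logic synthesis exploits: `candidate` ignores two of its inputs entirely yet is still a valid implementation.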