no code implementations • 1 Oct 2023 • Cyrus Zhou, Zack Hassman, Ruize Xu, Dhirpal Shah, Vaugnn Richard, Yanjing Li
Our results demonstrate that the dataflow that keeps outputs in SIMD registers while also maximizing both input and weight reuse consistently yields the best performance for a wide variety of inference workloads, achieving up to 3x speedup for 8-bit neural networks and up to 4.8x speedup for binary neural networks over today's optimized neural network implementations.
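As a rough illustration of the idea (not the paper's actual code), an output-stationary dataflow can be sketched in plain Python: a tile of the output matrix stays in local accumulators, standing in for SIMD registers, while each input value is reused across the whole tile and each weight is streamed through once. The tile width and matrix shapes below are hypothetical.

```python
def output_stationary_matmul(A, B):
    """Sketch of an output-stationary dataflow for C = A @ B.

    A tile of C is held in `acc` (standing in for SIMD registers)
    until its reduction over the shared dimension is complete,
    so outputs are written back to memory only once.
    """
    m, k = len(A), len(A[0])
    n = len(B[0])
    C = [[0] * n for _ in range(m)]
    TILE = 4  # hypothetical register-tile width
    for i in range(m):
        for j0 in range(0, n, TILE):
            width = min(TILE, n - j0)
            acc = [0] * width              # outputs kept "in registers"
            for p in range(k):
                a = A[i][p]                # input value reused across the tile
                for t in range(width):
                    acc[t] += a * B[p][j0 + t]  # each weight streamed once
            for t in range(width):         # write back after the full reduction
                C[i][j0 + t] = acc[t]
    return C
```

The point of the pattern is that the innermost loop touches only the accumulators, one broadcast input value, and a contiguous run of weights, which is the shape a SIMD fused-multiply-add maps onto directly.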
no code implementations • 19 Dec 2022 • Xiaowen Qiu, Ruize Xu, Boan He, Yingtao Zhang, Wenqiang Zhang, Weifeng Ge
The style removal network removes the original image styles, and the style restoration network recovers image styles in a supervised manner.
1 code implementation • 25 Nov 2022 • Kenan Jiang, Xuehai He, Ruize Xu, Xin Eric Wang
Contrastive Language-Image Pretraining (CLIP) has demonstrated great zero-shot performance for matching images and text.
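At inference time, CLIP-style zero-shot matching reduces to picking the text embedding with the highest cosine similarity to the image embedding. The sketch below shows only that final step; in real CLIP the vectors come from trained image and text encoders, whereas the toy embeddings here are made up for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_match(image_emb, text_embs):
    """Return the index of the caption embedding closest to the image."""
    sims = [cosine(image_emb, t) for t in text_embs]
    return max(range(len(sims)), key=sims.__getitem__)

# Hypothetical embeddings (real ones come from CLIP's encoders):
image = [0.9, 0.1, 0.2]
captions = [[0.1, 0.9, 0.1],   # e.g. "a photo of a dog"
            [0.8, 0.2, 0.3]]   # e.g. "a photo of a cat"
best = zero_shot_match(image, captions)  # picks the closer caption
```

Because no caption-specific training is involved, swapping in a new set of candidate captions gives a new "classifier" for free, which is what makes the zero-shot setting work.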