Search Results for author: Hsu-Tung Shih

Found 2 papers, 0 papers with code

Zebra: Memory Bandwidth Reduction for CNN Accelerators With Zero Block Regularization of Activation Maps

no code implementations · 2 May 2022 · Hsu-Tung Shih, Tian-Sheuan Chang

The large memory bandwidth demand between the local buffer and external DRAM has become the speedup bottleneck of CNN hardware accelerators, especially for activation maps.

A Real Time 1280x720 Object Detection Chip With 585MB/s Memory Traffic

no code implementations · 2 May 2022 · Kuo-Wei Chang, Hsu-Tung Shih, Tian-Sheuan Chang, Shang-Hong Tsai, Chih-Chyau Yang, Chien-Ming Wu, Chun-Ming Huang

Memory bandwidth has become the real-time bottleneck of current deep learning accelerators (DLAs), particularly for high-definition (HD) object detection.

