no code implementations • ICLR 2020 • Ruizhe Zhao, Brian Vogel, Tanvir Ahmed
Mixed precision training (MPT) is becoming a practical technique for improving the speed and energy efficiency of training deep neural networks, leveraging the fast hardware support for IEEE half-precision (FP16) floating point available in existing GPUs.
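A minimal sketch of what such a mixed precision training loop typically looks like, using PyTorch's torch.cuda.amp autocast and GradScaler. This illustrates the generic FP16-with-loss-scaling recipe, not the paper's own method; the model, optimizer, and step function here are placeholders for illustration:

```python
import torch
from torch import nn

# Placeholder model and optimizer; any module works the same way.
model = nn.Linear(512, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# GradScaler implements dynamic loss scaling, which keeps small
# FP16 gradients from underflowing to zero during backprop.
scaler = torch.cuda.amp.GradScaler()

def train_step(inputs, targets):
    optimizer.zero_grad()
    # autocast runs eligible ops in FP16 and keeps the rest in FP32.
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()  # backprop through the scaled loss
    scaler.step(optimizer)         # unscales gradients, then updates weights
    scaler.update()                # grows or shrinks the scale factor
    return loss.item()
```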
no code implementations • 1 Aug 2019 • Seiya Tokui, Ryosuke Okuta, Takuya Akiba, Yusuke Niitani, Toru Ogawa, Shunta Saito, Shuji Suzuki, Kota Uenishi, Brian Vogel, Hiroyuki Yamazaki Vincent
Software frameworks for neural networks play a key role in the development and application of deep learning methods.