Search Results for author: Julian Faraone

Found 6 papers, 1 paper with code

A Block Minifloat Representation for Training Deep Neural Networks

no code implementations • ICLR 2021 • Sean Fox, Seyedramin Rasoulinezhad, Julian Faraone, David Boland, Philip Leong

Training Deep Neural Networks (DNNs) with high efficiency can be difficult to achieve with native floating point representations and commercially available hardware.
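The representation itself is not spelled out in this excerpt. The sketch below illustrates the general block minifloat idea under the assumption that each block of tensor values shares one exponent bias while individual values are rounded to a small (exponent, mantissa) minifloat grid; the function name, bit widths, and block size are illustrative choices, not the authors' implementation.

```python
import numpy as np

def block_minifloat_quantize(x, exp_bits=4, man_bits=3, block_size=16):
    """Illustrative block minifloat quantization (hypothetical sketch).

    Values in each block share one exponent bias, chosen so the block's
    largest magnitude lands at the top of a small (exp_bits, man_bits)
    minifloat range; each value is then rounded to that grid.
    """
    flat = np.asarray(x, dtype=np.float64).ravel()
    pad = (-flat.size) % block_size
    blocks = np.pad(flat, (0, pad)).reshape(-1, block_size)

    e_max = 2 ** exp_bits - 1               # largest exponent within a block
    out = np.zeros_like(blocks)
    for i, b in enumerate(blocks):
        max_mag = np.abs(b).max()
        if max_mag == 0.0:
            continue
        # Shared bias aligns the block maximum with exponent e_max.
        bias = np.floor(np.log2(max_mag)) - e_max
        scaled = b * 2.0 ** -bias
        # Per-value exponent, clamped to the representable range [0, e_max].
        mag = np.abs(scaled)
        exp = np.clip(np.floor(np.log2(np.where(mag > 0, mag, 1.0))), 0, e_max)
        step = 2.0 ** (exp - man_bits)      # spacing of the mantissa grid
        out[i] = np.round(scaled / step) * step * 2.0 ** bias
    return out.ravel()[:flat.size].reshape(np.shape(x))
```

Because only one bias is stored per block, the per-value storage cost stays close to the minifloat width while the block as a whole still tracks the local dynamic range.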

AddNet: Deep Neural Networks Using FPGA-Optimized Multipliers

no code implementations • 19 Nov 2019 • Julian Faraone, Martin Kumm, Martin Hardieck, Peter Zipf, Xueyuan Liu, David Boland, Philip H. W. Leong

Low-precision arithmetic operations to accelerate deep-learning applications on field-programmable gate arrays (FPGAs) have been studied extensively, because they offer the potential to save silicon area or increase throughput.

Quantization

Monte Carlo Deep Neural Network Arithmetic

no code implementations • 25 Sep 2019 • Julian Faraone, Philip Leong

We present a novel technique, Monte Carlo Deep Neural Network Arithmetic (MCA), for determining the sensitivity of Deep Neural Networks to quantization in floating point arithmetic. We do this by applying Monte Carlo Arithmetic to the inference computation and analyzing the relative standard deviation of the neural network loss.

Image Classification • Quantization
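A minimal sketch of the sensitivity measure described above, assuming Monte Carlo Arithmetic is approximated by injecting uniform relative rounding noise at a chosen virtual precision after every layer; `layers`, `loss_fn`, and the noise model are illustrative stand-ins rather than the paper's implementation.

```python
import numpy as np

def mca_perturb(x, virtual_precision, rng):
    """Inject uniform relative rounding noise of magnitude ~2**-virtual_precision."""
    noise = rng.uniform(-0.5, 0.5, size=np.shape(x))
    return x * (1.0 + noise * 2.0 ** -virtual_precision)

def loss_relative_std(layers, loss_fn, inputs, targets,
                      trials=30, virtual_precision=10, seed=0):
    """Repeat inference with randomized arithmetic; return std(loss)/|mean(loss)|."""
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(trials):
        a = inputs
        for layer in layers:                 # layers: list of callables
            a = mca_perturb(layer(a), virtual_precision, rng)
        losses.append(loss_fn(a, targets))
    losses = np.asarray(losses, dtype=np.float64)
    return losses.std() / abs(losses.mean())
```

Sweeping `virtual_precision` and watching where the relative standard deviation of the loss starts to grow gives an estimate of how much floating-point precision the network's inference actually needs.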

SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks

1 code implementation • CVPR 2018 • Julian Faraone, Nicholas Fraser, Michaela Blott, Philip H. W. Leong

An efficient way to reduce this complexity is to quantize the weight parameters and/or activations during training by approximating their distributions with a limited entry codebook.

Quantization
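A minimal sketch of codebook-style symmetric quantization in the spirit of the abstract, assuming a ternary codebook {-a, 0, +a} with one scaling factor per weight-matrix row; the thresholding heuristic and fitted (rather than learned) scaling factors are simplifications for illustration, not the paper's exact method.

```python
import numpy as np

def symmetric_ternary_quantize(w, threshold_ratio=0.05):
    """Map each row of a weight matrix onto a symmetric codebook {-a, 0, +a}.

    Entries below a per-row threshold become 0; the rest become +/- a per-row
    scaling factor set to the mean magnitude of the surviving entries.
    """
    w = np.atleast_2d(np.asarray(w, dtype=np.float64))
    thresholds = threshold_ratio * np.abs(w).max(axis=1, keepdims=True)
    codes = np.sign(w) * (np.abs(w) > thresholds)       # entries in {-1, 0, +1}
    survivors = np.abs(codes)
    counts = survivors.sum(axis=1, keepdims=True)
    scales = (np.abs(w) * survivors).sum(axis=1, keepdims=True) / np.maximum(counts, 1)
    return codes * scales

# Example: quantize a random layer while keeping the full-precision copy,
# which is what the gradient updates would act on during training.
w_fp = np.random.randn(4, 8)
w_q = symmetric_ternary_quantize(w_fp)
```

During training, quantizers of this kind are typically paired with a straight-through estimator: the quantized weights are used in the forward pass while gradients flow to the full-precision copy unchanged.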
