1 code implementation • 13 Sep 2023 • Chenghao Li, Dake Chen, Yuke Zhang, Peter A. Beerel
While diffusion models demonstrate a remarkable capability for generating high-quality images, their tendency to 'replicate' training data raises privacy concerns.
no code implementations • 26 Apr 2023 • Souvik Kundu, Yuke Zhang, Dake Chen, Peter A. Beerel
The large number of ReLU and MAC operations in deep neural networks makes them ill-suited for latency- and compute-efficient private inference.
no code implementations • 23 Jan 2023 • Souvik Kundu, Shunlin Lu, Yuke Zhang, Jacqueline Liu, Peter A. Beerel
For a similar ReLU budget, SENet can yield models with ~2.32% improved classification accuracy, evaluated on CIFAR-100.
no code implementations • ICCV 2023 • Yuke Zhang, Dake Chen, Souvik Kundu, Chenghao Li, Peter A. Beerel
Then, given our observation that external attention (EA) presents lower PI latency than widely adopted self-attention (SA) at the cost of accuracy, we present a selective attention search (SAS) method to integrate the strengths of EA and SA.
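The latency gap between EA and SA comes from their cost structure: self-attention compares every token with every other token (quadratic in sequence length n), while external attention compares tokens only against a small learnable external memory of S slots (linear in n). A minimal NumPy sketch illustrating this difference — this is a simplified illustration using a single softmax over the memory dimension, not the paper's implementation; the memory sizes and names (`Mk`, `Mv`, `S = 8`) are assumptions for the example:

```python
import numpy as np

def external_attention(x, Mk, Mv):
    """Simplified external-attention sketch.
    x:  (n, d) token features.
    Mk, Mv: (S, d) small learnable external memories, with S << n,
    so the (n, S) attention map is linear in n rather than quadratic."""
    scores = x @ Mk.T                                  # (n, S) similarity to memory keys
    scores -= scores.max(axis=1, keepdims=True)        # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)            # softmax over the S memory slots
    return attn @ Mv                                   # (n, d) output

def self_attention(x):
    """Standard self-attention (no projections, for comparison):
    builds an (n, n) map, quadratic in sequence length."""
    scores = x @ x.T / np.sqrt(x.shape[1])             # (n, n)
    scores -= scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ x

rng = np.random.default_rng(0)
n, d, S = 16, 32, 8                                    # S << n is what saves latency
x = rng.normal(size=(n, d))
Mk = rng.normal(size=(S, d))
Mv = rng.normal(size=(S, d))
out = external_attention(x, Mk, Mv)
print(out.shape)
```

In a private-inference setting, the smaller (n, S) attention map also means fewer expensive secure multiplications than the (n, n) map of self-attention, which is consistent with the latency observation above.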
no code implementations • 9 Nov 2018 • Ching-Yun Ko, Cong Chen, Yuke Zhang, Kim Batselier, Ngai Wong
Sum-product networks (SPNs) represent an emerging class of neural networks with clear probabilistic semantics and superior inference speed over graphical models.