no code implementations • 27 Mar 2021 • Ziheng Jiang, Animesh Jain, Andrew Liu, Josh Fromm, Chengqian Ma, Tianqi Chen, Luis Ceze
Quantization is a key technique for reducing the resource requirements and improving the performance of deployed neural networks.
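For context, quantization typically maps float32 values to low-precision integers using a calibrated scale. The following is a minimal sketch of symmetric per-tensor int8 quantization, a common baseline scheme; it is illustrative only and not the specific method proposed in this paper.

```python
# Minimal sketch: symmetric per-tensor int8 post-training quantization.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map float32 values to int8 with a single per-tensor scale."""
    scale = max(np.abs(x).max() / 127.0, 1e-12)  # largest magnitude maps to +/-127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, s)).max())
```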
no code implementations • 20 Jan 2021 • Xin Liu, Yuang Li, Josh Fromm, Yuntao Wang, Ziheng Jiang, Alex Mariakakis, Shwetak Patel
In this work, we demonstrate state-of-the-art latency and accuracy for on-device super-resolution using SplitSR, a novel hybrid architecture built on a lightweight residual block called SplitSRBlock.
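A hedged PyTorch sketch of the channel-splitting idea behind a block like SplitSRBlock is shown below: convolve only a fraction of the channels and pass the rest through, trading representational capacity for on-device speed. The split ratio `alpha`, the layer choices, and the channel-rotation trick are illustrative assumptions, not the exact design from the paper.

```python
# Sketch of a channel-splitting residual block (assumptions noted above).
import torch
import torch.nn as nn

class SplitResidualBlock(nn.Module):
    def __init__(self, channels: int, alpha: float = 0.25):
        super().__init__()
        self.active = max(1, int(channels * alpha))  # channels that get convolved
        self.conv = nn.Sequential(
            nn.Conv2d(self.active, self.active, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(self.active, self.active, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = x[:, : self.active], x[:, self.active :]
        # Concatenating the passive part first rotates the channel order,
        # so successive blocks convolve different subsets of channels.
        return torch.cat([b, a + self.conv(a)], dim=1)

x = torch.randn(1, 32, 48, 48)
print(SplitResidualBlock(32)(x).shape)  # torch.Size([1, 32, 48, 48])
```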
1 code implementation • 5 Oct 2020 • Xin Liu, Ziheng Jiang, Josh Fromm, Xuhai Xu, Shwetak Patel, Daniel McDuff
Large individual differences in physiological processes make it challenging to design personalized health sensing algorithms.
3 code implementations • NeurIPS 2020 • Xin Liu, Josh Fromm, Shwetak Patel, Daniel McDuff
Telehealth and remote health monitoring have become increasingly important during the SARS-CoV-2 pandemic, and it is widely expected that this will have a lasting impact on healthcare practices.
no code implementations • 11 Jul 2018 • Thierry Moreau, Tianqi Chen, Luis Vega, Jared Roesch, Eddie Yan, Lianmin Zheng, Josh Fromm, Ziheng Jiang, Luis Ceze, Carlos Guestrin, Arvind Krishnamurthy
Specialized Deep Learning (DL) acceleration stacks, designed for a specific set of frameworks, model architectures, operators, and data types, offer the allure of high performance while sacrificing flexibility.
no code implementations • ICLR 2018 • Josh Fromm, Shwetak Patel, Matthai Philipose
Recent work has shown that fast, compact low-bitwidth neural networks can be surprisingly accurate.
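As a concrete illustration of the low-bitwidth setting, the sketch below binarizes convolution weights to 1 bit with a per-filter scaling factor, in the style of XNOR-Net; the paper itself studies bitwidth choices beyond this simple binary baseline.

```python
# Minimal sketch: 1-bit weight binarization with per-filter L1 scaling.
import torch

def binarize(w: torch.Tensor) -> torch.Tensor:
    """Approximate w with sign(w) times a per-output-channel scale."""
    alpha = w.abs().mean(dim=(1, 2, 3), keepdim=True)  # L1 scaling factor
    return torch.sign(w) * alpha

w = torch.randn(16, 3, 3, 3)      # conv weights: out, in, kH, kW
w_bin = binarize(w)
print((w - w_bin).abs().mean())   # mean approximation error
```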