1 code implementation • 20 Feb 2023 • Kuan-Lin Chen, Ching-Hua Lee, Bhaskar D. Rao, Harinath Garudadri
However, the best-performing design of time-frequency (T-F) weights is criterion-dependent in general.
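As a minimal illustration of criterion-dependent T-F weight designs (a toy sketch, not the paper's method): given per-bin clean and noise power estimates, the classic Wiener gain minimizes spectral MSE, while the ideal ratio mask (IRM) is a common magnitude-domain target, and the two differ on every bin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean and noise power spectra for a toy spectrogram
# (16 frequency bins x 10 frames); values are illustrative only.
S = rng.uniform(0.1, 2.0, size=(16, 10))   # clean power per T-F bin
N = rng.uniform(0.1, 2.0, size=(16, 10))   # noise power per T-F bin

# Two classic T-F weight designs that optimize different criteria:
wiener = S / (S + N)            # Wiener gain, minimizes spectral MSE
irm = np.sqrt(S / (S + N))      # ideal ratio mask, a magnitude-domain target
```

Since both gains lie in (0, 1) and the IRM is the square root of the Wiener gain, the IRM is always the less aggressive of the two on each bin.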
1 code implementation • 13 Oct 2022 • Kuan-Lin Chen, Harinath Garudadri, Bhaskar D. Rao
When the number of pieces is unknown, we prove that, measured by the number of distinct linear components, the neural complexity of any CPWL function grows at most polynomially for low-dimensional inputs and at most factorially in the worst case, significantly improving on existing results in the literature.
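For intuition on representing a CPWL function with ReLU units (a standard identity, far simpler than the general constructions the bounds above cover): max(a, b) is CPWL with two linear pieces and is realized exactly by a single ReLU.

```python
import numpy as np

def relu(t):
    # The basic CPWL nonlinearity used throughout ReLU-network constructions.
    return np.maximum(t, 0.0)

def cpwl_max(a, b):
    # max(a, b) = b + relu(a - b): a 2-piece CPWL function from one ReLU.
    return b + relu(a - b)
```

Nesting this identity builds the max of many affine functions, which is the basic building block for expressing general CPWL functions with ReLU networks.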
no code implementations • 17 Nov 2021 • Kuan-Lin Chen, Ching-Hua Lee, Bhaskar D. Rao, Harinath Garudadri
Specifically, we study the effects of using different numbers of subbands and various sparsity penalty terms for quasi-sparse, sparse, and dispersive systems.
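One simple example of a sparsity-penalized adaptive filter update is the zero-attracting LMS, which adds an l1-style shrinkage term to the ordinary LMS update; this is a sketch of the general idea, not the paper's exact subband algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sparse unknown system with 16 taps, only two nonzero.
w_true = np.zeros(16)
w_true[[2, 9]] = [1.0, -0.5]

w = np.zeros(16)
mu, rho = 0.01, 1e-4        # step size and l1 shrinkage strength
for _ in range(5000):
    x = rng.normal(size=16)             # input regressor
    d = w_true @ x                      # desired (noiseless) response
    e = d - w @ x                       # a priori error
    # LMS gradient step plus a zero-attracting (sign) term that
    # pulls small coefficients toward zero.
    w += mu * e * x - rho * np.sign(w)
```

The sign term biases the estimate slightly but speeds convergence on quasi-sparse and sparse systems; for dispersive systems the penalty can hurt, which is one reason the best penalty is system-dependent.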
4 code implementations • NeurIPS 2021 • Kuan-Lin Chen, Ching-Hua Lee, Harinath Garudadri, Bhaskar D. Rao
To codify this difference in nonlinearities and reveal a linear estimation property, we define ResNEsts, i.e., Residual Nonlinear Estimators, by simply dropping the nonlinearities at the last residual representation of standard ResNets.
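A toy sketch of the linear estimation property under stated assumptions (the block structure and dimensions here are illustrative, not the paper's architecture): if the final prediction is a purely linear map of the last residual representation, the best final layer in the least-squares sense has a closed-form solution.

```python
import numpy as np

rng = np.random.default_rng(4)

def residual_stack(x, weights):
    # Toy residual blocks: h <- h + relu(W h).
    h = x
    for W in weights:
        h = h + np.maximum(W @ h, 0.0)
    return h

# Hypothetical dimensions and data for illustration.
d, n = 8, 50
weights = [rng.normal(scale=0.1, size=(d, d)) for _ in range(3)]
X = rng.normal(size=(d, n))
y = rng.normal(size=(1, n))              # toy regression targets

# Last residual representations; with no nonlinearity after them,
# the prediction A @ H is linear in H, so the optimal final layer
# is an ordinary least-squares estimate.
H = residual_stack(X, weights)           # shape (d, n)
A = y @ np.linalg.pinv(H)                # closed-form linear estimator
```

Because A is the least-squares solution, its residual is orthogonal to the representation, exactly as in classical linear estimation.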
no code implementations • 9 Jun 2021 • Yu-Chen Lin, Tsun-An Hsieh, Kuo-Hsuan Hung, Cheng Yu, Harinath Garudadri, Yu Tsao, Tei-Wei Kuo
Incomplete speech inputs severely degrade the performance of related speech signal processing applications.
no code implementations • 7 Apr 2016 • Igor Fedorov, Alican Nalci, Ritwik Giri, Bhaskar D. Rao, Truong Q. Nguyen, Harinath Garudadri
We show that the proposed framework encompasses a large class of S-NNLS algorithms and provide a computationally efficient inference procedure based on multiplicative update rules.
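As a minimal sketch of a multiplicative update for non-negative least squares (a Lee-Seung-style rule, one simple instance of the class of S-NNLS updates rather than the paper's general inference procedure): all factors in the update are non-negative, so iterates stay non-negative automatically.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy S-NNLS problem: recover a sparse non-negative x from y = A x.
m, n = 40, 20
A = np.abs(rng.normal(size=(m, n)))      # non-negative dictionary
x_true = np.zeros(n)
x_true[:3] = [1.0, 2.0, 0.5]             # sparse non-negative ground truth
y = A @ x_true

x0 = np.full(n, 0.5)                     # strictly positive initialization
x = x0.copy()
for _ in range(2000):
    # Multiplicative update: x <- x * (A^T y) / (A^T A x).
    # Every factor is non-negative, so non-negativity is preserved;
    # the small epsilon guards against division by zero.
    x *= (A.T @ y) / (A.T @ (A @ x) + 1e-12)
```

The update monotonically reduces the residual while keeping x in the non-negative orthant, which is why multiplicative rules are a computationally cheap fit for NNLS-type problems.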