no code implementations • 19 Jun 2022 • Hui Zhang, Shenglong Zhou, Geoffrey Ye Li, Naihua Xiu
The step function is one of the simplest and most natural activation functions for deep neural networks (DNNs).
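As a small illustration (not from the paper itself), the step function used as an activation simply maps each pre-activation to a binary output:

```python
import numpy as np

def step(x):
    # Heaviside step activation: 1 for non-negative inputs, 0 otherwise
    return np.where(x >= 0, 1.0, 0.0)

# A layer with step activations produces binary features
x = np.array([-1.5, 0.0, 2.3])
print(step(x))  # [0. 1. 1.]
```

Its zero gradient almost everywhere is what makes training step-activated DNNs difficult with standard backpropagation, which motivates studying it directly.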
no code implementations • 16 Dec 2019 • Huajun Wang, Yuan-Hai Shao, Shenglong Zhou, Ce Zhang, Naihua Xiu
To distinguish it from all of them, in this paper we introduce a new model equipped with an $L_{0/1}$ soft-margin loss (dubbed $L_{0/1}$-SVM), which captures the nature of binary classification well.
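To sketch the idea (this is an illustrative reading of the 0/1 loss, not the paper's algorithm): the $L_{0/1}$ loss counts the samples whose unit margin is violated, rather than penalizing the amount of violation as the hinge loss does.

```python
import numpy as np

def l01_loss(w, b, X, y):
    # 0/1 margin loss: count samples with 1 - y*(w.x + b) > 0,
    # i.e. samples that fail to achieve a unit margin
    margins = 1.0 - y * (X @ w + b)
    return int(np.sum(margins > 0))

X = np.array([[2.0, 0.0], [-2.0, 0.0], [0.5, 0.0]])
y = np.array([1.0, -1.0, 1.0])
w = np.array([1.0, 0.0])
print(l01_loss(w, 0.0, X, y))  # 1 (only the third sample violates the margin)
```

Because this loss is discontinuous and non-convex, optimizing it exactly is hard, which is why most SVM variants replace it with convex surrogates.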
no code implementations • 22 Oct 2019 • Chun-Na Li, Yuan-Hai Shao, Huajun Wang, Yu-Ting Zhao, Ling-Wei Huang, Naihua Xiu, Nai-Yang Deng
The other type constructs all the hyperplanes simultaneously, solving one large optimization problem with a loss ascertained for each sample.
1 code implementation • 9 Jan 2019 • Shenglong Zhou, Naihua Xiu, Hou-Duo Qi
Algorithms based on the hard thresholding principle have been well studied, with sound theoretical guarantees, in compressed sensing and more general sparsity-constrained optimization.
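As a minimal sketch of the hard thresholding principle (a generic iterative hard thresholding loop, not the algorithm proposed in this paper): each iteration takes a gradient step on the least-squares objective and then keeps only the $s$ largest-magnitude entries.

```python
import numpy as np

def hard_threshold(x, s):
    # Keep the s largest-magnitude entries of x, zero out the rest
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

def iht(A, b, s, step=None, iters=200):
    # Iterative hard thresholding for min ||Ax - b||^2  s.t.  ||x||_0 <= s
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x - step * A.T @ (A @ x - b), s)
    return x
```

For example, with `A = np.eye(10)` and a 3-sparse `b`, `iht(A, b, 3)` recovers `b` exactly; in general, recovery guarantees require conditions such as the restricted isometry property on `A`.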
Optimization and Control
no code implementations • 17 Jul 2014 • Shenglong Zhou, Naihua Xiu, Ziyan Luo, Lingchen Kong
This paper aims at obtaining a simultaneously sparse and low-rank estimator of the semidefinite population covariance matrix.
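To make the two structural constraints concrete (a heuristic one-pass projection for illustration only; it does not enforce both constraints exactly and is not the paper's estimator): sparsity can be promoted by entrywise hard thresholding and low rank plus positive semidefiniteness by eigenvalue truncation.

```python
import numpy as np

def sparse_lowrank_project(S, tau, r):
    # Heuristic pass: entrywise hard thresholding for sparsity,
    # then keep the r largest nonnegative eigenvalues for low rank and PSD.
    # Note: the eigenvalue step may destroy exact sparsity.
    T = np.where(np.abs(S) >= tau, S, 0.0)
    T = (T + T.T) / 2                       # re-symmetrize
    vals, vecs = np.linalg.eigh(T)
    vals = np.maximum(vals, 0.0)            # project onto the PSD cone
    mask = np.zeros_like(vals)
    mask[np.argsort(vals)[-r:]] = 1.0       # r largest eigenvalues
    return vecs @ np.diag(vals * mask) @ vecs.T
```

The output is symmetric, positive semidefinite, and has rank at most `r`; reconciling this with exact sparsity is precisely what makes the joint estimation problem nontrivial.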