1 code implementation • 5 Jan 2021 • Hyeongmin Cho, Sangkyun Lee
Classical data quality measures tend to focus only on class separability; however, we suggest that in-class variability is another important data quality factor.
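The paper's exact measures are not given in this excerpt, but the two notions can be illustrated with a minimal sketch: `class_separability` and `in_class_variability` below are hypothetical stand-ins (centroid distance and mean within-class variance), not the authors' definitions.

```python
import numpy as np

def class_separability(X, y):
    """Mean pairwise distance between class centroids (higher = more separable).
    Hypothetical illustration, not the paper's measure."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    dists = [np.linalg.norm(centroids[i] - centroids[j])
             for i in range(len(classes)) for j in range(i + 1, len(classes))]
    return float(np.mean(dists))

def in_class_variability(X, y):
    """Mean per-feature variance within each class (higher = more in-class spread).
    Hypothetical illustration, not the paper's measure."""
    return float(np.mean([X[y == c].var(axis=0).mean() for c in np.unique(y)]))
```

Two datasets can share the same separability yet differ in in-class variability, which is why a separability-only measure is incomplete.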
no code implementations • 24 Oct 2019 • Sangkyun Lee, Piotr Sobczyk, Malgorzata Bogdan
Adapting the sorted $\ell_1$ (SL1) penalty to probabilistic graphical models, we show that it can be used for structure learning of Gaussian MRFs via our procedure nsSLOPE (neighborhood selection Sorted L-One Penalized Estimation), which controls the false discovery rate (FDR) of edge detection.
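The SL1 (SLOPE) penalty underlying this line of work pairs the sorted absolute coefficients with a non-increasing weight sequence; larger coefficients receive larger penalties. A minimal sketch of evaluating it (the weight sequence `lam` is left to the caller; in practice it is often chosen from Benjamini–Hochberg-style quantiles to control FDR):

```python
import numpy as np

def sorted_l1_norm(beta, lam):
    """SLOPE penalty: sum_i lam_i * |beta|_(i), where |beta|_(1) >= |beta|_(2) >= ...
    and lam_1 >= lam_2 >= ... >= 0. Reduces to lam * ||beta||_1 when lam is constant."""
    abs_desc = np.sort(np.abs(beta))[::-1]   # |beta| sorted in descending order
    lam_desc = np.sort(lam)[::-1]            # enforce non-increasing weights
    return float(np.dot(lam_desc, abs_desc))
```

With a constant weight sequence this is exactly the ordinary $\ell_1$ norm scaled by that weight, so SLOPE strictly generalizes the lasso penalty.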
1 code implementation • 20 May 2019 • Sangkyun Lee, Jeonghyun Lee
Even though $\ell_1$-based sparse coding for model compression is not new, we show that it can be far more effective than previously reported when combined with proximal point algorithms and debiasing.
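The two ingredients named here have standard forms: the proximal operator of the $\ell_1$ norm is elementwise soft-thresholding, and debiasing refits the nonzero coefficients by least squares on the selected support. A minimal sketch of both (the function names are ours, and this is a generic illustration rather than the paper's compression pipeline):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def debias(X, y, beta_sparse):
    """Debiasing step: least-squares refit restricted to the support of beta_sparse,
    removing the shrinkage bias that soft-thresholding introduces."""
    support = np.flatnonzero(beta_sparse)
    beta = np.zeros_like(beta_sparse, dtype=float)
    if support.size:
        beta[support], *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    return beta
```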
no code implementations • 6 Oct 2017 • Philipp J. Kremer, Sangkyun Lee, Malgorzata Bogdan, Sandra Paterlini
We introduce a financial portfolio optimization framework that automatically selects the relevant assets and estimates their weights by relying on a sorted $\ell_1$-norm penalty, henceforth SLOPE.
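A plausible shape for such an objective is a mean-variance risk term plus the sorted-$\ell_1$ penalty on the weight vector; the sketch below evaluates that penalized risk and is an assumption about the general form, not the paper's exact formulation:

```python
import numpy as np

def slope_portfolio_objective(w, Sigma, lam):
    """Penalized portfolio risk: 0.5 * w' Sigma w + sum_i lam_i * |w|_(i),
    where the second term is the sorted-l1 (SLOPE) penalty on the weights.
    Illustrative form only; the paper's objective may include return targets etc."""
    risk = 0.5 * float(w @ Sigma @ w)
    penalty = float(np.dot(np.sort(lam)[::-1], np.sort(np.abs(w))[::-1]))
    return risk + penalty
```

Because SLOPE penalizes large weights more heavily and ties small weights together, it tends to both sparsify the portfolio and group assets with similar exposures.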
no code implementations • 18 Nov 2015 • Sangkyun Lee, Damian Brzyski, Malgorzata Bogdan
In this paper we propose a primal-dual proximal extragradient algorithm to solve the generalized Dantzig selector (GDS) estimation problem, based on a new convex-concave saddle-point (SP) reformulation.
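For context, the classical Dantzig selector (which the GDS generalizes) minimizes $\|\beta\|_1$ subject to $\|X^\top(y - X\beta)\|_\infty \le \delta$. A minimal sketch of checking that feasibility constraint, assuming the classical form rather than the paper's generalized one:

```python
import numpy as np

def dantzig_feasible(X, y, beta, delta):
    """Check the classical Dantzig selector constraint:
    the maximum absolute correlation of the residual with any column of X
    must not exceed delta."""
    corr = X.T @ (y - X @ beta)
    return float(np.max(np.abs(corr))) <= delta
```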