1 code implementation • 21 Nov 2023 • Armenak Petrosyan, Konstantin Pieper, Hoang Tran
We propose and analyze an efficient algorithm for solving the joint sparse recovery problem using a new regularization-based method, named orthogonally weighted $\ell_{2, 1}$ ($\mathit{ow}\ell_{2, 1}$), which is specifically designed to take into account the rank of the solution matrix.
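The snippet does not spell out the $\mathit{ow}\ell_{2,1}$ penalty itself, so as background, here is a minimal sketch of the plain $\ell_{2,1}$ norm it builds on: the sum of the Euclidean norms of the rows, whose penalization drives entire rows to zero (the shared support sought in joint sparse recovery). The matrix below is illustrative.

```python
import numpy as np

def l21_norm(X):
    """Standard l_{2,1} norm: the sum of the Euclidean norms of the rows.

    Penalizing this quantity drives entire rows of X to zero, which is the
    shared (joint) sparsity pattern sought in joint sparse recovery.
    """
    return np.linalg.norm(X, axis=1).sum()

# A row-sparse matrix: only two of five rows are nonzero.
X = np.zeros((5, 3))
X[1] = [1.0, -2.0, 0.5]
X[3] = [0.3, 0.0, 1.1]
print(l21_norm(X))  # only the two nonzero rows contribute
```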
no code implementations • 19 Jan 2023 • Boris Mocialov, Eirik Eythorsson, Reza Parseh, Hoang Tran, Vegard Flovik
This work examines data models commonly used in digital twins and presents preliminary results from surface reconstruction and semantic segmentation models trained on simulated data.
no code implementations • 12 Oct 2022 • Qinzi Zhang, Hoang Tran, Ashok Cutkosky
We develop a new reduction that converts any online convex optimization algorithm suffering $O(\sqrt{T})$ regret into an $\epsilon$-differentially private stochastic convex optimization algorithm with the optimal convergence rate $\tilde O(1/\sqrt{T} + \sqrt{d}/\epsilon T)$ on smooth losses in linear time, forming a direct analogy to the classical non-private "online-to-batch" conversion.
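For context, the classical non-private online-to-batch conversion referenced here turns an online learner with sublinear regret into a stochastic optimization method by averaging its iterates. A minimal sketch using online gradient descent on a noisy quadratic; the loss, step size, and horizon are illustrative choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 2000
theta_star = rng.normal(size=d)

def stoch_grad(w):
    # Stochastic gradient of the toy loss f(w) = 0.5 * ||w - theta*||^2.
    return (w - theta_star) + 0.1 * rng.normal(size=d)

# Online gradient descent with an O(1/sqrt(t)) step size; the averaged
# iterate is the "batch" output of the online-to-batch conversion.
w = np.zeros(d)
avg = np.zeros(d)
for t in range(1, T + 1):
    w = w - (1.0 / np.sqrt(t)) * stoch_grad(w)
    avg += (w - avg) / t  # running average of the iterates

print(np.linalg.norm(avg - theta_star))  # the average approaches theta*
```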
no code implementations • 12 Oct 2022 • Hoang Tran, Ashok Cutkosky
We introduce new algorithms and convergence guarantees for privacy-preserving non-convex Empirical Risk Minimization (ERM) on smooth $d$-dimensional objectives.
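The snippet does not describe the new algorithms, but a standard baseline for private ERM in this setting is gradient clipping followed by Gaussian noise (DP-SGD style). A hedged sketch only: the clipping threshold `C`, noise scale `sigma`, and toy objective are illustrative placeholders, and `sigma` is not calibrated to any $(\epsilon, \delta)$ budget here.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(g, C):
    """Rescale g so its Euclidean norm is at most C (bounds sensitivity)."""
    n = np.linalg.norm(g)
    return g if n <= C else g * (C / n)

def noisy_sgd_step(w, grad_fn, lr=0.1, C=1.0, sigma=1.0):
    # Clip the stochastic gradient, then add Gaussian noise; in a real
    # algorithm sigma would be calibrated to the privacy budget.
    g = clip(grad_fn(w), C)
    g = g + sigma * C * rng.normal(size=w.shape)
    return w - lr * g

# Toy smooth non-convex objective: f(w) = sum(w^2) + sum(sin(w)).
grad_fn = lambda w: 2 * w + np.cos(w)
w = rng.normal(size=4)
for _ in range(200):
    w = noisy_sgd_step(w, grad_fn)
print(w)
```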
no code implementations • 18 Feb 2022 • Majdi I. Radaideh, Hoang Tran, Lianshan Lin, Hao Jiang, Drew Winder, Sarma Gorti, Guannan Zhang, Justin Mach, Sarah Cousineau
Given that some of the calibrated parameters that agree well with the experimental data may correspond to nonphysical mercury properties, we need a more advanced two-phase flow model to capture bubble dynamics and mercury cavitation.
1 code implementation • 4 Mar 2021 • Hoang Tran, Ashok Cutkosky
We develop a new algorithm for non-convex stochastic optimization that finds an $\epsilon$-critical point using the optimal $O(\epsilon^{-3})$ number of stochastic gradient and Hessian-vector product computations.
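Hessian-vector products can be computed without ever forming the Hessian, which is what makes such complexity bounds attainable in high dimension. A minimal finite-difference sketch on a toy objective (the function and step size `eps` are illustrative; automatic differentiation would give an exact product):

```python
import numpy as np

def grad(x):
    # Gradient of the toy non-convex objective f(x) = sum(x^4)/4 - sum(x^2)/2.
    return x**3 - x

def hvp(x, v, eps=1e-5):
    """Approximate H(x) @ v via a forward difference of the gradient."""
    return (grad(x + eps * v) - grad(x)) / eps

x = np.array([0.5, -1.2, 2.0])
v = np.array([1.0, 0.0, -1.0])
# The exact Hessian of f is diag(3x^2 - 1), so we can check the result.
print(hvp(x, v))
print((3 * x**2 - 1) * v)
```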
1 code implementation • 3 Nov 2020 • Hoang Tran, Guannan Zhang
The local gradient points in the direction of steepest ascent within an infinitesimal neighborhood.
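A smoothed (nonlocal) gradient, by contrast, aggregates function values over a neighborhood of finite radius. A minimal sketch of the standard Monte Carlo Gaussian-smoothing estimator with antithetic samples; the test function, `sigma`, and sample count are illustrative, and the paper's nonlocal construction may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sum(x**2)  # toy objective; its local gradient is 2x

def smoothed_grad(x, sigma=0.5, n_samples=5000):
    """Monte Carlo estimate of the Gaussian-smoothed gradient.

    Antithetic estimator E[(f(x + s*u) - f(x - s*u)) * u / (2s)], u ~ N(0, I).
    Unlike the local gradient, it aggregates f over a sigma-sized region.
    """
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.normal(size=x.shape)
        g += (f(x + sigma * u) - f(x - sigma * u)) * u / (2 * sigma)
    return g / n_samples

x = np.array([1.0, -2.0])
print(smoothed_grad(x))  # close to the local gradient 2x for this smooth f
```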
no code implementations • 13 Apr 2020 • Yiming Xu, Akil Narayan, Hoang Tran, Clayton G. Webster
We first propose a novel criterion that guarantees that an $s$-sparse signal is the local minimizer of the $\ell_1/\ell_2$ objective; our criterion is interpretable and useful in practice.
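To see why minimizing $\ell_1/\ell_2$ favors sparsity: the ratio is scale-invariant, equals $\sqrt{s}$ for an $s$-sparse vector with equal-magnitude entries, and reaches $\sqrt{d}$ for a fully dense one. A quick numerical check with illustrative vectors:

```python
import numpy as np

def l1_over_l2(x):
    """Scale-invariant sparsity measure: ||x||_1 / ||x||_2, in [1, sqrt(d)]."""
    return np.linalg.norm(x, 1) / np.linalg.norm(x, 2)

d = 100
sparse = np.zeros(d)
sparse[:3] = 1.0               # 3-sparse vector
dense = np.ones(d)             # fully dense vector
print(l1_over_l2(sparse))      # sqrt(3) ~ 1.73
print(l1_over_l2(dense))       # sqrt(100) = 10
```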
no code implementations • 21 Feb 2020 • Jiaxing Zhang, Hoang Tran, Guannan Zhang
Evolution strategy (ES) has shown great promise in many challenging reinforcement learning (RL) tasks, rivaling other state-of-the-art deep RL methods.
1 code implementation • 7 Feb 2020 • Jiaxin Zhang, Hoang Tran, Dan Lu, Guannan Zhang
Standard ES methods with $d$-dimensional Gaussian smoothing suffer from the curse of dimensionality due to the high variance of Monte Carlo (MC) based gradient estimators.
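That variance growth is easy to observe numerically. For a linear function the Gaussian-smoothed gradient is exact, yet the per-coordinate variance of the single-sample MC estimator still grows linearly with $d$; the function and sample sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_grad_samples(d, sigma=0.1, n=2000):
    """Single-sample MC estimates of the smoothed gradient of f(x) = sum(x)."""
    a = np.ones(d)                    # true gradient of f is the all-ones vector
    x = np.zeros(d)
    u = rng.normal(size=(n, d))
    fx = (x + sigma * u) @ a          # f evaluated at the perturbed points
    return (fx[:, None] * u) / sigma  # estimator: f(x + sigma*u) * u / sigma

for d in (10, 100, 1000):
    est = mc_grad_samples(d)
    # The mean per-coordinate variance grows roughly linearly in d.
    print(d, est.var(axis=0).mean())
```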
no code implementations • 30 Dec 2018 • Yiyuan She, Hoang Tran
In high-dimensional data analysis, regularization methods pursuing sparsity and/or low rank have recently received considerable attention.
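As concrete instances of the two families mentioned: soft-thresholding is the proximal operator of the $\ell_1$ norm (sparsity), and singular value thresholding is the proximal operator of the nuclear norm (low rank). A minimal sketch; the inputs and thresholds are illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    """Prox of lam*||x||_1: shrinks entries toward zero, zeroing small ones."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def svt(X, lam):
    """Prox of lam*||X||_*: soft-thresholds the singular values (low rank)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, lam)) @ Vt

x = np.array([3.0, -0.5, 0.2, -4.0])
print(soft_threshold(x, 1.0))  # small entries are set exactly to zero

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
print(np.linalg.matrix_rank(svt(X, 1.5)))  # rank typically drops
```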