Search Results for author: Sungyoon Lee

Found 7 papers, 2 papers with code

Prediction Risk and Estimation Risk of the Ridgeless Least Squares Estimator under General Assumptions on Regression Errors

no code implementations • 22 May 2023 • Sungyoon Lee, Sokbae Lee

In recent years, there has been significant growth in research on minimum $\ell_2$-norm (ridgeless) interpolating least squares estimators (a sketch of the estimator follows this entry).

Regression
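
As an illustration of the object studied (a sketch of ours, not code from the paper), the ridgeless estimator is the minimum-norm solution among all interpolating solutions, computable via the Moore-Penrose pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 200  # overparameterized regime: more features than samples
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Ridgeless (minimum l2-norm) least squares: the limit of ridge regression
# as the penalty goes to 0+, computed via the Moore-Penrose pseudoinverse.
beta_hat = np.linalg.pinv(X) @ y

# When p > n (and X has full row rank), the estimator interpolates the data.
assert np.allclose(X @ beta_hat, y)
```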

Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples

1 code implementation • NeurIPS 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee

We identify another key factor that influences the performance of certifiable training: the smoothness of the loss landscape.
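
The snippet names the factor without measuring it; as a rough, hand-rolled probe (our assumption, not the paper's method), local loss-landscape smoothness can be gauged by how much the loss moves under small random weight perturbations:

```python
import torch

def loss_change_under_weight_noise(model, loss_fn, x, y, sigma=1e-3, n_samples=10):
    """Crude probe of local loss-landscape smoothness: average absolute loss
    change under small Gaussian perturbations of the weights."""
    with torch.no_grad():
        base = loss_fn(model(x), y).item()
        total = 0.0
        params = list(model.parameters())
        for _ in range(n_samples):
            noise = [sigma * torch.randn_like(p) for p in params]
            for p, eps in zip(params, noise):
                p.add_(eps)                     # perturb weights in place
            total += abs(loss_fn(model(x), y).item() - base)
            for p, eps in zip(params, noise):
                p.sub_(eps)                     # restore original weights
    return total / n_samples
```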

Implicit Jacobian regularization weighted with impurity of probability output

no code implementations • 29 Sep 2021 • Sungyoon Lee, Jinseong Park, Jaewook Lee

The eigendecomposition provides a simple relation between the eigenvalues of the low-dimensional matrix and the impurity of the probability output (an impurity sketch follows this entry).

Relation
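
The snippet does not define its impurity measure; assuming a Gini-style impurity of the softmax output (our assumption, a common choice), a minimal sketch:

```python
import numpy as np

def gini_impurity(probs):
    """Gini impurity 1 - sum_k p_k^2 of a probability vector: zero for a
    one-hot (fully confident) output, maximal for a uniform output."""
    probs = np.asarray(probs, dtype=float)
    return 1.0 - np.sum(probs ** 2)

print(gini_impurity([1.0, 0.0, 0.0]))     # 0.0  (confident prediction)
print(gini_impurity([1/3, 1/3, 1/3]))     # ~0.667 (maximally impure)
```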

Bridged Adversarial Training

no code implementations • 25 Aug 2021 • Hoki Kim, Woojin Lee, Sungyoon Lee, Jaewook Lee

Adversarial robustness is considered a required property of deep neural networks.

Adversarial Robustness

GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization

no code implementations • 6 Jul 2021 • Sungyoon Lee, Hoki Kim, Jaewook Lee

Our experiments on MNIST, CIFAR10, and STL10 show that our proposed GradDiv regularizations improve the adversarial robustness of randomized neural networks against a variety of state-of-the-art attack methods (a sketch of a diversity penalty follows this entry).

Adversarial Robustness
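
The snippet reports results without stating the regularizer; one plausible reading of "gradient diversity" (a hypothetical sketch, not necessarily GradDiv's exact formulation) penalizes the mean pairwise cosine similarity among gradients drawn from the randomized network:

```python
import torch
import torch.nn.functional as F

def gradient_diversity_penalty(grads):
    """Mean pairwise cosine similarity among a list of gradient tensors
    (e.g., input gradients from several draws of a randomized network).
    Adding this penalty to the training loss pushes gradients apart."""
    g = torch.stack([gi.flatten() for gi in grads])   # shape (m, d), m >= 2
    g = F.normalize(g, dim=1)                         # unit-length rows
    sim = g @ g.t()                                   # cosine similarity matrix
    m = g.size(0)
    off_diag = sim.sum() - sim.diagonal().sum()       # exclude self-similarity
    return off_diag / (m * (m - 1))
```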

Loss Landscape Matters: Training Certifiably Robust Models with Favorable Loss Landscape

no code implementations • 1 Jan 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee

Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, and thus the tightness of the upper bound is an important factor in building certifiably robust models.
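
In symbols, with perturbation budget $\epsilon$ and loss $\ell$, this is the standard certified-training formulation (not specific to this paper): for every $\theta$ the bound $\bar{\ell}$ dominates the inner maximum, so minimizing the bound controls the robust objective.

```latex
\max_{\|\delta\|\le\epsilon} \ell\big(f_\theta(x+\delta),\, y\big)
\;\le\; \bar{\ell}_\theta(x, y; \epsilon)
\quad\Longrightarrow\quad
\min_{\theta}\, \mathbb{E}_{(x,y)}\Big[\max_{\|\delta\|\le\epsilon}
  \ell\big(f_\theta(x+\delta),\, y\big)\Big]
\;\le\;
\min_{\theta}\, \mathbb{E}_{(x,y)}\big[\bar{\ell}_\theta(x, y; \epsilon)\big].
```

The tighter $\bar{\ell}$ is, the closer certifiable training comes to true robust training, which is the tightness factor the abstract refers to.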

Lipschitz-Certifiable Training with a Tight Outer Bound

1 code implementation • NeurIPS 2020 • Sungyoon Lee, Jaewook Lee, Saerom Park

Our certifiable training algorithm provides a tight propagated outer bound by introducing box constraint propagation (BCP), and it efficiently computes the worst-case logit over the outer bound.
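
For context, here is the classic interval (box) propagation step through an affine layer; this sketch shows only the standard box step, not the paper's exact BCP update, which the abstract does not detail:

```python
import numpy as np

def box_propagate_affine(W, b, lower, upper):
    """Standard interval (box) propagation through an affine layer y = Wx + b.
    Given elementwise input bounds lower <= x <= upper, returns elementwise
    bounds on y. This is the textbook outer-bound step that methods like BCP
    aim to tighten."""
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius    # worst-case spread per output unit
    return out_center - out_radius, out_center + out_radius
```

The worst-case logit is then read off from the bounds obtained at the final (logit) layer.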
