Search Results for author: Albert No

Found 9 papers, 6 papers with code

Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model

no code implementations • 7 May 2024 • Joo Young Choi, Jaesung R. Park, Inkyu Park, Jaewoong Cho, Albert No, Ernest K. Ryu

Current state-of-the-art diffusion models employ U-Net architectures containing convolutional and query-key-value (qkv) self-attention layers.

Image Generation
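
As a rough illustration of what "drop-in LoRA conditioning" on a qkv projection can look like, here is a minimal PyTorch sketch; the `LoRALinear` class, rank, and scaling are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer (e.g., a qkv projection) with a
    trainable low-rank update: y = W x + (alpha / r) * B A x."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Illustrative usage: swap an attention block's qkv projection in place.
# attn.qkv = LoRALinear(attn.qkv, rank=4)
```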

Improved Communication-Privacy Trade-offs in $L_2$ Mean Estimation under Streaming Differential Privacy

no code implementations • 2 May 2024 • Wei-Ning Chen, Berivan Isik, Peter Kairouz, Albert No, Sewoong Oh, Zheng Xu

We study $L_2$ mean estimation under central differential privacy and communication constraints, and address two key challenges. First, existing mean estimation schemes that simultaneously handle both constraints are usually optimized for $L_\infty$ geometry and rely on random rotation or Kashin's representation to adapt to $L_2$ geometry, resulting in suboptimal leading constants in mean square errors (MSEs). Second, schemes achieving order-optimal communication-privacy trade-offs do not extend seamlessly to streaming differential privacy (DP) settings (e.g., tree aggregation or matrix factorization), rendering them incompatible with DP-FTRL type optimizers.
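
For context, a minimal sketch of the standard central-DP baseline for $L_2$ mean estimation (clip to an $L_2$ ball, average, add Gaussian noise); the function name and parameters are illustrative, and the paper's communication-constrained streaming schemes go well beyond this.

```python
import numpy as np

def dp_mean_l2(xs: np.ndarray, clip: float, noise_multiplier: float,
               rng: np.random.Generator) -> np.ndarray:
    """Central-DP estimate of the mean of n vectors in L2 geometry:
    clip each vector to L2 norm <= clip, average, then add per-coordinate
    Gaussian noise with std = noise_multiplier * clip / n."""
    n, d = xs.shape
    norms = np.linalg.norm(xs, axis=1, keepdims=True)
    clipped = xs * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    mean = clipped.mean(axis=0)
    sigma = noise_multiplier * clip / n   # L2 sensitivity of the mean is clip / n
    return mean + rng.normal(0.0, sigma, size=d)

# Illustrative usage:
# est = dp_mean_l2(np.random.randn(1000, 16), clip=1.0,
#                  noise_multiplier=1.0, rng=np.random.default_rng(0))
```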

Fully Quantized Always-on Face Detector Considering Mobile Image Sensors

no code implementations • 2 Nov 2023 • Haechang Lee, Wongi Jeong, Dongil Ryu, Hyunwoo Je, Albert No, Kijeong Kim, Se Young Chun

In this study, we aim to bridge the gap by exploring extremely low-bit lightweight face detectors, focusing on the always-on face detection scenario for mobile image sensor applications.

Face Detection
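
As a rough sketch of the building block behind extremely low-bit models, here is symmetric uniform fake quantization in PyTorch; the helper and its defaults are illustrative assumptions, not the detector's actual quantization scheme.

```python
import torch

def fake_quantize(w: torch.Tensor, bits: int = 2) -> torch.Tensor:
    """Symmetric uniform quantize-dequantize of a weight tensor to
    `bits` bits; e.g. bits=2 yields levels {-1, 0, 1} times the scale."""
    qmax = 2 ** (bits - 1) - 1            # 1 for 2-bit, 7 for 4-bit
    scale = w.abs().max() / max(qmax, 1)  # per-tensor scale
    q = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return q * scale

# Illustrative usage: quantize a conv layer's weights in place.
# conv.weight.data = fake_quantize(conv.weight.data, bits=2)
```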

PyNET-QxQ: An Efficient PyNET Variant for QxQ Bayer Pattern Demosaicing in CMOS Image Sensors

1 code implementation • 8 Mar 2022 • Minhyeok Cho, Haechang Lee, Hyunwoo Je, Kijeong Kim, Dongil Ryu, Albert No

Additionally, modern mobile cameras employ non-Bayer color filter arrays (CFAs) such as Quad Bayer, Nona Bayer, and QxQ Bayer to enhance image quality, yet most existing deep learning-based ISP (or demosaicing) models focus primarily on standard Bayer CFAs.

Demosaicking • Knowledge Distillation
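
To make the CFA terminology concrete, here is a small NumPy sketch of how Quad Bayer and QxQ Bayer generalize the standard RGGB pattern by repeating each color site in a block; the helper is illustrative, not from PyNET-QxQ.

```python
import numpy as np

def mosaic_pattern(h: int, w: int, block: int = 2) -> np.ndarray:
    """Return an (h, w) array of channel indices (0=R, 1=G, 2=B) for a
    Bayer-style CFA whose unit cell repeats each color in block x block
    tiles: block=1 is standard Bayer, block=2 Quad Bayer, block=4 QxQ."""
    cell = np.array([[0, 1],
                     [1, 2]])                         # RGGB unit cell
    cell = np.kron(cell, np.ones((block, block), dtype=int))
    reps = (h // cell.shape[0] + 1, w // cell.shape[1] + 1)
    return np.tile(cell, reps)[:h, :w]

# Illustrative usage: mosaic an RGB image (h, w, 3) into a raw frame.
# pat = mosaic_pattern(*img.shape[:2], block=4)       # QxQ Bayer
# raw = np.take_along_axis(img, pat[..., None], axis=2)[..., 0]
```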

Neural Tangent Kernel Analysis of Deep Narrow Neural Networks

1 code implementation • 7 Feb 2022 • Jongmin Lee, Joo Young Choi, Ernest K. Ryu, Albert No

The tremendous recent progress in analyzing the training dynamics of overparameterized neural networks has primarily focused on wide networks and therefore does not sufficiently address the role of depth in deep learning.
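
For reference, the object under study is the neural tangent kernel $\Theta(x, x') = \langle \nabla_\theta f_\theta(x), \nabla_\theta f_\theta(x') \rangle$; below is a minimal, illustrative PyTorch sketch of its empirical version for a scalar-output network, not the paper's code.

```python
import torch

def empirical_ntk(f: torch.nn.Module, x1: torch.Tensor,
                  x2: torch.Tensor) -> torch.Tensor:
    """Empirical NTK entry: the inner product of the parameter
    gradients of a scalar-output network f at inputs x1 and x2."""
    params = [p for p in f.parameters() if p.requires_grad]
    g1 = torch.autograd.grad(f(x1).squeeze(), params)
    g2 = torch.autograd.grad(f(x2).squeeze(), params)
    return sum((a * b).sum() for a, b in zip(g1, g2))

# Illustrative usage with a small (narrow) MLP:
# net = torch.nn.Sequential(torch.nn.Linear(2, 2), torch.nn.ReLU(),
#                           torch.nn.Linear(2, 1))
# theta = empirical_ntk(net, torch.randn(1, 2), torch.randn(1, 2))
```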

Prune Your Model Before Distill It

1 code implementation • 30 Sep 2021 • Jinhyuk Park, Albert No

Recent results suggest that a student-friendly teacher is better suited for distillation, since it provides more transferable knowledge.

Knowledge Distillation • Neural Network Compression
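
A minimal sketch of the pipeline the title suggests, i.e. magnitude-prune the teacher and then distill with a soft-label KL loss; the pruning amount, temperature, and helper names are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def prune_teacher(teacher: torch.nn.Module, amount: float = 0.5) -> torch.nn.Module:
    """Global magnitude pruning over all linear/conv weights of the teacher."""
    params = [(m, "weight") for m in teacher.modules()
              if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=amount)
    return teacher

def distill_loss(student_logits, teacher_logits, T: float = 4.0) -> torch.Tensor:
    """Soft-label KD loss: KL(teacher_T || student_T), scaled by T^2."""
    p_t = F.softmax(teacher_logits / T, dim=-1)
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T
```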

An Information-Theoretic Justification for Model Pruning

1 code implementation • 16 Feb 2021 • Berivan Isik, Tsachy Weissman, Albert No

We study the neural network (NN) compression problem, viewing the tension between the compression ratio and NN performance through the lens of rate-distortion theory.

Data Compression • Model Compression
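
For reference, the classical rate-distortion function underlying this lens, with the network weights playing the role of the source $W$; the particular distortion measure $d$ used in the paper is not reproduced here.

```latex
% Rate-distortion function: the fewest bits per symbol needed to
% describe W within expected distortion D.
R(D) = \min_{p(\hat{w} \mid w)\,:\;\mathbb{E}[d(W, \hat{W})] \le D} I(W; \hat{W})
```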

WGAN with an Infinitely Wide Generator Has No Spurious Stationary Points

1 code implementation • 15 Feb 2021 • Albert No, Taeho Yoon, Sehyun Kwon, Ernest K. Ryu

Generative adversarial networks (GANs) are a widely used class of deep generative models, but their minimax training dynamics are not well understood.
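
For reference, the WGAN minimax objective whose training dynamics the paper analyzes, stated in standard form (the critic $f$ is constrained to be 1-Lipschitz):

```latex
% WGAN minimax objective: generator G vs. a 1-Lipschitz critic f.
\min_{G} \max_{\|f\|_{L} \le 1} \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}[f(x)]
  - \mathbb{E}_{z \sim p_{z}}[f(G(z))]
```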
