Search Results for author: Byung-Kwan Lee

Found 8 papers, 8 papers with code

MoAI: Mixture of All Intelligence for Large Language and Vision Models

1 code implementation • 12 Mar 2024 • Byung-Kwan Lee, Beomchan Park, Chae Won Kim, Yong Man Ro

Therefore, we present a new LLVM, Mixture of All Intelligence (MoAI), which leverages auxiliary visual information obtained from the outputs of external segmentation, detection, SGG, and OCR models.

Scene Understanding • Visual Question Answering
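As a rough illustration of the idea in the abstract above, the sketch below verbalizes outputs of external segmentation, detection, scene graph generation (SGG), and OCR models into auxiliary text context for a large language and vision model (LLVM). It is not the MoAI implementation; the helper data and prompt format are illustrative placeholders.

```python
# A minimal sketch, not the MoAI implementation: verbalizing outputs of external
# segmentation, detection, SGG, and OCR models into auxiliary context for an LLVM.
# All data below are illustrative placeholders.

def verbalize_auxiliary_info(seg_labels, detections, sgg_triples, ocr_tokens):
    """Render external-model outputs as a text block the LLVM can attend to."""
    lines = [
        "Segmented regions: " + ", ".join(seg_labels),
        "Detected objects: " + "; ".join(f"{cls} at {box}" for cls, box in detections),
        "Relations: " + "; ".join(f"{s} {p} {o}" for s, p, o in sgg_triples),
        "Text in image: " + ", ".join(ocr_tokens),
    ]
    return "\n".join(lines)

aux = verbalize_auxiliary_info(
    seg_labels=["person", "dog", "grass"],
    detections=[("dog", (34, 80, 210, 240)), ("person", (5, 10, 120, 300))],
    sgg_triples=[("person", "holding", "leash")],
    ocr_tokens=["EXIT", "Gate 12"],
)
prompt = aux + "\nQuestion: What is the person doing?\nAnswer:"
print(prompt)  # this prompt, together with the image, would be passed to the LLVM
```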

CoLLaVO: Crayon Large Language and Vision mOdel

1 code implementation • 17 Feb 2024 • Byung-Kwan Lee, Beomchan Park, Chae Won Kim, Yong Man Ro

Our findings reveal that the image understanding capabilities of current VLMs are strongly correlated with their zero-shot performance on vision language (VL) tasks.

Large Language Model • Object +3

Causal Unsupervised Semantic Segmentation

1 code implementation • 11 Oct 2023 • Junho Kim, Byung-Kwan Lee, Yong Man Ro

Unsupervised semantic segmentation aims to achieve high-quality semantic grouping without human-labeled annotations.

Causal Inference • Segmentation +2
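For orientation only, the snippet below shows a generic way to obtain semantic groups without human labels: clustering dense per-pixel features with k-means. This is a common baseline, not the causal method proposed in the paper; the features are random placeholders standing in for a self-supervised backbone.

```python
# Generic unsupervised semantic grouping baseline (not the paper's causal method):
# cluster per-pixel features so that no human-labeled annotations are needed.
import numpy as np
from sklearn.cluster import KMeans

H, W, D = 32, 32, 64          # feature-map height, width, and channel dimension
K = 5                         # number of semantic groups to discover

features = np.random.randn(H, W, D).astype(np.float32)  # stand-in for backbone features
flat = features.reshape(-1, D)                           # (H*W, D): pixels as samples

labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(flat)
segmentation = labels.reshape(H, W)                      # pseudo semantic map with values in [0, K)
print(segmentation.shape, np.unique(segmentation))
```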

Mitigating Adversarial Vulnerability through Causal Parameter Estimation by Adversarial Double Machine Learning

1 code implementation • ICCV 2023 • Byung-Kwan Lee, Junho Kim, Yong Man Ro

Adversarial examples derived from deliberately crafted perturbations on visual inputs can easily harm the decision process of deep neural networks.

Adversarial Robustness
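To make the threat model concrete, the snippet below crafts a one-step FGSM perturbation (Goodfellow et al.) against a toy classifier; it only illustrates how a small, deliberately crafted input perturbation can alter a network's decision and is not the adversarial double machine learning estimator itself. The model, input, and label are placeholders.

```python
# Illustration of the threat model only (FGSM), not the paper's causal estimator:
# a small perturbation of the input in the loss-increasing direction can flip
# the classifier's prediction.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # clean input in [0, 1]
y = torch.tensor([3])                              # placeholder true label
eps = 0.03                                         # L-infinity perturbation budget

loss = loss_fn(model(x), y)
loss.backward()

# One-step perturbation in the direction that increases the loss.
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())  # may now disagree
```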

Demystifying Causal Features on Adversarial Examples and Causal Inoculation for Robust Network by Adversarial Instrumental Variable Regression

1 code implementation • CVPR 2023 • Junho Kim, Byung-Kwan Lee, Yong Man Ro

The origin of adversarial examples remains unexplained in the research community, and it continues to spark debate from various viewpoints despite comprehensive investigations.

Adversarial Robustness

Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck

1 code implementation • NeurIPS 2021 • Junho Kim, Byung-Kwan Lee, Yong Man Ro

Adversarial examples, generated by carefully crafted perturbations, have attracted considerable attention in the research community.

Adversarial Robustness

Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference

1 code implementation • 1 Jan 2021 • Byung-Kwan Lee, Youngjoon Yu, Yong Man Ro

Recent works have applied Bayesian Neural Networks (BNNs) to adversarial training and have shown improved adversarial robustness owing to the BNN's strength in stochastic gradient defense.

Adversarial Defense • Adversarial Robustness +3
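As a minimal sketch of why a BNN yields a stochastic gradient defense, the snippet below implements a variational linear layer with a mean-field Gaussian posterior: weights are re-sampled on every forward pass, so the gradients an attacker observes are noisy. This assumes a standard mean-field setup and is not the hierarchical variational inference proposed in the paper.

```python
# Minimal sketch assuming a mean-field Gaussian posterior per weight (not the
# paper's hierarchical variational inference). Fresh weight samples on every
# forward pass make the gradients seen by an attacker stochastic.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.b = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        w_sigma = F.softplus(self.w_rho)                     # positive std. dev.
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)  # reparameterized sample
        return F.linear(x, w, self.b)

layer = BayesianLinear(784, 10)
x = torch.rand(2, 784)
print(layer(x).shape)                       # (2, 10)
print(torch.allclose(layer(x), layer(x)))   # typically False: weights differ per pass
```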
