1 code implementation • 25 Feb 2024 • Xirui Li, Ruochen Wang, Minhao Cheng, Tianyi Zhou, Cho-Jui Hsieh
DrAttack includes three key components: (a) 'Decomposition' of the original prompt into sub-prompts; (b) 'Reconstruction' of these sub-prompts, performed implicitly via in-context learning with a semantically similar but harmless reassembly demonstration; and (c) a 'Synonym Search' over the sub-prompts, which seeks synonyms that preserve the original intent while jailbreaking LLMs.
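The attack loop can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `query_llm`, the comma-based `decompose`, and the refusal check are all hypothetical placeholders.

```python
# Minimal sketch of a DrAttack-style decompose-and-reconstruct loop.
# `query_llm` is a hypothetical stand-in for any chat-completion client;
# the decomposition here is a naive phrase split, not the paper's parser.
from itertools import product

def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def decompose(prompt: str) -> list[str]:
    # Naive placeholder: split on commas.
    return [p.strip() for p in prompt.split(",")]

def build_reconstruction_prompt(sub_prompts: list[str], demo: str) -> str:
    # The in-context demo shows the model how to reassemble sub-prompts,
    # using a semantically similar but harmless example.
    numbered = "\n".join(f"[{i}] {s}" for i, s in enumerate(sub_prompts))
    return f"{demo}\nNow combine these parts into one request:\n{numbered}"

def synonym_search(sub_prompts: list[str], synonyms: dict[str, list[str]], demo: str):
    # Enumerate synonym substitutions for each sub-prompt and query the
    # model until a candidate elicits a non-refusal response.
    candidates = [synonyms.get(s, [s]) for s in sub_prompts]
    for combo in product(*candidates):
        response = query_llm(build_reconstruction_prompt(list(combo), demo))
        if not response.lower().startswith("i cannot"):  # naive refusal check
            return combo, response
    return None, None
```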
1 code implementation • 20 Feb 2024 • Sen Li, Ruochen Wang, Cho-Jui Hsieh, Minhao Cheng, Tianyi Zhou
Moreover, MuLan adopts a vision-language model (VLM) to provide feedback on the image generated for each sub-task, and to direct the diffusion model to re-generate the image if it violates the original prompt.
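A rough sketch of this feedback loop is given below; `generate_subimage` and `vlm_check` are hypothetical stand-ins for the diffusion-model call and the VLM consistency check, and the retry logic is a simplification of the paper's procedure.

```python
# Sketch of a MuLan-style generate-check-regenerate loop.
def progressive_generation(sub_prompts, generate_subimage, vlm_check, max_retries=3):
    image = None
    for sub_prompt in sub_prompts:
        for _ in range(max_retries):
            candidate = generate_subimage(sub_prompt, condition=image)
            ok, feedback = vlm_check(candidate, sub_prompt)
            if ok:  # VLM confirms this sub-task is satisfied
                image = candidate
                break
            # otherwise re-generate, optionally conditioning on the feedback
        else:
            image = candidate  # keep the last attempt if all retries fail
    return image
```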
2 code implementations • 19 Nov 2022 • Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh
The resulting algorithm sets a new SOTA on ImageNet-1K: we can scale up to 50 IPCs (Images Per Class) on ImageNet-1K on a single GPU (all previous methods can only scale to 2 IPCs on ImageNet-1K), achieving the best accuracy (only a 5.9% accuracy drop against full-dataset training) while utilizing only 4.2% of the number of data points, an 18.2% absolute gain over the prior SOTA.
1 code implementation • 27 Sep 2022 • Ruochen Wang, Yuanhao Xiong, Minhao Cheng, Cho-Jui Hsieh
Efficient and automated design of optimizers plays a crucial role in full-stack AutoML systems.
2 code implementations • 20 Jul 2022 • Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh
Dataset Condensation is an emerging technique that aims to learn a tiny dataset capturing the rich information encoded in the original dataset.
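One common formulation of the technique is gradient matching: synthetic images are optimized so that the model gradients they induce match those computed on real data. The PyTorch step below is a generic simplified sketch, not any specific paper's code; `x_syn` is assumed to be a leaf tensor with `requires_grad=True`, registered with the optimizer `syn_opt`.

```python
# One gradient-matching condensation step (simplified sketch).
import torch
import torch.nn.functional as F

def condensation_step(model, x_real, y_real, x_syn, y_syn, syn_opt):
    params = [p for p in model.parameters() if p.requires_grad]
    # Gradient on real data: treated as a constant target.
    g_real = torch.autograd.grad(
        F.cross_entropy(model(x_real), y_real), params)
    # Gradient on synthetic data: keep the graph so loss can backprop to x_syn.
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(x_syn), y_syn), params, create_graph=True)
    # Match gradients via (1 - cosine similarity), summed over layers.
    loss = sum(1 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0)
               for gr, gs in zip(g_real, g_syn))
    syn_opt.zero_grad()
    loss.backward()
    syn_opt.step()
    return loss.item()
```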
1 code implementation • CVPR 2023 • Yuanhao Xiong, Ruochen Wang, Minhao Cheng, Felix Yu, Cho-Jui Hsieh
Federated learning (FL) has recently attracted increasing attention from academia and industry, with the ultimate goal of achieving collaborative training under privacy and communication constraints.
1 code implementation • ICLR 2022 • Shoukang Hu, Ruochen Wang, Lanqing Hong, Zhenguo Li, Cho-Jui Hsieh, Jiashi Feng
Efficient performance estimation of architectures drawn from large search spaces is essential to Neural Architecture Search.
no code implementations • ICLR 2022 • Yuanhao Xiong, Li-Cheng Lan, Xiangning Chen, Ruochen Wang, Cho-Jui Hsieh
By constructing a directed graph for the underlying neural network of the target problem, GNS encodes the current training dynamics with a graph message-passing network and trains an agent via reinforcement learning to control the learning rate accordingly.
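The interaction pattern amounts to an observe-then-scale loop. The sketch below stubs out the GNN encoder and the RL policy (all four callables are hypothetical signatures) and shows only how the agent's output would adjust the optimizer's learning rate:

```python
# Skeleton of an RL-driven learning-rate controller (hypothetical API).
def lr_control_loop(train_step, encode_state, policy, optimizer, steps):
    # train_step runs one optimization step; encode_state summarizes the
    # network's dynamics as a graph embedding via message passing;
    # policy is the trained RL agent mapping states to an LR multiplier.
    for _ in range(steps):
        train_step()
        state = encode_state()   # graph message passing over the network
        scale = policy(state)    # agent's action: a learning-rate multiplier
        for group in optimizer.param_groups:
            group["lr"] *= scale
```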
no code implementations • ICCV 2021 • Ruochen Wang, Xiangning Chen, Minhao Cheng, Xiaocheng Tang, Cho-Jui Hsieh
Predictor-based algorithms have achieved remarkable performance on Neural Architecture Search (NAS) tasks.
1 code implementation • ICLR 2021 • Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh
Differentiable Neural Architecture Search is one of the most popular Neural Architecture Search (NAS) methods owing to its search efficiency and simplicity, which it achieves by jointly optimizing the model weights and architecture parameters in a weight-sharing supernet via gradient-based algorithms.
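The core weight-sharing construct is a mixed operation whose output blends candidate ops with softmaxed architecture parameters. The sketch below is a generic simplification (toy candidate set, no train/validation bilevel split), not the paper's implementation:

```python
# Minimal differentiable-NAS mixed operation: architecture parameters
# alpha weight candidate ops via softmax; both alpha and the op weights
# receive gradients from the task loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.MaxPool2d(3, stride=1, padding=1),
        ])
        # One architecture parameter per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```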
1 code implementation • ICLR 2021 • Xiangning Chen, Ruochen Wang, Minhao Cheng, Xiaocheng Tang, Cho-Jui Hsieh
This paper proposes a novel differentiable architecture search method by formulating it as a distribution learning problem.
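In the distribution-learning view, the operation-mixing weights are treated as samples from a learnable distribution rather than point estimates. Below is a minimal sketch assuming a Dirichlet parameterization (PyTorch's `Dirichlet` supports reparameterized sampling), again a simplification rather than the paper's code:

```python
# Mixed operation whose mixing weights are sampled from a learnable
# Dirichlet distribution instead of being fixed softmaxed parameters.
import torch
import torch.nn as nn

class DirichletMixedOp(nn.Module):
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        # Log-concentration parameters of the Dirichlet, one per op.
        self.log_conc = nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x):
        dist = torch.distributions.Dirichlet(self.log_conc.exp())
        weights = dist.rsample()  # pathwise gradients w.r.t. concentration
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```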