no code implementations • 7 May 2024 • Yijiang Pang, Shuyang Yu, Bao Hoang, Jiayu Zhou
To tackle this challenge, we propose AdamG (Adam with the golden step size), a novel parameter-free optimizer designed to adapt automatically to diverse optimization problems without manual tuning.
no code implementations • 2 Feb 2024 • Yijiang Pang, Jiayu Zhou
These theoretical properties also motivate a faster and more stable variant, Accelerated S2P (AS2P), which exploits our new convergence properties to better capture the training dynamics of deep models.
no code implementations • 2 Feb 2024 • Yijiang Pang, Bao Hoang, Jiayu Zhou
Specifically, in the context of the distributional robustness of CLIP, we propose to leverage natural language inputs to debias the image feature representations, to improve worst-case performance on sub-populations.
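One common way to "debias with language" is to remove a text-derived bias direction from the image embeddings by orthogonal projection. The sketch below illustrates that idea with random stand-in vectors; the names (`image_feats`, `bias_direction`) and the projection step are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def debias_features(image_feats: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Project each image feature onto the subspace orthogonal to the bias direction."""
    d = bias_direction / np.linalg.norm(bias_direction)
    # Subtract each feature's component along the (unit-norm) bias direction.
    return image_feats - np.outer(image_feats @ d, d)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))   # stand-in for CLIP image embeddings
bias = rng.normal(size=8)         # stand-in for a text-derived bias vector
clean = debias_features(feats, bias)
```

After the projection, every feature is orthogonal to the bias direction, so downstream similarity scores no longer vary along it.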
no code implementations • 11 Jul 2022 • Yijiang Pang, Boyang Liu, Jiayu Zhou
In this paper, we show a surprising fact: contrastive pre-training has an interesting yet implicit connection with robustness. This natural robustness in the pre-trained representation enables us to design RUSH, a powerful algorithm against adversarial attacks that combines standard contrastive pre-training with randomized smoothing.
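Randomized smoothing, the second ingredient named above, classifies an input by majority vote of a base classifier over Gaussian-perturbed copies of it. The toy `base_classifier` below is a placeholder assumption, not the RUSH model; only the smoothing mechanism itself is the point.

```python
import numpy as np

def base_classifier(x: np.ndarray) -> int:
    # Toy binary classifier: predicts 1 iff the feature sum is positive.
    return int(x.sum() > 0)

def smoothed_classify(x: np.ndarray, sigma: float = 0.5,
                      n: int = 1000, seed: int = 0) -> int:
    """Majority vote of the base classifier over n Gaussian-noised copies of x."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n,) + x.shape)
    votes = [base_classifier(x + eps) for eps in noise]
    return int(np.mean(votes) > 0.5)

x_pos = np.full(8, 0.2)    # base prediction 1, kept under smoothing
x_neg = np.full(8, -0.2)   # base prediction 0, kept under smoothing
```

Because the vote averages out small input perturbations, the smoothed classifier's prediction is provably stable within an L2 ball whose radius grows with the vote margin and the noise level sigma.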
no code implementations • 10 Feb 2020 • Qifei Yu, Zhexin Shen, Yijiang Pang, Rui Liu
Because a team comprises heterogeneous robots with resilient capabilities, it is challenging to execute a task with an optimal balance between reasonable task allocation and maximum utilization of each robot's capability.