no code implementations • ECCV 2020 • Shuchen Weng, Wenbo Li, Dawei Li, Hongxia Jin, Boxin Shi
We study conditional image repainting, where a model is trained to generate visual content conditioned on user inputs and to composite the generated content seamlessly onto a user-provided image while preserving the semantics of the user inputs.
no code implementations • 19 Feb 2024 • Haofeng Zhong, Yuchen Hong, Shuchen Weng, Jinxiu Liang, Boxin Shi
This paper studies the problem of language-guided reflection separation, which aims at addressing the ill-posed reflection separation problem by introducing language descriptions to provide layer content.
no code implementations • 19 Feb 2024 • Yean Cheng, Renjie Wan, Shuchen Weng, Chengxuan Zhu, Yakun Chang, Boxin Shi
Though Neural Radiance Fields (NeRF) can produce colorful 3D representations of the world from a set of 2D images, this ability is lost when only monochromatic images are provided.
no code implementations • ICCV 2023 • Shuchen Weng, Peixuan Zhang, Zheng Chang, Xinlong Wang, Si Li, Boxin Shi
In this work, we propose the Affective Image Filter (AIF), a novel model that understands visually abstract emotions in text and reflects them in visually concrete images with appropriate colors and textures.
no code implementations • CVPR 2023 • Zheng Chang, Shuchen Weng, Peixuan Zhang, Yu Li, Si Li, Boxin Shi
Language-based colorization produces plausible colors consistent with the language description provided by the user.
1 code implementation • ECCV 2022 • Shuchen Weng, Jimeng Sun, Yu Li, Si Li, Boxin Shi
Automatic image colorization is an ill-posed problem with multi-modal uncertainty, and there remain two main challenges with previous methods: incorrect semantic colors and under-saturation.
no code implementations • CVPR 2022 • Jimeng Sun, Shuchen Weng, Zheng Chang, Si Li, Boxin Shi
Conditional image repainting (CIR) is an advanced image editing task, which requires the model to generate visual content in user-specified regions conditioned on multiple cross-modality constraints, and composite the visual content with the provided background seamlessly.
no code implementations • CVPR 2020 • Shuchen Weng, Wenbo Li, Dawei Li, Hongxia Jin, Boxin Shi
In this paper, we explore synthesizing person images with multiple conditions for various backgrounds.
no code implementations • IEEE Access (Volume 8), 2020 • Yanbo Fan, Shuchen Weng, Yong Zhang, Boxin Shi, Yi Zhang
To facilitate end-to-end training, we further develop a scenario context information extraction branch to extract context information from raw RGB video directly.
Ranked #83 on Skeleton Based Action Recognition on NTU RGB+D
no code implementations • 20 May 2018 • Shuchen Weng, Wenbo Li, Yi Zhang, Siwei Lyu
Inspired by the dual-stream hypothesis in neuroscience, we propose a novel dual-stream framework for modeling the interweaved spatiotemporal dependency, and develop a convolutional neural network within this framework that aims to achieve high adaptability and flexibility in STS configurations along various dimensions, i.e., sequential order, dependency range, and features.