no code implementations • 29 Mar 2024 • Luozhou Wang, Guibao Shen, Yixun Liang, Xin Tao, Pengfei Wan, Di Zhang, Yijun Li, Yingcong Chen
In this research, we present a novel approach to motion customization in video generation, addressing the gap left by the limited exploration of motion representation within video generative models.
no code implementations • 28 Feb 2024 • Yiyan Huang, Cheuk Hang Leung, Siyi Wang, Yijun Li, Qi Wu
The growing demand for personalized decision-making has led to a surge of interest in estimating the Conditional Average Treatment Effect (CATE).
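For context, the quantity named here has a standard textbook definition in the potential-outcomes framework (this formula is background, not taken from the abstract): with potential outcomes $Y(1)$, $Y(0)$ and covariates $X$,

```latex
% Conditional Average Treatment Effect (CATE), potential-outcomes notation:
% Y(1), Y(0) are outcomes with and without treatment; X are covariates.
\tau(x) = \mathbb{E}\left[\, Y(1) - Y(0) \mid X = x \,\right]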
1 code implementation • 16 Dec 2023 • Yijun Li, Cheuk Hang Leung, Xiangqian Sun, Chaoqun Wang, Yiyan Huang, Xing Yan, Qi Wu, Dongdong Wang, Zhixiang Huang
Consumer credit services offered by e-commerce platforms provide customers with convenient loan access during shopping and have the potential to stimulate sales.
no code implementations • 10 Dec 2023 • Zhipeng Bao, Yijun Li, Krishna Kumar Singh, Yu-Xiong Wang, Martial Hebert
Despite recent significant strides achieved by diffusion-based Text-to-Image (T2I) models, current systems are still less capable of ensuring decent compositional generation aligned with text prompts, particularly for multi-object generation.
no code implementations • 22 Sep 2023 • Yijun Li, Mengzhuo Guo, Miłosz Kadziński, Qingpeng Zhang
This study presents novel preference learning approaches to multiple criteria sorting problems in the presence of temporal criteria.
no code implementations • 26 Aug 2023 • Chaoqun Wang, Yijun Li, Xiangqian Sun, Qi Wu, Dongdong Wang, Zhixiang Huang
The tensorized LSTM assigns each variable a unique hidden state, which together form a matrix $\mathbf{H}_t$, whereas the standard LSTM models all the variables with a shared hidden-state vector $\mathbf{h}_t$.
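As a minimal illustration of the two hidden-state layouts contrasted above (the sizes are toy values of my own choosing, not from the paper):

```python
import numpy as np

# Hypothetical sizes: D input variables, hidden width k.
D, k = 5, 8

# Standard LSTM: one hidden-state vector shared by all D variables.
shared_state = np.zeros(k)

# Tensorized LSTM: each variable keeps its own hidden-state row,
# and the rows stack into a (D, k) matrix.
per_variable_states = np.zeros((D, k))

print(shared_state.shape)        # (8,)
print(per_variable_states.shape)  # (5, 8)
```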
no code implementations • 4 Jul 2023 • Zhen Zhu, Yijun Li, Weijie Lyu, Krishna Kumar Singh, Zhixin Shu, Soeren Pirk, Derek Hoiem
We investigate how to generate multimodal image outputs, such as RGB, depth, and surface normals, with a single generative model.
1 code implementation • 26 Jun 2023 • Luozhou Wang, Guibao Shen, Wenhang Ge, Guangyong Chen, Yijun Li, Ying-Cong Chen
The "Decompose" phase separates conditions based on pair relationships, computing the result individually for each pair.
1 code implementation • 15 Jun 2023 • Yijun Li, Cheuk Hang Leung, Qi Wu
Multivariate sequential data collected in practice often exhibit temporal irregularities, including nonuniform time intervals and component misalignment.
no code implementations • 10 Mar 2023 • Ziqian Wu, Xingzhe He, Yijun Li, Cheng Yang, Rui Liu, Shiying Xiong, Bo Zhu
We present a lightweight neural PDE representation to discover the hidden structure and predict the solutions of different nonlinear PDEs.
2 code implementations • 6 Feb 2023 • Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, Jun-Yan Zhu
However, it is still challenging to directly apply these models to edit real images, for two reasons.
Ranked #13 on Text-based Image Editing on PIE-Bench
no code implementations • ICCV 2023 • Manuel Ladron De Guevara, Jose Echevarria, Yijun Li, Yannick Hold-Geoffroy, Cameron Smith, Daichi Ito
We present a novel method for automatic vectorized avatar generation from a single portrait image.
no code implementations • 21 Nov 2022 • Lana X. Garmire, Yijun Li, Qianhui Huang, Chuan Xu, Sarah Teichmann, Naftali Kaminski, Matteo Pellegrini, Quan Nguyen, Andrew E. Teschendorff
Deciphering cell type heterogeneity is crucial for systematically understanding tissue homeostasis and its dysregulation in diseases.
no code implementations • 4 Nov 2022 • Yuheng Li, Yijun Li, Jingwan Lu, Eli Shechtman, Yong Jae Lee, Krishna Kumar Singh
We introduce a new method for diverse foreground generation with explicit control over various factors.
no code implementations • 24 Aug 2022 • Yuchen Liu, Zhixin Shu, Yijun Li, Zhe Lin, Richard Zhang, S. Y. Kung
While concatenating GAN inversion with a 3D-aware, noise-to-image GAN is a straightforward solution, it is inefficient and may lead to a noticeable drop in editing quality.
no code implementations • 22 Jun 2022 • Marian Lupascu, Ryan Murdock, Ionut Mironică, Yijun Li
In this work, we propose a complete framework that generates visual art.
1 code implementation • CVPR 2022 • Gaurav Parmar, Yijun Li, Jingwan Lu, Richard Zhang, Jun-Yan Zhu, Krishna Kumar Singh
We propose a new method to invert and edit such complex images in the latent space of GANs, such as StyleGAN2.
no code implementations • 18 Mar 2022 • Yijun Li, Stefan Stanojevic, Lana X. Garmire
Spatial transcriptomics (ST) has advanced significantly in the last few years.
no code implementations • 18 Jan 2022 • Stefan Stanojevic, Yijun Li, Lana X. Garmire
Recently developed technologies to generate single-cell genomic data have made a revolutionary impact in the field of biology.
no code implementations • ICCV 2021 • Yuheng Li, Yijun Li, Jingwan Lu, Eli Shechtman, Yong Jae Lee, Krishna Kumar Singh
We propose a new approach for high resolution semantic image synthesis.
1 code implementation • 14 Sep 2021 • Bing He, Yao Xiao, Haodong Liang, Qianhui Huang, Yuheng Du, Yijun Li, David Garmire, Duxin Sun, Lana X. Garmire
Intercellular heterogeneity is a major obstacle to successful precision medicine.
2 code implementations • CVPR 2021 • Utkarsh Ojha, Yijun Li, Jingwan Lu, Alexei A. Efros, Yong Jae Lee, Eli Shechtman, Richard Zhang
Training generative models, such as GANs, on a target domain containing limited examples (e.g., 10) can easily result in overfitting.
Ranked #3 on 10-shot image generation on Babies
no code implementations • CVPR 2021 • Pei Wang, Yijun Li, Krishna Kumar Singh, Jingwan Lu, Nuno Vasconcelos
We introduce an inversion based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images from only a single training sample.
1 code implementation • CVPR 2021 • Pei Wang, Yijun Li, Nuno Vasconcelos
Extensive research in neural style transfer methods has shown that the correlation between features extracted by a pre-trained VGG network has a remarkable ability to capture the visual style of an image.
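A minimal sketch of the Gram-matrix style representation mentioned above, with a random array standing in for real VGG features (all shapes are hypothetical):

```python
import numpy as np

# Toy stand-in for a VGG feature map: C channels over an H x W grid.
C, H, W = 4, 6, 6
rng = np.random.default_rng(0)
features = rng.standard_normal((C, H, W))

# Flatten the spatial dims: each row holds one channel's responses.
F = features.reshape(C, H * W)

# Gram matrix: channel-wise feature correlations, the classic
# style representation used in neural style transfer.
G = F @ F.T / (H * W)

print(G.shape)  # (4, 4)
```

Because G discards spatial arrangement and keeps only channel correlations, it captures texture-like style statistics rather than image content.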
1 code implementation • CVPR 2021 • Yuchen Liu, Zhixin Shu, Yijun Li, Zhe Lin, Federico Perazzi, S. Y. Kung
We then propose a novel content-aware method to guide the processes of both pruning and distillation.
no code implementations • NeurIPS 2020 • Yijun Li, Richard Zhang, Jingwan Lu, Eli Shechtman
Few-shot image generation seeks to generate more data of a given domain, with only a few available training examples.
Ranked #4 on 10-shot image generation on Babies
1 code implementation • 12 Aug 2020 • Wenqing Chu, Wei-Chih Hung, Yi-Hsuan Tsai, Yu-Ting Chang, Yijun Li, Deng Cai, Ming-Hsuan Yang
Caricature is an artistic drawing created to abstract or exaggerate facial features of a person.
1 code implementation • ECCV 2020 • Hung-Yu Tseng, Matthew Fisher, Jingwan Lu, Yijun Li, Vladimir Kim, Ming-Hsuan Yang
People often create art by following an artistic workflow involving multiple stages that inform the overall design.
1 code implementation • CVPR 2020 • Huan Wang, Yijun Li, Yuehai Wang, Haoji Hu, Ming-Hsuan Yang
In this work, we present a new knowledge distillation method (named Collaborative Distillation) for encoder-decoder based neural style transfer to reduce the convolutional filters.
no code implementations • 25 Dec 2019 • Yijun Li, Lu Jiang, Ming-Hsuan Yang
Image extrapolation aims at expanding the narrow field of view of a given image patch.
1 code implementation • CVPR 2019 • Yijun Li, Chen Fang, Aaron Hertzmann, Eli Shechtman, Ming-Hsuan Yang
We propose a high-quality photo-to-pencil translation method with fine-grained control over the drawing style.
1 code implementation • ECCV 2018 • Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, Ming-Hsuan Yang
Existing video prediction methods mainly rely on observing multiple historical frames or focus on predicting only the next frame.
12 code implementations • ECCV 2018 • Yijun Li, Ming-Yu Liu, Xueting Li, Ming-Hsuan Yang, Jan Kautz
Photorealistic image stylization concerns transferring the style of a reference photo to a content photo, under the constraint that the stylized photo remain photorealistic.
no code implementations • 11 Oct 2017 • Yijun Li, Jia-Bin Huang, Narendra Ahuja, Ming-Hsuan Yang
In contrast to existing methods that consider only the guidance image, the proposed algorithm can selectively transfer salient structures that are consistent with both guidance and target images.
15 code implementations • NeurIPS 2017 • Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, Ming-Hsuan Yang
The whitening and coloring transforms reflect a direct matching of the feature covariance of the content image to that of a given style image, which shares a similar spirit with the Gram-matrix-based cost optimization in neural style transfer.
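The covariance matching described above can be sketched in a few lines of NumPy; this is a generic whitening-coloring transform on synthetic features (shapes and data are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
C, N = 4, 1000  # channels, spatial positions

Fc = rng.standard_normal((C, N))               # "content" features
Fs = 2.0 * rng.standard_normal((C, N)) + 1.0   # "style" features

def center(F):
    mu = F.mean(axis=1, keepdims=True)
    return F - mu, mu

Fc_c, _ = center(Fc)
Fs_c, mu_s = center(Fs)

# Whitening: remove the content features' correlations.
dc, Ec = np.linalg.eigh(Fc_c @ Fc_c.T / (N - 1))
F_white = Ec @ np.diag(dc ** -0.5) @ Ec.T @ Fc_c

# Coloring: impose the style features' covariance, then restore the mean.
ds, Es = np.linalg.eigh(Fs_c @ Fs_c.T / (N - 1))
F_cs = Es @ np.diag(ds ** 0.5) @ Es.T @ F_white + mu_s

# The transformed features now carry the style covariance exactly.
F_cs_c, _ = center(F_cs)
cov_out = F_cs_c @ F_cs_c.T / (N - 1)
cov_style = Fs_c @ Fs_c.T / (N - 1)
print(np.allclose(cov_out, cov_style))  # True
```

Whitening maps the content features to identity covariance, so the subsequent coloring step reproduces the style covariance by construction.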
2 code implementations • CVPR 2017 • Yijun Li, Sifei Liu, Jimei Yang, Ming-Hsuan Yang
In this paper, we propose an effective face completion algorithm using a deep generative model.
no code implementations • CVPR 2017 • Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, Ming-Hsuan Yang
Recent progress on deep discriminative and generative modeling has shown promising results on texture synthesis.
no code implementations • 17 Jun 2015 • Wei Liu, Yijun Li, Xiaogang Chen, Jie Yang, Qiang Wu, Jingyi Yu
A popular solution is to upsample the noisy low-resolution depth map under the guidance of the companion high-resolution color image.