Search Results for author: Qilong Kou

Found 5 papers, 2 papers with code

DreamMat: High-quality PBR Material Generation with Geometry- and Light-aware Diffusion Models

no code implementations • 27 May 2024 • Yuqing Zhang, Yuan Liu, Zhiyu Xie, Lei Yang, Zhongyuan Liu, Mengzhou Yang, Runze Zhang, Qilong Kou, Cheng Lin, Wenping Wang, Xiaogang Jin

Existing approaches rely on a 2D diffusion model, which often contains unwanted baked-in shading effects and results in unrealistic rendering in downstream applications.

A Locality-based Neural Solver for Optical Motion Capture

1 code implementation • 1 Sep 2023 • Xiaoyu Pan, Bowen Zheng, Xinwei Jiang, Guanglong Xu, Xianli Gu, Jingxiang Li, Qilong Kou, He Wang, Tianjia Shao, Kun Zhou, Xiaogang Jin

Finally, we propose a training regime based on representation learning and data augmentation, training the model on masked data (a rough sketch follows the task tags below).

Tasks: Data Augmentation, Representation Learning
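As an illustration only (not the authors' implementation), the PyTorch sketch below shows one way such a masked-training step could look: marker positions are randomly zeroed to mimic occlusions, and the network is trained to solve the pose from the corrupted input. The model signature, the mask ratio, and the zero-fill corruption are all assumptions.

    # Hypothetical masked-marker training step (illustrative sketch, not the paper's code).
    import torch

    def masked_training_step(model, optimizer, markers, target_pose, mask_ratio=0.3):
        # markers: (batch, num_markers, 3) clean marker positions
        # keep a marker with probability (1 - mask_ratio)
        mask = torch.rand(markers.shape[:2], device=markers.device) > mask_ratio
        corrupted = markers * mask.unsqueeze(-1)      # zero out "occluded" markers
        pred_pose = model(corrupted, mask)            # assumed model also receives the validity mask
        loss = torch.nn.functional.mse_loss(pred_pose, target_pose)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()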

RSMT: Real-time Stylized Motion Transition for Characters

1 code implementation • 21 Jun 2023 • Xiangjun Tang, Linjun Wu, He Wang, Bo Hu, Xu Gong, Yuchen Liao, Songnan Li, Qilong Kou, Xiaogang Jin

Styled online in-between motion generation has important application scenarios in computer animation and games.

CTSN: Predicting Cloth Deformation for Skeleton-based Characters with a Two-stream Skinning Network

no code implementations • 30 May 2023 • Yudi Li, Min Tang, Yun Yang, Ruofeng Tong, Shuangcai Yang, Yao Li, Bailin An, Qilong Kou

We present a novel learning method to predict the cloth deformation for skeleton-based characters with a two-stream network.
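For intuition only, here is a hypothetical PyTorch sketch of a generic two-stream layout in which one stream predicts a coarse, skinning-like deformation from the skeleton pose and the other predicts a residual correction; the layer sizes and the residual-sum design are assumptions, not the CTSN architecture.

    # Generic two-stream sketch (an assumption for illustration, not the paper's network).
    import torch
    import torch.nn as nn

    class TwoStreamCloth(nn.Module):
        def __init__(self, pose_dim, num_verts, hidden=256):
            super().__init__()
            out_dim = num_verts * 3
            self.skinning_stream = nn.Sequential(
                nn.Linear(pose_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))
            self.residual_stream = nn.Sequential(
                nn.Linear(pose_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

        def forward(self, pose):
            # pose: (batch, pose_dim) skeleton joint parameters
            base = self.skinning_stream(pose)      # coarse, skinning-like deformation
            detail = self.residual_stream(pose)    # finer residual correction
            return (base + detail).view(pose.shape[0], -1, 3)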

Real-time Controllable Motion Transition for Characters

no code implementations • 5 May 2022 • Xiangjun Tang, He Wang, Bo Hu, Xu Gong, Ruifan Yi, Qilong Kou, Xiaogang Jin

Then, during generation, we design a transition model, which is essentially a strategy for sampling from the learned manifold conditioned on the target frame and the desired transition duration.
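As a hedged illustration of sampling from a learned motion manifold conditioned on a target frame and a transition duration, the sketch below assumes a hypothetical decoder(z, prev_pose, target_frame, progress) interface; it is not the paper's transition model.

    # Illustrative conditional sampling loop (assumed decoder interface, not the paper's method).
    import torch

    @torch.no_grad()
    def sample_transition(decoder, current_frame, target_frame, num_frames, latent_dim=64):
        # draw one latent code per batch element from the learned manifold's prior
        z = torch.randn(current_frame.shape[0], latent_dim, device=current_frame.device)
        frames = []
        prev = current_frame
        for t in range(num_frames):
            # progress in (0, 1] tells the decoder how close the transition is to the target
            progress = torch.full((prev.shape[0], 1), (t + 1) / num_frames,
                                  device=prev.device)
            prev = decoder(z, prev, target_frame, progress)   # predict the next pose
            frames.append(prev)
        return torch.stack(frames, dim=1)                     # (batch, num_frames, pose_dim)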
