Search Results for author: Takato Yoshikawa

Found 1 paper, 0 papers with code

StyleHumanCLIP: Text-guided Garment Manipulation for StyleGAN-Human

no code implementations • 26 May 2023 • Takato Yoshikawa, Yuki Endo, Yoshihiro Kanamori

We propose a framework for text-guided full-body human image synthesis via an attention-based latent code mapper, which enables more disentangled control of StyleGAN than existing mappers.

Image Generation
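
To make the abstract's idea more concrete, below is a minimal, hypothetical sketch of an attention-based latent code mapper: a text embedding (e.g. from CLIP) is attended to by the per-layer entries of a StyleGAN W+ latent code, producing per-layer offsets. All module names, dimensions, and the use of `nn.MultiheadAttention` are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only — not the StyleHumanCLIP code.
import torch
import torch.nn as nn


class AttentionLatentMapper(nn.Module):
    """Maps a text embedding to per-layer offsets for a StyleGAN W+ latent code.

    Each W+ entry attends to the text embedding, so a text prompt can affect
    different StyleGAN layers to different degrees (a rough proxy for the
    disentangled control described in the abstract).
    """

    def __init__(self, latent_dim: int = 512, text_dim: int = 512,
                 num_layers: int = 18, num_heads: int = 4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, latent_dim)
        self.attn = nn.MultiheadAttention(latent_dim, num_heads, batch_first=True)
        self.out = nn.Sequential(
            nn.LayerNorm(latent_dim),
            nn.Linear(latent_dim, latent_dim),
        )
        self.num_layers = num_layers

    def forward(self, w_plus: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # w_plus:   (batch, num_layers, latent_dim) — per-layer StyleGAN latents
        # text_emb: (batch, text_dim) — e.g. a CLIP text embedding
        text_tokens = self.text_proj(text_emb).unsqueeze(1)   # (B, 1, D)
        # Latent codes act as queries; the text embedding supplies keys/values.
        attended, _ = self.attn(query=w_plus, key=text_tokens, value=text_tokens)
        delta = self.out(attended)                            # per-layer offsets
        return w_plus + delta                                 # edited W+ code


if __name__ == "__main__":
    mapper = AttentionLatentMapper()
    w_plus = torch.randn(2, 18, 512)   # stand-in for StyleGAN-Human W+ codes
    text_emb = torch.randn(2, 512)     # stand-in for a CLIP text embedding
    edited = mapper(w_plus, text_emb)
    print(edited.shape)                # torch.Size([2, 18, 512])
```

The edited W+ code would then be fed to a pretrained StyleGAN-Human generator; how the mapper is trained (e.g. with a CLIP-based loss) is not specified here.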
