Domain Adaptation Meets Disentangled Representation Learning and Style Transfer

25 Dec 2017  ·  Hoang Tran Vu, Ching-Chun Huang ·

Many methods have recently been proposed to solve the domain adaptation problem. However, their success implicitly rests on the assumption that the information of the domains is fully transferable. If this assumption does not hold, negative transfer may degrade domain adaptation. In this paper, we propose an improved learning network that considers three tasks simultaneously: domain adaptation, disentangled representation learning, and style transfer. First, the learned features are disentangled into common parts and specific parts. The common parts represent transferable features, which are suitable for domain adaptation with less negative transfer, while the specific parts characterize the unique style of each individual domain. Based on this disentanglement, we introduce the concept of feature exchange across domains, which not only enhances the transferability of the common features but also supports image style transfer. These designs allow us to define five types of training objectives that realize the three challenging tasks at the same time. The experimental results show that our architecture adapts well to both full transfer learning and partial transfer learning, owing to the well-learned disentangled representation. In addition, the trained network shows strong potential for generating style-transferred images.
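The following is a minimal sketch, not the authors' implementation, of the feature-exchange idea described in the abstract: each image is encoded into a common (transferable) part and a specific (domain-style) part, and decoding a source common part together with a target specific part yields a style-transferred image. The network layout, layer sizes, and names (`DisentangledAutoencoder`, `enc_common`, `enc_specific`) are illustrative assumptions; the actual architecture and the five training objectives are defined in the paper.

```python
import torch
import torch.nn as nn

class DisentangledAutoencoder(nn.Module):
    """Hypothetical encoder/decoder pair illustrating common/specific disentanglement."""
    def __init__(self, channels=3, dim=64):
        super().__init__()
        # Encoder for the common (transferable) representation.
        self.enc_common = nn.Sequential(
            nn.Conv2d(channels, dim, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, 2, 1), nn.ReLU())
        # Encoder for the domain-specific (style) representation.
        self.enc_specific = nn.Sequential(
            nn.Conv2d(channels, dim, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, 2, 1), nn.ReLU())
        # Decoder reconstructs an image from a (common, specific) pair.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2 * dim, dim, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(dim, channels, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        return self.enc_common(x), self.enc_specific(x)

    def decode(self, common, specific):
        return self.dec(torch.cat([common, specific], dim=1))

model = DisentangledAutoencoder()
x_src = torch.randn(4, 3, 32, 32)   # source-domain batch (dummy data)
x_tgt = torch.randn(4, 3, 32, 32)   # target-domain batch (dummy data)

c_src, s_src = model(x_src)
c_tgt, s_tgt = model(x_tgt)

# Feature exchange: render source content in the target domain's style and
# vice versa. Losses on these exchanged reconstructions (together with
# domain-adaptation objectives on the common parts) would drive training.
src_as_tgt = model.decode(c_src, s_tgt)
tgt_as_src = model.decode(c_tgt, s_src)
```

In this sketch, domain adaptation would operate only on the common parts (e.g., via a domain classifier or alignment loss), which is what limits negative transfer, while the exchanged decodings provide the style-transfer outputs.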
