no code implementations • 30 Jan 2022 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
Multi-domain text classification (MDTC) aims to leverage all available resources from multiple domains to learn a predictive model that can generalize well on these domains.
no code implementations • 29 Jan 2022 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
Multi-domain text classification (MDTC) has obtained remarkable achievements due to the advent of deep learning.
no code implementations • 14 Aug 2021 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
Adversarial domain adaptation has made impressive advances in transferring knowledge from the source domain to the target domain by aligning feature distributions of both domains.
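The alignment idea in this snippet can be illustrated with a toy experiment: train a logistic domain discriminator on 1-D features from two domains, then crudely simulate what adversarial training achieves by shifting the target features until the discriminator can no longer tell the domains apart. This is a minimal sketch with synthetic data, not the paper's method; the actual approach learns the alignment through a feature extractor trained against the discriminator.

```python
import math
import random

random.seed(0)

# Synthetic 1-D "features": source clustered near 0, target near 2.
source = [random.gauss(0.0, 0.5) for _ in range(200)]
target = [random.gauss(2.0, 0.5) for _ in range(200)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_discriminator(src, tgt, steps=500, lr=0.1):
    """Logistic domain discriminator: predicts 1 for source, 0 for target."""
    w, b = 0.0, 0.0
    data = [(x, 1.0) for x in src] + [(x, 0.0) for x in tgt]
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            p = sigmoid(w * x + b)
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(data)
        b -= lr * gb / len(data)
    return w, b

def domain_accuracy(w, b, src, tgt):
    correct = sum(sigmoid(w * x + b) > 0.5 for x in src)
    correct += sum(sigmoid(w * x + b) <= 0.5 for x in tgt)
    return correct / (len(src) + len(tgt))

# Before alignment the discriminator separates the two domains easily.
w, b = train_discriminator(source, target)
acc_before = domain_accuracy(w, b, source, target)

# Crude stand-in for the adversarial feature extractor: shift target
# features so their mean matches the source mean.
shift = sum(source) / len(source) - sum(target) / len(target)
aligned_target = [x + shift for x in target]

# After alignment, domain accuracy drops toward chance (0.5).
w, b = train_discriminator(source, aligned_target)
acc_after = domain_accuracy(w, b, source, aligned_target)
print(acc_before, acc_after)
```

In the real adversarial setup the shift is not computed in closed form: the feature extractor receives gradients that push it to maximize the discriminator's loss, which has the same effect of making the two feature distributions indistinguishable.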
no code implementations • EACL (AdaptNLP) 2021 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
We provide theoretical analysis for the CAN framework, showing that CAN's objective is equivalent to minimizing the total divergence among multiple joint distributions of shared features and label predictions.
no code implementations • 31 Jan 2021 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
Using the shared-private paradigm and adversarial training has significantly improved the performance of multi-domain text classification (MDTC) models.
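Structurally, the shared-private paradigm gives each domain's classifier a concatenation of a domain-invariant (shared) representation and a domain-specific (private) one. The following is a minimal structural sketch with made-up stand-in extractors and domain names, not the paper's architecture; in practice both extractors are learned neural networks.

```python
# Hypothetical stand-in extractor: scales a count vector. Real shared and
# private extractors are trained networks; the scaling here only makes the
# two feature sources distinguishable in the output.
def make_extractor(scale):
    return lambda x: [scale * v for v in x]

shared = make_extractor(1.0)     # one extractor shared across all domains
private = {                      # one private extractor per domain
    "books": make_extractor(0.5),
    "dvd": make_extractor(2.0),
}

def features(domain, x):
    # Shared-private paradigm: concatenate the domain-invariant features
    # with the features private to this domain.
    return shared(x) + private[domain](x)

x = [1.0, 0.0, 3.0]
print(features("books", x))  # [1.0, 0.0, 3.0, 0.5, 0.0, 1.5]
```

Adversarial training is then applied only to the shared extractor, pushing it toward domain-invariant features, while each private extractor is free to capture domain-specific signal.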
no code implementations • 1 Jan 2021 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
Domain adaptation sets out to address this problem, aiming to leverage labeled data in the source domain to learn a good predictive model for the target domain whose labels are scarce or unavailable.
no code implementations • ECCV 2020 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
Second, samples from the source and target domains alone are not sufficient for domain-invariant feature extraction in the latent space.