Unsupervised Meta Learning for One Shot Title Compression in Voice Commerce

22 Feb 2021 · Snehasish Mukherjee

Product title compression for voice and mobile commerce is a well-studied problem, with several supervised models proposed so far. However, these models have two major limitations: they are not designed to generate compressions dynamically based on cues available at inference time, and they do not transfer well to different categories at test time. To address these shortcomings, we model title compression as a meta-learning problem and ask: can we learn a title compression model given only one example compression? We adopt an unsupervised approach to meta-training by proposing an automatic task generation algorithm that models the observed label generation process as the outcome of four unobserved processes. We create parameterized approximations to each of these four latent processes, yielding a principled way to generate random compression rules, each of which is treated as a distinct task. Our meta-learner consists of two models, M1 and M2: M1 is a task-agnostic embedding generator whose output feeds into M2, a task-specific label generator. We pre-train M1 on a novel unsupervised segment rank prediction task, which lets us treat M1 as a segment generator that also learns to rank segments during meta-training. Our experiments on 16,000 crowd-generated meta-test examples show that our unsupervised meta-training regime acquires a learning algorithm for different tasks after seeing only one example per task. Further, we show that our model, trained end to end as a black-box meta-learner, outperforms non-parametric approaches. Our best model obtains an F1 score of 0.8412, beating the baseline by a large margin of 25 F1 points.
