no code implementations • 31 May 2022 • Shahrzad Naseri, Sravana Reddy, Joana Correia, Jussi Karlgren, Rosie Jones
We find that a pretrained transformer-based language model in a zero-shot setting -- i.e., used out of the box with no further training on our data -- is effective at capturing song-mood associations.
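The zero-shot pattern above can be sketched as follows: embed the song text and each candidate mood label, then pick the label whose embedding is closest. The `embed` function here is a toy character-trigram stand-in, not the paper's model; a real system would use a pretrained transformer encoder.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: character-trigram counts. Stand-in for a
    pretrained transformer's sentence embedding."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_mood(song_text: str, moods: list) -> str:
    """Return the mood label whose prompt embedding is closest to the
    song's embedding -- no training on labeled song-mood data."""
    song_vec = embed(song_text)
    return max(moods, key=lambda m: cosine(embed("this song feels " + m), song_vec))
```

With a transformer in place of the toy `embed`, the same scoring loop gives the out-of-the-box behavior the abstract describes.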
no code implementations • 9 Mar 2021 • Shahrzad Naseri, Jeffrey Dalton, Andrew Yates, James Allan
We find that CEQE outperforms static embedding-based expansion methods on multiple collections (by up to 18% on Robust and 31% on Deep Learning in average precision) and also improves over proven probabilistic pseudo-relevance feedback (PRF) models.
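Embedding-based expansion of the kind CEQE is compared against can be sketched as: rank candidate terms from pseudo-relevant documents by similarity to the query's centroid embedding, then append the top terms. The tiny `embeddings` table is illustrative data; CEQE itself derives contextualized term embeddings from a transformer rather than static vectors.

```python
import math

# Toy static embeddings (illustrative only; real systems load
# pretrained vectors such as word2vec or GloVe).
embeddings = {
    "car":     [0.90, 0.10, 0.00],
    "vehicle": [0.85, 0.20, 0.05],
    "engine":  [0.70, 0.30, 0.10],
    "banana":  [0.00, 0.10, 0.95],
}

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vectors):
    """Component-wise mean of a list of vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def expand_query(query_terms, candidate_terms, k=2):
    """Append the k candidate terms closest to the query centroid."""
    q = centroid([embeddings[t] for t in query_terms if t in embeddings])
    ranked = sorted(
        (t for t in candidate_terms if t in embeddings and t not in query_terms),
        key=lambda t: cosine(embeddings[t], q),
        reverse=True,
    )
    return query_terms + ranked[:k]
```

Here the candidates would come from top-ranked feedback documents, which is the PRF step the abstract refers to.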
no code implementations • 2 Jul 2019 • Shahrzad Naseri, Sheikh Muhammad Sarwar, James Allan
A common approach to knowledge-base entity search is to represent an entity as a document with multiple fields.
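The fielded-document view can be sketched as a weighted sum of per-field term matches. The field names and weights below are illustrative assumptions, not values from the paper; production systems would use a tuned fielded model such as BM25F.

```python
from collections import Counter

# Hypothetical fields and weights for a knowledge-base entity.
FIELD_WEIGHTS = {"name": 3.0, "description": 1.0, "categories": 1.5}

def score(entity: dict, query: str) -> float:
    """Score an entity (a dict of field name -> text) against a query
    as a weighted sum of term-frequency matches per field."""
    q_terms = query.lower().split()
    total = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        counts = Counter(entity.get(field, "").lower().split())
        total += weight * sum(counts[t] for t in q_terms)
    return total

# Usage: rank two toy entities for the query "obama president".
e1 = {"name": "barack obama",
      "description": "44th president of the united states",
      "categories": "politicians presidents"}
e2 = {"name": "michelle obama",
      "description": "american attorney and author",
      "categories": "authors"}
ranked = sorted([e1, e2], key=lambda e: score(e, "obama president"), reverse=True)
```

Weighting the `name` field more heavily reflects the usual intuition that a query term matching an entity's name is stronger evidence than a match in free-text fields.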