3SHNet: Boosting Image-Sentence Retrieval via Visual Semantic-Spatial Self-Highlighting

26 Apr 2024 · Xuri Ge, Songpei Xu, Fuhai Chen, Jie Wang, Guoxin Wang, Shan An, Joemon M. Jose

In this paper, we propose a novel visual Semantic-Spatial Self-Highlighting Network (termed 3SHNet) for high-precision, high-efficiency, and high-generalization image-sentence retrieval. 3SHNet highlights salient objects and their spatial locations within the visual modality, enabling the integration of visual semantic-spatial interactions while maintaining independence between the two modalities. This integration effectively combines object regions with the corresponding semantic and position layouts derived from segmentation to enhance the visual representation, and the modality independence guarantees efficiency and generalization. Additionally, 3SHNet exploits structured contextual visual scene information from segmentation to provide local (region-based) or global (grid-based) guidance and achieve accurate hybrid-level retrieval. Extensive experiments on the MS-COCO and Flickr30K benchmarks substantiate the superior performance, inference efficiency, and generalization of the proposed 3SHNet compared with contemporary state-of-the-art methods. Specifically, on the larger MS-COCO 5K test set, we achieve 16.3%, 24.8%, and 18.3% improvements in rSum score over state-of-the-art methods using different image representations, while maintaining optimal retrieval efficiency. Moreover, cross-dataset generalization improves by 18.6%. Data and code are available at https://github.com/XuriGe1995/3SHNet.
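To make the self-highlighting idea above concrete, here is a minimal PyTorch sketch of one plausible formulation: detector region features are gated by semantic and position layouts pooled from a segmentation map. All module names, dimensions, and the gating scheme are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Hypothetical sketch of semantic-spatial self-highlighting; not the
# authors' code. Dimensions and the gating scheme are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticSpatialHighlight(nn.Module):
    """Gate detector region features with segmentation-derived
    semantic and position layouts (illustrative formulation)."""
    def __init__(self, region_dim=2048, seg_dim=512, embed_dim=1024):
        super().__init__()
        self.region_proj = nn.Linear(region_dim, embed_dim)
        # semantic layout: segmentation features pooled per region
        self.semantic_proj = nn.Linear(seg_dim, embed_dim)
        # position layout: normalized (x1, y1, x2, y2, w, h) per region
        self.position_proj = nn.Linear(6, embed_dim)
        self.gate = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, regions, seg_features, boxes):
        # regions:      (B, N, region_dim) detector region features
        # seg_features: (B, N, seg_dim)    segmentation features per region
        # boxes:        (B, N, 6)          normalized spatial layout
        r = self.region_proj(regions)
        s = self.semantic_proj(seg_features) + self.position_proj(boxes)
        # self-highlighting: each region is re-weighted by how strongly
        # its semantic-spatial context agrees with its appearance
        g = torch.sigmoid(self.gate(torch.cat([r, s], dim=-1)))
        highlighted = g * r + (1 - g) * s
        return F.normalize(highlighted, dim=-1)
```

Because the highlighting operates entirely within the visual branch, the text encoder stays untouched; this is the modality independence the abstract credits for the method's efficiency and cross-dataset generalization.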


Results from the Paper


Task                   Dataset    Model   Metric              Value  Global Rank
Cross-Modal Retrieval  COCO 2014  3SHNet  Image-to-text R@1   67.9   #20
Cross-Modal Retrieval  COCO 2014  3SHNet  Image-to-text R@5   90.5   #18
Cross-Modal Retrieval  COCO 2014  3SHNet  Image-to-text R@10  95.4   #17
Cross-Modal Retrieval  COCO 2014  3SHNet  Text-to-image R@1   50.3   #22
Cross-Modal Retrieval  COCO 2014  3SHNet  Text-to-image R@5   79.3   #21
Cross-Modal Retrieval  COCO 2014  3SHNet  Text-to-image R@10  87.7   #18
Cross-Modal Retrieval  Flickr30k  3SHNet  Image-to-text R@1   87.1   #12
Cross-Modal Retrieval  Flickr30k  3SHNet  Image-to-text R@5   98.2   #12
Cross-Modal Retrieval  Flickr30k  3SHNet  Image-to-text R@10  99.2   #12
Cross-Modal Retrieval  Flickr30k  3SHNet  Text-to-image R@1   69.5   #13
Cross-Modal Retrieval  Flickr30k  3SHNet  Text-to-image R@5   91.0   #13
Cross-Modal Retrieval  Flickr30k  3SHNet  Text-to-image R@10  94.7   #13
Cross-Modal Retrieval  MSCOCO     3SHNet  Image-to-text R@1   85.8   #1
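The rSum score cited in the abstract is the standard sum of the six recall values (R@1, R@5, and R@10 in both retrieval directions). A short check computing it from the COCO 2014 row above:

```python
# rSum = sum of the six recall values, taken from the table above.
i2t = {"R@1": 67.9, "R@5": 90.5, "R@10": 95.4}  # image-to-text
t2i = {"R@1": 50.3, "R@5": 79.3, "R@10": 87.7}  # text-to-image
rsum = sum(i2t.values()) + sum(t2i.values())
print(f"rSum = {rsum:.1f}")  # rSum = 471.1
```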
