1 code implementation • ECCV 2020 • Zhensheng Shi, Cheng Guan, Liangjie Cao, Qianqian Li, Ju Liang, Zhaorui Gu, Haiyong Zheng, Bing Zheng
Current relation models mainly reason about relations from implicit, invisible cues, while important relations from explicit, visible cues are rarely considered, and the collaboration between the two is usually ignored.
no code implementations • 13 Jun 2023 • Yan Shi, Yao Tian, Chengwei Tong, Chunyan Zhu, Qianqian Li, Mengzhu Zhang, Wei Zhao, Yong Liao, Pengyuan Zhou
Social networks play an important role in propagating people's viewpoints, emotions, thoughts, and fears.
no code implementations • 18 Nov 2022 • Qianqian Li, Giuliano Punzo, Craig Robson, Hadi Arbabi, Martin Mayfield
A 95-year horizon is drawn for the resilience of the railway system.
1 code implementation • ICCV 2021 • Zhensheng Shi, Ju Liang, Qianqian Li, Haiyong Zheng, Zhaorui Gu, Junyu Dong, Bing Zheng
In this paper, we propose a novel multi-action relation model for videos, by leveraging both relational graph convolutional networks (GCNs) and video multi-modality.
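The paper's model is not reproduced here; as a rough, hypothetical sketch of the core building block it names, here is a single relational GCN layer (in the style of relational graph convolutions) with per-relation adjacency matrices. All names, shapes, and the normalization choice are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def relational_gcn_layer(x, adjacency_per_relation, weights, self_weight):
    """Simplified relational GCN layer: aggregate neighbor features
    separately for each relation type, add a self-connection term,
    then apply a ReLU nonlinearity."""
    out = x @ self_weight  # self-loop contribution
    for a, w in zip(adjacency_per_relation, weights):
        # row-normalize so each node averages its neighbors under this relation
        deg = a.sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0  # avoid division by zero for isolated nodes
        out += (a / deg) @ x @ w
    return np.maximum(out, 0.0)  # ReLU

# toy example: 3 nodes, 2 relation types, feature dim 4 -> 2
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
adj = [np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float),
       np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], float)]
ws = [rng.normal(size=(4, 2)) for _ in range(2)]
w_self = rng.normal(size=(4, 2))
h = relational_gcn_layer(x, adj, ws, w_self)
print(h.shape)  # (3, 2)
```

In a video setting, the nodes would carry per-actor or per-modality features and each relation type its own adjacency, which is the structure the model above exercises on toy data.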
2 code implementations • NeurIPS Workshop on Document Intelligence 2019 • W. Ronny Huang, Yike Qi, Qianqian Li, Jonathan Degange
In addition to high segmentation accuracy, we show that our cleansed images achieve a significant boost in recognition accuracy by popular OCR software such as Tesseract 4.0.
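The paper's cleansing pipeline is not described on this page; purely to illustrate the general idea (suppressing mid-gray noise such as stamps or watermarks so an OCR engine sees clean glyphs), here is a toy thresholding step. The function name and both thresholds are hypothetical and not the authors' method:

```python
import numpy as np

def cleanse_document_image(gray, text_threshold=80):
    """Toy cleansing step (illustrative only): keep strong dark pixels
    as text and push everything else to a white background.
    gray: uint8 array where 0 = black and 255 = white."""
    out = np.full_like(gray, 255)    # start from a white page
    out[gray <= text_threshold] = 0  # retain dark strokes as text
    # mid-gray pixels (smudges, watermarks) are treated as noise -> white
    return out

# toy 1x4 "image": black text, dark-gray smudge, light watermark, white
row = np.array([[10, 100, 200, 255]], dtype=np.uint8)
print(cleanse_document_image(row).tolist())  # [[0, 255, 255, 255]]
```

A cleansed image like this could then be handed to an OCR engine such as Tesseract for recognition.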
Optical Character Recognition (OCR)