Search Results for author: Xiaojun Qi

Found 6 papers, 2 papers with code

Residual Graph Convolutional Network for Bird's-Eye-View Semantic Segmentation

no code implementations • 7 Dec 2023 • Qiuxiao Chen, Xiaojun Qi

In this paper, we propose to incorporate a novel Residual Graph Convolutional (RGC) module in deep CNNs to acquire both the global information and the region-level semantic relationship in the multi-view image domain.

Autonomous Driving • Bird's-Eye View Semantic Segmentation • +2
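The entry above only hints at how a Residual Graph Convolutional (RGC) module works. As a rough illustration rather than the authors' implementation, a residual graph convolution adds a graph-propagated update back onto the input node features. The PyTorch sketch below shows that generic pattern; the normalized adjacency `adj_norm`, the feature size `dim`, and the use of a plain skip connection are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ResidualGraphConv(nn.Module):
    """Generic residual graph convolution: H_out = H + ReLU(A_norm @ H @ W).

    Illustrative sketch only; the paper's RGC module may differ in how
    region nodes are built from multi-view BEV features, in normalization,
    and in activation.
    """

    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Linear(dim, dim, bias=False)
        self.act = nn.ReLU()

    def forward(self, h, adj_norm):
        # h:        (num_nodes, dim) node features, e.g. pooled region features
        # adj_norm: (num_nodes, num_nodes) normalized adjacency (assumed given)
        update = self.act(adj_norm @ self.weight(h))
        return h + update  # residual connection keeps the local features intact


if __name__ == "__main__":
    nodes, dim = 8, 16
    h = torch.randn(nodes, dim)
    adj = torch.eye(nodes)  # placeholder adjacency; a real one encodes region links
    out = ResidualGraphConv(dim)(h, adj)
    print(out.shape)  # torch.Size([8, 16])
```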

Enhancing the Performance of Automated Grade Prediction in MOOC using Graph Representation Learning

1 code implementation • 18 Oct 2023 • Soheila Farokhi, Aswani Yaramala, Jiangtao Huang, Muhammad F. A. Khan, Xiaojun Qi, Hamid Karimi

However, current automated assessment approaches overlook the structural links between different entities involved in the downstream tasks, such as the students and courses.

Graph Embedding • Graph Representation Learning
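The snippet above argues that grade prediction should exploit the structural links between entities such as students and courses. Purely as an illustration of that idea, and not the paper's pipeline, the sketch below builds a small bipartite student-course interaction graph with networkx; the node names and edge weights are invented for the example, and a graph representation learning model would then embed these nodes jointly.

```python
import networkx as nx

# Hypothetical interactions: (student, course, normalized engagement score)
interactions = [
    ("student_1", "course_A", 0.9),
    ("student_1", "course_B", 0.4),
    ("student_2", "course_A", 0.7),
]

# Bipartite graph linking students to the courses they interact with.
G = nx.Graph()
for student, course, score in interactions:
    G.add_node(student, kind="student")
    G.add_node(course, kind="course")
    G.add_edge(student, course, weight=score)

print(G.number_of_nodes(), G.number_of_edges())  # 4 3
```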

Facial Expression Recognition in the Wild via Deep Attentive Center Loss

1 code implementation • 7 Jan 2021 • Amir Hossein Farzaneh, Xiaojun Qi

Learning discriminative features for Facial Expression Recognition (FER) in the wild using Convolutional Neural Networks (CNNs) is a non-trivial task due to the significant intra-class variations and inter-class similarities.

Ranked #15 on Facial Expression Recognition (FER) on RAF-DB (using extra training data)

Facial Expression Recognition • Facial Expression Recognition (FER) • +2
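The title refers to a deep attentive variant of center loss. As background only, the standard center loss on which such variants build penalizes the distance between each deep feature and its class center; the PyTorch sketch below implements that baseline form, and the attention weighting specific to this paper is not shown.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Plain center loss: mean squared distance of features to their class centers.

    Baseline formulation only; the paper's Deep Attentive Center Loss
    additionally reweights feature dimensions, which is omitted here.
    """

    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # features: (batch, feat_dim), labels: (batch,) integer class indices
        centers_batch = self.centers[labels]  # center of each sample's class
        return ((features - centers_batch) ** 2).sum(dim=1).mean() / 2


if __name__ == "__main__":
    feats = torch.randn(4, 128)
    labels = torch.tensor([0, 2, 1, 2])
    # In training this term is combined with cross-entropy,
    # e.g. loss = ce_loss + lambda_c * center_loss(feats, labels)
    print(CenterLoss(num_classes=7, feat_dim=128)(feats, labels))
```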

Structured Group Local Sparse Tracker

no code implementations • 17 Feb 2019 • Mohammadreza Javanmardi, Xiaojun Qi

Sparse representation is considered a viable solution to visual tracking.

Visual Tracking

Robust Structured Multi-task Multi-view Sparse Tracking

no code implementations • 6 Jun 2018 • Mohammadreza Javanmardi, Xiaojun Qi

Specifically, we extract features of the target candidates from different views and sparsely represent them by a linear combination of templates of different views.

Visual Tracking
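The snippet above describes representing each target candidate as a sparse linear combination of view-specific templates. As a generic illustration of that step, and not the paper's structured multi-task multi-view solver, the sketch below fits sparse coefficients for one candidate feature vector against a template dictionary using scikit-learn's Lasso; the dictionary and feature sizes are made up for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical single-view setup: 20 templates with 64-dimensional features.
templates = rng.normal(size=(64, 20))            # dictionary D, one column per template
candidate = templates @ rng.normal(size=20) * 0.1 + rng.normal(size=64) * 0.01

# Sparse coding: minimize ||candidate - D c||^2 + alpha * ||c||_1.
# The paper instead solves a joint problem coupling coefficients across
# views and candidates; this is only the basic per-candidate version.
solver = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
solver.fit(templates, candidate)
coeffs = solver.coef_                            # sparse combination weights

print(np.count_nonzero(coeffs), "of", coeffs.size, "coefficients are non-zero")
```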
