Generative Adversarial Networks Based on Transformer Encoder and Convolution Block for Hyperspectral Image Classification

Nowadays, hyperspectral image (HSI) classification can reach high accuracy when sufficient labeled samples are available for training. However, the performance of existing methods drops sharply when only a few labeled samples are available. Existing few-shot methods usually require an additional dataset to improve classification accuracy, but this introduces a cross-domain problem because of the significant spectral shift between the target and source domains. Considering these issues, we propose a new method that requires no external dataset by combining a Generative Adversarial Network, a Transformer Encoder, and a convolution block in a unified framework. The proposed method has both a global receptive field, provided by the Transformer Encoder, and a local receptive field, provided by the convolution block. Experiments conducted on the Indian Pines, Pavia University, and KSC datasets demonstrate that our method outperforms existing deep learning methods for hyperspectral image classification in the few-shot setting.
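The following is a minimal PyTorch-style sketch of the idea described in the abstract: a feature extractor that fuses a convolutional branch (local receptive field over a pixel patch) with a Transformer-encoder branch (global receptive field via self-attention across spatial positions). The class name, layer sizes, patch shape, and fusion by concatenation are illustrative assumptions, not the authors' exact TC-GAN architecture.

import torch
import torch.nn as nn

class ConvTransformerFeatures(nn.Module):
    def __init__(self, n_bands=200, n_classes=16, d_model=64):
        super().__init__()
        # Local branch: 2-D convolutions over the spatial patch, spectral bands as channels.
        self.conv = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, d_model, kernel_size=3, padding=1),
            nn.AdaptiveAvgPool2d(1),            # -> (B, d_model, 1, 1)
        )
        # Global branch: treat each spatial position as a token and run a
        # standard Transformer encoder so every position attends to every other.
        self.proj = nn.Linear(n_bands, d_model)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Fused features feed the classifier head (in a GAN discriminator this
        # head would also carry a real/fake output).
        self.head = nn.Linear(2 * d_model, n_classes)

    def forward(self, x):                        # x: (B, n_bands, patch, patch)
        local = self.conv(x).flatten(1)          # (B, d_model)
        tokens = x.flatten(2).transpose(1, 2)    # (B, patch*patch, n_bands)
        glob = self.encoder(self.proj(tokens)).mean(dim=1)  # (B, d_model)
        return self.head(torch.cat([local, glob], dim=1))

if __name__ == "__main__":
    model = ConvTransformerFeatures()
    dummy = torch.randn(4, 200, 9, 9)            # 4 pixel patches, 200 bands
    print(model(dummy).shape)                    # torch.Size([4, 16])

In a full GAN setup, a sketch like this would serve as the discriminator/classifier, trained jointly with a generator that synthesizes additional spectral-spatial samples to compensate for the small labeled set.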

Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data
Hyperspectral Image Classification | Indian Pines | TC-GAN | OA@15perclass | 87.47±1.45 | #3 | No
Hyperspectral Image Classification | Kennedy Space Center | TC-GAN | OA@15perclass | 98.39±0.63 | #2 | No
Hyperspectral Image Classification | Pavia University | TC-GAN | OA@15perclass | 93.20±0.59 | #2 | No
