In recent years, deep-learning-based HS–MS fusion has become a very active research area for the super-resolution of hyperspectral images. Deep convolutional neural networks (CNNs) can extract detailed spectral and spatial features from hyperspectral images. However, in a CNN each convolutional layer takes its input only from the previous layer, which can cause information loss as the depth of the network increases; this loss of information contributes to the vanishing-gradient problem, particularly in the case of very high-resolution images. To overcome this problem, we propose a novel HS–MS ResNet fusion architecture built on skip connections. The architecture contains residual blocks with different numbers of stacked convolutional layers; in this work we test residual blocks with two, three, and four stacked convolutional layers. To strengthen the gradients and reduce the negative effects of gradient vanishing, we implement the ResNet fusion architecture with different types of skip connection: short, long, and dense. We evaluate the strength of our ResNet fusion method against traditional methods on four public datasets using standard quality measures, and our method outperforms all compared methods.
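The core idea of the residual blocks described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names are hypothetical, and a plain linear map stands in for a real convolution so the skip-connection mechanics stay visible.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv_layer(x, w):
    # Stand-in for a convolutional layer: a simple linear map,
    # used here only to illustrate the data flow (assumption).
    return x @ w

def residual_block(x, weights):
    # Stacked "conv" layers; the chapter tests blocks of 2, 3, and 4 layers.
    y = x
    for w in weights:
        y = relu(conv_layer(y, w))
    # Short (identity) skip connection: the block input is added back to
    # the output, so gradients and low-level information bypass the stack.
    return y + x

x = np.random.randn(8, 4)
weights = [np.random.randn(4, 4) * 0.1 for _ in range(2)]  # two-layer block
out = residual_block(x, weights)
```

Because of the identity shortcut, even if the stacked layers contribute nothing (e.g., zero weights), the input still passes through unchanged, which is what counteracts information loss and vanishing gradients in deeper networks.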
Part of the book: Hyperspectral Imaging