Simplified object-based deep neural network for very high resolution remote sensing image classification

Pan, Xin and Zhang, Ce and Xu, Jun and Zhao, Jian (2021) Simplified object-based deep neural network for very high resolution remote sensing image classification. ISPRS Journal of Photogrammetry and Remote Sensing, 181. pp. 218-237. ISSN 0924-2716

Text (pan_2021_accepted)
pan_2021_accepted.pdf - Accepted Version
Available under License Creative Commons Attribution-NonCommercial-NoDerivs.

Download (14MB)

Abstract

For the object-based classification of high resolution remote sensing images, many people expect that introducing deep learning methods can improve the classification accuracy. Unfortunately, the input shape for deep neural networks (DNNs) is usually rectangular, whereas the shapes of the segments output by segmentation methods usually conform to the corresponding ground objects; this inconsistency can lead to confusion among different types of heterogeneous content when a DNN processes a segment. Currently, most object-based methods utilizing convolutional neural networks (CNNs) adopt additional models to overcome the detrimental influence of such heterogeneous content; however, these heterogeneity suppression mechanisms introduce additional complexity into the whole classification process, and these methods are usually unstable and difficult to use in real applications. To address the above problems, this paper proposes a simplified object-based deep neural network (SO-DNN) for very high resolution remote sensing image classification. In SO-DNN, a new segment category label inference method is introduced, in which a deep semantic segmentation neural network (DSSNN) is used as the classification model instead of a traditional CNN. Since the DSSNN can obtain a category label for each pixel in the input image patch, different types of content are not mixed together; therefore, SO-DNN does not require an additional heterogeneity suppression mechanism. Moreover, SO-DNN includes a sample information optimization method that allows the DSSNN model to be trained using only pixel-based training samples. Because only a single model is used and only a pixel-based training set is needed, the whole classification process of SO-DNN is relatively simple and direct.
In experiments, we use very high-resolution aerial images of Vaihingen and Potsdam from the ISPRS WG II/4 dataset as test data and compare SO-DNN with six traditional methods: O-MLP, O+CNN, OHSF-CNN, 2-CNN, JDL and U-Net. Compared with the best-performing of these methods, the classification accuracy of SO-DNN is improved by up to 7.71% and 10.78% for single images from Vaihingen and Potsdam, respectively, and the average classification accuracy is improved by 2.46% and 2.91% for the Vaihingen and Potsdam images, respectively. SO-DNN relies on fewer models and easier-to-obtain samples than traditional methods, and its stable performance makes it more valuable for practical applications.
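The abstract's core idea is that a semantic segmentation network yields a label for every pixel, so a segment's category can be inferred from the per-pixel predictions falling inside it rather than from a rectangular patch classifier. The paper's exact inference rule is not given here; a minimal sketch of one plausible aggregation, majority voting within each segment, is shown below (the function name and the voting rule are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def infer_segment_labels(pixel_labels, segment_ids):
    """Illustrative segment-label inference: assign each segment the
    majority class among the per-pixel predictions inside it.

    pixel_labels : 2-D int array, per-pixel class map from a semantic
                   segmentation network (e.g. a DSSNN prediction).
    segment_ids  : 2-D int array of the same shape, segment id per pixel
                   (output of an image segmentation step).
    Returns a dict mapping segment id -> inferred class label.
    """
    labels = {}
    for seg in np.unique(segment_ids):
        mask = segment_ids == seg
        # Count how often each class appears inside this segment
        classes, counts = np.unique(pixel_labels[mask], return_counts=True)
        labels[int(seg)] = int(classes[np.argmax(counts)])
    return labels

# Tiny example: two segments, each dominated by one class
pixel_labels = np.array([[0, 0, 1],
                         [0, 1, 1]])
segment_ids = np.array([[1, 1, 2],
                        [1, 2, 2]])
print(infer_segment_labels(pixel_labels, segment_ids))  # {1: 0, 2: 1}
```

Because each pixel already carries its own label, heterogeneous content inside the rectangular input patch never contaminates the segment's decision, which is the property the abstract credits for removing the extra heterogeneity-suppression models.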

Item Type:
Journal Article
Journal or Publication Title:
ISPRS Journal of Photogrammetry and Remote Sensing
Additional Information:
This is the author’s version of a work that was accepted for publication in ISPRS Journal of Photogrammetry and Remote Sensing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in ISPRS Journal of Photogrammetry and Remote Sensing, 181, 2021 DOI: 10.1016/j.isprsjprs.2021.09.014
Uncontrolled Keywords:
/dk/atira/pure/subjectarea/asjc/2200/2201
Subjects:
CNN; very high resolution; semantic segmentation; classification; OBIA; engineering (miscellaneous); atomic and molecular physics, and optics; computers in earth sciences; computer science applications; geography, planning and development
ID Code:
160076
Deposited By:
Deposited On:
05 Oct 2021 08:15
Refereed?:
Yes
Published?:
Published
Last Modified:
12 Feb 2024 00:42