An object-based convolutional neural network (OCNN) for urban land use classification

Zhang, Ce and Sargent, Isabel and Pan, Xin and Li, Huapeng and Gardiner, Andy and Hare, Jonathon and Atkinson, Peter Michael (2018) An object-based convolutional neural network (OCNN) for urban land use classification. Remote Sensing of Environment, 216. pp. 57-70. ISSN 0034-4257

PDF (OCNN_Manuscript_RSE_Ce_Accepted.pdf) - Accepted Version
Available under License Creative Commons Attribution-NonCommercial-NoDerivs.


Urban land use information is essential for a variety of urban-related applications such as urban planning and regional administration. The extraction of urban land use from very fine spatial resolution (VFSR) remotely sensed imagery has, therefore, drawn much attention in the remote sensing community. Nevertheless, classifying urban land use from VFSR images remains a challenging task, due to the extreme difficulty of differentiating complex spatial patterns to derive high-level semantic labels. Deep convolutional neural networks (CNNs) offer great potential for extracting high-level spatial features, thanks to their hierarchical nature with multiple levels of abstraction. However, blurred object boundaries and geometric distortion, as well as huge computational redundancy, severely restrict the potential application of CNNs to the classification of urban land use. In this paper, a novel object-based convolutional neural network (OCNN) is proposed for urban land use classification using VFSR images. Rather than pixel-wise convolutional processes, the OCNN relies on segmented objects as its functional units, and CNN networks are used to analyse and label objects so as to partition within-object and between-object variation. Two CNN networks with different model structures and window sizes are developed to predict linearly shaped objects (e.g. Highway, Canal) and general (other, non-linearly shaped) objects. A rule-based decision fusion is then performed to integrate the class-specific classification results. The effectiveness of the proposed OCNN method was tested on aerial photography of two large urban scenes in Southampton and Manchester in Great Britain. The OCNN, combining large and small window sizes, achieved excellent classification accuracy and computational efficiency, consistently outperforming its sub-modules as well as other benchmark comparators, including pixel-wise CNN, contextual MRF and object-based OBIA-SVM methods.
The proposed method provides the first object-based CNN framework to effectively and efficiently address the complicated problem of urban land use classification from VFSR images.
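The class-specific routing described in the abstract can be illustrated with a minimal sketch: segmented objects are sent to one of two predictors depending on whether their shape is linear (e.g. Highway, Canal) or general, and the results are fused into a single label map. This is an illustrative assumption of the workflow only; the object representation, the elongation measure, the threshold, and the stand-in predictors below are all hypothetical and do not reproduce the actual trained CNNs or fusion rules of the paper.

```python
def elongation(bbox):
    """Ratio of the longer to the shorter side of an object's bounding box.

    bbox is (xmin, ymin, xmax, ymax); a crude proxy for 'linearly shaped'.
    """
    w = bbox[2] - bbox[0]
    h = bbox[3] - bbox[1]
    long_side = max(w, h)
    short_side = max(min(w, h), 1)  # avoid division by zero
    return long_side / short_side

def classify_object(obj, linear_cnn, general_cnn, threshold=3.0):
    """Route one segmented object to the shape-appropriate predictor.

    Objects whose bounding box is strongly elongated are treated as
    linearly shaped and handled by one CNN; all others go to the
    general-object CNN. The threshold of 3.0 is an arbitrary choice
    for illustration.
    """
    if elongation(obj["bbox"]) >= threshold:
        return linear_cnn(obj)
    return general_cnn(obj)

# Toy predictors standing in for the two trained CNNs.
linear_cnn = lambda obj: "Highway"
general_cnn = lambda obj: "Residential"

objects = [
    {"bbox": (0, 0, 100, 8)},   # long and thin -> routed as linear
    {"bbox": (0, 0, 40, 35)},   # compact       -> routed as general
]
labels = [classify_object(o, linear_cnn, general_cnn) for o in objects]
print(labels)  # ['Highway', 'Residential']
```

The point of the sketch is the per-object decision step: each object is labelled once by exactly the network suited to its shape, which is also why the approach avoids the per-pixel computational redundancy mentioned in the abstract.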

Item Type:
Journal Article
Journal or Publication Title:
Remote Sensing of Environment
Additional Information:
This is the author’s version of a work that was accepted for publication in Remote Sensing of Environment. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Remote Sensing of Environment, 216, 2018 DOI: 10.1016/j.rse.2018.06.034
Uncontrolled Keywords:
convolutional neural network, OBIA, urban land use classification, VFSR remotely sensed imagery, high-level feature representations, soil science, computers in earth sciences, geology
Deposited On:
25 Jun 2018 08:06
Last Modified:
27 Apr 2024 06:27