Salient object detection based on super-pixel clustering and unified low-rank representation

Zhang, Qiang and Liu, Yi and Liu, Siyang and Han, Jungong (2017) Salient object detection based on super-pixel clustering and unified low-rank representation. Computer Vision and Image Understanding, 161. pp. 51-64. ISSN 1077-3142

Full text not available from this repository.

Abstract

In this paper, we present a novel salient object detection method that efficiently combines Laplacian sparse subspace clustering (LSSC) and unified low-rank representation (ULRR). Unlike traditional low-rank matrix recovery (LRMR) based saliency detection methods, which extract saliency mainly from pixels or super-pixels, our method performs saliency detection on super-pixel clusters generated by LSSC. By doing so, it succeeds in extracting large salient objects from cluttered backgrounds, in contrast to most existing work, which is better suited to detecting small salient objects against simple backgrounds. The algorithm proceeds in two stages: region clustering and cluster saliency detection. In the first stage, the input image is segmented into super-pixels, which are then grouped into clusters using LSSC. Each cluster contains multiple super-pixels with similar features (e.g., colors and intensities) and may correspond to part of a salient object in the foreground or to a local region of the background. In the second stage, we formulate the saliency detection of each super-pixel cluster as a unified low-rankness and sparsity pursuit problem using a ULRR model, which integrates a Laplacian regularization term on the sparse error matrix into the traditional low-rank representation (LRR) model. The whole model rests on a sensible cluster-consistency assumption: spatially adjacent super-pixels within the same cluster should have similar saliency values, similar representation coefficients, and similar reconstruction errors. In addition, we construct a primitive dictionary for the ULRR model from the local-global color contrast of each super-pixel. Building on this dictionary, a global saliency measure derived from the representation coefficients and a local saliency measure derived from the sparse reconstruction errors are jointly employed to define the final saliency. Comprehensive experiments on diverse publicly available benchmark data sets demonstrate the effectiveness of the proposed method.
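
As a rough guide for readers less familiar with the LRR framework mentioned in the abstract, the following is a minimal sketch of how the ULRR objective might be written; the specific norms, the trade-off weights, and the exact form of the Laplacian term are assumptions made for illustration rather than the paper's own formulation.

% Classical low-rank representation (LRR) over a dictionary D, with super-pixel
% features stacked column-wise in the data matrix X:
%   \min_{Z,E} \|Z\|_{*} + \lambda \|E\|_{2,1} \quad \text{s.t.} \quad X = DZ + E
%
% A plausible ULRR variant adds a graph-Laplacian term on the sparse error
% matrix E, encoding the cluster-consistency assumption that spatially adjacent
% super-pixels within a cluster share similar reconstruction errors:
\begin{equation}
  \min_{Z,E} \; \|Z\|_{*} + \lambda\,\|E\|_{2,1}
  + \beta\,\operatorname{tr}\!\bigl(E L E^{\top}\bigr)
  \quad \text{s.t.} \quad X = DZ + E,
\end{equation}
% where L is the Laplacian of a super-pixel adjacency graph and \lambda, \beta
% are trade-off parameters (both assumed here, not taken from the paper).

Under this reading, the global saliency measure would be derived from the representation coefficients Z and the local saliency measure from the sparse reconstruction errors E, consistent with the description in the abstract.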

Item Type:
Journal Article
Journal or Publication Title:
Computer Vision and Image Understanding
Uncontrolled Keywords:
/dk/atira/pure/subjectarea/asjc/1700/1711
Subjects:
Salient object detection; Laplacian sparse subspace clustering; Unified low-rank representation; Primitive saliency dictionary construction; Super-pixel cluster; Signal Processing; Software; Computer Vision and Pattern Recognition
ID Code:
87888
Deposited By:
Deposited On:
20 Sep 2017 15:16
Refereed?:
Yes
Published?:
Published
Last Modified:
15 Jul 2024 17:12