Shen, Yiran and Hu, Wen and Liu, Junbin and Yang, Mingrui and Wei, Bo and Chou, Chun Tung (2012) Efficient background subtraction for real-time tracking in embedded camera networks. In: SenSys 2012: Proceedings of the 10th ACM Conference on Embedded Network Sensor Systems. Association for Computing Machinery (ACM), pp. 295-308. ISBN 9781450311694
Full text not available from this repository.

Abstract
Background subtraction is often the first step of many computer vision applications. For a background subtraction method to be useful in embedded camera networks, it must be both accurate and computationally efficient because of the resource constraints on embedded platforms. Many traditional background subtraction algorithms are unsuitable for embedded platforms because they rely on complex statistical models to handle subtle illumination changes; these models make them accurate, but their computational requirements are often too high for embedded platforms. In this paper, we propose a new background subtraction method that is both accurate and computationally efficient. The key idea is to use compressive sensing to reduce the dimensionality of the data while retaining most of the information. Using multiple datasets, we show that the accuracy of our proposed background subtraction method is comparable to that of traditional background subtraction methods. Moreover, a real implementation on an embedded camera platform shows that our proposed method is at least 5 times faster, and consumes significantly less energy and memory, than conventional approaches. Finally, we demonstrate the feasibility of the proposed method by implementing and evaluating an end-to-end real-time target tracking application on an embedded camera network.
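The core idea stated in the abstract, reducing the dimensionality of each frame via compressive sensing and then comparing frames against a background model in the compressed domain, can be illustrated with a minimal sketch. The code below is not the authors' exact formulation: it assumes a random Gaussian measurement matrix, whole-frame projection, and a simple residual threshold, and the function names (make_measurement_matrix, compress, detect_foreground) and parameter values are hypothetical choices for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_measurement_matrix(m, n, rng):
        # Random Gaussian measurement matrix (a common compressive-sensing choice;
        # the paper's actual sensing matrix may differ).
        return rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

    def compress(frame, phi):
        # Project a flattened frame into the low-dimensional measurement space.
        return phi @ frame.reshape(-1)

    def detect_foreground(y, y_bg, threshold):
        # Compare compressed measurements of the current frame against the
        # compressed background model; a large residual suggests foreground.
        return np.linalg.norm(y - y_bg) > threshold

    # Toy usage: 32x32 grayscale frames compressed to 64 measurements.
    n_pixels = 32 * 32
    m = 64
    phi = make_measurement_matrix(m, n_pixels, rng)

    background = rng.random((32, 32))       # stand-in for a learned background frame
    frame = background.copy()
    frame[10:20, 10:20] += 0.8              # synthetic target entering the scene

    y_bg = compress(background, phi)
    y = compress(frame, phi)
    print(detect_foreground(y, y_bg, threshold=1.0))  # True: residual exceeds threshold

Because the measurement matrix maps each frame from n_pixels values down to m measurements, the per-frame comparison cost scales with m rather than the full image size, which is the source of the computational savings the abstract describes.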