Guo, Yukang and Hazas, M. (2011) Localising speech, footsteps and other sounds using resource-constrained devices. In: Information Processing in Sensor Networks (IPSN), 2011 10th International Conference on. IEEE Press, pp. 330-341. ISBN 978-1-61284-854-9.
While a number of acoustic localisation systems have been proposed over the last few decades, these have typically either relied on expensive dedicated microphone arrays and workstation-class processing, or have been developed to detect a very specific type of sound in a particular scenario. However, as people live and work indoors, they generate a wide variety of sounds as they interact and move about. These human-generated sounds can be used to infer the positions of people, without requiring them to wear trackable tags. In this paper, we take a practical yet general approach to localising a number of human-generated sounds. Drawing from signal processing literature, we identify methods for resource-constrained devices in a sensor network to detect, classify and locate acoustic events such as speech, footsteps and objects being placed onto tables. We evaluate the classification and time-of-arrival estimation algorithms using a data set of human-generated sounds we captured with sensor nodes in a controlled setting. We show that despite the variety and complexity of the sounds, their localisation is feasible for sensor networks, with typical accuracies of a half metre or better. We specifically discuss the processing and networking considerations, and explore the performance trade-offs which can be made to further conserve resources.
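The time-of-arrival localisation the abstract describes can be illustrated with a minimal time-difference-of-arrival (TDOA) multilateration sketch. Everything below is an illustrative assumption rather than the paper's actual method: the room geometry, the four corner-mounted nodes, and the brute-force grid-search solver (a resource-constrained node would likely use a cheaper closed-form or iterative solver).

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def toa(source, sensors):
    """Time of arrival at each sensor for a sound emitted at `source`."""
    return [math.dist(source, s) / SPEED_OF_SOUND for s in sensors]

def locate(sensors, tdoa_meas, x_range, y_range, step=0.05):
    """Brute-force grid search: return the candidate point whose predicted
    TDOAs (relative to sensor 0) best match the measured ones."""
    best, best_err = None, float("inf")
    x = x_range[0]
    while x <= x_range[1]:
        y = y_range[0]
        while y <= y_range[1]:
            t = toa((x, y), sensors)
            err = sum((t[i] - t[0] - d) ** 2
                      for i, d in enumerate(tdoa_meas, start=1))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

# Simulated example: four nodes at the corners of a 5 m x 4 m room
# (hypothetical layout, not the paper's experimental setup).
sensors = [(0.0, 0.0), (5.0, 0.0), (5.0, 4.0), (0.0, 4.0)]
true_source = (2.3, 1.7)
t = toa(true_source, sensors)
tdoa_meas = [ti - t[0] for ti in t[1:]]  # differences relative to sensor 0
estimate = locate(sensors, tdoa_meas, (0.0, 5.0), (0.0, 4.0))
```

With noiseless simulated TDOAs the grid search recovers the source to within the grid resolution; in practice the estimation error is dominated by time-of-arrival uncertainty from cross-correlating noisy microphone signals, which is what bounds the half-metre accuracies reported above.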
Item Type: Contribution in Book/Report/Proceedings
Uncontrolled Keywords: Algorithms; Design; Experimentation; Measurement; Performance
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Deposited On: 19 Jul 2012 17:17
Last Modified: 23 Jul 2014 15:16