Energy consumption models for smart-camera networks

 

[Abstract] [Validation experiments] [Sample results] [Downloads] [References]

This page provides additional material and software for the paper:

 

Energy consumption models for smart-camera networks [link]

J. SanMiguel and A. Cavallaro

IEEE Transactions on Circuits and Systems for Video Technology (accepted 2016)

 

Abstract

Camera networks require heavy visual-data processing and high-bandwidth communication. In this paper, we identify key factors underpinning the development of resource-aware algorithms and we propose a comprehensive energy consumption model for the resources employed by smart-camera networks, which are composed of cameras that process data locally and collaborate with their neighbours. We account for the main parameters that influence consumption when sensing (frame size and frame rate), processing (dynamic frequency scaling and task load) and communication (output power and bandwidth) are considered. Next, we define an abstraction based on clock frequency and duty cycle that accounts for the active, idle and sleep operational states. We demonstrate the importance of the proposed model for a multi-camera tracking task and show how one may significantly reduce consumption, with only minor performance degradation, by choosing to operate with an appropriately reduced hardware capacity. Moreover, we quantify the dependency on local computation resources and on bandwidth availability. The proposed consumption model can be easily adjusted to account for new platforms, thus providing a valuable tool for the design of resource-aware algorithms and further research in resource-aware camera networks.
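For intuition only, the following is a minimal sketch of a duty-cycle energy abstraction of the kind the abstract describes: a device spends a fraction of each period in the active state and the remainder in idle or sleep. The structure, state powers and duty-cycle values are assumptions for exposition and do not reproduce Eq. (4) of the paper.

```cpp
#include <iostream>

// Hypothetical duty-cycle energy abstraction: energy over a window is the
// duty-weighted sum of the per-state powers. All numbers are made up for
// exposition; this is not the model from the paper.
struct StatePower { double active_W, idle_W, sleep_W; };

double energyJ(const StatePower& p, double period_s,
               double dutyActive, double dutyIdle) {
    double dutySleep = 1.0 - dutyActive - dutyIdle;  // remaining fraction
    return period_s * (dutyActive * p.active_W +
                       dutyIdle   * p.idle_W   +
                       dutySleep  * p.sleep_W);
}

int main() {
    StatePower cpu{10.0, 2.0, 0.1};                  // assumed watts per state
    // Energy over a 60 s window at 40% active, 50% idle, 10% sleep:
    std::cout << energyJ(cpu, 60.0, 0.4, 0.5) << " J\n";
}
```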

 


Validation with power measurements

To validate our model, we performed a set of experiments using measurements obtained from the battery discharge of a smart-camera system running video applications on Ubuntu 14.04 (64-bit).

We emulated a high-end smart camera with the following devices (see Fig. 1):

·         a QuickCam Ultra Vision USB camera for sensing

·         a Toshiba Portégé R700 laptop (Intel Core i5-450M) for processing

·         an AC600 wireless dual-band USB adapter for communication.

Battery readings are obtained from the kernel system file /sys/class/power_supply/BAT0/uevent, from which the discharge over time is extracted every 5 seconds.
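A minimal sketch of such a poll, in the spirit of the released C++ programs but not identical to them, is given below. The field names are an assumption: many battery drivers report voltage in microvolts and current in microamperes via POWER_SUPPLY_VOLTAGE_NOW and POWER_SUPPLY_CURRENT_NOW, while others expose POWER_SUPPLY_POWER_NOW (microwatts) instead.

```cpp
#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>

// Polls the battery uevent file every 5 seconds and prints instantaneous
// power. Assumption: voltage/current are reported in microvolts/microamperes
// via these fields (driver-dependent).
int main() {
    const std::string path = "/sys/class/power_supply/BAT0/uevent";
    while (true) {
        std::ifstream f(path);
        std::string line;
        double uV = 0.0, uA = 0.0;
        while (std::getline(f, line)) {
            if (line.rfind("POWER_SUPPLY_VOLTAGE_NOW=", 0) == 0)
                uV = std::stod(line.substr(line.find('=') + 1));
            else if (line.rfind("POWER_SUPPLY_CURRENT_NOW=", 0) == 0)
                uA = std::stod(line.substr(line.find('=') + 1));
        }
        std::cout << uV * uA * 1e-12 << " W\n";      // uV * uA -> W
        std::this_thread::sleep_for(std::chrono::seconds(5));
    }
}
```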


Figure 1: Overview of the system used to collect power measurements from the battery discharge of a smart camera.

 

These experiments were divided into two phases:

1.     Acquisition of the measurements and model fitting with Eq. (4), using three UNIX C++ programs for sensing, processing and communication.

2.     Design and development of a video application for multi-target people tracking based on the upper-body parts of people [1], which transmits visual descriptors that can be used for people re-identification [2]. The visual analysis algorithms were implemented with the OpenCV library (a minimal detection-loop sketch is given after this list).
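For illustration only, the sketch below shows a per-frame upper-body detection loop of the kind such an application builds on; it is not the released application. The stock haarcascade_upperbody.xml model, the camera index and the timing output are assumptions; the per-frame time is a simple proxy for the task load that drives the processing-energy term.

```cpp
#include <opencv2/opencv.hpp>
#include <chrono>
#include <iostream>
#include <vector>

// Per-frame upper-body detection with OpenCV's stock Haar cascade,
// timing each frame to relate task load to processing energy.
int main() {
    cv::VideoCapture cap(0);                         // USB camera (sensing)
    cv::CascadeClassifier upperBody;
    if (!cap.isOpened() || !upperBody.load("haarcascade_upperbody.xml"))
        return 1;
    cv::Mat frame, gray;
    while (cap.read(frame)) {
        auto t0 = std::chrono::steady_clock::now();
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> bodies;
        upperBody.detectMultiScale(gray, bodies, 1.1, 3);
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::cout << bodies.size() << " detections, " << ms << " ms/frame\n";
    }
    return 0;
}
```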

 

A detailed description of the measurement protocol is available here.

 


Sample results

 

Selected results that can be extracted from the obtained measurements:

·         Sensing power (runs): [Active] [Idle] [Comparison]

·         Processing power (runs): [Active] [Idle] [Comparison]

·         Communication power (runs): [Comparison]

·         Video application (runs): [Active]

 

Other statistics that can be obtained from the extracted data (a sketch of how CPU utilization can be sampled follows this list):

·         Sensing utilization (runs): [Active]

·         Communication bitrate: [Active]

·         Video application: [CPU utilization] [Overall activation time]
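On Linux, CPU utilization of the kind linked above can be sampled from /proc/stat. The short sketch below is an assumption about how such a statistic can be computed over the same 5-second period as the battery readings; it is not taken from the released scripts.

```cpp
#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>
#include <utility>

// Reads the aggregate "cpu" line of /proc/stat and returns {busy, total}
// jiffies; utilization is the busy fraction between two samples.
static std::pair<long long, long long> cpuTimes() {
    std::ifstream f("/proc/stat");
    std::string cpu;
    long long user, nice, sys, idle, iowait, irq, softirq;
    f >> cpu >> user >> nice >> sys >> idle >> iowait >> irq >> softirq;
    long long busy = user + nice + sys + irq + softirq;
    return {busy, busy + idle + iowait};
}

int main() {
    auto a = cpuTimes();
    std::this_thread::sleep_for(std::chrono::seconds(5));  // 5 s period
    auto b = cpuTimes();
    double util = 100.0 * double(b.first - a.first) /
                  double(b.second - a.second);
    std::cout << "CPU utilization: " << util << " %\n";
}
```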

 

Comparison with related state-of-the-art models:

 


Figure 2: Energy consumption estimated by the proposed model and the utilization-based models. The top (black) curve corresponds to the ground-truth data obtained from battery discharge readings.



Downloads (Please cite this paper if you use this material)

The software employed in this section can be freely used for research purposes. [Github]

 

The UNIX programs employed and the measurements obtained:
(tested on Ubuntu 14.04 64-bit and OpenCV 3.1.0¹)

·         Software & bash scripts to execute the experiments: [github] [local copy] 1.8 MB (18/07/16)

·         Data measurements: [github] [local copy] 1.3 MB (18/07/16)

 

Matlab scripts to generate the figures presented in the paper: [github] [local copy] 8.5 MB (18/07/16)

 

¹ The OpenCV library can be obtained here.

 


References

[1] R. Vezzani, D. Baltieri, and R. Cucchiara, “People reidentification in surveillance and forensics: A survey,” ACM Computing Surveys (CSUR), vol. 46, no. 2, p. 29, 2013.

[2] R. Mazzon, S. Tahir, and A. Cavallaro, “Person re-identification in crowd,” Pattern Recognition Letters, vol. 33, no. 14, pp. 1828–1837, 2012.

[3] Y. Sharrab and N. Sarhan, “Aggregate power consumption modeling of live video streaming systems,” in ACM Conf. on Multimedia Syst. (MMSys), Feb. 2013, pp. 60–71.

[4] A. Wang and A. Chandrakasan, “Energy-efficient DSPs for wireless sensor networks,” IEEE Signal Process. Mag., vol. 19, no. 4, pp. 68–78, Jul. 2002.

[5] Z. He and D. Wu, “Resource allocation and performance analysis of wireless video sensors,” IEEE Trans. Circuits Syst. Video Technol., vol. 16, no. 5, pp. 590–599, May 2006.

[6] A. Redondi, D. Buranapanichkit, M. Cesana, M. Tagliasacchi, and Y. Andreopoulos, “Energy consumption of visual sensor networks: Impact of spatio-temporal coverage based on single-hop topologies,” IEEE Trans. Circuits Syst. Video Technol., vol. 24, no. 12, pp. 2117–2131, Dec. 2014.

 

 

© Video Processing & Understanding Lab (http://www-vpu.eps.uam.es/)