Software-Defined Imaging for Energy-Efficient Visual Computing
From augmented reality and autonomous vehicles to computational photography and the Internet of Things, applications built on real-time, high-quality visual sensing are becoming central to everyday computing. All of these applications exhibit two competing demands: complex, domain-specific sensing capabilities on one hand and energy efficiency on the other. Energy constraints, along with stagnant battery technology, are holding back advances in camera-driven capabilities: for example, running continuous face detection on a Google Glass unit exhausts its battery in just 45 minutes. Within a tight power budget, applications cannot exploit the advanced sensing capabilities they depend on, such as extreme dynamic range, low latency, high resolution, and high frame rate.
The proposed work aims to satisfy all of these application demands through hardware–software co-design of image sensing systems. We envision mobile platforms that can flexibly adapt to the sensing demands of current and future visual applications. We propose a set of interlocking research projects, from hardware prototyping to language design, to support end-to-end applications that deeply customize camera behavior. Current software-defined image sensors primarily control sensor knobs and expose metadata through APIs (application programming interfaces). In contrast, we will design new mixed-signal image sensor hardware that exposes a new set of configuration parameters to software, computational architectures that efficiently process this new visual data, and operating system support for interacting with the new hardware. Using these new tools, applications will enjoy improved trade-offs between energy efficiency and imaging quality metrics such as frame rate, resolution, and dynamic range. We will evaluate our designs by implementing important use cases from across visual computing, with an emphasis on real energy measurement and performance characterization.
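To make the energy/quality trade-off concrete, the sketch below illustrates the *kind* of per-region configuration a software-defined sensor might expose. This is purely hypothetical: the class and function names (`RegionConfig`, `estimate_readout_cost`) and the pixel-count cost proxy are our illustrative assumptions, not the proposal's actual interface.

```python
from dataclasses import dataclass

# Hypothetical sketch: per-region capture parameters that software
# might set on a configurable image sensor. All names and the cost
# model here are illustrative assumptions, not the proposed design.

@dataclass
class RegionConfig:
    """Capture parameters for one rectangular tile of the sensor."""
    x: int              # tile origin (pixels)
    y: int
    width: int          # tile size (pixels)
    height: int
    exposure_us: int    # per-region exposure time (microseconds)
    subsample: int      # 1 = full resolution, 2 = every other pixel, ...

def estimate_readout_cost(regions):
    """Crude energy proxy: total pixels actually read out."""
    return sum((r.width // r.subsample) * (r.height // r.subsample)
               for r in regions)

# A foveated capture: full resolution in a region of interest,
# coarse subsampling elsewhere, trading fidelity for energy.
roi = RegionConfig(x=512, y=384, width=256, height=256,
                   exposure_us=8000, subsample=1)
periphery = RegionConfig(x=0, y=0, width=1280, height=960,
                         exposure_us=8000, subsample=4)
print(estimate_readout_cost([roi, periphery]))  # 65536 + 76800 = 142336
```

The point of the sketch is only that exposing such knobs lets an application read far fewer pixels than a uniform full-resolution frame while preserving quality where it matters.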
Findings and Impact
This project is still ongoing, but it has led to several papers published by PI Jayasuriya's and Co-PI LiKamWa's labs, as well as the construction of functional embedded imaging systems.