The invention of the CMOS image sensor led to an explosion of digital photography and of the visual data that is captured, shared, and analyzed today. Yet the basic image sensor design has remained unchanged: a two-dimensional regular array of photodiodes whose on-board data processing is agnostic to the imaging application. In this talk, I will discuss my research on expanding the design space of a camera: sampling and processing additional dimensions of light such as angle and polarization; designing software-defined image sensors and imaging pipelines; and co-designing machine learning algorithms with novel computational cameras. In particular, I will discuss my work using Angle Sensitive Pixels (ASPs), photodiodes with integrated diffractive metal gratings, as a hardware platform for capturing additional dimensions of light. I will show how ASPs can recover high-resolution 4D spatio-angular rays of light, how they can be combined with time-of-flight imaging for robust depth mapping, and how they can even optically compute the first layer of a convolutional neural network for energy-efficient deep learning. This research aims to reimagine our concept of a camera, co-designing the sensing along with our algorithms to yield better energy efficiency and novel imaging applications for robotics, human-computer interaction, and visual entertainment.
Suren Jayasuriya is a postdoctoral fellow at the Robotics Institute at Carnegie Mellon University. His research interests are in computational imaging, computer vision, and sensors. He obtained his PhD in ECE from Cornell University in January 2017. Before that, he graduated from the University of Pittsburgh in 2012 with a B.S. in Mathematics and a B.A. in Philosophy. He has received the NSF Graduate Research Fellowship in 2012, the Qualcomm Innovation Fellowship in 2015, the Cornell ECE Outstanding PhD TA Award in 2015, and the Best Paper Award at ICCP 2014.