Abstract

Light detection and ranging (LIDAR) systems form depth maps by combining time-of-flight (TOF) measurement with raster scanning of the scene, while TOF cameras instead make TOF measurements in parallel using an array of sensors. Here we present a framework for depth map acquisition that uses neither raster scanning by the illumination source nor an array of sensors. Our architecture uses a spatial light modulator (SLM) to spatially pattern a temporally modulated light source. Measurements from a single omnidirectional sensor then provide adequate information for depth map estimation at a resolution equal to that of the SLM. Proof-of-concept experiments have verified the validity of our modeling and algorithms.
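The measurement principle behind this architecture can be illustrated with a simplified numerical sketch. The sketch below is not the paper's actual estimation algorithm; it assumes a discretized set of TOF bins, random binary SLM patterns, and a least-squares inversion, all chosen for illustration. Each pattern illuminates a subset of scene pixels, the single sensor records the summed time-resolved return, and with enough patterns the per-pixel depth bins can be recovered from the stacked measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16  # SLM pixels (a 4x4 scene, for illustration)
T = 8   # discretized time-of-flight bins
M = 32  # number of SLM illumination patterns

# Hypothetical scene: each pixel has a depth bin and a reflectivity.
depth = rng.integers(0, T, size=N)
refl = rng.uniform(0.5, 1.0, size=N)

# X[i, t] holds pixel i's reflectivity in its return-time bin t.
X = np.zeros((N, T))
X[np.arange(N), depth] = refl

# Random binary SLM patterns; the single omnidirectional sensor
# records the sum of returns from all illuminated pixels:
#   Y[m, t] = sum_i P[m, i] * X[i, t]
P = rng.integers(0, 2, size=(M, N)).astype(float)
Y = P @ X

# Invert the linear measurement model by least squares, then read
# off each pixel's depth as its strongest time bin.
X_hat, *_ = np.linalg.lstsq(P, Y, rcond=None)
depth_hat = X_hat.argmax(axis=1)
```

With M > N noiseless measurements the system is overdetermined and the depths are recovered exactly; the interest of the approach is that structure in natural depth maps permits recovery from far fewer patterns than pixels.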

© 2012 Optical Society of America
