Optical Science and Engineering ETDs


Rakesh Kumar

Publication Date



The capability to achieve three-dimensional (3D) imaging in a single snapshot is a highly coveted goal for the imaging community. Extending the depth of field while simultaneously encoding depth in the point spread function (PSF) itself enables an imaging solution that is less time consuming and less data intensive than conventional multiscan-based 3D imaging. Light-field cameras also achieve 3D imaging in a snapshot, but at the cost of greatly reduced resolution. Phase-mask-based depth-encoding solutions have been proposed by other groups, but they all suffer from a relatively small depth of field. Recently there has been much progress in the field of live-cell imaging, where intracellular molecules have been imaged with nanometer (nm) resolution. Our rotating point spread function (RPSF) imager allows for nm resolution in all three dimensions in a single snapshot over a much larger axial field depth. This will indirectly yield much better temporal resolution by means of 3D video sequences for studying the dynamics of protein molecules inside a cell. We have shown how to implement compressive-sensing tools to improve the temporal resolution further.
Reconstruction of extended 3D objects is much harder for techniques based on pupil-plane coding, including our RPSF imager, but point sources are easily localized and resolved in 3D by such techniques. Based on this observation, we have proposed, developed, and analyzed a new 3D shape-acquisition technique for non-self-luminous objects using external laser point illumination. This technique, which we call Shape Recovery by Point Illumination (ShaRPI), uses multiple frames to illuminate the 3D object surface via arrays of well-separated laser spots, one array per frame. Since the tight laser spots may be regarded as point sources, they can be well localized in 3D by the RPSF imager, frame by frame.
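The depth-from-rotation principle behind the RPSF imager can be illustrated with a minimal sketch: a two-lobe PSF rotates with defocus, so the orientation of the lobe pair encodes axial position. All names, the synthetic two-lobe image, and the linear angle-to-depth calibration below are illustrative assumptions, not the dissertation's actual calibration, which would be measured for the real phase mask.

```python
import numpy as np

def lobe_angle(img):
    """Estimate the rotation angle of a two-lobe (rotating) PSF image.

    Uses intensity-weighted second moments: the principal axis of the
    intensity distribution tracks the orientation of the lobe pair.
    """
    y, x = np.indices(img.shape)
    w = img / img.sum()
    cx, cy = (w * x).sum(), (w * y).sum()
    mxx = (w * (x - cx) ** 2).sum()
    myy = (w * (y - cy) ** 2).sum()
    mxy = (w * (x - cx) * (y - cy)).sum()
    return 0.5 * np.arctan2(2 * mxy, mxx - myy)  # radians, modulo pi

def angle_to_depth(theta, slope_um_per_rad=1.0, z0=0.0):
    """Map PSF rotation angle to axial depth via an assumed linear
    calibration; a real RPSF calibration curve is measured and only
    approximately linear over the design range."""
    return z0 + slope_um_per_rad * theta

def two_lobe_image(theta, n=33, sep=6.0, sigma=2.0):
    """Synthetic two-lobe PSF whose lobe pair is rotated by theta."""
    y, x = np.indices((n, n)) - (n - 1) / 2
    dx, dy = sep * np.cos(theta), sep * np.sin(theta)
    g = lambda x0, y0: np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return g(dx, dy) + g(-dx, -dy)

img = two_lobe_image(theta=0.4)
est = lobe_angle(img)
print(round(est, 2))  # ≈ 0.4
```

In practice each laser spot (or fluorescent molecule) yields one such PSF image, so a single snapshot gives a 3D localization per spot.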
The smooth object surface can then be reconstructed in 3D from such point-illumination localization estimates with high accuracy in an appropriate basis, such as a wavelet basis, that exploits the sparsity resulting from the smoothness of the surface shape. In this way we can also achieve improved temporal resolution by tailoring the illumination frames and limiting their number, subject only to the desired degree of spatial resolution, so that the 3D spatial information is acquired efficiently in time. As future work, a phase-engineered approach for joint polarization and 3D localization estimation is under way.
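The surface-reconstruction step can be sketched as a sparse-basis least-squares fit: scattered (x, y, z) localizations of the laser spots are fit with a small number of smooth basis functions. The sketch below uses a truncated 2D cosine basis as a stand-in for the wavelet basis named above, and a synthetic surface in place of real ShaRPI data; all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scattered 3D localizations (x, y, z) of laser spots on a smooth surface.
# In ShaRPI these would come from frame-by-frame RPSF localization.
m = 400
xs, ys = rng.uniform(0, 1, m), rng.uniform(0, 1, m)
surface = lambda x, y: 0.3 * np.cos(np.pi * x) + 0.2 * np.cos(2 * np.pi * y)
zs = surface(xs, ys) + rng.normal(0, 0.01, m)  # localization noise

# Low-order 2D cosine basis: surface smoothness means only a few
# coefficients are significant, which is the sparsity the text exploits.
K = 5
def design(x, y):
    cols = [np.cos(np.pi * j * x) * np.cos(np.pi * k * y)
            for j in range(K) for k in range(K)]
    return np.stack(cols, axis=1)

A = design(xs, ys)                      # m x K^2 design matrix
coef, *_ = np.linalg.lstsq(A, zs, rcond=None)

# Evaluate the reconstruction on a grid and check the error.
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
z_hat = design(gx.ravel(), gy.ravel()) @ coef
err = np.abs(z_hat - surface(gx.ravel(), gy.ravel())).max()
print(f"max reconstruction error: {err:.3f}")
```

With fewer significant coefficients than localization samples, the fit remains accurate even when the number of illumination frames, and hence spots, is reduced, which is the mechanism behind the temporal-resolution gain described above.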

Degree Name

Optical Science and Engineering

Level of Degree


Department Name

Optical Science and Engineering

First Advisor

Prasad, Sudhakar

First Committee Member (Chair)

Lidke, Keith

Second Committee Member

Hayat, Majeed

Third Committee Member

Duan, Huaiyu

Document Type