Photon-counting compressive sensing laser radar for 3D imaging

G. A. Howland,* P. B. Dixon, and J. C. Howell
Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627, USA
*Corresponding author: [email protected]

Received 23 May 2011; revised 16 August 2011; accepted 17 August 2011; posted 17 August 2011 (Doc. ID 147905); published 20 October 2011

Applied Optics, Vol. 50, No. 31, pp. 5917–5920 (1 November 2011)

We experimentally demonstrate a photon-counting, single-pixel laser radar camera for 3D imaging, where transverse spatial resolution is obtained through compressive sensing without scanning. We use this technique to image through partially obscuring objects, such as camouflage netting. Our implementation improves upon pixel-array-based systems with a compact, resource-efficient design and highly scalable resolution. © 2011 Optical Society of America

OCIS codes: 280.3640, 100.6890, 110.3010, 110.3080.

The use of lasers for ranging (lidar) has greatly improved spatial and longitudinal resolution in ranged detectors [1]. Traditional lidar systems are single-pixel devices that obtain transverse resolution via scanning. In the past decade, there has been much interest in replacing scanning with spatially resolving detectors to produce ranged cameras [2–4]. These devices rapidly acquire three-dimensional images and utilize range gating to reveal objects obscured behind foliage or other camouflaging materials. The primary challenge is developing detectors with useful spatial resolution, high sensitivity, and fast timing. The most successful approach to date is the use of arrays of avalanche photodiodes operating in the spirit of a CCD camera. A variety of high resolution systems have been brought to market with linear-mode avalanche photodiode (APD) arrays [5]. For best performance, however, it is desirable to instead operate the APDs in Geiger mode to count discrete, single-photon arrivals. These photon-counting detectors have single-photon precision and sensitivity that approaches the shot noise limit, with subnanosecond timing. Such an array was developed for the state-of-the-art Jigsaw system created at MIT Lincoln Labs [6], which has been field tested with impressive results. The Jigsaw sensor consists of a

32 × 32 array of APDs detecting single-photon arrivals in a time-of-flight (TOF) lidar configuration. While single Geiger-mode APDs are well developed, high resolution arrays are difficult to fabricate and remain primarily a research subject. As such, they present pragmatic difficulties. The highest resolution commercially available sensor is only 32 × 32 pixels, with 32 × 128 in development [7,8]. Lincoln Labs has reported up to 64 × 256 pixels [9,10]. Jigsaw must incorporate prism-based scanning to improve its resolution and field of view. Current arrays also have limited spectral range, with peak quantum efficiency in the mid-visible spectrum. For ranging, significant supporting equipment is required to correlate each pixel with illuminating pulses. Because individual pixels are small and optical flux must be distributed across the entire array, shot noise is significant. At present, such arrays are generally resource heavy in development and implementation.

We show that these difficulties can be resolved by applying single-pixel camera technology [11] to generate transverse spatial resolution. This technique, pioneered by Baraniuk, uses compressive sensing to detect images with a single detector [12]. Approximately 1/4 the measurements of a raster scanning system are required. Our system couples this scheme to a TOF, photon-counting lidar. The system scales easily to useful resolutions and operates at any wavelength with an available APD, including the


IR. In this paper, we present a proof-of-principle implementation with 64 × 64 pixel transverse resolution and 30 cm range resolution operating at 780 nm. The system is compact, inexpensive, and constructed entirely of off-the-shelf components. We demonstrate range-gated scene reconstruction and imaging through obscurants.

Compressive sensing (CS) is a measurement technique that employs optimization to detect a sparse n-dimensional signal with m < n measurements [13]. The approach is so named because the signal is effectively compressed during measurement. CS assumes that a signal of interest can be sparsely represented in a known basis. For imaging, typical examples are the bases of discrete cosines or Haar wavelets. A signal is most compactly represented by considering only significant elements in the sparse basis. Though sparsity is assumed, it is not known prior to measurement which elements contain appreciable amplitude. The measurement must determine both which elements are important and the values taken by their coefficients. The following description primarily follows Candès et al. [14] and Figueiredo et al. [15].

To measure a sparsely represented n-dimensional signal vector x, we construct an m-dimensional (m < n) measurement vector y by multiplying x by an m × n matrix A describing the measuring device. For notational simplicity, we express A and x in the sparse basis. Including a noise vector φ, the measurement process is written:

y = Ax + φ.  (1)
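As a concrete illustration, the measurement model of Eq. (1) can be simulated for a small scene. This is a minimal NumPy sketch, not the experimental code; the sizes n, m, and k are illustrative values, not the experimental parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256   # scene pixels (kept small here; the experiment uses 64 x 64)
m = 64    # number of DMD patterns, m < n

# Sparse scene: only k of the n pixels are bright
k = 8
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

# Each row of A is one binary DMD pattern (mirror ON = 1, OFF = 0),
# so each measurement sums the light from roughly half the pixels.
A = rng.integers(0, 2, size=(m, n)).astype(float)

# One bucket-detector reading per pattern, plus additive noise phi
# (standing in for dark counts and counting noise in the real device)
phi = rng.normal(0.0, 0.01, size=m)
y = A @ x + phi
```

Each of the m entries of y mixes information from many pixels at once, which is what allows the reconstruction to succeed with m < n measurements.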

Because m < n, a given y does not specify a unique x. CS formulates the following optimization problem to reconstruct x:

min_x (1/2)‖y − Ax‖₂² + τ‖x‖₁,  (2)

where ‖b‖ₚ represents the ℓₚ norm of b and τ is a constant scalar weighting the relative strength of the two terms. The first term of Eq. (2) is small when x is consistent with Eq. (1); it requires the optimal x to match the measured data to within a small error. In an ideal noiseless system, this term is exactly zero. The second term of Eq. (2) is small when x is sparse. Statistically, far more nonsparse than sparse x satisfy Eq. (1); assuming a sparse signal, this term selects the most physically meaningful x.

A suitable measurement matrix A must be used. Because only m measurements are taken, each measurement basis, represented by a row of A, must acquire information about many elements of x. Rows of A should therefore be incoherent with the sparse basis [16]. Even a completely random A works well, as random vectors are generally incoherent with useful basis sets. Incredibly, this actually


improves on measuring in a basis of interest, such as the pixel basis. The number of measurements required to reconstruct a particular signal depends on the signal sparsity, the amount of noise, and the reconstruction algorithm. A well-established empirical rule of thumb for ℓ₁ minimization, consistent with our experience, requires m ≥ 4k ≈ n/4 for a good reconstruction, where k is the signal sparsity [14]. For detailed analysis see [16,17].

The experimental apparatus is given in Fig. 1. A function generator produces 2 ns square pulses at a repetition rate of 10 MHz. This signal drives a 780 nm laser diode with 50 mW peak power to illuminate a scene with objects at depths ranging from 0.3 to 2.8 m from the device. The scene is imaged onto a digital micromirror device (DMD) [18] through a 10 nm filter centered at 780 nm by a 38 mm lens focused to infinity with respect to the DMD. The DMD consists of a 1024 × 768 array of individually addressable mirrors. Each mirror can be switched from an 'ON' state, where light is retro-reflected, to an 'OFF' state that directs light out of the system. This multiplies the imaged scene by a binary pattern placed on the array. Light retro-reflecting off the DMD is focused onto a Geiger-mode APD. Its output is correlated with the original pulse signal to produce a histogram of single-photon arrival times. Peaks in the timing histogram indicate objects at different distances. The use of a photon-counting detector limits detector noise to dark counts and the Poissonian uncertainty in the field.

To detect an image, m random 64 × 64 patterns are sequentially placed on the DMD, where each pattern corresponds to a row of A. For each pattern, a timing histogram is recorded with acquisition time tₐ. The measurement vector y is constructed by integrating over depths of interest in the histogram. To view the entire range, the entire histogram is used.
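Building one entry of y by integrating the histogram over a depth window can be sketched as follows. The helper names, bin layout, and the rounded value c ≈ 0.3 m/ns are illustrative assumptions, not the authors' code; the t = d/c time-of-flight convention follows the text:

```python
import numpy as np

C = 0.3  # approximate speed of light in m/ns (illustrative rounding)

def gate_counts(bin_times_ns, counts, d1, d2):
    """Sum histogram counts with arrival times between t1 = d1/c and
    t2 = d2/c, giving one range-gated measurement. bin_times_ns holds
    the arrival time (in ns) associated with each histogram bin."""
    t1, t2 = d1 / C, d2 / C
    mask = (bin_times_ns >= t1) & (bin_times_ns <= t2)
    return counts[mask].sum()

def build_y(histograms, bin_times_ns, d1, d2):
    """One range-gated entry of y per DMD pattern: histograms is an
    (m, n_bins) array holding one timing histogram per pattern."""
    return np.array([gate_counts(bin_times_ns, h, d1, d2)
                     for h in histograms])
```

Choosing (d1, d2) to bracket a single histogram peak isolates one object; widening the window to span all peaks reproduces a conventional, non-ranging measurement.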
To view objects that fall between depths d₁ and d₂, only arrivals between t₁ = d₁/c and t₂ = d₂/c are considered. The measured y and A are input into reconstruction software. To solve Eq. (2), we use the gradient projection algorithm of Figueiredo et al., GPSR 6.0 [15]. Haar wavelets are used as the sparse basis. The reconstructed x is transformed to the pixel basis for viewing, with brightness and contrast adjustments. This has no effect on the reconstruction's information content.
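A rough, self-contained stand-in for a gradient-projection solver such as GPSR is the iterative shrinkage-thresholding algorithm (ISTA), which minimizes the same objective as Eq. (2). This NumPy sketch is not the authors' GPSR implementation, only an illustration of the optimization being solved:

```python
import numpy as np

def ista(A, y, tau, n_iter=1000):
    """Minimize (1/2)*||y - A x||_2^2 + tau*||x||_1 by iterative
    shrinkage-thresholding (a simple stand-in for GPSR)."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)       # gradient of the quadratic data term
        z = x - grad / L               # gradient descent step
        # Soft threshold: the proximal step for the l1 penalty,
        # which drives small coefficients exactly to zero
        x = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)
    return x
```

For a sparse signal and an incoherent A, the iterate converges toward a sparse x consistent with the data, mirroring the trade-off between the two terms of Eq. (2) set by τ.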

Fig. 1. (Color online) Experimental setup for ranged imaging.

[Figure 2: panels (a)–(c), reconstructions; panel (d), timing histogram with counts/sec on the vertical axis (0–400) and 10–35 ns on the horizontal axis.]

Fig. 2. (Color online) Reconstructions for objects 'U' and 'R' at depths 1.75 m and 2.10 m. (a) and (b) consider only 'U' and 'R', respectively, while (c) considers a range including both. Timing histogram (d) peaks represent 'U', 'R', and the room wall in left-to-right order.

The test system is compact, requiring approximately one square foot of optical table space. With minimal engineering, a second-generation device should fit into an enclosure similar to that of an SLR camera. Device size for a particular application should be competitive with any array-based instrument. Note that CS is only used to generate transverse spatial resolution. Range information is obtained identically to traditional TOF lidar and is unaffected by the CS reconstruction. We used a TOF scheme for simplicity, but in principle any common ranging technique could be used. Additionally, the photon-counting detector allows for accurate, subnanosecond timing. Analysis of the single-pixel camera noise performance has shown a significant improvement over raster scanning, and is competitive with array-based designs [11].

We considered several scenes to demonstrate the camera's capabilities. Figure 2 gives an example of a three-dimensional reconstruction. The scene consisted of two objects, cardboard cutouts of the letters 'U' and 'R' placed 1.75 m and 2.10 m from the camera, respectively. A typical arrival-time histogram is given in Fig. 2(d), with the left-most peak corresponding to the 'U' and the center peak corresponding to the 'R'. The right-most peak indicates the room wall. 1500 random patterns were used with tₐ = 500 ms to generate the 64 × 64 reconstructions given in Figs. 2(a)–2(c).

In Figs. 2(a) and 2(b), the measurement vector y consists of values within the full width of the histogram peaks corresponding to only the 'U' and 'R' objects, respectively. Alternately, the measurement vector for Fig. 2(c) encompasses both peaks, effectively ignoring the range information. The 'U' and 'R' are distinguished despite a longitudinal separation of less than 60 cm. Given that the range peaks have a temporal width of approximately 2 ns, objects less than 30 cm apart could be resolved.

Figure 3 presents reconstructions of the 'U' object from Fig. 2(a) for various numbers of measurements m, ranging from 500 to 2000. An interesting benefit of CS is that it is progressive. Increasing the number of measurements improves the quality of the reconstruction, but information about the entire scene is revealed even for small m. This contrasts with scanning, where fewer measurements leave gaps in the signal.

We also considered the ability to image through obscuring material. A burlap sheet was suspended 1.4 m from the camera, sufficiently far to be located in the image plane. This sheet filled the camera's entire field of view. Cardboard cutouts of the letters 'UR' were placed behind the burlap, 2.1 m from the camera. 2000 DMD patterns were used with tₐ = 4 s. Reconstructions are given in Figs. 4(a) and 4(b), with a representative timing histogram in Fig. 4(d). The burlap manifests as the strong left-most peak, with the 'UR' object appearing as the center peak. Again, the right-most peak is the room wall and is not considered. For comparison, a 400 × 400 pixel image taken with a conventional CCD camera is given in Fig. 4(c).

In the first reconstruction, Fig. 4(a), the range of interest includes both the burlap and the object of interest. This corresponds to ignoring range information, as in a conventional camera. The second result, Fig. 4(b), only considers the range containing the object to pierce the burlap cover. Despite being completely obscured to the nonranging cameras, our system reveals the object.
Particularly impressive is the fact that the signal returned by the object is about 12 times weaker than the signal returned by the burlap.

In conclusion, we experimentally demonstrate a compact, resource-light, photon-counting lidar camera that obtains transverse resolution via compressive sensing without scanning. We produce 64 × 64 pixel images with a depth resolution of 30 cm, and show three-dimensional reconstruction and imaging through obscuring material. This system demonstrates the potential for high resolution, photon-counting ranged cameras constructed from inexpensive, off-the-shelf components.


Fig. 3. Reconstructions of object 'U' for increasing measurement number m (500, 750, 1000, 1500, and 2000). An increase in m causes a corresponding increase in reconstruction quality, but information about the entire scene is obtained even for small m, demonstrating the progressive nature of CS.


[Figure 4: panels (a)–(c), images; panel (d), timing histogram with counts/sec on the vertical axis (0–3000) and 10–35 ns on the horizontal axis.]

Fig. 4. (Color online) 64 × 64 reconstructions for an object 'UR' obscured by burlap camouflage material. (a) includes both the object and the obscurant, while (b) includes the object only. (c) gives a conventional CCD picture. Timing histogram (d) peaks represent the burlap, the 'UR' object, and the room wall in left-to-right order.

This work was supported by Defense Advanced Research Projects Agency Defense Sciences Office (DARPA DSO) InPho grant W911NF-10-1-0404, United States Army Research Office (USARO) MURI grant W911NF-05-1-0197, and Department of Defense (DOD) PECASE award W900NF-05-01-0018.

References
1. T. J. Kane, W. J. Kozlovsky, R. L. Byer, and C. E. Byvik, "Coherent laser radar at 1.06 μm using Nd:YAG lasers," Opt. Lett. 12, 239–241 (1987).
2. M. A. Albota, "Three-dimensional imaging laser radars with Geiger-mode avalanche photodiode arrays," Lincoln Lab. J. 13, 351–367 (2002).
3. B. W. Schilling, D. N. Barr, G. C. Templeton, L. J. Mizerka, and C. W. Trussell, "Multiple-return laser radar for three-dimensional imaging through obscurations," Appl. Opt. 41, 2791–2799 (2002).
4. J. Degnan, R. Machan, E. Leventhal, D. Lawrence, G. Jodor, and C. Field, "Inflight performance of a second-generation photon-counting 3D imaging lidar," Proc. SPIE 6950, 695007 (2008).


5. For example, see www.advancedscientificconcepts.com or www.selex-comms.com.
6. R. M. Marino and W. R. Davis, Jr., "Real-time 3D ladar imaging," Lincoln Lab. J. 15, 23–35 (2005).
7. V. C. Coffey, "Seeing in the dark: defense applications of IR imaging," Opt. Photon. News 22(4), 27–31 (2011).
8. M. A. Itzler, M. Entwistle, M. Owens, K. Patel, X. Jiang, K. Slomkowski, S. Rangwala, P. F. Zalud, T. Senko, J. Tower, and J. Ferraro, "Geiger-mode avalanche photodiode focal plane arrays for three-dimensional imaging ladar," Proc. SPIE 7808, 78080C (2010).
9. G. Smith, J. Donnelly, K. McIntosh, E. Duerr, D. Shaver, S. Verghese, J. Funk, L. Mahoney, K. Molvar, D. Chapman, and D. Oakley, "Reliable large format arrays of Geiger-mode avalanche photodiodes," in 20th International Conference on Indium Phosphide and Related Materials (IPRM 2008) (2008), pp. 1–3.
10. A. McIntosh, "Arrays of Geiger-mode avalanche photodiodes for ladar and laser communications," in Applications of Lasers for Sensing and Free Space Communications (Optical Society of America, 2010), p. LSWC1.
11. M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk, "Single-pixel imaging via compressive sampling," IEEE Signal Process. Mag. 25, 83–91 (2008).
12. R. Baraniuk, "Compressive sensing [lecture notes]," IEEE Signal Process. Mag. 24, 118–121 (2007).
13. D. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory 52, 1289–1306 (2006).
14. E. Candès and M. Wakin, "An introduction to compressive sampling," IEEE Signal Process. Mag. 25, 21–30 (2008).
15. M. Figueiredo, R. Nowak, and S. Wright, "Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems," IEEE J. Sel. Top. Signal Process. 1, 586–597 (2007).
16. E. Candès and J. Romberg, "Sparsity and incoherence in compressive sampling," Inverse Probl. 23, 969–985 (2007).
17. E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory 52, 489–509 (2006).
18. D. Dudley, W. M. Duncan, and J. Slaughter, "Emerging digital micromirror device (DMD) applications," Proc. SPIE 4985, 14–25 (2003).
