We present a high-speed single pixel flow imager based on an all-optical Haar wavelet transform of moving objects. Spectrally-encoded wavelet measurement patterns are produced by chirp processing of broad-bandwidth mode-locked laser pulses. A complete wavelet pattern set serially illuminates the object via a spectral disperser. This high-rate structured illumination transforms the scene into a set of sparse coefficients. We show that complex scenes can be compressed to less than 30% of their Nyquist rate by thresholding and storing the most significant wavelet coefficients. Moreover, by employing temporal multiplexing of the patterns, we are able to achieve pixel rates in excess of 360 MPixels/s.
© 2017 Optical Society of America
High-speed image capture, analysis, and characterization have transformed fields such as high-throughput microscopy and machine vision. Traditionally, image acquisition is performed in the electronic domain using CMOS image sensors or CCDs. Unfortunately these devices face two major limitations. First, the frame rate of array-based detectors is limited to a few MHz of continuous readout due to slow electronic data transfer rates. Second, pixel exposure time is a function of device charge time and cannot be reduced arbitrarily, which in turn results in image blurring and motion artifacts. Recent research has focused on alleviating these shortcomings by exploiting fiber-optic technologies. For example, imaging systems based on photonic time-stretch and serial readout have achieved real-time, continuous imaging at frame rates greater than 6 MHz. However, these systems inherently produce an enormous amount of data to be processed and stored, and while photonic time stretch is an extremely powerful tool for image capture, it does not take full advantage of the signal processing capabilities afforded in the analog domain by photonic technologies.
With the increased demand for imaging speed, the capture, storage, and processing of image data in the electronic domain present a serious bottleneck. By transferring some of these conventional signal processing tasks, such as buffering, digitization, transformation, and data compression, to the photonic domain, it is possible to achieve a significant reduction in the electronic workload. In particular, real-time linear transforms, among the most fundamental signal processing tasks, take a significant amount of processing power on CPUs and FPGAs. This limited electronic throughput has motivated many groups to investigate electro-optic alternatives. For instance, high-speed Hilbert transformation, short-time Fourier transformation, and wavelet transformation of RF and microwave signals were recently demonstrated using customized chirped fiber Bragg grating based microwave photonic filters. Additionally, the anamorphic transform [5,6] is another example of an all-optical processor that reduces the data bandwidth by introducing a nonlinear phase profile. Beyond these schemes, which are system specific and require custom made components, much research has focused on more flexible architectures. For example, a fully reconfigurable integrated photonic signal processor able to perform transient integration, differentiation, and Hilbert transformation on RF signals was recently demonstrated.
Compressed sensing (CS) is another area that has attracted much recent attention. Most of the early work in this domain was based on the single pixel camera incorporating digital micro-mirror devices (DMDs). Since then, CS has been applied to fields such as fluorescence microscopy, 3D imaging, hyperspectral imaging, and high-speed video acquisition. Recently we demonstrated a high-speed CS imager capable of imaging at 39.6 Gigapixels/s with the images compressed down to 2% of their original size. Despite the impressive acquisition speed and compression rate, reconstructing an image using CS requires time-consuming algorithms, which presents a challenge when real-time signal processing is desired.
An alternative single pixel imaging approach to CS is to acquire the image through a basis scan (BS). Unlike CS, the object is illuminated with a complete set of patterns associated with a particular basis (e.g., Fourier or wavelet). The image in these schemes is simply recovered using an efficient and fast inverse transform operation. Despite this straightforward reconstruction, data acquisition using BS can be slow. For instance, a recently reported BS imager that uses phase-shifting sinusoidal structured illumination for a Fourier scan took over 5 hours to capture the 120052 measurements required to reconstruct an image. In order to increase the sampling speed, Guo et al. recently reported an imaging flow cytometer based on high-speed electro-optic modulation of the Fourier basis functions. They used thresholding in order to achieve up to 10% image compression.
Fourier transformation is not the only domain that can benefit from a basis scan. Notably, BS can also be used for wavelet transformation. The wavelet transform is a signal processing tool that decomposes a given signal into its time-frequency or space-frequency components of varying resolution. Wavelets are commonly used for data compression and feature detection. Optical wavelet transforms have been applied to high-speed RF and microwave signals. Wavelets have also been implemented in 2D settings for image analysis [17,18]. Specifically, Freysz et al. demonstrated an optical wavelet transform of fractal aggregates. They used a free-space VanderLugt correlator with a multi-reference matched filter containing many daughter wavelets. Though this architecture transforms the object in real time, image capture and storage are still limited by the CMOS and CCD cameras.
Here we present a high-speed BS imaging system based on an all-optical Haar wavelet transform. To acquire the wavelet coefficients, we use ultrafast structured illumination through dispersive chirp processing of broadband laser pulses. Spectrally patterned pulses are spatially dispersed using a diffraction grating in order to create a 1D spectral shower across the imaging field of view. A temporal multiplexing scheme is implemented in order to achieve an eight-fold increase in measurement rate over the laser repetition rate. The vector inner product between the moving object and the spatially dispersed spectral patterns is acquired serially using a high-speed balanced photodetector. In addition to image acquisition, we demonstrate wavelet based image compression using this system.
2. Principle of Haar wavelet scan and compression
The wavelet operation transforms most natural signals into a sparse domain where most of the coefficients are near zero or insignificant. These small coefficients correspond to features at different time-frequency or space-frequency resolution that contribute very little to the signal under study. It is therefore possible to exploit wavelet sparsity and achieve data compression by storing the location and value associated with the largest coefficients. Similarly it is possible to identify the spatial or temporal coordinate of features such as edges or discontinuities in the signal by investigating the significant wavelet coefficients. Here we implement an all-optical Haar wavelet transform for image acquisition and compression. Image acquisition is based on a 1D basis scan of the object flowing through the field of view. A 2D image is then constructed by stacking up the 1D lines.
In wavelet transforms, various basis functions (i.e., daughter wavelets) associated with different resolution levels are commonly derived from what is known as the mother wavelet. In the Haar wavelet transform, the mother wavelet is a binary function defined as

ψ(t) = +1 for 0 ≤ t < 1/2,  −1 for 1/2 ≤ t < 1,  and 0 otherwise.    (1)
Daughter wavelets are obtained by weighting, time/space scaling, and shifting this mother wavelet,

ψ_{j,k}(t) = 2^{j/2} ψ(2^{j} t − k).    (2)

Temporal or spatial scales correspond to varying temporal or spatial resolutions, and the weights 2^{j/2} are introduced in order to maintain the orthogonality condition,

⟨ψ_{j,k}, ψ_{j′,k′}⟩ = δ_{jj′} δ_{kk′}.    (3)
As inferred from the above equation, an n-pixel signal is fully decomposed using n daughter wavelets. For example, Fig. 1 shows a conceptual block diagram of an 8-pixel optical Haar transform. Each coefficient is obtained by calculating the inner product between the corresponding basis function and the signal under study. As seen from Eq. (1), the mother wavelet is described by a positive (+1) step followed by a negative (−1) step function. We implement this bivariate function by modulating two consecutive laser pulses using a pulse pattern generator (PPG). After illuminating the scene, these pulses are overlapped in time and detected using a balanced photodetector in order to measure the wavelet coefficients. As an example, for an 8-pixel line-scan image we can express the image acquisition process as the matrix operation of Eq. (4), where the rows of the measurement matrix are the eight weighted daughter wavelets:

c = W₈ x,

W₈ = [ 1/√8 ( 1  1  1  1  1  1  1  1);
       1/√8 ( 1  1  1  1 −1 −1 −1 −1);
       1/2  ( 1  1 −1 −1  0  0  0  0);
       1/2  ( 0  0  0  0  1  1 −1 −1);
       1/√2 ( 1 −1  0  0  0  0  0  0);
       1/√2 ( 0  0  1 −1  0  0  0  0);
       1/√2 ( 0  0  0  0  1 −1  0  0);
       1/√2 ( 0  0  0  0  0  0  1 −1) ].    (4)

Here x is the 8-pixel line through the scene and c holds the eight weighted wavelet coefficients. In general an n-pixel transform comprises log₂(n) + 1 wavelet levels, where the basis functions at level j are weighted by the factor 2^{j/2} in order to maintain orthonormality. Image compression in our scheme is achieved by thresholding and storing the most significant subset of the weighted coefficients. These coefficients can then be inverse wavelet transformed to reconstruct the 1D line passing through the field of view. Figure 1 shows a conceptual representation of our ultrafast Haar wavelet transform. Every pulse pair, representing a single daughter wavelet, illuminates the scene. Reflected pulses are then overlapped in time and detected on a balanced photodiode in order to measure the inner product between the object and the bivariate Haar function. This process is repeated serially in order to generate the individual wavelet coefficients.
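As an illustrative numerical sketch (not part of the experimental signal chain), the 8-pixel acquisition of Eq. (4) can be simulated in a few lines of Python; the scene vector here is an arbitrary example:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar matrix (n must be a power of 2), rows ordered
    from the coarsest (scaling) function to the finest wavelets."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    return np.vstack([np.kron(h, [1.0, 1.0]),              # coarse rows
                      np.kron(np.eye(n // 2), [1.0, -1.0])]) / np.sqrt(2.0)

n = 8
W = haar_matrix(n)                 # rows play the role of the daughter wavelets
x = np.array([3.0, 3.0, 5.0, 5.0, 7.0, 1.0, 2.0, 2.0])  # toy 8-pixel line scan

# Basis scan: each coefficient is one inner product <wavelet, scene>,
# the quantity produced by one balanced-detected pulse pair.
c = W @ x

# Because W is orthonormal, reconstruction is a single matrix product.
x_rec = W.T @ c
assert np.allclose(x_rec, x)
```

Since W is orthonormal, applying its transpose both inverts the transform and illustrates why reconstruction requires no iterative solver.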
3. Experimental system
Figure 2(a) shows a detailed experimental diagram. The system functions by modulating wavelet patterns at a high rate onto the optical spectra of mode-locked laser pulses. Using a dispersion compensating fiber (DCF) with a total group velocity dispersion of −853 ps/nm and a dispersion slope of −2.92 ps/nm² at 1550 nm, 300-fs mode-locked laser pulses with a 90-MHz repetition rate are chirped to greater than 11 ns. Subsequently, an electro-optic intensity modulator (MZM) driven by a PPG at 11.52 Gb/s imparts up to 128 unique Haar wavelet pairs onto the spectra of a sequence of 256 laser pulses with a modulation depth in excess of 15 dB (Fig. 2(b)). The pulses are then time-compressed to a few ps using optical fiber with complementary dispersion. We use a programmable spectral filter (Finisar WaveShaper) in order to flatten the laser spectrum and equalize the optical power for all 128 spatial features that illuminate the object.
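These parameters can be checked against each other with a back-of-the-envelope calculation: the stretched-pulse duration is the dispersion times the optical bandwidth, and the number of spectral features per pulse is the PPG bit rate times that duration. In the short sketch below, the ~13-nm optical bandwidth is our assumed value (it is not stated above); the dispersion and bit rate are quoted in the text.

```python
# Consistency check of the chirp-processing numbers quoted in the text.
# The ~13-nm optical bandwidth is an assumption; the rest is from the text.
gvd_ps_per_nm = 853.0        # |GVD| of the DCF, ps/nm
bandwidth_nm = 13.0          # assumed mode-locked laser bandwidth, nm
bit_rate_gbps = 11.52        # PPG pattern rate, Gb/s

chirped_ns = gvd_ps_per_nm * bandwidth_nm / 1000.0  # stretched pulse duration
features = bit_rate_gbps * chirped_ns               # bits written across one pulse

print(f"chirped pulse ~ {chirped_ns:.1f} ns, features per pulse ~ {features:.0f}")
```

With the assumed bandwidth this reproduces the greater-than-11-ns chirped pulse and the 128 spectral features stated in the text.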
The spectrally patterned and compressed laser pulses pass through a spatial disperser wherein a diffraction grating and spherical lens map the optical spectrum to a 1D line at the object plane. Objects pass through the structured illumination in the disperser, and the scattered light returns through the diffraction grating back into an optical fiber. Each pair of returned pulses represents the vector inner product between a single Haar basis function and the scene. After passing through the circulator, these pulse pairs are overlapped in time and detected using a balanced photodetector. The output of the balanced photodetector is sampled using an analog-to-digital converter (ADC), with each BS measurement represented by a single ADC sample.
Image resolution in our setup depends on factors such as the pattern rate, fiber dispersion, and diffraction grating resolution, as well as the resolution of the imaging optics. Here we implement two realizations of this system in the form of a low magnification and a high magnification imager. The low magnification system is used to demonstrate the wavelet compression of scenes with varying degrees of complexity. Test objects for this setup were printed on a transparency, which was attached to the surface of a spinning hard disk. The low magnification disperser is composed of a 600-line/mm ruled diffraction grating and a 123-mm effective focal length spherical lens. This imaging system illuminates the field of view with a 1.1-mm × 5.4-µm line where each of the 128 features occupies an area of 8.5 µm × 5.4 µm. In contrast, the high magnification disperser employs the same grating with a 200-mm focal length spherical lens to form an intermediate structured illumination image before a 200-mm tube lens and a 50× near-IR microscope objective (Nikon Plan Fluor 50X, NA = 0.80). Large-area high-resolution optics are specifically chosen to allow diffraction limited imaging of the flowing objects in this high magnification realization.
3.1 Optical multiplexing
Basis scan is a Nyquist technique in which the reconstruction accuracy depends on the number of Haar pixels, n, as well as the object's flow velocity, V. In order to assess the reconstruction dependence on the flow rate, we carried out a numerical analysis of imaging performance versus the moving speed (Fig. 3). Simulations are done considering an object height of L = 1.5 mm in the flow direction. Assuming a pattern rate of f_p, the number of rows in the reconstructed image is given by:

N = (L / V) · (f_p / 2n),    (5)

where the factor of two reflects the two patterns (one pulse pair) required per wavelet coefficient.
Equation (5) suggests that it is possible to maintain satisfactory image reconstruction at higher flow rates by increasing the pattern rate. This can be done using optical bit-rate multiplying stages. Pattern multiplexing is achieved using asymmetric Mach-Zehnder modulators, where one arm experiences a delay of a half-integer number of pattern periods, L_mux. Figure 4(a) shows a conceptual picture of this process, where the structured illumination rate is doubled. For this conceptual explanation we consider a sequence of only 5 patterns for simplicity. Interleaving the patterns by L_mux (2.5 cycles in this example) doubles the sequence frequency. Multiplexing stages can be cascaded in series in order to further increase the rate by factors of 4 and 8, respectively. Figure 4(b) shows the block diagram of the setup used in our experiments to demonstrate a 720-MHz pattern repetition rate. This rate corresponds to 360 MPixels/s.
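Evaluating Eq. (5) for the experimental parameters shows why multiplexing matters. In the sketch below, L, n, and the 34.3-m/s flow rate are taken from the text; the assumption of two patterns (one pulse pair) per coefficient is ours, consistent with the 720-MHz pattern rate corresponding to 360 MPixels/s.

```python
# Rows in the reconstructed image, N = L * f_p / (2 * n * V), assuming two
# patterns (one pulse pair) per wavelet coefficient. L, n, and V are from
# the text; the factor of 2 is our assumption, consistent with the
# 720 MHz -> 360 MPixels/s figure.
L = 1.5e-3            # object height along the flow direction, m
n = 128               # Haar pixels per line
V = 34.3              # flow speed, m/s

for f_p in (90e6, 180e6, 720e6):      # base, 2x, and 8x multiplexed rates
    rows = L * f_p / (2 * n * V)
    print(f"f_p = {f_p/1e6:.0f} MHz -> {rows:.0f} rows")
```

At the base 90-MHz rate only a handful of rows span the 1.5-mm object, while the 8× multiplexed rate recovers roughly an order of magnitude more rows at the same flow speed.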
4. Experimental results
4.1 Low magnification imaging and wavelet compression
A high-speed flowing object was constructed by laser-printing a soccer ball, a Johns Hopkins University shield, and the letters “JHU” onto a transparency. The transparency was attached to the outer edge of a spinning hard disc, and this apparatus was used in conjunction with our low magnification disperser. The disc was driven at 7200 revolutions per minute (RPM), which corresponds to a 34.3-m/s flow rate. Figure 5 shows the conceptual diagram of the optical wavelet compression approach. Wavelet coefficients are obtained by balanced photodetection of the laser-projected daughter wavelets onto the flowing scene. The detected coefficients are weighted in accordance with the basis scales (Eq. (1)).
Figures 6(a) and 6(b) show our reconstructed images acquired at sampling rates of 90 MHz and 180 MHz, respectively, at various data compression levels. In order to achieve a given image compression ratio, we simply keep the largest coefficients and set the rest to zero; for instance, keeping only 6 of the 128 coefficients in a line corresponds to a compression ratio below 5%. The percentage is calculated by averaging the per-row compression factors across the entire image. As seen from Figs. 6(a) and 6(b), simple objects such as the soccer ball and the “JHU” letters preserve their shape despite the large degree of compression. In contrast, the added complexity of the shield makes this object less compressible. This is in agreement with what is expected from wavelet theory: the details in the shield appear at many intermediate wavelet levels and comprise a greater fraction of the total wavelet coefficients for this more complex object, which in turn leads to lower compressibility. It is important to emphasize that compression does not necessarily discard the finest wavelet coefficients. On the contrary, as seen in the insets of Figs. 6(a) and 6(b), depending on the scene some fine coefficients are kept while some of the coarse coefficients are discarded.
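The thresholding step can be sketched in a few lines. The snippet below (illustrative only, with a synthetic scene) Haar-transforms a 128-pixel piecewise-constant line, keeps only the largest-magnitude coefficients, and inverts; because this simple "object" has just a few edges, a handful of coefficients reconstruct it essentially exactly, mirroring the behavior of the soccer ball and letters above.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar matrix (n a power of 2), coarse rows first."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    return np.vstack([np.kron(h, [1.0, 1.0]),
                      np.kron(np.eye(n // 2), [1.0, -1.0])]) / np.sqrt(2.0)

n = 128
W = haar_matrix(n)

# Illustrative piecewise-constant line: a "simple" object with a few edges
# that happen to fall on dyadic boundaries.
x = np.zeros(n)
x[32:64] = 1.0
x[96:112] = 0.5

c = W @ x                                # weighted wavelet coefficients

# Thresholding: keep only the k largest-magnitude coefficients.
k = 8
keep = np.argsort(np.abs(c))[-k:]
c_thr = np.zeros(n)
c_thr[keep] = c[keep]

x_rec = W.T @ c_thr                      # inverse Haar transform
err = np.abs(x - x_rec).max()
print(f"kept {k}/{n} coefficients ({100 * k / n:.1f}%), max error = {err:.2e}")
```

A scene with more edges at non-dyadic positions would spread energy over many more coefficients, reproducing the lower compressibility observed for the shield.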
4.2 High resolution imaging at 720-MHz
As mentioned in Section 3, we used our optical transform engine in a high magnification microscope in order to image objects at diffraction limited resolution. Each spectral feature in this microscope occupies an area of 1.2 μm × 1.2 μm. In order to verify the imaging resolution, we imaged the smallest elements from group 7 of a USAF test target. The test target was mounted on a motorized translation stage, and the stage was moved across the field of view in 250-nm steps. As seen in Fig. 7, the microscope is able to clearly resolve fine features in both the horizontal and vertical directions. It should be noted that the smallest element in our test target is 2.19 μm wide.
Once the imaging resolution was verified, we used the system for high-speed imaging of micron-scale objects. These objects were chosen to resemble, in size and contrast, the microfluidic particles typically used in imaging flow cytometry. The images were acquired by mounting the sample on a reflective surface attached to a motorized stage. The sample was translated across the field of view in 250-nm steps and signals were captured in a single shot (no averaging). This operation is equivalent to a flow speed of ~0.5 m/s in terms of both the line-by-line movement of the object and the measurement signal-to-noise ratio. Figure 8(a) shows clusters of 10-μm microsphere beads acquired at 180 MHz. This figure also shows the images obtained after carrying out the Haar compression operation. As seen, the majority of wavelet coefficients are not significant, and therefore average compression ratios in excess of 20% are obtained. Figure 8(b) shows a sample of 5-µm microsphere clusters, diluted in index matching fluid, acquired at 720 MHz, corresponding to 360 MPixels/s. We would like to emphasize that the multiplexing speed in our measurements was limited by the bandwidth of our amplified balanced detector. Low contrast in these samples reduces the particle visibility compared with the images shown in Fig. 6 and Fig. 8(a). Nevertheless, the individual particles are clearly distinguishable from the background. These figures demonstrate the ability of this approach to image microscopic objects without introducing any computational artifacts. It is important to reiterate that, in contrast to other high-speed imaging techniques based on time-stretch readout, our approach benefits from a higher signal-to-noise ratio. Moreover, detection is performed at the imaging pulse rate, using low to moderate speed detectors and ADCs.
5. Summary and conclusions
We have demonstrated an all-optical Haar wavelet transform engine that takes advantage of ultra-high-speed chirped pulse patterning in order to create structured illumination. Patterned pulses (i.e., the Haar basis) are scanned across the imaging field of view at 720 MHz, and images are obtained by a simple inverse wavelet transform operation. In contrast to compressed sensing based single pixel imagers [8–13,23], the current architecture benefits from more rapid image reconstruction thanks to its significantly lower computational complexity. Even the most efficient compressed sensing algorithms, such as two-step iterative shrinkage/thresholding (TwIST) and gradient projection for sparse reconstruction (GPSR), go through multiple iterations with a per-iteration computational complexity on the order of O(n log n). In contrast, the 1D inverse Haar transform has a complexity of O(n) [20,24,25], which translates to an image reconstruction time that is many orders of magnitude shorter.
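The O(n) scaling follows from a lifting-style cascade: reconstructing level j touches 2^j coefficients, so the total work is 1 + 2 + … + n/2 < n butterfly operations. A minimal sketch, assuming the standard coarse-to-fine (Mallat) coefficient ordering:

```python
import numpy as np

def inverse_haar(c):
    """In-place O(n) inverse of the orthonormal Haar transform.

    Assumed coefficient layout: c[0] is the scaling coefficient, followed
    by detail coefficients from coarsest to finest (Mallat ordering);
    len(c) must be a power of 2.
    """
    c = np.asarray(c, dtype=float).copy()
    n = len(c)
    size = 1
    while size < n:
        s = c[:size].copy()            # approximations at the current level
        d = c[size:2 * size].copy()    # details at the current level
        # One butterfly per coefficient pair; total work over all levels < n.
        c[0:2 * size:2] = (s + d) / np.sqrt(2.0)
        c[1:2 * size:2] = (s - d) / np.sqrt(2.0)
        size *= 2
    return c
```

Each coefficient is touched a constant number of times, so no O(n log n) transform or iterative solver is involved in reconstructing a line.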
It is important to note that the imaging speed can be further increased by using additional temporal multiplexing stages. The current setup can also be extended to include a two-dimensional (2-D) disperser. This concept has already been demonstrated in both time-stretch based and compressive sensing based optical microscopes [23,26].
The demonstrated system reduces the signal processing workload by integrating wavelet transformation and compression into the acquisition step. Unlike conventional image compression algorithms that require a digitized image prior to compression, our scheme acquires the image as sparse coefficients in the wavelet domain. Compression is simply achieved by storing the most significant coefficients on a rolling basis, a task that can easily be done in real time in a state-of-the-art FPGA. In addition to image compression, the current architecture utilizes the single pixel camera concept in order to reduce the pixel readout rate. This in turn eliminates the need for high-speed ADCs and large digital buffers, and opens up the architecture for applications such as flow cytometry [1,27] and high-throughput microscopy, where tens of millions of particles are to be imaged and characterized continuously.
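The rolling selection of significant coefficients can be prototyped in software with a bounded min-heap; the function below is an illustrative stand-in for the FPGA thresholding step, not the actual firmware, and stores the (location, value) pairs the compression scheme retains.

```python
import heapq

def top_k_stream(coeffs, k):
    """Keep the k largest-magnitude (index, value) coefficient pairs on a
    rolling basis; a software stand-in for the FPGA thresholding step."""
    heap = []                          # min-heap ordered by |value|
    for idx, val in enumerate(coeffs):
        if len(heap) < k:
            heapq.heappush(heap, (abs(val), idx, val))
        elif abs(val) > heap[0][0]:
            heapq.heapreplace(heap, (abs(val), idx, val))
    # Return (location, value) pairs sorted by location.
    return sorted((idx, val) for _, idx, val in heap)

# One line of 8 wavelet coefficients; only the 3 largest survive.
line = [0.02, -1.4, 0.6, 0.01, -0.03, 2.2, 0.0, -0.5]
print(top_k_stream(line, 3))   # -> [(1, -1.4), (2, 0.6), (5, 2.2)]
```

Each incoming coefficient costs O(log k), so a line of n coefficients is filtered in O(n log k), compatible with processing coefficients as they arrive from the ADC.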
National Science Foundation (NSF) (ECCS-1254610).
Portions of this work were presented at the Conference on Lasers and Electro-Optics in 2016 (paper SM2I.8) .
References and links
1. K. Goda, A. Ayazi, D. R. Gossett, J. Sadasivam, C. K. Lonappan, E. Sollier, A. M. Fard, S. C. Hur, J. Adam, C. Murray, C. Wang, N. Brackbill, D. Di Carlo, and B. Jalali, “High-throughput single-microparticle imaging flow analyzer,” Proc. Natl. Acad. Sci. U.S.A. 109(29), 11630–11635 (2012). [CrossRef] [PubMed]
2. M. Li and J. Yao, “Experimental demonstration of a wideband photonic temporal Hilbert transformer based on a single fiber Bragg grating,” IEEE Photonics Technol. Lett. 22(21), 1559–1561 (2010). [CrossRef]
3. M. Li and J. P. Yao, “All-optical short-time Fourier transform based on a temporal pulse shaping system incorporating an array of cascaded linearly chirped fiber Bragg gratings,” IEEE Photonics Technol. Lett. 23(20), 1439–1441 (2011). [CrossRef]
4. M. Li and J. Yao, “Ultrafast all-optical wavelet transform based on temporal pulse shaping incorporating a 2-D array of cascaded linearly chirped fiber Bragg gratings,” IEEE Photonics Technol. Lett. 24(15), 1319–1321 (2012). [CrossRef]
7. W. Liu, M. Li, R. S. Guzzon, E. J. Norberg, J. S. Parker, M. Lu, L. A. Coldren, and J. Yao, “A fully reconfigurable photonic integrated signal processor,” Nat. Photonics 10(3), 190–195 (2016). [CrossRef]
8. D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 606509 (2006). [CrossRef]
9. V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. U.S.A. 109(26), E1679–E1687 (2012). [CrossRef] [PubMed]
10. S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express 21(20), 23068–23074 (2013). [CrossRef] [PubMed]
12. M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5, 10669 (2015). [CrossRef] [PubMed]
13. B. T. Bosworth, J. R. Stroud, D. N. Tran, T. D. Tran, S. Chin, and M. A. Foster, “High-speed flow microscopy using compressed sensing with ultrafast laser pulses,” Opt. Express 23(8), 10521–10532 (2015). [CrossRef] [PubMed]
15. Q. Guo, H. Chen, Y. Wang, Y. Guo, P. Liu, X. Zhu, Z. Cheng, Z. Yu, S. Yang, M. Chen, and S. Xie, “High-speed compressive microscopy of flowing cells using sinusoidal illumination patterns,” IEEE Photonics J. 9(1), 1–11 (2017).
16. S. Mallat, “A theory for multiresolution signal decomposition: the wavelet representation,” IEEE Trans. Pattern Anal. Mach. Intell. 11(7), 674–693 (1989). [CrossRef]
19. M. Alemohammad and M. A. Foster, “Real-Time Image Compression Based on All-Optical Haar Wavelet Transform,” in Conference on Lasers and Electro-Optics, OSA Technical Digest (2016) (Optical Society of America, 2016), paper SM2I.8. [CrossRef]
20. G. Strang and T. Q. Nguyen, Wavelets and Filter Banks (Wellesley-Cambridge, 1998).
22. C. S. Brès, A. O. Wiberg, J. Coles, and S. Radic, “160-Gb/s optical time division multiplexing and multicasting in parametric amplifiers,” Opt. Express 16(21), 16609–16615 (2008). [PubMed]
23. Q. Guo, H. Chen, Z. Weng, M. Chen, S. Yang, and S. Xie, “Compressive sensing based high-speed time-stretch optical microscopy for two-dimensional image acquisition,” Opt. Express 23(23), 29639–29646 (2015). [CrossRef] [PubMed]
24. T. Blumensath and M. E. Davies, “Iterative hard thresholding for compressed sensing,” Appl. Comput. Harmon. Anal. 27(3), 265–274 (2009). [CrossRef]
25. M. A. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2008). [CrossRef]
26. A. C. S. Chan, A. K. S. Lau, K. K. Y. Wong, E. Y. Lam, and K. K. Tsia, “Arbitrary two-dimensional spectrally encoded pattern generation—a new strategy for high-speed patterned illumination imaging,” Optica 2(12), 1037–1044 (2015). [CrossRef]