Optica Publishing Group

Multiplexed structured image capture to increase the field of view for a single exposure

Open Access

Abstract

In this work, we introduce multiplexed structured image capture (MUSIC) as a means to increase the field of view during a single exposure. MUSIC works by applying a unique spatial modulation pattern to the light collected from different parts of the scene. We demonstrate two setups for collecting light from different parts of the scene: a single-lens configuration and a dual-lens configuration. Post-processing of the modulated images allows the two scenes to be easily separated using Fourier analysis of the captured image. We demonstrate MUSIC for still-scene, schlieren, and flame chemiluminescence imaging to increase the field of view. Though we demonstrate only two imaging scenes, more can be added by using additional patterns and extending the optical setup.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Imaging remains one of the most important and reliable means of measurement in aerospace and combustion science. Schlieren imaging, developed in 1864, is still used today for visualization of fluid flow in even the most modern wind-tunnel experiments [1–3]. Planar laser-induced fluorescence (PLIF) [4], 2D Raman scattering [5,6], and chemiluminescence [4,7] imaging techniques are essential for combustion diagnostics, providing information about local species concentration, geometry, and more. Modern tomographic measurements of combustion processes are an example of the high-quality information that can be gained from imaging [8,9]. Often, however, a reduction in the field of view occurs when imaging with high-speed cameras, because the on-camera region of interest must be reduced to accommodate limits on electronic read-out times. Furthermore, multiple cameras are sometimes needed for imaging of larger objects. Thus, more spatial information from a single measurement would be beneficial for capturing dynamics on a larger scale.

Light modulation techniques have shown great promise in imaging [10]. Techniques such as compressed ultrafast photography (CUP) [11,12], structured illumination [13–15], FRAME [16], and MUSIC [17] have all been demonstrated for ultrafast imaging or for resolution enhancement. However, the image multiplexing involved in the multiplexed structured image capture (MUSIC) technique can be used to store not only transient information from the same scene, but also multiple scenes at a given point in time. Image multiplexing exploits the fact that most of the important information in an image occupies the center of the Fourier domain (i.e., low-frequency information) [18]. Hence, the Fourier domain of an image is usually negligible away from the origin due to the low magnitude of high-frequency components, meaning the image generally has a sparse Fourier domain. This sparseness is one of the key ideas behind compressed sensing techniques for imaging. Figure 1(a-b)

Fig. 1 (a) Single lens version of the multiplexing system for field of view extension. Light collected by an achromatic lens is split into two paths, with each path viewing a different part of the scene. Ronchi rulings/gratings are placed on each path to provide unique modulations of the images. Note that modulation occurs when focusing the image onto the Ronchi ruling. The modulated images are then simultaneously recorded by a camera focused on the Ronchi rulings through a beam splitter. (b) A two-lens version of (a) that can be used for very large objects. It also eliminates one of the beam splitters. (c) Diagram showing the effect of the optical multiplexing systems.

show two optical setups for multiplexing images to increase the field of view, and Fig. 1(c) provides a diagrammatic representation of their effect. The light is first optically split into two paths (i.e., channels), with each channel viewing a different part of the scene. The adjustment of the first beam splitter determines which part of the scene goes to channel two. Each channel in the optical setup then applies a unique modulation to the corresponding image by focusing it onto a Ronchi ruling rotated to a unique angle. The information from both channels is recombined using a beam splitter and simultaneously imaged by a camera with an achromatic lens. The unique modulations allow each channel to be separated and restored in post-processing [13,16,19]. Finally, the images can be stitched together using image analysis software to yield the net larger field of view.
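The sparsity claim underlying the multiplexing scheme is easy to check numerically. The sketch below (an illustration with a synthetic smooth scene, not the paper's data) builds a smooth image and measures how much of its spectral power falls within a small disk around the Fourier origin:

```python
import numpy as np

# Synthetic smooth "scene" standing in for a typical image.
N = 256
y, x = np.mgrid[0:N, 0:N]
scene = np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 30.0 ** 2))

# Centered 2D Fourier transform and its power spectrum.
F = np.fft.fftshift(np.fft.fft2(scene))
power = np.abs(F) ** 2

# Fraction of total spectral power inside a small low-frequency disk.
radius = np.hypot(x - N / 2, y - N / 2)   # reuse the pixel grid as a k-grid
low_freq_fraction = power[radius < N / 16].sum() / power.sum()
print(f"fraction of power within the central disk: {low_freq_fraction:.4f}")
```

For natural photographs the concentration is less extreme than for this smooth test scene, but the bulk of the power still sits at low frequencies, which is what leaves room in the Fourier domain for the modulation sidebands.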

A mathematical description of MUSIC for field of view (FOV) extension begins by considering the image information exiting the optical systems in Fig. 1, $I_G(\mathbf{r},t)$, which consists of light that has traveled two paths:

$$I_G(\mathbf{r},t)=\sum_{n=1}^{2} I_n(\mathbf{r},t-\Delta t_{Dn})\,M_n(\mathbf{r})\,\varepsilon_n \tag{1}$$
Here, In is the image information defined as:
$$I_n(\mathbf{r},t)=\begin{cases} I_1(\mathbf{r},t) & \text{Scene 1}\\ I_2(\mathbf{r},t) & \text{Scene 2}\end{cases} \tag{2}$$
$M_n$ is the modulation applied by the Ronchi rulings, and $\varepsilon_n$ is the optical efficiency along path $n$. Also, $\Delta t_{Dn}$ is the time delay relative to path one caused by the extra travel distance. Assuming the camera gate is much longer than this time delay, the approximation $\Delta t_{Dn} \approx 0$ can be made. This holds true for MHz and kHz imaging systems given that our system has $\Delta t_{Dn} \approx 1\,\mathrm{ns}$. Therefore, both arms of the imaging system contribute information from nearly the same moment in time. Note that the modulation masks consist of periodic stripes due to the structure of the Ronchi rulings. Also, note that vertical stripes can be modeled as periodic square waves in the x-direction [19]:
$$M(x,y)=M(x+T,y)=\begin{cases}1 & |x|<T_1\\ 0 & T_1<|x|\le T/2\end{cases} \tag{3}$$
where $T$ is the period of the modulation and $T_1=T/4$. Combining Eq. (1) and Eq. (3) and taking the spatial Fourier transform yields:

$$I_G(\mathbf{k},t)=\sum_{n=1}^{2}\left[\varepsilon_n I_n(\mathbf{k},t)\ast\sum_{m=-\infty}^{\infty}\frac{2\sin(mk_0T_1)}{m}\,\delta(k_{xn}-mk_0)\right] \tag{4}$$
Here, $k_0$ is the fundamental spatial frequency of the modulation mask, and $k_{xn}$ is the x-spatial-frequency variable for path $n$. Each path must have a unique modulation, which is best accomplished by rotating the spatial modulation pattern. Since a coordinate rotation in the spatial domain corresponds to a rotation in the frequency domain, Eq. (4) becomes:
$$I_G(\mathbf{k},t)=\sum_{n=1}^{2}\left[\varepsilon_n I_n(\mathbf{k},t)\ast\sum_{m=-\infty}^{\infty}\frac{2\sin(mk_0T_1)}{m}\,\delta(k_{xn}-mk_0)\right] \tag{5}$$
In Eq. (5), $k_{xn}=k_x\cos\theta_n+k_y\sin\theta_n$, and $\theta_n$ is the rotation angle of the modulation pattern. Equation (5) shows that the delta functions create copies of the original image's Fourier transform, shifting them to the modulation harmonics with progressively reduced amplitude. Because these copies contain the original image's information shifted to a frequency location unique to the modulation pattern, overlapped images can be separated. Recovery of images is done using an algorithm detailed in previous work, originally developed for structured illumination measurements [13,19]. The algorithm multiplies the composite multiplexed image by a sinusoidal function to shift the information from one of the sinc-function offsets to the center of the Fourier domain; a low-pass filter then isolates the Fourier information corresponding to one of the multiplexed images, which is inverse-transformed to recover the image. We use a Gaussian filter for the filtering step, similar to previous work with MUSIC [17,19].
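As a concrete illustration, the modulate/demodulate cycle just described can be sketched in numpy. This is a hedged sketch using sinusoidal masks and synthetic Gaussian-blob scenes (the experiment uses Ronchi rulings, i.e., square-wave masks); the function names and parameter values here are illustrative, not the authors' implementation:

```python
import numpy as np

def gaussian_lowpass(shape, sigma):
    """Centered Gaussian low-pass filter in the Fourier domain."""
    ky, kx = np.mgrid[0:shape[0], 0:shape[1]]
    r2 = (kx - shape[1] / 2) ** 2 + (ky - shape[0] / 2) ** 2
    return np.exp(-r2 / (2 * sigma ** 2))

def modulate(img, k0, theta):
    """Apply a sinusoidal modulation mask rotated by angle theta."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate, as in Eq. (5)
    return img * (0.5 + 0.5 * np.cos(2 * np.pi * k0 * xr))

def demodulate(composite, k0, theta, sigma=12.0):
    """Shift one channel's sideband to the Fourier origin, low-pass filter,
    and inverse-transform (the recovery procedure of Refs. [13,19])."""
    y, x = np.mgrid[0:composite.shape[0], 0:composite.shape[1]]
    xr = x * np.cos(theta) + y * np.sin(theta)
    shifted = composite * np.cos(2 * np.pi * k0 * xr)    # sideband -> DC
    F = np.fft.fftshift(np.fft.fft2(shifted))
    F *= gaussian_lowpass(composite.shape, sigma)        # isolate the baseband copy
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F)))

# Two synthetic "scenes" standing in for the two channels' views.
N = 256
y, x = np.mgrid[0:N, 0:N]
scene1 = np.exp(-((x - 96) ** 2 + (y - 128) ** 2) / (2 * 20.0 ** 2))
scene2 = np.exp(-((x - 160) ** 2 + (y - 128) ** 2) / (2 * 20.0 ** 2))

# Multiplex: uniquely modulated channels summed on a single "sensor" (Eq. (1)).
composite = (modulate(scene1, k0=0.15, theta=np.deg2rad(30))
             + modulate(scene2, k0=0.15, theta=np.deg2rad(300)))

rec1 = demodulate(composite, k0=0.15, theta=np.deg2rad(30))
rec2 = demodulate(composite, k0=0.15, theta=np.deg2rad(300))
```

The recovered fields rec1 and rec2 are low-pass-filtered copies of the individual scenes, separated from a single composite frame by the unique modulation angles.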

To show the implications of Eq. (5) in practice, a computer-based example of image multiplexing was performed, as shown in Fig. 2.

Fig. 2 (a) Example image and (b) its Fourier transform. (c) Computational example of overlapped, uniquely modulated images, (d) the Fourier transform, and (e-f) the raw recovered images. The yellow and green rings show the location of the shifted information. Note that the Fourier plots are logarithmic in intensity.

Figure 2(a) represents the desired full scene. The input image has most of its Fourier-domain information near the origin (i.e., at low frequencies), as shown in Fig. 2(b). By splitting the picture in two, applying unique binary modulation masks, and summing, the image in Fig. 2(c) was created, which represents the image that would be captured in a real experiment. Figure 2(d) shows the spatial Fourier transform of Fig. 2(c) and features the copied information at the offset spatial frequencies corresponding to the applied modulations (30° and 300° for the top and bottom scenes respectively). The images in Fig. 2(e-f) show that each image was successfully separated and recovered from the combined overlapped image. The reduction in resolution occurs due to the low-pass filtering in the image recovery algorithm and depends on the size (i.e., bandwidth) of the applied filter, as discussed in previous works [17,19]. Finally, note that sinusoidal modulation patterns are generally favorable because they generate no higher harmonics. If sinusoidal patterns are used, the second sum in Eq. (5) is replaced by a pair of delta functions at ±k0, the modulation frequency. This removes the higher-order harmonics associated with the sinc function, which can be problematic due to possible cross-talk between multiplexed images during recovery. However, square-wave modulation is used here because Ronchi rulings/gratings modulate the images in the experiment.
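The difference in harmonic content between square-wave and sinusoidal masks can be verified with a one-dimensional FFT (a minimal sketch; the 16-cycle fundamental is chosen to fall exactly on an FFT bin):

```python
import numpy as np

N = 1024
x = np.arange(N)
cycles = 16                    # fundamental falls exactly on FFT bin 16
k0 = cycles / N

square = (np.cos(2 * np.pi * k0 * x) > 0).astype(float)  # Ronchi-like 50% duty mask
sine = 0.5 + 0.5 * np.cos(2 * np.pi * k0 * x)            # sinusoidal mask

S = np.abs(np.fft.fft(square))
W = np.abs(np.fft.fft(sine))

# The square wave carries power at odd harmonics (bins 16, 48, 80, ...),
# while the sinusoid carries power only at the fundamental (bin 16).
print("square:  1st =", S[16].round(1), " 3rd =", S[48].round(1))
print("sine:    1st =", W[16].round(1), " 3rd =", W[48].round(1))
```

The third-harmonic sideband of the square wave is what can overlap a neighboring channel's information and cause the cross-talk mentioned above.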

2. Experiment setup

The multiplexing apparatus was constructed in two ways, as shown in Fig. 1(a-b). Note that all beam splitters were 50R/50T, except for the first beam splitter in the single lens configuration, which was 70R/30T. Rotated Ronchi rulings/gratings (5 cy/mm) were used to modulate the image in each path. All lenses in the system were achromatic, and silver mirrors were used. All optics were 2” in diameter. An HS-PowerView camera was used for imaging of the USAF 1951 resolution target and for flame chemiluminescence imaging. A standard Canon DSLR camera was used for schlieren imaging. In all cases, 2” achromatic lenses were used as the camera lens. Schlieren imaging, as shown in Fig. 3

Fig. 3 Wind tunnel Z-type schlieren setup.

, was done in a supersonic wind tunnel with Mach 3 flow conditions. For flame chemiluminescence imaging, a methane diffusion flame was used with a tube inner diameter of 4.5 mm and a flow rate of 200 mL/min.

3. Results and discussion

3.1 USAF resolution card

Experimental demonstration of the MUSIC technique for FOV extension was first carried out on a USAF 1951 optical test pattern to provide a standard for determining the quality of recovered images and to aid in alignment of recovered images from the two channels. Figure 4

Fig. 4 (a) Raw image of the USAF 1951 standard pattern with multiple views, (b) its Fourier transform with logarithmic scaling, and (c-d) the recovered scenes for channels one and two.

shows the results for multiplexed imaging of the USAF pattern target. In Fig. 4(b), there are four offset regions in the Fourier domain with one pair corresponding to the modulation pattern in channel one and another pair for channel two. Only two offsets are unique due to symmetry of the Fourier domain. Figure 4(c-d) show the recovered images using the techniques discussed in previous work, which involve filtering and demodulation of the images [19]. Note that the extra lens in the path for channel two, as shown in Fig. 1(a), inverts the image relative to channel one.
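The redundancy of the offset pairs follows from conjugate symmetry: for any real-valued image, F(−k) = F*(k), so each modulation necessarily produces a mirrored pair of sidebands, only one of which carries independent information. A quick numpy check:

```python
import numpy as np

# For a real-valued image the Fourier transform is conjugate symmetric:
# F(-k) = conj(F(k)), so each modulation yields a mirrored pair of offsets.
rng = np.random.default_rng(2)
img = rng.random((64, 64))
F = np.fft.fft2(img)

idx = (-np.arange(64)) % 64        # index map k -> -k (mod N)
F_neg = F[idx][:, idx]             # F evaluated at -k
max_err = np.max(np.abs(F - np.conj(F_neg)))
print("max |F(k) - conj(F(-k))| =", max_err)
```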

To establish proper truth images for comparison, Ronchi rulings were removed and imaging was done one channel at a time with one channel physically blocked. Figure 5(a-c)

Fig. 5 The ground truth images for each channel are shown in (a) and (b). Note that the displacement between the two channels was applied in post-processing to show the true imaging locations with respect to each other. (c) The stitched-together ground truth image. (d-e) Images recovered from the multiplexed image in Fig. 4. (f) The stitched-together recovered image from the multiplexed results in (d) and (e).

shows the truth images. In Fig. 5(a-b), the images were properly scaled and displaced to correspond to the actual scene. This was also done for the multiplexed results in Fig. 5(d-e). Figure 5(c) and Fig. 5(f) show the stitched together images for the truth images and the MUSIC results respectively. The MUSIC results qualitatively show the same scene, but with slightly less resolution due to the filtering in the image recovery process. A 2D correlation coefficient between the truth and MUSIC images was calculated in MATLAB to be 0.9262.
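MATLAB's corr2 and a simple overlap-averaging stitch are both a few lines in numpy. The sketch below uses synthetic stand-in images (the paper's value of 0.9262 was computed on the actual truth and MUSIC images, which are not reproduced here):

```python
import numpy as np

def corr2(a, b):
    """2D correlation coefficient, equivalent to MATLAB's corr2."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

def stitch(ch1, ch2, offset):
    """Place two equal-height channel images on a shared canvas at a known
    horizontal offset, averaging in any overlap region."""
    h, w = ch1.shape
    canvas = np.zeros((h, offset + ch2.shape[1]))
    count = np.zeros_like(canvas)
    canvas[:, :w] += ch1
    count[:, :w] += 1
    canvas[:, offset:] += ch2
    count[:, offset:] += 1
    return canvas / np.maximum(count, 1)

# Synthetic stand-ins: a wide "truth" scene viewed by two overlapping channels.
rng = np.random.default_rng(0)
truth = rng.random((100, 210))
ch1, ch2 = truth[:, :120], truth[:, 90:]   # each channel sees a 120-px window

stitched = stitch(ch1, ch2, offset=90)
print("shape:", stitched.shape, " corr2 vs truth:", round(corr2(stitched, truth), 4))
```

With perfect channel images the stitched result matches the truth exactly; in practice the recovered channels are smoothed by the filtering step, which lowers the coefficient toward values like the 0.9262 reported above.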

3.2 Flame chemiluminescence

MUSIC imaging of methane flame chemiluminescence is shown in Fig. 6.

Fig. 6 (a) Raw flame chemiluminescence imaging using the MUSIC technique. (b) The Fourier transform of the raw image plotted in logarithmic scale. (c) A close-up of the important region for analysis. Note that the red and green circles indicate channel one and channel two information respectively. Exposure time was 2 ms.

Again, each channel has a marked, unique contribution to the image Fourier domain, as shown in Fig. 6(b). For a general image, the distribution of power in the frequency domain is determined by the geometry of the scene. Since each channel is looking at a different part of the flame, there is a notable difference in the geometry of the images seen by each channel. This is reflected by the drastically different shapes of the offsets shown in Fig. 6(c), and further shows that each offset corresponds only to a unique scene. The reconstructed and stitched-together images are shown in Fig. 7.
Fig. 7 Flame chemiluminescence preliminary results. Recovered images from the image in Fig. 6, with (a) and (b) properly aligned to their true spatial positions. The stitched-together image is shown in (c).

Note that the resolution of the image is slightly degraded by the image recovery process, specifically by the application of a low-pass Gaussian filter to the modulated information. Thus, a byproduct of the MUSIC technique is an inherent smoothing of the recovered images, with the degree of smoothing determined by the filter size used during recovery. A small intensity discontinuity is present in the recovered images in Fig. 7, which is due to imperfection in the alignment of the Ronchi grating with the image plane. The alignment is crucial since it optically performs the multiplication in Eq. (1). Since the alignment is not perfect in practice, the optical efficiency $\varepsilon_n$ has a spatial dependence rather than being a unique constant for each optical path. The result of this slight imperfection is small, spatially varying intensity fluctuations, since some information may not be copied and shifted correctly in the Fourier domain. For quantitative measurements, a calibration scheme is needed and is an ongoing research effort at this time.
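The smoothing/bandwidth tradeoff mentioned above can be demonstrated directly: narrowing the Gaussian low-pass filter used in recovery removes progressively more high-frequency content. A small sketch with a noise image and hypothetical filter widths:

```python
import numpy as np

def lowpass(img, sigma_k):
    """Apply a centered Gaussian low-pass filter in the Fourier domain."""
    ky, kx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    H = np.exp(-((kx - img.shape[1] / 2) ** 2 + (ky - img.shape[0] / 2) ** 2)
               / (2 * sigma_k ** 2))
    F = np.fft.fftshift(np.fft.fft2(img)) * H
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# A noise image: all of its variance lives in fine detail, so the variance
# that survives filtering tracks how much resolution the filter preserves.
rng = np.random.default_rng(1)
img = rng.random((128, 128))

wide = lowpass(img, sigma_k=40.0)     # generous bandwidth: mild smoothing
narrow = lowpass(img, sigma_k=10.0)   # tight bandwidth: strong smoothing

print("variance  raw: %.5f   wide: %.5f   narrow: %.5f"
      % (img.var(), wide.var(), narrow.var()))
```

In MUSIC the filter cannot be made arbitrarily wide, since it must still exclude the neighboring modulation sidebands, so filter size sets the resolution floor of the recovered images.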

3.3 Schlieren imaging

Figure 8

Fig. 8 Schlieren imaging of a wedge in supersonic flow split into two channels by MUSIC. Truth schlieren images of the individual channels are shown in (a) and (b), and the stitched image is shown in (c). The camera settings were an exposure time of 33 ms and ISO of 800.

shows truth images of the individual MUSIC channels and the stitched composite of the full schlieren image of a wedge in supersonic flow. Figure 9
Fig. 9 (a) Raw multiplexed schlieren image using MUSIC. (b) Fourier transform of the filtered image in logarithmic scale. (c) Close-up of the individual channel information in the Fourier domain. Channels 1 and 2 are indicated by the red and green circles respectively.

shows MUSIC applied to the schlieren image using the setup in Fig. 3. The raw multiplexed image in Fig. 9(a) shows that the regions imaged by the two channels comprised two partial views of the full wedge shown in Fig. 8(a-b), capturing the shock regions within the confines of the available camera window. The differences between the channel contributions to the Fourier domain shown in Fig. 9(b-c) again demonstrate that the two channels captured unique, partial regions of the full scene. The reconstructed and stitched images are shown in Fig. 10.
Fig. 10 Recovered images from the multiplexed image in Fig. 9. Individual channel recovered images are shown in (a) and (b) and the stitched image is shown in (c). The results are highly similar to the truth images in Fig. 8.

The results show high similarity to the truth images.

4. Summary and conclusions

In summary, the MUSIC imaging technique has been successfully applied to extend the field of view during a single exposure. The technique was demonstrated for flame chemiluminescence and schlieren imaging, which are very popular imaging techniques in the aerospace and combustion optical diagnostics community. Note that although we applied the technique to scenes near each other, it can be applied for simultaneous imaging of two unrelated or vastly separated scenes, which could be useful for imaging of large-scale models in wind tunnels with a single camera. Furthermore, since high-speed cameras generally require restriction of the camera region of interest, MUSIC could allow researchers to capture more data to help offset this restriction. Two optical setups for FOV extension using MUSIC have been demonstrated: a single-lens approach suitable for applications with optical access restrictions, and a more efficient dual-lens approach for when no such restrictions exist. Though we demonstrate only two-scene imaging, additional scenes can be encoded into a single image using more patterns and optics. However, restrictions on reconstruction resolution limit the number of usable patterns, as discussed in the literature [17,19].

References

1. G. S. Settles, Schlieren and Shadowgraph Techniques: Visualizing Phenomena in Transparent Media (Springer Berlin Heidelberg, 2012).

2. D. Baccarella, Q. Liu, A. Passaro, T. Lee, and H. Do, “Development and testing of the ACT-1 experimental facility for hypersonic combustion research,” Meas. Sci. Technol. 27(4), 045902 (2016).

3. S. J. Laurence, A. Wagner, and K. Hannemann, “Experimental study of second-mode instability growth and breakdown in a hypersonic boundary layer using high-speed schlieren visualization,” J. Fluid Mech. 797, 471–503 (2016).

4. J. D. Miller, S. J. Peltier, M. N. Slipchenko, J. G. Mance, T. M. Ombrello, J. R. Gord, and C. D. Carter, “Investigation of transient ignition processes in a model scramjet pilot cavity using simultaneous 100 kHz formaldehyde planar laser-induced fluorescence and CH* chemiluminescence imaging,” Proc. Combust. Inst. 36(2), 2865–2872 (2017).

5. N. Jiang, P. S. Hsu, J. G. Mance, Y. Wu, M. Gragston, Z. Zhang, J. D. Miller, J. R. Gord, and S. Roy, “High-speed 2D Raman imaging at elevated pressures,” Opt. Lett. 42(18), 3678–3681 (2017).

6. J. D. Miller, M. N. Slipchenko, J. G. Mance, S. Roy, and J. R. Gord, “1-kHz two-dimensional coherent anti-Stokes Raman scattering (2D-CARS) for gas-phase thermometry,” Opt. Express 24(22), 24971–24979 (2016).

7. B. A. Rankin, D. R. Richardson, A. W. Caswell, A. G. Naples, J. L. Hoke, and F. R. Schauer, “Chemiluminescence imaging of an optically accessible non-premixed rotating detonation engine,” Combust. Flame 176, 12–22 (2017).

8. B. R. Halls, D. J. Thul, D. Michaelis, S. Roy, T. R. Meyer, and J. R. Gord, “Single-shot, volumetrically illuminated, three-dimensional, tomographic laser-induced-fluorescence imaging in a gaseous free jet,” Opt. Express 24(9), 10040–10049 (2016).

9. T. R. Meyer, B. R. Halls, N. Jiang, M. N. Slipchenko, S. Roy, and J. R. Gord, “High-speed, three-dimensional tomographic laser-induced incandescence imaging of soot volume fraction in turbulent flames,” Opt. Express 24(26), 29547–29555 (2016).

10. J. Liang and L. V. Wang, “Single-shot ultrafast optical imaging,” Optica 5(9), 1113–1127 (2018).

11. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516(7529), 74–77 (2014).

12. J. Liang, L. Gao, P. Hai, C. Li, and L. V. Wang, “Encrypted three-dimensional dynamic imaging using snapshot time-of-flight compressed ultrafast photography,” Sci. Rep. 5(1), 15504 (2015).

13. K. Dorozynska and E. Kristensson, “Implementation of a multiplexed structured illumination method to achieve snapshot multispectral imaging,” Opt. Express 25(15), 17211–17226 (2017).

14. E. Kristensson and E. Berrocal, “Recent development of methods based on structured illumination for combustion studies,” in Imaging and Applied Optics 2016 (Optical Society of America, Heidelberg, 2016), p. LT4F.1.

15. M. Saxena, G. Eluru, and S. S. Gorthi, “Structured illumination microscopy,” Adv. Opt. Photonics 7(2), 241–275 (2015).

16. A. Ehn, J. Bood, Z. Li, E. Berrocal, M. Aldén, and E. Kristensson, “FRAME: femtosecond videography for atomic and molecular dynamics,” Light Sci. Appl. 6(9), e17045 (2017).

17. M. Gragston, C. Smith, D. Kartashov, M. N. Shneider, and Z. Zhang, “Single-shot nanosecond-resolution multiframe passive imaging by multiplexed structured image capture,” Opt. Express 26(22), 28441–28452 (2018).

18. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).

19. M. Gragston, C. D. Smith, and Z. Zhang, “High-speed flame chemiluminescence imaging using time-multiplexed structured detection,” Appl. Opt. 57(11), 2923–2929 (2018).



