Abstract

A microelectromechanical systems (MEMS) based self-referencing cascaded line-scan camera using single-pixel detectors is proposed and verified. The use of single-pixel detectors makes it an attractive low-cost alternative to a traditional line-scan camera that can operate at any wavelength. The proposed system is composed of several identical cascaded line imager units driven by a common actuator. Each unit integrates an imaging slit, a MEMS encoding mask, a light concentrator, and a single-pixel detector. The spatial resolution of the proposed line-scan camera can thus be increased N-fold simply by cascading N units. For prototype demonstration, a cascaded line-scan camera composed of two imager units is prepared, with each unit having a single-pixel detector and being capable of resolving 71 spatial pixels along the slit. Hadamard transform multiplexing detection is applied to enhance the camera’s signal-to-noise ratio (SNR). The MEMS encoding mask is resonantly driven at 250 Hz, indicating an ideal frame rate of 500 fps for the line-scan camera prototype. The frame rate can be increased further by optimizing the MEMS actuator. Additionally, the MEMS encoding mask incorporates a self-referencing design that simplifies the data acquisition process, enabling the camera system to work in a simple but efficient open-loop condition.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

A line-scan camera is an imaging device with a single row of pixel sensors. Compared with traditional area-scan cameras, line-scan cameras offer low cost, small size, high frame rate, smear-free imaging of fast-moving objects, and processing efficiency in stitching overlapping frames. Line-scan cameras have been used in a variety of applications such as optical coherence tomography (OCT) [1], hyperspectral imaging [2,3], industrial inspection [4–6], stereo machine vision [7–9] and medical imaging [10–12]. Line-scan cameras are affordable and compact in the visible and near-infrared (VNIR) wavelengths (0.4 µm to 1 µm), mainly because silicon is very efficient at photon-to-electron conversion in this spectral region while also being ideal for large-scale electronics integration with well-developed technology. Imaging with arrayed photodetectors at wavelengths where silicon is blind (for example, IR wavelengths), however, is considerably more complicated, bulky, and expensive, as it requires non-silicon technology such as InGaAs and InSb. Therefore, low-cost alternatives are increasingly attractive.

In recent years, imaging systems with just a single-pixel photodetector have attracted much attention [13–22] due to their outstanding advantages. Using a single-pixel detector can significantly lower the overall cost, package size, and weight of the imaging system without sacrificing speed [15]. Furthermore, it enables the imager to operate at wavelengths currently unavailable or prohibitively expensive for conventional array-based detectors, and it also significantly simplifies the system’s calibration procedures. For single-pixel imaging/sensing, multiplexing schemes have also been proven to be an effective approach to increase the sensor’s SNR through the inherent Fellgett’s advantage. A well-known example of enhancing SNR through multiplexing is the Fourier-transform infrared (FTIR) spectrometer [23], which dominates the infrared region. The Hadamard transform [24] is one such multiplexing scheme.

Two single-pixel imaging architectures have been reported [16]: one encodes the image directly before it reaches the single-pixel detector [14,22], and the other encodes the object through structured light illumination [17,20]. The majority of these single-pixel imaging systems, such as compressive sensing cameras [22], simultaneous real-time visible and infrared video imaging [14], and Hadamard single-pixel imaging [25], are implemented with digital micromirror devices (DMDs) [26] from Texas Instruments (TI), which offer programmable features and high operation speed. However, DMDs are relatively expensive and work in a reflection mode, where the imaging optics and the collection/illumination optics must be located on the same side of the DMD. This requirement can make a compact imager design challenging. Another implementation uses an encoding mask operated in a transmission mode, where a weighted (encoding) pattern generated at the incoming image plane allows or blocks the light from designated points/pixels so that only the transmitted light reaches the single-pixel detector. The imaging optics and collection/illumination optics can then be conveniently placed on the two separate sides of the encoding mask, facilitating a compact imager design. In this paper, we present a microelectromechanical systems (MEMS) based encoding mask working in a transmission mode for compact line-scan camera applications.

For single-pixel Hadamard imaging [24,25], the number of encoded measurements recorded has to equal the number of pixels in the image for a successful image reconstruction. Therefore, at a given encoding pattern generation speed (or encoding speed), the imaging frame rate is inversely proportional to the number of imaging pixels. In other words, there is an inherent trade-off between the spatial resolution of the image and the imaging frame rate if detection is limited to a single single-pixel detector. This trade-off, however, can be avoided by employing a cascading scheme with multiple single-pixel photodetectors at the expense of a slight increase in system size and overall cost, as in, for example, image retrieval using a quadrant detector [17].
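The trade-off and the cascading remedy can be sketched with simple arithmetic; the encoding rate below is an illustrative, hypothetical value, not a measured one:

```python
# Hedged sketch: spatial-resolution vs. frame-rate trade-off for single-pixel
# Hadamard imaging. M encoded measurements are needed per frame, so the frame
# rate is the pattern-generation rate divided by M.

def frame_rate(encoding_rate_hz: float, pixels_per_detector: int) -> float:
    """Frames per second achievable at a given encoding-pattern rate."""
    return encoding_rate_hz / pixels_per_detector

# One detector resolving 142 pixels vs. two cascaded 71-pixel units at the
# same (hypothetical) encoding rate:
rate_single = frame_rate(encoding_rate_hz=35500, pixels_per_detector=142)
rate_cascaded = frame_rate(encoding_rate_hz=35500, pixels_per_detector=71)

print(rate_single)    # 250.0 fps
print(rate_cascaded)  # 500.0 fps -- same total pixels, double the frame rate
```

Cascading thus trades one extra detector per unit for an N-fold resolution gain at a fixed frame rate.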

In this paper, several novel concepts are proposed and applied to line-scan cameras. Firstly, a MEMS-driven resonant mask working in a transmission mode is utilized to generate Hadamard encoding patterns. Secondly, the MEMS encoding mask adopts a self-referencing design, with trigger signals for data sampling fully integrated in the Hadamard encoding pattern, thereby eliminating the need for precision position sensors and reducing system complexity in data acquisition, synchronization, and control. Thirdly, compound parabolic concentrators (CPCs), originally used in non-imaging optics for efficient solar energy collection, are miniaturized and applied to single-pixel imaging systems for the first time. Last but not least, a cascading scheme, in which multiple MEMS encoding masks are driven by a common actuation platform, is proposed to significantly enhance the spatial resolution of the proposed line-scan camera.

2. System working principle and design

The proposed self-referencing cascaded line-scan camera consists of several identical line imager units. As shown schematically in Fig. 1, each unit is composed of a slit, an encoding mask, a CPC, and a single-pixel detector. The light passing through the slit is encoded by the mask and then concentrated onto the single-pixel detector by the CPC. The spatial intensity distribution along the slit can be reconstructed by decoding the signals from the single-pixel detector after a complete encoding cycle. The resolution of each line imager unit is determined by the number of pixels in its encoding pattern, and the overall resolution of the proposed line imaging system can be extended N-fold simply by cascading N units, as presented in Fig. 1. The cascaded line-scan camera demonstrated in this paper is a prototype composed of two line imager units.

 

Fig. 1. The schematic of the self-referencing cascaded line-scan camera.


In each line imager, an encoding mask is placed immediately behind a narrow slit. The encoding mask consists of M rows of encoding patterns designed according to a cyclic S matrix of order M. The rows are aligned parallel to the slit, which allows only one row of the encoding pattern to encode the light through the slit at a time while blocking all the others, as presented in Fig. 2(a). In our prototype design, the width of the slit equals the width of a single row of the encoding pattern; both are set at 8 µm. Each row of the encoding pattern has 71 pixels along the slit direction with a pixel length of 38 µm; hence, the total encoded length of the slit is around 2.7 mm. A 4 µm wide opaque gap is inserted between every two adjacent rows of encoding patterns for self-referencing. The encoding mask is driven to oscillate vertically, which moves the encoding patterns across the slit sequentially to encode the light. As shown in Fig. 2(a), at the bottom of the encoding mask there is a long transparent pattern followed by a long opaque pattern for generating a frame beginning signal. The rest of the mask is the M×M encoding pattern array, where M defines the resolution of the line imager unit (M = 71 in our current prototype). The expected detector output time sequence is presented in Fig. 2(b). When the encoding mask is above the slit and moves downward, the frame beginning pattern arrives at the slit first, generating a high-level signal followed by a deep dip in the detector output, which marks the beginning of a new frame. It is then followed by the encoded signals. In our design, after every encoding pattern signal there is also a shallow dip in the detector output, generated by the opaque gap on the encoding mask as it moves across the slit. Consequently, the correct encoded signals can be effectively extracted at the midpoint between every two successive shallow dips after the high-level frame beginning signal.
This self-referencing design allows the encoded data to be accurately sampled without the need for monitoring the real-time position of the oscillating mask, which enables the system to work in a simple but efficient open-loop condition. A microscope picture of one encoding mask is presented in Fig. 2(c), where the light orange area is opaque and the dark brown regions are transparent.
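The self-referencing extraction described above (sampling at the midpoint between successive shallow dips) can be sketched as follows; the synthetic trace, threshold, and segment lengths are illustrative assumptions, not the prototype's actual signal:

```python
import numpy as np

def extract_encoded_samples(trace, dip_threshold):
    """Sample the trace midway between successive sub-threshold dips."""
    padded = np.concatenate([[1.0], trace, [1.0]])    # guards so dips pair up
    below = padded < dip_threshold
    edges = np.flatnonzero(np.diff(below.astype(int)))
    starts, ends = edges[::2], edges[1::2]            # rising/falling edge pairs
    dip_centers = (starts + ends) // 2
    mids = (dip_centers[:-1] + dip_centers[1:]) // 2  # midpoints between dips
    return padded[mids]

# Synthetic detector trace: encoded levels separated by short opaque-gap dips
levels = [0.9, 0.4, 0.7]
segments = []
for lv in levels:
    segments += [0.0] * 3 + [lv] * 10   # 3-sample dip, then an encoded level
segments += [0.0] * 3                   # closing dip
samples = extract_encoded_samples(np.array(segments), dip_threshold=0.1)
print(samples)  # [0.9 0.4 0.7]
```

Because only the dip locations matter, this scheme tolerates the non-uniform sweep speed of a resonant mask without any position feedback.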

 

Fig. 2. (a) The schematic of a slit and an encoding mask. (b) Self-referencing working mechanism. (c) Microscope picture of one encoding mask.


The detailed encoding/decoding principle for reconstructing the light intensity distribution along the slit is as follows. The encoding mask is set to the ith row pattern (i = 1, 2, …, M) to encode the radiation passing through the slit, and the resultant encoded radiation is collected by the single-pixel detector. The MEMS encoding mask is then set to the next row, and the process is repeated until all M measurements are finished. Mathematically, this can be expressed as:

$${m_i} = \sum\limits_{j = 1}^M {{a_{ij}}\textrm{I}({x_j})}$$
where mi is the ith measured intensity signal at the single-pixel detector, I(xj) is the radiation intensity at position xj (j = 1, 2, …, M) along the slit, and aij is the attenuation at position xj when the encoding mask is set to the ith row configuration. The values of aij are either 1 (transparent) or 0 (blocked), corresponding to the mask patterns. Equivalently, Eq. (1) may be rewritten as a single matrix equation:
$$\textbf{M} = \textbf{AI}$$
where M = [mi], A = [aij] and I = [Ij] = I(xj). Consequently, the line image I(xj) or I, i.e. the intensity distribution along the slit can be reconstructed by:
$$\textbf{I} = {\textbf{A}^{ - 1}}\textbf{M}$$
where A−1 is the inverse matrix of A. In this work, A is a cyclic S-Matrix of order M, and its inverse can be easily obtained as [24]:
$${\textbf{A}^{ - 1}} = \frac{2}{{M + 1}}(2{\textbf{A}^{\textrm{T}}} - \textbf{J})$$
where J denotes an M×M matrix with every element equal to 1, and the superscript T denotes the matrix transpose.
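As a numerical check of Eq. (4), the following sketch builds a cyclic S matrix of order M = 71 using one standard quadratic-residue construction (valid for prime M ≡ 3 mod 4; the prototype's actual mask layout may use a different but equivalent cyclic S matrix) and verifies the encode/decode round trip of Eqs. (2) and (3):

```python
import numpy as np

def cyclic_s_matrix(p):
    """Cyclic S-matrix of prime order p ≡ 3 (mod 4) from quadratic residues."""
    residues = {(k * k) % p for k in range(1, p)}       # nonzero squares mod p
    first_row = np.array([0 if j in residues else 1 for j in range(p)])
    return np.stack([np.roll(first_row, -i) for i in range(p)])

M = 71                                                   # pixels per imager unit
A = cyclic_s_matrix(M)
A_inv = (2.0 / (M + 1)) * (2 * A.T - np.ones((M, M)))    # Eq. (4)

# A_inv really is the inverse of A:
print(np.allclose(A_inv @ A, np.eye(M)))                 # True

# Round trip through Eqs. (2) and (3): measure, then reconstruct
I_true = np.random.rand(M)
m = A @ I_true                                           # Eq. (2)
print(np.allclose(A_inv @ m, I_true))                    # True, per Eq. (3)
```

The closed-form inverse in Eq. (4) means decoding needs only a transpose, a scale, and a rank-one correction, with no numerical matrix inversion.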

The encoding mask is integrated on a MEMS chip, and the chip is fabricated using silicon-on-insulator (SOI) micromachining technology. The fabrication process is presented in Fig. 3(a). The SOI wafer used here consists of a 400 µm thick Si substrate, a 2 µm thick buried SiO2 layer, and a 25 µm thick silicon device layer. The MEMS structures and encoding mask patterns are fabricated by lithography and deep reactive ion etching (deep RIE) of the silicon device layer. Then, another lithography and deep RIE step is performed on the back side of the SOI wafer to remove the Si substrate under the moving MEMS parts. Finally, the MEMS structures and encoding masks are released by partially etching the buried SiO2 layer with a buffered hydrofluoric (HF) acid solution. The fabricated SOI MEMS chip is presented in Fig. 3(b), showing two encoding masks suspended side by side by folded-beam flexures to implement the proposed cascading scheme. The resonance mode simulated using the finite element method (FEM) is presented in Fig. 3(c), which illustrates the fundamental mode used for scanning the encoding masks across their respective slits.

 

Fig. 3. (a) Fabrication process. (b) Microscope image of the SOI MEMS chip having two cascaded encoding masks. (c) The FEM simulated resonance mode shape.


Next, the SOI MEMS chip is mounted onto an oscillation platform. The platform, fabricated from stainless steel using precision machining, is a displacement amplifier for a piezoelectric translator (PZT) actuator. The amplification performance obtained in an FEM simulation is presented in Fig. 4(a). A 10 µm horizontal expansion displacement input is applied to the structure, equivalently replacing the PZT actuator, and the simulation result indicates that the vertical movement of the platform mounting the MEMS chip can reach 100.8 µm, i.e., a displacement amplification ratio of 10.08. This quasi-static displacement output of the platform should be sufficient to drive the MEMS encoding mask to oscillate with enough amplitude to complete a full encoding cycle when operated at the resonance of the MEMS structure.

 

Fig. 4. (a) FEM simulated amplification result. (b) Top view of the oscillation platform. The oscillation amplitudes of (c) left MEMS encoding mask and (d) right MEMS encoding mask, respectively as functions of the driving frequency.


The top view of the fabricated oscillation platform integrated with the SOI MEMS chip is presented in Fig. 4(b). The PZT actuator is excited by a sinusoidal voltage source to keep the MEMS chip oscillating. As presented in Figs. 4(c) and 4(d), the oscillation amplitudes of the left and right MEMS encoding masks on the chip are measured under an optical microscope to find the best working condition. In Fig. 4(c), the black solid curve with square symbols shows the oscillation recorded when a 20 V to 30 V sinusoidal voltage (25 V DC with a 10 V peak-to-peak AC sine component) is applied as the PZT actuator’s excitation. The oscillation becomes noticeable from 200 Hz, the amplitude then increases rapidly with the driving frequency up to 238 Hz, and it plummets beyond 238 Hz. The largest peak-to-peak mechanical amplitude of 400 µm appears at 238 Hz. However, the encoding mask contains 71 pattern rows, the frame beginning pattern, and 72 opaque gaps between them, spanning about 872 µm in total, so the oscillation amplitude of the MEMS encoding mask must exceed this value to complete a full encoding cycle. The red dashed curve with dot symbols shows the measured result when a 20 V to 60 V sinusoidal voltage (40 V DC with a 40 V peak-to-peak AC sine component) drives the PZT actuator. The mask oscillation amplitude increases rapidly from 200 Hz to 242 Hz, reaching about 900 µm, then increases only slightly as the drive frequency rises further, before suddenly dropping at 290 Hz. This behavior is consistent with the nonlinear spring-hardening effect predicted by the Duffing equation [27]. The oscillation characteristic of the right MEMS encoding mask, presented in Fig. 4(d), is similar to that of the left one. As shown in Figs. 4(c) and 4(d), a 20 V to 60 V sinusoidal drive at 250–290 Hz, with both encoding masks oscillating at about 900 µm peak-to-peak mechanical amplitude, meets the system working requirement. Considering system robustness and to avoid the amplitude jump caused by unexpected external disturbances, 250 Hz is chosen as the operating condition for prototype demonstration. The MEMS chip oscillates back and forth, driving the integrated encoding masks to resonate. The mask passes through the slit twice in one oscillation period, so two sets of image data can be acquired per period. In other words, the ideal imaging frame rate is about 500 Hz, i.e., twice the mechanical oscillation frequency. A further increase of the frame rate to the kHz range is possible through MEMS design, by reducing the mass of the encoding mask or increasing the spring constant of the flexural mask suspension, albeit at the expense of a potential reduction in system SNR due to the shortened sampling period.
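The frame-rate and sampling budget implied by these operating numbers can be checked with a short back-of-envelope calculation (the per-row sample count is only an upper bound, since the resonant motion is sinusoidal rather than uniform):

```python
# Hedged timing-budget sketch using the numbers quoted in the text:
# 250 Hz resonance, 71-row mask, and a 360 kHz DAQ sampling rate.

resonance_hz = 250.0
frames_per_period = 2                 # the mask crosses the slit twice per period
frame_rate = resonance_hz * frames_per_period
print(frame_rate)                     # 500.0 fps (ideal)

# Samples available per frame, and per encoding row, assuming the DAQ samples
# continuously at 360 kHz (upper bound: the crossing speed peaks mid-stroke):
daq_rate_hz = 360e3
rows = 71
samples_per_frame = daq_rate_hz / frame_rate
samples_per_row = samples_per_frame / rows
print(samples_per_frame)              # 720.0
print(round(samples_per_row, 1))      # ~10.1 samples per encoding row
```

Roughly ten samples per row leaves enough margin to resolve each shallow self-referencing dip between rows.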

S10993-05GT Si PIN photodiodes from Hamamatsu Photonics are used as the single-pixel detectors in the proposed line-scan camera system. Because the photosensitive area of the detector (1.06 mm × 1.06 mm) is smaller in one dimension than the slit (2.821 mm × 8 µm), a light concentrator is necessary to converge the encoded light onto the detector. It should be noted that a conventional imaging system using a simple convex lens could easily achieve this detector-area matching with an optical magnification ratio of less than one. However, considering the required parameters of such an imaging system, including the focal length, f-number, and optical magnification, a relatively large distance would be required from the object (slit) to the image (photosensitive area on the detector), making the design bulky. We also note that an imaging system between the encoder and the detector is entirely unnecessary, because the function of the single-pixel detector is simply to collect the total light energy passing through the encoder rather than to resolve the light intensity distribution on it. The CPC [28–31], a non-imaging concentrator originally used in solar energy collection with high concentration ratio and high efficiency, matches our requirements well. CPCs are designed using the edge-ray principle [28] and are usually large in size. To be used in our proposed single-pixel imaging system, CPCs need to be miniaturized. Here we select dielectric CPCs for their enhanced acceptance angle, and to achieve the miniaturization we developed a novel fabrication process.

The proposed fabrication process is presented in Fig. 5(a). Firstly, a stainless steel mold with the shape of the CPC is fabricated by precision diamond turning, as shown in step (i). The shape of the CPC, shown in a perspective view in Fig. 5(b), can be found in [30]. The relationships between the concentration ratio, the length of the CPC, and the maximum acceptance angle can be expressed by the following equations:

$${d_2}/{d_1} = \sin \theta^{\prime}_{\max }$$
$$L = ({1}/{2})({d_1} + {d_2})\cot \theta^{\prime}_{\max }$$
$${N_{\textrm{CPC}}}\sin \theta^{\prime}_{\max } = {N_{\textrm{air}}}\sin {\theta _{\max }}$$
where d1 and d2 are the diameters of the entrance and exit apertures, respectively; L is the CPC length; and θ′max is the acceptance angle inside the CPC at the entrance aperture. Since the CPC material is dielectric, the actual maximum acceptance angle in air, θmax, can be calculated by Snell’s law, where NCPC and Nair are the refractive indices of the polymer and air, respectively. The entrance and exit aperture sizes are fixed by the dimensions of the preceding and succeeding components. With careful optimization of the length and maximum acceptance angle, while considering the efficiency within the acceptance range, d1, d2, L and θmax are designed to be around 3.3 mm, 0.8 mm, 6 mm and 21.4°, respectively. Using a dielectric CPC instead of a conventional hollow mirrored CPC increases the acceptance angle by approximately 8° (i.e., comparing θmax with θ′max), thus increasing the signal collected. Then, in step (ii) of the fabrication process in Fig. 5(a), polydimethylsiloxane (PDMS) is poured onto the stainless steel mold surface and cured by heating at 75 °C for 2 hours to create the complementary CPC mold. After removal from the stainless steel mold, ultraviolet (UV)-curable polymer (Norland Optical Adhesive 63) is injected into the PDMS mold, covered with a glass substrate, and cured under UV light for 30 minutes. After detaching the PDMS, the CPC is complete, as shown in step (v) of Fig. 5(a). A photo of the completed CPCs is presented in Fig. 5(c).
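Eqs. (5)–(7) can be checked numerically with the quoted aperture sizes; the refractive index of NOA 63 (≈1.56) is an assumption here, and the ideal edge-ray length comes out longer than the fabricated 6 mm, consistent with the stated optimization (truncation) of the length:

```python
import math

# Hedged check of the CPC design relations, Eqs. (5)-(7).
d1, d2 = 3.3e-3, 0.8e-3                  # entrance / exit aperture diameters (m)
n_cpc, n_air = 1.56, 1.0                 # assumed refractive indices

theta_p = math.asin(d2 / d1)             # Eq. (5): acceptance angle inside CPC
L_ideal = 0.5 * (d1 + d2) / math.tan(theta_p)              # Eq. (6): cot = 1/tan
theta_max = math.asin(n_cpc / n_air * math.sin(theta_p))   # Eq. (7): Snell's law

print(round(math.degrees(theta_p), 1))   # ~14.0 deg inside the dielectric
print(round(L_ideal * 1e3, 1))           # ~8.2 mm ideal (fabricated: ~6 mm)
print(round(math.degrees(theta_max), 1)) # ~22.2 deg in air (text: ~21.4 deg)
```

The ~8° gain of θmax over θ′max is exactly the dielectric-CPC advantage cited in the text; the small discrepancy in θmax reflects the assumed index and the length optimization.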

 

Fig. 5. (a) Miniature CPC fabrication process. (b) A perspective view of a CPC. (c) Completed CPCs.


For a compact design, a PCB integrated with photodetectors and amplification circuits is prepared, as presented in Fig. 6(a). The fabricated miniature dielectric CPCs are then aligned and attached to their respective detectors, as shown in Fig. 6(a). The PCB is then assembled with the SOI MEMS chip and its PZT driver, with each CPC aligned to its respective encoding mask. The top view of the SOI MEMS chip combined with the dielectric CPCs is presented in Fig. 6(b), and a perspective view of the PZT oscillation platform combined with the PCB is presented in Fig. 6(c). Finally, the slit, fabricated by photolithography and etching of a thin chromium layer on a glass substrate, is aligned to the MEMS encoding mask under an optical microscope with the help of their respective alignment marks. The slit is then assembled and secured to the PZT platform using UV-curable glue.

 

Fig. 6. (a) The PCB board with photodetectors integrated with their respective CPCs and amplification circuits. (b) SOI MEMS chip combined with CPCs. (c) Oscillation platform combined with PCB board.


To demonstrate the operation principle of this line-scan camera, a testing system is constructed as presented schematically in Fig. 7. A white LED array, acting as an object, is placed before a camera lens, and the compact cascaded MEMS encoding device driven by a PZT actuator is placed after the camera lens in the image plane of the LED array. The PCB is powered by an external ±5 V power source, and the PZT actuator is driven by a 20 V to 60 V sine wave at 250 Hz generated by an Agilent 33220A waveform generator and an FLC Electronics A400DI voltage amplifier. The readout signals from the two Si PIN photodiodes are acquired by an NI-DAQ system. The vertical position of the MEMS encoding device is controlled by a manual precision stage, which allows us to emulate a moving camera for push-broom scan operation [32].

 

Fig. 7. Schematic showing the experimental system construction of a line-scan camera.


3. Results and discussion

A typical image recovery process of a line imager unit is as follows. Firstly, the detector output data are acquired. Secondly, the output data are segmented using the self-referencing dips in the data curve. Thirdly, the effective encoded signals are extracted at the midpoint between every two successive dips. Finally, the effective encoded signals are decoded using Eq. (3) to obtain the line imaging result, i.e., a 1D intensity distribution along the slit. Six LEDs in a row are used as an object, as presented in Fig. 8(a). The raw data recorded from the photodetector over one period of mask oscillation with all LEDs turned on are shown in Fig. 8(b)(i) to clearly illustrate the characteristics of the signal. As shown, there is an obvious high-level frame beginning signal, followed by 71 encoded signals separated by dips. These are followed by a time-reversed set of 71 signal patterns and another frame beginning signal, generated as the MEMS mask moves backward. For clearer display, Fig. 8(b)(ii) zooms into the data recorded from 0.5 ms to 0.8 ms to show the details of the signal patterns. The movement speed of the MEMS mask is not constant over an oscillation period, while the data acquisition equipment keeps a fixed sampling rate of 360 kHz; consequently, the durations of the patterns recorded from the photodetector are not uniform, as shown in Fig. 8(b)(ii). However, the self-referencing design keeps the post-processing simple: the peaks between every two successive signal dips are extracted, yielding a complete set of 71 encoded data points, as presented in Fig. 8(c). Next, a decoding process is carried out on the extracted data. The recovered line image with all 6 LEDs turned on is presented in Fig. 8(d). The intensity result shows 6 peaks corresponding to the 6 turned-on LEDs, successfully demonstrating the proposed working principle using MEMS encoding masks. It is also noted that the intensities of the LEDs in the recorded image do not appear uniform. This might be because the imaging slit in our line-scan camera setup is not perfectly aligned to pass through the centers of all the LEDs.
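The SNR benefit of the Hadamard (S-matrix) multiplexing used in this decoding can be illustrated with a hedged simulation (synthetic line image and noise level, not experimental data; the S matrix is one standard quadratic-residue construction):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 71

# Cyclic S-matrix of prime order M = 71 (71 ≡ 3 mod 4) from quadratic residues
residues = {(k * k) % M for k in range(1, M)}
row = np.array([0.0 if j in residues else 1.0 for j in range(M)])
A = np.stack([np.roll(row, -i) for i in range(M)])
A_inv = (2.0 / (M + 1)) * (2 * A.T - np.ones((M, M)))   # Eq. (4)

I_true = np.zeros(M)
I_true[10:70:10] = 1.0          # six bright "LED" pixels along the slit
sigma = 0.05                    # identical additive noise per detector reading

mse_raster, mse_hadamard = 0.0, 0.0
for _ in range(200):
    # Raster scan: one pixel per reading, noise lands directly on each pixel
    mse_raster += np.mean(rng.normal(0, sigma, M) ** 2)
    # Hadamard multiplexing: noise lands on each multiplexed measurement
    m = A @ I_true + rng.normal(0, sigma, M)
    mse_hadamard += np.mean((A_inv @ m - I_true) ** 2)

print(mse_hadamard < mse_raster)  # True: Fellgett's advantage
```

For detector-limited additive noise, the multiplexed reconstruction error is substantially smaller than the pixel-by-pixel (raster) error at the same number of readings.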

 

Fig. 8. (a) Photos showing the object – an array of LEDs when they are turned (i) off and (ii) on. In (a)(ii), the environment light is turned off, which is the actual experimental condition used. (b) The acquired raw data of (i) one period and (ii) the frame beginning. (c) The extracted effective outputs. (d) The recovered line image of the 6 turned-on LEDs.


Next, two line imager units are cascaded in the experiment, doubling the total active pixel number to 142. Here, 12 LEDs in a row are used as the object. When the 12 LEDs are individually turned on or off according to the designated patterns, the corresponding recovered line images are presented in Figs. 9(a)–9(c), respectively. For comparison, the corresponding images of the LEDs taken with a conventional camera are presented in the insets of Figs. 9(a)–9(c). In our design, because the two MEMS encoding masks are not immediately next to each other, as shown in Fig. 3(b), there is a small spatial gap between the two line imager units where the line-scan camera is not responsive. This gap corresponds to a total of 18 dead pixels, which are assigned zero intensity values in the captured line images. The number of dead pixels can be significantly reduced by revising the SOI MEMS encoder design to place the two masks immediately next to each other, at the expense of a more demanding alignment between the mask and the CPC during assembly. As shown in Fig. 9, the experimental results indicate that the cascaded line-scan camera works well with satisfactory performance. To emulate the push-broom scanning operation of the developed line-scan camera, the cascaded MEMS encoding device is moved vertically in the image plane of the lens by manually controlling a precision xyz stage. An array of 40 white LEDs arranged in the letter shapes “M” and “W” is used as an object, as shown in a conventional camera photo in Fig. 9(d). The 2D image of the object recovered with 6 steps of push-broom scanning, each centered at its respective row of LEDs, is presented in Fig. 9(e). The recovered image displays clear “M” and “W” shapes, which proves the principle of push-broom scanning using the cascaded line-scan camera for 2D imaging.
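Stitching the two units' reconstructions into one line, with the dead-pixel gap zeroed as described, can be sketched as follows (the helper below is illustrative, not the actual processing code):

```python
import numpy as np

def stitch_units(unit_images, dead_pixels=18):
    """Concatenate per-unit line images, inserting zeroed dead pixels
    between adjacent units to represent the unresponsive gap."""
    gap = np.zeros(dead_pixels)
    parts = []
    for i, img in enumerate(unit_images):
        if i > 0:
            parts.append(gap)
        parts.append(img)
    return np.concatenate(parts)

# Two 71-pixel reconstructions (dummy values) joined by an 18-pixel dead gap
left = np.ones(71)
right = np.full(71, 2.0)
line = stitch_units([left, right])
print(line.size)  # 160 = 71 + 18 + 71 displayed samples, 142 of them active
```

Placing the masks immediately adjacent on the chip would shrink `dead_pixels` toward zero at the cost of tighter CPC alignment, as noted above.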

 

Fig. 9. The reconstructed 1D image when (a) 12 LEDs are all turned on, (b) the 1st, 2nd, 5th, 6th, 8th, 9th, 10th and 12th LEDs are turned on, (c) the 1st, 3rd, 4th, 5th, 6th, 7th, 9th, 11th and 12th LEDs are turned on, (d) 40 LEDs arranged in alphabet shapes “W” and “M”. (e) The 2D image recovered by the cascaded line-scan camera using push-broom scanning.


4. Conclusion

A self-referencing cascaded line-scan camera is proposed and verified. With single-pixel detectors, the system is a low-cost alternative to a traditional line-scan camera and, more importantly, can be easily configured to function at any wavelength. The basic line imager unit integrates a slit, a MEMS encoding mask, a light concentrator, and a single-pixel photodetector. We further demonstrate that the spatial resolution of the line-scan camera can be increased N-fold simply by cascading N of the proposed line imager units. This approach can potentially achieve high spatial resolution at low cost. For prototype demonstration, a line-scan camera composed of two line imager units is prepared and tested. Multiplexing detection using cyclic S matrices is applied to the encoding/decoding process, which can potentially enhance the SNR. The encoding masks are implemented using MEMS technology and incorporate a self-referencing design that simplifies the data acquisition process, enabling the system to work in a simple but efficient open-loop mode. The encoding masks are driven at their resonant frequency of about 250 Hz, which indicates an ideal frame rate of 500 Hz. The frame rate can potentially be further increased by MEMS optimization. The proposed line-scan camera might be useful in future industrial inspection and other relevant applications.

Funding

Ministry of Education - Singapore (MOE) (Tier1 R-265-000-557-112).

References

1. A. Kho and V. J. Srinivasan, “Compensating spatially dependent dispersion in visible light OCT,” Opt. Lett. 44(4), 775–778 (2019).

2. G. Polder, P. M. Blok, H. A. C. De Villiers, J. M. Van Der Wolf, and J. Kamp, “Potato Virus Y Detection in Seed Potatoes Using Deep Learning on Hyperspectral Images,” Front. Plant Sci. 10, 209 (2019).

3. J. Jiang, X. Feng, F. Liu, Y. Xu, and H. Huang, “Multi-Spectral RGB-NIR Image Classification Using Double-Channel CNN,” IEEE Access 7, 20607–20613 (2019).

4. J. Qin, M. S. Kim, K. Chao, L. Bellato, W. F. Schmidt, B.-K. Cho, and M. Huang, “Inspection of maleic anhydride in starch powder using line-scan hyperspectral Raman chemical imaging technique,” Int. J. Agric. Biol. Eng. 11(6), 120–125 (2018).

5. J. Qin, M. S. Kim, K. Chao, M. Gonzalez, and B.-K. Cho, “Quantitative detection of benzoyl peroxide in wheat flour using line-scan macroscale Raman chemical imaging,” Appl. Spectrosc. 71(11), 2469–2476 (2017).

6. J. Qin, M. S. Kim, K. Chao, S. Dhakal, H. Lee, B.-K. Cho, and C. Mo, “Detection and quantification of adulterants in milk powder using a high-throughput Raman chemical imaging technique,” Food Addit. Contam., Part A 34(2), 152–161 (2017).

7. B. Sun, J. Zhu, L. Yang, Y. Guo, and J. Lin, “Stereo line-scan sensor calibration for 3D shape measurement,” Appl. Opt. 56(28), 7905–7914 (2017).

8. E. Lilienblum and A. Al-Hamadi, “A structured light approach for 3-D surface reconstruction with a stereo line-scan system,” IEEE Trans. Instrum. Meas. 64(5), 1258–1266 (2015).

9. Z. Liu, S. Wu, Q. Wu, C. Quan, and Y. Ren, “A novel stereo vision measurement system using both line scan camera and frame camera,” IEEE Trans. Instrum. Meas. (to be published) (2018).

10. A. Davis, O. Levecq, H. Azimani, D. Siret, and A. Dubois, “Simultaneous dual-band line-field confocal optical coherence tomography: application to skin imaging,” Biomed. Opt. Express 10(2), 694–706 (2019).

11. A. Dubois, O. Levecq, H. Azimani, A. Davis, J. Ogien, D. Siret, and A. Barut, “Line-field confocal time-domain optical coherence tomography with dynamic focusing,” Opt. Express 26(26), 33534–33542 (2018).

12. A. Dubois, O. Levecq, H. Azimani, D. Siret, A. Barut, M. Suppa, V. del Marmol, J. Malvehy, E. Cinotti, P. Rubegni, and J. Perrot, “Line-field confocal optical coherence tomography for high-resolution noninvasive imaging of skin tumors,” J. Biomed. Opt. 23(10), 1 (2018).

13. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014).

14. M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).

15. Z. Xu, W. Chen, J. Penuelas, M. Padgett, and M. Sun, “1000 fps computational ghost imaging using LED-based structured illumination,” Opt. Express 26(3), 2427–2434 (2018).

16. M. Sun and J. Zhang, “Single-pixel imaging and its application in three-dimensional reconstruction: a brief review,” Sensors 19(3), 732 (2019).

17. M. Sun, W. Chen, T. Liu, and L. Li, “Image retrieval in spatial and temporal domains with a quadrant detector,” IEEE Photonics J. 9(5), 1–6 (2017).

18. E. Aguénounon, F. Dadouche, W. Uhring, N. Ducros, and S. Gioux, “Single snapshot imaging of optical properties using a single-pixel camera: a simulation study,” J. Biomed. Opt. 24(7), 071612 (2019).

19. S. Jiao, M. Sun, Y. Gao, T. Lei, Z. Xie, and X. Yuan, “Motion estimation and quality enhancement for a single image in dynamic single-pixel imaging,” Opt. Express 27(9), 12841–12854 (2019).

20. D. Shi, J. Huang, W. Meng, K. Yin, B. Sun, Y. Wang, K. Yuan, C. Xie, D. Liu, and W. Zhu, “Radon single-pixel imaging with projective sampling,” Opt. Express 27(10), 14594–14609 (2019).

21. L. Liao, K. Li, C. Yang, and J. Liu, “Low-cost image compressive sensing with multiple measurement rates for object detection,” Sensors 19(9), 2079 (2019).

22. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

23. R. J. Bell, Introductory Fourier Transform Spectroscopy (Academic, 1972).

24. M. Harwit, Hadamard Transform Optics (Elsevier, The Netherlands, 1979).

25. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017). [CrossRef]  

26. L. J. Hornbeck, “Digital Light Processing™ for high-brightness, high-resolution applications,” Proc. SPIE 3013, 27–40 (1997). [CrossRef]

27. A. M. Elshurafa, K. Khirallah, H. H. Tawfik, A. Emira, A. K. S. A. Aziz, and S. M. Sedky, “Nonlinear dynamics of spring softening and hardening in folded-MEMS comb drive resonators,” J. Microelectromech. Syst. 20(4), 943–958 (2011). [CrossRef]  

28. W. T. Welford and R. Winston, High Collection Nonimaging Optics (Academic, 1989).

29. R. Levi-Setti, D. A. Park, and R. Winston, “The corneal cones of Limulus as optimised light concentrators,” Nature 253(5487), 115–116 (1975). [CrossRef]  

30. R. Winston and J. M. Enoch, “Retinal cone receptor as an ideal light collector,” J. Opt. Soc. Am. 61(8), 1120–1121 (1971). [CrossRef]

31. L. Li, B. Wang, J. Pottas, and W. Lipiński, “Design of a compound parabolic concentrator for a multi-source high-flux solar simulator,” Sol. Energy 183, 805–811 (2019). [CrossRef]  

32. P. Mouroulis, R. O. Green, and T. G. Chrien, “Design of pushbroom imaging spectrometers for optimum recovery of spectroscopic and spatial information,” Appl. Opt. 39(13), 2210–2220 (2000). [CrossRef]  

[Crossref]

Wang, B.

L. Li, B. Wang, J. Pottas, and W. Lipiński, “Design of a compound parabolic concentrator for a multi-source high-flux solar simulator,” Sol. Energy 183, 805–811 (2019).
[Crossref]

Wang, X.

Wang, Y.

Welford, W. T.

W. T. Welford and R. Winston, High Collection Nonimaging Optics (Academic, 1989).

Welsh, S. S.

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

Winston, R.

R. Levi-Setti, D. A. Park, and R. Winston, “The corneal cones of Limulus as optimised light concentrators,” Nature 253(5487), 115–116 (1975).
[Crossref]

R. Winston and J. M. Enoch, “Retinal cone receptor as an ideal light collectort,” J. Opt. Soc. Am. 61(8), 1120–1121 (1971).
[Crossref]

W. T. Welford and R. Winston, High Collection Nonimaging Optics (Academic, 1989).

Wu, Q.

Z. Liu, S. Wu, Q. Wu, C. Quan, and Y. Ren, “A novel stereo vision measurement system using both line scan camera and frame camera,” IEEE Trans. Instrum. Meas. (to be published) (2018).

Wu, S.

Z. Liu, S. Wu, Q. Wu, C. Quan, and Y. Ren, “A novel stereo vision measurement system using both line scan camera and frame camera,” IEEE Trans. Instrum. Meas. (to be published) (2018).

Xie, C.

Xie, Z.

Xu, Y.

J. Jiang, X. Feng, F. Liu, Y. Xu, and H. Huang, “Multi-Spectral RGB-NIR Image Classification Using Double-Channel CNN,” IEEE Access 7, 20607–20613 (2019).
[Crossref]

Xu, Z.

Yang, C.

L. Liao, K. Li, C. Yang, and J. Liu, “Low-cost image compressive sensing with multiple measurement rates for object detection,” Sensors 19(9), 2079 (2019).
[Crossref]

Yang, L.

Yin, K.

Yuan, K.

Yuan, X.

Zhang, J.

M. Sun and J. Zhang, “Single-pixel imaging and its application in three-dimensional reconstruction: a brief review,” Sensors 19(3), 732 (2019).
[Crossref]

Zhang, Z.

Zheng, G.

Zhong, J.

Zhu, J.

Zhu, W.

Appl. Opt. (2)

Appl. Spectrosc. (1)

Biomed. Opt. Express (1)

Food Addit. Contam., Part A (1)

J. Qin, M. S. Kim, K. Chao, S. Dhakal, H. Lee, B.-K. Cho, and C. Mo, “Detection and quantification of adulterants in milk powder using a high-throughput Raman chemical imaging technique,” Food Addit. Contam., Part A 34(2), 152–161 (2017).
[Crossref]

Front. Plant Sci. (1)

G. Polder, P. M. Blok, H. A. C. De Villiers, J. M. Van Der Wolf, and J. Kamp, “Potato Virus Y Detection in Seed Potatoes Using Deep Learning on Hyperspectral Images,” Front. Plant Sci. 10, 209 (2019).
[Crossref]

IEEE Access (1)

J. Jiang, X. Feng, F. Liu, Y. Xu, and H. Huang, “Multi-Spectral RGB-NIR Image Classification Using Double-Channel CNN,” IEEE Access 7, 20607–20613 (2019).
[Crossref]

IEEE Photonics J. (1)

M. Sun, W. Chen, T. Liu, and L. Li, “Image retrieval in spatial and temporal domains with a quadrant detector,” IEEE Photonics J. 9(5), 1–6 (2017).
[Crossref]

IEEE Signal Process. Mag. (1)

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).
[Crossref]

IEEE Trans. Instrum. Meas. (1)

E. Lilienblum and A. Al-Hamadi, “A structured light approach for 3-D surface reconstruction with a stereo line-scan system,” IEEE Trans. Instrum. Meas. 64(5), 1258–1266 (2015).
[Crossref]

Int. J. Agric. Biol. Eng. (1)

J. Qin, M. S. Kim, K. Chao, L. Bellato, W. F. Schmidt, B.-K. Cho, and M. Huang, “Inspection of maleic anhydride in starch powder using line-scan hyperspectral Raman chemical imaging technique,” Int. J. Agric. Biol. Eng. 11(6), 120–125 (2018).
[Crossref]

J. Biomed. Opt. (2)

E. Aguénounon, F. Dadouche, W. Uhring, N. Ducros, and S. Gioux, “Single snapshot imaging of optical properties using a single-pixel camera: a simulation study,” J. Biomed. Opt. 24(7), 071612 (2019).
[Crossref]

A. Dubois, O. Levecq, H. Azimani, D. Siret, A. Barut, M. Suppa, V. del Marmol, J. Malvehy, E. Cinotti, P. Rubegni, and J. Perrot, “Line-field confocal optical coherence tomography for high-resolution noninvasive imaging of skin tumors,” J. Biomed. Opt. 23(10), 1 (2018).
[Crossref]

J. Microelectromech. Syst. (1)

A. M. Elshurafa, K. Khirallah, H. H. Tawfik, A. Emira, A. K. S. A. Aziz, and S. M. Sedky, “Nonlinear dynamics of spring softening and hardening in folded-MEMS comb drive resonators,” J. Microelectromech. Syst. 20(4), 943–958 (2011).
[Crossref]

J. Opt. Soc. Am. (1)

Nature (1)

R. Levi-Setti, D. A. Park, and R. Winston, “The corneal cones of Limulus as optimised light concentrators,” Nature 253(5487), 115–116 (1975).
[Crossref]

Opt. Express (5)

Opt. Lett. (1)

Optica (1)

Proc. SPIE (1)

L. J. Hornbeck, “Digital Light ProcessingTM for high-brightness, high-resolution applications,” Proc. SPIE 3013, 27–40 (1997).
[Crossref]

Sci. Rep. (1)

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

Sensors (2)

M. Sun and J. Zhang, “Single-pixel imaging and its application in three-dimensional reconstruction: a brief review,” Sensors 19(3), 732 (2019).
[Crossref]

L. Liao, K. Li, C. Yang, and J. Liu, “Low-cost image compressive sensing with multiple measurement rates for object detection,” Sensors 19(9), 2079 (2019).
[Crossref]

Sol. Energy (1)

L. Li, B. Wang, J. Pottas, and W. Lipiński, “Design of a compound parabolic concentrator for a multi-source high-flux solar simulator,” Sol. Energy 183, 805–811 (2019).
[Crossref]

Other (4)

R. J. Bell, Introductory Fourier Transform Spectroscopy (Academic, 1972).

M. Harwit, Hadamard Transform Optics (Elsevier, The Netherlands, 1979).

W. T. Welford and R. Winston, High Collection Nonimaging Optics (Academic, 1989).

Z. Liu, S. Wu, Q. Wu, C. Quan, and Y. Ren, “A novel stereo vision measurement system using both line scan camera and frame camera,” IEEE Trans. Instrum. Meas. (to be published) (2018).


Figures (9)

Fig. 1. The schematic of the self-referencing cascaded line-scan camera.

Fig. 2. (a) The schematic of a slit and an encoding mask. (b) Self-referencing working mechanism. (c) Microscope picture of one encoding mask.

Fig. 3. (a) Fabrication process. (b) Microscope image of the SOI MEMS chip having two cascaded encoding masks. (c) The FEM-simulated resonance mode shape.

Fig. 4. (a) FEM-simulated amplification result. (b) Top view of the oscillation platform. The oscillation amplitudes of the (c) left and (d) right MEMS encoding masks as functions of the driving frequency.

Fig. 5. (a) Miniature CPC fabrication process. (b) A perspective view of a CPC. (c) Completed CPCs.

Fig. 6. (a) The PCB with photodetectors integrated with their respective CPCs and amplification circuits. (b) SOI MEMS chip combined with CPCs. (c) Oscillation platform combined with the PCB.

Fig. 7. Schematic showing the experimental system construction of a line-scan camera.

Fig. 8. (a) Photos showing the object, an array of LEDs, when they are turned (i) off and (ii) on. In (a)(ii), the ambient light is turned off, which is the actual experimental condition used. (b) The acquired raw data of (i) one period and (ii) the frame beginning. (c) The extracted effective outputs. (d) The recovered line image of the 6 turned-on LEDs.

Fig. 9. The reconstructed 1D image when (a) all 12 LEDs are turned on, (b) the 1st, 2nd, 5th, 6th, 8th, 9th, 10th and 12th LEDs are turned on, (c) the 1st, 3rd, 4th, 5th, 6th, 7th, 9th, 11th and 12th LEDs are turned on, and (d) 40 LEDs arranged in the letter shapes “W” and “M” are turned on. (e) The 2D image recovered by the cascaded line-scan camera using push-broom scanning.

Equations (7)


$$m_i = \sum_{j=1}^{M} a_{ij}\, I(x_j)$$

$$\mathbf{M} = \mathbf{A}\mathbf{I}$$

$$\mathbf{I} = \mathbf{A}^{-1}\mathbf{M}$$

$$\mathbf{A}^{-1} = \frac{2}{M+1}\left(2\mathbf{A}^{T} - \mathbf{J}\right)$$
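The closed-form S-matrix inverse in the last equation can be verified numerically. The sketch below (Python with NumPy) builds an order-3 S-matrix from a Sylvester Hadamard matrix, checks the inverse formula, and recovers a simulated line of intensities from multiplexed single-pixel measurements; the matrix order and the test intensity values are illustrative, not taken from the paper:

```python
import numpy as np

# Sylvester-type Hadamard matrix of order 4.
H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)

# The S-matrix (order M = 3) is the Hadamard core with +1 -> 0, -1 -> 1,
# i.e. the 0/1 open/blocked pattern of the encoding mask.
A = (1 - H4[1:, 1:]) // 2
M_order = A.shape[0]
J = np.ones_like(A)                 # all-ones matrix

# Closed-form inverse: A^{-1} = 2/(M+1) * (2 A^T - J)
A_inv = 2.0 / (M_order + 1) * (2 * A.T - J)
assert np.allclose(A @ A_inv, np.eye(M_order))

# Multiplexed measurement and recovery: M = A I, then I = A^{-1} M
I_true = np.array([3.0, 1.0, 2.0])  # hypothetical line-pixel intensities
m = A @ I_true                      # single-pixel detector readings
I_rec = A_inv @ m
print(I_rec)                        # -> [3. 1. 2.]
```

In practice each measurement m_i sums the light passed by one mask pattern onto the single-pixel detector, which is why the multiplexing gain in SNR appears only after applying the inverse transform.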
$$d_2 / d_1 = \sin\theta_{\max}$$

$$L = \frac{1}{2}\,(d_1 + d_2)\cot\theta_{\max}$$

$$N_{\mathrm{CPC}} \sin\theta'_{\max} = N_{\mathrm{air}} \sin\theta_{\max}$$
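The first two geometric relations fix an ideal CPC's proportions once its entrance aperture and half acceptance angle are chosen. A small design-helper sketch (assuming a hollow, air-filled CPC and taking d1 as the larger entrance aperture; the numeric values are hypothetical):

```python
import math

def cpc_dimensions(d1, theta_max_deg):
    """Exit-aperture width and length of an ideal CPC.

    d1: entrance-aperture width; theta_max_deg: half acceptance angle.
    Uses d2/d1 = sin(theta_max) and L = (1/2)(d1 + d2)cot(theta_max).
    """
    theta = math.radians(theta_max_deg)
    d2 = d1 * math.sin(theta)              # exit aperture (toward detector)
    L = 0.5 * (d1 + d2) / math.tan(theta)  # concentrator length
    return d2, L

# Hypothetical design point: 2 mm entrance aperture, 30 deg half acceptance
d2, L = cpc_dimensions(2.0, 30.0)
print(round(d2, 3), round(L, 3))           # -> 1.0 2.598
```

The geometric concentration ratio d1/d2 = 1/sin(theta_max) makes the trade-off explicit: a narrower acceptance angle yields higher concentration onto the single-pixel detector but a longer concentrator.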
