## Abstract

To obtain a high imaging frame rate, a computational ghost imaging system scheme is proposed based on an optical fiber phased array (OFPA). Through high-speed electro-optic modulators, the randomly modulated OFPA can provide much faster speckle projection, and the speckle patterns can be precomputed from the geometry of the fiber array and the known modulation phases. Receiving the signal light with a low-pixel APD array effectively reduces the required number of samples and the computational complexity, owing to the reduced data dimensionality, while avoiding the image aliasing caused by the spatial periodicity of the speckles. Analysis and simulation show that the frame rate of the proposed imaging system can be significantly improved compared with traditional systems.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Ghost imaging (GI) has attracted worldwide attention since it was first reported by Pittman and Shih in 1995 [1], for its distinctive feature of nonlocal imaging with a single-pixel detector that provides no spatial resolution. The original experimental setup used a two-photon entangled source, but classical thermal light was afterwards proved experimentally to be feasible [2–4]. The most commonly used pseudo-thermal light is obtained by passing a coherent beam through a dynamic diffuser (such as a rotating ground glass plate) [5–7], and thus a coincidence or correlation measurement between the signal and reference arms is necessary to obtain the image of the object. In 2008, Shapiro proposed the computational ghost imaging (CGI) scheme based on a phase spatial light modulator (SLM) [8], in which a high-resolution sensor in the reference arm is not compulsory because the reference speckle field can be precomputed according to the model of free-space wave propagation. In 2009, CGI using a phase-only SLM (i.e. a liquid-crystal SLM) was first demonstrated [9]. Since then, CGI has become a research focus owing to the convenience of its system configuration for practical applications. Many different CGI systems have been developed [10–18], using light projectors based on the digital micro-mirror device (DMD) [10, 13–15], the liquid-crystal SLM (LC-SLM) [11,12,16], an InP-based integrated optical phased array (OPA) [17] and an LED-based illumination module [18]. To improve the image quality, various compressive sensing based imaging methods have been studied [19–22]. In 2015, a novel CGI scheme was presented using spatial sectioning and multiple bucket detectors based on a multi-fluorescent screen, which achieves better performance in image quality and sample requirement than the conventional schemes [23].
Furthermore, over the years, the selective CGI and modified selective CGI methods were put forward in succession to improve the image reconstruction quality [24, 25]. Meanwhile, many researchers have turned their attention to applications of CGI in fields such as remote sensing [26], three-dimensional imaging [27–30], imaging through turbulent atmosphere [31, 32], optical encryption [24, 33, 34], digital holography [35] and computational temporal GI [36].

Imaging speed, or frame rate, is a pivotal factor for the application of CGI in some fields. As a single-pixel imaging technique, CGI requires multiple temporal measurements to retrieve a spatially resolved image of a scene and thus has a relatively low frame rate, much like the DMD-based single-pixel camera [37–40]. In theory, the imaging speed of a thermal GI system depends on the speed of the speckle pattern projection and on the image recovery algorithm. Using the same algorithm, the faster the speckle patterns are projected, the more samples per unit time and thus the higher the imaging speed that can be achieved. Most existing systems employ either an LC-SLM or a DMD to generate the speckles. However, the speed of speckle projection is physically restricted by the response speed, which is on the kilohertz (kHz) level for the LC-SLM and no more than 40 kHz for the DMD. To increase the refresh rate, a coprime-frequency sinusoidal modulation method for an LC-SLM was proposed [41]. To bypass the limitation of the DMD, researchers presented a modulation frequency multiplication scheme based on spatial sweeping using a pair of galvanometer mirrors and obtained a frame rate of 42 Hz at 80 × 80 resolution [42]. Just recently, a notable CGI scheme using an LED-based high-speed illumination module was reported, achieving an exciting frame rate of 1000 fps at 32 × 32 resolution [18]. In particular, a new spatial sampling scheme using a quadrant detector was investigated for the single-pixel camera [40], which provides a new approach to improving the imaging frame rate.

In this paper, we present a CGI system scheme using randomly phased fiber array beams, i.e. an optical fiber phased array (OFPA), employing high-speed electro-optic phase modulators of up to tens of gigahertz (GHz), which makes it possible to modulate the laser beam much faster than the conventional SLM and DMD systems and thus attain much faster speckle pattern projection. Furthermore, a low-pixel APD array detector (e.g. 5 × 5) is adopted to receive the signal light, which lowers the required number of samples and further increases the imaging speed whilst avoiding spatial aliasing. By these two means, a theoretical CGI frame rate of over 100 kHz can be expected. The rest of the paper is organized as follows: in Section 2, the schematic system of the light source using the OFPA is first illustrated and the properties of the light field are then investigated. Based on the analysis, the receiving scheme and the overall CGI system scheme are correspondingly designed in Section 3. In Section 4, the imaging process is formulated and the performance of the proposed CGI system is analyzed and evaluated by numerical simulations. Finally, the paper is summarized and the challenges in implementing the experimental system are briefly discussed.

## 2. Fast time-variant speckle source based on OFPA

In theory, the speckle or structured patterns for CGI are formed by spatially modulating the wavefront of a laser beam, which is conventionally performed with an SLM (LC-SLM or DMD). Therefore, the speed of pattern formation hinges on the modulation speed that the SLM can provide. To bypass the limitation of the response time of the conventional SLM, a fast time-variant speckle source based on an OFPA is proposed, in which high-speed electro-optic phase modulators are selected to enhance the modulation speed.

#### 2.1 Schematic system

As shown in Fig. 1, the schematic system of the speckle source is mainly composed of a coherent seed laser (e.g. a DFB laser) driven by a continuous (i.e. direct-current) or pulsed signal, optical fiber amplifiers (OFA), electro-optic phase modulators (PM) and their driver, and a fiber array (a bundle of parallel arranged fibers) as the emission antenna. Excited by the signal source, the continuous or pulsed coherent laser is split after the pre-amplifying OFA and coupled into *N* single-mode polarization-maintaining fibers, which are then modulated at high speed using phase modulators driven by a random phase source. At a distance from the fiber array, which may be circular, square or hexagonal, the coherent spatial superposition of the *N* randomly phased beams forms a fast time-variant speckle field, namely a pseudo-thermal light field. It should be noted that the optical fibers used for interconnection and the fiber array should be polarization maintaining. To improve the output power, the laser beam before the splitter can be further amplified using a multistage OFA.

In accordance with the output seed laser, the speckle source can operate in two modes, i.e. continuous or pulsed speckle projection, aimed at imaging a two-dimensional (small depth of field) scene and a three-dimensional (relatively large depth of field) scene, respectively. Under the continuous laser mode, the available speed of speckle pattern projection rests mainly on the modulation speed of the phase modulators, which can practically reach up to several GHz, at least three orders of magnitude faster than the existing systems. For the pulsed mode, the pulse duration must be long enough to meet the requirement of coherent combination; meanwhile, the speed of phase modulation should coincide with the repetition rate of the pulsed laser. Consequently, the modulation speed must be restricted to prevent range ambiguity in recovering the image; even so, the speckle patterns can still be projected at a frequency of no less than the megahertz (MHz) level, which is much faster than the conventional SLM systems.

#### 2.2 Properties of the light field

Before designing the receiving scheme of the imaging system and analyzing the imaging performance, it is necessary to investigate the properties of the light field.

In general, a beam from a single-mode fiber can be regarded as a Gaussian beam. Given a two-dimensional array, each beam at the output end of the fiber array can accordingly be modeled as

$$E_n(x,y)=a_n\exp\!\left[-\frac{(x-x_n)^2+(y-y_n)^2}{\omega_0^2}\right]\exp\!\left[i\left(\phi_{n,p}+\phi_{in}\right)\right] \qquad (1)$$

where $a_n$ and $(x_n, y_n)$ are the amplitude and the center coordinate of the *n*th beam in the *x*-*y* plane (i.e. the output plane of the fiber array), respectively, and $\omega_0$ is the size of the light spot. The term $\phi_{n,p}$ is the random phase of the *p*th period imposed on the *n*th beam. The initial phase $\phi_{in}$ of the *n*th beam is slowly time-variant and is herein deemed constant because of the much higher modulation speed.
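As a numerical illustration, the Gaussian beam model above can be sketched in a few lines. The spot size of 5 μm and spacing of 80 μm follow the values quoted later for Fig. 2, while the 6 × 6 array size is a hypothetical choice, not a parameter taken from Table 1.

```python
import numpy as np

# Superposition of N = S*S Gaussian beams at the fiber-array output plane.
# S = 6 is hypothetical; w0 = 5 um and d = 80 um follow the Fig. 2 values.
S, d, w0 = 6, 80e-6, 5e-6

grid = np.linspace(-4 * d, 4 * d, 512)           # x-y sampling grid [m]
X, Y = np.meshgrid(grid, grid)
centers = (np.arange(S) - (S - 1) / 2) * d       # beam centers along one axis

rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, (S, S))       # phi_{n,p}, uniform on [0, 2*pi)

# a_n = 1 for all beams; the slow initial phases phi_in are assumed compensated
E = np.zeros_like(X, dtype=complex)
for i, xc in enumerate(centers):
    for j, yc in enumerate(centers):
        E += np.exp(-((X - xc)**2 + (Y - yc)**2) / w0**2) * np.exp(1j * phases[i, j])

print(E.shape)
```

Each new draw of `phases` corresponds to one modulation period *p*, i.e. one projected speckle pattern.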

When the observed distance *D* in the paraxial region is long enough that the Fraunhofer approximation holds, the field distribution is the Fourier transform of the total field in the *x*-*y* plane according to Fraunhofer diffraction theory. In the *ξ*-*η* plane, the light field of the *p*th modulation period, or the *p*th speckle pattern, can be shown as

$$E_p(\xi,\eta)=\frac{e^{ikD}}{i\lambda D}\exp\!\left[\frac{ik}{2D}\left(\xi^2+\eta^2\right)\right]\iint\sum_{n=1}^{N}E_n(x,y)\exp\!\left[-\frac{ik}{D}\left(x\xi+y\eta\right)\right]\mathrm{d}x\,\mathrm{d}y \qquad (2)$$

where *k* is the spatial wave number and *D* is the distance between the *x*-*y* plane and the observed *ξ*-*η* plane. Thus, the intensity distribution of the *p*th speckle can be expressed by

$$I_p(\xi,\eta)=\left|E_p(\xi,\eta)\right|^2 \qquad (3)$$

Note that the amplitudes $a_n$ and the phases $\phi_{in}$ must be measured or compensated in some way for the computation of the pattern.

As the random phase modulation is continuously conducted, a fast time-variant speckle field, or pseudo-thermal light, forms. The spatial correlation of the light field can be analyzed with the spatial cross-correlation coefficient of any two given points, $\boldsymbol{\rho}_1(\xi_1,\eta_1)$ and $\boldsymbol{\rho}_2(\xi_2,\eta_2)$, in the observed plane, which is defined as

$$g\left(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2\right)=\frac{\left\langle I\left(\boldsymbol{\rho}_1\right)I\left(\boldsymbol{\rho}_2\right)\right\rangle}{\left\langle I\left(\boldsymbol{\rho}_1\right)\right\rangle\left\langle I\left(\boldsymbol{\rho}_2\right)\right\rangle} \qquad (6)$$

Without loss of generality, assume that the phases $\phi_{in}$ have been compensated, and the modulation phases $\phi_{n,p}$ are independent of each other and uniformly distributed between 0 and 2*π*. Substituting Eq. (3) into Eq. (6), it can be shown [43] that

$$g\left(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2\right)=1+\frac{I_{KF}\left(\xi_1-\xi_2,\,\eta_1-\eta_2\right)-\sum_{n=1}^{N}a_n^4}{\left(\sum_{n=1}^{N}a_n^2\right)^2},\qquad I_{KF}(\xi,\eta)=\left|\sum_{n=1}^{N}a_n^2\exp\!\left[\frac{ik}{D}\left(x_n\xi+y_n\eta\right)\right]\right|^2 \qquad (7)$$
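Numerically, the far-field speckle of one modulation period can be approximated by a discrete Fourier transform of the aperture field, mirroring the Fraunhofer relation above. The grid and array parameters below are illustrative (the 6 × 6 array size is an assumption; the optical values follow those quoted for Fig. 2).

```python
import numpy as np

# One OFPA speckle pattern via the Fraunhofer approximation:
# far field ~ Fourier transform of the aperture field, intensity = |far field|^2.
S, d, w0 = 6, 80e-6, 5e-6         # hypothetical 6 x 6 array, Fig. 2 optics
lam, D = 1550e-9, 25.0
N, dx = 1024, 2e-6                # grid points and aperture-plane pitch [m]

x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
centers = (np.arange(S) - (S - 1) / 2) * d

rng = np.random.default_rng(1)
E = np.zeros((N, N), dtype=complex)
for xc in centers:
    for yc in centers:
        E += np.exp(-((X - xc)**2 + (Y - yc)**2) / w0**2) \
             * np.exp(1j * rng.uniform(0, 2 * np.pi))

E_far = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(E)))
I_p = np.abs(E_far)**2            # speckle intensity in the observation plane

xi = lam * D * np.fft.fftshift(np.fft.fftfreq(N, dx))  # xi = lam * D * f_x
print(I_p.shape)
```

This is exactly how the reference patterns can be precomputed in software from the known modulation phases and array geometry.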

Evidently, the kernel function *I _{KF}*(*ξ*, *η*) is primarily determined by the array geometry, since the amplitudes of the *N* beams can be so close to each other that their influence can be neglected. If the amplitudes are set equal, the kernel function *I _{KF}*(*ξ*, *η*) takes the same form as the intensity distribution *I _{SP}*(*ξ*, *η*) when the *N* beams are in phase [43], and the auto-correlation coefficient (i.e. the maximum of the cross-correlation coefficient) simplifies to 2 − 1/*N*, which becomes very close to 2 when *N* is large enough.

For some regular arrays, the kernel function and thus the cross-correlation coefficient have a closed form. For example, the kernel function of an *S* × *S* square array can be shown as

$$I_{KF}(\xi,\eta)\propto\frac{\sin^2\!\left(S\pi d\xi/\lambda D\right)}{S^2\sin^2\!\left(\pi d\xi/\lambda D\right)}\cdot\frac{\sin^2\!\left(S\pi d\eta/\lambda D\right)}{S^2\sin^2\!\left(\pi d\eta/\lambda D\right)} \qquad (9)$$

where *d* is the spacing between two neighboring beams.
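The square-array kernel can be checked numerically: the FWHM of its main lobe should match the lateral coherence length 0.886*λD*/(*Sd*) used later in Section 4.4. The parameters below are illustrative (*S* = 6 is an assumption).

```python
import numpy as np

# FWHM of the 1-D array factor of an S x S square array, compared with
# the closed-form coherence length 0.886 * lam * D / (S * d).
S, d = 6, 80e-6
lam, D = 1550e-9, 25.0

xi = np.linspace(-0.1, 0.1, 400001)        # well inside the first grating lobe
a = np.pi * d * xi / (lam * D)
with np.errstate(invalid="ignore", divide="ignore"):
    F = np.where(np.abs(np.sin(a)) < 1e-12, 1.0,
                 (np.sin(S * a) / (S * np.sin(a)))**2)

above = xi[F >= 0.5]                       # only the main lobe exceeds 0.5 here
fwhm = above.max() - above.min()
print(fwhm, 0.886 * lam * D / (S * d))     # the two agree to within ~1%
```

The small residual difference comes from the sinc approximation behind the 0.886 factor, which is exact only in the large-*S* limit.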

Although the array manifold can in theory be arbitrary, in practice the fibers should be tightly arranged in some regular array type in consideration of the processing difficulty. To visually analyze the effect of array geometry on the light field, three types of tightly arranged regular array, namely circular, square, and hexagonal, are selected; their geometries, the intensity distribution of an arbitrary speckle pattern, and the kernel function of the cross-correlation coefficient are shown in Fig. 2. For plotting the figure, the size *ω*_{0} of each beam is set to 5 μm and the spacing between neighboring fibers is 80 μm; moreover, the light wavelength *λ* is 1550 nm and the observing distance *D* is 25 m.

Because the spacing between neighboring beams is much larger than the wavelength, the speckle patterns of both the square and hexagonal arrays are spatially periodic, while the pattern of the circular array is quite peculiar despite having no spatial periodicity. By the same token, the kernel function and thus the cross-correlation coefficient are also quite different from those of the pseudo-thermal light from rotating ground glass, for which there exists only one main lobe with very low sidelobes. By contrast, many grating lobes exist for the square and hexagonal arrays; that is to say, the cross-correlation coefficient is spatially periodic. For the circular array, very high-level sidelobes arise although no grating lobe exists.

## 3. Computational ghost imaging system scheme

As presented in Section 2, the OFPA based speckle source is very promising for significantly improving the imaging speed because it can provide much faster speckle projection than the conventional systems. However, using the source directly in the conventional GI system with a single-pixel detector will cause trouble, including image aliasing and quality degradation. Since the speckle patterns and the spatial correlation of the light field depend heavily on the geometry of the fiber array, it is compulsory for an imaging system using the OFPA based source to co-design the fiber array and the corresponding receiving scheme.

#### 3.1 Design of the fiber array and the receiving scheme for imaging

To eliminate the grating lobes or the high-level sidelobes of a regular array, a natural means is to adopt an optimized fiber array, such as a random array, which however may increase the difficulty of manufacturing the fiber array. Besides array optimization, another feasible way is to design a new receiving scheme that takes advantage of the spatial periodicity of the light field so as to obviate the disadvantage.

As seen in Fig. 2, the spatial distributions of the speckle patterns and the kernel function exhibit square periodicity when the fiber array is square; similarly, the spatial periodicity of the light field is hexagonal (rotated 90 degrees clockwise) for the hexagonal fiber array. For the square or hexagonal array, spatial aliasing can be avoided by receiving the signal light with a corresponding detector array or an array detector. Although a detector array can be flexibly arranged in different manifolds, its relatively large volume and weight are unfavorable to system miniaturization. Fortunately, there exist integrated array detectors of squarely arranged pixels matching the square fiber array, e.g. PIN or APD arrays. Consequently, a square fiber array and a high-sensitivity APD array are selected for the imaging system.
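The matching condition can be quantified: the square-array speckle field repeats with spatial period *λD*/*d* in the observation plane, so (through the receiving telescope) each APD pixel should cover one such period. A quick calculation with the optical parameters of Fig. 2 and a hypothetical 5 × 5 APD array:

```python
# Spatial period of the square-array speckle field at the observation plane,
# and the total FOV covered by a hypothetical K x K = 5 x 5 APD array.
lam, D, d = 1550e-9, 25.0, 80e-6
K = 5

period = lam * D / d            # one grating-lobe period at distance D [m]
fov = K * period                # total field of view spanned by the pixels [m]
print(round(period, 3), round(fov, 3))
```

At *D* = 25 m the period is about 0.48 m, so the receiving optics must demagnify the pattern onto the physical pixel pitch of the APD array.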

The schematic diagram of the light receiving scheme using a 5 × 5 APD array is shown in Fig. 3. Primarily, the field of view (FOV) of the receiver should coincide with that of the transmitter, which can be actualized by designing a coaxial optical path (i.e. a monostatic structure) and a receiving telescope; then, by adjusting the relative position of the detector, we can ensure that the active areas of the APD array match the distribution of the kernel function or the correlation coefficient. As a result, each active area or pixel of the APD array will only be responsible for receiving the signal light from the corresponding local region, similar to the conventional way, and thus spatial aliasing is avoided. What's more, a micro-lens array (MLA) should be packaged together with the APD array to obtain a fill factor approaching 100%.

#### 3.2 Schematic system of the computational ghost imaging

Based on the above analysis, we design a CGI system using the OFPA based speckle source with a square fiber array and a low-pixel APD array, whose schematic diagram is shown in Fig. 4. The schematic system consists of the OFPA based speckle source, the coaxial optical system for transmitting and receiving, a low-pixel APD array with a multi-channel sampling circuit (i.e. analog-to-digital conversion, ADC), and the system control and imaging module (computer). The whole system performs under the control of the computer.

Driven by the random phase driver, the OFPA based source rapidly projects the speckles toward the scene or object to be imaged at a certain frequency. For the continuous laser, the modulation at the ~GHz level is conducted by the driver configured by the computer; under the pulsed laser mode, the pulse source works as the trigger signal (illustrated with the red dashed line in Fig. 4) of the driver to ensure that the modulation speed is in accordance with the pulse repetition rate. Through the coaxial optical system, which includes two polarized beam splitters (PBS) and a lens for adjusting the FOV, the reflected signal light is collected and then converted into an electric signal by the low-pixel APD array. After the multi-channel ADC, the digital signals are sent to the computer to reconstruct the image of the object by correlating the sampled data with the precomputed speckle patterns.

As mentioned in Section 3.1, matching the pixels of the APD array with the spatial periodicity of the projected patterns is crucial for avoiding spatial aliasing. Provided that the two PBSs and the lens have been well adjusted and the active areas of the APD array are perpendicular to the optical axis, the key is to adjust the lateral and radial position of the APD array. To achieve that, we can project a unique pattern that is the output of the superposition of *N* in-phase beams, which can be done by phase compensation with the phase modulators. The pattern distribution is just like the correlation coefficient shown in Fig. 3, and thus we can adjust the APD array to an appropriate position by monitoring the output of each pixel of the APD array.

Since each pixel of the APD array is only responsible for receiving the signal light from the corresponding local region, only the local image of the object can be recovered by correlating the output of each pixel with the referenced local patterns. The panorama of the object can then be constructed simply by stitching all these local images. Generally, the minimal sampling rate of the ADC should be no less than the speed of the speckle projection. The higher the sampling rate, the more samples can be obtained; by time accumulation, the detector noise can be suppressed and thus an improved signal-to-noise ratio (SNR) of the output electric signal can be achieved.

#### 3.3 Imaging process and algorithm

To recover the image of the object from the multi-channel output data of the APD array, a space-time correlation or compressed sensing based imaging algorithm can be used. According to the proposed system, the imaging process is formulated as follows.

Suppose that *M* speckle patterns are projected for imaging the object, which requires *M* phase modulations, and a *P*-pixel ($P=K\times K$) APD array is employed to receive the signal light. The projected patterns can be computed based on Eqs. (3)-(5) with the known modulation phases and the array geometry, provided that the unknown amplitude *a _{n}* and phase *ϕ _{in}* of each beam are measured and compensated in some way, respectively. The *m*th speckle pattern, or referenced intensity distribution **I*** _{m}*, can be digitally expressed by

$$\mathbf{I}_m=\left[\mathbf{I}_{p,m}\right],\qquad \mathbf{I}_{p,m}=\left[I_p\left(u,v;m\right)\right]_{L\times L} \qquad (10)$$

where **I**_{p,m} is the local pattern within the FOV of the *p*th pixel of the APD array and *L* × *L* is the spatial sampling number, which must comply with the spatial Nyquist sampling theorem to avoid the loss of information.

The digital output of the *p*th pixel of the APD array corresponding to the *M* patterns can be written by

$$\mathbf{O}_p=\left[O_p(m)\right]_{M\times 1},\qquad O_p(m)=\iint I_p\left(\xi,\eta;m\right)R_p\left(\xi,\eta\right)\mathrm{d}\xi\,\mathrm{d}\eta \qquad (11)$$

where *R _{p}*(*ξ*, *η*) is the reflectivity distribution of the imaged object in the FOV of the *p*th pixel of the APD array.
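The per-pixel measurement model can be sketched with stand-in random patterns in place of the precomputed OFPA speckles; the local reflectivity map below is hypothetical.

```python
import numpy as np

# O_p(m): the p-th APD pixel integrates the m-th local pattern against the
# local reflectivity. Random nonnegative patterns stand in for OFPA speckles.
rng = np.random.default_rng(2)
M, L = 500, 32
I_pm = rng.random((M, L, L))              # M local reference patterns, L x L
R_p = np.zeros((L, L))
R_p[8:24, 8:24] = 1.0                     # hypothetical local reflectivity map

# Discretized version of the integral: sum over both spatial axes
O_p = np.tensordot(I_pm, R_p, axes=([1, 2], [0, 1]))
print(O_p.shape)                          # one bucket value per pattern
```

In the real system there are *P* such time sequences, one per APD pixel, acquired in parallel by the multi-channel ADC.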

By correlating the time sequence **O*** _{p}* with the corresponding *M* referenced patterns **I**_{p,m}, we can obtain the local image of the object **G*** _{p}* in the FOV of the *p*th pixel of the APD array. In principle, each recovered local image **G*** _{p}* should be a complete mapping of the corresponding local part of the imaged object because of the MLA integrated with the APD array. Therefore, the panorama of the object **G** with *KL* × *KL* pixels can be obtained by directly stitching these local images in accordance with the spatial arrangement of the APD array:

$$\mathbf{G}=\left[\mathbf{G}_p\right]_{K\times K},\qquad \mathbf{G}_p=\left[G_p\left(u,v\right)\right]_{L\times L} \qquad (12)$$

where *G _{p}*(*u*, *v*) is the value of the (*u*, *v*) pixel of the *p*th local image.
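Stitching the *K* × *K* local images into the *KL* × *KL* panorama is a single block operation; the sizes below are illustrative placeholders.

```python
import numpy as np

# Assemble a K x K grid of L x L local images into one KL x KL panorama,
# following the spatial arrangement of the APD array. Placeholder data.
K, L = 5, 32
local_images = [[np.full((L, L), r * K + c, dtype=float)
                 for c in range(K)] for r in range(K)]

G = np.block(local_images)      # panorama with KL x KL pixels
print(G.shape)                  # (160, 160)
```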

To recover each pixel value *G _{p}*(*u*, *v*) of the *p*th local image, the space-time correlation (STC) or compressed sensing (CS) based algorithm can be used. The imaging process using the STC algorithm can be formulated by

$$G_p\left(u,v\right)=\frac{1}{M}\mathbf{O}_p^{\mathrm{T}}\mathbf{I}_p\left(u,v\right)-\left\langle \mathbf{O}_p\right\rangle\left\langle \mathbf{I}_p\left(u,v\right)\right\rangle \qquad (13)$$

where **I*** _{p}*(*u*, *v*) = [*I _{p}*(*u*, *v*; *m*)]_{M × 1}, and the superscript 'T' and the symbol ⟨·⟩ stand for the transposition and the arithmetic average operator, respectively. For improving the image SNR, the differential correlation (DC) algorithm [44] is a good choice, which is formulated by

$$G_p\left(u,v\right)=\frac{1}{M}\mathbf{O}_p^{\mathrm{T}}\mathbf{I}_p\left(u,v\right)-\frac{\left\langle \mathbf{O}_p\right\rangle}{\left\langle \mathbf{S}_p\right\rangle}\frac{1}{M}\mathbf{S}_p^{\mathrm{T}}\mathbf{I}_p\left(u,v\right) \qquad (14)$$

where **S*** _{p}* = [*S _{p}*(*m*)]_{M × 1} and ${S}_{p}\left(m\right)={\displaystyle \iint {I}_{p}\left(u,v;m\right)}\mathrm{d}u\mathrm{d}v$.
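A minimal sketch of the STC and DC recoveries for a single APD pixel, with uniform random patterns standing in for the OFPA speckles and a hypothetical local object:

```python
import numpy as np

rng = np.random.default_rng(3)
M, L = 4000, 16
I_pm = rng.random((M, L, L))                     # stand-in reference patterns
R_p = np.zeros((L, L)); R_p[4:12, 6:10] = 1.0    # hypothetical local object
O_p = np.tensordot(I_pm, R_p, axes=([1, 2], [0, 1]))   # pixel outputs O_p(m)

# STC: G_p(u,v) = <O_p * I_p(u,v)> - <O_p><I_p(u,v)>
G_stc = (O_p[:, None, None] * I_pm).mean(0) - O_p.mean() * I_pm.mean(0)

# DC: replace <O_p><I_p> by (<O_p>/<S_p>) <S_p * I_p>,
#     with S_p(m) the spatial integral of the m-th pattern
S_p = I_pm.sum(axis=(1, 2))
G_dc = (O_p[:, None, None] * I_pm).mean(0) \
       - (O_p.mean() / S_p.mean()) * (S_p[:, None, None] * I_pm).mean(0)

# Both estimates peak on the bright region of R_p
print(np.unravel_index(G_stc.argmax(), G_stc.shape))
```

With *M* = 4000 uniform random patterns, the correlation estimates reproduce the object shape up to the statistical fluctuation that decreases as $1/\sqrt{M}$.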

It has been shown that the CS based algorithm is a very effective way for ghost imaging to improve the image quality (e.g. mean-square error, MSE) whilst using far fewer samples than the space-time correlation algorithm. From the view of compressed sensing, the *p*th local image **G**′* _{p}* is obtained by solving the following *l*_{1}-norm optimization problem:

$$\mathbf{G}'_p=\arg\min_{\mathbf{G}_p}\left\|\Psi\,\mathbf{G}_p\right\|_1\quad \text{subject to}\quad \mathbf{O}_p=\mathbf{M}_p\mathbf{G}_p \qquad (15)$$

where Ψ denotes a sparsifying transform (e.g. the 2D-DCT used in Section 4.1) and the measurement matrix **M*** _{p}* is written as

$$\mathbf{M}_p=\begin{bmatrix}\operatorname{vec}\left(\mathbf{I}_{p,1}\right)^{\mathrm{T}}\\ \vdots\\ \operatorname{vec}\left(\mathbf{I}_{p,M}\right)^{\mathrm{T}}\end{bmatrix}_{M\times L^2} \qquad (16)$$

Obviously, the dimensionality of the matrix **M*** _{p}* is significantly reduced compared to that of the traditional systems, which is quite favorable for reducing the computation and sample quantity demanded in recovering the image.
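As a simple stand-in for the convex solver, the closely related unconstrained lasso form of the *l*_{1} problem can be attacked with ISTA and a 2D-DCT sparsity basis. All sizes, the object, and the regularization weight below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(4)
L, M = 16, 120                         # local image side; M << L*L samples

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

C = dct_matrix(L)
Psi = np.kron(C, C)                    # 2D-DCT acting on row-major vec(images)

R = np.zeros((L, L)); R[4:12, 5:11] = 1.0   # hypothetical local object
Mp = rng.random((M, L * L))                 # measurement matrix M_p (M x L^2)
O = Mp @ R.ravel()                          # simulated pixel outputs

A = Mp @ Psi.T                         # sense directly in the DCT domain
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / largest singular value^2
tau = 0.05                             # l1 weight (illustrative)
g = np.zeros(L * L)
for _ in range(1500):                  # ISTA: gradient step + soft threshold
    g = g + step * (A.T @ (O - A @ g))
    g = np.sign(g) * np.maximum(np.abs(g) - step * tau, 0.0)

G_cs = (Psi.T @ g).reshape(L, L)       # back to the image domain
print(G_cs.shape)
```

The row count *M* of `A` is what the low-pixel APD array shrinks: each pixel only has to solve an *L*² problem instead of a (*KL*)² one.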

## 4. Performance analysis of the imaging system

In this section, the performance of the proposed CGI system is to be investigated based on numerical simulations, which involves the requirement on the sampling number and computation amount for the image recovery, the resolution and the frame rate evaluation.

#### 4.1 Numerical imaging experiments for the proposed CGI system

Besides providing much faster speckle pattern projection, the proposed system, with its low-pixel APD array (i.e. multi-channel receiving), can decrease the dimension of the sampled data by a factor of *P* and further lower the requirement on sampling number and computational complexity in recovering an image of a certain resolution. To show the results, imaging simulations of three different object models, including a weakly reflective (1#), a strongly reflective (2#) and a gray image (3#), are carried out according to the imaging process presented in Section 3.3. The system parameters for the simulation are listed in Table 1.

As shown in Fig. 5, the 1# object model is composed of the four capital letters 'S', 'P', 'O' and 'E', whose reflectivities or reflection coefficients are all equal to 1. Obviously, the object model can be viewed as a weakly reflective object.

Under ideal conditions and without consideration of noise, the image of the object (1#) is reconstructed using the STC algorithm of Eq. (13) and the CS based algorithm, respectively. An iterative convex optimization algorithm and the two-dimensional discrete cosine transform (2D-DCT) are used to solve the problem of Eq. (15). The reconstructed images with different sampling numbers *M* (i.e. the number of projected patterns mentioned in Section 3.3 when the sampling speed is equal to the pattern projection speed) are shown in Fig. 6, where the upper four images are the results with the STC algorithm and the lower four images are the results with the CS based algorithm.

Evidently, the image quality improves as the sampling number *M* increases for both algorithms; what's more, the CS algorithm performs much better even though a much smaller number of samples is used. For this reason, the images of the 2# and 3# objects are reconstructed only with the CS algorithm, as shown in Fig. 7.

The 2# object model, a part of the badge of Xidian University, is strongly reflective: the reflectivity in most parts is equal to one, while the black parts are set to zero. The 3# object, the famous Cameraman, is a multilevel (16-bit) gray image and a typical, representative model. As presented in Fig. 7, the proposed imaging system can provide basically satisfying performance with only a very small number of samples, which implies a relatively low computational burden. Admittedly, visible boundaries exist in the panorama stitched from the local images; however, they will be relieved as the sampling number *M* increases or if some image fusion technique is used.

As a matter of fact, it is the multi-channel receiving using an array detector that effectively reduces the dimension of the imaging data, at the cost of system complexity, and thus remarkably lowers the requirement on sampling number in the image reconstruction. The sampling number *M* represents the required number of pattern projections for imaging; therefore, the faster the pattern projection and the lower the requirement on sampling number, the higher the imaging speed that can be obtained.

#### 4.2 Requirement on sampling number for the image recovery

In this section, another simulation is performed to evaluate the effect of multi-channel receiving using an array detector and of the two imaging algorithms on the sampling number required for image recovery. In the simulation, instead of the OFPA based speckles, speckles without spatial periodicity are produced to ensure that a single-pixel detector can work, and the 2# model is resized to a 128 × 128 image. The differential correlation form of the STC algorithm, Eq. (14), is adopted to improve the SNR, as the 2# object is strongly reflective. The recovered images of the 2# object are presented in Fig. 8, where a 4 × 4 array detector and the CS algorithm (I), a single-pixel detector and the CS algorithm (II), a 4 × 4 array detector and the STC algorithm (III), and a single-pixel detector and the STC algorithm (IV) are used, respectively.

It can be seen that, compared to single-channel receiving with a single-pixel detector, the multi-channel receiving using the 4 × 4 array detector decreases the demand on sampling number by more than one order of magnitude no matter which algorithm is used. With the same sampling number, the 4 × 4 multi-channel receiving enables the STC algorithm to perform better than the CS algorithm combined with a single-pixel detector, especially when the sampling number *M* is less than 2000. It must be noted that the image quality corresponds to the total number of samples in the spatial and temporal domains [40]. Apparently, in the case of an array detector, the total number of samples is the sampling number *M* times the number of detector pixels *P*, i.e. *MP*, which means that the sampling number *M* is reduced to 1/*P* of the required total samples when an array detector of *P* pixels is used. For example, the recovered image for *M* = 500 in Fig. 8 (III) is as good as the image for *M* = 8000 in Fig. 8 (IV).

Note that in Fig. 8, a slightly larger sampling number *M* is required than in Fig. 7 to achieve an analogous image quality with the CS algorithm. Besides the smaller pixel number of the array detector, this is mainly because the simulated speckle patterns for Fig. 8 provide a higher resolution, or smaller coherence area, than those used for Fig. 7.

As a complement, the MSE variation with the sampling number *M* is offered as a quantitative result of the improvement from the multi-channel receiving. The MSE is calculated according to the definition

$$\mathrm{MSE}=\frac{1}{N_R N_C}\sum_{u=1}^{N_R}\sum_{v=1}^{N_C}\left[B\left(u,v\right)-G\left(u,v\right)\right]^2 \qquad (17)$$

where *N _{R}* and *N _{C}* are the row and column numbers of the image, respectively; *B*(*u*, *v*) is the pixel value of the original image and *G*(*u*, *v*) is the pixel value of the reconstructed image.
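The MSE definition translates directly into code:

```python
import numpy as np

def mse(B, G):
    """Mean-square error between original image B and reconstruction G."""
    B, G = np.asarray(B, dtype=float), np.asarray(G, dtype=float)
    return float(np.mean((B - G) ** 2))

print(mse([[0, 1], [1, 0]], [[0, 1], [0, 0]]))   # -> 0.25
```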

Figure 9 shows the curves of the MSE between the original and reconstructed images with a single-pixel or array detector, obtained by averaging over 20 simulations. As the pixel number of the detector increases, the MSE drops markedly for a given sampling number *M* for both algorithms; meanwhile, it can also be found that the MSE values of the recovered images are very close to each other for the same total number of samples *MP*. Notably, using an array detector and the CS algorithm, a much lower MSE, or much better image quality, can be achieved with only a very small number of samples. By comparison, the same image quality cannot be attained using the STC algorithm with even tens of times more samples. For example, the MSE is less than 0.1 for *M* = 50 in Fig. 9(d) when using the 4 × 4 array detector, while the corresponding MSE in Fig. 9(b) is still more than 0.2 for *M* = 1000.

Overall, the CS based algorithm requires far fewer samples than the STC algorithm while offering better image quality. However, the CS based algorithm generally has much higher computational complexity, so the space-time correlation algorithm may be the better choice for some practical single-pixel systems. This situation changes when an array detector is used.

#### 4.3 Computation amount for the image recovery

Besides the requirement on sampling number, the computational complexity is another major factor that influences the imaging frame rate of a CGI system. As mentioned in Section 4.2, the CS based imaging algorithm can provide much better image quality than the STC algorithm while requiring far fewer samples. However, a lower sample requirement does not necessarily mean lower computational complexity.

To analyze the effect of multi-channel receiving using an array detector on the computational complexity of the two imaging algorithms, the time consumption is measured on a desktop PC running Windows 10 and MATLAB 2016b, and the corresponding MSE values are also calculated, as listed in Table 2. The listed values are averages over 10 imaging simulations. It must be noted that the measured values for the array detector are the average time consumed in recovering one local image rather than the whole panorama, assuming the local images can be reconstructed by parallel data processing; multiplying by the pixel number of the array detector gives the total time when the local images are recovered sequentially.

At first sight, we can find that the time consumption and MSE obviously drop as the pixel number of the detector rises, especially for the CS based algorithm. Evidently, from the results of the STC algorithm, the consumed time is proportional to the pixel number of the image for a given sampling number *M*. In general, the computational complexity of the CS based algorithm is between *O*(*X*^{3}) and *O*(*XM*), where *X* is the pixel number of the image, namely the image dimensionality, which accords with the results listed in Table 2. In addition, the multi-channel receiving makes the STC algorithm perform better. For example, using the 4 × 4 array detector and 2000 samplings, the image quality of the STC algorithm is analogous to that of the CS based algorithm using a single-pixel detector and 300 samplings, with much less time consumption.

What's more, through multi-channel receiving with an array detector, data dimensionality reduction, which is always difficult mathematically, can be equivalently attained at the cost of system complexity; simultaneously, the dimension-reduced data further lower the sampling number required for image recovery. As a result, the computational complexity of the CS algorithm becomes completely acceptable and more practical when combined with multi-channel receiving and parallel data processing. Taking the 4 × 4 array detector as an example, to obtain a similar image quality, the time consumption of the CS algorithm with *M* = 100 (MSE = 0.07) is only 3 times that of the STC algorithm with *M* = 2000 (MSE = 0.122), whereas the sampling number of the STC algorithm is 20 times that of the CS algorithm. Obviously, the CS algorithm will provide a higher imaging speed if the processing speed and the sampling speed are comparable, which is very significant especially for the traditional SLM based imaging systems. A trade-off between the time consumed by the imaging computation and by data sampling should be considered when selecting the imaging algorithm.

According to the above analysis, we can select different algorithm schemes for the CGI system under the two operation modes of the speckle source. Under the continuous laser mode, where the pattern projection speed can reach the GHz level, the STC algorithm is the better choice to obtain a remarkably high frame rate; under the pulsed projection mode of ~MHz, the CS algorithm performs better because of its low sampling number requirement.

#### 4.4 Image resolution of the CGI system

In principle, the image resolution of a GI system is the speckle size, or the coherence area, of the light field projected on the object to be imaged [3, 44]. The lateral coherence length (the one-dimensional counterpart of the speckle size), defined as the full width at half maximum (FWHM) of the central lobe of the cross-correlation coefficient, can be used as an evaluation criterion for the resolution of the proposed CGI system.

According to Eqs. (7) and (9), the lateral coherence length of the OFPA based speckle field with an *S* × *S* square array can be estimated as *σ* = 0.886*λD*/(*Sd*), where the symbols are defined as above, and the speckle size can then be expressed as *σ*^{2}. The area of each local FOV of the array detector is (*λD*/*d*)·(*λD*/*d*); therefore, the fundamental pixel number of each local image is 1.13*S* × 1.13*S*, and the total pixel number of the panorama, i.e. the fundamental resolution of the imaging system, is 1.13*SK* × 1.13*SK*, or 1.27*N*·*P* pixels in total with a *P* = *K* × *K* array detector. Evidently, the resolution depends on the pixel number of the array detector and the number of array fibers. Taking the values listed in Table 1 for example, the fundamental resolution of the panorama image is about 34 × 34, which is relatively low. As described in Section 3.3, a recovered image of at least 2.26*SK* × 2.26*SK* pixels can be obtained when the reference patterns are computed with at least twice the fundamental resolution. That is to say, the pixel number of the recovered image can be several times the fundamental resolution; for example, in Section 4.1, a 160 × 160 image is recovered with a spatial sampling rate of the patterns about 5 times the resolution.
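This resolution bookkeeping can be checked numerically. A minimal sketch for a hypothetical *S* = 8, *K* = 8 configuration (the factor 1.13 ≈ 1/0.886 comes from the FWHM speckle size):

```python
# Fundamental resolution of the panorama, following sigma = 0.886*lambda*D/(S*d)
# and a local FOV of side lambda*D/d.
# S: fibers per side of the square array; K: pixels per side of the APD array.

def panorama_resolution(S, K):
    per_local = S / 0.886       # pixels per side of one local image, ~1.13*S
    return per_local * K        # K local images per side of the panorama

# Hypothetical configuration: S = 8, K = 8 (N = 64 fibers, P = 64 detector pixels)
side = panorama_resolution(8, 8)
print(round(side))   # 72  -> a 72 x 72 fundamental resolution
```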

To improve the resolution, an easy way is to increase the fiber number *N*, the detector pixel number *P*, or both. For example, with *N* = 8 × 8 and *P* = 8 × 8, the resolution rises to 72 × 72; however, the cost and complexity of the system increase accordingly. It is thus also necessary to develop other effective and economical methods. Since the available samples are abundant, CS based super-resolution reconstruction is worth trying within an acceptable imaging frame rate. Besides, some valuable ideas, such as spatial sectioning [23] and digital microscanning [38], can serve as references; these will be one direction of our future work.

#### 4.5 Expected frame rate of the imaging system

The above analysis shows that the proposed CGI system can offer a much higher imaging frame rate than traditional ones, owing to the OFPA based speckle source, multi-channel receiving with an array detector, and the subsequent parallel data processing.

Under the continuous projection mode, i.e. the two-dimensional scene imaging mode, provided that the patterns are projected at 1 GHz and 2000 samples are required to recover an image, the maximum frame rate can reach 500 kHz. The frame rate of the proposed system is then restricted mainly by the data processing capability, since the pattern projection is so fast (up to the GHz level) that sufficient samples are easily obtained. Using the STC algorithm, high-speed digital signal processors and parallel data processing, if the specialized circuit can process the data stream in real time at 100 kHz, the frame rate can likewise reach 100 kHz. Taking into account non-ideal factors of the system itself and the external environment, the frame rate should still reach at least tens of kHz, which remains much higher than that of existing systems.
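The continuous-mode estimate reduces to a simple budget: the projection-limited rate is the pattern rate divided by the samples per frame, and the achievable rate is capped by whichever of projection and processing is slower. A sketch with the stated numbers (the 100 kHz processing throughput is the assumed figure from the text):

```python
# Continuous projection mode: the frame rate is limited by the slower of
# (pattern projection rate / samples per frame) and the processing rate.

pattern_rate = 1e9        # patterns per second (1 GHz projection)
samples_per_frame = 2000  # samples needed to recover one image (STC algorithm)
processing_rate = 100e3   # assumed real-time throughput of the DSP chain, Hz

projection_limited = pattern_rate / samples_per_frame   # 500 kHz
frame_rate = min(projection_limited, processing_rate)   # 100 kHz: processing-bound
print(projection_limited, frame_rate)   # 500000.0 100000.0
```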

With the pulsed projection mode, i.e. the three-dimensional scene imaging mode, the speed of pattern projection must match the imaging range, or the repetition period of the laser pulse. The recovered image is the two-dimensional mapping of a three-dimensional scene within a certain range (or depth of field) due to the relatively long pulse duration, i.e. low range resolution. Given a pulse duration *T _{p}*, the depth of field is *cT _{p}*/2, where *c* is the light speed. To prevent interference between the echoes of two different pulse periods, the imaging range is limited to *cT _{e}*/2, where *T _{e}* is the pulse period. Provided that the pulse duration is 60 ns and the pulse period is 1 μs, the corresponding depth of field and maximum imaging range are 9 meters and 150 meters, respectively. Assuming that 200 samples are required to recover the image using the CS algorithm and that the patterns are projected at 1 MHz, the maximum frame rate can reach 5 kHz. Taking non-ideal factors into account, the frame rate should reach the ~kHz level, which is still very attractive for some applications.

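The pulsed-mode figures follow directly from the pulse timing; a minimal check (taking *c* ≈ 3 × 10^{8} m/s):

```python
# Pulsed projection mode: depth of field, maximum range, and frame rate.
c = 3e8          # light speed, m/s (rounded)

T_p = 60e-9      # pulse duration, s
T_e = 1e-6       # pulse repetition period, s

depth_of_field = c * T_p / 2    # 9 m
max_range = c * T_e / 2         # 150 m

pattern_rate = 1e6              # pulsed projection speed, Hz (one pattern per pulse)
samples_per_frame = 200         # samples needed by the CS reconstruction
frame_rate = pattern_rate / samples_per_frame   # 5 kHz
print(round(depth_of_field), round(max_range), round(frame_rate))   # 9 150 5000
```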
## 5. Summary and discussion

Based on an OFPA based speckle source and a low-pixel array detector, a CGI system is proposed to improve the imaging frame rate. The system scheme is illustrated, the imaging process is formulated accordingly, and the performance of the proposed system is analyzed through numerical simulations. Owing to the high-speed phase modulators, the OFPA based source can offer much faster speckle projection than traditional ones. Benefiting from multi-channel receiving with an array detector, the data dimensionality is equivalently reduced, and thus the sampling number requirement and computational complexity are remarkably reduced. Combining these two advantages with high-speed parallel data processing, the imaging frame rate of the proposed CGI system is expected to exceed 100 kHz in the continuous laser mode and reach the ~kHz level in the pulsed laser mode.

According to the performance analysis by numerical simulations under ideal conditions, the proposed CGI system scheme proves very promising for remarkably raising the imaging frame rate; however, some difficulties and challenges remain in system realization and imaging experiments, such as compensating the slowly time-variant initial phases of the array beams, fabricating the fiber array, designing and fabricating the high-speed random phase driver, and computing the projected speckle patterns with high precision. For instance, the high-precision computation of the projected speckle patterns is crucial for image reconstruction; in practice, however, many factors affect the computational precision, of which an imperfect fiber array is the primary one. In principle, to reconstruct the image, the reference local patterns should match the samples from the pixels of the APD array. Once fabricated, however, an imperfect fiber array may deviate from the designed physical structure, so the ideal mathematical model no longer strictly holds for computing the reference speckle patterns, which leads to reduced image quality and an increased sampling requirement. Hence, a certain fabrication precision of the fiber array has to be guaranteed, and it is also necessary to modify the computational model according to experimental measurements. Moreover, the realization cost might be higher than that of conventional systems; it remains significant for the practical application of CGI systems to carry out the relevant studies in our future work.

## Funding

National Natural Science Foundation of China (NSFC) (61401341) and the 111 Project (B17035).

## References and links

**1. **T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A **52**(5), R3429–R3432 (1995). [CrossRef] [PubMed]

**2. **R. S. Bennink, S. J. Bentley, and R. W. Boyd, “‘Two-photon’ coincidence imaging with a classical source,” Phys. Rev. Lett. **89**(11), 113601 (2002). [CrossRef] [PubMed]

**3. **F. Ferri, D. Magatti, A. Gatti, M. Bache, E. Brambilla, and L. A. Lugiato, “High-resolution ghost image and ghost diffraction experiments with thermal light,” Phys. Rev. Lett. **94**(18), 183602 (2005). [CrossRef] [PubMed]

**4. **Y. Cai and S.-Y. Zhu, “Ghost imaging with incoherent and partially coherent light radiation,” Phys. Rev. E Stat. Nonlin. Soft Matter Phys. **71**(5), 056607 (2005). [CrossRef] [PubMed]

**5. **L. Basano and P. Ottonello, “A conceptual experiment on single-beam coincidence detection with pseudothermal light,” Opt. Express **15**(19), 12386–12394 (2007). [CrossRef] [PubMed]

**6. **K. W. Chan, M. N. O’Sullivan, and R. W. Boyd, “Optimization of thermal ghost imaging: high-order correlations vs. background subtraction,” Opt. Express **18**(6), 5562–5573 (2010). [CrossRef] [PubMed]

**7. **W. Gong and S. Han, “Experimental investigation of the quality of lensless super-resolution ghost imaging via sparsity constraints,” Phys. Lett. A **376**(17), 1519–1522 (2012). [CrossRef]

**8. **J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A **78**(6), R061802 (2008). [CrossRef]

**9. **Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A **79**(5), 053840 (2009). [CrossRef]

**10. **B. Sun, M. Edgar, R. Bowman, L. Vittert, S. Welsh, A. Bowman, and M. Padgett, “Differential Computational Ghost Imaging,” in *Imaging and Applied Optics*, OSA Technical Digest (online) (Optical Society of America, 2013), paper CTu1C.4.

**11. **S. Sun, W. T. Liu, H. Z. Lin, E. F. Zhang, J. Y. Liu, Q. Li, and P. X. Chen, “Multi-scale Adaptive Computational Ghost Imaging,” Sci. Rep. **6**(1), 37013 (2016). [CrossRef] [PubMed]

**12. **D. B. Phillips, R. He, Q. Chen, G. M. Gibson, and M. J. Padgett, “Non-diffractive computational ghost imaging,” Opt. Express **24**(13), 14172–14182 (2016). [CrossRef] [PubMed]

**13. **Y. Zhu, J. Shi, Y. Yang, and G. Zeng, “Polarization difference ghost imaging,” Appl. Opt. **54**(6), 1279–1284 (2015). [CrossRef] [PubMed]

**14. **Y. Liu, J. Shi, and G. Zeng, “Single-photon-counting polarization ghost imaging,” Appl. Opt. **55**(36), 10347–10351 (2016). [CrossRef] [PubMed]

**15. **M. Le, G. Wang, H. Zheng, J. Liu, Y. Zhou, and Z. Xu, “Underwater computational ghost imaging,” Opt. Express **25**(19), 22859–22868 (2017). [CrossRef] [PubMed]

**16. **J. Huang and D. Shi, “Multispectral computational ghost imaging with multiplexed illumination,” J. Opt. **19**(7), 07570 (2017). [CrossRef]

**17. **K. Komatsu, Y. Ozeki, Y. Nakano, and T. Tanemura, “Ghost imaging using integrated optical phased array,” in *Optical Fiber Communication Conference*, OSA Technical Digest (online) (Optical Society of America, 2017), paper Th3H.4. [CrossRef]

**18. **Z.-H. Xu, W. Chen, J. Penuelas, M. Padgett, and M.-J. Sun, “1000 fps computational ghost imaging using LED-based structured illumination,” Opt. Express **26**(3), 2427–2434 (2018). [CrossRef] [PubMed]

**19. **O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. **95**(13), 131110 (2009). [CrossRef]

**20. **V. Katkovnik and J. Astola, “Compressive sensing computational ghost imaging,” J. Opt. Soc. Am. A **29**(8), 1556–1567 (2012). [CrossRef] [PubMed]

**21. **M. Amann and M. Bayer, “Compressive adaptive computational ghost imaging,” Sci. Rep. **3**(1), 1545 (2013). [CrossRef] [PubMed]

**22. **H. Wu, X. Zhang, J. Gan, and C. Luo, “High-Quality Computational Ghost Imaging Using an Optimum Distance Search Method,” IEEE Photonics J. **8**, 1–9 (2016).

**23. **H. Ghanbari-Ghalehjoughi, S. Ahmadi-Kandjani, and M. Eslami, “High quality computational ghost imaging using multi-fluorescent screen,” J. Opt. Soc. Am. A **32**(2), 323–328 (2015). [CrossRef] [PubMed]

**24. **M. Zafari, R. Kheradmand, and S. Ahmadi-Kandjani, “Optical encryption with selective computational ghost imaging,” J. Opt. **16**(10), 105405 (2014). [CrossRef]

**25. **M. Zafari, S. Ahmadi-Kandjani, and R. Kheradmand, “Noise reduction in selective computational ghost imaging using genetic algorithm,” Opt. Commun. **387**, 182–187 (2017). [CrossRef]

**26. **B. I. Erkmen, “Computational ghost imaging for remote sensing,” J. Opt. Soc. Am. A **29**(5), 782–789 (2012). [CrossRef] [PubMed]

**27. **N. D. Hardy and J. H. Shapiro, “Computational ghost imaging versus imaging laser radar for three-dimensional imaging,” Phys. Rev. A **87**(2), 023820 (2013). [CrossRef]

**28. **B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science **340**(6134), 844–847 (2013). [CrossRef] [PubMed]

**29. **M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. **7**, 12010 (2016). [CrossRef] [PubMed]

**30. **X. Yang, Y. Zhang, C. Yang, L. Xu, Q. Wang, and Y. Zhao, “Heterodyne 3D ghost imaging,” Opt. Commun. **368**, 1–6 (2016). [CrossRef]

**31. **C. Luo, J. Cheng, A. Chen, and Z.-M. Liu, “Computational ghost imaging with higher-order cosh-Gaussian modulated incoherent sources in atmospheric turbulence,” Opt. Commun. **352**, 155–160 (2015). [CrossRef]

**32. **C. Luo and L. Zhuo, “High-resolution computational ghost imaging and ghost diffraction through turbulence via a beam-shaping method,” Laser Phys. Lett. **14**(1), 015201 (2016). [CrossRef]

**33. **P. Clemente, V. Durán, V. Torres-Company, E. Tajahuerce, and J. Lancis, “Optical encryption based on computational ghost imaging,” Opt. Lett. **35**(14), 2391–2393 (2010). [CrossRef] [PubMed]

**34. **H. C. Liu, B. Yang, Q. Guo, J. Shi, C. Guan, G. Zheng, H. Mühlenbernd, G. Li, T. Zentgraf, and S. Zhang, “Single-pixel computational ghost imaging with helicity-dependent metasurface hologram,” Sci. Adv. **3**(9), e1701477 (2017). [CrossRef] [PubMed]

**35. **P. Clemente, V. Durán, E. Tajahuerce, V. Torres, and J. Lancis, “Single-pixel digital holography based on computational “ghost” imaging,” in *Conference on Lasers and Electro-Optics 2012*, OSA Technical Digest (Optical Society of America, 2012), paper JTh2A.5. [CrossRef]

**36. **F. Devaux, P. A. Moreau, S. Denis, and E. Lantz, “Computational temporal ghost imaging,” Optica **3**(7), 698 (2016). [CrossRef]

**37. **M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. Kelly, and B. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. **25**(2), 83–91 (2008). [CrossRef]

**38. **M.-J. Sun, M. P. Edgar, D. B. Phillips, G. M. Gibson, and M. J. Padgett, “Improving the signal-to-noise ratio of single-pixel imaging using digital microscanning,” Opt. Express **24**(10), 10476–10485 (2016). [CrossRef] [PubMed]

**39. **D. B. Phillips, M.-J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. **3**(4), e1601782 (2017). [CrossRef] [PubMed]

**40. **M.-J. Sun, W. Chen, T.-F. Liu, and L.-J. Li, “Image Retrieval in Spatial and Temporal Domains With a Quadrant Detector,” IEEE Photonics J. **9**(5), 3601206 (2017). [CrossRef]

**41. **Q. Li, Z. Duan, H. Lin, S. Gao, S. Sun, and W. Liu, “Coprime-frequencied sinusoidal modulation for improving the speed of computational ghost imaging with a spatial light modulator,” Chin. Opt. Lett. **14**(11), 111103 (2016). [CrossRef]

**42. **Y. Wang, Y. Liu, J. Suo, G. Situ, C. Qiao, and Q. Dai, “High Speed Computational Ghost Imaging via Spatial Sweeping,” Sci. Rep. **7**, 45325 (2017). [CrossRef] [PubMed]

**43. **C. Liu, F. Lu, C. Wu, and X. Han, “Spatial correlation properties of coherent array beams modulated by space-time random phase,” Opt. Commun. **346**, 26–33 (2015). [CrossRef]

**44. **F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. **104**(25), 253603 (2010). [CrossRef] [PubMed]