Defect detection and response non-uniformity correction of a monocentric camera based on fiber optic relay imaging

Open Access

Abstract

The monocentric camera based on fiber optic relay imaging offers the benefits of light weight, compact size, wide field of view, and high resolution, and can fully meet the performance requirements of space-based surveillance systems. However, defects in the fiber optic plate (FOP) cause a loss of imaging data, and the FOP's discrete structure exacerbates the non-uniformity of the imaging response. A global defect detection approach based on manual threshold segmentation of saturated frames is proposed to detect FOP defect features. The efficacy and accuracy of the proposed method are confirmed by comparison with the classical Otsu algorithm. Additionally, the relative imaging response coefficient of each pixel is determined experimentally, the response non-uniformity of the pixels is corrected, and the overall image non-uniformity drops from 10.01% to 0.78%. The work in this paper expedites the use of monocentric cameras based on fiber optic relay imaging in the field of space-based surveillance, and the technique described here is also applicable to large-array fiber-coupled relay image transmission systems.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The monocentric objective lens offers a combination of large field of view, large relative aperture, and high imaging resolution that the conventional transmissive focal-plane imaging objective lens cannot achieve [1–3]. It also has the characteristics of small size, light weight, and a strictly symmetric structure. Because all optical surfaces of the monocentric objective lens share a common center of curvature, its high symmetry removes most geometric aberrations. Therefore, it is not necessary to correct off-axis aberrations such as coma, astigmatism, and lateral chromatic aberration when optimizing the imaging quality and correcting the residual aberrations of the monocentric objective imaging system. This greatly reduces the time and expense of aberration correction and helps enhance the resolution and clarity of images [4]. However, owing to the unique optical design of the monocentric objective lens, the focal surface is a curved hemisphere, making it difficult to couple directly to a conventional flat array image sensor. This results in severe defocus and considerable dispersion and blur of imaged targets at the edge of the array sensor, which previously restricted the use of monocentric objective lenses.

A fiber optic plate (FOP) is a discrete, passive, rigid optical fiber image transmission device created by secondary drawing, fusing, and cutting of millions of optical fiber monofilaments. FOPs have found widespread use in national defense, scientific research, aerospace, and medicine thanks to their optical properties of high numerical aperture, low optical loss, zero optical thickness, high resolution, and high transmittance. The most significant advantage of the FOP is that its end faces can be prepared in the shape of an optical surface, which not only allows it to transfer images between different optical surfaces but also significantly reduces the size of the entire optical structure, improving the stability of the system.

By shaping the entrance face of the FOP into a concave sphere while keeping the exit face flat, and coupling it directly to the hemispherical focal surface of the monocentric objective lens and to the surface of the image sensor as a relay image transmission device, the problem that the focal surface of the monocentric objective lens cannot be coupled to the sensitive surface of the sensor can be solved perfectly. To produce a wide-field image with hundreds of millions or even billions of pixels [5], several FOP-coupled large-array space-grade image sensors are spliced together in a specific spatial arrangement that provides both a large field of view and high-resolution imaging [4,6–9].

Because FOPs are discrete imaging devices, their introduction inevitably degrades the imaging quality of monocentric cameras, causing, for example, lower transmittance, discontinuity of imaging information, and an increase in the centroid locating error of imaged targets [10]. Moreover, during the production of FOPs, intrinsic flaws such as dark filaments, broken filaments, or even blocks of damaged filaments unavoidably occur, leading to uneven or even non-responsive pixel outputs from the image sensors.

Currently, the international definition of the inherent defects of optical fiber image transmission devices is given by the technical specification of Photonis in France, which states: ‘A spot is defined as any area (chicken wire and/or dead fibers) where the relative transmission is less than 70%.’ In-depth research on optical fiber image transmission devices and their applications has been carried out by Schott in Germany and Hongsheng Optoelectronics in China. They produce a variety of high-resolution, small-monofilament-diameter, large-section FOPs while continuously improving the production process and strictly limiting the proportion of defective monofilaments. Although the fabrication technology of FOPs has advanced substantially, these flaws cannot be entirely eliminated, so it is vital to identify and address the inherent flaws of each FOP in practical applications.

Image segmentation and edge detection are the two main techniques used to find defects in FOPs. Image segmentation separates the defect area from the background of the image, directly labeling the defective region. Edge detection first establishes the perimeter of the defect area and then uses morphological processing to determine the size and position of the defect. Wu et al. [11] achieved FOP shadow defect edge detection in 2008 using the Canny operator. Two years later, they [12] realized adaptive threshold determination by integrating the Otsu algorithm into the Canny operator and attained a nearly perfect detection result for FOP defect edges. Researchers later found the image segmentation approach to be more practical and effective for the defect identification of FOPs. The Otsu algorithm, the most popular global threshold segmentation algorithm, can provide good defect detection results, but it can also produce false defect detections such as noise and dark filaments [13]. In 2017, Yang et al. [14] introduced an improved adaptive watershed segmentation technique that can successfully suppress the over-segmentation of FOP flaws. A defect detection technique based on an enhanced fuzzy C-means clustering algorithm was proposed by Zhang et al. [13] in 2018. This approach can prevent detection errors caused by spurious flaws, but it tends to converge to local optima and runs slowly on large data sets. Because of this, it is not appropriate for large-section FOPs with a certain number of unevenly distributed flaws. Additionally, in recent years, researchers have used machine vision technology to detect winding defects in fiber coils [15,16] and end faces [17], but this technology is better suited to real-time, linear-array detection or local-range detection and is not appropriate for detecting global defects in large-array FOPs.

The image surface of a monocentric camera is made up of a large array of space-grade CMOS sensors, with a total pixel count of up to 100 million. Using the global threshold segmentation approach to find defects in FOPs is therefore more practical and effective. The FOP also contains a large number of dark filaments with low transmittance, which can nonetheless transfer energy and reveal the target's radiation properties. If the dark filaments are mistakenly identified as defects, the processed image will contain additional pitted areas that cannot be used. It is therefore more reasonable to treat these dark filaments as part of the camera's imaging non-uniformity. In this paper, a global defect detection method based on manual threshold segmentation of saturated frames is proposed in light of the defect characteristics of FOPs. This technique avoids the unintentional detection of noise and dark filaments. The efficacy and accuracy of the proposed method are confirmed by comparison with the traditional Otsu algorithm. Additionally, the various types of FOP-related image non-uniformity are described in detail. Experiments are used to determine the relative response coefficient of each pixel, and the response non-uniformity is corrected at the pixel level. The study in this paper expedites the use of monocentric cameras based on fiber relay imaging in the field of space-based surveillance, and the methodology is also transferable to large-array fiber relay image transmission systems.

2. Defect pixel detection of monocentric camera

In the context of classic optical cameras, the term “defect pixels” primarily refers to pixels, also known as “bad points,” such as bright spots, dark spots, and defective rows or columns, which cannot respond correctly because of flaws introduced during the manufacturing and packaging of image sensors. In addition to the bad pixels caused by the image sensor's inherent flaws, monocentric optical cameras also have bad pixels caused by the FOP's inherent flaws. The primary categories of FOPs’ inherent flaws are as follows [18,19] (see Fig. 1):

  • 1. During single drawing, the optical fiber is prone to produce completely opaque broken filaments, and clusters formed by more than two adjacent broken filaments are more harmful to the image. Banded clusters of broken filaments are called chicken filaments, and circular clusters are called dead spots.
  • 2. The boundary of the fiber bundle is easily deformed during secondary drawing, creating an opaque barrier called a grid.
  • 3. To make the output end face of the FOP fit perfectly with the image sensor, a completely opaque rectangular mechanical frame is embedded in the FOP in the final preparation step, which is called the frame.
  • 4. Irregularly shaped dirt and impurities may adhere to the end face of the FOP, which also reduces the transmittance of the FOP; these are known as pollutants.

Fig. 1. Types of inherent defects of FOPs: (a) chicken filaments (b) dead spots (c) grid (d) frame (e) pollutants.

The aforementioned intrinsic flaws cause the final output signal value of the corresponding pixel to be much lower than that of an unaffected pixel under uniform light incidence. The optical fiber transmittance drops as a result of the FOP's flaws, but the rate of decrease varies with the type of flaw. Hence, identifying these damaged pixels and recording their precise locations is crucial for subsequent image calibration and for enhancing the camera's overall imaging quality. The most widely used image segmentation technique is the Otsu algorithm, and it is typically used to distinguish the inherent flaws of FOPs from the normal image. The Otsu algorithm determines the threshold for image binarization by maximizing the inter-class variance. The algorithm assumes that, according to a global threshold, image pixels can be split into background and target classes; the optimal threshold is computed so that the two classes are separated to the greatest possible degree. Although the Otsu algorithm is straightforward and effective, it has the following drawbacks: (1) it is particularly sensitive to noise, so local noise is easily misdetected; (2) the inter-class variance function may exhibit double or multiple peaks when the target and background overlap in too many pixels, which lowers the algorithm's detection accuracy.
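
For reference, the following is a minimal NumPy sketch of the classical Otsu threshold that serves as the baseline for comparison in this paper; the function and variable names are illustrative and not part of the authors' implementation.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the threshold that maximizes the inter-class variance (classical Otsu)."""
    hist, bin_edges = np.histogram(image.ravel(), bins=bins)
    hist = hist.astype(float) / hist.sum()                # normalized histogram
    bin_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])

    w0 = np.cumsum(hist)                                   # background class weight
    w1 = 1.0 - w0                                          # foreground class weight
    cum_mean = np.cumsum(hist * bin_centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)               # background class mean
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)  # foreground class mean

    sigma_b2 = w0 * w1 * (mu0 - mu1) ** 2                  # inter-class variance
    return bin_centers[np.argmax(sigma_b2)]

# Pixels below the Otsu threshold would be flagged as candidate defects:
# defect_mask = image < otsu_threshold(image)
```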

According to geometric optics, as the camera's field-of-view angle $\omega$ increases, the off-axis illumination is attenuated in proportion to ${\cos ^4}\omega$, producing an image that is bright in the center and dark at the edges, i.e., vignetting. If the Otsu method is used for segmentation, the vignetted corner area with its low output signal value will be treated as a defect, causing a partial loss of image data. Furthermore, the defect ratio of a qualified FOP should not exceed 0.5% [18,20], so the inter-class variance function of the Otsu segmentation applied to the global image easily exhibits double or multiple peaks, leading to low accuracy in the segmentation of FOP defects. The FOP also contains randomly scattered dark fiber filaments with low transmittance. While their transmittance is lower than that of optical fiber monofilaments with normal image transmission, these dark filaments can nonetheless transfer images and characterize variations in target radiation brightness. The subsequent non-uniformity correction of the imaging response can fully remedy the dark-filament issue, so such filaments should not be treated as defects. The Otsu algorithm, however, mistakenly identifies these filaments as defects.

To avoid the interference of random noise, the erroneous identification of dark optical fiber filaments, and the effect of vignetting on the detection accuracy, this paper provides a straightforward, effective, and highly accurate FOP defect detection method. The precise operational flow of the detection method is as follows (a minimal code sketch is given after the list):

  • 1. The whole sensing surface of the image sensor is saturated by regulating the brightness of the integrating sphere light source and the exposure time of the camera. At this point, the output signal values of pixels affected by FOP defects will not be saturated, because little or no light passes through the defects.
  • 2. After the saturated image is collected, its gray values are normalized. The segmentation threshold is set infinitely close to 1 for the binary segmentation of the saturated image (0.99999 in this scheme, according to the image bit depth). Unsaturated pixels affected by FOP flaws are assigned the value 1, whereas saturated pixels are assigned the value 0. A binary image of the same size as a regular image is thus generated to locate the defective pixels of the FOP.
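
The following is a minimal sketch of step 2, assuming the saturated frame is available as a NumPy array; the function and variable names are illustrative rather than the authors' code.

```python
import numpy as np

def detect_fop_defects(saturated_frame, threshold=0.99999):
    """Binary defect map from a saturated frame: 1 = defect pixel, 0 = normal pixel.

    saturated_frame: raw frame captured with the sensor driven into saturation;
    pixels shadowed by FOP defects stay below full scale.
    """
    normalized = saturated_frame.astype(float) / saturated_frame.max()  # gray values in [0, 1]
    defect_map = (normalized < threshold).astype(np.uint8)              # unsaturated -> defect (1)
    return defect_map

# The filter template Image_filt used in Section 3 keeps the valid pixels,
# i.e. the complement of this map:
# image_filt = 1 - defect_map
```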

It is important to clarify that the image surface of the monocentric optical camera is made up of several image sensors and covers a wide field of view. According to geometric optics, the irradiance received at the corners and edges of the image plane is attenuated in proportion to ${\cos ^4}\omega$, and the attenuation increases with the field of view. To guarantee the overall saturation of all image sensor surfaces, the image sensor in the center of the field of view would have to be strongly oversaturated, which would also saturate the pixels behind some FOP defects in the center and make the defect identification inaccurate. To avoid defect detection errors caused by irradiance attenuation, when acquiring saturated frame images the camera's lens pointing is roughly adjusted so that the lens area corresponding to the central field of view of the image sensor being acquired faces the integrating sphere (see Fig. 2). This means the pointing of the camera lens is changed for each image sensor whose saturated frame image is captured. Using the aforementioned technique, the binary image of the positions of the FOP defects is obtained. Defective pixels are excluded when computing the global DN mean of subsequent images.

Fig. 2. Schematic diagram of monocentric camera defect detection.

To identify the five types of FOP flaws depicted in Fig. 1, we applied the defect detection method described in this paper and the Otsu algorithm, respectively. The test results are displayed in Fig. 3. For readability, white denotes a defective pixel and black a regular pixel. Comparing the detection results of the two methods shows that the defect detection approach in this paper achieves higher detection accuracy, wider coverage, and a higher degree of restoration of the defect shape than the conventional Otsu algorithm.

Fig. 3. Comparison of the defect detection results of the method in this paper and the Otsu algorithm: (a) the defects to be detected; (b) the detection results of the method in this paper; (c) the detection results of the Otsu algorithm.

3. Non-uniformity correction of imaging response for monocentric camera

Even after defect identification and dark field calibration, the output gray values of the pixels of an image captured by the camera under uniform light incidence are still inconsistent, i.e., non-uniform. The numerous causes of this imaging non-uniformity can be loosely categorized into three groups:

  • 1. With an increase in the field angle $\omega$, the irradiance at the edge of the field is attenuated, relative to the center of the field, in proportion to ${\cos ^4}\omega$. The attenuation is most pronounced when several image sensors are merged into a wide field of view [21].
  • 2. There are variations in the response coefficients between pixels, leading to spurious fixed “patterns”.
  • 3. The image sensor will also receive some non-uniform radiation because of stains (dust and scratches) on the surfaces of the optical components used for imaging, stray light from the calibration instrument and the surroundings during the non-uniformity correction of the imaging response, and other factors.

The introduction of FOPs makes the pixel response non-uniformity worse. Figures 4(a), (b), and (c) depict three different types of pixel response non-uniformity caused by an FOP:

  • 1. Dark filaments (or clusters of dark filaments) are fibers that can still transmit light but with a transmittance lower than the overall average of the FOP; the response coefficients of the corresponding pixels are lower than those of normal pixels, yet they can still characterize the radiation characteristics of the target.
  • 2. After a single drawing, the overall light transmittance between the fiber bundles is uneven, which easily produces hexagonal “speckles” of alternating light and dark.
  • 3. The cladding of each monofilament is opaque; after stacking it eventually shows up in the image as slanted stripes of alternating light and dark.

Fig. 4. Response non-uniformity phenomena caused by the FOP: (a) (clustered) dark filaments (b) hexagonal spots (c) inclined stripes.

Whatever the root cause, the aforementioned non-uniform imaging response eventually results in inconsistent responses between image sensor elements. It is therefore necessary to correct the non-uniformity of the imaging response of calibrated images in order to remove the imaging non-uniformity under uniform illumination conditions.

3.1 Linear fitting of flat-field images based on the least-squares method

Before the non-uniformity correction of the imaging response, it is important to calibrate the pixel-level relative response relationship between the camera output gray value and the entrance pupil radiance. When the entrance pupil radiance of the camera is defined as L, the output DN of the image sensor pixel with coordinate $({m,n} )$ is defined as $D{N_0}({L,m,n} )$. The pixel output DN value excluding the dark field image $Dark({t,m,n} )$ is defined as:

$$DN({L,m,n} )= D{N_0}({L,m,n} )- Dark({t,m,n} )$$

Similarly, we can denote the image sensor's pixel output DN values as $DN({L,m,n} )= [{DN({{L_1},m,n} ),DN({{L_2},m,n} ),\ldots ,DN({{L_k},m,n} )} ]$ when the entrance pupil radiance is $L = [{{L_1},{L_2},\ldots ,{L_k}} ]$. Assuming that $DN({L,m,n} )$ and L are linearly correlated, the following fitting formula can be obtained by a linear fit based on the least-squares method:

$$DN({L,m,n} )= R({m,n} )\times L + B({m,n} )$$
where, $R({m,n} )$ is the imaging response coefficient of the pixel with coordinate $({m,n} )$, and $B({m,n} )$ is the fixed background output of the pixel with coordinate $({m,n} )$. The value of $B({m,n} )$ will be close to zero if the image sensor has strong linearity and a small offset. To obtain the flat-field image, which filters out the defect pixels and subtracts the dark field, the defect detection template $Imag{e_{filt}}({m,n} )$ is multiplied by $DN({L,m,n} )$. When the entrance pupil radiance of the camera is L, the mean of the flat-field image can be derived as:
$$\overline {DN(L )} = \frac{{\sum\limits_m {\sum\limits_n {DN({L,m,n} )\times Imag{e_{filt}}({m,n} )} } }}{{M \times N - {N_{filt}}}}$$
where, ${N_{filt}}$ stands for the number of defective pixels of the image sensor, M for the number of rows of pixels, and N for the number of columns of pixels. The mean values of the flat-field image are then $\overline {DN(L )} = [{\overline {DN({{L_1}} )} ,\overline {DN({{L_2}} )} ,\ldots ,\overline {DN({{L_k}} )} } ]$ when the entrance pupil radiance is $L = [{{L_1},{L_2},\ldots ,{L_k}} ]$, respectively. Similarly, assuming that $\overline {DN(L )}$ and L are linearly correlated, the following fitting formula can be obtained by a linear fit based on the least-squares method:
$$\overline {DN(L )} = \overline R \times L + \overline B$$
where $\overline R$ is the image sensor's average imaging response coefficient and $\overline B$ is its average fixed background output. Twenty frames were acquired at each radiance level and superimposed (pixel-wise summed and then averaged). The image mean $\overline {DN(L )}$ was obtained according to formula (3), and the linear fit between $\overline {DN(L )}$ and L was performed to obtain $\overline R$ and $\overline B$. The fitting results are shown in Fig. 5.
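
The per-pixel fit of formula (2) and the image-mean fit of formulas (3) and (4) can be sketched as follows, assuming the dark-subtracted flat-field frames at the K radiance levels are stacked in a NumPy array; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def fit_response(frames, radiances, image_filt):
    """Least-squares fit of DN versus radiance, per pixel and for the image mean.

    frames:     (K, M, N) dark-subtracted flat-field frames at K radiance levels
    radiances:  (K,) entrance-pupil radiance levels L_1 ... L_K
    image_filt: (M, N) binary template, 1 = valid pixel, 0 = defect pixel
    Returns (R, B, R_bar, B_bar) following formulas (2) and (4).
    """
    K, M, N = frames.shape

    # Per-pixel fit DN(L, m, n) = R(m, n) * L + B(m, n), formula (2)
    A = np.vstack([radiances, np.ones(K)]).T                     # (K, 2) design matrix
    coeffs, *_ = np.linalg.lstsq(A, frames.reshape(K, -1), rcond=None)
    R = coeffs[0].reshape(M, N)
    B = coeffs[1].reshape(M, N)

    # Mean of the defect-filtered flat-field image at each level, formula (3)
    n_valid = image_filt.sum()                                   # M*N minus the defect count
    dn_mean = (frames * image_filt).reshape(K, -1).sum(axis=1) / n_valid

    # Fit of the image mean against radiance, formula (4)
    R_bar, B_bar = np.polyfit(radiances, dn_mean, 1)
    return R, B, R_bar, B_bar
```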

Fig. 5. The least-square method-based linear fitting graph between the image means and radiance.

Figure 5 shows intuitively that the image sensor responds nearly linearly to the radiance, but the strength of this linear relationship still needs to be verified. First, the difference between the actual image output mean and the linear fitting value at each radiance level is obtained, namely the residual $\delta$. In linear regression analysis, the residual $\delta$ obeys the normal distribution $\textrm{N}({\overline \delta ,{\sigma_\delta }^2} )$, where $\overline \delta $ is the residual mean and ${\sigma _\delta }$ is the residual standard deviation. Define the standardized residual $\delta ^{\prime} = {{({\delta - \overline \delta } )} / {{\sigma _\delta }}}$, which follows the standard normal distribution $\textrm{N}({0,1} )$. The residuals and standardized residuals between the actual output means and the fitted values at each brightness level are given in Table 1. The standardized residual has only a 0.05 probability of falling outside the range $({ - 2,2} )$; at the 95% confidence level, an experimental point is classified as abnormal and excluded from the regression fit if its standardized residual falls outside this interval. Table 1 shows that the standardized residuals of all data points used in the fitting fall within the interval $({ - 2,2} )$. The correlation coefficient between the radiance level and the actual image output gray mean was also computed; its value of 99.99%, nearly 1, indicates a strong positive linear correlation between the two.
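
A brief sketch of the residual screening described above, using signed residuals (the usual convention in regression analysis); the function name is illustrative.

```python
import numpy as np

def standardized_residuals(dn_mean, radiances, R_bar, B_bar):
    """Residuals and standardized residuals of the fit in formula (4).

    Points whose standardized residual falls outside (-2, 2) would be treated
    as abnormal at the 95% confidence level and excluded from the fit.
    """
    fitted = R_bar * radiances + B_bar
    delta = dn_mean - fitted                              # residuals delta
    delta_std = (delta - delta.mean()) / delta.std()      # standardized residuals
    outliers = np.abs(delta_std) >= 2
    return delta, delta_std, outliers
```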

Table 1. The fitting residuals and standard residuals between image means and radiance

3.2 Non-uniformity correction of imaging response

Non-uniformity correction of the imaging response means that, after correction, the output DN values of all pixels of the image sensor are consistent when the camera is exposed to uniform light. Commonly used flat-field correction techniques include the single-point, two-point, and multi-point methods. If both the gray value of the image to be calibrated and the non-uniformity correction coefficient of the imaging response are known, the single-point method can determine the gray value of the image after non-uniformity correction. This method requires the camera to have high linearity, and after subtracting the dark field signal the output signal of the camera must have almost no offset, i.e., the offset can be regarded as nearly zero. The principal formula of the single-point method can be expressed as:

$$D{N_{rev}} = {R_{rev}} \times D{N_{test}}$$
where ${R_{rev}}$ is the non-uniformity correction coefficient of the imaging response, $D{N_{test}}$ is the output gray value of the image to be calibrated after dark field subtraction, and $D{N_{rev}}$ is the output gray value after non-uniformity correction. It is clear from Section 3.1 that the image sensor exhibits a strongly linear response. Moreover, even after subtracting the dark field, the camera's overall output signal has nearly no offset; hence this work uses the single-point method to correct the non-uniformity.

To guarantee that the total irradiance received by the corrected image plane stays constant, the pixel output DN value after correction is set to the image mean value. If the output DN value of the pixel with coordinate $({m,n} )$ after correction is $D{N_{rev}}({L,m,n} )$, the following formula holds:

$$D{N_{rev}}({L,m,n} )= \overline {DN(L )} \times Imag{e_{filt}}({m,n} )$$
By eliminating L from formulas (2) and (4) and substituting $\overline {DN(L )}$ into formula (6), the image output DN value after non-uniformity correction is obtained:
$$D{N_{rev}}({L,m,n} )= \left[ {\frac{{\overline R \times DN({L,m,n} )}}{{R({m,n} )}} + \overline B - \frac{{\overline R }}{{R({m,n} )}}B({m,n} )} \right] \times Imag{e_{filt}}({m,n} )$$

After subtracting the dark field image, it is found that $\overline B - \frac{{\overline R }}{{R({m,n} )}}B({m,n} )$ approaches zero. Defining the non-uniformity correction coefficient of the pixel imaging response as ${R_{rev}}({m,n} )$, the following formula holds:

$${R_{rev}}({m,n} )= \frac{{\overline R \times Imag{e_{filt}}({m,n} )}}{{R({m,n} )}}$$

Combining formulas (1) and (8), formula (7) can be rewritten as:

$$D{N_{rev}}({m,n} )= {R_{rev}}({m,n} )\times [{D{N_0}({m,n} )- Dark({t,m,n} )} ]$$

In formula (9), the image $D{N_0}({m,n} )$ to be calibrated is a known quantity, and the value of ${R_{rev}}({m,n} )$ can be obtained from formula (8). There are typically two ways to obtain the dark field $Dark({t,m,n} )$ of the image to be calibrated.

Case 1: Capture the dark field image under the same shooting conditions as the image to be calibrated.

Case 2: Keep the operating temperature the same as for the image to be calibrated, fit the dark current value linearly against the integration time, and use the fitted coefficients to derive the dark current of the image to be calibrated from its known exposure time.

The dark field of the image to be calibrated is best obtained using Case 1. To sum up, when the image $D{N_0}({m,n} )$ to be calibrated is input, the image $D{N_{rev}}({m,n} )$ after non-uniformity correction of the imaging response can be obtained according to the correction formula (9).
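
A minimal sketch of formulas (8) and (9), computing the per-pixel correction coefficients and applying the single-point correction; the guard against zero-slope (defect) pixels and all names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def correction_coefficients(R, R_bar, image_filt):
    """Per-pixel non-uniformity correction coefficients R_rev(m, n), formula (8)."""
    R_safe = np.where(R != 0, R, 1.0)        # avoid division by zero at defect pixels
    return R_bar * image_filt / R_safe       # defect pixels are zeroed by image_filt

def apply_correction(raw_frame, dark_frame, R_rev):
    """Single-point correction of an image to be calibrated, formula (9)."""
    return R_rev * (raw_frame.astype(float) - dark_frame.astype(float))
```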

3.3 Results of imaging response non-uniformity correction

The CMOS installed on the monocentric optical camera is an aerospace-grade image sensor, employed primarily for the detection of weak and indistinct space targets. This sensor therefore has high sensitivity and gain and is easily saturated. A preliminary test showed that operating the integrating sphere with only one bulb turned on is sufficient for the experiment. The camera visor is opened, the front imaging lens is placed close to the light outlet of the integrating sphere, and the camera's entire field of view receives uniform light incidence from the sphere (see Fig. 6). When the camera integration time is set to the unit integration time (${t_{unit}}$, 51.2 microseconds), the camera output DN value is approximately one-tenth of the saturation DN value. Accordingly, we set the camera's integration time to $t = [{{t_{unit}},2{t_{unit}},\ldots ,9{t_{unit}}} ]$. Twenty frames were captured and superimposed under each integration-time condition. It is important to note that temperature control or heat dissipation measures should be used to keep the image sensor's operating temperature constant for all integration-time conditions.

Fig. 6. Diagram of monocentric camera imaging correction experiment.

According to $L \propto E(\lambda )\times t$, with the target irradiance $E(\lambda )$ fixed, the entrance pupil radiance L equivalent to the collected signal is proportional to the integration time t. Because the integration times are on the order of the unit integration time, the change in the dark current between integration-time conditions is very small and can be neglected. Controlling the integration time is therefore equivalent to controlling the entrance pupil radiance when the brightness of the integrating sphere light source is fixed. The underlying assumption of this equivalence is that the integration time remains on the order of the unit integration time; for integration times on the order of seconds, the dark current output changes significantly and the equivalence no longer holds. Moreover, because absolute radiometric calibration requires calibrating the radiance of the integrating sphere, this approach applies only to relative radiometric calibration. In addition, the very short exposure times introduce less random noise, which reduces the impact of noise on the calibration accuracy.
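
Under this equivalence, the relative radiance levels used for the fit in Section 3.1 can simply be taken proportional to the integration times; a small sketch under that assumption (the 51.2 μs unit time is from the text, the variable names are illustrative):

```python
import numpy as np

T_UNIT = 51.2e-6                       # unit integration time in seconds (from the text)
t = np.arange(1, 10) * T_UNIT          # integration times t_unit, 2*t_unit, ..., 9*t_unit
relative_L = t / T_UNIT                # relative radiance levels 1 ... 9 used as L in the fit
```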

The image plane of the monocentric optical camera is composed of multiple FOP-coupled image sensors. Figure 7 displays the mean output gray values of each image sensor under uniform light incidence at various brightness levels.

Fig. 7. The DN mean values of all spliced image sensors under different irradiance conditions.

Image sensors 1, 4, 5, and 8 are far from the central field of view, hence their DN values are low; image sensors 2, 3, 6, and 7 lie in the central field of view and have high DN values. We must therefore choose an appropriate reference pixel output value that minimizes the average pixel response non-uniformity after correction, which can be achieved by using the effective-pixel output mean of the spliced image sensors [22]. With the number of spliced image sensors set to k, formula (3) becomes formula (10). The effective-pixel output mean and the fitting curve for each brightness level are displayed in Fig. 7; the fitting curve can be used to determine the average response coefficient of the image sensors after splicing.

$$\overline {DN} = \frac{{\sum\limits_k {DN({k,m,n} )\times Imag{e_{filt}}({k,m,n} )} }}{{k \times M \times N - \sum\limits_k {{N_{filt}}(k )} }}$$
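
A minimal sketch of formula (10), assuming the dark-subtracted frames and defect templates of the k spliced sensors are stacked in NumPy arrays; the names are illustrative.

```python
import numpy as np

def stitched_mean(frames, templates):
    """Effective-pixel output mean over k spliced image sensors, formula (10).

    frames:    (k, M, N) dark-subtracted frames, one per sensor, at one brightness level
    templates: (k, M, N) binary defect templates, 1 = valid pixel, 0 = defect pixel
    """
    valid_sum = (frames * templates).sum()
    n_valid = templates.sum()          # k*M*N minus the total number of defect pixels
    return valid_sum / n_valid
```
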
We used the technique outlined in Section 3.2 to correct the non-uniformity of the flat-field images. The images before and after correction are shown in Fig. 8. The results demonstrate that the radiation calibration method presented in this study not only identifies FOP flaws but also effectively corrects imaging non-uniformity phenomena such as dark filaments, hexagonal “speckles”, and oblique fringes.

Fig. 8. The flat field images before and after non-uniformity correction (IS-X stands for image sensor X): (a) Before correction; (b) After correction.

The accuracy of the non-uniformity correction is evaluated using the image non-uniformity, defined by the following formula as the ratio of the standard deviation to the mean of the output signal values of the effective pixels of the image sensor under uniform light incidence:

$$U = \frac{1}{{\overline {DN} }}\sqrt {\frac{{\sum\limits_m {\sum\limits_n {\sum\limits_k {{{({DN({Imag{e_{filt}}({k,m,n} )} )- \overline {DN} } )}^2}} } } }}{{k \times M \times N - \sum\limits_k {{N_{filt}}(k )} }}}$$
where, U stands for the image non-uniformity, $DN({Imag{e_{filt}}({k,m,n} )} )$ is the signal output value of an effective pixel selected by the binary defect template (minus the dark background), and $\overline {DN}$ is the mean signal output of all effective pixels of the spliced sensors (minus the dark background). Measuring the non-uniformity of the images before and after correction gave the following results: the overall non-uniformity of the mosaic image was 10.01% before the non-uniformity correction and was reduced to 0.78% after the correction processing described in this study. This demonstrates that the image non-uniformity of the monocentric optical camera can be corrected using the method described in this work. To confirm that the non-uniformity correction coefficients of the imaging response have the same corrective effect on actual images, we used the monocentric optical camera to image a real scene and applied the non-uniformity correction to the image. Figure 9 presents the calibration results. Comparing the two figures shows that the non-uniform phenomena produced by the FOP, such as hexagonal specks, dark filaments, and oblique fringes, are homogenized after correction and the overall image is crisper.
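
Formula (11) can be sketched as follows, again with illustrative names and NumPy arrays assumed:

```python
import numpy as np

def image_non_uniformity(frames, templates):
    """Non-uniformity U of the mosaic image, formula (11): std / mean over effective pixels.

    frames:    (k, M, N) dark-subtracted frames of the k spliced sensors
    templates: (k, M, N) binary defect templates, 1 = valid pixel, 0 = defect pixel
    """
    valid = templates.astype(bool)
    dn = frames[valid].astype(float)   # effective (defect-filtered) pixel values
    return dn.std() / dn.mean()
```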

Fig. 9. The real-scene images before and after non-uniformity correction: (a) Before correction; (b) After correction.

4. Conclusion

In this study, imaging defect detection and response non-uniformity correction are carried out for a novel type of space-based surveillance payload: a monocentric optical camera based on fiber optic relay image transmission. First, a defect detection method based on manual binarized segmentation of saturated frames is proposed in light of the defect characteristics of FOPs. The results show that this method is more applicable and achieves higher detection accuracy than the traditional Otsu image segmentation algorithm. To correct the non-uniformity of the imaging response, the relative response coefficients of the pixels are then calibrated from flat-field images at different radiance levels, and the correction coefficients are derived from the experimental data. Finally, the pixel non-uniformity before and after correction is analyzed. The results show that after correction the pixel response non-uniformity falls from 10.01% to 0.78%. The method described in this study can reduce the pitted points on the monocentric camera's image surface, remove the interference of the camera's response error, and enable the camera image to reflect actual target characteristics. The findings can also be applied to other optical fiber relay image transmission systems. Next, we will use the findings of this paper to process real star maps while examining the target energy concentration, star detection capability, and other indicators of the processed images, in order to hasten the application of the monocentric camera based on fiber relay imaging in the field of space-based surveillance.

Funding

National Natural Science Foundation of China (62105331).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. E. Ford, “System optimization of compact monocentric lens imagers,” in Frontiers in Optics (Optica Publishing Group, 2013), p. FM4I.2.

2. I. Stamenov, I. Agurok, and J. E. Ford, “Optimization of high-performance monocentric lenses,” Appl. Opt. 52(34), 8287–8304 (2013). [CrossRef]  

3. I. Stamenov, I. P. Agurok, and J. E. Ford, “Optimization of two-glass monocentric lenses for compact panoramic imagers: general aberration analysis and specific designs,” Appl. Opt. 51(31), 7648–7661 (2012). [CrossRef]  

4. J. E. Ford, I. Stamenov, S. Olivas, G. Schuster, N. Motamedi, I. P. Agurok, R. A. Stack, A. Johnson, and R. Morrison, “Fiber-coupled monocentric lens imaging,” in Computational Optical Sensing and Imaging (Optica Publishing Group, 2013), p. CW4C.2.

5. D. Golish, E. Vera, K. Kelly, Q. Gong, P. Jansen, J. Hughes, D. Kittle, D. Brady, and M. Gehm, “Development of a scalable image formation pipeline for multiscale gigapixel photography,” Opt. Express 20(20), 22048–22062 (2012). [CrossRef]  

6. A. Arianpour, N. Motamedi, I. P. Agurok, and J. E. Ford, “Enhanced signal coupling in wide-field fiber-coupled imagers,” Opt. Express 23(4), 5285–5299 (2015). [CrossRef]  

7. S. J. Olivas, A. Arianpour, I. Stamenov, R. Morrison, R. A. Stack, A. R. Johnson, I. P. Agurok, and J. E. Ford, “Image processing for cameras with fiber bundle image relay,” Appl. Opt. 54(5), 1124–1137 (2015). [CrossRef]  

8. I. Stamenov, A. Arianpour, S. J. Olivas, I. P. Agurok, A. R. Johnson, R. A. Stack, R. L. Morrison, and J. E. Ford, “Panoramic monocentric imaging using fiber-coupled focal planes,” Opt. Express 22(26), 31708–31721 (2014). [CrossRef]  

9. J. Zhang, X. Wang, X. Wu, C. Yang, and Y. Chen, “Wide-viewing integral imaging using fiber-coupled monocentric lens array,” Opt. Express 23(18), 23339–23347 (2015). [CrossRef]  

10. Y. Huang, D. Xie, C. Yan, and C. Wu, “Effect of Fiber Optic Plate on Centroid Locating Accuracy of Monocentric Imager,” Appl. Sci. 11(5), 1993 (2021). [CrossRef]  

11. Y. Wu, Q. Hu, and Z. Cao, “Optical fiber panel shadow detection based on the improved Canny operator,” Optical Instruments 30, 5 (2008).

12. Y. Wu, M. Wang, and Z. Cao, “Optical fiber panel shadow self-adaptive detection,” Optical Instruments 32, 5 (2010).

13. K. Zhang, “Study On Defect Detection Technology of Optical Fiber Element for Image Transmission,” (North University of China, 2018).

14. Y. Bingqian, W. Mingquan, Z. Junsheng, and G. Jinkai, “Research of fiber optical face plate defects segmentation based on improved watershed algorithm,” Proc. SPIE 10452, 104526H (2017). [CrossRef]  

15. X. Chen, R. Yang, C. Guo, and Q. Zhang, “FOC winding defect detection based on improved texture features and low-rank representation model,” Appl. Opt. 61(19), 5599–5607 (2022). [CrossRef]  

16. R. Yang, X. Chen, and C. Guo, “Automated defect detection and classification for fiber-optic coil based on wavelet transform and self-adaptive GA-SVM,” Appl. Opt. 60(32), 10140–10150 (2021). [CrossRef]  

17. S. J. Song, J. F. Jing, and W. Cheng, “Online Monitoring System for Macro-Fatigue Characteristics of Glass Fiber Composite Materials Based on Machine Vision,” IEEE Trans. on Instrumentation and Measurement 71, 1–12 (2022). [CrossRef]  

18. P. Jiao, J. Jia, Y. Fu, L. Zhang, Y. Wang, J. Wang, Y. Zhou, P. Shi, R. Zhao, and Y. Huang, “Detection of blemish for fiber-optic imaging elements,” Opt. Eng. 59(05), 1–053105 (2020). [CrossRef]  

19. P. Jiao, K. Zhou, Y. Huang, R. Zhao, Y. Zhang, Y. Wang, Y. Fu, J. Wang, Y. Zhou, Y. Du, and J. Jia, “Formation mechanism of blemishes in a fiber-optic imaging element,” Appl. Opt. 61(8), 1947–1957 (2022). [CrossRef]  

20. P. Jiao, Y. Huang, Y. Wang, Y. Zhou, Y. Fu, and J. Wang, “Research progress of fiber optic imaging elements,” in Conference on Applied Optics and Photonics China (2020).

21. Y. Ji, C. Zeng, F. Tan, A. Feng, and J. Han, “Non-uniformity correction of wide field of view imaging system,” Opt. Express 30(12), 22123–22134 (2022). [CrossRef]  

22. X. Li, H. Liu, J. Sun, and C. Xue, “Relative Radiometric Calibration for Space Camera with Optical Focal Plane Assembly,” Acta Opt. Sin. 37(8), 0828006 (2017). [CrossRef]  
