## Abstract

In this paper, a novel approach to effectively extracting the location coordinates of 3-D objects by employing a blur metric is proposed. From elemental images of 3-D objects, plane object images (POIs) are reconstructed along the output plane using the computational integral imaging reconstruction (CIIR) algorithm, in which only the POIs reconstructed on the output planes where the 3-D objects were originally located are focused, whereas the others are blurred. Therefore, by calculating the blur metrics of the reconstructed POIs, the depth data of the 3-D objects can be extracted. That is, the blur metric is lowest at the focused point, but it starts to increase as the reconstruction plane moves away from that point. Accordingly, by finding the points of inflection in the map of blur metric variation, the output planes where the objects were located are finally detected. To show the feasibility of the proposed scheme, some experiments were carried out and their results are presented as well.

© 2008 Optical Society of America

## 1. Introduction

Extraction of the depth cue of three-dimensional (3-D) objects in the real world is one of the important issues in the fields of machine vision, target recognition, tracking, and video surveillance [1,2].

For this purpose, a camera system with an active range-finding sensor, called a depth camera, has been introduced [3]. It can provide depth data of 3-D objects in space in addition to color image data. However, this camera system was basically developed for television program production, so its structure is too complex and its cost too high for commonplace uses.

Stereo camera-based depth extraction schemes [4,5] have also been proposed. In these systems, stereoscopic video images of 3-D objects are captured with a stereo camera, from which the depth data of the 3-D objects are extracted by estimating the disparities between the right and left captured images.

Recently, some new integral imaging-based depth extraction methods have been proposed [6]. Since the elemental images picked up in an integral imaging system have their own perspectives of the 3-D objects, disparities must exist between them, and the depth data can be estimated from these disparities. However, because the elemental images are captured through a pinhole (or lenslet) array, the resolution of each picked-up elemental image is severely reduced according to the dimensions of the employed pinhole array. Hence, it may be difficult to obtain accurate depth data of 3-D objects from these low-resolution elemental images in the conventional integral imaging-based depth extraction methods.

Basically, there are two kinds of integral imaging reconstruction techniques: the optical integral imaging reconstruction (OIIR) technique [7] and the computational integral imaging reconstruction (CIIR) technique [8–13]. The CIIR technique computationally reconstructs object images by inversely mapping the elemental images through a virtual pinhole (or lenslet) array based on ray optics. In this method, object images are reconstructed as a set of depth-dependent plane object images (POIs) along the output plane.

Recently, a CIIR-based 3-D image correlator system for extracting the location data of a 3-D target in space was proposed [9]. In this system, elemental images of the reference and target objects were picked up with lenslet arrays, and from these elemental images, reference and target plane images were reconstructed on the output plane by using the CIIR technique. Then, simply by cross-correlating the reconstructed reference and target plane images, the longitudinal distances of the target objects could be extracted.

The ability to reconstruct the POIs of 3-D objects along the output plane is a unique feature of the CIIR method. In this approach, only the POIs reconstructed on the output planes where the 3-D objects were originally located are clearly focused, whereas the ones reconstructed away from these focused planes become increasingly out of focus. That is, the reconstructed output images consist of clearly focused and blurred POIs. Therefore, the depth data of 3-D objects in a scene can be extracted by estimating the blur metric of each POI [14–16].

Since blur is perceptually most apparent along edges and in textured regions, blur measurement is based on the smoothing effect of blur on edges and consequently attempts to measure the spread of edges [14,15]. The blur metric of each POI is estimated to determine which POIs are in focus and which are out of focus. With these estimated blur metrics, we can locate the points of inflection. At such a point, the gradient of the blur metric changes from a negative to a positive value, which means that the POI reconstructed on the plane where an object was originally located is focused, while before and after this plane the reconstructed images start to go out of focus. In other words, the blur metric is lowest at the focused point and increases sharply moving away from it. Therefore, from these estimated blur metrics, the output planes where the objects were originally located can be detected.

Accordingly, in this paper, a novel approach to effectively extracting the depth data of 3-D objects in a scene by estimating the blur metrics of POIs reconstructed with the CIIR technique is proposed, and its performance is analyzed. In addition, to test the feasibility of the proposed method, some experiments with test objects are carried out and the results are discussed.

## 2. Operational characteristics of the integral imaging system

Fundamentally, an integral imaging system consists of two processes, pickup and display, as shown in Fig. 1. In the pickup process, the intensity and directional information of the rays coming from a 3-D object through a pinhole array is recorded with a charge-coupled device (CCD) camera in the form of a two-dimensional (2-D) elemental image array (EIA) representing different perspectives of the 3-D object, as shown in Fig. 1(a). In the display process, which is the reverse of the pickup process, the recorded EIA is displayed on a display panel such as a liquid crystal display (LCD), and the 3-D image is then optically reconstructed and observed through a display pinhole array, as shown in Fig. 1(b).
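The pickup geometry can be made concrete with a toy computational model: a ray recorded at offset (dy, dx) behind a pinhole originated from the object point at offset −(z/g)·(dy, dx) on the object plane. The sketch below is ours, not the paper's code; the function name, parameters, and defaults are illustrative (the defaults mirror the dimensions used later in the paper), and it assumes a planar object sampled on the same pixel grid as the EIA.

```python
import numpy as np

def pick_up_eia(obj, z, g=3.0, pitch=38, n_lens=(20, 34)):
    """Toy computational pickup: record a planar object at distance z
    through a virtual pinhole array (gap g to the EIA plane) into an
    elemental image array. All geometry is illustrative."""
    H, W = n_lens[0] * pitch, n_lens[1] * pitch
    eia = np.zeros((H, W))
    for ly in range(n_lens[0]):
        for lx in range(n_lens[1]):
            # pinhole center, in EIA pixel coordinates
            cy, cx = ly * pitch + pitch // 2, lx * pitch + pitch // 2
            for dy in range(-(pitch // 2), pitch // 2):
                for dx in range(-(pitch // 2), pitch // 2):
                    # the ray landing at offset (dy, dx) behind the pinhole
                    # came from the object point at offset -(z/g)*(dy, dx)
                    oy = int(round(cy - dy * z / g))
                    ox = int(round(cx - dx * z / g))
                    if 0 <= oy < obj.shape[0] and 0 <= ox < obj.shape[1]:
                        eia[cy + dy, cx + dx] = obj[oy, ox]
    return eia
```

Running this once per object plane and summing the EIAs approximates the pickup of a multi-plane scene, ignoring occlusion.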

## 3. Proposed blur metric-based depth extraction method

Figure 2 shows a flowchart of the proposed method for effectively extracting the depth data of 3-D objects from the computationally reconstructed POIs by employing a blur metric. The proposed scheme consists of three parts: pickup, reconstruction, and depth extraction.

First, the intensity and directional information of the rays coming from a virtual 3-D object is digitally generated in the form of 2-D elemental images representing different perspectives of the virtual 3-D object. From these computationally generated elemental images, a set of POIs of the 3-D object is reconstructed along the output plane by using the CIIR technique [11]. Then, the blur metric of each reconstructed POI is estimated, and finally the depth data of the objects are extracted by examining the points of inflection of the estimated blur metric. Specifically, the blur metric is lowest at the focused point and increases sharply moving away from it. Therefore, from the estimated blur metric, the output planes where the objects were originally located can be detected.

#### 3.1 Pickup part

As illustrated in Fig. 3, the 3-D test object used in this paper consists of three 2-D objects, named 'Target 1', 'Target 2' and 'Target 3', which are located at *z_{1}* = 30 *mm*, *z_{2}* = 45 *mm* and *z_{3}* = 60 *mm* in front of the pinhole array, respectively. In the pickup part, elemental images of the 3-D test object are computationally picked up. Here, the distance between the virtual pinhole array and the elemental image plane is assumed to be 3 *mm*. The resolution of the picked-up EIA is 1,292 by 760 pixels because the lenslet array is assumed to consist of 34 by 20 lenslets, with each lenslet covering 38 by 38 pixels. The resolution of each of the three 2-D objects is 430 by 360 pixels, and their center locations in the 1,292 by 760 image plane are set to (291, 548), (599, 201) and (970, 520), respectively.

Figure 4 shows the computationally generated elemental image array of the test object having a resolution of 1,292 by 760 pixels.

#### 3.2 Reconstruction part

POIs of the test object can be computationally reconstructed from the picked-up elemental images of Fig. 4 by using the CIIR technique. Figure 5 shows a conceptual diagram of the CIIR technique for reconstruction of the POIs on the output plane of *z_{l}* (*z* = *L*) [11]. At the distance *z_{l}*, each picked-up elemental image is projected inversely through the corresponding virtual pinhole. By digitally simulating this reconstruction process based on ray optics, each elemental image is inversely magnified according to the magnification factor *M* = *L*/*g*, where *M* is the ratio of the distance between the virtual pinhole array and the reconstructed image plane (*L*) to the distance between the virtual pinhole array and the EIA plane (*g*).

For *M* > 1, the inversely mapped images through each virtual pinhole overlap and are summed with each other on the reconstructed image plane of *z_{l}*. Assuming that the vertical and horizontal sizes of an elemental image are *a* and *b*, respectively, the vertical and horizontal sizes of the mapped image on the reconstructed image plane become *Ma* and *Mb* according to the magnification factor *M* = *L*/*g*. The enlarged elemental image is overlapped and summed at the corresponding pixels of the reconstructed image plane. For the complete reconstruction of the POI at a given distance, this process is repeated for all of the picked-up elemental images through each corresponding pinhole.
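The magnify-and-sum procedure can be sketched as follows: each elemental image is magnified by M = L/g (nearest-neighbour resampling here, for simplicity), back-projected around its pinhole center, and the overlapping contributions are summed, with the overlap count used for normalization. The flip of each elemental image reflects the image inversion of pinhole back-projection and is our reading of the geometry; all names in this sketch are ours.

```python
import numpy as np

def ciir_poi(eia, L, g=3.0, pitch=38):
    """Minimal CIIR sketch: back-project each elemental image, magnified
    by M = L/g, through its virtual pinhole and average the overlaps."""
    M = L / g
    H, W = eia.shape
    acc = np.zeros((H, W))   # sum of back-projected intensities
    cnt = np.zeros((H, W))   # overlap count for normalization
    for ly in range(H // pitch):
        for lx in range(W // pitch):
            elem = eia[ly*pitch:(ly+1)*pitch, lx*pitch:(lx+1)*pitch]
            cy, cx = ly*pitch + pitch/2.0, lx*pitch + pitch/2.0  # pinhole center
            size = int(round(pitch * M))
            # nearest-neighbour magnified (and flipped) elemental image
            idx = np.minimum((np.arange(size) / M).astype(int), pitch - 1)
            big = elem[::-1, ::-1][np.ix_(idx, idx)]
            y0, x0 = int(round(cy - size/2.0)), int(round(cx - size/2.0))
            ys = slice(max(y0, 0), min(y0 + size, H))
            xs = slice(max(x0, 0), min(x0 + size, W))
            acc[ys, xs] += big[ys.start - y0:ys.stop - y0,
                               xs.start - x0:xs.stop - x0]
            cnt[ys, xs] += 1
    return acc / np.maximum(cnt, 1)
```

Calling this for a sweep of distances L yields the set of POIs described in the next paragraph.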

By varying the reconstruction plane, i.e., increasing the distance from the pinhole array in small incremental steps of *Δz*, a set of POIs can finally be reconstructed.

Figure 6 shows nine POIs of the test object reconstructed from the picked-up EIA of Fig. 4 by using the CIIR technique with an incremental step of *Δz* = 3 *mm*.

Figure 6 shows that only the POIs reconstructed on the output planes of *z_{1}* = 30 *mm*, *z_{2}* = 45 *mm* and *z_{3}* = 60 *mm*, where the objects were located during the pickup process, are clearly focused; away from these planes, the POIs go out of focus and appear blurred. Therefore, the depth data of the objects can be obtained by estimating a focusing (or defocusing) parameter of the reconstructed POIs, the so-called blur metric.

#### 3.3 Depth extraction part

As shown in Fig. 6, the CIIR technique provides a set of POIs reconstructed along the output plane as the distance *z* from the pinhole array is increased in small steps.

Here, the blur metrics of the reconstructed POIs are estimated, and based on these values, the depth data of the test objects are finally extracted by examining the points of inflection of the estimated blur metrics [15]. From these depth data, the output planes where the test objects were originally located can finally be detected.

### 3.3.1 Estimation of the blur metric

First, an input gray or color image *I(x,y)* is given, in which *x* and *y* are the row and column coordinates in the image, respectively. Additionally, *I_{i}(x,y)* is the *i*th channel of the image *I(x,y)*, i.e., *i* = 1 for a gray image and *i* = 1, 2, 3 for the R, G, B channels of a color image. The gradient at any pixel point *P(x,y)* can be calculated by use of a 2-D directional derivative (the Sobel operator is used in this paper) as in Eq. (1).

Then, the magnitude and orientation of the gradient can be obtained from *G_{x}* and *G_{y}* as shown in Eqs. (2) and (3).

From Eq. (1), each edge point in a POI is set to zero unless it is a local maximum along a line oriented in the gradient direction (non-maximum suppression). Let *P_{e}(x_{e},y_{e})* be a local maximum edge point in a POI; then its gradient magnitude and orientation are given by |∇*I_{i}(x_{e},y_{e})*| and *θ_{i}(x_{e},y_{e})*, respectively. In order to calculate the spatial variation, the *ψ*-axis is introduced here, whose origin is at the pixel point *P_{e}(x_{e},y_{e})* and whose direction is normal to *θ_{i}(x_{e},y_{e})*, as shown in Fig. 7.
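Equations (1)–(3) amount to Sobel filtering followed by a magnitude and orientation computation; a pure-NumPy sketch (function name ours; border pixels are left at zero):

```python
import numpy as np

def sobel_gradient(img):
    """Sobel gradients G_x, G_y, plus gradient magnitude and orientation,
    in the spirit of Eqs. (1)-(3)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    H, W = img.shape
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    # correlate the 3x3 kernels over the interior pixels
    for dy in range(3):
        for dx in range(3):
            win = img[dy:H - 2 + dy, dx:W - 2 + dx]
            gx[1:-1, 1:-1] += kx[dy, dx] * win
            gy[1:-1, 1:-1] += ky[dy, dx] * win
    mag = np.hypot(gx, gy)        # Eq. (2): gradient magnitude |grad I|
    theta = np.arctan2(gy, gx)    # Eq. (3): gradient orientation
    return gx, gy, mag, theta
```

Non-maximum suppression then keeps only pixels whose magnitude is maximal along the orientation `theta`.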

_{i}(x_{e},y_{e})Consider *m _{l}(x_{l},y_{l})* and

*m*as the nearest local minima on left and right side of the local maximum point

_{r}(x_{r},y_{r})*P*, respectively i.e.,

_{e}(x_{e},y_{e})*m*is on the negative

_{l}*ψ*–axis and

*m*is on the positive

_{l}*ψ*–axis because the origin of the

*ψ*–axis is at

*P*. As a discrete probability distribution with the mean at the point of

_{e}(x_{e},y_{e})*P*, corresponding to

_{e}(x_{e},y_{e})*ψ*= 0, mathematically, the spatial variance is calculated by Eq. (4).
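Under the reading that Eq. (4) treats the gradient-magnitude profile between m_l and m_r as a discrete probability distribution with its mean pinned at ψ = 0, the spatial variance can be sketched as follows. The exact normalization in Eq. (4) is an assumption on our part, and the function name is ours.

```python
import numpy as np

def edge_spread_variance(profile, center):
    """Spatial variance of an edge in the spirit of Eq. (4).
    `profile` is the gradient magnitude sampled along the psi-axis through
    a local-maximum edge point at index `center`; the samples between the
    nearest local minima on either side are treated as a discrete
    distribution with mean at psi = 0 (an assumed normalization)."""
    # walk left to the nearest local minimum m_l
    l = center
    while l > 0 and profile[l - 1] < profile[l]:
        l -= 1
    # walk right to the nearest local minimum m_r
    r = center
    while r < len(profile) - 1 and profile[r + 1] < profile[r]:
        r += 1
    psi = np.arange(l, r + 1) - center
    w = np.asarray(profile[l:r + 1], float)
    w = w / w.sum()                       # discrete probability distribution
    return float(np.sum(w * psi ** 2))    # variance about psi = 0
```

A sharp edge (narrow profile) yields a small variance; a blurred edge (wide ramp) yields a large one, which is the behavior the blur metric relies on.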

The blur metric *β_{i}(P_{e})* for a local maximum edge point *P_{e}(x_{e},y_{e})* can be obtained by computing the weighted average of the standard deviation *σ_{i}* and the edge magnitude |∇*I_{i}(P_{e})*|, as given by Eq. (5) [15], where *σ_{imax}* and |∇*I_{i}(P_{e})*|_{max} are normalization terms denoting the maximum values over all standard deviations and over all edge gradient magnitudes, respectively. The weight *η_{β}* is related to the image contrast. For the case of identical weights for *σ_{i}* and |∇*I_{i}(P_{e})*|, regardless of image contrast, Eq. (5) can be simplified into Eq. (6). This makes it possible to eliminate the processing time required for the multi-scale retinex (MSR) algorithm [17]. Therefore, the blur measure of the edges of a POI reconstructed on an arbitrary output plane can be calculated by Eq. (6).
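Since the precise weighting of Eq. (5) cannot be recovered from the text, the sketch below shows one plausible form consistent with the stated behavior: blur grows with the normalized edge spread σ and falls with the normalized edge contrast, and η_β = 0.5 gives the equal-weight case of Eq. (6). This is an assumed reading, not the paper's exact formula; all names are ours.

```python
def blur_metric(sigma, grad_mag, sigma_max, grad_max, eta=0.5):
    """Per-edge-point blur metric in the spirit of Eqs. (5)-(6).
    ASSUMPTION: blur increases with normalized edge spread (sigma) and
    decreases with normalized edge contrast (grad_mag); eta = 0.5 is
    the equal-weight case of Eq. (6)."""
    return eta * (sigma / sigma_max) + (1.0 - eta) * (1.0 - grad_mag / grad_max)
```

Under this form, a sharp edge (small spread, high contrast) scores near 0 and a blurred edge scores near 1, matching the focused-plane behavior described in the text.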

### 3.3.2 Calculation of the mean blur metric

The mean blur metric (MBM) of each POI, as shown in Eq. (7), can be calculated by the following procedure. First, the blur metrics of all local maximum edge points in a POI are summed and divided by the number of local maximum edge points, giving the average blur metric per local maximum edge point, *β_{imean}*. Then, *β_{imean}* is divided by *α_{i}*, the ratio of the number of local maximum points per channel, *M_{i}*, to the number of pixels in a POI, *N*; that is, *α_{i}* represents the fraction of local maximum edge points per channel in a POI.

The mean blur metric of a POI, *β_{MBM}*, is then given by Eq. (7), where *N* is the number of pixels in a POI and *M_{i}* is the number of local maximum points per channel; it provides discrimination for depth detection of the target objects per POI rather than per point. From Eq. (7), the MBM of each POI can be calculated, and the results are depicted in Fig. 8. Figure 8 shows the variation of the MBM along the output plane; three points of inflection can be seen, which indicates that there are three potential objects. As noted above, the MBM is lowest at the focused point, where an object was originally located, and increases sharply moving away from that point.
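The inflection-point search described here amounts to locating planes where the MBM gradient changes from negative to positive, i.e., local minima of the MBM curve. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def inflection_depths(z_planes, mbm):
    """Return the z values of output planes where the MBM gradient
    changes from negative to positive (the paper's 'points of
    inflection'), i.e. local minima of the MBM curve."""
    mbm = np.asarray(mbm, float)
    grad = np.diff(mbm)                    # gradient between neighbouring planes
    depths = []
    for k in range(1, len(mbm) - 1):
        if grad[k - 1] < 0 and grad[k] > 0:   # negative -> positive: focused plane
            depths.append(z_planes[k])
    return depths
```

Applied to the MBM curve of Fig. 8 (planes every 3 mm), such a search would report the three dips as candidate object planes.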

Table 1 shows the location, MBM, and gradient values of the three points of inflection. For example, as shown in Fig. 8, the first point of inflection occurred at *z* = 30 *mm*, where the MBM value was found to be 30.47×10^{-4}, whereas it increased sharply to 54.57×10^{-4} and 50.17×10^{-4} on the neighboring planes of 27 and 33 *mm*, respectively. At the same time, the gradients of the MBM on the neighboring planes of 27 and 33 *mm* were calculated to be -8.03×10^{-4} and +6.56×10^{-4}, respectively. This change of the gradient of the MBM from a negative to a positive value finally confirmed that there is a point of inflection between them and that an object exists on the plane of *z* = 30 *mm*. Therefore, from Table 1, the output planes where 'Target 1', 'Target 2' and 'Target 3' were originally located were easily found to be *z* = 30, 45 and 60 *mm*, respectively.

In addition, once the longitudinal positions of the objects have been detected, their lateral coordinates can also be obtained through correlations between the reference object images and the corresponding POIs reconstructed on the planes of *z* = 30, 45 and 60 *mm*, where the objects were originally located. Figure 9 shows lateral profiles of the correlation results. From the correlation outputs of Figs. 9(a), 9(b) and 9(c), the NCC (normalized cross-correlation) values and lateral correlation positions for 'Target 1', 'Target 2' and 'Target 3' were found to be 0.8659, 0.5723 and 0.8087 and (291, 548), (599, 201) and (970, 520), respectively. Finally, based on the experimental results of Figs. 8 and 9, the 3-D location coordinates of the test objects in space were obtained as (291, 548, 30), (599, 201, 45) and (970, 520, 60) in the Cartesian coordinate system. These results are exactly the same as the original location coordinates of the test objects, which confirms that the proposed method provides good discrimination performance in detecting the depth and location coordinates of target objects in space.
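The lateral search is a standard normalized cross-correlation of each reference image with the focused POI. A brute-force zero-mean NCC sketch follows (names ours; no claim that this matches the authors' correlator exactly):

```python
import numpy as np

def ncc(ref, img):
    """Zero-mean normalized cross-correlation of a reference patch with
    every valid patch of img; returns the NCC map and its peak (row, col)."""
    rh, rw = ref.shape
    r = ref - ref.mean()
    rn = np.sqrt((r ** 2).sum())
    H, W = img.shape
    out = np.full((H - rh + 1, W - rw + 1), -1.0)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            p = img[y:y + rh, x:x + rw]
            p = p - p.mean()
            pn = np.sqrt((p ** 2).sum())
            if pn > 0 and rn > 0:
                out[y, x] = float((r * p).sum() / (rn * pn))
    peak = np.unravel_index(np.argmax(out), out.shape)
    return out, peak
```

For the 1,292 by 760 POIs of the paper, an FFT-based implementation would be preferred in practice; the explicit loop here only serves to make the definition concrete.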

Accordingly, even though no prior information about the output planes where the objects were originally located was given, the depth data of the objects can be accurately detected in the proposed method simply by estimating the MBM of the POIs. These successful experimental results show that the depth and 3-D location data of target objects in space can be effectively detected by the proposed blur metric-based scheme, and they confirm its feasibility for many practical applications such as machine vision, target recognition and tracking, and video surveillance.

#### 3.4 Depth extraction for the case of totally overlapped objects

So far, we have discussed the proposed depth extraction method for the case of totally non-overlapped objects along the *z*-direction (here called 'Case 1'). Here, we performed the same experiments for the extreme case of totally overlapped objects (here called 'Case 2') to confirm the versatility of the proposed scheme. As illustrated in Fig. 10, the test 3-D object consists of three 2-D objects, named 'Target 1', 'Target 2' and 'Target 3', which are located at *z_{1}* = 30 *mm*, *z_{2}* = 45 *mm* and *z_{3}* = 60 *mm* in front of the lenslet array, respectively, just like those of Fig. 3, except that they are totally overlapped along the *z*-direction. The employed lenslet array is identical to that of Fig. 3. The resolution of each of the three 2-D objects is again 430 by 360 pixels, but the center locations of the objects are set to (622, 375), (623, 378) and (625, 372), respectively, so that they are totally overlapped along the *z*-direction.

Table 2 shows the location, MBM, and gradient values of the three points of inflection for 'Case 2'. For example, as shown in Fig. 11, the second point of inflection occurred at *z* = 45 *mm*, where the MBM value was found to be 27.0×10^{-4}, whereas it gradually increased to 30.97×10^{-4} and 33.87×10^{-4} on the neighboring planes of 42 and 48 *mm*, respectively. At the same time, the gradients of the MBM on the neighboring planes of 42 and 48 *mm* were calculated to be -1.32×10^{-4} and +2.29×10^{-4}, respectively. This change of the gradient of the MBM from a negative to a positive value finally confirmed that there is a point of inflection between them and that an object exists on the plane of *z* = 45 *mm*. Therefore, from Table 2, the output planes where 'Target 1', 'Target 2' and 'Target 3' were originally located were found to be *z* = 30, 45 and 60 *mm*, respectively.

Moreover, through correlations between the reference object images and the corresponding POIs reconstructed on the planes of *z* = 30, 45 and 60 *mm*, the lateral positions of 'Target 1', 'Target 2' and 'Target 3' were found to be (622, 375), (623, 378) and (625, 372), respectively. Accordingly, the 3-D location coordinates of the test objects were finally obtained as (622, 375, 30), (623, 378, 45) and (625, 372, 60), respectively. These results are again identical to the original location coordinates of the test objects.

Comparing the results of 'Case 1' with those of 'Case 2' confirms that in both cases the three points of inflection occurred at the same planes of *z* = 30, 45 and 60 *mm* where the objects were originally located, so that the depth data of the three objects could be accurately detected by the proposed method whether or not they overlap along the output plane. However, there are some differences in the gradient and MBM values between the two extreme cases. First, the changes in gradient values around the inflection points are much larger in 'Case 1' than in 'Case 2'. This results from the relatively weaker interaction between the target objects in 'Case 1' compared with that in 'Case 2'. Second, the overall MBM values of 'Case 2' are lower than those of 'Case 1'. This is because *M_{i}*, the number of local maximum points per channel in a POI, is much smaller in 'Case 2' than in 'Case 1'.

Therefore, as target objects become increasingly overlapped along the *z*-direction, the discrimination performance of the points of inflection in the MBM curve may deteriorate slightly. Nevertheless, through successful depth extraction experiments for the two extreme cases of totally non-overlapped and totally overlapped objects, the feasibility of the proposed method for practical applications is validated.

#### 3.5 Depth extraction for the case of real objects

In addition, some optical experiments with a real 3-D object were also carried out in this paper to suggest the possibility of a practical implementation of the proposed method. In the experiment, a simple 3-D object was used, consisting of two 2-D pattern objects, named 'Object 1' and 'Object 2', located at *z_{o1}* = 30 *mm* and *z_{o2}* = 45 *mm* in front of the lenslet array, respectively. Here, a lenslet array with 34×20 lenslets was used, located at *z* = 0 *mm*. Each lenslet is 1.08 *mm* in size, the focal length of a lenslet is 3 *mm*, and a single elemental image is composed of 38×38 pixels.

Figure 12 shows the experimental setup for pickup of elemental images of the totally non-overlapped real objects, 'Object 1' and 'Object 2'. With the optical pickup system of Fig. 12, elemental images of the real objects were captured by use of the lenslet array and the CCD camera.

From these picked-up elemental images, POIs of the real objects were reconstructed by use of the CIIR algorithm. Then, the MBM of each POI was calculated; the results are illustrated in Fig. 13. Figure 13 shows the variation of the MBM along the output plane, and two points of inflection can be seen, just as in the computational cases of Figs. 8 and 11.

Table 3 shows the calculated location, MBM, and gradient values of the two points of inflection for the real objects. That is, from the change of the gradient of the MBM from a negative to a positive value, the output planes where 'Object 1' and 'Object 2' were originally located were found to be *z* = 30 and 45 *mm*, respectively. These results are identical to the original locations of the test objects, which finally confirms the feasibility of the proposed method even in practical implementations.

Moreover, through correlations between the reference object images and the corresponding POIs reconstructed on the planes of *z* = 30 and 45 *mm*, the lateral positions of 'Object 1' and 'Object 2' were found to be (430, 285) and (735, 308), respectively. Accordingly, the 3-D location coordinates of the real objects were finally obtained as (430, 285, 30) and (735, 308, 45), respectively.

## 4. Conclusion

In this paper, a novel approach to effectively extracting the depth data of 3-D objects using a blur metric was proposed. A set of POIs of the 3-D objects was reconstructed along the output plane using the CIIR algorithm, in which only the POIs reconstructed on the output planes where the 3-D objects were located are focused, whereas the others are blurred. Then, by estimating the blur metrics of the reconstructed POIs, the depth information of the 3-D objects was extracted. In addition, computational experiments were carried out for the two extreme cases of non-overlapped and totally overlapped test objects, and optical experiments with real objects were also performed. The successful results for all of these cases finally confirmed the feasibility of the proposed blur metric-based location coordinate extraction method.

## Acknowledgment

This research was supported by the MIC (Ministry of Information and Communication), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute of Information Technology Assessment) (IITA-2008-C1090-0801-0018).

## References and links

**1. **J.-I. Park and S. Inoue, “Acquisition of sharp depth map from multiple cameras,” Signal Processing: Image Commun. **14**, 7–19 (1998). [CrossRef]

**2. **J.-H. Ko and E.-S. Kim, “Stereoscopic video surveillance system for detection of Target’s 3D location coordinates and moving trajectories,” Opt. Commun. **191**, 100–107 (2006).

**3. **G. J. Iddan and G. Yahav, “Three-dimensional imaging in the studio and elsewhere,” Proc. SPIE **4298**, 48–55 (2000). [CrossRef]

**4. **J.-H. Lee, J.-H. Ko, K.-J. Lee, J.-H. Jang, and E.-S. Kim, “Implementation of stereo camera-based automatic unmanned ground vehicle system for adaptive target detection,” Proc. SPIE **5608**, 188–197 (2004). [CrossRef]

**5. **J.-H. Park, S. Jung, H. Choi, Y. Kim, and B. Lee, “Depth extraction by use of a rectangular lens array and one-dimensional elemental image modification,” Appl. Opt. **43**, 4882–4895 (2004). [CrossRef] [PubMed]

**6. **J.-H. Park, Y. Kim, J. Kim, S.-W. Min, and B. Lee, “Three-dimensional display scheme based on integral imaging with three-dimensional information processing,” Opt. Express **12**, 6020–6032 (2004). [CrossRef] [PubMed]

**7. **S.-W. Min, B. Javidi, and B. Lee, “Enhanced three-dimensional integral imaging system by use of double display devices,” Appl. Opt. **42**, 4186–4195 (2003). [CrossRef] [PubMed]

**8. **B. Javidi, R. Ponce-Díaz, and S.-H. Hong, “Three-dimensional recognition of occluded objects by using computational integral imaging,” Opt. Lett. **31**, 1106–1108 (2006). [CrossRef] [PubMed]

**9. **J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Resolution-enhanced three-dimensional image correlator using computationally reconstructed integral images,” Opt. Commun. **276**, 72–79 (2007). [CrossRef]

**10. **H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. **26**, 157–159 (2001). [CrossRef]

**11. **S. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express **12**, 483–491 (2004). [CrossRef] [PubMed]

**12. **D.-H. Shin, E.-S. Kim, and B. Lee, “Computational reconstruction technique of three-dimensional object in integral imaging using a Lenslet Array,” Jpn. J. Appl. Phys. **44**, 8016–8018 (2005). [CrossRef]

**13. **D.-H. Shin, M. Cho, K.-C. Park, and E.-S. Kim, “Computational technique of volumetric object reconstruction in integral imaging by use of real and virtual image fields,” ETRI J. **27**, 208–712 (2005). [CrossRef]

**14. **P. Marziliano, F. Dufaux, S. Winkler, and T. Ebrahimi, “A no-reference perceptual blur metric,” in *Proceedings of the International Conference on Image Processing* **3**, 57–60 (2002).

**15. **Y. C. Chung, J. M. Wang, R. R. Bailey, and S. W. Chen, “A non-parametric blur measure based on edge analysis for image processing applications,” in *the Proceedings of IEEE Conference on Cybernetics and Intelligent Systems* (IEEE, 2004), 1, pp. 356–360.

**16. **R. Youmaran and A. Adler, “Using red-eye to improve face detection in low quality video image,” *IEEE Canadian Conference on Electrical and Computer Engineering* (IEEE, 2006), pp. 1940–1943.

**17. **Z. Rahman, D. J. Jobson, and G. A. Woodell, “A multiscale retinex for color rendition and dynamic range compression,” NASA Langley Technical Report (1996).