
Real-time image stabilization for arbitrary motion-blurred images based on an opto-electronic hybrid joint transform correlator


Abstract

An efficient approach is put forward to achieve real-time image stabilization based on opto-electronic hybrid processing, by which the image motion vector can be effectively detected and the point spread function (PSF) accurately modeled in real time, greatly reducing the complexity of the image restoration algorithm. The approach applies to arbitrarily motion-blurred images. We have also constructed an image stabilization measurement system. The experimental results show that the proposed method is both fast and effective.

©2011 Optical Society of America

1. Introduction

High-resolution imaging systems are of interest for a wide variety of applications, particularly remote sensing, airborne reconnaissance, and aerial photography. However, image resolution is always limited by image motion [1–3] induced by the flying aircraft, various vibrations, and varying attitude, which are often the major factors of image degradation.

In order to compensate image motion and keep the image stable, an image stabilizer is required to improve image quality. Generally, image stabilizers can be divided into two types: digital image stabilizers (DIS) and optical image stabilizers (OIS) [4]. A DIS reduces image blur by using an image-restoration filter or algorithm [5–7], such as an inverse filter, a Wiener filter, or blind image deconvolution [8,9]. This approach extracts image motion vectors and acquires the PSF from the blurred images without the need for external measurement devices such as gyroscopes or accelerometers [1]. However, a DIS requires additional buffer memory for image processing and takes a long time to measure and correct the image, so its real-time performance on high-resolution images is poor. Moreover, for an image-restoration filter, knowledge of the point spread function (PSF) is often required in advance, yet directly sensing the PSF and then calculating the modulation transfer function (MTF) are complex and inaccurate in most applications. Numerous restoration algorithms have been reported that derive the PSF from the blurred image itself or from an image sequence, but the accuracy of the estimated PSF is limited by the complexity and randomness of the various motions. Another limitation of these algorithms is their dependence on image content: if the images do not contain well-defined edges, an accurate estimation of the motion function can be difficult. The compensation accuracy of a DIS is therefore limited. Blind image deconvolution needs no prior knowledge of the PSF; unfortunately, this iterative algorithm requires an initial guess of the PSF, and the accuracy of the estimated PSF and the quality of the restored image depend entirely on this guess.

An OIS moves the lens system or the sensor [3,10] to compensate image motion: moving a lens (or rotating a mirror) or the sensor changes the optical path so as to keep the image stable. However, this approach needs a complicated optical-mechanical-electrical system that measures motion changes with gyroscopes or accelerometers, which makes the stabilizer complex, costly, and of limited precision owing to friction and wind resistance. Moreover, the real-time performance of the OIS method is also poor.

In this paper, for the above reasons, we put forward an efficient approach to achieve real-time image stabilization by opto-electronic hybrid processing based on an opto-electronic hybrid joint transform correlator (JTC) [11–15]. First, the image velocity vector is detected from the image sequence captured by a high-speed CCD using the JTC. An accurate PSF model is then constructed from the velocity vector, which greatly reduces the complexity of the image restoration algorithm. Finally, an appropriate algorithm is applied to restore the blurred image rapidly. We have also constructed an image stabilization measurement system. This paper describes the principle of the proposed approach and presents some preliminary experimental results.

2. Theory

2.1 Principle of image stabilization

In most cases, assuming that the imaging system is linear and spatially shift-invariant, the motion-blurred image g(x,y) can be represented as

$$g(x,y) = h(x,y) \ast f(x,y) + n(x,y) \qquad (1)$$

where $\ast$ denotes the two-dimensional convolution operation, h(x,y) refers to the PSF of the degradation process, f(x,y) represents the original object image, and n(x,y) is additive noise. The goal of image restoration is then to model h(x,y) accurately. Supposing n(x,y) can be ignored, Eq. (1) can be rewritten as

$$g(x,y) = h(x,y) \ast f(x,y) \qquad (2)$$

According to previous work by N. S. Kopeika et al., h(x,y) can also be defined as [16]

$$h(x,y) = \sum_{i=1}^{n} \frac{1}{T}\, v_i(x,y) \qquad (3)$$

where T denotes the integration period, vi(x,y) represents the velocity vector of the images captured by the high-speed CCD camera, and n is the number of images captured during an integration period. The goal of image stabilization therefore turns to how to detect the velocity vector vi(x,y) accurately.
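As an illustration of Eq. (3), the following Python sketch accumulates JTC-measured velocity samples into a discrete blur kernel. It is a minimal numerical reading of the model rather than the authors' implementation: the function name, the kernel size, and the equal per-sample weighting are our assumptions.

```python
import numpy as np

def psf_from_velocity(vel_samples, dt, size=15):
    """Build a discrete blur kernel from sampled image-velocity vectors.

    vel_samples : (n, 2) array of per-frame velocities (pixels/ms), e.g.
                  as measured by the JTC between consecutive fast frames.
    dt          : sampling interval in ms (high-speed CCD frame period).
    size        : side length of the square kernel (odd); an assumption.
    """
    # Integrate the velocities into the image-motion trajectory over the exposure.
    traj = np.cumsum(np.asarray(vel_samples, dtype=float) * dt, axis=0)
    traj -= traj.mean(axis=0)            # center the kernel support
    h = np.zeros((size, size))
    c = size // 2
    # Deposit equal weight at each sampled position (the 1/T factor of Eq. (3)),
    # then normalize so the kernel integrates to one.
    for x, y in traj:
        ix, iy = int(round(c + x)), int(round(c + y))
        if 0 <= ix < size and 0 <= iy < size:
            h[iy, ix] += 1.0
    return h / h.sum()

# Example: uniform linear motion, 0.5 px/ms sampled 13 times every 1.57 ms.
h = psf_from_velocity(np.tile([0.5, 0.0], (13, 1)), dt=1.57)
```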

2.2 Motion vector detection and image restoration algorithms

Figure 1 shows the schematic diagram of the opto-electronic hybrid JTC for implementing image stabilization. During an integration period, adjacent images fi(x, y−a) and fi+1(x+Δx, y+a+Δy) of the sequence captured by a high-speed CCD camera are displayed side by side on SLM1, placed in the front focal plane of the Fourier-transforming lens2. Here the current frame fi(x, y−a) is taken as the reference image r(x, y) and the next frame fi+1(x+Δx, y+a+Δy) as the object image t(x, y); a denotes the image displacement in the y direction, and Δx and Δy are the motion displacements of fi+1(x+Δx, y+a+Δy) relative to fi(x, y−a) in the x and y directions, respectively.

Fig. 1 Schematic diagram of opto-electronic hybrid JTC for real-time image stabilization.

By illuminating SLM1 with a collimated laser beam, the displayed joint images are Fourier transformed by lens2, and a joint power spectrum (JPS) is generated at the back focal plane of lens2. The JPS captured by CCD1 can be expressed mathematically as

$$|S(u,v)|^2 = |R(u,v)|^2 + |T(u,v)|^2 + R(u,v)T^*(u,v)\exp[-2i\pi u\Delta x - 2i\pi v(2a+\Delta y)] + T(u,v)R^*(u,v)\exp[2i\pi u\Delta x + 2i\pi v(2a+\Delta y)] \qquad (4)$$
where CCD1 is a square-law detector that records the JPS; S(u,v), R(u,v), and T(u,v) represent the Fourier transforms of the joint input image s(x, y), the reference image r(x, y), and the object image t(x, y), respectively; and (u,v) are the spatial frequency coordinates in the x and y directions at the Fourier plane. They are related to the actual coordinates (x,y) by x = λfu and y = λfv, where λ and f stand for the operating wavelength of the laser and the focal length of lens2, respectively. By writing the JPS onto SLM2 as the transmittance signal, the correlation output c(x,y) is produced by the Fourier-transforming lens3 as follows
$$c(x,y) = r(x,y)\otimes r(x,y) + t(x,y)\otimes t(x,y) + r(x,y)\otimes t(x,y)\times\delta(x+\Delta x,\, y+2a+\Delta y) + t(x,y)\otimes r(x,y)\times\delta(x-\Delta x,\, y-2a-\Delta y) \qquad (5)$$
where ⊗ denotes the correlation operation. The cross-correlations, the third and fourth terms of Eq. (5), contain the position information of the cross-correlation peaks. After processing by a digital signal processor (DSP), we can obtain the relative displacement PiPi+1(x,y) between the current frame and the next frame; the image motion vector vi(x,y) is the time derivative of the relative displacement PiPi+1(x,y) and can be given as
$$v_i(x,y) = \bigl(P_iP_{i+1}(x,y)\bigr)'; \qquad P_iP_{i+1}(x,y) = p_{i+1}(x,y) - p_i(x,y) \qquad (6)$$
where pi(x, y) and pi+1(x, y) denote the coordinate positions of the two cross-correlation peaks and PiPi+1(x,y) represents the displacement vector. By continually replacing the reference image r(x, y) with the object image t(x, y), a series of relative image displacements and image motion vectors can be acquired. The PSF can then be modeled accurately according to Eq. (3).
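To make Eqs. (4)–(6) concrete, the optical train of Fig. 1 can be emulated digitally: two frames are placed on a joint input plane, the squared Fourier modulus plays the role of the JPS recorded by CCD1, and a second transform yields the correlation plane of Eq. (5). The sketch below is a numerical stand-in under our own assumptions (function name, zero-order masking strategy, sign conventions), not the authors' DSP code; signs should be calibrated with a known shift before use.

```python
import numpy as np

def jtc_displacement(ref, obj, a):
    """Recover the (dx, dy) shift of obj relative to ref from the position
    of a cross-correlation peak, as in Eq. (5).

    ref, obj : equally sized grayscale frames f_i and f_{i+1}, shape (h, w).
    a        : half-separation of the frames along y on the joint input
               plane, in pixels; assumed a >= h so the cross peaks clear
               the zero-order terms.
    """
    h, w = ref.shape
    H, W = 2 * (2 * a + h), 2 * w                      # joint input plane
    cy, cx = H // 2, W // 2
    x0 = cx - w // 2
    s = np.zeros((H, W))
    s[cy - a - h // 2 : cy - a - h // 2 + h, x0 : x0 + w] = ref  # at y = -a
    s[cy + a - h // 2 : cy + a - h // 2 + h, x0 : x0 + w] = obj  # at y = +a
    jps = np.abs(np.fft.fft2(s)) ** 2                  # CCD1: square-law JPS
    corr = np.abs(np.fft.fftshift(np.fft.ifft2(jps)))  # correlation plane
    corr[: cy + a + h // 2, :] = 0                     # keep only +1 order peak
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    # Peak sits at (cx + dx, cy + 2a + dy); signs depend on the FFT convention.
    return px - cx, (py - cy) - 2 * a
```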

An appropriate algorithm can be utilized to restore the blurred image once the PSF is known. Here we utilized the Richardson–Lucy (RL) algorithm [6], a nonlinear iterative deconvolution technique for deblurring an image when the PSF is known in advance. The RL iteration can be written as follows

$$f_{i+1}(x,y) = f_i(x,y)\left[h_i(x,y) \otimes \frac{g(x,y)}{h_i(x,y) \ast f_i(x,y)}\right] \qquad (7)$$
where fi(x, y) is the restored image after i iterations, ∗ is the convolution operation, and ⊗ is the correlation operation.
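As a reference point, Eq. (7) transcribes directly into a few lines of Python. This is a minimal sketch: the flat initial estimate, the iteration count, and the eps regularizer are our choices, and a library routine such as skimage.restoration.richardson_lucy could be used instead.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(g, h, n_iter=20, eps=1e-12):
    """Richardson-Lucy deconvolution per Eq. (7): convolve with h, divide,
    correlate with h (convolution with the flipped kernel), multiply.

    g : blurred image, h : PSF normalized to unit sum, n_iter : iterations.
    """
    g = np.asarray(g, dtype=float)
    f = np.full(g.shape, g.mean())        # flat initial estimate
    h_flip = h[::-1, ::-1]                # correlation = conv with flipped h
    for _ in range(n_iter):
        est = fftconvolve(f, h, mode="same")                      # h * f_i
        f = f * fftconvolve(g / (est + eps), h_flip, mode="same") # h (.) ratio
    return f
```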

3. Experimental results and discussions

We developed a real-time image stabilization system based on the opto-electronic hybrid joint transform correlator to verify the effectiveness of the approach. The experimental set-up is shown in Fig. 2. It consists of a primary CCD camera with a resolution of 1028 × 1024 pixels, an exposure period of 20 ms, and a frame rate of 1 fps; three high-speed CCD cameras with 1280 × 1024 pixels, a frame rate of 636 fps, and a pixel size of 12 × 12 µm; and two ferroelectric liquid crystal SLMs (FLCSLM) with a resolution of 512 × 512 pixels, a frame rate of 1015 fps, and a pixel size of 7.68 × 7.68 µm. The system also incorporates a 532 nm laser, two Fourier lenses with a focal length of 300 mm, and a digital signal processor. To simulate the relative motion, for simplicity, we assume the primary CCD camera and the high-speed CCD camera are kept still while the object moves with uniform linear motion driven by a motor; the experimental measurement system is schematically illustrated in Fig. 3. The moving object is actually a newspaper.

Fig. 2 Experimental set-up (partial view).

Fig. 3 Experimental measurement system.

Actually, in order to lower the resolution requirement on the SLM, the input image is a sub-image extracted from the complete image captured by the high-speed CCD. A further advantage of employing a sub-image is that it contains less noise than the complete image.

A critical task for a JTC is to detect the position of the cross-correlation peak. However, the classical JTC has several drawbacks: it is sensitive to geometric distortions and noise in the input scene, it contains a strong zero-order peak, and its discrimination ability is low. Several approaches have been reported to overcome these difficulties [14,15]. Here we employed the wavelet transform (WT) to process the joint transform power spectrum (JTPS).

A mother wavelet Φ(x, y) is a finite-duration window function that generates daughter wavelets through dilations (ax, ay) and shifts (bx, by), given as

$$\Phi_{a_x,a_y}(x,y) = \frac{1}{\sqrt{a_x a_y}}\,\Phi\!\left(\frac{x-b_x}{a_x},\, \frac{y-b_y}{a_y}\right) \qquad (8)$$
The mother wavelet must satisfy the admissibility conditions: it must be oscillatory, decay rapidly to zero, and integrate to zero. The WT is defined as the inner product between an analyzed signal f(x, y) and the daughter wavelets Φax,ay as
$$W_f(a_x,a_y;b_x,b_y) = \frac{1}{\sqrt{a_x a_y}}\iint f(x,y)\,\Phi^*\!\left(\frac{x-b_x}{a_x},\, \frac{y-b_y}{a_y}\right)dx\,dy \qquad (9)$$
where * denotes the complex conjugate. By dilating the wavelet, the WT provides a multiresolution decomposition of the signal with good spatial resolution at high frequency and good frequency resolution at low frequency. Therefore, the WT can localize particular features of signals being analyzed.

In our work, the two-dimensional Mexican hat wavelet, the second derivative of the Gaussian function, is adopted owing to its ability to extract edges of equal width regardless of the size or orientation of the input pattern. It is given as

$$\Phi(x,y) = \bigl[1-(x^2+y^2)\bigr]\exp\!\left(-\frac{x^2+y^2}{2}\right) \qquad (10)$$
The Fourier transform of the Mexican hat is represented as
$$\Psi(\omega_x,\omega_y) = 4\pi^2(\omega_x^2+\omega_y^2)\exp\!\bigl[-2\pi^2(\omega_x^2+\omega_y^2)\bigr] \qquad (11)$$
where (ωx, ωy) are the spatial frequency coordinates in the x and y directions, respectively.

The WT can effectively suppress the noise in the JPS, increase the energy of the ±1-order diffracted light, and thus enhance the detection ability. A comparison of the cross-correlation output is shown in Fig. 4: before processing, the cross-correlation peak can hardly be detected, whereas after WT processing an obvious output peak is obtained. The cross-correlation peak intensity is presented in Fig. 5; the normalized correlation intensity reaches 0.5.
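A minimal sketch of such a WT filtering step follows, assuming the wavelet is applied as a multiplicative band-pass on the JPS in the Fourier domain via Eq. (11); the paper does not specify the dilation used, so the scale parameter below is an illustrative assumption to be tuned onto the interference fringes that carry the displacement.

```python
import numpy as np

def mexican_hat_filter(jps, scale=8.0):
    """Band-pass the joint power spectrum with a Mexican hat wavelet.

    jps   : recorded joint power spectrum (2-D array).
    scale : wavelet dilation (assumed); larger values pass lower
            fringe frequencies.
    """
    H, W = jps.shape
    wy = np.fft.fftfreq(H)[:, None] * scale   # dilated frequency grid
    wx = np.fft.fftfreq(W)[None, :] * scale
    r2 = wx**2 + wy**2
    psi = 4 * np.pi**2 * r2 * np.exp(-2 * np.pi**2 * r2)   # Eq. (11)
    # Zero response at DC suppresses the strong zero-order background.
    return np.real(np.fft.ifft2(np.fft.fft2(jps) * psi))
```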

Fig. 4 Comparison of the correlation output: (a) before processing, (b) after processing.

Fig. 5 Cross-correlation peak intensity.

The real-time image stabilization process proceeds as follows. First, during an exposure period the high-speed CCD camera captures the image sequence, which is sent to the JTC and processed by the DSP to obtain h(x, y). Second, the single blurred image captured by the primary CCD camera is conveyed to the DSP and restored by the RL algorithm. Finally, real-time high-resolution images are acquired without delay. Figure 6 shows the real-time image stabilization results at different moving velocities.

Fig. 6 Experimental results and comparisons: (a) original image at v = 2.5 µm/ms; (b) stabilized image at v = 2.5 µm/ms; (c) original image at v = 12.5 µm/ms; (d) stabilized image at v = 12.5 µm/ms.

The time for the high-speed CCD camera to capture one image is 1.57 ms (frame rate 636 fps), and the time for the SLM to be updated twice per image is approximately 0.98 × 2 = 1.96 ms (frame rate 1015 fps). By test and calculation, detecting and processing each JPS on CCD1 costs about 11.7 ms, and each cross-correlation output on CCD2 takes about 9.3 ms. Moreover, the RL restoration algorithm requires approximately 24 ms. The Fourier transforms are performed at the speed of light, so the Fourier-transforming time can be neglected. Since the exposure period of the primary CCD camera is 20 ms, the high-speed CCD camera captures 13 images during one exposure; the complete image stabilization process by the JTC therefore needs approximately 0.37 s, which is less than 1 s (1 fps for the primary CCD camera), so the real-time performance is strong.
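For transparency, this budget can be re-tallied from the quoted figures. The grouping below (one frame capture, one SLM write pair, one CCD1 readout, and one CCD2 readout per captured frame, followed by one RL restoration) is our assumption about how the stages chain; it lands in the same few-tenths-of-a-second range as the 0.37 s quoted above.

```python
# Timing sketch from the figures quoted in the text (all times in ms).
frame_fast   = 1000 / 636          # high-speed CCD frame time, ~1.57 ms
slm_updates  = 2 * 1000 / 1015     # two SLM writes per frame, ~1.97 ms
jps_readout  = 11.7                # CCD1: detect and process one JPS
corr_readout = 9.3                 # CCD2: one cross-correlation output
rl_restore   = 24.0                # RL restoration of the blurred frame

n_frames = round(20 / frame_fast)  # 13 frames per 20 ms exposure
per_frame = frame_fast + slm_updates + jps_readout + corr_readout
total_ms = n_frames * per_frame + rl_restore
print(n_frames, round(total_ms))   # -> 13 frames, ~343 ms (order of 0.37 s)
```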

4. Conclusions

An effective method for real-time image stabilization based on an opto-electronic hybrid joint transform correlator was proposed. The velocity vector can be detected rapidly by the JTC; meanwhile, an accurate PSF for the degraded image is modeled instantaneously by the DSP, and a simple, appropriate algorithm is employed to restore the blurred image. A JTC measurement system was developed to verify the effectiveness of the approach, and the experimental results show that the proposed method is both fast and effective. The approach applies to arbitrarily motion-blurred images. With higher frame rates for the CCD and SLM, e.g., thousands of fps, both the real-time performance and the restoration accuracy would improve further.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 60702078) and the Zhejiang Province Science and Technology Foundation (No. 2010C33162).

References and links

1. Y. Yitzhaky, I. Mor, A. Lantzman, and N. S. Kopeika, "Direct method for restoration of motion-blurred images," J. Opt. Soc. Am. A 15(6), 1512–1519 (1998).

2. G. Hochman, Y. Yitzhaky, N. S. Kopeika, Y. Lauber, M. Citroen, and A. Stern, "Restoration of images captured by a staggered time delay and integration camera in the presence of mechanical vibrations," Appl. Opt. 43(22), 4345–4354 (2004).

3. B. Golik and D. Wueller, "Measurement method for image stabilizing systems," Proc. SPIE 6502, 65020O (2007).

4. H. Choi, J.-P. Kim, M.-G. Song, W. C. Kim, N. C. Park, Y. P. Park, and K. S. Park, "Effects of motion of an imaging system and optical image stabilizer on the modulation transfer function," Opt. Express 16(25), 21132–21141 (2008).

5. B. Likhterov and N. S. Kopeika, "Motion-blurred image restoration using modified inverse all-pole filters," J. Electron. Imaging 13(2), 257–263 (2004).

6. S. Prasad, "Statistical-information-based performance criteria for Richardson-Lucy image deblurring," J. Opt. Soc. Am. A 19(7), 1286–1296 (2002).

7. Y. Tian, C. Rao, L. Zhu, and X. Rao, "Modified self-deconvolution restoration algorithm for adaptive-optics solar images," Opt. Lett. 35(15), 2541–2543 (2010).

8. V. Loyev and Y. Yitzhaky, "Initialization of iterative parametric algorithms for blind deconvolution of motion-blurred images," Appl. Opt. 45(11), 2444–2452 (2006).

9. J. Zhang, Q. Zhang, and G. He, "Blind deconvolution of a noisy degraded image," Appl. Opt. 48(12), 2350–2355 (2009).

10. C. W. Chiu, P. C.-P. Chao, and D. Y. Wu, "Optimal design of magnetically actuated optical image stabilizer mechanism for cameras in mobile phones via genetic algorithm," IEEE Trans. Magn. 43(6), 2582–2584 (2007).

11. J. F. Barrera, C. Vargas, M. Tebaldi, R. Torroba, and N. Bolognini, "Known-plaintext attack on a joint transform correlator encrypting system," Opt. Lett. 35(21), 3553–3555 (2010).

12. H. T. Chang and C. T. T. Chen, "Enhanced optical image verification based on Joint Transform Correlator adopting Fourier hologram," Opt. Rev. 11(3), 165–169 (2004).

13. A. R. Alsamman, "Spatially efficient reference phase-encrypted joint transform correlator," Appl. Opt. 49(10), B104–B110 (2010).

14. J. A. Butt and T. D. Wilkinson, "Binary phase only filters for rotation and scale invariant pattern recognition with the joint transform correlator," Opt. Commun. 262(1), 17–26 (2006).

15. J. Widjaja, "Wavelet filter for improving detection performance of compression-based joint transform correlator," Appl. Opt. 49(30), 5768–5776 (2010).

16. O. Hadar, I. Dror, and N. S. Kopeika, "Numerical calculation of image motion and vibration modulation transfer functions - a new method," Proc. SPIE 1533, 61–74 (1991).
