Abstract

We developed a novel real-time motion blur compensation system for the blur caused by high-speed one-dimensional motion between a camera and a target. The system consists of a galvanometer mirror and a high-speed color camera, without the need for any additional sensors. We controlled the galvanometer mirror with a continuous back-and-forth oscillating motion synchronized to the high-speed camera. The angular speed of the mirror is determined in real time, within 10 ms, based on the concepts of background tracking and rapid raw Bayer block matching. Experiments demonstrated that our system captures motion-invariant images of objects moving at speeds up to 30 km/h.

© 2015 Optical Society of America

1. Introduction

To perform visual inspection of extremely large targets, such as walls, surfaces of structures, roads, and assembly lines, efficiently in terms of both time and cost, real-time inspection systems must have a simple construction and be capable of operating at high speed. However, high-speed motion degrades image quality owing to motion blur, and sometimes results in lost frames. For example, tunnels on highways have a comparatively high risk of deterioration owing to their structure, and it is difficult to enforce the frequent traffic restrictions needed for their inspection. There is therefore an increasing demand for systems that can monitor tunnel surfaces from a moving vehicle. In particular, as a substitute for human visual inspection, high-quality images of tunnel surfaces are necessary for accurately judging faults such as cracks and stains in the structures. However, there is a trade-off between efficiency and precision: high-resolution pictures suffer easily from motion blur, so the faster the system moves, the more the image quality deteriorates. In vehicle-mounted infrastructure inspection systems, intense illumination is used to compensate for motion blur and achieve fine spatial resolution; however, such illumination might cause other drivers to have accidents. More generally, intense light may damage the surface of the target, and hence lower illumination is required. For example, inspection of products on a conveyor belt needs to be efficient, yet some products might be damaged by intense illumination. Hence, a method that compensates for motion blur without using intense illumination is required.

Many methods have been proposed to compensate for such motion blur, and they can be categorized as those that avoid motion blur at capture time [1–3, 9–11], in which the sensor or system is made to follow the moving object, and those that restore the captured image in post-processing [4–8]. Although considerable research effort in the computational imaging community has been focused on the latter category, the method proposed in this paper belongs to the former. The two categories can be compared in many ways, but in general the former is more powerful, because avoiding blur in the first place preserves information that post-capture removal cannot fully recover.

As a method of the former category, time delayed integration (TDI) virtually extends the exposure time [1]; however, this extension is limited because, as the relative speed between the camera and the target increases, the exposure time at each stage of the TDI drops, and more stages are required. TDI sensor costs become very high as the number of stages increases, so this approach is limited when applied to efficient practical systems. In addition, the TDI method requires precise encoder information. Another method in the former category is optical image stabilization (OIS), which is effective for compensating for motion blur caused by hand shake [2,3]; however, OIS has low accuracy, and a built-in gyro sensor or acceleration sensor is needed to control the actuator.

Although additional sensors can help to reduce degradation, the latter category includes motion blur rectification methods that do not require any additional sensors, for example, blind deconvolution [4, 5]. However, the usual blind deconvolution methods estimate the point spread function off-line and are therefore not suitable for real-time applications. Additionally, blind deconvolution is known to be an NP-hard problem, so without additional information both its accuracy and its speed are poor. Unlike blind deconvolution, deconvolution with a known motion vector simplifies the software processing [6–8]. Levin et al. handle a varying motion vector within one exposure by rotating the camera itself during the exposure [6]. Their method covers arbitrary one-dimensional motion comprehensively; however, their hardware is not designed for high-speed motion, and the deconvolution is performed off-line. Raskar et al.'s flutter shutter makes the motion vector easy to recover [7]; however, the coded exposure limits the usable exposure time, and, because the processing is off-line, real-time application is not supported. In contrast, Qian et al. developed a real-time deconvolution method [8]; however, its operating rate of 1 Hz is too slow to capture all views continuously in the case of high-speed motion. Moreover, since deconvolution rectifies the image after motion blur has occurred, high-spatial-frequency information is inevitably lost. Finally, all of these deconvolution methods need additional hardware: the software becomes simpler than blind deconvolution, but the simplicity is lost in the hardware setup.

To satisfy the requirements for speed and simplicity, we adopted the concept of active vision [9]. This concept roughly belongs to the former category, although its purpose is not to compensate for motion blur. With active vision, dynamic image acquisition becomes possible if gaze control keeps the subject at the center of the acquired image. However, conventional active vision systems are limited in how fast the optical gaze direction can be moved, since the weight of the camera prevents rapid motion when the camera itself is driven by an actuator [10]. To solve this problem, Okumura et al. proposed using a two-axis galvanometer mirror to control the optical gaze of a camera at high speed [11] and achieved high-speed gaze control for general target tracking. In their system, however, the optical gaze follows the center point of the target, and hence the response time causes motion blur when the target moves at high speed. Active vision is suitable for tracking a single target continuously; in one-dimensional motion (e.g., on roads, rails, conveyors, and so on), by contrast, the target is updated as the relative position between the camera and the scene changes. The camera must therefore capture these successive targets one after another so as not to miss frames, and we apply the active vision concept to the successively updated local targets within one large target. Active vision systems mimic the active motion of the human eye when tracking moving objects, whereas our system is modeled on the human vestibulo-ocular reflex [12] and on pigeon head-bobbing during walking [13]. These body mechanisms compensate for motion blur, and since we found that they can be adopted effectively in a vision system, we developed a motion blur compensation system that prolongs the exposure time.
Additionally, unlike active vision, in which the motion vector varies within one image, here we can assume that the motion blur is invariant, since the target is large; thus the real-time capability is high enough to keep up with the relative speed between the camera and the target. We call this novel concept background tracking.

In this paper, we propose a real-time motion blur compensation system with optical gaze control using a galvanometer mirror. In our system, we use a lightweight galvanometer mirror for gaze control [11], allowing fast mirror rotation for capturing the next target. Additionally, we employ a back-and-forth oscillating motion of the galvanometer mirror to achieve quick motion that can rapidly respond to changes in the speed of the target. This back-and-forth motion is realized by applying a sinusoidal driving pattern, and the exposure timing is synchronized with a particular angle so that the rotation is considered to be linear.

To compensate for motion blur, we estimate a one-dimensional motion vector that is used to set the angular speed of the galvanometer mirror. The relative angular speed between the camera and the target is determined by using a raw Bayer block matching method, which reduces computational costs and makes the system suitable for real-time applications.

2. Principle of the real-time motion blur compensation system

2.1. Simplifying the problem

To realize a real-time high-speed motion blur compensation system without additional sensors, we need to simplify the problem. First, since we use a high-speed camera as the imaging device, the motion vector can be assumed to be invariant within one exposure. The speed between the target and the camera may still change between exposures; in that case, the short updating cycle of the system updates the angular speed of the galvanometer mirror. A high-speed camera has the advantage that its high frame rate allows us to update the angular speed of the galvanometer mirror quickly after capturing an image. Furthermore, in high-speed motion, the relative speed vr between the camera and the target does not change greatly within a short time because of inertia; in industrial inspection especially, systems are heavy and move fast for efficiency, so their inertia is high. For inspection applications, we assume motion along a one-dimensional path (e.g., on roads, rails, or conveyors). In addition, if the surface of the target in practical high-precision inspection can be considered planar (e.g., walls in construction, long vehicles, etc.), then we can assume that the distance l between the target and the camera is also invariant. The surface may still have three-dimensional texture, as long as the differences in depth do not cause motion blur; in particular, when l is long compared with the depth variations on the surface, the surface can be regarded as planar.

2.2. Back-and-forth motion control of the galvanometer mirror

2.2.1. Concept

Figure 1 illustrates the concept of back-and-forth control of the galvanometer mirror. To compensate for motion blur, we control the galvanometer mirror in front of the camera to point in the direction of vr with angular speed ωm. From the viewpoint of the camera, vr is expressed as a relative angular speed ωr between the camera and the target during an extremely short time, since we assume that l is long compared with the width of the camera's viewing field, sw, which is determined by the camera's angle of view α. In Fig. 1, l is drawn shorter than the actual distance to save space; in practice, l is long in actual visual inspection situations, especially in remote sensing applications. Hence, we can substitute ωr for ωm. The mirror follows the moving target with constant ωm from t1 to t3. If ωr and ωm are equivalent during the exposure time tex, the optical gaze stays at the same position, and hence the acquired image will not include motion blur. Because the galvanometer mirror is comparatively light, this system allows a high degree of controllability of the gaze direction, which makes motion blur compensation straightforward; namely, this method lets us extend the exposure time. After image acquisition, the mirror returns to its original angle for the next shot (e.g., at t4). This back-and-forth motion is performed repeatedly as ωm is updated. ωr is calculated by a block matching method from the transfer distance xd between the previously acquired image and the current one, which have an overlapping area. We explain the block matching method in more detail in Sec. 2.3.
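For intuition about the magnitudes involved, the relative angular speed can be approximated as ωr ≈ vr / l when sw ≪ l. A minimal sketch (function name and rounding are ours), using the vr = 30 km/h and l = 3.0 m values of the later experiments:

```python
import math

def relative_angular_speed(v_r_kmh: float, l_m: float) -> float:
    """Small-angle estimate (deg/s) of the relative angular speed seen
    by the camera for a target moving at v_r past distance l."""
    v = v_r_kmh / 3.6             # km/h -> m/s
    return math.degrees(v / l_m)  # omega_r ~ v_r / l for small angles

# vr = 30 km/h, l = 3.0 m (values used in the experiments):
print(round(relative_angular_speed(30.0, 3.0), 1))  # 159.2 deg/s
```

This is the angular rate the mirror drive must sustain during the exposure in the fastest case considered here.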

Fig. 1 Concept of back-and-forth motion control of the galvanometer mirror.

2.2.2. Method of acquiring the relative angular speed ωr

To control the galvanometer mirror, we need to compute ωr from xd. From Fig. 1, we can write

\[ \frac{s_w}{2l} = \tan\frac{\alpha}{2}, \tag{1} \]
\[ \frac{x_d}{2l} = \tan\frac{\omega_r}{2}. \tag{2} \]

Without any additional sensors, l is an unknown parameter; however, by rearranging Eqs. (1) and (2) to eliminate l and solving for ωr, we obtain

\[ \omega_r = 2\tan^{-1}\!\left(\frac{x_d}{s_w}\tan\frac{\alpha}{2}\right). \tag{3} \]

Thus, if the target is planar, ωr can be computed from two successive images, without using l or any additional sensors (e.g., distance sensors). This contributes to the simplicity of the system.
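The relation of Eq. (3) can be evaluated numerically without l. A minimal sketch (function name ours), using the experimental parameters α = 4.5° and sw = 2336 pixels from Sec. 3.1 and the xd = 346 pixels measured in Sec. 3.2.2; strictly, the formula yields the angular displacement between the two frames, and the corresponding speed follows from the update cycle:

```python
import math

def omega_r_from_shift(x_d: float, s_w: float, alpha_deg: float) -> float:
    """Angular displacement (degrees) of the target between two
    successive frames, from the pixel shift x_d, the field width s_w,
    and the angle of view alpha -- no distance l required."""
    half = (x_d / s_w) * math.tan(math.radians(alpha_deg) / 2.0)
    return math.degrees(2.0 * math.atan(half))

# alpha = 4.5 deg, s_w = 2336 px (Sec. 3.1), x_d = 346 px (Sec. 3.2.2):
print(round(omega_r_from_shift(346.0, 2336.0, 4.5), 3))  # 0.667 (degrees)
```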

Finally, ωr is substituted for ωm according to the current time t, yielding

\[ \omega_m = \begin{cases} \omega_r & (t_1 \le t \le t_3), \\ -\omega_r & \text{otherwise (e.g., } t = t_4\text{)}. \end{cases} \tag{4} \]

2.3. Background tracking using rapid block matching method in the Bayer raw domain

2.3.1. Background tracking for the rapid block matching method

To calculate xd, we adopt the concept of background tracking. In a conventional active vision system, particular feature information of the target (e.g., color or shape) is used to calculate the target position. Since the part of the target at which the optical gaze is directed changes at each successive image acquisition in high-speed motion, we instead use a block matching method that can detect an arbitrary part of the target as a search window. In fact, we do not need the target position, only its speed, so block matching can be performed at any position. In addition, since we assume vr is one-dimensional, we only need to assign at least one row or one column as a search window in any part of the target (depending on the direction of motion). Heo et al. demonstrated that such modeling of the search range is valid for reducing the computational cost [14]. If the original search height is 100 pixels and the direction of motion is horizontal, then the computational cost can theoretically be reduced by a factor of 100. This concept is illustrated in Fig. 2(a).

Fig. 2 Rapid block matching method in the Bayer raw domain (BGR). (a) Background tracking for the rapid block matching method. (b) Block matching between two Bayer raw images.

2.3.2. Rapid block matching method in the Bayer raw domain

Here we introduce the rapid block matching method used to acquire xd for color cameras. Although few inspection systems adopt color sensors, color sensors provide more information than monochrome sensors for comprehensive inspection: the more information obtained, the higher the possibility of detecting abnormal points, and hence color sensors are necessary for more comprehensive systems. However, in general, the amount of information and the speed have a trade-off relationship, and so conventional systems mainly use monochrome sensors.

We therefore propose a method that is compatible with both high-speed image processing and color sensors. Software conversion from Bayer raw images into RGB images takes additional time, complicating the implementation of real-time systems; to realize a real-time system, we propose a block matching method that does not require this conversion. There has been some research on processing Bayer raw images [15, 16]. Romanenko et al. implemented block matching between a Bayer raw image and a noise model for de-noising [15]. Yang et al. used only the red pixels of a Bayer raw image for filtering [16]. In contrast, we perform block matching between two Bayer raw images. To achieve higher computational speed, we step the matching window every two pixels, since even before conversion the pixel array repeats RGrRGrRGr… or GbBGbBGbB… horizontally (Fig. 2(b)).

The following equation illustrates this calculation:

\[ R_{SSD} = \sum_{j=0}^{1}\sum_{i=0}^{W_w-1}\bigl(\mathrm{Img}_p(i,j) - \mathrm{Img}_c(i+x,j)\bigr)^2. \tag{5} \]

Here Ww represents the width of the window for block matching, and the candidate shift x is advanced in steps of two pixels from one end of the image to the other so that the Bayer phase is preserved. When RSSD between the previous image Imgp and the current image Imgc is smallest, we set the x position of the window to xd.
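A minimal NumPy sketch of this stride-2 search (function and variable names are ours; the actual system was implemented in C/C++ with OpenCV):

```python
import numpy as np

def match_bayer_rows(img_p: np.ndarray, img_c: np.ndarray,
                     y: int, w_w: int) -> int:
    """Stride-2 SSD search in the spirit of Eq. (5): a two-row window
    of width w_w from the previous raw Bayer frame img_p is slid across
    the same rows of the current frame img_c in 2-pixel steps, so the
    compared samples always keep the same Bayer phase (RGrRGr... or
    GbBGbB...).  Returns the shift x with the smallest SSD."""
    window = img_p[y:y + 2, :w_w].astype(np.int64)
    best_x, best_ssd = 0, None
    for x in range(0, img_c.shape[1] - w_w + 1, 2):   # stride 2
        cand = img_c[y:y + 2, x:x + w_w].astype(np.int64)
        ssd = int(((window - cand) ** 2).sum())
        if best_ssd is None or ssd < best_ssd:
            best_x, best_ssd = x, ssd
    return best_x

# Synthetic check: a frame shifted right by 10 px should match at x = 10.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(4, 64), dtype=np.uint8)
curr = np.roll(prev, 10, axis=1)
print(match_bayer_rows(prev, curr, 0, 16))
```

Because the window is only two rows high and the motion is horizontal, the search cost scales with the image width alone, as discussed in Sec. 2.3.1.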

2.4. Temporal control of the real-time high-speed motion blur compensation system

2.4.1. Control flow

Figure 3 illustrates the control flow. In the initial state P1, we can set an initial value of ωm; then ωm is set automatically in successive processes. After setting an arbitrary value of ωm, at P2, the system itself checks whether or not the current angle of the galvanometer mirror is appropriate for exposure. Then, at P3, the camera exposes an image until a fixed exposure time has elapsed. After the exposure, the mirror starts to rotate in the opposite direction until it reaches the original angle. At the same time, the latest ωm is calculated by using the acquired images at P4 and P5, and the value is set at P1 again. These processes from P1 to P5 are repeated. The frequency of this flow, f, is set before P1 and is governed by the acceleration of the galvanometer mirror and the computational speed. We will discuss f in more detail in Sec. 3.2.1.
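The P1–P5 cycle above can be sketched as pure loop logic (all names are hypothetical; camera and mirror I/O are abstracted into callables, and the 1/f timing is omitted):

```python
def run_cycles(n, expose, match, omega_from_xd, set_speed):
    """Schematic of the P1-P5 control flow.  Each pass: drive the
    mirror at the current omega_m (P1/P2), expose a frame (P3), then
    block-match the two latest frames (P4) and convert the shift into
    the next angular speed (P5)."""
    omega_m = 0.0              # P1: initial value (settable arbitrarily)
    prev = None
    for _ in range(n):
        set_speed(omega_m)             # P1/P2: mirror drive for this cycle
        curr = expose()                # P3: fixed-time exposure
        if prev is not None:
            x_d = match(prev, curr)          # P4: rapid block matching
            omega_m = omega_from_xd(x_d)     # P5: Eq. (3) -> new speed
        prev = curr
    return omega_m

# Toy run with stand-in callables: frames are integers, the "shift" is
# their difference, and the speed is twice the shift.
frames = iter([0, 1, 3, 6])
speeds = []
final = run_cycles(4, lambda: next(frames), lambda p, c: c - p,
                   lambda x: 2.0 * x, speeds.append)
print(final, speeds)
```

In the real system each pass must finish within 1/f = 10 ms, which is why P4 relies on the rapid Bayer block matching of Sec. 2.3.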

Fig. 3 Control flow of real-time motion blur compensation system.

2.4.2. Method of synchronization between the camera exposure timing and the galvanometer mirror angle

The frequency f and the amplitude of the oscillating galvanometer mirror are limited by its weight; thus, the mirror size and the achievable acceleration have a trade-off relationship. A constant angular speed is the most appropriate condition for making ωr and ωm agree with each other, namely, for compensating for motion blur, and we could generate triangular waves with constant positive and negative angular speeds for the back-and-forth motion:

\[ \theta = \omega_m t \quad (t_1 \le t \le t_3). \tag{6} \]

However, since triangular waves have sharp instantaneous turns, the galvanometer mirror would need an extremely high acceleration, and the amplitude would also be small because of the control delay. To avoid this problem, we instead control the mirror with sine waves that approximate triangular waves of the same amplitude A:

\[ \theta = A\sin(2\pi f t). \tag{7} \]

Here, parameter A is given by

\[ A = \frac{\omega_m}{4f}. \tag{8} \]

In Eq. (8), 1/f corresponds to the period of the sine wave, and the mirror angle, starting from 0, reaches A after a quarter of the period.

Sine waves are smooth at all points, and hence the required acceleration is lower than that of triangular waves [17]. This is good for preventing saturation of control and performance degradation of the galvanometer mirror. At the same time, sine waves have approximately linear parts away from the turning points. Thus, we use sine waves to achieve high-speed control of the galvanometer mirror motion. Figure 4 illustrates the difference between a triangular wave and a sine wave. In Fig. 4, t1 and t3 represent turning points of the rotational direction (see also Fig. 1).
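As a small numeric check (parameter values are ours, chosen near the 30 km/h operating point), the sine drive with A = ωm/(4f) stays very close to a straight line over a 1 ms exposure centered on the zero crossing; the local slope used below is 2πfA, the derivative of the sine at that point:

```python
import math

def mirror_angle(t: float, omega_m: float, f: float) -> float:
    """Sine drive with amplitude A = omega_m / (4 f): per half cycle it
    sweeps the same peak-to-peak angle as a triangular wave of slope
    omega_m."""
    a = omega_m / (4.0 * f)
    return a * math.sin(2.0 * math.pi * f * t)

# Deviation from the tangent line over a 1 ms exposure at f = 100 Hz:
omega_m, f, t_ex = 159.0, 100.0, 1e-3          # deg/s, Hz, s (our values)
slope = 2.0 * math.pi * f * (omega_m / (4.0 * f))  # slope at theta = 0
dev = max(abs(mirror_angle(k * 1e-5 - t_ex / 2, omega_m, f)
              - slope * (k * 1e-5 - t_ex / 2)) for k in range(101))
print(round(dev, 4))  # ~0.002 deg, tiny next to the ~0.25 deg swept
```

The key point for blur compensation is that the deviation from linearity within the exposure window is negligible compared with the angle swept.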

Fig. 4 Mirror angle waveform and exposure timing.

Although this sine wave seems to deviate from the triangular wave, in high-speed applications A becomes quite small in order to realize a high frame rate, and hence the difference also becomes small. The slope of this sine curve around 0° agrees with a straight line whose gradient is the ωm given by Eq. (4), and this sine curve therefore allows us to compensate for motion blur.

After configuration, the camera exposes an image from −tex/2 to +tex/2, synchronized with the mirror rotation. These processes are repeated every 1/f.

3. Experimental evaluation

3.1. Experimental setup

To demonstrate our proposed method, we compensated for the motion of a rapidly moving conveyor belt. Figure 5 illustrates the experimental system. To evaluate the performance of our system, we prepared a resolution chart and detailed images to paste onto the surface of the conveyor belt. The still image of the resolution chart had a steep slope on a horizontal profile; therefore, we checked peak-to-peak values of black-and-white pairs at each vr.

Fig. 5 Schematic diagram of the experimental system.

We used a CMOS high-speed color camera (Mikrotron Eosens MC4083), which can acquire full HD images at almost 900 Hz. The galvanometer mirror was an M3 series device manufactured by Cambridge Technology; driven by an analogue servo driver (Mini Sax II), it can oscillate at a few hundred hertz and has an effective diameter of 3 cm, making it suitable for laser projection and camera sensing. We also used an AD/DA interface board (LPC-361216) with 16-bit resolution. The PC had an Intel Xeon E5-1620 CPU and ran Windows 7 Professional. The software was written in C/C++ with OpenCV 2.4.6. The system also included a lamp (Mintage M Power Light PMX-120) and a lens (Nikon AF-S NIKKOR 200mm f/2G ED VR II). A photograph of the prototype motion blur compensation system is shown in Fig. 6.

Fig. 6 Optical components of the prototype real-time high-speed motion blur compensation system.

At the beginning of the experiments, we set the parameters as follows: tex = 1 ms; vr = 0 to 30 km/h; α = 4.5°; sw = 2336 pixels; and l = 3.0 m.

3.2. Preliminary experiment

3.2.1. Response characteristics of the galvanometer mirror

In a first preliminary experiment, we tested the response characteristics of the galvanometer mirror with respect to f to analyze the relationship between the input and output amplitudes. As Duma et al. have shown, response characteristics help in the design of appropriate optical applications [17], and these data become a software requirement. We used a function generator to produce sine waves with frequencies from 100 to 500 Hz. The M3 is officially rated for operation at frequencies up to 300 Hz; however, we experimented beyond this limit to find its limiting characteristics. Additionally, we set the input amplitude from 0 to 500 mV; an input amplitude of ±3 V corresponds to a rotation angle of 30°. When vr is 30 km/h, the target moves forward by 4.2 cm within 5 ms (half of the period at 100 Hz), and therefore the theoretical maximum input amplitude can be derived as ±1.39 mV from arctan(0.042/3)/30 × 3000. However, since such low input amplitudes include noise components, we checked the response up to 500 mV to determine the tendency of the response characteristics.

As a result, we obtained the characteristics shown in Figs. 7(a) and (b). In the figures, the input voltage corresponds to A, and the input frequency corresponds to f. In Fig. 7(a), we found that the plots were linear when f was 100 and 200 Hz, up to an input of 500 mV, and the plots at 100 Hz corresponded to y = x. Moreover, Fig. 7(b) shows that the gain at 100 Hz was 0 dB, whereas the others were below zero. Hence, we set f to 100 Hz in the main experiment.

Fig. 7 Response characteristics of the galvanometer mirror. (a) Input signal [mV] and output signal [mV] (with noise removed to smooth the averaging). (b) Input signal [mV] and gain [dB].

3.2.2. Performance of the rapid block matching method

In a second preliminary experiment, we evaluated the performance of our rapid block matching method. From the result in Sec. 3.2.1, we set f to 100 Hz, and hence the duration of each cycle is 10 ms. If the total time for block matching and calculation of the angular speed is less than 10 ms, then the latest ωr can be set to ωm. Equations (3) and (4) are comparatively light computational processes, whereas Eq. (5) is heavy. We therefore checked whether our proposed block matching method takes less than 10 ms.

To do so, we prepared two horizontally separated Bayer-array still images (see Fig. 8) of part of the conveyor belt, onto which we pasted a noticeable red seal so that the results could be checked in a simple manner. The images were 1500 pixels wide and 848 pixels high, and the displacement between the images in Figs. 8(a) and (b) was 346 pixels.

Fig. 8 Horizontally separated still Bayer-array images to be processed by the straightforward block matching method (green) and the rapid block matching method (red). (a) Previous image. (b) Current image.

Table 1 shows that Bayer conversion took 8.9 ms, the straightforward block matching method with a full-size search range took 4351 ms, and our proposed method with a reduced search range on the raw Bayer image took 4.3 ms (see Fig. 8). The two block matching methods had the same precision, both yielding a distance of 346 pixels. Figure 3 includes some other processes; however, only P4 entails two-dimensional image processing with its high computational cost, whereas the other processes are very simple and computationally light, so they can be excluded from consideration. Thus, we demonstrated that our method is appropriate for implementing a 100-Hz real-time system; the algorithm was almost a factor of 1000 faster than the straightforward one.

The same red pattern was also used in the main experiment.

Table 1. Performance Comparison of Block Matching Methods for Color Images

3.3. Experimental results

Figures 9(a)–(c) show the fundamental results of our system. Although the image in Fig. 9(c) shows degraded sharpness compared with the still image in Fig. 9(a), it is significantly sharper than that in Fig. 9(b). The profiles in Fig. 9 quantify the performance of our motion blur compensation system: the profile in Fig. 9(b) is entirely flat, whereas that in Fig. 9(c) shows distinct peaks because the contrast of the black-and-white stripes improved.

Fig. 9 Fundamental result obtained with our system when vr was 30 km/h vertically, and vertical profiles at the position of the blue lines (with images trimmed for aligned display). (a) Still image. (b) Image at vr = 30 km/h with motion blur compensation off. (c) Image at vr = 30 km/h with motion blur compensation on.

To discuss the results more quantitatively, the peak-to-peak intensity of the initial black-and-white pair at each vr is shown in Fig. 10. When the motion compensation was turned off, it was difficult to distinguish between black and white, whereas when it was turned on, the peak-to-peak value was maintained even at vr = 30 km/h. In all trials, we could acquire images at 100 Hz with motion blur compensation, achieved by synchronizing with the galvanometer mirror.

Fig. 10 Peak-to-peak intensity of the initial vertical black-and-white pair at each vr.

Finally, we show example applications of our system in Fig. 11. Figures 11(a)–(c) show images of cracks in asphalt; the image in (c) is improved compared with that in (b), demonstrating that this system is effective for inspecting the condition of roads, especially under limited illumination and during high-speed motion. Such real-time compensation can help to warn of dangerous road damage that must be mended urgently. Figures 11(d)–(f) show that this system is also effective for checking for defective parts when inspecting objects on a conveyor line. Figures 11(g)–(i) show that the system is also effective for images captured from a helicopter; after image acquisition, the precision of image searching can be improved because motion blur has been compensated. In each of these real-world situations, the operating frequency of 100 Hz makes it possible to capture images without temporal gaps. Thus, we demonstrated that our method is simple and compensates for motion blur in real time.

Fig. 11 Applications of our system in practical situations (with images trimmed for aligned display). The first row ((a), (b), and (c)) shows cracked roads, the second row ((d), (e), and (f)) shows printed boards, and the third row ((g), (h), and (i)) shows helicopter shots. The first column ((a), (d), and (g)) shows still images, the second column ((b), (e), and (h)) shows images during vr = 30 km/h with motion blur compensation off, and the third column ((c), (f), and (i)) shows images during vr = 30 km/h with motion blur compensation on.

4. Discussion

4.1. Improved method of motion blur compensation

The sharpness in Fig. 9(c) is degraded compared with that in Fig. 9(a), and Fig. 10 also shows that, with motion compensation turned on, the peak-to-peak intensity at 30 km/h is around one-half of that in the still condition. As a first possible reason, we consider imperfect synchronization between the camera exposure timing and the mirror angle θ. Since we controlled the galvanometer mirror with open-loop control from the PC, control delay may have caused this imperfect synchronization; in Fig. 4, if the phase is delayed, the effect of motion blur compensation decreases. To avoid this, we must use closed-loop control or a real-time operating system. Second, we could consider another waveform (e.g., a triangular wave or a sawtooth wave); however, as explained in Sec. 2.4.2, sharp edges in the waveform require extremely high acceleration from the galvanometer, so we would also need to consider how to increase the acceleration; this is discussed further in Sec. 4.2. Finally, we can consider the inconsistency between ωr and ωm, which can also be addressed by using a closed loop to check the parameters and by modifying the theoretical model into one suitable for practical use.

4.2. Gain compensation for applications requiring faster performance

As mentioned in Sec. 3.2.1, our system was not compatible with real-time applications requiring f > 100 Hz because of the limited responsiveness of the galvanometer mirror: when f increases, the output gain of the mirror decreases. The galvanometer mirror M3 uses a servo driver based on PID control, and the performance of PID control is limited by the input frequency; if the input frequency is high, PID control does not work well unless the parameters, especially the proportional coefficient, are changed. In general, tuning the PID parameters can solve the gain decrease at high f; however, the performance at lower f will then degrade. If the mirror is made smaller, the responsiveness improves, allowing f to be set higher, but the amount of illumination received by the camera decreases because of the reduced mirror area. For higher-speed applications with the same amount of illumination, we will need to improve the galvanometer mirror or the control method, for example, by using a higher drive current for higher acceleration, by using a lighter mirror (with the same surface area but thinner), by adopting other control methods, or by using other types of galvanometer mirrors.

5. Conclusions

To compensate for motion blur in real time without additional sensors, we developed a system that captures successive images with a high-speed color camera using motion blur compensation. Motion blur compensation was achieved by back-and-forth motion of a galvanometer mirror. To achieve real-time performance, we proposed the concept of background tracking. With this method, we demonstrated that our rapid block matching takes 4.3 ms. We also demonstrated that a frequency of 100 Hz is suitable for controlling the galvanometer mirror, and we demonstrated that our system reduced motion blur at this frequency compared with the conventional approach. We envisage that our system can be applied to various fields (e.g., searching for defective parts on conveyor lines, inspection of road conditions, precise image searching, and so on). We will continue to investigate higher performance systems and methods that can compensate for motion blur more effectively than our current system.

References and links

1. E. Bodenstorfer, J. Furtler, J. Brodersen, K. J. Mayer, C. Eckel, K. Gravogl, and H. Nachtnebel, “High-speed line-scan camera with digital time delay integration,” Proc. SPIE 6496, 64960I (2007). [CrossRef]  

2. B. Golik and D. Wueller, “Measurement method for image stabilizing systems,” Proc. SPIE 6502, 65020O (2007). [CrossRef]  

3. C. W. Chiu, P. C. P. Chao, and D. Y. Wu, “Optimal design of magnetically actuated optical image stabilizer mechanism for cameras in mobile phones via genetic algorithm,” IEEE Trans. Magn. 43(6), 2582–2584 (2007). [CrossRef]  

4. Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, “Comparison of direct blind deconvolution methods for motion-blurred images,” Appl. Opt. 38(20), 4325–4332 (1999). [CrossRef]  

5. J. Zhang, Q. Zhang, and G. He, “Blind deconvolution of a noisy degraded image,” Appl. Opt. 48(12), 2350–2355 (2009). [CrossRef]   [PubMed]  

6. A. Levin, P. Sand, T. S. Cho, F. Durand, and W. T. Freeman, “Motion-invariant photography,” ACM Trans. Graph. 27(3), 71 (2008). [CrossRef]  

7. R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Trans. Graph. 25(3), 795–804 (2006). [CrossRef]  

8. Y. Qian, Y. Li, J. Shao, and H. Miao, “Real-time image stabilization for arbitrary motion blurred image based on opto-electronic hybrid joint transform correlator,” Opt. Express 19(11), 10762–10768 (2011). [CrossRef]   [PubMed]  

9. K. Daniilidis, C. Krauss, M. Hansen, and G. Sommer, “Real-time tracking of moving objects with an active camera,” Real-Time Imag. 4(1), 3–20 (1998). [CrossRef]  

10. H. Oike, H. Wu, C. Hua, and T. Wada, “Clear image capture-active cameras system for tracking a high-speed moving object,” in Proceedings of the Fourth International Conference on Informatics in Control (2007), pp. 94–102.

11. K. Okumura, H. Oku, and M. Ishikawa, “High-speed gaze controller for millisecond-order pan/tilt camera,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 2011), pp. 6186–6191.

12. M. Ito, “Cerebellar control of the vestibulo-ocular reflex - around the flocculus hypothesis,” Annu. Rev. Neurosci. 5, 275–296 (1982). [CrossRef]  

13. M. Davis and P. Green, “Head-bobbing during walking, running and flying: relative motion perception in the pigeon,” J. Exp. Biol. 138(1), 71–91 (1988).

14. J. Heo, J. Kim, and D. Lee, “Real-time digital image stabilization using motion sensors for search range reduction,” in SoC Design Conference (ISOCC, 2012), pp. 363–366.

15. I. V. Romanenko, E. A. Edirisinghe, and D. Larkin, “Block matching noise reduction method for photographic images applied in Bayer RAW domain and optimized for real-time implementation,” Proc. SPIE 8437, 84370F (2012).

16. O. Yang and B. Choi, “Laser speckle imaging using a consumer-grade color camera,” Opt. Lett. 37(19), 3957–3959 (2012). [CrossRef]   [PubMed]  

17. V. Duma, J. P. Rolland, O. Group, A. Vlaicu, and R. Ave, “Advancements on galvanometer scanners for high-end applications,” Proc. SPIE 8936, 893612 (2014). [CrossRef]  


Figures (11)

Fig. 1 Concept of back-and-forth motion control of the galvanometer mirror.
Fig. 2 Rapid block matching method in the Bayer raw domain (BGR). (a) Background tracking for the rapid block matching method. (b) Block matching between two Bayer raw images.
Fig. 3 Control flow of the real-time motion blur compensation system.
Fig. 4 Mirror angle waveform and exposure timing.
Fig. 5 Schematic diagram of the experimental system.
Fig. 6 Optical components of the prototype real-time high-speed motion blur compensation system.
Fig. 7 Response characteristics of the galvanometer mirror. (a) Input signal [mV] and output signal [mV] (with noise removed to smooth the averaging). (b) Input signal [mV] and gain [dB].
Fig. 8 Horizontally separated still Bayer-array images to be processed by the straightforward block matching method (green) and the rapid block matching method (red). (a) Previous image. (b) Current image.
Fig. 9 Fundamental result obtained with our system when vr was 30 km/h vertically, and vertical profiles at the position of the blue lines (with images trimmed for aligned display). (a) Still image. (b) Image during vr = 30 km/h with motion blur compensation off. (c) Image during vr = 30 km/h with motion blur compensation on.
Fig. 10 Peak-to-peak intensity of the initial vertical black-and-white pair at each vr.
Fig. 11 Applications of our system in practical situations (with images trimmed for aligned display). The first row ((a), (b), and (c)) shows cracked roads, the second row ((d), (e), and (f)) shows printed boards, and the third row ((g), (h), and (i)) shows helicopter shots. The first column ((a), (d), and (g)) shows still images, the second column ((b), (e), and (h)) shows images during vr = 30 km/h with motion blur compensation off, and the third column ((c), (f), and (i)) shows images during vr = 30 km/h with motion blur compensation on.

Tables (1)

Table 1 Performance Comparison of Block Matching Methods for Color Images

Equations (8)

$$\frac{s_w}{2l} = \tan\frac{\alpha}{2},$$
$$\frac{x_d}{2l} = \tan\frac{\omega_r}{2}.$$
$$\omega_r = 2\tan^{-1}\!\left(\frac{x_d}{s_w}\tan\frac{\alpha}{2}\right).$$
$$\omega_m = \begin{cases} \omega_r & (t_1 \le t \le t_3), \\ -\omega_r & (\text{e.g., } t = t_4). \end{cases}$$
$$R_{\mathrm{SSD}} = \sum_{j=0}^{1}\sum_{i=0}^{W_w-1}\bigl(\mathrm{Img}_p(i,j)-\mathrm{Img}_c(i+2,j)\bigr)^2.$$
$$\theta = \omega_m t \quad (t_1 \le t \le t_3).$$
$$\theta = A\sin(2\pi f t).$$
$$A = \frac{\omega_m}{4f}.$$
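The relations above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the function names, the two-row Bayer-strip layout, and the treatment of the angles as radians per frame interval are assumptions for the sketch.

```python
import math

def mirror_rotation(x_d, s_w, alpha):
    """Third relation above: mirror rotation omega_r needed to cancel an
    image-plane displacement x_d (pixels per frame), given sensor width
    s_w (pixels) and horizontal view angle alpha (rad)."""
    return 2.0 * math.atan((x_d / s_w) * math.tan(alpha / 2.0))

def bayer_ssd(img_p, img_c, w_w):
    """SSD relation above: sum over two Bayer rows (j = 0, 1) and W_w
    columns; the +2 column shift compares same-color pixels, since the
    Bayer pattern repeats every 2 pixels."""
    return sum((img_p[j][i] - img_c[j][i + 2]) ** 2
               for j in range(2) for i in range(w_w))

def sinusoid_amplitude(omega_m, f):
    """Last relation above: amplitude A of the sinusoidal drive
    theta = A * sin(2*pi*f*t) whose mean sweep rate over a quarter
    period equals omega_m."""
    return omega_m / (4.0 * f)
```

As a sanity check, a displacement spanning the full sensor width (x_d = s_w) yields a rotation equal to the full view angle alpha, and the SSD is zero when the current strip is an exact 2-pixel-shifted copy of the previous one.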
