Optica Publishing Group

Edge enhancement of color images using a digital micromirror device

Open Access

Abstract

A method for orientation-selective enhancement of edges in color images is proposed. The method utilizes the capacity of digital micromirror devices to generate a positive and a negative color replica of the image used as input. When both images are slightly displaced and imaged together, one obtains an image with enhanced edges. The proposed technique does not require a coherent light source or precise alignment, and it could potentially be useful for processing large image sequences in real time. Validation experiments are presented.

©2012 Optical Society of America

1. Introduction

Edge detection is one of the oldest topics in image processing and has been widely studied because it is one of the most crucial steps toward classification, segmentation, and recognition of objects. Edges correspond to abrupt discontinuities in physical quantities such as gray level, color, and texture, and arise from different physical phenomena, such as changes in the orientation of the visible surfaces or in their distance from the viewer (object edges), changes in illumination (shadows), and changes in surface reflectance (color). In gray-level images, an edge usually represents object boundaries or changes in physical properties like illumination or reflectance. This definition is more elaborate in the case of multispectral images, since more detailed edge information is expected from color edge detection [1]. An alternative edge characteristic is that the colors of neighboring pixels are significantly different, even though their brightness values are very similar. Therefore, both changes in brightness and changes in color between neighboring pixels should be exploited for more efficient color-edge extraction. In fact, Novak and Shafer [2] found that about 10% of the edges detected in color images cannot be recognized in gray-value images.

For these reasons, various approaches to edge detection for color images have been proposed in recent decades. Early approaches to color edge detection are extensions of monochrome edge detection. These techniques are applied to the three color channels independently, and then the results are combined using certain logical operations [3,4]. However, these approaches tend to fail in detecting an edge when, in any one color component, adjacent pixels have the same intensity value. More recent approaches assume that the pixels of color and multispectral images can be thought of as vector quantities and thus treat the whole image as a vector field [5]. Several researchers have applied vector order statistics methods such as vector mean and vector median filters [5] or the minimum vector dispersion (MVD) edge detector [6] in the RGB space. A comprehensive evaluation of vector gradient, difference vector, vector range, and entropy operators in the RGB color space is given in [7].
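The early per-channel strategy described above can be sketched numerically. The following is a minimal illustration (not any specific published detector): a thresholded horizontal gradient is applied to each RGB channel independently, and the three binary maps are combined with a logical OR. The function name and threshold value are illustrative assumptions.

```python
import numpy as np

def per_channel_edges(img, thresh=0.2):
    """Early-style color edge detection: threshold the horizontal
    gradient of each RGB channel independently, then OR the three
    binary edge maps together."""
    edges = np.zeros(img.shape[:2], dtype=bool)
    for c in range(3):
        gx = np.abs(np.diff(img[:, :, c], axis=1))   # per-channel horizontal gradient
        channel_edges = np.pad(gx, ((0, 0), (0, 1))) > thresh
        edges |= channel_edges                        # logical combination
    return edges

# A red region next to a green region: both the R and G channels
# change at the boundary, so the combined map marks the edge there.
img = np.zeros((4, 4, 3))
img[:, :2, 0] = 1.0   # left half: red
img[:, 2:, 1] = 1.0   # right half: green
edge_map = per_channel_edges(img)
```

Note that a vector-field treatment (e.g., a vector median filter) operates on the full RGB triplet at each pixel instead of on the channels separately.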

Other approaches perform operations in an alternative color space; e.g., the Sobel operator was applied to each component of the HSI space by Weeks and co-workers [8], and an isotropic color-edge detector operating in the YUV color space was proposed in [9]. However, the concentration of research on the RGB color space is logical, since it avoids the preprocessing step of transforming the captured image into another color space (e.g., to CIELAB or HSI). Finally, the vector angle and Euclidean distance metrics were analyzed and compared in [10,11]. The authors show that these two metrics are complementary: the Euclidean distance does not quantify color similarities, whereas the vector angle is not sensitive to intensity differences but quantifies hue and saturation differences well. The principal disadvantage of digital methods is the computation time spent on an enhancement operation, which increases with the size of the image and the complexity of the employed algorithm (see Section 4).
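The complementarity of the two metrics compared in [10,11] can be illustrated with a small sketch (the function names and test values are ours, chosen for illustration): the Euclidean distance responds strongly to a pure intensity change between neighboring RGB pixels, while the vector angle ignores it and instead responds to a hue change.

```python
import numpy as np

def euclidean_dist(p, q):
    """Euclidean distance between two RGB pixel vectors."""
    return np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))

def vector_angle(p, q):
    """Angle (radians) between two RGB pixel vectors;
    insensitive to a common intensity scaling."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    cos = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
    return np.arccos(np.clip(cos, -1.0, 1.0))

dark_red, bright_red = (0.2, 0.0, 0.0), (1.0, 0.0, 0.0)  # same hue, different intensity
red, green = (0.5, 0.0, 0.0), (0.0, 0.5, 0.0)            # same intensity, different hue

d_intensity = euclidean_dist(dark_red, bright_red)  # large: intensity edge
a_intensity = vector_angle(dark_red, bright_red)    # zero: no hue change
a_hue = vector_angle(red, green)                    # large: hue edge
```

The intensity edge is invisible to the angle metric, and a hue edge between equally bright pixels can be weak under the Euclidean metric, which is why [10,11] advocate using the two jointly.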

Thus, it is worthwhile to consider the realization of such operators on optical processors, which offer the attractive feature of parallel processing in real time (e.g., at video rates). We have recently proposed different approaches involving liquid crystal displays (LCDs) for edge enhancement/detection in gray-level images [12-14].

In this paper, we introduce a new method for orientation-selective edge detection and enhancement in color images. The method is based on the capacity of digital micromirror devices (DMDs) to generate a (positive) copy of the digital image used as input and, simultaneously, a complementary color replica of it, i.e., a “negative” replica [15]. When both images (the original one and its slightly displaced “negative” replica) are simultaneously imaged onto the detection plane, one obtains a resulting image with enhanced first derivatives.

In the next section, we describe in some detail our proposal for edge detection in color images by the partial first-order derivatives of an image, and in Section 3, experimental results are presented. Finally, in Sections 4 and 5, we discuss the advantages of the proposed method and present the conclusions, respectively.

2. Description of the Method

Let I(x,y) be the intensity distribution of the image to be optically processed, and let Iout(x,y) be the intensity distribution obtained at the processor output. For a color image, one can write I(x,y) = (IR(x,y), IG(x,y), IB(x,y)), where the subindices R, G, B refer to the additive primary colors red, green, and blue; an equivalent expression holds for the RGB components of Iout(x,y). [In the following, we assume that the RGB intensities are normalized to unity.]

As mentioned in the Introduction, the detection and enhancement of edges is associated with the enhancement of the image derivatives. In principle, the implementation of an optical processor working with incoherent illumination presents some difficulties, because incoherent illumination implies addition of light intensities and not their subtraction, as required for the implementation of derivatives. (For simplicity, in the following we deal with partial derivatives in the x direction alone. The procedure in the y-direction is equivalent.)

Figure 1 shows the proposed setup for generating the first partial derivatives of color images. It consists of a digital micromirror device (DMD) (denoted D in Fig. 1) with its own control electronic interface (E), a light source (LS) constituted by three RGB LEDs, four mirrors (M1-4), an imaging lens (or lens system) (L), a thin glass plate (G), and a digital color camera (C) to acquire the processed images Iout(x,y).


Fig. 1. Scheme of the system for edge enhancement in a color image. D and E denote the DMD and its own control electronic interface, respectively. L, LS, M1-4, G, and C denote an imaging lens, a light source constituted by three RGB LEDs, four mirrors, a thin glass plate, and a digital color camera, respectively. A personal computer is used to drive the DLP and the CCD camera.


DMDs are currently manufactured for digital light projector (DLP) purposes. When a digital image is displayed on a DMD/DLP, the micromirrors can take only two possible positions: some micromirrors are tilted by an angle α/2 to the right (with respect to the direction normal to the DMD surface) and the rest are tilted by an angle α/2 to the left [16] (see below). In a commercial DMD/DLP, only the light reflected by mirrors tilted in one direction reaches the projection screen, while the light reflected by the other mirrors is considered spurious and must be absorbed by a black surface inside the projector. For the sake of specificity, in the following we will assume that the micromirrors tilted to the right generate what we call the “positive” image I(x,y), while those tilted to the left generate the currently discarded complementary color image I0 − I(x,y) (i.e., a “negative” image), where I0 = (1,1,1) is the white-light illumination of the DLP. The micromirrors are switched at high rates (kilohertz or higher), controlled by the electronics provided by the manufacturer and synchronized with the on/off switching of the RGB LEDs. Thus, at the projection screen, one has a rapid temporal succession of binary single-color images. The temporal average performed by the photodetectors (camera or naked eye) permits the reproduction of half-tone colored images.
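The “negative” replica generated by the left-tilted micromirrors can be expressed in a one-line sketch, assuming RGB intensities normalized to unity as stated above (the function name is ours):

```python
import numpy as np

def negative_replica(img):
    """Complementary color image produced by the left-tilted
    micromirrors: I0 - I(x, y), with white illumination
    I0 = (1, 1, 1) and RGB intensities normalized to unity."""
    return 1.0 - np.asarray(img, dtype=float)

# A yellow pixel (1, 1, 0) maps to its complementary blue (0, 0, 1),
# consistent with the yellow/blue test images in Section 3.
yellow = np.array([1.0, 1.0, 0.0])
blue = negative_replica(yellow)
```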

In our setup, we illuminate the DMD chip from two different directions, forming approximately angles ±α with respect to the direction normal to the DMD surface, as shown in Fig. 2. In principle, for each illumination direction, two complementary intensity patterns are generated simultaneously. However, only the light reflected (quasi-) normal to the DMD surface passes through the imaging lens (L), and thus only one “positive” and one “negative” image are obtained at the output plane (C) of the optical processor. The light paths corresponding to these complementary images are denoted “image +” and “image −” in Figs. 1 and 2. The illumination directions were intentionally chosen so that these light paths are spatially separated; thus, it is possible to slightly alter one of the light paths without perturbing the other.


Fig. 2. Two representative micromirrors, mm1 tilted to the left and mm2 tilted to the right with respect to the direction orthogonal to the DMD surface (dotted line). Both micromirrors are illuminated from two different directions with the help of the mirrors M3 and M4. The light rays reflected (quasi-) orthogonal to the DMD surface will pass through the imaging lens (L).


In other words, two well-defined and spatially separated light bundles (“image +” and “image −”) emerge from the DMD. It is important to remark that both images (positive and negative) are generated simultaneously; this is not an effect of the time averaging (which is necessary only for achieving half-tone images).

In the absence of the glass plate (G), the lens (L) would image each DMD point onto a corresponding point of the output plane (C), and thus we would get a homogeneously bright intensity pattern. On the other hand, when a thin glass plate is interposed in one of the paths (e.g., across “image −”; see Fig. 1), the corresponding image is slightly displaced in a direction determined by the orientation of the plate; e.g., if the plate is rotated around an axis orthogonal to the plane of Fig. 1, the image is displaced in the x direction. (Of course, the plate also produces a slight defocusing of the complementary image and introduces a small amount of astigmatism, but these are second-order effects.)

Then, at the output plane, we will have the incoherent superposition of the “positive” image and a slightly displaced “negative” replica, i.e.,

Iout(x,y) = a·I(x,y) + b·{I0 − I(x+Δx,y)},
where a and b are positive real quantities that take into account a possible intensity imbalance between both images. The equation can be rewritten as
Iout(x,y) = (a−b)·I(x,y) − b·{I(x+Δx,y) − I(x,y)} + b·I0.
Thus, we have the superposition of the original image and its partial derivative, plus a homogeneous intensity pattern. In the particular case when the “positive” and “negative” images have the same intensity, i.e., a = b = 1/2, we will have only the partial derivative of the image on a homogeneous gray intensity background (1/2, 1/2, 1/2).
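A numerical sketch of this superposition (with an assumed discrete displacement of one pixel and a = b = 1/2) reproduces the behavior described above: a homogeneous gray level in flat regions and color-signed edges at the transitions.

```python
import numpy as np

def optical_edge_enhance(img, dx=1, a=0.5, b=0.5):
    """Incoherent superposition of the 'positive' image and a
    displaced 'negative' replica:
        Iout = a * I(x, y) + b * (I0 - I(x + dx, y)),  I0 = (1, 1, 1).
    With a = b = 1/2, flat regions average to gray (1/2, 1/2, 1/2)."""
    shifted = np.roll(img, -dx, axis=1)      # I(x + dx, y)
    return a * img + b * (1.0 - shifted)

# Yellow square on a blue background, as in the test images of Fig. 3.
img = np.zeros((8, 8, 3))
img[:, :, 2] = 1.0                  # blue background
img[2:6, 2:6] = (1.0, 1.0, 0.0)     # yellow square

out = optical_edge_enhance(img)
# Flat regions -> gray; a blue-to-yellow transition gives a bluish
# edge and a yellow-to-blue transition gives a yellowish edge, so the
# sign of the derivative is distinguished by color.
```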

3. Experimental Results

We have performed validation experiments using a commercial DMD/DLP of 600×800 mirrors (model Samsung SP-P410M). The mirror separation is 14 μm, and their angular movement (α/2) is of the order of ±10° (with respect to the direction orthogonal to the micromirror matrix surface) for the on/off states. Each individual micromirror actually rotates around a diagonal axis forming 45° with respect to the principal directions of the mirror matrix. In order to work on a horizontal plane, in our experiments the whole DMD/DLP was rotated 45° around an axis orthogonal to the mirror matrix. Also, the original projection lens and some parts of the plastic box were removed to permit the lateral illumination of the DMD. In our setup, L is a lens system with an effective focal distance of the order of 12 cm, and the glass plate (G) has a thickness of 0.15 mm. The white-light source was the original (RGB-LED) light source of the projector, which is controlled by the electronics provided by the manufacturer. The optically processed images were acquired with a commercial digital color camera (model Pentax K-x) whose objective lens was removed.

First, we performed a series of experiments using computer-generated test images. Specifically, we generated a color image consisting of a yellow capital letter “B” over a homogeneous blue background and displayed it on the DMD. [The color denominations “yellow,” “blue,” etc., used throughout the present work do not necessarily correspond to the standard color denomination (CIE-1931; see, e.g., [1,16]) but have only a descriptive character.]

Figure 3(a) shows the “positive” image obtained when the light reflected by M4 is blocked, while Fig. 3(b) shows the complementary (“negative”) image obtained when the light reflected by M3 is blocked. Figures 3(c) and 3(d) show both images superimposed, with the “negative” image slightly displaced with respect to the “positive” one along the horizontal and vertical direction, respectively. The required (arbitrary) displacements of the “negative” image along the selected directions were achieved by rotating the glass plate G, as explained above.


Fig. 3. Experimental results using computer generated test images. (a), (e) “Positive” images obtained when the light reflected by M4 is blocked; (b), (f) complementary (“negative”) images obtained when the light reflected by M3 is blocked; (c), (g) images obtained by superposition, with the “negative” image slightly displaced with respect to the “positive” one along the horizontal direction; (d), (h) images obtained by superposition, with the “negative” image slightly displaced along the vertical direction.


The vertical image borders in Fig. 3(c) and the horizontal image borders in Fig. 3(d) depict a bluish or a yellowish tone depending on the sign of the transition, from blue to yellow or from yellow to blue; i.e., the technique distinguishes the sign of the derivative along an arbitrarily selected direction. [The blue and yellow colors used to generate the original digital image are not exactly complementary ones. For that reason, the yellow and blue tones in Figs. 3(a) and 3(b) are not exactly the same.]

We performed the same experiment with a computer generated yellow “B” over a homogeneous red background. Figure 3(e) shows the “positive” image obtained when the light reflected by M4 is blocked, while Fig. 3(f) shows the complementary (“negative”) image obtained when the light reflected by M3 is blocked. Figures 3(g) and 3(h) show both images superimposed, with the “negative” image slightly displaced with respect to the “positive” one along the horizontal and vertical direction, respectively. Clearly, Figs. 3(c) and 3(d) and Figs. 3(g) and 3(h) present enhanced edges along the selected directions. The images have a grayish background as expected, but the images shown in Figs. 3(c) and 3(d) have a certain bluish tone, while Figs. 3(g) and 3(h) have a reddish tone. Hence we conclude that the intensities of the “positive” and “negative” image were not exactly balanced.

We repeated the experiment using a digital version of Picasso’s picture Dove of Peace [17] as test image. Figures 4(a) and 4(b) show the “positive” and “negative” image obtained experimentally, respectively. Figures 4(c) and 4(d) depict the superposition of both images, but with the “negative” image slightly displaced with respect to the “positive” one along the horizontal and vertical direction, respectively. Figures 4(c) and 4(d) clearly show the enhancement of the edges of the original picture: We see that the colors of the enhanced edges depend on the colors of the adjacent regions of the image. The optically processed images show a grayish background tone in the regions where the first derivative of the intensity pattern is null, as expected.


Fig. 4. Experimental results using Picasso’s painting Dove of Peace as test image. (a), (b) “Positive” and “negative” image obtained experimentally, respectively; (c), (d) superposition of both images, with the “negative” image slightly displaced with respect to the “positive” one along the horizontal and vertical directions, respectively.


Figures 5(a)-5(d) show the experimental results obtained using a picture of a set of colored cubes with pastel shades [18]. Figures 5(a) and 5(b) are the “positive” and “negative” images, while Figs. 5(c) and 5(d) show the superposition of both images, with the negative replica slightly displaced in the horizontal and vertical direction, respectively. Again, the borders of the cubes are enhanced, but, as mentioned above, the color and the magnitude of the enhanced edges depend on the colors and the brightness difference of the adjacent regions of the image.


Fig. 5. Experimental results using a set of colored cubes with pastel shades as test image. (a), (b) “Positive” and “negative” images, respectively; (c), (d) superposition of both images, with the negative replica slightly displaced in the horizontal and vertical directions, respectively.


In order to illustrate the potential application of the method for processing large image sequences in real time, we used a video recording showing the mitosis of fruit fly embryos observed using two-photon fluorescence microscopy [19]. Figures 6(a)-6(c) show a single-frame excerpt from the video recording (Media 1). Figure 6(a) shows the original image of the mentioned cell division process, while Fig. 6(b) shows the complementary color image. Figure 6(c) shows the optically processed image with enhanced first derivatives along a direction at approximately 45° with respect to the horizontal axis. As expected, this image depicts a grayish background tone in the regions where the first derivative of the original image is null.


Fig. 6. Single-frame excerpt from a video recording (Media 1) showing the cell division of fruit fly embryo observed using two-photon fluorescence microscopy. (a) Original image, (b) complementary color image, (c) superposition of both images with the negative replica slightly displaced.


4. Comparison with Digital Techniques

Table 1 compares the performance of our method with digital edge-detection algorithms implemented on CPUs, GPUs, and FPGAs for different image sizes. The CPU and GPU execution times for the Sobel edge detector were taken from data reported in [20]. In that work, the authors used an Intel Pentium Dual E2160 CPU @ 1.8 GHz with 1 GB RAM for the CPU implementation, and an NVIDIA GeForce 8800GT (1.5 GHz) with 512 MB of global memory for the GPU implementation. The FPGA runtimes were taken from data reported in [21], where the authors implemented the Canny edge detector, using the Sobel method as the gradient operator, on a Spartan-6 FPGA operating at a frequency above 200 MHz. In Table 1, we observe that the absolute runtime of both the GPU and FPGA algorithms increases almost proportionally with the image dimension N×M. In the proposed method, on the other hand, the processing time depends only on the DMD characteristics and, in principle, does not depend on the image size. Most commercially available DMDs can run at 60 Hz for a 24-bit RGB color image and 120 Hz for an 8-bit monochrome image, with an image size up to 2.2 megapixels. Thus, using a simple optical setup with a standard DMD, one can achieve processing rates of 60 to 120 frames/s (i.e., 16.6 to 8.3 ms per image), which is a considerable processing speed (comparable with that of FPGAs when processing RGB images) and sufficient for most practical applications (e.g., traffic light detection [22]). Furthermore, in 3D optical metrology applications it has been reported that a specific DMD/DLP can display 8-bit gray-scale patterns with a resolution of 1024×768 at frame rates up to 700 Hz [23]. Fast implementations would also be possible using fast binary ferroelectric LCDs (the separation and recombination could be done using polarizing beam splitters).
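The scaling argument can be made concrete with a toy comparison (the per-pixel cost below is an assumed, purely illustrative figure, not a measured one): a digital detector's runtime grows with the pixel count N×M, while the optical processor spends one DMD frame period per image regardless of size.

```python
def digital_runtime_ms(n, m, t_per_pixel_ns=10.0):
    """Hypothetical digital edge detector whose runtime grows
    proportionally with the N x M pixel count. The per-pixel cost
    is an assumed, illustrative value."""
    return n * m * t_per_pixel_ns * 1e-6   # ns -> ms

def optical_runtime_ms(rate_hz=60):
    """Proposed optical processor: one DMD frame period per image,
    independent of image size (up to the DMD's native resolution)."""
    return 1000.0 / rate_hz

# Quadrupling the pixel count quadruples the digital runtime,
# while the optical frame period is fixed by the refresh rate alone.
t_small = digital_runtime_ms(512, 512)
t_large = digital_runtime_ms(1024, 1024)
t_optical = optical_runtime_ms(60)
```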

Table 1. Processing Time for Different Image Sizes

5. Conclusions

We have presented an optical method for orientation-selective enhancement of edges in color images. The method is based on the capacity of digital micromirror devices to generate a (positive) copy of the digital image used as input and, simultaneously, a complementary color replica of it. Unlike other methods that propose the difficult decomposition (and subsequent recomposition) of color images into the basic RGB components, our technique deals directly with the color images without decomposition.

The involved optical procedure is simple: it consists of a superposition operation rather than gradients and other digital operations, which are computationally expensive. The proposed method allows the implementation of partial derivatives with incoherent illumination, so the technique is very robust and does not require highly precise alignment. Since it does not involve numerical processing, it could be potentially useful for processing large images in real-time applications requiring edge detection in color (or gray-level) images. Validation experiments were presented. (Our experiments were performed with a low-resolution DMD that happened to be available, which is sufficient for illustrative purposes; of course, in real-world applications, DMDs with higher resolution and frame rates can be used.) To the best of our knowledge, no other incoherent optical system has been proposed that can perform color edge detection in such a simple way.

J. L. Flores expresses his gratitude to Programa de Estancias Sabáticas en el Extranjero, CONACYT-Mexico (project no. 159889), for funding his academic stay at the Facultad de Ingeniería, UdelaR, Uruguay, where this research was developed. The authors are grateful for the financial support from PEDECIBA (Uruguay) and the Comisión Sectorial de Investigación Científica (CSIC, UdelaR, Uruguay). G. Ayubi is grateful for a scholarship from the Agencia Nacional de Investigación e Innovación (ANII, Uruguay).

References

1. A. Koschan and M. A. Abidi, Color Image Processing (Wiley, 2008).

2. C. L. Novak and S. A. Shafer, “Color edge detection,” in Proceedings of DARPA Image Understanding Workshop 1987 (Morgan Kaufmann, 1998), Vol. 1, pp. 35–37.

3. M. Hedley and H. Yan, “Segmentation of color images using spatial and color space information,” J. Electron. Imaging 1, 374–380 (1992). [CrossRef]  

4. T. Carron and P. Lambert, “Color edge detector using jointly hue, saturation and intensity,” in Proceedings of ICIP-94 (IEEE, 1994), pp. 977–981.

5. P. Trahanias and A. N. Venetsanopoulos, “Color edge detection using vector statistics,” IEEE Trans. Image Process. 2, 259–264 (1993). [CrossRef]  

6. P. E. Trahanias and A. N. Venetsanopoulos, “Vector order statistics operators as color edge detectors,” IEEE Trans. Syst. Man Cybern. B 26, 135–143 (1996). [CrossRef]  

7. S.-Y. Zhu, K. N. Plataniotis, and A. N. Venetsanopoulos, “Comprehensive analysis of edge detection in color image processing,” Opt. Eng. 38, 612–625 (1999). [CrossRef]  

8. A. R. Weeks, C. E. Felix, and H. R. Myler, “Edge detection of color images using the HSL color space,” Proc. SPIE 2424, 291–301 (1995). [CrossRef]  

9. J. Fan, D. K. Y. Yau, A. K. Elmagarmid, and W. G. Aref, “Automatic image segmentation by integrating color-edge extraction and seeded region growing,” IEEE Trans. Image Process. 10, 1454–1466 (2001). [CrossRef]  

10. R. D. Dony and S. Wesolkowski, “Edge detection on color images using RGB vector angle,” in Proceeding of IEEE Canadian Conference on Electrical and Computer Engineering (IEEE, 1999), pp. 687–692.

11. S. Wesolkowski and E. Jernigan, “Color edge detection in RGB using jointly Euclidean distance and vector angle,” in Proceedings of Vision Interface ’99 (Canadian Image Processing and Pattern Recognition Society, 1999), pp. 9–16.

12. J. A. Ferrari and J. L. Flores, “Nondirectional edge enhancement by contrast reverted low-pass Fourier filtering,” Appl. Opt. 49, 3291–3296 (2010). [CrossRef]  

13. J. A. Ferrari, J. L. Flores, and G. Garcia-Torales, “Directional edge enhancement using a liquid-crystal display,” Opt. Commun. 283, 2803–2806 (2010). [CrossRef]  

14. J. L. Flores and J. A. Ferrari, “Orientation-selective edge detection/enhancement using the irradiance transport equation,” Appl. Opt. 49, 619–624 (2010). [CrossRef]  

15. D. Dudley, W. M. Duncan, and J. Slaughter, “Emerging digital micromirror device (DMD) applications,” Proc. SPIE 4985, 14–25 (2003). [CrossRef]  

16. D. Malacara, Color Vision and Colorimetry: Theory and Applications (SPIE, 2001).

17. http://arttattler.com/designcoldwarmodern.html (accessed 6 October 2011).

18. http://lujosabarcelona.blogs.elle.es/files/2011/02/color-block.jpg (accessed 6 October 2011).

19. http://www.youtube.com/watch?v=Q8vfl1rR40M (accessed 6 October 2011).

20. N. Zhang, J. Wang, and Y. Chen, “Image parallel processing based on GPU,” in Proceedings of the 2nd International Conference on Advanced Computer Control (ICACC) (IEEE, 2010), pp. 367–370.

21. C. Gentsos, C. L. Sotiropoulou, S. Nikolaidis, and N. Vassiliadis, “Real-time Canny edge detection parallel implementation for FPGAs,” in Proceedings of the 2010 17th IEEE International Conference on Electronics, Circuits, and Systems (IEEE, 2010), pp. 499–502.

22. C. Claus, R. Huitl, J. Rausch, and W. Stechele, “Optimizing the SUSAN corner detection algorithm for a high speed FPGA implementation,” in Proceedings of International Conference on Field Programmable Logic and Applications, 2009 (IEEE, 2009), pp. 138–145.

23. Texas Instruments, “Using the DLP Pico 2.0 kit for structured light applications,” Application Report DLPA021, pp. 1–18 (January 2010).

Supplementary Material (1)

Media 1: MOV (471 KB)     


