Abstract

It is experimentally demonstrated that even though the numerical aperture in the object space is fixed, the resolution of an imaging system can still be improved by adjusting the parameters in the image space. This strategy could not be realized until the discovery of the violation of the Lagrange invariant in a kind of self-interference holography. With this violation, the parameters in the image space escape the constraint of the object space, allowing resolution improvement. Experiments that directly confirm this new ability were implemented, and the results agree well with the theoretical prediction. Additionally, better performance in frequency recording is observed, and finer details beyond the diffraction limit have been recorded with this method.

© 2015 Optical Society of America

1. Introduction

According to diffraction theory [1], two identical emitters separated by less than λ/(2NAo) cannot be resolved, where λ and NAo represent the wavelength of light and the numerical aperture in the object space, respectively. Thus, efforts usually focus on using a smaller λ, a higher NAo (e.g., oil immersion or synthetic aperture), or frequency-modulation methods (e.g., structured or tilted illumination [2, 3]) to obtain better resolution. In recent decades, alternative ways to improve the resolution have been proposed; in these methods, however, the physical or chemical properties of labels are exploited to shatter the diffraction barrier [4–7].

It is worth noting that an optical imaging system actually contains two spaces, the object space and the image space, where the objects and the images are located, respectively. As mentioned above, once the wavelength is settled, methods for resolution improvement usually focus on the object space [2, 3] or combine the imaging system with other technologies [4–7]. In contrast, little effort is made in the image space, because once the numerical aperture in the object space NAo is fixed, changes in the parameters of the image space (i.e., the lateral magnification MT and the numerical aperture in the image space NAi) cannot improve the resolution.

Actually, this conclusion holds because of the constraint of the Lagrange invariant (or the sine condition [1]). From the Lagrange invariant [1] it can be derived that the product of the parameters in the image space equals the numerical aperture in the object space, MTNAi = NAo. Therefore, once the numerical aperture in the object space is fixed, a change in one of MT and NAi causes the other to change inversely and proportionally, so that their product remains NAo [8], resulting in no alteration in resolution. Thus, in a conventional imaging system, because of this constraint, it is impossible to improve the resolution by adjusting the parameters in the image space, and people usually focus on the object space, ignoring the possibility offered by the image space.

This traditional knowledge is altered in a kind of self-interference holography (SH) systems [9–14] in which the Lagrange invariant is violated [11]. In an SH system the reference wave also carries the object information, whereas in a conventional holographic system the reference wave is usually a plane or spherical wave without object information. This property introduces new characteristics to the system, such as the violation of the Lagrange invariant. In our previous paper [11], we presented why and how the Lagrange invariant is violated. Further study is needed, however, to establish the operating rules of the resolving ability after the violation of the Lagrange invariant.

After the violation of the Lagrange invariant, the lateral magnification and the angular magnification can be altered independently [11], which suggests that the parameters in the image space can also be altered independently, escaping the constraint of the Lagrange invariant. Rosen et al. used the violation of the Lagrange invariant to study its effect on resolution through the unconventional ratio between the transverse magnification of the image and the transverse magnification of the spot [8]. In their work, however, these suppositions were not verified with convincing experimental results. Until now, uncertainty remains about the new capability of resolution improvement when the Lagrange invariant is violated, and thorough experimental studies are required.

Here, we also study the effect of the violation of the Lagrange invariant on the resolution. We find that the resolution of an imaging system can be improved by altering the parameters in the image space even though the numerical aperture in the object space is fixed. This could not be realized in the past. We explain how and why this becomes possible, both theoretically and experimentally. More importantly, this strategy is directly confirmed with our two-dot experiments. Furthermore, it is experimentally observed that this method has better performance in frequency recording, and finer details beyond the diffraction limit can be resolved. These experimental results confirm the new capability of resolution improvement when the Lagrange invariant is violated and provide valuable results for optical imaging system research.

2. Theoretical analysis

The study is implemented on a SH system in which the Lagrange invariant is violated. The diagram of one kind of SH system, namely a two-lens Fresnel incoherent correlation holography system [10–12], is depicted in Fig. 1(a) for hologram recording and Figs. 1(b)-1(d) for image reconstruction. One kind of classical optical imaging system (in which the Lagrange invariant holds), namely wide-field microscopy, can also be achieved with the arrangement in Fig. 1(a) by displaying one lens on the spatial light modulator (SLM) and placing the CCD camera in the image plane.


Fig. 1 Schematic of an SH system. (a), Hologram recording; (b), (c), (d), image reconstruction. The actual SLM is reflective but is illustrated as transmissive for clarity.


In Fig. 1(a), the light from the object is collected by an infinitely corrected micro-objective (MO, Olympus, UPLSAPO 4X) with a focal length fo of 45 mm. The collected light is then modulated by an SLM (Holoeye, Pluto VIS). A filter (F, Semrock, FF01-488/20-25) is inserted between the MO and the SLM to control the bandwidth of the light source. A polarizer (P) is also introduced to obtain polarization parallel to the optical axis of the SLM. The SLM acts as two concentric lenses with adjustable focal lengths (lens L1 with focal length f1 and lens L2 with focal length f2). The multiplexing is realized by randomly choosing half of the pixels to simulate one lens while the other half simulates the other. For one point object S, the combination of the MO and the two lenses on the SLM produces two magnified images, O1 and O2, one of which is used as the object wave while the other is used as the reference wave. The hologram plane xy (the CCD plane) is located at a distance d from the SLM, where d can be changed flexibly. With phase-shifting methods [15], the wave front in the hologram plane can be decoded, and an image can then be reconstructed from this wave front, as presented in Figs. 1(b)-1(d). The reconstruction distance dr can be equal to, smaller than, or larger than the recording distance (for example d1), as presented in Figs. 1(b)-1(d), respectively.
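
The phase-shifting decoding step can be sketched numerically. The snippet below is a minimal sketch, not the authors' code: the three-step phase shifts (0, 2π/3, 4π/3), the grid size, and the random test fields are illustrative assumptions. It builds three holograms Ik = |U1 + U2·exp(iθk)|² and solves the same 3×3 linear system at every pixel for the cross term U1*U2:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
# Illustrative complex fields standing in for the two waves split by the SLM
u1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Three-step phase shifting: I_k = A + B*exp(i*theta_k) + conj(B)*exp(-i*theta_k),
# with A = |U1|^2 + |U2|^2 and B = U1* U2 (the cross term to be decoded)
thetas = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
holograms = np.stack([np.abs(u1 + u2 * np.exp(1j * t)) ** 2 for t in thetas])

# Solve the same 3x3 system at every pixel for (A, B, conj(B))
M = np.stack([np.ones(3), np.exp(1j * thetas), np.exp(-1j * thetas)], axis=1)
unknowns = np.linalg.inv(M) @ holograms.reshape(3, -1)
cross_term = unknowns[1].reshape(n, n)  # recovered U1* U2
```

Because the three phase shifts are distinct, the system is invertible and the recovered `cross_term` matches U1*U2 pixel by pixel.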

2.1 Derivation of the parameters in the image space

The parameters in the image space and their relations to the object space will be studied in both the classical imaging system and the SH system by analyzing the point spread function (PSF). It is worth mentioning that here "the parameters in the image space" stands for the numerical aperture in the image space NAi and the lateral magnification MT. As shown later, in the image space the former determines the size of the diffraction patterns while the latter determines the distances between them. Methods for analyzing the PSF of this kind of system can be found in many publications [11–14]; some derivations are presented again here for further study.

Let U1 and U2 represent the two waves split by the SLM. The hologram recorded by the CCD can be described as I = |U1|2 + |U2|2 + U1*U2 + U1U2*, where (•)* denotes the complex conjugate. Using the phase-shifting method [15], a cross term (U1*U2 or U1U2*) of the hologram can be decoded and used to reconstruct the image. Assume that the first cross term, U1*U2, is decoded. For an infinitesimal point object S located at (x10, 0, zs), using the Fresnel diffraction formula [1], U1*U2 can be expressed as

U1*(x, y)U2(x, y) = P(r)·exp[ikr²/(2dr′)]·exp[ikx·x10·fo(f2Tt1 − f1Tt2)/(Tt1Tt2)]   (1)
where x, y are the coordinates in the CCD plane, r = (x² + y²)^(1/2), Tt1 = f1Δl − dΔl − f1zsd, Tt2 = f2Δl − dΔl − f2zsd, Δl = fo² + fozs − d0zs (with d0 the distance between the MO and the SLM), and P(r) = 1 when r < LH and P(r) = 0 otherwise. The parameter LH is the radius of the hologram, which is the smaller of the sizes of the two waves on the CCD camera, i.e., LH = min{r1, r2}. The parameters r1 and r2 are the radii of the two waves in the CCD plane. They can be calculated as r1 = Ls|li1 − d|/li1 and r2 = Ls|li2 − d|/li2, where li1 = 1/(zs/Δl + 1/f1), li2 = 1/(zs/Δl + 1/f2), and Ls is the radius of the wave just reflected by the SLM. The value of dr′ in Eq. (1) can be calculated as

dr′ = 1/[1/(li2 − d) − 1/(li1 − d)]   (2)

The quadratic phase term in Eq. (1) is a spherical wave with focal length dr′. Thus, in the reconstruction process, a focused image Si can be obtained by numerically propagating the field described by Eq. (1) over a distance dr′ in the computer. The propagation distance dr′ is called the reconstruction distance.
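
The numerical propagation over dr′ can be carried out with a standard free-space propagator. The sketch below uses a generic angular-spectrum implementation, not the authors' code; the wavelength, grid, and focal distance are illustrative assumptions. It propagates a converging lens phase and verifies that the field concentrates into a focal spot after the expected distance:

```python
import numpy as np

def angular_spectrum(u, wavelength, dx, z):
    """Propagate field u by distance z with the angular-spectrum method."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    # Evanescent components are suppressed by clamping the argument at zero
    arg = np.maximum(0.0, 1.0 / wavelength**2 - fxx**2 - fyy**2)
    h = np.exp(2j * np.pi * z * np.sqrt(arg))  # transfer function
    return np.fft.ifft2(np.fft.fft2(u) * h)

# Illustrative values (mm): a 1 mm radius aperture with a lens phase of focal length 100 mm
wavelength, dx, n, zf = 0.5e-3, 0.01, 512, 100.0
x = (np.arange(n) - n // 2) * dx
xx, yy = np.meshgrid(x, x)
r2 = xx**2 + yy**2
u0 = (r2 < 1.0**2) * np.exp(-1j * np.pi * r2 / (wavelength * zf))  # thin-lens phase
focus = angular_spectrum(u0, wavelength, dx, zf)  # focused spot at z = zf
```

The peak amplitude after propagation is far above the unit-amplitude aperture, and the spot lands at the grid center, as expected for a centered converging wave.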

The field in the reconstructed image plane can be derived as

U(xi, yi) = [2J1(2πriLH/(λdr′)) / (2πriLH/(λdr′))] ⊗ δ(xi/(λdr′) + x10fo(f2Tt1 − f1Tt2)/(λTt1Tt2), yi/(λdr′))   (3)
where xi, yi are the coordinates in the image plane, ri = (xi² + yi²)^(1/2), J1 is the Bessel function of the first kind of order one, and "⊗" denotes two-dimensional convolution.

The first part of Eq. (3) describes the diffraction pattern of the focused image. According to Eq. (3), in the image space the radius of the diffraction pattern, δi, can be expressed as δi = 0.61λ/(LH/|dr′|), from which the numerical aperture in the image space, NAi, of the system can be expressed as NAi = LH/|dr′|. A higher NAi thus leads to a smaller radius of the diffraction pattern. Combining this with the expression for dr′ in Eq. (2) and setting zs = 0 for simplicity, NAi becomes

NAi = LH|f1 − f2|/|(d − f1)(d − f2)|   (4)

The second part of Eq. (3) describes the lateral location of the reconstructed image Si. From the location of the image, the lateral magnification MT of this holographic system can be calculated [16] as MT = |xi/x10| = dr′fo|f2Tt1 − f1Tt2|/(Tt1Tt2). A higher lateral magnification MT leads to a larger distance between two point images. We should point out that in this paper the negative sign of MT is neglected, as the inversion of the image is not of concern. With the expressions for dr′, Tt1, and Tt2, and setting zs = 0 for simplicity, MT becomes

MT = d/fo   (5)

As mentioned above, in the image space the radius of the diffraction pattern is δi. Referred back to the object space, the diffraction pattern is MT times smaller than δi. Thus, the radius of the diffraction pattern in the object space is

δo = 0.61λ/(MTNAi)   (6)
Equation (6) describes the resolving ability of an optical imaging system.
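
Equations (2), (4), (5), and (6) can be cross-checked numerically. The sketch below (with zs = 0, so li1 = f1 and li2 = f2; the focal lengths, distances, and wavelength are illustrative assumptions, not the experimental values) verifies that the two routes to NAi, via LH/|dr′| and via Eq. (4), coincide:

```python
# Illustrative parameters in mm: fo (objective), f1, f2 (SLM lenses), d (SLM-CCD), Ls (beam radius)
fo, f1, f2, d, Ls = 45.0, 300.0, 700.0, 420.0, 4.32

# Radii of the two waves on the CCD and the hologram radius LH (zs = 0)
r1 = Ls * abs(f1 - d) / f1
r2 = Ls * abs(f2 - d) / f2
LH = min(r1, r2)

# Eq. (2): reconstruction distance (zs = 0, so li1 = f1, li2 = f2)
dr = 1.0 / (1.0 / (f2 - d) - 1.0 / (f1 - d))

# Numerical aperture in the image space, computed two ways
NAi_direct = LH / abs(dr)
NAi_eq4 = LH * abs(f1 - f2) / abs((d - f1) * (d - f2))

# Eq. (5) and Eq. (6), with an assumed illumination wavelength of 488 nm
MT = d / fo
wavelength = 488e-6  # mm
delta_o = 0.61 * wavelength / (MT * NAi_eq4)
```

For these numbers dr′ comes out to 84 mm and both NAi expressions agree, as Eq. (4) is simply Eq. (2) substituted into NAi = LH/|dr′|.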

For the wide-field imaging system, there is only one wave, for example O1. Without loss of generality, set zs = 0 for simplicity. Similarly, the numerical aperture in the image space of the wide-field system can be derived as NAi = Ls/f1, and the lateral magnification as MT = f1/fo. When zs = 0, the numerical aperture in the object space is NAo = Ls/fo.

It can be seen that in the wide-field imaging system, in which the Lagrange invariant holds, the product of the parameters in the image space equals the numerical aperture in the object space, MTNAi = NAo, as expected. Therefore, for an imaging system in which the Lagrange invariant holds, the resolution given in Eq. (6) equals 0.61λ/NAo, consistent with the Abbe diffraction limit. On the other hand, when NAo is fixed (i.e., Ls and fo are fixed), a smaller f1 leads not only to a higher NAi but also to a smaller MT. That is, because of the constraint of the Lagrange invariant, the parameters in the image space change simultaneously and inversely.
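
The wide-field relations above reduce to a one-line identity: for any f1, MTNAi = (f1/fo)(Ls/f1) = Ls/fo = NAo. A minimal sketch (fo and Ls are illustrative values):

```python
fo, Ls = 45.0, 4.32  # mm; illustrative objective focal length and beam radius
NAo = Ls / fo
for f1 in (200.0, 500.0, 1000.0):  # different lenses displayed on the SLM
    MT = f1 / fo   # lateral magnification of the wide-field system
    NAi = Ls / f1  # numerical aperture in the image space
    # The Lagrange invariant forces the product back to NAo, whatever f1 is
    assert abs(MT * NAi - NAo) < 1e-9
```

Changing f1 trades magnification against image-space aperture, but the product, and hence the resolution of Eq. (6), never moves.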

However, from Eqs. (4) and (5) it can be seen that in the SH system the parameters in the image space can change independently, escaping the constraint of the Lagrange invariant. More importantly, the product of the parameters in the image space is no longer NAo. After the violation of the Lagrange invariant, as given in Eq. (6), the resolution is determined by 0.61λ/(MTNAi) rather than by the commonly used Abbe diffraction limit 0.61λ/NAo.

When MTNAi is larger than NAo, the resolution is improved; when MTNAi is smaller than NAo, the resolution is degraded. It can be derived that MTNAi is larger than NAo when min{li1, li2} < d < max{li1, li2}; otherwise it is smaller than or equal to NAo. It should be noted that when one of li1 and li2 is infinite, the object wave is cut off by the reference wave when d > 2min{li1, li2} [13]. This causes a loss of information in the object wave, which is undesirable. Considering this, in such a case the condition for a larger MTNAi is min{li1, li2} < d < 2min{li1, li2}. Moreover, in this system, when there is a perfect overlap between the reference and object waves [12], it can be derived that the largest MTNAi is 2NAo. That means that, compared with a system of the same coherence property, the resolution can be improved at best twofold even with the same NAo.
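
Both the condition min{li1, li2} < d < max{li1, li2} and the bound MTNAi ≤ 2NAo can be checked numerically for zs = 0 (where li1 = f1 and li2 = f2). The sketch below uses illustrative focal lengths; d_overlap = 2f1f2/(f1 + f2) is the distance at which the two waves have equal radii on the CCD, i.e., perfect overlap:

```python
fo, f1, f2, Ls = 45.0, 300.0, 700.0, 4.32  # mm; illustrative values
NAo = Ls / fo

def product(d):
    """MT * NAi of the SH system at recording distance d (zs = 0)."""
    r1 = Ls * abs(f1 - d) / f1
    r2 = Ls * abs(f2 - d) / f2
    LH = min(r1, r2)                                    # hologram radius
    NAi = LH * abs(f1 - f2) / abs((d - f1) * (d - f2))  # Eq. (4)
    return (d / fo) * NAi                               # Eq. (5) times NAi

d_overlap = 2 * f1 * f2 / (f1 + f2)  # equal wave radii: the optimum distance
```

Scanning d confirms the text: inside (f1, f2) the product exceeds NAo, outside it falls below, and it peaks at exactly 2NAo at d_overlap.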

2.2 Theoretical prediction

Based on the theoretical analysis, a simple theoretical prediction of the effect of the Lagrange invariant on the spatial resolution is given in Fig. 2. In a conventional imaging system in which the Lagrange invariant holds, as mentioned above, there exists the relation MTNAi = NAo. The blue line in Fig. 2 shows this relationship between NAi and MT when NAo is fixed: NAi decreases as MT increases. Therefore, as shown in cases w#1 and w#2, in the image space the radii of the point diffraction patterns (determined by NAi) and the distance between them (determined by MT) change simultaneously and proportionally, resulting in no alteration in resolution.


Fig. 2 Relationship between NAi and MT when NAo is fixed. The blue line is for a classical imaging system in which the Lagrange invariant holds.


However, when the Lagrange invariant is violated, the numerical aperture in the image space NAi and the lateral magnification MT can be altered independently, which means that the radii of the point diffraction patterns and their distance can be adjusted independently. Ultimately, this can be used to modulate the resolution of an imaging system, as shown in cases #1-#4 in Fig. 2.

For example, for two dots beyond the Abbe diffraction limit, in cases #1 and #2, MT remains unchanged while NAi is changed; therefore the peak-to-peak distance stays fixed while the radii of the diffraction patterns can be reduced (case #1) or increased (case #2) to alter the resolution. On the other hand, in cases #3 and #4, NAi remains unchanged while MT is changed; therefore the radii stay fixed while the distance can be increased (case #3) or reduced (case #4) to alter the resolution. Notably, the resolving power is improved in cases #1 and #3 even without a higher NAo.

In general, the resolution remains unchanged when NAi and MT lie on the blue line. However, the resolution is improved if they lie in the region above the blue line, and degraded if they lie in the region below it.

3. Experimental results

Experiments were implemented to verify the theoretical prediction. The 1951 USAF target (Edmund, F55-622) and the two-point resolution dots on a test target (Thorlabs, R1L3S5P) were used as objects. When the two-dot target was used, the aperture of the objective was reduced so that the two dots could just be resolved. The targets were located at the front focal plane of the objective (zs = 0) and illuminated with an LED light source (Thorlabs, M455L2). All of the lenses displayed on the SLM had the same radius of 4.32 mm.

The parameters used for each experiment are given in Table 1. For each object, all cases had the same numerical aperture in the object space NAo. By using different experimental parameters, different values of the lateral magnification MT and the numerical aperture in the image space NAi were obtained according to Eqs. (4) and (5). As shown in Table 1, cases #1 and #2 shared the same value of MT as case w#1, but case #1 had higher NAi while case #2 had lower NAi. On the other hand, cases #3 and #4 shared the same value of NAi as case w#1, but case #3 had larger MT while case #4 had smaller MT.


Table 1. Parameters for each experiment (zs = 0).

First, the strategy was examined in a classical wide-field imaging system in which the Lagrange invariant holds. For this system, the imaging results for cases w#1 and w#2 are presented in Figs. 3(a), 3(c) and Figs. 3(b), 3(d), respectively: Figs. 3(a) and 3(b) for the two-point resolution dots and Figs. 3(c) and 3(d) for the USAF target. For each image, the normalized intensity profile (or the normalized projected intensity profile of group 8 for the USAF target) is also presented with the same case number. It should be noted that the intensity profiles are scaled in the image space while the scale bars are scaled in the object space.


Fig. 3 Changes in the resolving power by altering NAi or MT in a wide-field system where the Lagrange invariant holds. (a), (c), Case w#1, NAi = Ls/1000, MT = 22.2; (b), (d), case w#2, NAi = Ls/592, MT = 13.2. Scale bar, 50 μm.


Comparing Figs. 3(a) and 3(b), we can see that from case w#1 to case w#2 the numerical aperture in the image space NAi becomes 1.7 times larger (see Table 1), leading to smaller radii of the diffraction patterns, as shown in Fig. 3(b). Meanwhile, the lateral magnification becomes 1.7 times smaller, leading to a smaller distance between the two diffraction patterns. Therefore, the barely resolvable two-point object remains barely resolved. Likewise, for the USAF target, comparison of Figs. 3(c) and 3(d) shows no change in resolving power. In both cases, element 1 of group 8 can just be resolved in the image of the USAF target, corresponding to a resolution of ~1.95 μm, which is close to the diffraction limit of 1.86 μm. This means that the resolution in case w#1 is the same as in case w#2.

Therefore, as described by the Abbe diffraction limit, in the classical optical imaging system with certain wavelength, when the numerical aperture in the object space NAo is fixed, the resolving power is also fixed. Changes in the parameters in the image space will not improve the resolution.

However, when the Lagrange invariant is violated, as in cases #1 and #2 in Fig. 2, NAi can be changed without any change in MT. This enables the radii of diffraction patterns to be changed while the peak-to-peak distances remain unchanged. Figure 4 presents the influence of NAi on the resolving power when MT stays unchanged.


Fig. 4 Influence of NAi on the resolving power when MT remains unchanged. (a), (c), Case #1, NAi = Ls/550, MT = 22.2; (b), (d), case #2, NAi = Ls/2200, MT = 22.2. Cases #1 and #2 have the same lateral magnification MT as case w#1, but case #1 has higher NAi while case #2 has lower NAi. Scale bar, 50 μm.


By using different experimental parameters (see Table 1), cases #1 and #2 had the same lateral magnification MT as case w#1; compared with case w#1, case #1 had higher NAi (~2 times) while case #2 had lower NAi (~0.5 times). The imaging results for cases #1 and #2 are given in Figs. 4(a), 4(c) and Figs. 4(b), 4(d), respectively: Figs. 4(a) and 4(b) for the two-point resolution dots and Figs. 4(c) and 4(d) for the USAF target.

Compared with case w#1 in Fig. 3(a), the NAi of case #1 becomes larger while MT stays the same. Therefore, as shown in Fig. 4(a), the radii of the diffraction patterns decrease while the distance between them stays unchanged, and the change in radii agrees well with the theoretical prediction. As a result, the two barely resolvable points in case w#1 can be easily distinguished in case #1. Consequently, the resolution is improved, and the unresolvable bar in Fig. 3(c) (marked by a blue circle) can be clearly distinguished in Fig. 4(c).

Obviously, comparing Figs. 3(c) and 4(c), the resolving ability of the holographic system is better than that of the wide-field imaging system, and fine details beyond the diffraction limit can be resolved. From the intensity profile of the holographic image, group 8 element 6 can just be resolved, corresponding to a resolution of 1.10 μm. Compared with Fig. 3(c), the resolution is improved about 1.8 times, in agreement with the theoretical prediction (1000/550 ≈ 1.8).
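
The USAF resolution values quoted above follow from the standard element spacing: the line-pair frequency of group g, element e is 2^(g + (e − 1)/6) lp/mm, and the resolved line width is half a line pair. A quick check, assuming λ = 488 nm (the filter's center wavelength) and NAo = 0.16 for the 4X objective; the NA value is our assumption, not stated in the text:

```python
def usaf_linewidth_um(group, element):
    """Line width (um) of a 1951 USAF element: half of one line pair."""
    freq_lp_per_mm = 2.0 ** (group + (element - 1) / 6.0)
    return 1000.0 / (2.0 * freq_lp_per_mm)

wavelength_um, NAo = 0.488, 0.16          # assumed values
abbe_limit = 0.61 * wavelength_um / NAo   # ~1.86 um, the wide-field limit

w81 = usaf_linewidth_um(8, 1)  # group 8, element 1: ~1.95 um (wide-field cases)
w86 = usaf_linewidth_um(8, 6)  # group 8, element 6: ~1.10 um (holographic case #1)
```

Under these assumptions, group 8 element 1 (~1.95 μm) sits just above the 1.86 μm wide-field limit, while group 8 element 6 (~1.10 μm) lies well below it, and the ratio of the two widths is ~1.8.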

On the other hand, for case #2, as given in Fig. 4(b), when NAi becomes smaller, the radii of the diffraction patterns become larger without any change in the distance between them. (We should point out that in the experiments the change in radius is slightly larger than twofold, resulting from the overlap of the two diffraction patterns.) Therefore, the two barely resolvable points can no longer be distinguished. The resolution is decreased, and the bars of Fig. 3(c) become more blurred in Fig. 4(d).

Thus, the resolution can be modulated by changing the radii of the point diffraction patterns (with different NAi) while the distance between them remains the same (with the same MT). In other words, the resolution can be modulated by changing the numerical aperture in the image space (NAi) while the lateral magnification (MT) stays unchanged. In particular, the resolution can be improved with an increased NAi even though NAo is fixed.

Similarly, when the Lagrange invariant is violated, as in cases #3 and #4 in Fig. 2, MT can be altered while NAi remains unchanged. This enables the distance between two diffraction patterns to be changed while their radii remain unchanged. Figure 5 presents the influence of MT on the resolving power when NAi stays unchanged. In these experiments, the light source was changed to one with a bandwidth of ~2 nm (by using another filter, Semrock, LL01-458-25) to satisfy the larger coherence-length requirement [10]. Here the change of bandwidth has little effect on the resolution.


Fig. 5 Influence of MT on the resolving power when NAi keeps unchanged. (a), (c), Case #3, NAi = Ls/1000, MT = 39.5; (b), (d), case #4, NAi = Ls/1000, MT = 13.2. For each object, cases #3 and #4 have the same NAi as case w#1, but case #3 has larger MT while case #4 has smaller MT. Because of differences in magnification, the size and contrast of images have been adjusted and some margin areas have been cut for display. Scale bar, 50 μm.


By using different experimental parameters (see Table 1), for each object, cases #3 and #4 had the same NAi as case w#1, but comparing with case w#1, case #3 had larger MT (~1.8 times) while case #4 had smaller MT (~0.6 times). The imaging results under cases #3 and #4 are given in Figs. 5(a), 5(c) and Figs. 5(b), 5(d) respectively, Figs. 5(a) and 5(b) for the two-point resolution dots and Figs. 5(c) and 5(d) for the USAF target.

Compared with case w#1 in Fig. 3(a), the MT of case #3 becomes larger while its NAi stays the same. Therefore, as shown in Fig. 5(a), the two diffraction patterns move farther away from each other while their radii stay unchanged. As a result, the two barely resolvable points in case w#1 can be easily distinguished in case #3. The resolution is improved, and the unresolvable bars in Fig. 3(c) marked by the blue circle become resolvable in Fig. 5(c). It should be noted that this improvement is not as good as in case #1 because of the smaller signal-to-noise ratio at a large recording distance [10].

On the other hand, for case #4, in Fig. 5(b), when MT becomes smaller, the two diffraction patterns become closer to each other without any change in the radii of diffraction patterns. (It should be noted that the small change in radius results from the overlapping of two diffraction patterns). Accordingly, the two barely resolvable points in Fig. 3(a) can no longer be distinguished in Fig. 5(b). Consequently, the resolution decreased, and comparing Fig. 5(d) with Fig. 3(c), the unresolvable bars become more blurred.

Thus, the resolution can be modulated by changing the distance between the point diffraction patterns (with different MT) while their size remains the same (with the same NAi). In other words, the resolution can be modulated by changing the lateral magnification (MT) while the numerical aperture in the image space (NAi) stays unchanged. In particular, the resolution can be improved with an increased MT even though NAo is fixed.

It should be pointed out that all the experimental results shown above were obtained with one of NAi and MT changed while the other was fixed. Actually, NAi and MT can be changed simultaneously to affect the resolution. So far, we have proposed and experimentally demonstrated that the resolving power of the imaging system can be changed by altering NAi or MT, and experiments with the two-dot target were performed to directly confirm the effect on resolution.

In addition, important observations have been made in the Fourier transforms of the images. The spectral distributions (on a common logarithmic scale) of the wide-field image (shown in Fig. 3(c)) and the holographic image (shown in Fig. 4(c)) are given in Fig. 6(a) and Fig. 6(b), respectively. In Fig. 6, the background noise of the holographic image is higher, which might be caused by the sharper transfer function of the SH system.


Fig. 6 Comparison of the spectrum. (a), Spectral distribution of the wide-field image given in Fig. 3(c). (b), Spectral distribution of the holographic image given in Fig. 4(c).


In Fig. 6(a), it should be noted that the bright lines along the x and y axes were caused by the high brightness of Fig. 3(c). From Figs. 6(a) and 6(b), it is obvious that the image obtained with the SH system contains more frequency components than that of the wide-field imaging system. This means that the SH system records more frequencies than the wide-field imaging system because finer details have been recorded, which also demonstrates the better resolving ability of the SH system.
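
The spectral comparison of Fig. 6 can be reproduced from any reconstructed image with a two-dimensional FFT displayed on a common logarithmic scale. A minimal sketch, using a synthetic bar pattern as a stand-in since the experimental images are not available here:

```python
import numpy as np

def log_spectrum(image):
    """Centered magnitude spectrum on a common logarithmic scale."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.log10(1.0 + np.abs(spectrum))  # +1 avoids the log of zero

# Synthetic stand-in for a recorded image: a bar pattern like a USAF element
x = np.arange(256)
image = np.tile((x // 8) % 2, (256, 1)).astype(float)  # vertical bars, 16 px period
s = log_spectrum(image)
```

After `fftshift`, the DC term sits at the center of the map and dominates, with the bar-pattern harmonics appearing along one frequency axis, mirroring the bright central structure in Fig. 6.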

4. Discussion

It is worth emphasizing that although the proof-of-principle experiments presented here were performed under low-NAo conditions because of the available resolution of the targets, this strategy can also work in a high-numerical-aperture system, because even in such a system the image space is still under paraxial conditions. A relay system with small magnification can be adopted for a second imaging step: the Lagrange invariant holds in the relay system, and the strategy presented here can then be used to violate it for resolution improvement.

Moreover, it should be mentioned that although the achievable MTNAi here is at best two times NAo, theoretically a higher improvement could be obtained by further increasing the numerical aperture in the image space, since NAi can be further increased with shorter reconstruction distances by utilizing multiplex-wave interference. This part of the research will be studied in detail in future work.

5. Conclusion

In summary, we have shown that for a standard optical imaging system, as indicated by the Abbe diffraction limit, once the numerical aperture in the object space NAo is fixed, the resolution is also fixed and cannot be improved by increasing the parameters in the image space (i.e., NAi or MT), a consequence of the Lagrange invariant. However, for one kind of holographic system, it is demonstrated that the parameters in the image space, NAi and MT, can be adjusted independently because of the violation of the Lagrange invariant. The experiments with the two-dot target directly show the resulting ability that, in the image space, the radii of the point diffraction patterns and the distance between them can be adjusted independently. This ability enables the improvement of resolution by altering the parameters in the image space. Additionally, it is experimentally observed that this method has better performance in frequency recording, and details beyond the Abbe diffraction limit can be resolved.

This approach is particularly attractive because it requires no increase in the numerical aperture in the object space, and it can be easily realized in existing holographic microscopes by introducing object information into the reference wave. Moreover, it shows that one can also focus on the image space for higher resolution, which indicates a promising way for resolution improvement and refreshes our knowledge of optical imaging systems.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 91232306, and No. 61372155), and 973 program (2015CB755603).

References and links

1. M. Born and E. Wolf, Principles of Optics (Cambridge University, 2005).

References

1. M. Born and E. Wolf, Principles of Optics (Cambridge University, 2005).

2. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000).

3. A. Hussain, J. L. Martínez, A. Lizana, and J. Campos, “Super resolution imaging achieved by using on-axis interferometry based on a Spatial Light Modulator,” Opt. Express 21(8), 9615–9623 (2013).

4. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19(11), 780–782 (1994).

5. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006).

6. S. T. Hess, T. P. K. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91(11), 4258–4272 (2006).

7. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006).

8. J. Rosen and R. Kelner, “Modified Lagrange invariants and their role in determining transverse and axial imaging resolutions of self-interference incoherent holographic systems,” Opt. Express 22(23), 29048–29066 (2014).

9. J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32(8), 912–914 (2007).

10. X. Lai, Y. Zhao, X. Lv, Z. Zhou, and S. Zeng, “Fluorescence holography with improved signal-to-noise ratio by near image plane recording,” Opt. Lett. 37(13), 2445–2447 (2012).

11. X. Lai, S. Zeng, X. Lv, J. Yuan, and L. Fu, “Violation of the Lagrange invariant in an optical imaging system,” Opt. Lett. 38(11), 1896–1898 (2013).

12. B. Katz, J. Rosen, R. Kelner, and G. Brooker, “Enhanced resolution and throughput of Fresnel incoherent correlation holography (FINCH) using dual diffractive lenses on a spatial light modulator (SLM),” Opt. Express 20(8), 9109–9121 (2012).

13. J. Rosen, N. Siegel, and G. Brooker, “Theoretical and experimental demonstration of resolution beyond the Rayleigh limit by FINCH fluorescence microscopic imaging,” Opt. Express 19(27), 26249–26268 (2011).

14. P. Bouchal, J. Kapitán, R. Chmelík, and Z. Bouchal, “Point spread function and two-point resolution in Fresnel incoherent correlation holography,” Opt. Express 19(16), 15603–15620 (2011).

15. I. Yamaguchi, T. Matsumura, and J. Kato, “Phase-shifting color digital holography,” Opt. Lett. 27(13), 1108–1110 (2002).

16. M. Gu, Principles of Three-Dimensional Imaging in Confocal Microscopes (World Scientific, 1996).





Figures (6)

Fig. 1 Schematic of an SH system. (a), Hologram recording; (b), (c), (d), image reconstruction. The actual SLM is reflective but is illustrated as transmissive for clarity.

Fig. 2 Relationship between NAi and MT when NAo is fixed. The blue line is for a classical imaging system in which the Lagrange invariant holds.

Fig. 3 Changes in the resolving power obtained by altering NAi or MT in a wide-field system where the Lagrange invariant holds. (a), (c), Case w#1, NAi = Ls/1000, MT = 22.2; (b), (d), case w#2, NAi = Ls/592, MT = 13.2. Scale bar, 50 μm.

Fig. 4 Influence of NAi on the resolving power when MT remains unchanged. (a), (c), Case #1, NAi = Ls/550, MT = 22.2; (b), (d), case #2, NAi = Ls/2200, MT = 22.2. Cases #1 and #2 have the same lateral magnification MT as case w#1, but case #1 has a higher NAi while case #2 has a lower NAi. Scale bar, 50 μm.

Fig. 5 Influence of MT on the resolving power when NAi remains unchanged. (a), (c), Case #3, NAi = Ls/1000, MT = 39.5; (b), (d), case #4, NAi = Ls/1000, MT = 13.2. For each object, cases #3 and #4 have the same NAi as case w#1, but case #3 has a larger MT while case #4 has a smaller MT. Because of the differences in magnification, the size and contrast of the images have been adjusted and some marginal areas have been cropped for display. Scale bar, 50 μm.

Fig. 6 Comparison of the spectra. (a), Spectral distribution of the wide-field image given in Fig. 3(c). (b), Spectral distribution of the holographic image given in Fig. 4(c).

Tables (1)

Table 1 Parameters for each experiment (zs = 0).

Equations (6)

$$U_1^*(x,y)\,U_2(x,y)=P(r)\exp\!\left[\frac{ikr^2}{2d_r}\right]\exp\!\left[\frac{ikx\,x_{10}\left(f_2T_{t1}-f_1T_{t2}\right)}{f_o T_{t1}T_{t2}}\right]\tag{1}$$

$$d_r=\frac{1}{1/\!\left(l_{i2}-d\right)-1/\!\left(l_{i1}-d\right)}\tag{2}$$

$$U(x_i,y_i)=\left[\frac{2J_1\!\left(2\pi r_i L_H/\lambda d_r\right)}{2\pi r_i L_H/\lambda d_r}\right]\otimes\delta\!\left(\frac{x_i}{\lambda d_r}+\frac{x_{10}\left(f_2T_{t1}-f_1T_{t2}\right)}{f_o\,\lambda T_{t1}T_{t2}},\,\frac{y_i}{\lambda d_r}\right)\tag{3}$$

$$NA_i=\frac{L_H\left|f_1-f_2\right|}{\left|\left(d-f_1\right)\left(d-f_2\right)\right|}\tag{4}$$

$$M_T=d/f_o\tag{5}$$

$$\delta_o=0.61\lambda/\!\left(M_T\,NA_i\right)\tag{6}$$
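Equations (4)–(6) can be evaluated directly to see how the image-space parameters determine the object-space resolution. The following Python sketch implements those three relations; all numerical values (wavelength, focal lengths, distances) are hypothetical placeholders chosen only for illustration, not the parameters of the experiments in Table 1.

```python
# Sketch of Eqs. (4)-(6): image-space numerical aperture NA_i,
# lateral magnification M_T, and the implied object-space resolution.

def na_image(L_H, f1, f2, d):
    """Eq. (4): NA_i = L_H |f1 - f2| / |(d - f1)(d - f2)|."""
    return L_H * abs(f1 - f2) / abs((d - f1) * (d - f2))

def lateral_magnification(d, f_o):
    """Eq. (5): M_T = d / f_o."""
    return d / f_o

def object_resolution(wavelength, M_T, NA_i):
    """Eq. (6): delta_o = 0.61 * lambda / (M_T * NA_i)."""
    return 0.61 * wavelength / (M_T * NA_i)

# Hypothetical recording parameters (all lengths in metres):
lam = 633e-9         # wavelength
f_o = 9e-3           # objective focal length
L_H = 2e-3           # hologram radius
f1, f2 = 0.3, 0.5    # focal lengths of the two diffractive lenses on the SLM
d = 0.2              # recording distance

NA_i = na_image(L_H, f1, f2, d)
M_T = lateral_magnification(d, f_o)
delta_o = object_resolution(lam, M_T, NA_i)
print(f"NA_i = {NA_i:.4f}, M_T = {M_T:.1f}, delta_o = {delta_o * 1e6:.2f} um")
```

Note that, consistent with the abstract's claim, changing d (and hence NA_i and M_T together) moves delta_o even though no object-space quantity appears explicitly in Eq. (6).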
