
Nano illumination microscopy: a technique based on scanning with an array of individually addressable nanoLEDs

Open Access

Abstract

In lensless microscopy, spatial resolution is usually provided by the pixel density of current digital cameras, which are approaching a hard-to-surpass pixel-size/resolution limit of around 1 µm. As an alternative, the resolving power can be shifted from the detector to the light sources, enabling a new kind of lensless microscopy setup. The use of continuously scaled-down Light-Emitting Diode (LED) arrays to scan the sample allows resolutions on the order of the LED size, giving rise to compact and low-cost microscopes without mechanical scanners or optical accessories. In this paper, we present the operation principle of this new approach to lensless microscopy, together with simulations that demonstrate its potential for super-resolution, as well as a first prototype. This proof-of-concept setup integrates an 8 × 8 array of LEDs, each with a 5 × 5 μm² pixel size and a 10 μm pitch, and an optical detector. We characterize the system using Electron-Beam Lithography (EBL) patterns. Our prototype validates the imaging principle and opens the way to improving resolution by further miniaturizing the light sources.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Nowadays, lensless microscopy is a relevant competitor to the classical optical approach. Taking advantage of electronics downscaling and of the increased availability of processing power, lensless arrangements appeared as an effort to reduce the complexity of optical setups. They provide simpler, inexpensive and more flexible microscopes thanks to the absence of optical elements [1–6], and even enable the integration of microfluidics directly on dedicated microscopes [7]. With these capabilities, lensless microscopes have become widely used, for example in disease diagnosis [8], tracking of biological samples [9] or microbial observation [10].

In parallel, conventional microscopy kept improving until it finally met a fundamental limit: the diffraction inherent to all optical systems [11]. The observation of objects with dimensions below this limit remained accessible only to electron microscopy techniques, at the price of bulky and expensive instruments and of excluding the observation of live samples due to the preparation processes involved [12]. The development of super-resolution techniques (STED [13], STORM [14], PALM [15]) opened up the direct observation of objects at scales where molecular processes are important, and offered the possibility to look inside living cells [16,17]. As these techniques kept improving, new uses have been found, moving towards the chemical world for single-molecule tracking [18–20] or offering information about chemical reactions [21,22]. Nevertheless, these methods did not escape the need for relatively large and expensive setups [23].

One of the problems of applying the lensless approach to super-resolution in order to simplify the setups is the ultimate resolution achievable. In the standard configuration, the sample is illuminated by a known and controlled light source while a CCD or CMOS camera records the shadow image, which is used to reconstruct the object. In this case, lensless microscopes are limited by the pixel size of the camera, which is restricted by the microelectronic technology used and its noise levels. Currently this size is constrained to around 1 µm [24]. This limitation can be mitigated using additional techniques such as pixel super-resolution, though at the expense of additional mobile parts and more complex setups [25].

Another approach capable of providing resolution below the diffraction limit is the family of Scanning Near-field Optical Microscopy (SNOM) methods. These methods consist of scanning the zone of interest with a point light source, whose light is scattered by the sample and collected in the far field, providing information about the topography of the illuminated area. Lateral resolutions of 20 nm and vertical resolutions of 2–5 nm have been demonstrated by these means [26–30]. SNOM and its variations are used in diagnosis [31], nanoscale electromagnetic field mapping with biosensing or quantum optics applications [32], the development of new photonic metamaterials and subwavelength confinement structures [33], and protein structure imaging [34].

As an alternative approach, we propose to use spatially resolved light sources to scan the sample by switching on and off, one after the other, the Light-Emitting Diodes (LEDs) on a single chip. As we demonstrate in this work, a microscope can be built by measuring the intensity of light reaching the sensor from every individual LED as it passes through a sample, in what one could call Nano Illumination Microscopy (NIM). At present, high-luminosity GaN LED arrays can be fabricated with pixel sizes in the micrometer range [35], and research results on even smaller devices are promising [36]. Following the approach proposed here, it is thus feasible to use, in the future, arrays of nanoLEDs with a pitch smaller than Abbe’s diffraction limit to render super-resolution images. Since no bulky lenses or moving parts are involved, and since the components are exclusively based on mass-producible microelectronic technologies, these new lensless microscopes have the potential to make super-resolution microscopy on a chip ubiquitous and accessible to everyone. This would open the possibility to integrate microscopes into much smaller devices than the conventional lensless approach allows, and without the expensive scanning setups needed for SNOM measurements [37].

In this context, this work presents the basis of this new operation principle, with simulations that predict its operation in super-resolution conditions for an LED array with a pitch of 100 nm. We also present the first proof-of-concept prototype, built to investigate the imaging capabilities of this new NIM approach to lensless microscopy, together with the corresponding simulations. This microscope is composed of an array of $8 \times 8$ LEDs with 5 µm size and 5 µm spacing, and a CMOS Single-Photon Avalanche Diode (SPAD) detector. The results obtained with this prototype confirm the imaging principle and support the suitability of the proposed setup for future super-resolution operation.

2. NIM operation principle

The principle behind NIM is similar to that of SNOM, but without the need for moving parts or scanning tips. Instead, the sampling is done by switching the LEDs of an array placed close to the sample on and off, one at a time. Figure 1 shows a schematic of the image acquisition process for a single line of the LED array. When the same process is repeated for every row of the array, the result is a direct map of the sample according to the light it blocks from each LED. This means that the microscope sensor only needs to capture the light/shadow cone from every LED in the array to operate properly, greatly relaxing the requirements on its fill factor and pixel size.
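
To make the acquisition sequence concrete, the following minimal Python sketch mirrors the scan described above for an $8 \times 8$ array. The functions set_led and read_sensor are hypothetical placeholders for the prototype's LED driver and detector interfaces, which are not specified here.

```python
# Minimal sketch of a NIM acquisition loop: switch on one LED at a time,
# record the transmitted intensity, and arrange the readings as an 8 x 8 map.
import numpy as np

N_ROWS, N_COLS = 8, 8

def set_led(row, col, on):
    """Placeholder: switch a single LED of the array on or off."""
    pass

def read_sensor():
    """Placeholder: return the integrated intensity seen by the detector."""
    return 0.0

def acquire_nim_frame(reference=None):
    """Scan the LEDs one at a time and build the raw intensity map."""
    frame = np.zeros((N_ROWS, N_COLS))
    for r in range(N_ROWS):
        for c in range(N_COLS):
            set_led(r, c, True)           # illuminate one spot of the sample
            frame[r, c] = read_sensor()   # light transmitted past the sample
            set_led(r, c, False)
    if reference is not None:             # per-LED flat-field measurement (no sample)
        frame = frame / reference
    return frame
```

Dividing by a per-LED reference frame taken without a sample anticipates the calibration described in Section 5 and removes the differences in emission between LEDs.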

Fig. 1. Operation principle of the NIM microscope: the sample is scanned by illuminating alternatively with different LEDs, and the sensor on top records the intensity of light reaching it.

In a general lensless setup, the field of view and the resolution depend on the distances between the light emitter, the sample and the sensor. For a NIM setup, this influence has been analyzed by means of ray tracing calculations. Figure 2 shows the layout and results of ray tracing simulations comparing three fundamental operating conditions for a single LED row (the process is the same for every row). In this case, the sensor is 10 µm in diameter, and the samples are four completely opaque, 12 µm wide objects placed at different distances from the LEDs and the sensor.

Fig. 2. Detail and results for the ray tracing simulation of the microscope operation, for a single LED row. The red surfaces on the top part of each figure are the 10 µm sensor, while the test samples are in black and are 12 µm wide patterns. The LEDs (in blue at the lower part of each figure) switch on and off sequentially, creating shadow patterns on the sensor. The figure at the bottom shows the relative illumination received on the sensor with each LED. (a) Shadow imaging case: the samples are close to the sensor, far from the LEDs. (b) Intermediate case: the increased distance makes proper sampling more difficult. (c) Ideal NIM scan mode: the samples are very close to the LEDs, creating large shadow patterns.

In Fig. 2(a) the setup operates in the conventional lensless microscopy configuration. The samples are placed far from the light sources and create the same shadows on the sensor plane regardless of which LED illuminates them, because the LED pitch is small compared to the distance to the sample (note the difference in scale between the x and y axes). In shadow imaging it is important to keep the sample as close to the imager as possible, in order to have both maximum resolving power and field of view [19], so the resolution is limited by the sensor pixel pitch. The opposite holds for NIM setups, where the distance to be minimized is the one between the sample and the light sources.

The process of creating a NIM image is shown in Figs. 2(b) and 2(c) for a single row of the LED array, and consists of switching on one LED at a time to illuminate a small portion of the sample. The amount of light reaching the sensor gives information about the object geometry, and the closer the object is to the light source, the sharper and more contrasted the resulting image. The ultimate resolution of a NIM microscope, i.e. its capability to resolve sample features, is related to the pitch of the LEDs: objects spaced more closely than the LEDs cannot be resolved. In the particular case of a pattern with the same periodicity as the LEDs, the same signal would be measured throughout the scan (i.e. all features would cover the LEDs equally). Single objects with the same pitch as the LEDs can still be detected, although this is a limit case that depends on the relative positions of the samples and the sensor. To properly resolve two objects, their pitch has to be larger than twice the LED pitch, in accordance with sampling theory. For objects below that limit, aliasing occurs and information is lost. As visible in Fig. 2(b), the opaque squares with a pitch of 12.8 µm and a length of 6.4 µm are not properly imaged in the central area because they are aliased together; this figure reproduces the distances involved in the prototype presented later. Figure 2(c) simulates the ideal case with the object very close to the LED array, which creates large, easily detected shadow areas on the sensor and a field of view equal to the size of the array itself.
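
The geometry of Figs. 2(b) and 2(c) can be reproduced qualitatively with a one-dimensional, point-source shadow model. The sketch below is only an illustration: the sample layout (two opaque 12 µm features), the LED and sensor placement and the distances are assumptions chosen for clarity, not the parameters of the published ray tracing.

```python
# 1-D point-source shadow model of a single-row NIM scan.
import numpy as np

def scan_row(led_x, samples, d, D, sensor_c, sensor_w):
    """Transmitted fraction seen by the sensor for each (point-like) LED.

    samples: list of (left, right) opaque intervals on the sample plane (um).
    d: LED-sample distance (um), D: sample-sensor distance (um).
    """
    s_lo, s_hi = sensor_c - sensor_w / 2, sensor_c + sensor_w / 2
    m = (d + D) / d                                  # geometric magnification
    out = np.zeros(len(led_x))
    for i, xl in enumerate(led_x):
        blocked = 0.0
        for a, b in samples:                         # project the shadow of each feature
            lo, hi = xl + (a - xl) * m, xl + (b - xl) * m
            blocked += max(0.0, min(hi, s_hi) - max(lo, s_lo))
        out[i] = 1.0 - min(blocked, sensor_w) / sensor_w
    return out

led_x = np.arange(8) * 10.0                          # one LED row, 10 um pitch
samples = [(8.0, 20.0), (48.0, 60.0)]                # two opaque 12 um features (assumed)
trace = scan_row(led_x, samples, d=10.0, D=600.0,
                 sensor_c=led_x.mean(), sensor_w=10.0)
print(np.round(trace, 2))   # dips appear at the LEDs shadowed by the opaque features
```

With the sample close to the LEDs and the sensor far away, the printed trace shows full transmission for LEDs away from the features and clear dips for the shadowed ones, i.e. the direct map described above; swapping the distances (sample close to the sensor instead) makes the traces from different LEDs nearly indistinguishable in this toy model, mirroring the behaviour of Fig. 2(a).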

Figure 3 shows the evolution of contrast as the system moves from unfavourable operating distances (sample much closer to the sensor than to the LED array) to NIM conditions. It has been obtained from ray tracing simulations, measuring the intensities on the sensor while scanning with the LEDs of a single row of the array and varying the distance $D$ between sample and sensor. The remaining parameters are those of the prototype setup, that is $d = 300$ µm, $L = 5$ µm and $x_{\textrm{det}} = 10$ µm; the samples used are 11 µm squares. The contrast improves as the sensor moves away from the sample, shifting into proper NIM operating conditions. As the distance D grows, the contrast increases as expected, in agreement with the design conditions given in the following sections.

Fig. 3. Contrast detected from scanning with a single row of the LED array as a function of the distance D between sample and sensor. Results obtained from ray tracing simulation, with the rest of the parameters fixed and chosen to reproduce the experimental setup.

3. Discussion of super-resolution capabilities

In order to demonstrate that this method is viable for super-resolution measurements and depends only on the development of miniaturized LED arrays, full-field electromagnetic simulations at the scale of hundreds of nanometers were performed. These simulations used the Finite-Difference Time-Domain (FDTD) method as implemented in the CST Studio software. A schematic of the simulated array and sample can be seen in Fig. 4. The Al bar periodicity ($P_b$ in Figs. 4 and 5) was varied between 100 nm and 300 nm, and the distance between bars and light spots (D in Figs. 4 and 5) from 50 nm to 400 nm. The pitch of the LED array was kept constant at 100 nm, and the width of the Al bars is $W=P_b/2$ in each simulation. As expected, the simulations yield no contrast when the Al grating pitch equals the LED pitch, due to aliasing. Figure 5 presents the intensity of the z component of the Poynting vector integrated over the surface above the Al bars, normalized to the corresponding intensity without bars present. The dielectric function for Al was taken from [38].

Fig. 4. Schema of the near field model for electromagnetic simulations. The rectangle limited by red lines indicates the part of the array presented in the simulations. In all cases the LED array period is $P_a=100$ nm, and $P_b$ is the period of the Al bars.

Fig. 5. Far-field light intensity outgoing from an Al bar grating for a dipole array spaced 100 nm normalized to the intensity of a single dipole source. Different distances between bars and dipoles (D) are simulated using FDTD, for a periodicity $P_b$ of the Al bars of 210 nm. The dipole source is polarized in x direction (perpendicular to the bar axis) in the left image, and in y direction (along bar axis) on the right side. The gray areas indicate the positions of the Al bars.

Simulations also show that the distance between LEDs and sample is important for the contrast of the received signal, as expected and as demonstrated in the ray tracing simulations. In Fig. 5, the simulated far-field intensity from each dipole (each nanoLED is modelled as an ensemble of 8 nm dipoles) is drawn over the representation of the aluminum bars. The bars have a periodicity of 210 nm and are properly sampled, as shown by the different responses for polarization perpendicular or parallel to the bar axis. The contrast of the received signal depends on the distance between the light sources and the sample, as shown for the far-field case above. These simulations show that NIM would be able to resolve objects below the Abbe limit, achieving super-resolution. Moreover, this study demonstrates that NIM setups can be more compact than traditional lensless microscopes, since the distance from the sample to the LED array is necessarily minimized, and the distance between sample and camera can be kept small as long as it is several times larger than the LED-sample distance.

To better approximate realistic sizes for future LEDs, simulations were also carried out with larger ensembles of 50 nm dipoles. The simulations in Fig. 6 show that the size of the dipole ensembles (and thus of the simulated nanoLEDs) does not critically influence the contrast of the NIM setup, with differences only appearing at distances below 100 nm. This supports the idea that the performance of NIM microscopes is mainly determined by the distances and the LED pitch, rather than by the actual LED size. The contrast is defined by Eq. (1).

$$C = 100 \cdot \frac{I_{max}-I_{min}}{I_{max} + I_{min}}$$
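
As a quick illustration of Eq. (1), the helper below evaluates the contrast of a scan trace; the numbers in the usage example are made up solely to show the arithmetic.

```python
# Michelson-type contrast of Eq. (1): C = 100 * (Imax - Imin) / (Imax + Imin).
def contrast(intensities):
    i_max, i_min = max(intensities), min(intensities)
    return 100.0 * (i_max - i_min) / (i_max + i_min)

print(contrast([1.0, 0.9, 0.6, 0.8]))   # (1.0 - 0.6) / 1.6 -> 25.0
```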

Fig. 6. Contrast obtained by the different dipole models, ensemble of 8 and 50 nm dipoles forming square patterns to simulate the nanoLEDs. Results obtained from the FDTD simulations.

To complete this simulation study, Fig. 7 shows the Fast Fourier Transform (FFT) obtained when bars of different periodicities are sampled with LEDs spaced 100 nm apart. The FFT should show a peak at the bar periodicity, but the bars are only properly sampled for $P_b$ larger than 200 nm, as expected from sampling theory. For smaller periods, aliasing appears: the periodicity $P_b = 150$ nm, for instance, generates a response that is almost the same as that of $P_b = 300$ nm. For $P_b = 100$ nm, there is no way to properly reconstruct the bar distribution.
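
The aliasing behaviour summarized in Fig. 7 follows directly from sampling theory and can be illustrated independently of the FDTD model with a purely numerical sketch: an idealized binary grating of period $P_b$ is point-sampled at the 100 nm LED pitch and the dominant spatial frequency of the sampled signal is reported. The grating model (50% duty cycle, delta-like sampling) and the 256-position scan length are assumptions made here for illustration only.

```python
# Sampling-theory sketch of the aliasing discussed above: "scan" an ideal
# binary transmission grating of period P_b with point-like LEDs spaced 100 nm
# and look at the dominant spatial frequency of the sampled signal.
import numpy as np

pitch = 100.0                           # LED pitch (nm)
x = np.arange(256) * pitch              # LED positions along the scan (nm)

def scan_signal(P_b):
    """Idealized transmission (1 = open, 0 = Al bar) sampled at the LED positions."""
    return ((x % P_b) >= P_b / 2).astype(float)      # bars of width P_b / 2

for P_b in (300.0, 210.0, 150.0, 100.0):
    s = scan_signal(P_b) - scan_signal(P_b).mean()   # remove the DC component
    spec = np.abs(np.fft.rfft(s))
    freqs = np.fft.rfftfreq(len(s), d=pitch)         # spatial frequencies (1/nm)
    f_peak = freqs[np.argmax(spec)]
    if f_peak == 0:
        print(f"P_b = {P_b:5.0f} nm  ->  sampled signal is constant (aliased to DC)")
    else:
        print(f"P_b = {P_b:5.0f} nm  ->  apparent period = {1 / f_peak:6.1f} nm")
```

Run as-is, the 300 nm and 210 nm gratings are recovered at approximately their true periods, the 150 nm grating appears at roughly 300 nm, and the 100 nm grating collapses to a constant signal, matching the behaviour described above.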

Fig. 7. FFT of the far-field intensity signals obtained for different Al bar periods ($P_b$) on the EM simulations for an extended LED array. Results obtained from the FDTD simulations.

4. NIM design principles

The requirements on the design of a NIM microscope are discussed with the help of Fig. 8, where d is the distance between LEDs and sample, and D is the distance between sample and sensing area. Figure 8(a) shows the field of view (FOV) of a NIM setup and how it depends on the LED-sensing area distance, as well as on the width of the sensing area itself and the width of the nanoLED array. When the distance D is much larger than the distance d, that is, when the microscope operates in ideal NIM conditions, the field of view is the width of the LED array, as long as the effect of the angle of incidence of the light can be compensated (for example, by adjusting the current through the LEDs). A way to remove any dependence of the FOV on the angle could be to make the optical detector as wide as the LED array, but this would reduce the contrast of the microscope, as illustrated in Figs. 8(b) and 8(c). Figure 8(b) shows the worst case, in which the shadows from different LEDs overlap. This would happen if the sensor, at a distance D from the sample, crossed the dashed red lines. To avoid this, the maximum width of the detector, $x_{\textrm{det}}$, is determined by Eq. (2), in which L is the size of the LEDs.

$$x_{\textrm{det}} < 2 \frac{3D-2d}{2d} L$$

Fig. 8. General schematics of the NIM microscope. In blue, LEDs from the array. In red, sensing area or sensor. a) shows the field of view for an arbitrarily large LED array and sensor (grey shaded region). b) shows the shadow cones projected by a single sample (in black) from different LEDs and how they may overlap, which would mean the same area of the sample would be sampled by different LEDs, reducing the contrast of the microscope. The dashed red lines show the limits of that overlap zone. c) shows the shadow cones projected by different samples while illuminated by the same LED, and how they may be integrated together by a sensor that is too large.

This relationship confirms that there is a minimum distance between sample and detector. Moreover, to relax the requirements on the sensor, the distance D should be kept as much larger than d as necessary. For the prototype built and presented below (D = 600 µm, d = 300 µm), the maximum sensor size that avoids the shadow overlapping problem is 20 µm in diameter.

At the same time, Fig. 8(c) illustrates another situation to be considered, in which the sensor integrates the shadows projected by different regions of the sample illuminated by the same LED. Recalling that only features spaced more than twice the LED pitch can be resolved, the condition follows Eq. (3). This relationship is less restrictive than the previous one (for the prototype dimensions, $x_{\textrm{det}} < 40$ µm), so the microscope should conform to the condition presented in Eq. (2).

$$x_{\textrm{det}} < 2 \frac{3D+2d}{2d} L$$
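
For the prototype dimensions quoted above (D = 600 µm, d = 300 µm, L = 5 µm), the two detector-size conditions can be evaluated directly. The snippet below is a minimal sketch of that arithmetic, using only values given in the text.

```python
# Detector-width limits of Eqs. (2) and (3) for the prototype geometry.
D, d, L = 600.0, 300.0, 5.0                          # distances and LED size (um)

x_det_overlap = 2 * (3 * D - 2 * d) / (2 * d) * L    # Eq. (2): shadow-overlap limit
x_det_integr = 2 * (3 * D + 2 * d) / (2 * d) * L     # Eq. (3): shadow-integration limit

print(f"Eq. (2): x_det < {x_det_overlap:.0f} um")    # 20 um, the binding condition
print(f"Eq. (3): x_det < {x_det_integr:.0f} um")     # 40 um, less restrictive
```
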
While for small LED arrays, such as the one integrated in the prototype presented below, the dependence of the recovered light on the angle of incidence is corrected by a calibration process, this might be difficult for arbitrarily large LED arrays. In that case, it would be useful to add additional sensor pixels, or to read several sensing areas in parallel, each with its own FOV. The pixel spacing required for this is very low (in the current experimental setup, one pixel every 100 µm is enough) and easily met with any camera. Since trying to sample a large array with a single sensor would degrade the contrast, this strategy serves both purposes: it keeps the contrast as sharp as possible while recovering the maximum amount of light, since arbitrarily sized sensing areas placed directly above a group of nanoLEDs can be used for that purpose.

5. Microscope components and setup

Nowadays, Gallium nitride (GaN) LED technology is developing beyond solid-state lighting, leading to highly efficient micro- and nanodevices for applications such as point light sources for optical communications, imaging and sensing [39]. In particular, the nanoLEDs used in this work were obtained from a standard blue LED structure, based on InGaN/GaN quantum wells grown on a sapphire wafer [40]. Their design grants individual access to each pixel of the array by defining individual (top) contacts to the p side of each LED and an n-type (bottom) contact common to all pixels. This is a major factor limiting the number of LEDs in the array, because the driving electronics must access the LEDs through a metallic interconnection. The low number of LEDs sets the field of view of the microscope, which is only as large as the array itself. The entire chip of our prototype, including the LED array and contact pads, is $1 \times 1$ cm$^2$ in size. The LEDs are distributed in an $8 \times 8$ matrix of 5 µm LEDs (spaced also 5 µm), which makes them smaller than any commercially available GaN LED array (Fig. 9). The metal contacts are produced by deposition of Cr/Au, and light is emitted through the 300 µm thick sapphire substrate. The different illumination levels between LEDs were corrected by adjusting the driving current of each one to equalize the response measured in the optical detector, which also removes the dependence on the angle of incidence. The LED array is driven by a digitally controlled current source supplying from 27 µA up to 3 mA through $8 \times 8$ channel analog demultiplexers implemented with discrete components.
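
The per-LED equalization described above can be pictured as a simple closed-loop calibration. The routine below is only an illustrative sketch: set_led_current and read_sensor are hypothetical placeholders for the prototype's driver and detector interfaces (which are not specified in the paper), and it assumes the detector reading grows monotonically with the drive current.

```python
# Sketch of a per-LED flat-field calibration: adjust each LED's drive current
# (within the 27 uA - 3 mA range quoted in the text) until the detector reads
# a common target value with no sample in place.
import numpy as np

N = 8                                    # 8 x 8 array
I_MIN, I_MAX = 27e-6, 3e-3               # available drive current range (A)

def set_led_current(row, col, current):
    """Placeholder: drive a single LED of the array with the given current."""
    pass

def read_sensor():
    """Placeholder: integrated intensity seen by the detector."""
    return 0.0

def calibrate(target, steps=20):
    """Binary-search each LED's current so the detector reading matches 'target'."""
    currents = np.full((N, N), I_MIN)
    for r in range(N):
        for c in range(N):
            lo, hi = I_MIN, I_MAX
            for _ in range(steps):       # assumes a monotonic current-to-reading response
                mid = 0.5 * (lo + hi)
                set_led_current(r, c, mid)
                lo, hi = (mid, hi) if read_sensor() < target else (lo, mid)
            currents[r, c] = 0.5 * (lo + hi)
    return currents
```

Once every LED produces the same detector reading with no sample present, the stored current table also removes the residual dependence on the angle of incidence, as stated above.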

Fig. 9. The NIM light source: a GaN nanoLED array chip with 64 ($8 \times 8$) pixels sized 5 µm. (a) 3D sketch of the chip. (b) Image of the microLED array chip.

As mentioned before, the NIM approach effectively relaxes the requirements on the optical detector, which could be any conventional CCD or CMOS camera. For the prototype, a custom SPAD camera was implemented because of its adequate form factor (low profile when mounted on the PCB, no additional optical components), as well as to prepare for future fluorescence experiments with the NIM technique. Moreover, SPAD sensors are a useful benchmark in case future miniaturized LEDs present a much lower emission power, which other sensors might struggle to detect, and they can help, for example, in dealing with shot noise. For all these reasons, the camera consists of a circular SPAD sensor of 10 µm diameter integrated in a 0.35 µm CMOS process. The dark count rate for this pixel configuration is 200 Hz, while the photon detection probability (PDP) at 450 nm (the dominant wavelength emitted by the LEDs) is around $10\%$ [41]. In our demonstrator, creating an image frame in NIM mode requires scanning through the 64 different LEDs, and their switching speed limits the frame rate to 20 frames per second.
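
As a rough, illustrative cross-check of these figures (plain arithmetic, using only the numbers quoted above), scanning 64 LEDs at 20 frames per second leaves at most about 780 µs of dwell time per LED, over which the 200 Hz dark count rate contributes well under one count.

```python
# Back-of-the-envelope timing for the demonstrator, from the figures quoted above.
n_leds = 64
frame_rate = 20.0                 # frames per second
dark_rate = 200.0                 # SPAD dark count rate (Hz)

dwell = 1.0 / (frame_rate * n_leds)                      # upper bound on per-LED dwell (s)
print(f"per-LED dwell time  <= {dwell * 1e6:.0f} us")    # ~781 us
print(f"dark counts / dwell <= {dark_rate * dwell:.2f}") # ~0.16
```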

The LED chip is ball-bonded on the exposed contacts of a PCB containing its driving electronics and control connections. Opposite the LEDs, the imaging chip is mounted on its own PCB, with its wire bonds protected with spin-coated SU-8 [42]. Positioning stages are used for both the sample and the SPAD sensor, to allow studying the images obtained at different relative positions between sample, LEDs and sensor. They provide flexibility for testing different alignments and setups, but they are not an essential part of the microscope. Additionally, a sample holder is used to position and move the sample over the LED array, in direct contact with the surface of the sapphire layer, in order to obtain the maximum resolving power. A general schematic of the setup can be seen in Fig. 10, while Fig. 11 shows a close-up photograph of the actual construction.

Fig. 10. General scheme of the microscope. The LED array is ball-bonded over the driver PCB. The sample is placed over the LEDs as close as allowed by the sapphire layer. The light transmitted through the sample reaches the SPAD sensor, opposite the LEDs.

Fig. 11. Close-up photograph of the microscope setup. The sample holder presents an opening where the sample is placed, and the SPAD imager is directly above it.

6. Experimental results

In order to test the capabilities of the setup, 40 nm of Cr was deposited on a fused silica wafer and patterned by Electron-Beam Lithography (EBL) with features from 50 µm down to 50 nm in size. The dies obtained after wafer dicing were used as samples for the system. They were placed over the sapphire layer of the LED chip, with a holder ring around it to allow positioning, as shown in Figs. 10 and 11. The sample was at a distance of around 300 µm from both the LED array and the sensor when the images shown in Fig. 12 were acquired. Note that the colors are inverted between the microscope picture and the NIM image, because the picture is taken with a reflection microscope while the NIM setup operates in transmission. Each pixel of the images in the right part of Fig. 12 corresponds to the light intensity measured when the corresponding LED of the array is turned on, the sensor being the same in all cases, so each pixel represents the portion of the sample (or the absence of it) located right above the corresponding LED. Before taking the image of the sample, each LED is characterized and calibrated so that the sensor receives the same light intensity from each of them.

Fig. 12. EBL patterns, with the focused zone marked by the square (left), and shadow images obtained with the nano illumination microscopy method for each (right). (a) 20 µm side squares, separated 20 µm. (b) 6.4 µm side squares, separated 6.4 µm.

With the current LED array, the smallest resolved patterns are the 6.4 µm squares, as can be seen in Fig. 12(b). This can be considered a limit case, because the aliasing predicted by the simulations causes the central squares not to be properly resolved, being sampled as a single object. Features down to isolated 5 µm separator lines can also be observed, depending on the relative positioning of the parts, but it would not be possible to resolve two of them side by side. Still, this could be useful for detecting the presence or passage of particles with sizes down to the LED pitch, for example.

7. Conclusion

In this work, we have demonstrated a new approach to shadow imaging microscopy, Nano Illumination Microscopy (NIM), which bases its resolving power on miniaturized light sources instead of the sensor geometry, while avoiding the use of any moving parts or optics. Simulations of the imaging process at different scales have been presented. An experimental demonstrator validates the imaging principle and opens the way to improving resolution by further miniaturizing the light sources. A key issue for image formation is the distance between the light sources and the sample, which should be kept as small as possible. For future nanometric setups, it may prove difficult to guarantee the planarity of an entire sample over the LED array, although for biological samples such as cells, with their membrane resting just above the insulation layer, this would still be possible.

Thanks to the relaxed requirements on the distance between sample and optical detector, NIM setups are very compact, resulting in smaller microscopes than optical or conventional lensless shadow imaging setups. As an example, the total height of the experimental prototype is 1 mm. It is worth noting that, since building an image requires scanning through all the LEDs in the array and capturing enough photons from each, imaging with this method will be slower than with conventional lensless shadow imaging.

The NIM method provides a resolution of twice the periodicity of the light sources and a field of view of the same size as the LED array used. Therefore, the resolution will improve with the technological progress of GaN LED emitters, as sizes and pixel-to-pixel distances move into the nanoscale. A reduction of the distance between illumination source and sample will also allow higher spatial frequency components to be sampled correctly. The same technological improvements will increase the number of pixels in an array, directly enlarging the available field of view. Finally, the high brightness of inorganic GaN LEDs makes them a good technological choice for size reduction.

Since the resolving power depends on the light sources, NIM relaxes the requirements on the sensors. As shown in this paper, a single SPAD detector is enough to obtain images with the small LED array used. Nevertheless, for larger arrays, additional sensors positioned to cover the whole LED array have to be used, which opens the door to using commercial CMOS or CCD sensors. A future line of study is to combine NIM with other resolution enhancement methods, such as pixel super-resolution. Since each nanoLED in the array is individually controllable and addressable, the technique could also find direct application in structured illumination microscopy, as it is possible to create a diversity of illumination patterns with the array.

Funding

Horizon 2020 Framework Programme (737089).

Acknowledgments

This work is being carried out in the European project ChipScope, funded by the European Union’s Horizon 2020 research and innovation program under grant agreement no. 737089.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. W. S. Haddad, D. Cullen, J. C. Solem, J. W. Longworth, A. McPherson, K. Boyer, and C. K. Rhodes, “Fourier-transform holographic microscope,” Appl. Opt. (2009).

2. W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. 41(25), 5367–5375 (2002).

3. L. Repetto, E. Piano, and C. Pontiggia, “Lensless digital holographic microscope with light-emitting diode illumination,” Opt. Lett. 29(10), 1132–1134 (2004).

4. G. Pedrini and H. J. Tiziani, “Short-coherence digital microscopy by use of a lensless holographic imaging system,” Appl. Opt. (2007).

5. A. K. Singh, G. Pedrini, M. Takeda, and W. Osten, “Scatter-plate microscope for lensless microscopy with diffraction limited resolution,” Sci. Rep. 7(1), 10687 (2017).

6. A. Ozcan and E. McLeod, “Lensless Imaging and Sensing,” Annu. Rev. Biomed. Eng. 18(1), 77–102 (2016).

7. A. C. Sobieranski, F. Inci, H. C. Tekin, M. Yuksekkaya, E. Comunello, D. Cobra, A. Von Wangenheim, and U. Demirci, “Portable lensless wide-field microscopy imaging platform based on digital inline holography and multi-frame pixel super-resolution,” Light: Sci. Appl. 4(10), e346 (2015).

8. C. W. Pirnstill and G. L. Coté, “Malaria Diagnosis Using a Mobile Phone Polarized Microscope,” Sci. Rep. 5(1), 13368 (2015).

9. J. F. Restrepo and J. Garcia-Sucerquia, “Automatic three-dimensional tracking of particles with high-numerical-aperture digital lensless holographic microscopy,” Opt. Lett. 37(4), 752–754 (2012).

10. Y. Wu and A. Ozcan, “Lensless digital holographic microscopy and its applications in biomedicine and environmental monitoring,” Methods 136, 4–16 (2018).

11. M. Born, E. Wolf, A. B. Bhatia, P. C. Clemmow, D. Gabor, A. R. Stokes, A. M. Taylor, P. A. Wayman, and W. L. Wilcock, Principles of Optics (Cambridge University, Cambridge, 1999).

12. L. Schermelleh, R. Heintzmann, and H. Leonhardt, “A guide to super-resolution fluorescence microscopy,” J. Cell Biol. 190(2), 165–175 (2010).

13. T. A. Klar and S. W. Hell, “Subdiffraction resolution in far-field fluorescence microscopy,” Opt. Lett. 24(14), 954–956 (1999).

14. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006).

15. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006).

16. A. G. Godin, B. Lounis, and L. Cognet, “Super-resolution microscopy approaches for live cell imaging,” Biophys. J. 107(8), 1777–1784 (2014).

17. Y. Chen, W. Liu, Z. Zhang, C. Zheng, Y. Huang, R. Cao, D. Zhu, L. Xu, M. Zhang, Y.-H. Zhang, J. Fan, L. Jin, Y. Xu, C. Kuang, and X. Liu, “Multi-color live-cell super-resolution volume imaging with multi-angle interference microscopy,” Nat. Commun. 9(1), 4818 (2018).

18. K. I. Mortensen, L. S. Churchman, J. A. Spudich, and H. Flyvbjerg, “Optimized localization analysis for single-molecule tracking and super-resolution microscopy,” Nat. Methods 7(5), 377–381 (2010).

19. A. Von Diezmann, Y. Shechtman, and W. E. Moerner, “Three-Dimensional Localization of Single Molecules for Super-Resolution Imaging and Single-Particle Tracking,” Chem. Rev. 117(11), 7244–7275 (2017).

20. D. Jin, P. Xi, B. Wang, L. Zhang, J. Enderlein, and A. M. van Oijen, “Nanoparticles for super-resolution microscopy and single-molecule tracking,” Nat. Methods 15(6), 415–423 (2018).

21. K. Bermudez-Hernandez, S. Keegan, D. R. Whelan, D. A. Reid, J. Zagelbaum, Y. Yin, S. Ma, E. Rothenberg, and D. Fenyö, “A Method for Quantifying Molecular Interactions Using Stochastic Modelling and Super-Resolution Microscopy,” Sci. Rep. 7(1), 14882 (2017).

22. F. A. Caetano, B. S. Dirk, J. H. Tam, P. C. Cavanagh, M. Goiko, S. S. Ferguson, S. H. Pasternak, J. D. Dikeakos, J. R. de Bruyn, and B. Heit, “MIiSR: Molecular Interactions in Super-Resolution Imaging Enables the Analysis of Protein Interactions, Dynamics and Formation of Multi-protein Structures,” PLoS Comput. Biol. 11(12), e1004634 (2015).

23. H. Ma, R. Fu, J. Xu, and Y. Liu, “A simple and cost-effective setup for super-resolution localization microscopy,” Sci. Rep. 7(1), 1542 (2017).

24. W. G. J. Jerome, Confocal Digital Image Capture (Springer International Publishing, Cham, 2018), pp. 155–186.

25. W. Bishara, U. Sikora, O. Mudanyali, T.-W. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, “Handheld, lensless microscope identifies malaria parasites,” SPIE Newsroom pp. 10–12 (2011).

26. D. W. Pohl, W. Denk, and M. Lanz, “Optical stethoscopy: Image recording with resolution λ/20,” Appl. Phys. Lett. 44(7), 651–653 (1984).

27. E. Betzig, M. Isaacson, and A. Lewis, “Collection mode near-field scanning optical microscopy,” Appl. Phys. Lett. 51(25), 2088–2090 (1987).

28. L. Novotny, D. W. Pohl, and P. Regli, “Near-field, far-field and imaging properties of the 2D aperture SNOM,” Ultramicroscopy 57(2-3), 180–188 (1995).

29. U. Dürig, D. W. Pohl, and F. Rohner, “Near-field optical-scanning microscopy,” J. Appl. Phys. 59(10), 3318–3327 (1986).

30. P. Bazylewski, S. Ezugwu, and G. Fanchini, “A review of three-dimensional scanning near-field optical microscopy (3D-SNOM) and its applications in nanoscale light management,” Appl. Sci. 7(10), 973 (2017).

31. H. Abramczyk, J. Surmacki, M. Kopeć, A. K. Olejnik, A. Kaufman-Szymczyk, and K. Fabianowska-Majewska, “Epigenetic changes in cancer by Raman imaging, fluorescence imaging, AFM and scanning near-field optical microscopy (SNOM). Acetylation in normal and human cancer breast cells MCF10A, MCF7 and MDA-MB-231,” Analyst 141(19), 5646–5658 (2016).

32. N. Rotenberg and L. Kuipers, “Mapping nanoscale light fields,” Nat. Photonics 8(12), 919–926 (2014).

33. D. Denkova, N. Verellen, A. V. Silhanek, P. Van Dorpe, and V. V. Moshchalkov, “Lateral magnetic near-field imaging of plasmonic nanoantennas with increasing complexity,” Small 10(10), 1959–1966 (2014).

34. F. Huth, M. Schnell, J. Wittborn, N. Ocelic, and R. Hillenbrand, “Infrared-spectroscopic nanoimaging with a thermal source,” Nat. Mater. 10(5), 352–356 (2011).

35. D. Hwang, A. Mughal, C. D. Pynn, S. Nakamura, and S. P. DenBaars, “Sustained high external quantum efficiency in ultrasmall blue III-nitride micro-LEDs,” Appl. Phys. Express 10(3), 032101 (2017).

36. K. Ogawa, R. Hachiya, T. Mizutani, S. Ishijima, and A. Kikuchi, “Fabrication of InGaN/GaN MQW nano-LEDs by hydrogen-environment anisotropic thermal etching,” Phys. Status Solidi A 214(3), 1600613 (2017).

37. W. Luo, Y. Zhang, A. Feizi, Z. Göröcs, and A. Ozcan, “Pixel super-resolution using wavelength scanning,” Light: Sci. Appl. 5(4), e16060 (2016).

38. A. D. Rakić, “Algorithm for the determination of intrinsic optical constants of metal films: application to aluminum,” Appl. Opt. 34(22), 4755–4767 (1995).

39. H. S. Wasisto, J. D. Prades, J. Gülink, and A. Waag, “Beyond solid-state lighting: Miniaturization, hybrid integration, and applications of GaN nano- and micro-LEDs,” Appl. Phys. Rev. 6(4), 041315 (2019).

40. J. Gülink, S. Bornemann, H. Spende, M. A. der Maur, A. D. Carlo, J. D. Prades, H. S. Wasisto, and A. Waag, “InGaN/GaN nanoLED Arrays as a Novel Illumination Source for Biomedical Imaging and Sensing Applications,” Proc. 2(13), 892 (2018).

41. F. Villa, D. Bronzi, Y. Zou, C. Scarcella, G. Boso, S. Tisa, A. Tosi, F. Zappa, D. Durini, S. Weyers, U. Paschen, and W. Brockherde, “CMOS SPADs with up to 500 µm diameter and 55% detection efficiency at 420 nm,” J. Mod. Opt. 61(2), 102–115 (2014).

42. J. Canals, N. Franch, O. Alonso, A. Vilà, and A. Diéguez, “A Point-of-Care Device for Molecular Diagnosis Based on CMOS SPAD Detectors with Integrated Microfluidics,” Sensors (Basel, Switzerland) (2019).
