
Atmospheric aerosol monitoring by an elastic Scheimpflug lidar system

Open Access

Abstract

This work demonstrates a new approach – Scheimpflug lidar – for atmospheric aerosol monitoring. The atmospheric backscattering echo of a high-power continuous-wave laser diode is received by a Newtonian telescope and recorded by a tilted imaging sensor satisfying the Scheimpflug condition. The principles as well as the lidar equation are discussed in detail. A Scheimpflug lidar system operating at around 808 nm is developed and employed for continuous atmospheric aerosol monitoring during daytime. Localized emissions, atmospheric variations, and changes of cloud height are observed in the recorded lidar signals. The extinction coefficient is retrieved according to the slope method for a homogeneous atmosphere. This work opens up new possibilities of using a compact and robust Scheimpflug lidar system for atmospheric aerosol remote sensing.

© 2015 Optical Society of America

1. Introduction

Atmospheric aerosols play a critical role in the global radiation budget, and their chemical/physical properties and spatial distribution are among the most important parameters for the climate on earth [1,2]. Further, during recent decades, aerosol particles produced by anthropogenic activities have significantly influenced the global climate and can adversely affect public health in fast-growing cities because of increased cancer risk following aerosol lung deposition [3]. It is thus very important to study and monitor aerosol particles in the atmosphere, both for climatology and for the determination of pollution sources and the management of urban environments. Point-monitoring suction devices based on filters and nephelometers provide detailed information on the captured particles [4]. However, such measurements are known to be highly dependent on turbulence and are not necessarily representative of the surrounding environment. Further, the atmosphere is known to be structured vertically and to mix predominantly horizontally, so point measurements at the ground can deviate considerably from the remaining air column.

One prevailing method for atmospheric aerosol remote sensing is to employ the light detection and ranging (lidar) technique, which conventionally captures the backscattered echoes of a nanosecond-pulsed light source from atmospheric molecules and particles [5–9]. From the backscattering echoes, the spatial distribution as well as the temporal evolution of atmospheric aerosols can be retrieved. The aerosol type, size, and concentration can also be analyzed by employing, e.g., sophisticated Raman lidar and/or multi-wavelength lidar techniques [10–13]. However, the high cost and complexity of conventional pulsed lidar systems largely limit their usage to the research community.

Apart from the monostatic pulsed lidar technique, another approach developed for range-resolved atmospheric remote sensing is the so-called bistatic camera lidar (CLidar) technique [14,15], where the laser beam transmitted into the atmosphere is imaged by a CCD camera with a baseline separation on the order of 100 meters from the laser transmitter. The angular phase function of atmospheric scattering has to be taken into account because of the bistatic geometry, and a wide-angle objective, typically with 100° acceptance, must be used. Such objectives are incompatible with a large aperture, so the light collection efficiency is low and only nighttime operation has been reported so far, even though high-power Nd:YAG lasers are frequently employed. A large field of view is also incompatible with narrow interference band-pass filters for background suppression.

In this article, we demonstrate a new lidar technique – Scheimpflug lidar – for atmospheric aerosol monitoring, which is based on the Scheimpflug principle describing the relationship between the image and object planes when they are not parallel in an optical imaging system. The Scheimpflug principle has been utilized for short-range laser profiling for several decades [16–18]. However, it was not until quite recently that the Scheimpflug principle was, for the first time, employed for atmospheric fauna sounding [19] and atmospheric oxygen molecule detection [20] over kilometer ranges, benefiting from the high-power continuous-wave laser diodes that have become available in recent years. Because the baseline separation between the transmitter and the receiving telescope is short and comparable to the expander-receiver distances in conventional off-axis pulsed lidar systems, the Scheimpflug lidar system can be considered monostatic. The range-resolved backscattering echo of a high-power continuous-wave laser is recorded by employing either a line-scan or an area camera with the optical layout satisfying the Scheimpflug principle as well as the hinge rule [21,22]. The entire illuminated air volume can thus be in focus simultaneously and infinite focal depth is achieved with a large aperture. Thus, the light receiving efficiency of the Scheimpflug lidar system is substantially larger than that of the CLidar technique.

In the following sections, we will first discuss the principle as well as the lidar equation of the Scheimpflug lidar technique, and then describe the system schematic of a typical Scheimpflug lidar system employing a high power 808 nm laser diode. To demonstrate the potential of using Scheimpflug lidar for atmospheric remote sensing, aerosol monitoring measurements are presented and investigated in detail.

2. Principles and methods

2.1 Scheimpflug lidar principles

According to the Scheimpflug principle [21,22], in the situation where the object plane is nonparallel to the lens plane, sharp focus can be achieved by tilting the image sensor so that the image plane intersects both with the object- and lens-planes. The plane of sharp focus (PoF) is further constrained by the hinge rule, which states that the front focal plane, the plane through the center of the lens parallel to the image plane, and the PoF should also intersect with each other. As can be seen in Fig. 1, the PoF is collectively determined by the Scheimpflug and hinge intersects. Despite the confusing history that the Scheimpflug principle was actually stated by Jules Carpentier while the hinge rule was presented by Theodor Scheimpflug, the principle describing the relationship between the object, image and lens planes when they are nonparallel is well-known as the Scheimpflug principle.


Fig. 1 System schematic of SLidar. The PoF is collectively determined by the Scheimpflug intersect and the hinge intersect. f – the focal length of the receiving telescope, L – the distance of the lens from the object plane, L_IL – the distance between the lens and image planes, p_I – the pixel position on the image plane, z – the distance of the object along the object plane, Φ – the swing angle of the lens, Θ – the tilt angle of the image plane to the lens plane, γ – the sampling angle for each pixel, θ⊥ and θ∥ – the divergence angles of the laser diode in the y-axis/fast-axis and x-axis/slow-axis directions. Pixel number 1 corresponds to the closest measurement distance (z_0). The laser diode is predominantly TE polarized, i.e., the polarization is along the slow axis.


When implementing the Scheimpflug principle for atmospheric remote sensing, the backscattering echoes of the laser beam transmitted into the atmosphere can be in focus simultaneously on a tilted image plane which intersects both the object plane (which is also the plane of the laser beam) and the lens plane. In other words, infinite focal depth is achieved while employing a large optical aperture. This is significantly important for atmospheric lidar, since the backscattering signals from molecules and particles are rather weak. We hereafter refer to the method utilizing the Scheimpflug principle for atmospheric remote sensing as Scheimpflug lidar (SLidar). In the SLidar technique, the distance is resolved angularly rather than by time-of-flight (TOF), as is the case in conventional pulsed lidar systems. The main advantage of this angularly resolved characteristic is that continuous-wave lasers can be employed for range-resolved remote sensing instead of ns-scale pulsed light sources, whose high peak power puts special demands on the damage threshold of all transmission optics. This significantly reduces the complexity and cost of building an atmospheric lidar system.

The relationship between the pixel and distance can be derived from the lens equation and trigonometric formulas [20]:

$$z = \frac{L\left[p_I(\sin\Theta - \cos\Theta\tan\Phi) + L_{IL}\right]}{p_I(\cos\Theta + \sin\Theta\tan\Phi) + L_{IL}\tan\Phi}, \tag{1}$$
$$\Phi = \arctan\frac{L}{z_{ref}} - \arctan\frac{p_{I,ref}\cos\Theta\,(z_{ref}-f)}{z_{ref}\,f}, \tag{2}$$
$$L_{IL} = \frac{z_{ref}\,f}{z_{ref}-f} - p_{I,ref}\sin\Theta. \tag{3}$$
Here arctan(L/z) gives the sampling angle (γ) for each measurement distance (or each pixel). The swing angle Φ and the distance between the image and lens planes, L_IL, can be calibrated by detecting the pixel position (p_I,ref) of the backscattering signal from a remote hard target with known distance (z_ref). Further, L_IL and L should satisfy the Scheimpflug condition, i.e., tanΘ = L_IL/L, to achieve sharp focus for the entire probe volume. The calibration error of the swing angle, which is mainly due to the measurement uncertainties of p_I,ref and z_ref, can result in uncertainty of the measurement distance, as indicated by Eq. (1). Figure 2 gives the distance uncertainty for specific system configurations, which increases linearly with distance. To improve the calibration accuracy of the distance, in particular for long range, the measurement uncertainties of p_I,ref and z_ref should be further reduced.
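To make the calibration procedure concrete, the following minimal Python sketch evaluates Eqs. (1)-(3). The geometry values (baseline, focal length, reference pixel position) are illustrative assumptions rather than the exact instrument parameters, and p_I is assumed to be the signed metric position on the image plane.

```python
import numpy as np

def pixel_to_distance(p_i, L, L_IL, theta, phi):
    """Eq. (1): map an image-plane position p_i (m) to a distance z along the beam."""
    num = L * (p_i * (np.sin(theta) - np.cos(theta) * np.tan(phi)) + L_IL)
    den = p_i * (np.cos(theta) + np.sin(theta) * np.tan(phi)) + L_IL * np.tan(phi)
    return num / den

def calibrate(p_ref, z_ref, L, f, theta):
    """Eqs. (2)-(3): swing angle Phi and lens-image distance L_IL from one hard target."""
    L_IL = z_ref * f / (z_ref - f) - p_ref * np.sin(theta)
    phi = np.arctan(L / z_ref) - np.arctan(p_ref * np.cos(theta) * (z_ref - f) / (z_ref * f))
    return phi, L_IL

# Illustrative geometry (assumed values, not the exact instrument calibration)
L, f, theta = 0.81, 0.8, np.deg2rad(45.0)   # baseline, focal length [m], image-plane tilt
pixel_pitch = 5.5e-6                        # 5.5 um pixel pitch
p_ref = 800 * pixel_pitch                   # reference pixel position on the image plane [m]
z_ref = 1000.0                              # hard target at 1 km

phi, L_IL = calibrate(p_ref, z_ref, L, f, theta)
print("swing angle [deg]:", np.rad2deg(phi))

# Consistency check: Eq. (1) with the calibrated Phi and L_IL maps p_ref back to ~z_ref
print("reference pixel mapped back [m]:", pixel_to_distance(p_ref, L, L_IL, theta, phi))

# Sensitivity of the calibration to the reference measurement (+-10 m, +-2 pixels, cf. Fig. 2)
phi_pert, _ = calibrate(p_ref + 2 * pixel_pitch, z_ref + 10.0, L, f, theta)
print("swing angle shift [deg]:", np.rad2deg(phi_pert - phi))
```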


Fig. 2 Pixel-distance relationship with different swing angles Φ and L_IL, which can be calibrated by a reference target with known distance. The center of the fictive image sensor (6000 pixels, 5.5 μm pixel pitch, sensor length 33 mm) is placed coinciding with the optical axis of the lens; f = 0.8 m, Θ = 45°. The theoretical range resolution under the circumstance of an infinitely narrow laser beam is astonishingly high at close distance, e.g., 0.02 m @ 50 m, while it deteriorates with increasing measurement distance, e.g., 38 m @ 2.5 km. Considering that the distance of the calibration target is 1000 m (±10 m uncertainty) and the corresponding pixel position is 3826 (±2 pixel uncertainty), the calibrated swing angle is then 0.276 ± 0.001°. The distance uncertainty due to the measurement errors of z_ref and p_I,ref increases linearly with distance, e.g., ±0.1% at around 100 m and ±1% at around 1 km under the given circumstances.


For a SLidar system with a certain focal length (f) and tilt angle (Θ), the hinge intersect is determined, and the PoF has to pass through the hinge intersect. Nevertheless, the swing angle Φ can still be varied, and it should be optimized to achieve a maximum measurement range span. As the image plane moves back and forth along the optical axis of the lens, the PoF pivots on the hinge intersect and the swing angle Φ changes accordingly. As a result, the position of the backscattering image of the laser beam shifts along the image plane, as shown in Fig. 2. As can also be seen, the measurement range approaches infinity tangentially because of the triangulation nature of the Scheimpflug principle. To increase the measurement range span, the swing angle should be maximized as long as the backscattering light from infinity can still be imaged by the sensor, so that the Scheimpflug lidar system can measure closer distances. On the other hand, to capture backscattering echoes from closer distances, a longer image sensor can be employed.

2.2 Scheimpflug lidar equation

The backscattering echo for SLidar also follows the classic lidar equation used in a conventional monostatic pulsed lidar system, which is given by [23]

$$P_S(\lambda, z) = P_0(\lambda)\, C\, \frac{O_x(z)\, dz}{z^2}\, \beta(\lambda, z)\exp\left[-2\int_0^z \alpha(\lambda, z')\,dz'\right]. \tag{4}$$
Here λ is the laser center wavelength, C is the system constant, P_0(λ) is the output power of the laser, z is the measurement distance, β(λ,z) and α(λ,z) are the backscattering and extinction coefficients at distance z, respectively, dz is the sampling bin at distance z, which indicates the theoretical range resolution of a SLidar system, and O_x(z) is the overlap function between the field of view (FOV) of the receiving telescope and the laser beam along the x-axis (see Fig. 1). There are mainly two parameters which could potentially lead to a non-constant system factor C over the image pixels, i.e., the photo response non-uniformity (PRNU) of the image sensor and the unequal transmittance of commonly used interference filters for different incident angles/pixels. However, the PRNU is generally very small for present-day commercial CMOS/CCD sensors, e.g., about 1% of the detected signal for the CMOS sensor employed in the present work. Further, since the image sensor chip size is much smaller than the focal length of the receiving telescope, the transmittance difference of the interference filter for individual sensor pixels is negligible. Overall, the system factor C can be considered constant for all pixels. It should be pointed out that the angular dependence of the backscattering coefficient is negligible since the backscattering angles are quite small in general, as is the case for the conventional off-axis pulsed lidar system.

According to Eq. (1), together with the Scheimpflug condition tanΘ = L_IL/L, dz can be deduced:

$$dz = \frac{z^2 \sin\Theta\,(1 + \tan^2\Phi)}{\left[p_I(\sin\Theta - \cos\Theta\tan\Phi) + L_{IL}\right]^2}\, dp_I. \tag{5}$$
Here dp_I is the pixel interval, which is a constant. As can be seen, dz is proportional to z². Substituting Eq. (5) into Eq. (4), the 1/z² factor is cancelled in the lidar equation; consequently, the signal does not decrease by a factor of 1/z² in SLidar. This significantly improves the exploitation of the dynamic range of a SLidar system, whereas considerable effort has to be devoted to increasing the dynamic range in conventional pulsed lidar systems. The drawback of SLidar is that the range resolution deteriorates with increasing distance. In other words, range resolution is preserved in conventional TOF/pulsed lidar whereas intensity is preserved in SLidar. To further increase the range resolution in SLidar, one can increase the separation between the lens and object planes (L) or L_IL (related to the focal length f). When employing a large-area 2D camera, the whole laser beam in the FOV of the telescope can be imaged without any truncation. Under this circumstance, the overlap function O_x(z) is equal to 1 for the whole detection range. The SLidar equation can then be simplified as
$$P_S(\lambda, z) = K\,\beta(\lambda, z)\exp\left[-2\int_0^z \alpha(\lambda, z')\,dz'\right]. \tag{6}$$
Here K is a new system constant. It should be noted that since L_IL >> p_I, the denominator term in dz (see Eq. (5)) is approximately constant. Thus, the system constant K is independent of the pixel position p_I. By employing algorithms developed for pulsed lidar techniques, the extinction coefficient or backscattering coefficient can then be resolved from the above lidar equation.
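A short numerical sketch (with assumed, not measured, geometry) of Eqs. (1) and (5) illustrates why the 1/z² factor cancels: dz grows as z², so the geometric factor dz/z² is nearly the same for all pixels as long as L_IL >> p_I.

```python
import numpy as np

# Assumed geometry for illustration; L_IL is set by the Scheimpflug condition tan(Theta) = L_IL / L
L, theta, phi = 0.81, np.deg2rad(45.0), np.deg2rad(0.276)
L_IL = L * np.tan(theta)
dp = 5.5e-6                                    # pixel pitch [m]
p_i = np.arange(1, 2001) * dp                  # metric positions on one side of the image plane

A = np.sin(theta) - np.cos(theta) * np.tan(phi)
B = np.cos(theta) + np.sin(theta) * np.tan(phi)
z = L * (p_i * A + L_IL) / (p_i * B + L_IL * np.tan(phi))                     # Eq. (1)
dz = z**2 * np.sin(theta) * (1 + np.tan(phi)**2) / (p_i * A + L_IL)**2 * dp   # Eq. (5)

# The geometric factor dz / z^2 varies only at the percent level across the sensor,
# so the 1/z^2 in Eq. (4) is effectively cancelled and K in Eq. (6) is pixel independent.
ratio = dz / z**2
print("fractional spread of dz/z^2:", ratio.std() / ratio.mean())
```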

2.3 Equivalent sampling bin and effective range resolution

In the previous section, the lidar equation was deduced without considering the laser beam width in the atmosphere. In that case, the sampling bin dz is the range resolution. However, in most cases the laser beam produced by practical transformation optics cannot be considered infinitely narrow, and the beam width has to be taken into account in the SLidar technique. As can be seen in Fig. 3(a), the effective sampling range for each individual pixel does not only cover the range bin dz, but instead expands to a much wider range depending on the sampling angle γ and the laser beam width (denoted by w(z)). The spread of the sampling range is described by the range spread function RSF(n_p,z), which defines the sampling proportion for pixel n_p at distance z:

$$RSF(n_p, z) = \frac{O_y(n_p, z)}{w(z)}. \tag{7}$$
Here n_p is the pixel number and O_y(n_p,z) is the overlap between the footprint of pixel n_p and the laser beam along the y-axis at distance z. We hereby assume a uniform distribution of light intensity along the y-axis to facilitate the mathematics and estimation. The spatial integration of the range spread function gives the equivalent sampling bin dz_eq, i.e., the distance over which a fully sampled laser beam would generate the same sampling area as the partially sampled beam does over the range from z_1(n_p) to z_2(n_p+1) (striped area in Fig. 3(b)):


Fig. 3 (a) A concept sketch of the transverse pixel footprint and laser beam in the atmosphere; the striped area is the overlap between the pixel footprint and the laser beam. (b) Range spread functions (RSFs) for an infinitely narrow laser beam and a laser beam with finite width. z_1(n_p) and z_2(n_p) are the intersection distances of the pixel sampling footprint and the laser beam.


$$dz_{eq} = \int RSF(n_p, z')\,dz'. \tag{8}$$

For a parallel laser beam, it can be proven that dz_eq(n_p) = dz(n_p), and thus the lidar equation given by Eq. (6) holds. For either a divergent or a convergent laser beam, the equivalent sampling bin dz_eq(n_p) can deviate from dz(n_p), which distorts the lidar curve; the lidar equation should then in principle be corrected by a multiplication factor dz(n_p)/dz_eq(n_p). This is, however, difficult to perform in reality considering the accuracy of determining dz_eq(n_p). To mitigate the deviation between dz_eq(n_p) and dz(n_p), and thus the distortion of the lidar signals, the best approach is to achieve a nearly parallel laser beam over the measurement range.
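The deviation between dz_eq and dz can also be examined numerically. The sketch below uses a simplified two-dimensional model with assumed numbers (a 0.81 m baseline, one pixel subtending about 5 µrad, and illustrative beam-width functions): it integrates the range spread function of Eq. (8) for a single pixel and compares the result with the narrow-beam sampling bin. For a parallel beam the two agree, while a divergent beam introduces a small deviation.

```python
import numpy as np

L = 0.81          # m, assumed transmitter-receiver baseline (along y)
dgamma = 5e-6     # rad, assumed angle subtended by a single pixel
z_p = 1000.0      # m, distance where this pixel's central ray crosses the beam axis

def dz_eq(beam_width, n=200001):
    """Eq. (8): integrate RSF = O_y / w over distance for one pixel (simplified 2D model)."""
    z = np.linspace(0.2 * z_p, 3.0 * z_p, n)
    y_ray = L * (1.0 - z / z_p)              # transverse offset of the central viewing ray
    half_fp = 0.5 * z * dgamma               # half-width of the single-pixel footprint at z
    w = beam_width(z)
    # O_y: overlap of the pixel footprint [y_ray +- half_fp] with the beam [-w/2, +w/2]
    o_y = np.clip(np.minimum(y_ray + half_fp, 0.5 * w)
                  - np.maximum(y_ray - half_fp, -0.5 * w), 0.0, None)
    return np.trapz(o_y / w, z)

dz_narrow = z_p**2 * dgamma / L              # sampling bin for an infinitely narrow beam

print(dz_eq(lambda z: np.full_like(z, 0.2)), dz_narrow)    # parallel 0.2 m beam: ~equal
print(dz_eq(lambda z: 0.05 + 2e-4 * z), dz_narrow)         # divergent beam: small deviation
```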

As indicated in Fig. 3, the backscattering light at distance z is only partially sampled by each individual pixel. The backscattering echo originating from the sampling bin dz(n_p) smears out over many neighboring pixels, and thus the practical range resolution of the SLidar system deteriorates considerably compared with the theoretical range resolution dz. In the case of a finite laser beam width (as opposed to a theoretical pencil beam), the effective range resolution is here defined by the intersections between the pixel footprint and the laser beam, as shown in Fig. 3:

$$dz_{eff} = z_2(n_p) - z_1(n_p + 1). \tag{9}$$
Since dzeff is typically much larger than dz, it can thus be approximated by:
$$dz_{eff} \approx w[z(n_p)]/\gamma(n_p) \approx z(n_p)\,w[z(n_p)]/L. \tag{10}$$
Note that here the unit of γ(n_p) is radians. As can be seen, dz_eff increases with distance and is proportional to the laser beam width. To improve the effective range resolution, it is thus important to minimize the laser beam width, in particular at far distances. Further, as indicated by Eq. (10), the effective range resolution can also be improved by increasing the baseline separation.
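As a back-of-the-envelope illustration of Eq. (10) (the beam width value is assumed for illustration, not taken from the measurements), a beam width of about 0.19 m at z = 2 km together with the roughly 0.81 m baseline of the present system gives

$$dz_{eff} \approx \frac{z\, w(z)}{L} \approx \frac{2000\ \mathrm{m} \times 0.19\ \mathrm{m}}{0.81\ \mathrm{m}} \approx 470\ \mathrm{m},$$

which is of the same order as the effective range resolutions reported in Section 4.1.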

To minimize the deviation between dz_eq(n_p) and dz(n_p), and at the same time improve the effective range resolution, the SLidar system requires a laser beam with a narrow width as well as a small divergence. This is, however, rather difficult, since the beam parameter product (BPP) cannot be changed by any transformation optics, e.g., lenses or telescopes. The optical requirements discussed here constitute a main challenge for the SLidar technique in precisely retrieving the atmospheric backscattering echoes with high range resolution at far distances, e.g., 10 km. To achieve high performance in SLidar systems, efforts on imaging quality and beam width should match the careful considerations about pulse duration, bandwidth and sampling rate in conventional TOF/pulsed lidar systems. Besides, the beam width and divergence of the transmitted laser beam can be improved by increasing the f-number of the focusing lens. The disadvantage of employing a large f-number lens is that the light delivery efficiency deteriorates due to strong laser beam truncation, in particular for the fast axis, when passing through the lens. Correspondingly, the signal-to-noise ratio (SNR) also decreases. In summary, all these aspects have to be carefully considered when designing a SLidar system.

3. Instrumentation

3.1 SLidar system schematic

A typical SLidar system schematic, consisting mainly of a laser diode with beam expander, a Newtonian receiving telescope, and an imaging sensor, is given in Fig. 4. The whole setup is mounted on a motorized equatorial mount, allowing stable and precise aiming at remote targets. A detailed description of the SLidar system is given below.


Fig. 4 System schematic of a typical SLidar system.


A wide-stripe (multiple transverse mode) continuous-wave laser diode emitting at 808 nm is employed as the light source. The 808 nm laser diode, with a chip size of 1 μm × 230 μm (fast axis × slow axis), can give an output power of around 3.2 W with divergences of θ⊥ = 38° (fast axis) and θ∥ = 10° (slow axis). The beam profile along the fast axis of the laser diode is nearly Gaussian, while it consists of multiple transverse modes along the slow axis, giving a rectangular/top-hat beam shape. The fast and slow axes are placed coinciding with the y- and x-axes, respectively, as shown in Fig. 1. The laser beam is transmitted into the atmosphere through an achromatic doublet (F5, 750 mm focal length, ø150 mm). The backscattering echo is collected by a Newtonian telescope (F4, 800 mm focal length, ø200 mm) with approximately 810 mm separation from the laser beam. A 2D CMOS camera (CMOSIS CMV2000, 2088 × 1088 pixels, 5.5-μm pixel width) is employed to record the image of the laser beam. An 808 nm interference filter with 3 nm full width at half maximum (FWHM) and a long-pass colored glass filter (RG780, cut-off wavelength 780 nm) are utilized to suppress the atmospheric background radiation. A synchronization signal from the CMOS camera is fed to a Johnson counter as a clock signal in order to alternate the laser diode between bright and dark states. The atmospheric background radiation and the pixel-specific dark current can then be subtracted dynamically from the raw lidar signals. Based on the pixel position of the backscattering echo from a remote hard target with known distance, the swing angle as well as the pixel-distance relationship can be retrieved.

3.2 Data processing

3.2.1 Linear interpolation of lidar signals

In the SLidar technique, the atmospheric background signal can be substantially stronger than the lidar echoes, as has been observed in [20]. In most cases, the background level can be correctly subtracted from the raw lidar echoes using the neighboring background signal recorded when the laser is turned off. However, in cases of extremely strong solar radiation, swift variations of the atmosphere due to, e.g., strong wind and passing clouds can lead to variable background levels for subsequent measurements and may result in incorrect background subtraction. Under this circumstance, linear interpolation of the lidar signals should be employed. To generalize the discussion, Fig. 5 shows an example of three-wavelength SLidar recordings and the corresponding linear interpolation of the lidar signals. The lidar signals of all spectral bands are linearly interpolated at the time of each background measurement from the predecessor and successor lidar signals, and then the corresponding background signal is subtracted. The resulting lidar signal can then be used for further signal processing. We emphasize here that a significant difference between atmospheric lidar signals retrieved with and without linear interpolation only occurs in some unusual situations, e.g., windy weather conditions with a signal-to-background ratio of less than 0.01.
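A minimal sketch of this interpolation step for a single-band system is given below; the data layout and variable names are assumptions for illustration, not the actual acquisition software.

```python
import numpy as np

def interpolate_and_subtract(frames, times, kinds):
    """Background subtraction with linear interpolation, as sketched in Fig. 5.

    frames : (N, M) array of raw line profiles (laser-on or background frames)
    times  : (N,) array of acquisition times
    kinds  : length-N sequence of strings, 'signal' or 'bg'
    Returns background-corrected lidar profiles, with the laser-on profiles linearly
    interpolated in time to each background timestamp before subtraction.
    """
    sig = np.array([k == 'signal' for k in kinds])
    t_sig, t_bg = times[sig], times[~sig]
    s, b = frames[sig], frames[~sig]
    corrected = []
    for tb, bg in zip(t_bg, b):
        # predecessor/successor laser-on frames around this background frame
        i_next = np.searchsorted(t_sig, tb)
        if i_next == 0 or i_next == len(t_sig):
            continue                          # skip background frames without both neighbours
        t0, t1 = t_sig[i_next - 1], t_sig[i_next]
        w = (tb - t0) / (t1 - t0)
        s_interp = (1 - w) * s[i_next - 1] + w * s[i_next]   # linear interpolation in time
        corrected.append(s_interp - bg)       # subtract the matching background frame
    return np.array(corrected)
```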


Fig. 5 Schematic of linear interpolation of lidar signals for a three-wavelength time-multiplexing SLidar system, Bg – background.


3.2.2 Statistical mode and mean values

In atmospheric lidar measurements, to increase the SNR and also reduce the data redundancy, the recorded lidar curves are averaged over a time interval of tens of seconds to minutes. In the conventional pulsed lidar techniques, two different acquisition protocols are frequently employed. For short distances where analog detection is possible, the signal averaging is performed by calculating the arithmetic mean value of the lidar curves. For far distances where the single photon counting (SPC) technique has to be used, the lidar curve is recorded by taking the SPC statistics over a certain measurement time. If the temporal variation of the backscattering signal is Gaussian distributed, which is the case in many situations [24], the arithmetic mean value is equivalent to that retrieved from the SPC technique. Otherwise, the arithmetic mean value cannot represent the static air return precisely. For instance, the extremely strong echo from a flying bird/insect can bias the arithmetic mean value when the lidar system is working in the analog mode. However, such strong and rare outliers are omitted in the SPC regime, since it only counts the occurrence frequency and not the absolute intensity. Thus, the influence from strong echoes of atmospheric fauna is negligible in the SPC regime.

In general, the probability of detecting abnormally strong echoes originating from, e.g., insects or birds is fairly low in terms of time coverage. For a pulsed lidar system with a 20 Hz sampling rate and 10 ns pulse width, the effective sampling duty cycle is only about one part in five million. Thus, the arithmetic mean value adopted in conventional pulsed lidar systems works properly in most cases, even though contaminated lidar signals due to atmospheric fauna sometimes have to be disregarded. However, the situation is completely different in the continuous-wave SLidar technique. The effective sampling time is 50% for a single-band SLidar system. This implies that the probability of detecting flying atmospheric fauna is roughly a million times larger than in the pulsed lidar system. Although aerial fauna observations are still rare and sparse events, the resulting strong backscattering echoes can contribute significantly to the arithmetic mean value of the recorded lidar curves. In other words, flying insects/birds are taken into account in the lidar signal as “large aerosols” if arithmetic mean averaging is performed. Thus, in the SLidar technique, the proper approach for signal averaging is to employ the mode value of the recorded lidar curves, whereby all the abnormal echoes are eliminated, similar to the case in the SPC technique. Unfortunately, the numerical mode (the most frequent integer value) is unstable. A more stable and less time-consuming approximation of the mode than an analytic mode fit is the median value, which separates the higher half of the intensities from the lower half. In the present work, the final lidar signals are retrieved from the statistical median values of single-shot lidar curves recorded in a certain time window.
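A minimal sketch of this median-based averaging is shown below, with synthetic numbers (not measured data) illustrating how a single strong "fauna" echo biases the arithmetic mean but not the median.

```python
import numpy as np

def average_lidar_curves(curves, use_median=True):
    """Temporal averaging of background-corrected lidar curves.

    curves : (N_shots, N_pixels) array. The median (used in this work) suppresses rare
    strong echoes from insects/birds, whereas the arithmetic mean is biased by them.
    """
    return np.median(curves, axis=0) if use_median else np.mean(curves, axis=0)

# Tiny illustration with synthetic numbers: one shot out of 600 contains a strong
# transient echo on a few pixels, on top of a ~4-count noise level (cf. Fig. 8).
rng = np.random.default_rng(0)
curves = 100 + 4 * rng.standard_normal((600, 2048))
curves[123, 900:905] += 5000                               # a single bright 'fauna' event
print(np.mean(curves[:, 902]), np.median(curves[:, 902]))  # mean is pulled up, median is not
```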

4. Measurements, results and discussion

The present SLidar system has been employed for atmospheric monitoring in southern Sweden during April 2015 to demonstrate the system performance. The laser beam was transmitted into the atmosphere with an elevation angle of 4.2°. The measurement transect covers both urban and rural areas, as shown in Fig. 6. In the following, we first examine the profile of the laser beam in the atmosphere, and then investigate the equivalent sampling bin and effective range resolution of the present SLidar system. The experimental results of atmospheric aerosol monitoring are discussed in Section 4.2.


Fig. 6 Experimental site (Lund, southern Sweden), the elevation angle is about 4.2°.


4.1 Images of the backscattering echo

In order to investigate the system performance of the present SLidar system, the image of the laser beam transmitted into the atmosphere is recorded to study the equivalent sampling bin and effective range resolution. The laser beam is first aligned on a hard target located 1000 m away from the SLidar system for calibration and focusing adjustment purposes. The swing angle retrieved from the calibration measurement is 0.276°. The laser beam is then steered towards the sky to record the backscattering image with 200 ms exposure time. Figure 7(a) shows the image of the laser beam along the slow axis recorded by the 2D camera, when the slow axis of the laser diode is oriented along the x-axis. The black solid curves indicate the positions where the light intensity decays to 50% of the maximum, from which the FWHM of the laser beam image can be retrieved. The laser beam width in the atmosphere is then deduced from the geometrical lens equation, as shown in Fig. 7(c). Similarly, by orienting the fast axis of the laser diode along the x-axis, the corresponding laser beam image and laser beam width in the atmosphere are obtained, as shown in Figs. 7(b) and 7(c). As can be seen, the beam width along the fast axis is significantly smaller than that along the slow axis because of the very small facet size along the fast axis (1 µm). In practice, since the facet size is even smaller than the Airy radius, i.e., 5 µm, the laser beam size in the atmosphere along the fast axis (y-axis) is determined by the diffraction limit and optical aberrations. To achieve a high effective range resolution (dz_eff) and mitigate the deviations between the equivalent sampling bin (dz_eq) and the theoretical sampling bin (dz), the fast axis of the laser diode must be aligned along the y-axis direction.
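One plausible reading of the geometrical-lens-equation step (assuming simple thin-lens magnification and neglecting the projection of the tilted image plane) is that the beam width in the atmosphere is estimated from the FWHM measured on the sensor as

$$w(z) \approx w_{img}(z)\,\frac{z - f}{f} \approx w_{img}(z)\,\frac{z}{f} \quad (z \gg f),$$

where w_img is the FWHM of the laser beam image and f = 0.8 m is the focal length of the receiving telescope.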


Fig. 7 (a) and (b) Laser beam images at the slow axis (x-axis) and fast axis (y-axis), (c) Laser beam widths at the slow and fast axis, (d) Effective range resolution when orienting the fast axis of the laser diode along y-axis direction. It should be emphasized here that the beam shape for the slow axis is rectangular while it is near Gaussian in the fast axis. The divergences of the transmitted laser beam are 0.1 mrad and 0.3 mrad for the fast axis and slow axis, respectively.


Based on the FWHM along the fast axis, the deviation of the equivalent sampling bin dz_eq from the theoretical sampling bin dz can be estimated. It is found that the deviation between dz_eq and dz generally increases with distance for a divergent beam. However, the deviation is rather small for ranges below 2 km, i.e., around 1%, while it rapidly increases up to 8% at around 5 km. A small deviation between dz_eq and dz should be considered an important criterion for achieving accurate results when quantitatively analyzing the backscattering and extinction coefficients.

Figure 7(d) shows the effective range resolution according to Eq. (10). As can be seen, the value of dz_eff increases substantially due to the divergent laser beam, e.g., 475 m @ 2 km and 2.7 km @ 5 km. The range resolution below 1 km varies from 7 to 130 m, which is comparable with that of a conventional pulsed lidar system. However, it should be emphasized that there is no fundamental obstacle to detecting the backscattering signal from even farther distances, as will be seen later. The main challenge is to precisely determine the distance of the backscattering echo from far range.

4.2 Single-band aerosol sounding

The aerosol measurements were performed in a cloud-free atmosphere from early morning to around 12:00 noon on April 9th, 2015. The atmospheric backscattering image of the laser beam was recorded with 15-ms exposure time, followed by a subsequent background recording with the same exposure time. Both the raw lidar and atmospheric background signals are retrieved by summing the pixel intensities along the slow axis of the respective images. The final lidar curve is obtained from the temporal median value of 600 raw lidar signals after linear interpolation and background subtraction, as discussed in Section 3.2 (18 s total measurement time). A typical backscattering signal recorded by the CMOS camera is given in Fig. 8. As can be seen, the backscattering signal attenuates very sharply from pixel 1800 to pixel 2000, which is mainly due to the trigonometric relationship between distance and pixel. The noise level of the lidar signal is about 4 counts (standard deviation).


Fig. 8 Backscattering intensity recorded by the CMOS camera, 18-s measurement time.


The range-time map of the backscattering signals recorded for approximately 5 hours is shown in Fig. 9. As can be seen, localized emissions due to heavy traffic during the rush hour can be observed. The variation of the backscattering intensity in general agreed quite well with the variations of temperature and relative humidity. To explicitly investigate the atmospheric variation, the atmospheric backscattering signals at three typical times are also given in Fig. 10. In the early morning (local time 07:45:53), the saturated water vapor concentration resulted in very strong extinction and thus limited the detection distance to around 1.5 km with SNR = 3. Later, as the sun dissipated the mist and the temperature increased, the relative humidity decreased. Correspondingly, the atmospheric backscattering and extinction coefficients also decreased, and the detection distance increased up to around 7 km with SNR = 3. However, as discussed earlier, the effective range resolution deteriorates significantly, since it increases with both the detection range and the laser beam width, which itself increases with distance. Although the backscattering signal from 7 km can be detected, it cannot be well resolved because of the low effective range resolution, i.e., around 5.3 km @ 7 km. Further, the deviation between dz_eq and dz also increases considerably at around 7 km, i.e., to around 16%, which results in a corresponding overestimation of the backscattering signal. A reasonable estimate of the maximum range-resolved measurement distance of the present SLidar system is about 5 km, within which the deviation between dz_eq and dz is below 10% and the effective range resolution at 5 km is about 2.7 km. If the backscattering signal from high altitude is of significance, e.g., for a vertically looking stationary atmospheric monitoring station, the baseline separation can be increased by a factor of a few to improve dz_eff and to minimize the deviation between dz_eq and dz.


Fig. 9 (a) Range-time map of atmospheric backscattering echoes recorded by the SLidar system on April 9th, 2015 (local time); the exposure time is 15 ms and each lidar curve is a statistical median value over 18 s (600 recordings). (b) Relative humidity and temperature variation. (c) Wind speed; the wind direction is mostly from the south. The backscattering intensities are plotted in log scale.



Fig. 10 Backscattering lidar curves at three different recording times on April 9th, 2015 (local time). The exposure time is 15 ms, with a 600-time average for each lidar curve. The dotted curve labeled SNR = 3 gives the intensity at which the SNR is equal to 3.


In the present work, neither near-end calibration nor far-end calibration is available. Thus, the system constant K cannot be calibrated and the backscattering or extinction coefficient cannot be retrieved by the Klett method, the Klett-Fernald method, etc. However, under the circumstance of a homogeneous atmosphere, the slope method [25,26] can be applied. The extinction coefficient, which is assumed to be constant, can then be retrieved from a linear fit of the log-scale lidar curve:

$$\ln\left(P_S(z)\right) = \ln K + \ln\beta - 2\alpha z. \tag{11}$$
The slope of the log-scale lidar curve, i.e., −2α, gives the extinction coefficient. As can be seen from Fig. 10, the lidar signal plotted in log scale at local time 11:12:04 is almost a straight line in the range 83-4500 m, which implies a homogeneous atmospheric condition. By performing a least-squares fit of the lidar curve according to Eq. (11), the extinction coefficient is retrieved as 0.086 km−1. The standard deviation of the fitting residual is about 4 counts, which is the same as the noise level (4 counts) and thus reconfirms that the assumption of a homogeneous atmosphere is justified.
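A minimal sketch of this slope-method retrieval (Eq. (11)) is given below, tested on a synthetic homogeneous-atmosphere curve rather than the measured data; the fitting window matches the 83-4500 m interval quoted above.

```python
import numpy as np

def slope_method(z, p_signal, z_min=83.0, z_max=4500.0):
    """Retrieve the extinction coefficient of a homogeneous atmosphere via Eq. (11).

    Fits ln P_S(z) = c0 - 2*alpha*z by least squares over [z_min, z_max] and returns
    alpha (per metre) together with the residual standard deviation in signal counts.
    """
    sel = (z >= z_min) & (z <= z_max) & (p_signal > 0)
    slope, intercept = np.polyfit(z[sel], np.log(p_signal[sel]), 1)
    alpha = -0.5 * slope
    fit = np.exp(intercept + slope * z[sel])
    return alpha, np.std(p_signal[sel] - fit)

# Synthetic homogeneous atmosphere with alpha = 0.086 km^-1 (illustrative, not measured data)
z = np.linspace(83.0, 4500.0, 2000)
p = 200.0 * np.exp(-2 * 0.086e-3 * z) + 4.0 * np.random.default_rng(1).standard_normal(z.size)
alpha, resid = slope_method(z, p)
print(alpha * 1e3, resid)   # ~0.086 km^-1 recovered, residual close to the injected noise level
```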

Figure 11 shows atmospheric backscattering echoes recorded on a cloudy morning. A clear cloud layer as well as the variation of the cloud height can be observed between altitudes of 150 m and 400 m. In a vertically looking configuration, the SLidar system can be very suitable for cloud height detection, especially for clouds below 1 km, where the effective range resolution is the best over the whole detection range.


Fig. 11 Cloud height detection using the SLidar system; the exposure time is 20 ms and each lidar curve is a statistical median value over 20 s (500 recordings). (a) Range-time map of the atmospheric backscattering signal (local time), (b) temperature and relative humidity variation, (c) wind speed; the wind direction is mostly from the south west. The backscattering intensities are plotted in log scale.


5. Conclusions and perspectives

In this work, the principles of the SLidar technique have been carefully discussed. Due to the trigonometric relationship in the SLidar system, the sampling bin increases as the square of the distance, which cancels the 1/z² factor in the lidar equation and thus significantly increases the usable dynamic range of the lidar system. Although the theoretical range resolution is extraordinarily high in principle, the practical range resolution depends on the laser beam width and is substantially larger than the theoretical one. Further, for either a divergent or a convergent laser beam, dz_eq is not the same as dz, which introduces distortion in the lidar signal. The deviation between dz_eq and dz increases significantly with increasing laser beam divergence or convergence, and it also increases with distance for a divergent beam, as in the present case. To retrieve the atmospheric backscattering echo precisely and achieve a high range resolution at far distances, apart from increasing the separation between the receiver and transmitter, the beam width and divergence should both be minimized by, e.g., increasing the f-number of the focusing lens. It should also be pointed out that a pencil beam over a large range in the atmosphere could be achieved by employing a terawatt laser, which can result in self-focusing and filamentation [27,28]. With the combination of laser-induced filamentation and the SLidar technique, a much higher effective range resolution can be expected and atmospheric gas sensing could be exploited.

Atmospheric monitoring using a SLidar system has been performed in various weather conditions to demonstrate the system performance. Cloud height, atmospheric transport, as well as localized particle emissions can be identified with range information. Post data processing using the slope method can provide the atmospheric extinction coefficient in a homogeneous atmosphere. Further data analysis based on the well-known Klett or Klett-Fernald method [29,30], etc., can also be performed when calibration of the system constant is available, by employing, e.g., a nephelometer in the case of shooting parallel to the ground surface. The present SLidar system is low-cost and compact compared with conventional pulsed lidar systems. By employing high-power continuous-wave laser diodes, which have developed rapidly in recent years, the size and cost of the light source are reduced by more than two orders of magnitude. The angularly resolved characteristic of the SLidar system also significantly facilitates the detection of the backscattering lidar signal, i.e., highly integrated CCD/CMOS sensors can be employed rather than PMTs together with 20 MHz high-resolution data sampling and SPC. The advantages of the SLidar system addressed here open up new possibilities of employing a robust lidar system for atmospheric aerosol remote sensing.

Future work involves the development of multi-band and/or polarized SLidar systems for atmospheric particle identification and sizing. SLidar systems operating at other wavelengths, or even multi-wavelength SLidar systems, can be readily implemented by employing widely available high-output-power laser diodes from the violet to the near- and shortwave-infrared regions, e.g., 405 nm, 520 nm, 975 nm. Raman SLidar could be possible during nighttime when employing even higher output power laser diodes, e.g., 10 W 808 nm laser diodes, and cooled cascaded CCD cameras.

References and links

1. R. K. Pachauri and L. A. Meyer, “Climate Change 2014: Synthesis Report, Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change,” (IPCC, 2014), pp. 151.

2. J. Lenoble, L. Remer, and D. Tanre, Aerosol Remote Sensing (Springer, 2013).

3. A. J. McMichael, R. E. Woodruff, and S. Hales, “Climate change and human health: present and future risks,” Lancet 367(9513), 859–869 (2006). [CrossRef]   [PubMed]  

4. W. P. Arnott, H. Moosmuller, P. J. Sheridan, J. A. Ogren, R. Raspet, W. V. Slaton, J. L. Hand, S. M. Kreidenweis, and J. L. Collett, “Photoacoustic and filter-based ambient aerosol light absorption measurements: Instrument comparisons and the role of relative humidity,” J. Geophys. Res. Atmos. 108(D1), 11–15 (2003). [CrossRef]  

5. R. J. Allen and W. E. Evans, “Laser radar (Lidar) for mapping aerosol structure,” Rev. Sci. Instrum. 43(10), 1422–1432 (1972). [CrossRef]  

6. V. Matthais, V. Freudenthaler, A. Amodeo, I. Balin, D. Balis, J. Bösenberg, A. Chaikovsky, G. Chourdakis, A. Comeron, A. Delaval, F. De Tomasi, R. Eixmann, A. Hågård, L. Komguem, S. Kreipl, R. Matthey, V. Rizi, J. A. Rodrigues, U. Wandinger, and X. Wang, “Aerosol lidar intercomparison in the framework of the EARLINET project. 1. Instruments,” Appl. Opt. 43(4), 961–976 (2004). [CrossRef]   [PubMed]  

7. V. A. Kovalev and W. E. Eichinger, Elastic Lidar: Theory, Practice, and Analysis Methods (John Wiley & Sons, 2004).

8. M. Sicard, F. Molero, J. L. Guerrero-Rascado, R. Pedros, F. J. Exposito, C. Cordoba-Jabonero, J. M. Bolarin, A. Comeron, F. Rocadenbosch, M. Pujadas, L. Alados-Arboledas, J. A. Martinez-Lozano, J. P. Diaz, M. Gil, A. Requena, F. Navas-Guzman, and J. M. Moreno, “Aerosol lidar intercomparison in the framework of SPALINET-The Spanish lidar network:methodology and results,” IEEE Trans. Geosci. Rem. Sens. 47(10), 3547–3559 (2009). [CrossRef]  

9. G. Pappalardo, A. Amodeo, A. Apituley, A. Comeron, V. Freudenthaler, H. Linne, A. Ansmann, J. Bosenberg, G. D’Amico, I. Mattis, L. Mona, U. Wandinger, V. Amiridis, L. Alados-Arboledas, D. Nicolae, and M. Wiegner, “EARLINET: towards an advanced sustainable European aerosol lidar network,” Atmos. Meas. Tech. 7(8), 2389–2409 (2014). [CrossRef]  

10. A. Papayannis, D. Balis, V. Amiridis, G. Chourdakis, G. Tsaknakis, C. Zerefos, A. D. A. Castanho, S. Nickovic, S. Kazadzis, and J. Grabowski, “Measurements of Saharan dust aerosols over the Eastern Mediterranean using elastic backscatter-Raman lidar, spectrophotometric and satellite observations in the frame of the EARLINET project,” Atmos. Chem. Phys. 5(8), 2065–2079 (2005). [CrossRef]  

11. D. Müller, A. Ansmann, I. Mattis, M. Tesche, U. Wandinger, D. Althausen, and G. Pisani, “Aerosol-type-dependent lidar ratios observed with Raman lidar,” J. Geophys. Res. Atmos. 112(D16), D16202 (2007). [CrossRef]  

12. A. Papayannis, R. E. Mamouri, V. Amiridis, E. Remoundaki, G. Tsaknakis, P. Kokkalis, I. Veselovskii, A. Kolgotin, A. Nenes, and C. Fountoukis, “Optical-microphysical properties of Saharan dust aerosols and composition relationship using a multi-wavelength Raman lidar, in situ sensors and modelling: a case study analysis,” Atmos. Chem. Phys. 12(9), 4011–4032 (2012). [CrossRef]  

13. M. R. Perrone, F. De Tomasi, and G. P. Gobbi, “Vertically resolved aerosol properties by multi-wavelength lidar measurements,” Atmos. Chem. Phys. 14(3), 1185–1204 (2014). [CrossRef]  

14. K. Meki, K. Yamaguchi, X. Li, Y. Saito, T. D. Kawahara, and A. Nomura, “Range-resolved bistatic imaging lidar for the measurement of the lower atmosphere,” Opt. Lett. 21(17), 1318–1320 (1996). [CrossRef]   [PubMed]  

15. N. C. P. Sharma, J. E. Barnes, T. B. Kaplan, and A. D. Clarke, “Coastal aerosol profiling with a camera lidar and nephelometer,” J. Atmos. Ocean. Technol. 28(3), 418–425 (2011). [CrossRef]  

16. G. Bickel, G. Hausler, and M. Maul, “Triangulation with expanded range of depth,” Opt. Eng. 24(6), 246975 (1985). [CrossRef]  

17. F. Blais, “Review of 20 years of range sensor development,” J. Electron. Imaging 13(1), 231–243 (2004). [CrossRef]  

18. U. Cilingiroglu, S. C. Chen, and E. Cilingiroglu, “Range sensing with a Scheimpflug camera and a CMOS sensor/processor chip,” IEEE Sens. J. 4(1), 36–44 (2004). [CrossRef]  

19. M. Brydegaard, A. Gebru, and S. Svanberg, “Super resolution laser radar with blinking atmospheric particles - Application to interacting flying insects,” Prog. Electromagnetics Res. 147, 141–151 (2014). [CrossRef]  

20. L. Mei and M. Brydegaard, “Continuous-wave differential absorption lidar,” Laser Photonics Rev., DOI: (2015). [CrossRef]  

21. J. Carpentier, “Improvements in enlarging or like cameras,” Great Britain Patent No. 1139 (17 January 1901).

22. T. Scheimpflug, “Improved method and apparatus for the systematic alteration or distortion of plane pictures and images by means of lenses and mirrors for photography and for other purposes,” Great Britain Patent No. 1196 (16 January 1904).

23. T. Fujii and T. Fukuchi, Laser Remote Sensing (CRC, 2005).

24. W. P. Hooper and G. M. Frick, “Lidar detected spike returns,” J. Appl. Remote Sens. 4(1), 043549 (2010). [CrossRef]  

25. J. D. Klett, “Stable analytical inversion solution for processing lidar returns,” Appl. Opt. 20(2), 211–220 (1981). [CrossRef]   [PubMed]  

26. T. Y. He, S. Stanic, F. Gao, K. Bergant, D. Veberic, X. Q. Song, and A. Dolzan, “Tracking of urban aerosols using combined LIDAR-based remote sensing and ground-based measurements,” Atmos. Meas. Tech. 5(5), 891–900 (2012). [CrossRef]  

27. J. Kasparian, M. Rodriguez, G. Méjean, J. Yu, E. Salmon, H. Wille, R. Bourayou, S. Frey, Y. B. Andre, A. Mysyrowicz, R. Sauerbrey, J. P. Wolf, and L. Wöste, “White-light filaments for atmospheric analysis,” Science 301(5629), 61–64 (2003). [CrossRef]   [PubMed]  

28. H. L. Xu and S. L. Chin, “Femtosecond laser filamentation for atmospheric sensing,” Sensors (Basel) 11(1), 32–53 (2010). [CrossRef]   [PubMed]  

29. F. G. Fernald, “Analysis of atmospheric lidar observations: some comments,” Appl. Opt. 23(5), 652–653 (1984). [CrossRef]   [PubMed]  

30. J. D. Klett, “Lidar inversion with variable backscatter/extinction ratios,” Appl. Opt. 24(11), 1638–1643 (1985). [CrossRef]   [PubMed]  
