
Multi-line-of-sight Hartmann-Shack wavefront sensing based on image segmentation and K-means sorting


Abstract

Multi-line-of-sight wavefront sensing, crucial for next-generation astronomy and laser applications, often increases system complexity by adding sensors. This research introduces, to the best of our knowledge, a novel method for multi-line-of-sight Hartmann-Shack wavefront sensing using a single sensor, addressing challenges in centroid estimation and classification under atmospheric turbulence. This method contrasts with existing techniques that rely on multiple sensors, thereby reducing system complexity. Innovations include combining edge detection and peak extraction for precise centroid calculation, improved k-means clustering for robust centroid classification, and a centroid filling algorithm for subapertures with light loss. The method’s effectiveness was confirmed through simulations of a five-line-of-sight system and experimental setups for two- and three-line-of-sight systems, demonstrating its potential under real atmospheric aberration correction conditions. Experimental findings indicate that, when implemented in a closed-loop configuration, the method significantly reduces wavefront residuals from 1 λ to 0.1 λ under authentic atmospheric turbulence conditions. Correspondingly, the quality of the far-field spot is enhanced by a factor of 2 to 4. These outcomes collectively highlight the method’s robust capability to enhance optical system performance in environments characterized by genuine atmospheric turbulence.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

As an effective wavefront measurement tool, the Hartmann-Shack wavefront sensor (HSWFS) has seen extensive application across various adaptive optics (AO) fields such as astronomy, optical communication, and retinal imaging [1]. A typical HSWFS consists of a two-dimensional microlens array paired with a detector; the array dissects the input beam and focuses each segment onto the detector. This forms a Hartmannogram, an array of spots on the image plane. The average slope of each local sub-wavefront sampled by the microlens array can be calculated from the spot displacement on the Hartmannogram as follows:

$$[\nabla W]_{i,j} =\left[\left(\begin{array}{c}{\frac{\partial}{\partial x}}\\{\frac{\partial}{\partial y}}\end{array}\right)W\right]_{i,j}=\left[\frac{2\pi}{\lambda f}\left(\begin{array}{c}{\Delta x}\\{\Delta y}\end{array}\right)\right]_{i,j}$$
where $W$, $f$, and $\lambda$ denote the incident wavefront, the lenslet focal length, and the wavelength, respectively, on each sub-aperture $(i,j)$, and $(\Delta x, \Delta y)$ is the measured displacement of the focused spot centroid from the reference spot centroid. Reference spots are defined from the Hartmannogram generated by a collimated incident beam without wavefront aberrations. After calculating all local wavefront gradients, the entire wavefront at the pupil can be reconstructed by a straightforward integration of the local phase gradients (the zonal algorithm) or by a linear combination of Zernike polynomials (the modal algorithm). One-shot signal capture in milliseconds to obtain the Hartmannogram, followed by wavefront reconstruction, allows real-time wavefront measurement.
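As a concrete numerical illustration of Eq. (1), the sketch below converts measured centroid displacements into local phase gradients. The function name and array layout are our own choices, not from the paper.

```python
import numpy as np

def local_slopes(centroids, references, focal_length, wavelength):
    """Convert spot displacements into local phase gradients, Eq. (1).

    centroids, references: (N, 2) arrays of (x, y) spot positions per
    sub-aperture, in the same length units as `focal_length`.
    Returns an (N, 2) array following grad W = (2*pi / (lambda*f)) * (dx, dy).
    """
    displacement = np.asarray(centroids, dtype=float) - np.asarray(references, dtype=float)
    return (2.0 * np.pi / (wavelength * focal_length)) * displacement

# A flat wavefront produces zero displacement, hence zero gradient.
ref = np.array([[0.0, 0.0], [1.0, 1.0]])
slopes = local_slopes(ref, ref, focal_length=7.3e-3, wavelength=671e-9)
```

A displacement of $\lambda f / 2\pi$ along one axis yields a unit phase gradient along that axis, which is a quick sanity check of the scaling.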

Emerging applications in varied fields have spurred new advancements in HSWFS technology. Moving beyond traditional point-source sensing, extended-source HSWFS has been developed, significantly benefiting solar adaptive optics [2,3] and retinal imaging [4,5]. However, these methods predominantly focus on single-line-of-sight sensing within a narrow field of view (FoV). Cutting-edge applications, like wide FoV astronomy or multi-object laser transmission, necessitate multi-line-of-sight wavefront sensing for comprehensive measurement and correction, marking a shift from single point sensing to a more expansive surface sensing approach.

Multi-line-of-sight sensing, a cornerstone for wide field adaptive optics (WFAO), originated from multi-layer conjugate adaptive optics (MCAO) aimed at countering anisoplanatism in AO systems. The concept of MCAO, introduced by J. Beckers in 1988 [6], proposed using an array of adaptive mirrors, each aligned with a different atmospheric layer. Subsequently, M. Tallon outlined a wavefront sensing process for MCAO using multiple laser-generated reference sources, enabling the reconstruction of a three-dimensional atmospheric phase aberration map [7]. This technique has since evolved, with various systems under development and testing. Notably, the vacuum tower telescope (VTT) demonstrated a threefold increase in FoV compared to conventional AO [8,9]. The very large telescope (VLT), utilizing three laser guide stars (LGSs) and MCAO, achieved wavefront correction across a 2x2 arcminute FoV [10]. The National Solar Observatory (NSO) and the Gemini Observatory have also made significant advancements in multi-line-of-sight wavefront sensing [11,12], underscoring its potential for extremely large telescopes.

Two primary approaches have been proposed for MCAO wavefront sensing: star oriented (SO) [13] and layer oriented (LO) [14] (Fig. 1). The SO approach measures wavefront distortions within a narrow field, inferring the height distribution of turbulence through tomography. Conversely, the LO approach pairs each deformable mirror and sensor with the same altitude. Under the Hartmann-Shack framework, both methods employ multiple sensors for different line-of-sight sensing from various guide stars, inherently increasing system complexity and presenting calibration challenges.

Fig. 1. Comparative Schematic of Star-Oriented (left) and Layer-Oriented (right) Multi-Conjugate Adaptive Optics Systems

Beyond MCAO, other wide FoV adaptive optics systems, like multi-reference (MRAO) [15] and multi-object adaptive optics (MOAO) [16,17], also require multi-line-of-sight wavefront sensing. Current research in these pioneering techniques mainly focuses on the tomographic algorithms for wide-field atmospheric wavefront reconstruction and optimization strategies for multi-layer atmospheric wavefront correction. However, the prevailing approach to enhance measurement accuracy, involving increasing the number of sensors, results in significant complexity.

In response to these challenges, various multi-line-of-sight wavefront sensing (MSWFS) designs have been proposed. Anne Costille et al. introduced a wide-field AO laboratory bench in 2010 [18], testing MCAO, TAO (Tomography Adaptive Optics), and GLAO (Ground Layer Adaptive Optics) concepts using a wide-field HSWFS. This setup utilized a 7x7 microlens array to simultaneously detect three guide stars and an observation target, creating multiple spots within a single sub-aperture on the Hartmannogram. However, the relative positions of the guide stars and observation target were fixed and distant, necessitating the division of the sub-aperture image for different lines of sight. This approach required segmenting the multi-spot image into single-spot areas based on prior knowledge, followed by application of the traditional centroid algorithm for wavefront reconstruction. In 2015, Lebao Yang et al. presented a design for a multiple-object Hartmann-Shack wavefront sensor (MOSHWFS) suited for a wide FoV on the retina, employing a single HSWFS for MSWFS [19]. Their method involved positioning the multiple spots in one sub-aperture in fixed areas, necessitating a regular arrangement of these spots for wavefront reconstruction. While these designs signify a pivotal shift from multi-sensor to single-sensor detection mechanisms, they are still encumbered by existing constraints in centroid estimation. Crucially, they fall short of tackling the intricate challenges inherent in real-world scenarios, such as the coupling, separation, and accurate matching of multi-line-of-sight spot arrays. In particular, in scenarios where the positions of multiple laser guide stars are uncertain, or in dynamic environments with multiple moving targets, the efficacy of these methods is severely undermined, rendering them ineffective.

Addressing practical challenges, such as multiple moving targets and atmospheric scintillation causing subaperture darkening, no current multi-line-of-sight Hartmann-Shack research offers direct applicability. This paper proposes a novel multi-line-of-sight Hartmann-Shack wavefront sensing method, adept for real-world complexities. We combine image segmentation with peak extraction for centroid calculation of multi-spot arrays, and introduce a centroid filling algorithm for darkened subapertures. Furthermore, we refine the k-means clustering algorithm for precise centroid classification. This integrated approach promises high-precision in multi-line-of-sight wavefront sensing.

To demonstrate our algorithm’s efficacy, we executed a five-line-of-sight simulation and two AO system implementations for real atmospheric turbulence detection. The algorithm excelled in the simulation, achieving wavefront reconstruction precision better than 0.05 (RMS) even under suboptimal lighting conditions. The two AO systems independently conducted three-line-of-sight detection and two-line-of-sight detection with closed-loop correction. Detailed methodologies and results are elaborated in Sections 2 and 3.

2. Methods

In a traditional single-view Hartmann-Shack system, the incident wavefront generates an array of focal spots at the focal plane of the Hartmann-Shack sensor, enabling straightforward centroid determination and wavefront reconstruction. However, in a multi-line-of-sight Hartmann-Shack system, distinct wavefronts converge on the same sensor, creating complex spot patterns on the focal plane, with each subaperture housing multiple spots (Fig. 2). Standard centroiding methods are inadequate here, as they produce misleading single centroid coordinates within each subaperture.

Fig. 2. Hartmann-Shack data corresponding to multiple wavefronts.

Addressing this challenge, we developed novel algorithms for precise centroid calculation in multi-line-of-sight scenarios. These algorithms adeptly manage complex light spot arrays, ensuring accurate centroid localization. Furthermore, our algorithm includes an innovative approach for centroid classification in multi-line-of-sight scenarios. This aspect is critical, as it involves distinguishing and categorizing centroids derived from different lines of sight within the complex array. The algorithm’s design enables the accurate identification and classification of each centroid, corresponding to its specific line of sight. This step is essential for the precise assignment of centroids before proceeding with wavefront restoration using the reconstruction matrix.

2.1 Centroid estimation based on spot segmentation

To calculate centroid coordinates for multiple light spots within each subaperture, we first segment the subaperture to ensure isolation of each light spot. The centroid for each segmented region is then determined using the centroiding method, yielding the coordinates of multiple light spots. We utilize two effective algorithms for this segmentation: peak extraction and edge detection.

The peak extraction algorithm begins by identifying the brightest point in a subaperture, followed by neighborhood selection and centroid calculation using the traditional center of gravity (CoG) method within this area. This process repeats, setting the calculated area to zero before identifying the next brightest point, until all centroids are determined (Fig. 3). The pseudocode of the peak extraction algorithm describes the process in Algorithm 1. Please note that in the figures, red cross marks represent identified centroids; this notation is used consistently in subsequent figures and will not be reiterated.

Fig. 3. The workflow of peak extraction algorithm.

Algorithm 1. Peak Extraction Algorithm
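Algorithm 1 can be sketched in a few lines of Python. This is a minimal version under our own assumptions: a square neighborhood of fixed half-width, and spots far enough apart that zeroed neighborhoods do not overlap.

```python
import numpy as np

def peak_extraction(subaperture, n_spots, half_window=3):
    """Find n_spots centroids in one sub-aperture by repeated peak picking.

    Each iteration locates the brightest pixel, computes a centre-of-gravity
    centroid inside a (2*half_window+1)^2 neighbourhood, then zeroes that
    neighbourhood before searching for the next peak.
    Returns a list of (row, col) centroids.
    """
    img = np.array(subaperture, dtype=float)
    centroids = []
    for _ in range(n_spots):
        # brightest remaining pixel
        py, px = np.unravel_index(np.argmax(img), img.shape)
        y0, y1 = max(py - half_window, 0), min(py + half_window + 1, img.shape[0])
        x0, x1 = max(px - half_window, 0), min(px + half_window + 1, img.shape[1])
        patch = img[y0:y1, x0:x1]
        total = patch.sum()
        ys, xs = np.mgrid[y0:y1, x0:x1]
        centroids.append(((ys * patch).sum() / total, (xs * patch).sum() / total))
        img[y0:y1, x0:x1] = 0.0  # suppress this spot before the next search
    return centroids
```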

While efficient, this method has limitations, notably the fixed neighborhood selection, which may omit crucial morphological information about the light spots, leading to centroid calculation errors.

Conversely, the edge detection algorithm, employing the Canny operator in our study, segments light spots based on significant intensity changes (Fig. 4). This technique preserves more information about the light spots’ boundaries, typically resulting in lower centroid calculation errors compared to peak extraction.
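The boundary-aware segmentation step can be mimicked with intensity thresholding plus connected-component labelling. This is a simplified stand-in for the Canny operator used in the paper, and the threshold value is an assumption.

```python
import numpy as np

def segment_and_centroid(subaperture, threshold):
    """Label connected above-threshold regions (a crude stand-in for
    Canny-based segmentation), then compute one CoG centroid per region.
    Returns a list of (row, col) centroids.
    """
    img = np.asarray(subaperture, dtype=float)
    mask = img > threshold
    labels = np.zeros(img.shape, dtype=int)
    n_regions = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        n_regions += 1
        stack = [seed]
        while stack:  # flood-fill one connected region (4-connectivity)
            y, x = stack.pop()
            if not (0 <= y < img.shape[0] and 0 <= x < img.shape[1]):
                continue
            if not mask[y, x] or labels[y, x]:
                continue
            labels[y, x] = n_regions
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    centroids = []
    for r in range(1, n_regions + 1):
        ys, xs = np.nonzero(labels == r)
        w = img[ys, xs]
        centroids.append(((ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()))
    return centroids
```

Because each region is truncated at its own boundary, the centroid uses the full morphology of the spot rather than a fixed window.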

Fig. 4. The workflow of edge extraction algorithm.

However, edge detection faces challenges with closely spaced light spots, potentially merging them into a single spot, an issue termed 'light spot edge confusion' (Fig. 5). This problem is less prevalent in peak extraction, because both the size and the number of neighborhoods selected within each subaperture are controlled, effectively mitigating such confusion.

Fig. 5. Light spot edge confusion.

To address light spot edge confusion and optimize centroid accuracy, this study employs a hybrid approach combining peak extraction and edge detection for centroid computation in multi-spot arrays. The specific algorithm workflow is illustrated in Fig. 6. Initially, the edge detection algorithm segments the light spots, and centroids are calculated for each region. Following this, we scrutinize the number of centroids within each subaperture to identify potential edge confusion. If a subaperture’s centroid count is lower than the number of incoming wavefronts, it indicates edge confusion. In such cases, we apply peak extraction for precise centroid determination, replacing the initial edge detection results. This selective use of peak extraction ensures that most subapertures benefit from the accuracy of edge detection while addressing edge confusion where it occurs.
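The decision rule of this hybrid workflow reduces to a small dispatch function. In the sketch below the two segmentation routines are injected as callables; the names and signatures are illustrative, not from the paper.

```python
def hybrid_centroids(subaperture, n_wavefronts, edge_method, peak_method):
    """Prefer edge-based centroids; fall back to peak extraction when the
    centroid count signals edge confusion (fewer centroids than incoming
    wavefronts)."""
    centroids = edge_method(subaperture)
    if len(centroids) < n_wavefronts:  # merged spots detected
        centroids = peak_method(subaperture, n_wavefronts)
    return centroids

# Toy stand-ins: the edge method "merges" two spots into one centroid,
# so the dispatcher falls back to the peak method.
edge = lambda img: [(1.5, 1.5)]
peak = lambda img, n: [(1.0, 1.0), (2.0, 2.0)]
result = hybrid_centroids(None, 2, edge, peak)
```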

Fig. 6. Algorithm flowchart for calculating centroids using a combination of edge detection and peak extraction.

Utilizing the Zernike polynomial approach, we generated 100 sets of random wavefronts, specifying Zernike orders up to the 65th, with coefficient distributions adhering to the Kolmogorov spectrum. The turbulence intensity $D/r_0$ was progressively increased from 5 to 20 in our simulations (the optical system parameters employed are detailed in Section 3), and we compared the wavefront reconstruction accuracy of the composite algorithm against the peak extraction algorithm, with findings delineated in Fig. 7. Figure 7 compares the centroid deviations of the two methods from standard values under various atmospheric conditions, highlighting the superior accuracy of the edge-detection-based hybrid. In the figures, $D/r_0$ denotes the intensity of atmospheric turbulence, where $D$ represents the aperture of the receiving telescope and $r_0$ the atmospheric coherence length; a larger $D/r_0$ indicates stronger turbulence.

Fig. 7. Comparison chart of hybrid and single method results.

Please note that the data for this figure, as well as for the rest of the Methods section, were generated using simulations. Further mentions of the data origin within this section will not be reiterated for brevity.

This hybrid approach significantly enhances centroid calculation in multi-line-of-sight Hartmann-Shack systems, balancing precision and error minimization.

2.2 Centroid categorization based on enhanced k-means clustering algorithm

After calculating the centroids, we gather an array of centroid coordinates. For reconstructing the wavefront from multiple viewpoints, categorizing these coordinates is essential. Figure 8 illustrates typical coordinate data for clarity.

Fig. 8. Schematic diagram of unclassified centroids (the left image displays unclassified centroids; the right image represents the ideal classification outcome, where each color signifies a distinct category).

The figure reveals a disordered array of light spots, stemming from non-isoplanatic wavefronts of different lines of sight. To categorize this disorder, we analyze the pattern underlying these coordinates. While individual subaperture centroids don’t show a clear pattern, a collective view reveals a degree of separation due to varying fields of view.

We applied the k-means clustering algorithm for initial data categorization. This unsupervised learning method partitions data into distinct categories based on the proximity of data points to the calculated cluster centers. The algorithm iteratively adjusts the cluster centers, refining the categorization process until it stabilizes [20]. This methodology efficiently segregates the centroid coordinates into discernible groups, facilitating accurate wavefront reconstruction.

In the initial partitioning using the k-means clustering algorithm, initial cluster centers are selected from five centroid coordinates in the first subaperture. This iterative classification of the remaining dataset continues until the cluster centers stabilize. Figure 9 and Fig. 10 demonstrate k-means clustering results under varying atmospheric turbulence intensities, D/r0 = 5 and D/r0 = 15, respectively.
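A plain k-means pass over the pooled centroid coordinates, seeded from one subaperture as described above, can be written as follows. This is a generic implementation, not the paper's code.

```python
import numpy as np

def kmeans(points, init_centers, n_iter=50):
    """Plain k-means: assign each centroid coordinate to its nearest cluster
    centre, recompute centres, and repeat until the labels stabilise."""
    points = np.asarray(points, dtype=float)
    centers = np.array(init_centers, dtype=float)
    labels = np.zeros(len(points), dtype=int)
    for it in range(n_iter):
        # distance of every point to every centre: (N, K)
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)
        if it > 0 and np.array_equal(new_labels, labels):
            break  # converged
        labels = new_labels
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = points[labels == k].mean(axis=0)
    return labels, centers
```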

Fig. 9. K-means Algorithm Classification Results at D/r0=5

Fig. 10. K-means algorithm classification results at D/r0=15.

We observe that under low turbulence, the algorithm effectively distinguishes clusters. However, with increased turbulence, classification errors emerge, as depicted in Fig. 10. Proximity of centroids within a subaperture can lead to erroneous cluster assignments, affecting overall accuracy.

To mitigate this, we introduce the Neighborhood Weighting Algorithm, as depicted in Algorithm 2, capitalizing on the ’continuity’ of atmospheric turbulence, as postulated by Kolmogorov’s theory. This algorithm assumes consistency in centroid positions within a small neighborhood, emphasizing the influence of neighboring subaperture coordinates. Thus, when classifying a coordinate, those from a 3x3 neighborhood exert greater influence, enhancing classification precision.


Algorithm 2. Neighborhood Weighting Algorithm

The flowchart for the Neighborhood Weighting Algorithm is shown in Fig. 11. In our methodology, we employ $S_{i,j}$ to denote the target sub-aperture pending processing, where $i$ and $j$ respectively represent the row and column of that subaperture. The other subapertures within the 3x3 neighborhood surrounding the target are designated $S_{m,n}$, with $m = i-1,i,i+1$, $n = j-1,j,j+1$, and $(m,n) \neq (i,j)$. For a target subaperture $S_{i,j}$, the reference subapertures are therefore $\{S_{i-1,j-1},S_{i-1,j},S_{i-1,j+1},S_{i,j-1},S_{i,j+1},S_{i+1,j-1},S_{i+1,j},S_{i+1,j+1}\}$.

Fig. 11. The flowchart for the Neighborhood Weighting Algorithm

We posit that the centroid distribution in $S_{i,j}$ should mirror that of its reference subapertures. We use $S_{i,j,k}$ and $S_{m,n,k}$ to represent the centroid coordinates for wavefront $k$ in $S_{i,j}$ and $S_{m,n}$, where $M$ is the number of reference subapertures and $K$ is the number of lines of sight. We use $C_{ave,k}$ to represent the average of the centroid coordinates corresponding to the $k$th line of sight within the reference subapertures around $S_{i,j}$. We calculate $C_{ave,k}$ according to Eq. (2), yielding $K$ sets of average coordinates.

$$C_{ave,k} = \frac{1}{M}\sum_{m = i-1}^{i+1} \sum_{\substack{n = j-1 \\ (m,n) \neq (i,j)}}^{j+1} S_{m,n,k}$$

These averages offer more precise guidance for classification within $S_{i,j}$ than distant subapertures do. Upon classifying all sub-apertures, the k-means algorithm yields $K$ clustering centers, which reflect the overarching distribution pattern of the centroids. Consequently, we amalgamate $C_{\text {ave},k}$ and the clustering centers derived from the k-means algorithm through a weighted sum to define the reference coordinate $C_{\text {ref},k}$, which is employed to guide the classification of centroids within $S_{i,j}$. We use $C_{k}$ to represent the cluster center corresponding to the $k$th line of sight obtained after processing all subapertures with k-means. $C_{\text {ref},k}$ is determined by Eq. (3),

$$C_{ref,k} = C_{ave,k}\cdot w_1 + C_k\cdot w_2$$
where $w_1$ and $w_2$ are the weighting coefficients, satisfying the condition:
$$w_{1} + w_{2} = 1$$

The allocation of weights can be tailored to the actual turbulence strength. In our experiments, a weight setting of 1:1 proved appropriate, yielding high-precision classification outcomes.
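Under the assumption of an (M, K, 2) array layout for the neighborhood centroids (our own choice), Eqs. (2) and (3) reduce to an average followed by a weighted blend:

```python
import numpy as np

def reference_coordinates(neighbor_centroids, global_centers, w1=0.5, w2=0.5):
    """Blend the 3x3-neighbourhood averages C_ave,k with the global k-means
    centres C_k into reference coordinates C_ref,k, per Eqs. (2)-(3).

    neighbor_centroids: (M, K, 2) array -- M reference sub-apertures,
    K lines of sight, (x, y) per centroid.
    global_centers: (K, 2) array of k-means cluster centres.
    """
    assert abs(w1 + w2 - 1.0) < 1e-12  # Eq. (4): weights must sum to one
    c_ave = np.asarray(neighbor_centroids, dtype=float).mean(axis=0)  # Eq. (2)
    return w1 * c_ave + w2 * np.asarray(global_centers, dtype=float)  # Eq. (3)
```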

Upon obtaining the reference coordinates, it is necessary to identify the matching that minimizes the sum of distances between the reference coordinates and the centroid coordinates within the target sub-aperture. This constitutes a classic combinatorial assignment problem. We define $E_{p,q} = 1$ to indicate that centroid $p$ is assigned to the class corresponding to reference coordinate $q$, and $E_{p,q} = 0$ otherwise. We denote $D_{p,q}$ as the distance between the centroid to be classified and the reference coordinate. Consequently, this problem can be formulated as a quadratic 0-1 programming model, as shown in Algorithm 2. Given the relatively limited number of parameters to be processed in this application scenario, the solution to this model can feasibly be found through enumeration. The centroid allocation that satisfies this solution constitutes the final centroid distribution within $S_{i,j}$. This approach ensures the precise classification of coordinates within the target subaperture, essential for accurate wavefront reconstruction.
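Because the number of lines of sight $K$ is small, the enumeration mentioned above is practical; a sketch using brute-force search over permutations (the helper name is ours):

```python
from itertools import permutations
import math

def best_assignment(centroids, references):
    """Enumerate all centroid-to-reference matchings and keep the one that
    minimises the total Euclidean distance (feasible because the number of
    lines of sight is small)."""
    k = len(references)
    best, best_cost = None, math.inf
    # each permutation maps centroid p -> reference perm[p]
    for perm in permutations(range(k), len(centroids)):
        cost = sum(math.dist(c, references[q]) for c, q in zip(centroids, perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best  # best[p] = index of the reference matched to centroid p
```

Enumerating permutations also handles the case of fewer centroids than references, which occurs when a spot is missing from the subaperture.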

In previous sections, we introduced the neighborhood weighting algorithm, which mitigates the limitations of k-means clustering by considering coordinate distributions near the target subaperture. However, this assumes correct classification in reference subapertures. Practically, adjacent subapertures often have misclassified coordinates. Therefore, our approach involves excluding subapertures with nearby misclassifications, identified by evaluating the number of classes within each subaperture.

Figure 12 presents simulated data for D/r0=15 processed using this algorithm, illustrating that our method, by addressing the shortcomings of traditional k-means clustering, significantly improves classification accuracy.

Fig. 12. Enhanced k-means algorithm classification results at D/r0=15.

2.3 Centroid filling

Our approach also addresses partial subaperture blackout in Hartmann-Shack sensors, commonly caused by atmospheric scintillation. Traditionally, two methods are employed: subaperture removal and slope-zeroing [21]. The former, though accurate, requires recalculating the control matrix for each frame, increasing computational load. The latter, less accurate, avoids matrix recalculation by zeroing the slopes in affected subapertures.

In multi-view Hartmann-Shack sensors, each subaperture houses multiple light spots, complicating the direct application of these standard methods. Our strategy involves a detailed analysis of affected subapertures to pinpoint and individually address the specific line of sight impacted. A critical aspect of this process is differentiating between spot confusion and genuine light absence in subapertures, both of which reduce centroid numbers. To resolve this complexity, we introduce an innovative matching algorithm.

The flowchart for the Centroid Filling algorithm is shown in Fig. 13, and its pseudocode is expressed as Algorithm 3. $S_{i,j}$ is the subaperture with missing light, $S_{i,j,k}$ are the centroids present in $S_{i,j}$, and $S_{m,n,k}$ are the centroids belonging to the $3\times 3$ neighborhood subapertures. We set the full width at half maximum (FWHM) of the optical imaging system as the distance threshold $T$. Initially, we apply the enhanced k-means algorithm to the $3\times 3$ neighborhood subapertures for localized clustering around the subaperture with missing light, obtaining cluster centers $C_{center}$. We then match the cluster centers in $C_{center}$ with the centroids $S_{i,j,k}$ in $S_{i,j}$ by comparing each distance $D_{p,q}$ with $T$. Once $S_{i,j,p^{\ast }}$ is matched with $C_{center,q^{\ast }}$, $D$ is updated, and any $D_{p,q}$ with index $p^{\ast }$ or $q^{\ast }$ no longer participates in the calculation. Meanwhile, $S_{i,j,p^{\ast }}$ is added to the set $S^{\ast }_{i,j}$, and $C_{center,q^{\ast }}$ is removed from $C_{center}$. This matching process effectively addresses both light spot absence and spot confusion. The comparison and update steps continue until every $S_{i,j,k}$ has been processed. Finally, taking the continuity of the wavefront into consideration, we replace the unmatched or missing centroids with the remaining cluster centers.
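The matching-and-filling loop can be sketched as follows. This is a simplified greedy variant of Algorithm 3; the nearest-first matching order and the helper names are our own assumptions.

```python
import math

def fill_centroids(existing, cluster_centers, threshold):
    """Match measured centroids in a light-starved sub-aperture to locally
    clustered centres; any centre left unmatched stands in for a missing
    (or confused) spot. `threshold` plays the role of the FWHM gate T.
    """
    matched = {}
    free_centers = list(range(len(cluster_centers)))
    for p, cen in enumerate(existing):
        if not free_centers:
            break
        # nearest still-free cluster centre
        d, q = min((math.dist(cen, cluster_centers[q]), q) for q in free_centers)
        if d <= threshold:          # within the distance gate: accept match
            matched[p] = q
            free_centers.remove(q)  # this centre no longer participates
    # remaining centres replace missing or match-failed centroids,
    # relying on the continuity of the wavefront
    return [existing[p] for p in matched] + [cluster_centers[q] for q in free_centers]
```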

Fig. 13. The flowchart for the Centroid Filling Algorithm.


Algorithm 3. Centroid Filling Algorithm

This method effectively resolves missing light in multi-viewpoint Hartmann-Shack systems, outperforming the slope-zeroing method in accuracy, as confirmed by our reconstruction accuracy comparison under various missing light rates. Additionally, it avoids control matrix recalculation, enhancing computational efficiency.

Building upon the methodologies described, we have achieved successful centroid calculation and classification across multiple light spot arrays, as depicted in Fig. 14. Initially, centroid coordinates are computed using a combination of edge detection and peak extraction in a multi-line-of-sight Hartmann-Shack system. Subsequently, we address subapertures lacking light using the centroid filling algorithm. The process then involves an initial categorization using k-means clustering, followed by a secondary categorization for error-prone subapertures using the neighborhood weighting algorithm. Finally, we perform wavefront recovery based on these refined classifications.

Fig. 14. Workflow diagram of the multi-line-of-sight wavefront sensing method.

3. Results

In this section, we detail both simulation and experimental outcomes to substantiate the efficacy of our proposed approach. The simulations encompass analyses of the algorithm’s functionality, wavefront reconstruction precision, and the influence of light loss rate. Experimentally, we report on the real atmospheric turbulence measurements in both two-line-of-sight and three-line-of-sight scenarios.

3.1 Simulation

Employing specific system parameters—a 100 mm entrance pupil diameter, 671 nm wavelength, and 6.9 $\mathrm {\mu }$m pixel size—we simulated a quintuple-view Hartmann-Shack wavefront sensor, featuring a 16$\times$16 sub-aperture grid with 32$\times$32 pixel sub-apertures and a 7.3 mm effective focal length. The angular separation between multiple lines of sight was set to between 0.5 and 2 mrad.

3.1.1 Accuracy of centroid categorization

To investigate the applicability and effectiveness of the improved algorithm for centroid classification, we conducted extensive data analyses under various constrained conditions (Fig. 15). Utilizing the Zernike polynomial method, we generated multiple sets of wavefronts, setting the Zernike order to the first 65, with coefficients that align with the Kolmogorov spectrum distribution. The simulations encompassed scenarios where the atmospheric turbulence intensity $D/r_0$ was progressively increased from 5 to 20. For each level of atmospheric turbulence, the angular separation of the laser guide stars was set to 1.6971 mrad, 1.1314 mrad, and 0.5657 mrad, respectively. For each combination of atmospheric turbulence intensity and guide star angular separation, we generated 100 sets of random wavefronts. These wavefronts were subsequently processed using both the classical and the enhanced k-means algorithm, with a focused analysis of the algorithms’ classification error rates. Our analysis, illustrated in three graphs, compares the average classification error rates of the traditional and improved k-means clustering algorithms across varying atmospheric turbulences and guide star angles from 1.7 mrad to 0.56 mrad.

Fig. 15. Average classification error rates of traditional and enhanced k-means clustering algorithms under various conditions (In (a), (b) and (c), the angles between LGS are respectively 1.6971 mrad, 1.1314 mrad and 0.5657 mrad).

The results elucidate that reduced guide star angles heighten the classification challenges due to diminished light spot array separation. For the traditional k-means clustering algorithm, a 0% classification error rate is maintained only when the guide star angle is 1.7 mrad, and D/r0 < 9. Under more stringent conditions, the error rate increases with the severity of the conditions. In contrast, the improved k-means clustering algorithm maintains a 0% error rate even under stringent conditions of a 0.56 mrad guide star angle and D/r0 = 16. This data indicates that the enhanced algorithm demonstrates superior interference resistance, maintaining high classification accuracy even in rigorous conditions.

3.1.2 Wavefront reconstruction accuracy

To further substantiate the effectiveness of our algorithm in real-world scenarios, we compared the wavefront reconstruction accuracy between our algorithm (utilizing a single wavefront sensor) and the traditional wavefront restoration algorithm (utilizing five wavefront sensors) within a five-line-of-sight scenario. Figure 16 presents a set of representative simulation data: Figure 16(a) presents the true wavefront values for these five lines of sight. Figures 16(b)$\sim$(d) demonstrate the results of processing these five lines of sight using our proposed multi-line-of-sight wavefront sensing algorithm with a single Hartmann sensor. Specifically, Fig. 16(b) depicts the Hartmannogram obtained from the wavefronts of five lines of sight entering a single Hartmann wavefront sensor, while Fig. 16(c) illustrates the results of wavefront reconstruction using our algorithm on this Hartmannogram. Figure 16(d) shows the residuals between the calculated wavefronts and their true values. Figures 16(e)$\sim$(g) display the results achieved by the traditional single-line-of-sight algorithm, which employs five separate Hartmann sensors for processing the five lines of sight. Figure 16(e) shows the Hartmannograms generated as each of the five lines of sight enters the five separate Hartmann sensors, and Fig. 16(f) displays the outcomes of wavefront reconstruction using the traditional single-line-of-sight algorithm on these five Hartmannograms. Figure 16(g) reveals the residuals between the calculated wavefronts and their true values. It should be noted that the units for all color legends in this paper are expressed in $\lambda$, which will not be reiterated subsequently.

Fig. 16. A set of typical wavefront reconstruction results from the multi-line-of-sight method (upper) and single-line-of-sight method (lower).

The comparison reveals that the residual errors in wavefront reconstruction are essentially consistent between the two approaches, indicating that our algorithm can achieve the same reconstruction precision as that obtained under ideal conditions with multiple wavefront sensors, even when employing only a single sensor. Compared to traditional methods, our approach significantly reduces the number of sensors required, thereby decreasing optical path complexity and simplifying system calibration. For a more comprehensive verification, we compared multi-view and single-view Hartmann-Shack systems under various atmospheric turbulence intensities (Fig. 17).

Fig. 17. Comparison of multi-line-of-sight and single-line-of-sight Hartmann-Shack wavefront reconstruction under various atmospheric turbulence intensities (mean of 100 data sets).

We observed that at higher D/r0 values, our multi-line-of-sight algorithm (using a single wavefront sensor) slightly surpasses the traditional single-line-of-sight method under ideal conditions (using five wavefront sensors with no assembly errors) in accuracy. This improvement is linked to the increased deviation of light spots from subaperture centers at higher D/r0: traditional centroid methods, confined to individual subapertures, do not account for diffraction from adjacent subapertures, whereas our edge extraction method, which truncates at the light spot edges, avoids this cross-talk and thus retains precision as spots deviate from the center.
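This effect can be illustrated with a minimal sketch (our own illustrative Python, not the paper's implementation; the window size and the 10% edge threshold `frac` are assumed parameters). A plain center of gravity over a fixed subaperture window is biased by light leaking in from a neighboring spot, while a centroid restricted to the spot's own support, truncated at the spot edge, is not:

```python
import numpy as np

def centroid_fixed_window(img, x0, y0, w):
    """Plain center of gravity over a fixed w x w subaperture window.
    Diffraction tails from neighboring spots inside the window bias the result."""
    sub = img[y0:y0 + w, x0:x0 + w].astype(float)
    ys, xs = np.mgrid[0:w, 0:w]
    total = sub.sum()
    return x0 + (xs * sub).sum() / total, y0 + (ys * sub).sum() / total

def centroid_edge_masked(img, x0, y0, w, frac=0.1):
    """Center of gravity restricted to the spot's own support: pixels below
    `frac` of the local peak are truncated at the spot edge, so light leaking
    in from adjacent subapertures is excluded from the sum."""
    sub = img[y0:y0 + w, x0:x0 + w].astype(float)
    sub = sub * (sub >= frac * sub.max())
    ys, xs = np.mgrid[0:w, 0:w]
    total = sub.sum()
    return x0 + (xs * sub).sum() / total, y0 + (ys * sub).sum() / total
```

With a synthetic spot plus a weak tail leaking in from a neighboring subaperture, the edge-masked estimate stays essentially unbiased while the fixed-window estimate drifts toward the leaked light.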

3.1.3 Cases of partial subaperture light loss

In the context of atmospheric turbulence, scintillation frequently leads to partial subaperture light loss. We conducted simulations to evaluate our algorithm's performance under such conditions. Figures 18 and 19 illustrate representative results, comparing centroid calculations and wavefront reconstructions without light loss and with 15% random light loss.

Fig. 18. Centroid calculation results with 15% light loss ((a) centroid calculation result without light loss; (b) light spot array resulting from 15% light loss; (c) centroids calculated using the centroid filling algorithm under 15% light loss).

Fig. 19. Wavefront reconstruction results and residuals without light loss (upper), wavefront reconstruction results and residuals using the centroid filling algorithm under a 15% light loss (middle), wavefront reconstruction results and residuals using the slope-zeroing algorithm under the same 15% light loss (lower).

Figure 19(a) displays the true wavefronts for the five lines of sight. Figures 19(b) and (c) respectively show the wavefront reconstruction results under ideal conditions (no light loss) and the residuals between these results and the true values. Figures 19(d) and (e) show the reconstruction results and the associated residuals when the centroid filling algorithm is used under 15% light loss. Figures 19(f) and (g) show the reconstruction results and residuals obtained with the traditional slope-zeroing method under the same 15% light loss. For clarity, the leftmost column displays the corresponding Hartmannograms.

These data demonstrate that under partial light loss, our algorithm still calculates centroids accurately, with the introduced errors remaining within a controllable range. Furthermore, its precision significantly surpasses that of the traditional slope-zeroing method, indicating superior efficacy for wavefront reconstruction amid subaperture light loss. We further analyzed wavefront reconstruction accuracy across various light loss rates, contrasting our centroid filling algorithm with the conventional single-line-of-sight slope-zeroing approach (Fig. 20).

Fig. 20. Wavefront reconstruction accuracy of the centroid filling algorithm and the slope-zeroing algorithm at different light loss rates (mean of 100 data sets).

Notably, even at 30% light loss, our algorithm maintained a wavefront RMS within 0.1 $\lambda$, significantly outperforming the direct zeroing approach. This demonstrates the algorithm's robustness against atmospheric scintillation.
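The advantage of replacing lost measurements rather than zeroing them can be sketched as follows. This is a simplified stand-in for the paper's centroid filling algorithm (the actual procedure is given as Algorithm 3), assuming the missing slope is estimated from the mean of its valid 4-neighbours; the array shapes are illustrative:

```python
import numpy as np

def fill_lost_slopes(slopes, valid):
    """Replace slopes of lost subapertures with the mean of their valid
    4-neighbours (a hypothetical stand-in for the paper's centroid filling
    algorithm). The slope-zeroing baseline simply leaves lost entries at zero.
    slopes: (ny, nx, 2) x/y slope per subaperture; valid: (ny, nx) bool."""
    out = np.where(valid[..., None], slopes, 0.0)
    ny, nx = valid.shape
    for i in range(ny):
        for j in range(nx):
            if valid[i, j]:
                continue
            neigh = [(i2, j2)
                     for i2, j2 in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= i2 < ny and 0 <= j2 < nx and valid[i2, j2]]
            if neigh:  # no valid neighbour -> falls back to zero
                out[i, j] = np.mean([slopes[i2, j2] for i2, j2 in neigh], axis=0)
    return out
```

Because atmospheric wavefront slopes vary smoothly across neighbouring subapertures, the neighbour-mean estimate stays close to the lost value, whereas zeroing injects an error equal to the full local slope.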

3.2 Experiment

To verify the algorithm's effectiveness under actual atmospheric turbulence, we established two multi-line-of-sight adaptive optics (AO) experimental platforms for two-line-of-sight and three-line-of-sight atmospheric turbulence detection and correction experiments; both AO systems operate over a 3 km atmospheric transmission path.

3.2.1 Two-line-of-sight system

Our two-line-of-sight wavefront detection system, illustrated in Fig. 21, is divided into three primary components: the beacon sources (Fig. 21(b)), the laser receiving system, and the laser transmission correction system (Fig. 21(c)). Two 660 nm lasers served as beacon sources: one at a fixed position and the other mobile, so that the distance between the beacons could be varied. The receiving system, comprising a $\phi$280 mm aperture telescope and auxiliary mirrors, captures the beacon light. The transmission correction system comprises a collimating mirror, a high-density deformable mirror, reflective mirrors, and a composite sensor, which together detect and correct the atmospheric-turbulence-induced aberrations on the beacon light.

Fig. 21. Partial experimental optical path diagram of the two-line-of-sight system ((a) schematic depiction of the 3 km laser transmission experimental scenario; (b) the two laser beacon sources; (c) the telescope system and wavefront correction system).

Light from the beacon sources propagates three kilometers through the atmosphere before being captured by the telescope (Fig. 21(a)). The laser light converged by the telescope system is collimated and then directed through reflective mirrors and the deformable mirror before entering the composite detector. This detector integrates a Hartmann-Shack wavefront detection module, a near-field detection module, and a far-field detection module, with provision for external wavefront detectors or fiber collimators.

To provide a comprehensive overview, Table 1 summarizes the key experimental parameters:

Table 1. Experimental Setup Parameters

In our two-line-of-sight system, we evaluated performance in both open-loop and closed-loop configurations. The experimental outcomes demonstrate the efficacy of our multi-line-of-sight wavefront detection method, which was applied to process the light spot arrays captured by the Hartmann-Shack sensor.

Open-Loop Results:

In the open-loop scenario, the wavefront root mean square (RMS) error for both lines of sight hovered around 1 $\lambda$, indicative of the typical impact of atmospheric turbulence without active correction. The beam quality factor $\beta$ (defined as the ratio of the actual spot radius encompassing 68% of the energy to the corresponding ideal spot radius) was 6.72 and 3.30 for the two lines of sight, illustrating the substantial atmospheric distortion of beam quality in the absence of adaptive correction.
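The beam quality factor defined above can be computed directly from far-field images. The sketch below is an illustrative implementation of that definition (the function names and grid are our own assumptions, not the paper's code):

```python
import numpy as np

def r68(img, frac=0.68):
    """Radius, about the spot centroid, of the circle enclosing `frac`
    of the total energy in the image."""
    img = np.asarray(img, float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    tot = img.sum()
    cy = (ys * img).sum() / tot
    cx = (xs * img).sum() / tot
    r = np.hypot(xs - cx, ys - cy).ravel()
    order = np.argsort(r)                      # pixels from centre outwards
    cum = np.cumsum(img.ravel()[order])        # encircled energy vs radius
    return r[order][np.searchsorted(cum, frac * tot)]

def beam_quality_beta(actual_img, ideal_img):
    """beta = r68(actual) / r68(ideal); beta approaches 1 as the corrected
    spot approaches the diffraction-limited one."""
    return r68(actual_img) / r68(ideal_img)
```

For two Gaussian far-field spots the ratio reduces to the ratio of their widths, so a broader turbulence-degraded spot yields $\beta > 1$, and $\beta \to 1$ as the corrected spot matches the ideal one.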

Closed-Loop Results:

Upon initiating closed-loop correction, a significant enhancement in beam quality was observed. The wavefront RMS for both lines of sight was reduced to within 0.2 $\lambda$, a clear indication of the system's robustness in counteracting atmospheric aberrations. Correspondingly, the beam quality factors improved markedly to 1.60 and 1.72, underscoring the efficacy of the closed-loop system. These results not only validate the algorithm's functionality under real atmospheric turbulence but also highlight its potential applicability in sophisticated multi-line-of-sight AO systems such as MCAO or MOAO.

Figure 22 presents a set of typical experimental data. Panels (a), (b), and (c) respectively depict the outcomes in open-loop, closed-loop for line-of-sight 1, and closed-loop for line-of-sight 2. Figure 23 illustrates the transition of the system from open-loop to closed-loop operation, capturing the continuous evolution of beam quality for both lines of sight over this period.

Fig. 22. Experimental data from the two-line-of-sight wavefront detection system ((a), (b), and (c) show the experimental data for the system in open loop, closed loop targeting line of sight 1, and closed loop targeting line of sight 2, respectively. Within each, label 1 shows the light spot arrays from the two beacons, labels 2 and 3 show the reconstructed wavefronts for lines of sight 1 and 2, and label 4 shows the corresponding far-field spots).

Fig. 23. Continuous variation of beam quality $\beta$ for lines of sight 1 and 2 during transition from open-loop to closed-loop operation.

These comprehensive parameters and results validate the robust adaptability and precision of our experimental setup in addressing the challenges posed by atmospheric turbulence, thereby enhancing the overall performance of the two-line-of-sight system.

3.2.2 Three-line-of-sight system

In the three-line-of-sight system, in view of real-time processing constraints, we conducted wavefront detection for three lines of sight without closed-loop correction. Figure 24 illustrates part of the experimental optical path. The setup was similar to the two-line-of-sight system but used three 660 nm lasers over the 3 km path, received by a Hartmann-Shack sensor (44 subapertures, effective focal length 21.31 mm, pixel size 5.5 $\mu$m), without a deformable mirror for correction. Figure 25 displays the results: Fig. 25(a) shows the light spot array at the Hartmann focal plane, and Figs. 25(b)–(d) show the calculated wavefronts.

Fig. 24. Partial experimental optical path diagram of the three-line-of-sight system ((a) the three laser beacon sources; (b) the telescope system and wavefront correction system).

Fig. 25. Experimental data from the three-line-of-sight wavefront detection system ((a) the light spot array at the Hartmann focal plane; (b)–(d) the calculated wavefronts).

Notably, owing to environmental differences, the atmospheric turbulence was considerably stronger than in the two-line-of-sight experiment. This led to significant subaperture light loss and spot dispersion in the measurements, which in turn produced substantial wavefront reconstruction errors. Traditional algorithms largely fail under such conditions, whereas our algorithm retains the capability for wavefront reconstruction, indicating the robustness and reliability of our approach for wavefront detection amid severe atmospheric disturbances.

4. Conclusion

This study delves into multi-line-of-sight wavefront sensing within the context of real atmospheric turbulence. We introduce a unified single-sensor approach for multi-line-of-sight detection, effectively addressing centroid computation and classification challenges amid atmospheric disturbances. Our method, surpassing existing techniques, achieves precise centroid extraction and classification across wider application scenarios (where the positions of multiple laser guide stars are uncertain, or in dynamic environments with multiple moving targets). This advancement offers solutions for wide-field and multi-target wavefront correction, showing promise for application in MCAO, MOAO, and other complex AO systems.

This paper’s key innovations encompass three aspects for advanced wavefront sensing. First, a novel centroid calculation method for multi-spot arrays combines edge detection and peak extraction, enhancing precision and avoiding edge confusion. Second, an improved k-means clustering algorithm facilitates accurate classification of centroids under severe atmospheric turbulence. Lastly, a centroid filling algorithm effectively addresses light loss in multi-view Hartmann-Shack systems without needing control matrix reconstruction. These advancements offer enhanced accuracy and robustness compared to traditional methods, marking a significant leap in multi-line-of-sight AO technology.

Integrating the three key components, we devised a comprehensive multi-line-of-sight Hartmann-Shack wavefront detection method. Its effectiveness was initially confirmed through simulations. We then tested the algorithm’s performance in actual atmospheric conditions using two-line-of-sight and three-line-of-sight systems. In the two-line system, successful closed-loop correction was achieved, demonstrating the method’s efficacy in real turbulence. The three-line system, due to time constraints, focused solely on wavefront detection without closed-loop correction. Future research will aim to optimize the algorithm for real-time processing.

Funding

National Natural Science Foundation of China (62305343); Sichuan Science and Technology Program (2022JDRC0095).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Z. Gao, X. Li, and H. Ye, “Large dynamic range Shack–Hartmann wavefront measurement based on image segmentation and a neighbouring-region search algorithm,” Opt. Commun. 450, 190–201 (2019). [CrossRef]  

2. R. Changhui, Z. Lei, Z. Lanqiang, et al., “Development of solar adaptive optics,” Opto-Electron. Eng. 45, 170733 (2018). [CrossRef]  

3. L. Zhang, Y. Guo, and C. Rao, “Solar multi-conjugate adaptive optics based on high order ground layer adaptive optics and low order high altitude correction,” Opt. Express 25(4), 4356–4367 (2017). [CrossRef]  

4. X. Wei and L. Thibos, “Design and validation of a scanning Shack-Hartmann aberrometer for measurements of the eye over a wide field of view,” Opt. Express 18(2), 1134–1143 (2010). [CrossRef]  

5. C. Leroux and C. Dainty, “Estimation of centroid positions with a matched-filter algorithm: relevance for aberrometry of the eye,” Opt. Express 18(2), 1197–1206 (2010). [CrossRef]  

6. J. Beckers, “Increasing the size of the isoplanatic patch with multiconjugate adaptive optics,” in ESO Conference and Workshop Proceedings, (1988), pp. 693–703.

7. M. Tallon and R. Foy, “Adaptive telescope with laser probe-isoplanatism and cone effect,” Astron. Astrophys. 235, 549–557 (1990).

8. O. von der Lühe, T. Berkefeld, and D. Soltau, “Multi-conjugate solar adaptive optics at the Vacuum Tower Telescope on Tenerife,” C. R. Phys. 6(10), 1139–1147 (2005). [CrossRef]  

9. T. Berkefeld, D. Soltau, and O. von der Lühe, “Multiconjugate adaptive optics at the Vacuum Tower Telescope, Tenerife,” in Adaptive Optical System Technologies II, vol. 4839 (SPIE, 2003), pp. 544–553.

10. H. Sana, Y. Momany, M. Gieles, et al., “A MAD view of Trumpler 14,” Astron. Astrophys. 515, A26 (2010). [CrossRef]  

11. D. Schmidt, N. Gorceix, P. R. Goode, et al., “CLEAR widens the field for observations of the Sun with multi-conjugate adaptive optics,” Astron. Astrophys. 597, L8 (2017). [CrossRef]  

12. F. Rigaut, B. Neichel, M. Boccas, et al., “GeMS first on-sky results,” in Adaptive Optics Systems III, vol. 8447 (SPIE, 2012), pp. 149–163.

13. C. Arcidiacono, M. Lombini, A. Moretti, et al., “An update of the on-sky performance of the layer-oriented wave-front sensor for MAD,” arXiv, arXiv:1009.3393 (2010). [CrossRef]  

14. C. Verinaud, C. Arcidiacono, M. Carbillet, et al., “Layer-oriented multiconjugate adaptive optics systems: performance analysis by numerical simulations,” in Adaptive Optical System Technologies II, vol. 4839 (SPIE, 2003), pp. 524–535.

15. A. V. Goncharov, N. M. Devaney, T. Farrell, et al., “Alternative schemes for multi-reference wavefront sensing,” in Adaptive Optics Systems, vol. 7015 (SPIE, 2008), pp. 1405–1413.

16. E. Gendron, F. Vidal, M. Brangier, et al., “MOAO first on-sky demonstration with CANARY,” Astron. Astrophys. 529, L2 (2011). [CrossRef]  

17. U. Conod, O. Lardière, K. Jackson, et al., “Status of the GIRMOS MOAO demonstrator,” in Adaptive Optics Systems VIII, vol. 12185 (SPIE, 2022), pp. 1279–1290.

18. A. Costille, C. Petit, and J.-M. Conan, “Wide field adaptive optics laboratory demonstration with closed-loop tomographic control,” J. Opt. Soc. Am. A 27(3), 469–483 (2010). [CrossRef]  

19. L. Yang, L. Hu, and D. Li, “Multiple-object Shack–Hartmann wavefront sensor design for a wide field of view on the retina,” Chin. Opt. Lett. 13(12), 120801 (2015). [CrossRef]  

20. T. Zhou and H. Lu, “Clustering algorithm research advances on data mining,” Computer Engineering and Applications 48, 100–111 (2012).

21. W. Ping, L. Xinyang, L. Xi, et al., “Influence of lack of light in partial subapertures on wavefront reconstruction for Shack-Hartmann wavefront sensor,” Chin. J. Lasers 47(4), 409002 (2020).




Equations (4)

$$\left[\nabla W\right]_{i,j}=\left[\begin{pmatrix}\partial/\partial x\\ \partial/\partial y\end{pmatrix}W\right]_{i,j}=\left[\frac{2\pi}{\lambda f}\begin{pmatrix}\Delta x\\ \Delta y\end{pmatrix}\right]_{i,j}\tag{1}$$

$$C_{ave,k}=\frac{\sum_{m=i-1}^{i+1}\sum_{n=j-1}^{j+1}S_{m,n,k}}{M}\tag{2}$$

$$C_{ref,k}=C_{ave,k}\,w_1+C_{k}\,w_2\tag{3}$$

$$w_1+w_2=1\tag{4}$$
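Equations (2)–(4), the neighbourhood weighting used to build reference centres for the improved k-means classification, can be sketched as follows. This is a minimal illustration only: the array layout, the inclusion of the centre subaperture in the 3×3 sum, and the default weight $w_1$ are our assumptions, not specifications from the paper.

```python
import numpy as np

def reference_centers(S, C, i, j, w1=0.6):
    """Eqs. (2)-(4): blend the mean spot position over subaperture (i, j)'s
    3x3 neighbourhood (per line of sight k) with the k-means centre C_k.
    S: (ny, nx, K, 2) spot position of sight line k in each subaperture.
    C: (K, 2) current k-means centres. Returns C_ref of shape (K, 2)."""
    ny, nx, K, _ = S.shape
    i0, i1 = max(i - 1, 0), min(i + 2, ny)      # clip the 3x3 window at edges
    j0, j1 = max(j - 1, 0), min(j + 2, nx)
    neigh = S[i0:i1, j0:j1]                     # M neighbours, shape (., ., K, 2)
    C_ave = neigh.reshape(-1, K, 2).mean(axis=0)   # Eq. (2): sum / M
    w2 = 1.0 - w1                                  # Eq. (4)
    return C_ave * w1 + np.asarray(C, float) * w2  # Eq. (3)
```

Weighting the global cluster centre by the local neighbourhood average pulls each reference centre toward the locally observed spot positions, which is what makes the classification robust when strong turbulence shifts spots far from their nominal subaperture centres.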