
Hybrid optimization algorithm based on neural networks and its application in wavefront shaping

Open Access

Abstract

The scattering effect of turbid media can lead to optical wavefront distortion. Focusing light through turbid media can nevertheless be achieved using wavefront shaping techniques. Intelligent optimization algorithms and neural network algorithms are two powerful classes of algorithms in the field of wavefront shaping, each with its own advantages and disadvantages. In this paper, we propose a new hybrid algorithm that combines the particle swarm optimization (PSO) algorithm and a single-layer neural network (SLNN) to exploit the complementary advantages of both. A small training set is used to train the SLNN to obtain a preliminary focusing result, after which PSO continues the optimization toward the global optimum. The hybrid algorithm achieves faster convergence and higher enhancement than PSO alone, while reducing the number of training samples required by the SLNN. An SLNN trained with 1700 samples speeds up the convergence of PSO by about 50% and boosts the final enhancement by about 24%. This hybrid algorithm will be of great significance in fields such as biomedicine and particle manipulation.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The material composition of a turbid medium is inhomogeneous, and light propagating in it is affected by scattering [1]. Random scattering disrupts the optical wavefront, resulting in speckle patterns in the transmitted field. Wavefront shaping techniques can compensate for the aberration that the scattering medium introduces into the incident light, thus achieving a point focus on the imaging surface [2–5]. Three methods are commonly used to implement wavefront shaping: the transmission matrix method [6–10], the phase conjugation approach [11], and iterative optimization algorithms [2,3,12–19].

The transmission matrix method establishes the connection between the incident and outgoing optical fields, and the modulation mask is derived from the target optical field. Based on a photoacoustic transmission matrix model, T. Zhao et al. achieved optical focusing inside a scattering medium [6]. The transmission matrix method has also been used for focusing through multimode fibers, since the superposition of multimode fiber modes affects the beam in much the same way as a scattering medium [7]. Because the transmission matrix method can quickly compute the masks corresponding to different focusing targets after a single measurement, it also enables dynamic focusing based on fast scanning, which has important applications in biomedicine [8]. However, despite its high accuracy and scalability, measuring the transmission matrix is time-consuming [9,10]. In contrast, the optimization time of the phase conjugation approach is short, but the experimental setup is complex and demands high accuracy in alignment and timing control [11]. Iterative optimization algorithms are capable of quickly approximating the target optical field but usually run the risk of falling into a local optimum. Among them, the stepwise sequential algorithm and the continuous sequential algorithm calculate the phase values one by one; as a result, the optimization is slow and the final results are vulnerable to noise [2,12]. The partitioning algorithm has a better signal-to-noise ratio (SNR), but its optimization is even slower [13]. A. Drémeau et al. proposed an iterative optimization algorithm based on the Kullback-Leibler divergence under the mean-field assumption, achieving efficient and robust optimization [14]. In addition, bionic optimization algorithms, including the genetic algorithm (GA) and particle swarm optimization (PSO), also offer fast convergence and high SNR, but all of them risk falling into local optima [15–19].

Recently, neural network algorithms have shown great potential in imaging and microscopy with the rapid development of computer technology and hardware capabilities [20,21]. S. Li et al. introduced neural networks to wavefront shaping by training the network on paired images, the outgoing optical field and the corresponding mask on a spatial light modulator (SLM), to generate the mask directly from a given target optical field [22]. Later, A. Turpin et al. reported their work on single-layer neural networks (SLNN), which can be trained more rapidly and achieve higher focusing intensity than conventional convolutional neural networks [23]. However, the training process of a neural network is time-consuming, and a large number of training samples is required to obtain a focus with intensity approaching the theoretical value, because the model accuracy is closely tied to the training set size. Weighing the pros and cons of these two types of algorithms, Y. Luo et al. proposed the hybrid algorithm GeneNN, which combines GA with a deep convolutional neural network (DCNN) to reduce the training cost of the neural network as well as the number of GA iterations. Although this combination speeds up convergence, the GA iteration still has to start from an enhancement close to zero, and a large number of optimization iterations are still needed to converge. In practical applications, camera acquisition and data transmission occupy a major part of the iterative optimization time. Depending on the population size of the intelligent algorithm, one iteration usually requires tens or even hundreds of image acquisitions, which leads to a lengthy optimization. Moreover, training the DCNN still requires a large training set, which further increases the time spent capturing images. As a result, the total time consumed by that hybrid algorithm can even exceed that of GA alone [24]. This places higher demands on the stability of both the device and the environment in practical applications.

In this article, we propose a novel hybrid algorithm combining PSO and SLNN, which achieves faster convergence and higher focusing intensity than conventional algorithms. Exploiting the unique iterative approach of PSO, our proposed combination method allows the hybrid algorithm to start the iterative optimization from a high enhancement, reducing the optimization time. Besides, compared to a DCNN, the SLNN achieves a higher enhancement in a shorter training time thanks to its simpler network structure [23]. The combination method designed around PSO's characteristics allows the SLNN and PSO to each play to their strengths. In our experiments, we achieved both a reduction in the total optimization time and a significant improvement in the final enhancement. In the remainder of this paper, we first detail the theoretical framework of the hybrid algorithm, including two combination methods. Then, to demonstrate the superiority of the hybrid algorithm, we experimentally compare its convergence with that of PSO for single-point focusing. Finally, we discuss the effect of population size on the hybrid algorithm by comparing the optimization processes of the two combination methods and PSO under different population sizes.

2. Theory

2.1 Particle swarm algorithm

The physical model of PSO is a foraging bird flock, where each particle in the population independently searches for the global optimal solution based on its individual memory and the direction of the population optimum. For an individual particle, the best position recorded in its previous movements and the best position found by the whole population together guide the direction of its next move, which is determined by the difference between its current state and these two values. The magnitude of the difference determines the speed of motion, which decreases as the particle approaches the target.

We used randomly generated DMD masks to initialize the positions of all particles. Equation (1) evaluates the gap between each individual and the target:

$$\eta = \frac{I_m}{I_{ref}},$$
where η is the enhancement of the focus, Im is the optimized focus intensity, and Iref is the reference intensity.

The evaluation value is called the fitness of the particle. The position with the highest fitness found by the i-th particle up to the k-th generation is recorded as the vector P_i^k, representing the individual optimum, while the position with the highest fitness among all individuals in the population is recorded as the vector G_i^k, representing the population optimum. The direction of the subsequent motion is decided by the differences between the current position and P_i^k and G_i^k. With the following equations

$$\boldsymbol{v}_i^{k+1} = w \cdot \boldsymbol{v}_i^k + c_1 \cdot r_1 \cdot \left( \boldsymbol{P}_i^k - \boldsymbol{x}_i^k \right) + c_2 \cdot r_2 \cdot \left( \boldsymbol{G}_i^k - \boldsymbol{x}_i^k \right),$$
$$\boldsymbol{x}_i^{k+1} = \boldsymbol{v}_i^{k+1} + \boldsymbol{x}_i^k,$$
where v_i^k is the velocity of the i-th individual in the k-th generation, w is the inertia (rate) factor, c1 and c2 are the weighting factors of the individual optimum and the population optimum respectively, r1 and r2 are random numbers between 0 and 1, and x_i^k is the position of this particle in the k-th generation, we can update the positions of the particles. We then recalculate the gap between each particle and the target and repeat the movement process. As the motion proceeds, the particles move in the direction of the optimal solution and keep narrowing the gap; with enough iterations, the swarm converges to the global optimal solution.
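To make the update rules concrete, the following is a minimal NumPy sketch of Eqs. (2) and (3); it is an illustration under stated assumptions, not the authors' implementation (their control code was written in C++). The fitness callback stands in for a camera measurement of Eq. (1), and thresholding the continuous particle positions at 0.5 to obtain binary DMD masks is our assumption, since the paper does not specify the encoding.

```python
import numpy as np

def binarize(x):
    """Assumption: threshold continuous positions to a binary DMD mask."""
    return (x > 0.5).astype(float)

def pso_optimize(fitness, n_units, n_particles, n_iters,
                 w=0.95, c1=2.0, c2=2.0, x0=None, rng=None):
    """Minimal PSO following Eqs. (2)-(3).

    fitness(mask) should return the enhancement eta of Eq. (1) for a
    binary mask; in the experiment this is a camera measurement.
    x0 optionally seeds the first generation (used by the hybrid algorithm).
    """
    rng = rng or np.random.default_rng(0)
    x = rng.random((n_particles, n_units)) if x0 is None else x0.astype(float).copy()
    v = np.zeros_like(x)
    fit = np.array([fitness(binarize(xi)) for xi in x])
    p_best, p_fit = x.copy(), fit.copy()           # individual optima P_i^k
    g_best = x[np.argmax(fit)].copy()              # population optimum G_i^k
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # Eq. (2)
        x = np.clip(x + v, 0.0, 1.0)                                 # Eq. (3), kept in [0, 1]
        fit = np.array([fitness(binarize(xi)) for xi in x])
        improved = fit > p_fit
        p_best[improved], p_fit[improved] = x[improved], fit[improved]
        g_best = p_best[np.argmax(p_fit)].copy()
    return binarize(g_best), float(p_fit.max())
```

With x0 left at None the sketch reduces to plain PSO; the hybrid algorithm of Sec. 2.3 supplies x0 from the SLNN prediction.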

2.2 Single-layer neural network

The SLNN is a linearized neural network with only one fully connected layer (FCL); its structure is shown in Fig. 1.

Fig. 1. SLNN structure diagram.

If the m-dimensional vector A represents the input, the m×n matrix W represents the weights, and the n-dimensional vector B represents the bias, the output Z can be expressed as follows:

$$\boldsymbol{A}^T = \begin{bmatrix} a_1 & a_2 & a_3 & \cdots & a_{m-1} & a_m \end{bmatrix},$$
$$\boldsymbol{B}^T = \begin{bmatrix} b_1 & b_2 & b_3 & \cdots & b_{n-1} & b_n \end{bmatrix},$$
$$\boldsymbol{W} = \begin{bmatrix} w_{11} & \cdots & w_{1n} \\ \vdots & \ddots & \vdots \\ w_{m1} & \cdots & w_{mn} \end{bmatrix},$$
$$\boldsymbol{Z} = \boldsymbol{A}^T \boldsymbol{W} + \boldsymbol{B},$$
$$\boldsymbol{Z} = \sigma(\boldsymbol{Z}),$$
where σ is the activation function.
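For reference, Eqs. (4)-(8) amount to a single matrix-vector product followed by an activation. A minimal sketch follows, assuming a flattened speckle image as input and a step activation with a 0.5 threshold (the threshold value is our assumption; Sec. 3.2 discusses the choice of activation):

```python
import numpy as np

def slnn_forward(a, weights, bias):
    """Single-layer forward pass, Eqs. (7)-(8): Z = sigma(A^T W + B).

    a: flattened speckle image, shape (m,)
    weights: matrix W, shape (m, n)
    bias: vector B, shape (n,)
    """
    z = a @ weights + bias          # Eq. (7)
    return (z > 0.5).astype(float)  # Eq. (8), step activation (threshold assumed)
```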

2.3 Hybrid algorithm

Although the optimization quality of a neural network is limited by the training set size, it does not require a large number of training samples to obtain a preliminary focus. Increasing the training set size costs a lot of time in exchange for a small improvement in focus intensity, which is unacceptable in practical applications [23]. At the same time, the optimization result of the neural network can guide the optimization direction at the beginning of the intelligent optimization algorithm's iteration. Compared with random initialization, an intelligent optimization algorithm that continues to optimize from the pre-processed mask is less likely to converge to a local optimum and correspondingly obtains more stable results.

The structure of the hybrid algorithm is shown in Fig. 2. Using the speckle patterns and DMD modulation masks as the input and output, the neural network is trained to learn the relation between them. After training, the image of the target focus is fed into the neural network, and the network outputs the predicted DMD mask. Using the predicted mask as the starting point, the intelligent optimization algorithm continues to optimize toward the global optimum.

Fig. 2. Schematic diagram of the structure of the hybrid algorithm.

In this hybrid algorithm, the neural network serves to guide the initial direction of optimization; it therefore does not require particularly high optimization quality. The training set can thus be compressed to a small size, substantially increasing the training speed and cutting the time consumption. The intelligent optimization algorithm that performs the subsequent optimization also converges to the global optimum more quickly, because the first generation contains individuals closer to the target and the direction of population optimization is less likely to deviate.

In our experiments, we used two different methods to feed the SLNN pre-processing result into PSO, as shown in Fig. 3 (a minimal sketch of both seeding schemes follows the figure). Combination-1 uses the neural-network-predicted mask as one individual of the first generation of the intelligent optimization algorithm; this individual is already preliminarily focused and has a higher evaluation value than the other, randomly generated masks, guiding the direction of the subsequent optimization. With this method, the role of the dominant individual is more pronounced: at the beginning of the iteration, particles are induced to gather toward and search around the predicted mask. Combination-2 uses the mask as the basis of the entire first generation: each first-generation individual randomly inherits part of the mask while the rest is randomly initialized, so that the whole first generation is influenced by the mask and has a higher initial evaluation value. In this case, the starting particles are distributed around the predicted mask, with it as the center of the random initialization.

Fig. 3. Structure of two combination methods of hybrid algorithms. (a) combination-1, (b) combination-2.
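As referenced above, the two seeding schemes might look as follows in NumPy. The masks are binary, and the 50% inheritance probability in combination-2 is our assumption; the paper states only that part of the mask is randomly inherited.

```python
import numpy as np

def init_population(pred_mask, n_particles, method, inherit_prob=0.5, rng=None):
    """Seed the first PSO generation with the SLNN-predicted mask.

    method=1 (combination-1): the prediction replaces one individual,
    the rest of the population is randomly initialized.
    method=2 (combination-2): every individual inherits each control unit
    of the prediction with probability inherit_prob (assumed value) and is
    random elsewhere, so the swarm starts distributed around the prediction.
    """
    rng = rng or np.random.default_rng(0)
    pop = (rng.random((n_particles, pred_mask.size)) > 0.5).astype(float)
    if method == 1:
        pop[0] = pred_mask
    else:
        inherit = rng.random(pop.shape) < inherit_prob
        pop = np.where(inherit, pred_mask, pop)
    return pop
```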

For PSO, combination-1 makes the particle carrying the neural-network-predicted mask the closest to the optimal solution in the first generation; it therefore becomes the population optimum, and the gap between its evaluation value and those of the other particles is large. This optimal particle's position thus dominates when the velocities of the other particles are calculated: the role of each particle's individual optimum is weakened, and the randomness of the optimization direction is reduced. Because of random initialization, the other particles' evaluation values are low and their differences from the optimal particle are large, leading to high movement speeds in the early stage of optimization, which quickly raises the focus intensity. In combination-2, the initial positions of the particles are determined by random inheritance from the predicted mask, so the particles are distributed around it. With the population already closer to the optimum, individual particles do not differ much from the population optimum and move at lower speeds.

3. Experiment

3.1 Experimental setup

Our experimental setup is shown in Fig. 4. The light source is a 532 nm solid-state continuous-wave laser (self-made) with a maximum output power of 5 W. The laser is first expanded by the beam expander, and the polarization state of the light is then modulated by the combination of a half-wave plate and a polarizer. A high-speed DMD (ViALUX, V7001, resolution: 1024×768, pixel size: 13.7 μm×13.7 μm) is employed to modulate the wavefront. The plane wavefront is incident on the DMD, reflected, and then relayed to the scattering medium through a 4f system (F1 = 600 mm, F2 = 300 mm). The scattering medium is a ground glass diffuser (Edmund, 220 grit) which scatters the light passing through it. An objective lens (Obj, Olympus, MPLN20×, NA = 0.40) and an imaging lens (F3 = 180 mm) image the transmitted light onto the complementary metal-oxide-semiconductor (CMOS) camera (PCO, edge 4.2 bi, pixel size: 6.5 μm×6.5 μm). The control programs for the camera and DMD were written in C++ (Visual Studio 2019 Community). The neural network was implemented in MATLAB (R2020a) with GPU-accelerated computation. The hardware configuration is CPU: Intel Core i5-10400F, GPU: NVIDIA GeForce GTX 1660 SUPER.

Fig. 4. Experimental setup. F1, F2, F3: lens, HWP: half-wave plate, P1, P2: polarizer, BE: beam expander, AP: small aperture diaphragm, S: scattering medium, Obj: objective lens.

3.2 Experimental results

The network is trained using 1700 pairs of transmitted light field intensity distributions and DMD masks, and tested on 1000 pairs not used for training. The trained network is fed the focus map of the target location and predicts the corresponding DMD modulation mask. Of the 1024×768 pixels on the DMD, we selected the central 512×512 pixels to participate in the modulation and grouped each adjacent 16×16 pixels into one control unit, giving 32×32 control units.
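Expanding the 32×32 optimization mask to the central 512×512 region of the DMD is a simple block upsampling; a minimal sketch is given below, where leaving the unmodulated border mirrors off is our assumption.

```python
import numpy as np

def mask_to_dmd_frame(mask32, frame_shape=(768, 1024), unit=16):
    """Expand a 32x32 control-unit mask into a full DMD frame.

    Each control unit drives a unit x unit block of micromirrors; the
    modulated 512x512 block is centered in the 1024x768 frame, and the
    surrounding mirrors are left off (our assumption).
    """
    block = np.kron(mask32.astype(np.uint8),
                    np.ones((unit, unit), dtype=np.uint8))  # 512x512 block
    frame = np.zeros(frame_shape, dtype=np.uint8)
    r0 = (frame_shape[0] - block.shape[0]) // 2
    c0 = (frame_shape[1] - block.shape[1]) // 2
    frame[r0:r0 + block.shape[0], c0:c0 + block.shape[1]] = block
    return frame
```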

The SLNN is a linear network containing only one fully connected layer between the input and output layers; the result is then mapped to between 0 and 1 by an activation function. We found that with a sigmoid activation, the network is prone to overfitting and cannot predict well. Given the application's requirement of binary modulation, the activation function is replaced with a step function, which solves the overfitting problem and gives the model a better focusing effect [23]. The training time of the model depends on the training set size and the number of epochs; it is 12.41 s for 1700 pairs and 10 epochs. Once training is finished, only 6 ms is needed to make a prediction. The mask prediction results of the network model for the focus field are shown in Fig. 5, with the target location marked by dashed blue circles.

Fig. 5. SLNN prediction results. (a) scattered field before optimization, (b) focus field after optimization, (c) target focus field.
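A hard step activation has zero gradient and cannot be trained through directly, so one plausible realization of the training, our assumption rather than the authors' MATLAB implementation, is to fit the single fully connected layer as a regularized linear regression and apply the step only at prediction time:

```python
import numpy as np

def train_slnn(speckles, masks, reg=1e-3):
    """Fit W and B of Eq. (7) by ridge-regularized least squares.

    speckles: (n_samples, m) flattened intensity images (inputs);
    masks:    (n_samples, n) binary DMD masks (targets).
    For full camera frames m is large, so in practice the speckle
    images would be cropped or downsampled first.
    """
    a = np.hstack([speckles, np.ones((speckles.shape[0], 1))])  # absorb bias
    gram = a.T @ a + reg * np.eye(a.shape[1])
    wb = np.linalg.solve(gram, a.T @ masks)   # (A^T A + reg I)^-1 A^T Y
    return wb[:-1], wb[-1]                    # weights W, bias B

def predict_mask(speckle, weights, bias):
    """Step activation applied at prediction time (threshold assumed 0.5)."""
    return (speckle @ weights + bias > 0.5).astype(float)
```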

The enhancement of the focus field is η = 77. Enlarging the training set improves the prediction accuracy of the network and increases the focus intensity, but at the cost of longer training, as shown in Fig. 6. The SLNN requires only 1000 training samples to obtain an optimized optical field with the target focus features, but the achieved enhancement does not grow linearly with training set size; the slope becomes progressively smaller. Above 10,000 training samples, increasing the training set size no longer yields a significant gain in enhancement.

Fig. 6. SLNN with different training sample sizes. (a) time consuming, (b) enhancement.

After the SLNN prediction, we let the first generation of PSO randomly inherit the predicted mask, realizing the combination of SLNN and PSO. The resulting optimization is shown in Fig. 7, generated with combination-2.

Fig. 7. Hybrid algorithm optimization results. (a) SLNN optimization results, (b) results after PSO optimization, (c) hybrid algorithm and PSO enhancement with the iterative process.

The SLNN obtained a preliminary focus field with a small number of training samples (1700), achieving an enhancement of 77 as shown in Fig. 7(a); the subsequent optimization was continued by the PSO algorithm. Here we set the PSO parameters to w = 0.95 and c1 = c2 = 18, and the population size NG to 70. After about 120 iterations, the algorithm converges to the global optimum with an enhancement of 131.2 as shown in Fig. 7(b), close to the theoretical value of 32×32×(1/2π) ≈ 162.9 [25]. The initial SLNN result guides the direction of the subsequent optimization: the mask predicted by the SLNN corresponds to a focused light field at the target location, so this individual has the highest fitness in the first generation of the PSO algorithm and becomes the population optimum. The movement of the population in the subsequent optimization is guided by this result, avoiding the risk of falling into a local optimum compared with random movement at the beginning of the iteration. At the same time, the mask given by the SLNN is close to the global optimum, so the first-generation individuals are distributed near the global optimum. As a result, the search range of the particle swarm is narrowed and the convergence speed is improved.

The hybrid algorithm was compared with PSO under the same parameter conditions; the results are shown in Fig. 7(c). Using only the PSO algorithm, the global optimum reached was 105.2, and an enhancement of 100 was reached after 106 iterations; with the SLNN preprocessing, the global optimum was 131.2, and only 46 iterations were needed to reach an enhancement of 100. The results show that incorporating the SLNN both enhances the final focusing intensity of PSO and speeds up its convergence.
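Putting the pieces together, a simulated end-to-end run of combination-2 with the parameters reported above (w = 0.95, c1 = c2 = 18, NG = 70, 120 iterations), reusing the pso_optimize and init_population sketches from Sections 2.1 and 2.3, might look as follows. The random transmission vector and the artificially degraded prediction stand in for the camera and the trained SLNN, so the resulting numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32 * 32

# Stand-in for the optical system: intensity at the target focus through one
# row of a random complex transmission matrix, normalized by the average
# speckle intensity of a half-on mask (assumed reference for Eq. (1)).
t = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * n)
i_ref = 0.5 * np.sum(np.abs(t) ** 2)
fitness = lambda mask: float(np.abs(t @ mask) ** 2) / i_ref

# Stand-in for the SLNN output: a near-optimal mask for this t with some
# control units flipped, mimicking a preliminary focus.
ideal = (t.real > 0).astype(float)
pred_mask = np.where(rng.random(n) < 0.2, 1 - ideal, ideal)

pop = init_population(pred_mask, n_particles=70, method=2, rng=rng)  # combination-2
mask, eta = pso_optimize(fitness, n, n_particles=70, n_iters=120,
                         w=0.95, c1=18.0, c2=18.0, x0=pop, rng=rng)
print(f"final enhancement (simulated): {eta:.1f}")
```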

4. Discussion

In the hybrid algorithm, PSO continues the optimization from the SLNN-predicted mask, and the population size directly determines the influence of the predicted mask on the population. Small populations are more easily influenced by the predicted mask overall, but also risk falling into a local optimum. Large populations have a larger search range but weaken the guiding role of the predicted mask. In our experiment, we tested the performance of the hybrid algorithm using six population sizes, NG = 20, 30, 40, 50, 60, 70, and compared the two different combination methods. The experimental results are shown in Fig. 8.

Fig. 8. The curves of enhancement for PSO and two different combinations with different population sets. (a)-(f) NG = 20, 30, 40, 50, 60, 70.

In combination-1, the predicted mask is added to the first generation. This particle has a higher evaluation value and directly guides the movement of the whole population, while the other particles are drawn by the population optimum toward the predicted mask. With this combination method, the enhancement is higher at the beginning of the iteration, and subsequent iterations build on it. However, this approach also greatly limits the direction of the particle swarm search and tends to fall into local optima: compared with PSO and combination-2, it ultimately achieves a lower enhancement.

In combination-2, the first generation is generated from the predicted mask. Through partial inheritance combined with partial randomness, the particle population is distributed around the predicted mask, which shrinks the search scope for the global optimum. The first generation has a higher overall evaluation value, no single individual restricts the direction of motion, and the global search capability of the subsequent optimization is stronger. This combination method achieves faster convergence and a higher final enhancement than PSO.

In addition, as the population size NG changes, the characteristics of the two combination methods relative to PSO remain essentially unchanged. The number of iterations required to converge decreases as NG increases.

Here we also quantitatively analyze the improvement in convergence speed and final enhancement achieved by the hybrid algorithm for different population sizes. For the convergence speed, we count the number of iterations required to reach different enhancement values and, combined with the corresponding population size, calculate the number of image acquisitions saved; weighing this against the extra time for neural network training gives the total time consumption of the hybrid algorithm. The results are presented in Table 1. In Table 2, we calculate the improvement of the final enhancement over PSO for both combinations at different population sizes, where negative values represent a decrease.

Table 1. Comparison of convergence speed

Table 2. Percentage of final enhancement boost

The convergence speedups corresponding to the three enhancement values follow the same pattern: the number of image acquisitions saved by the hybrid algorithm increases with the population size, which in turn reduces the overall optimization time. For combination-1, the time saved in reaching moderate enhancement values is large because the initial value is already very high, eliminating the optimization at the beginning of the iteration. In contrast, however, combination-1 is also more likely to fall into a local optimum, achieving a significant boost in final enhancement only at NG = 20 compared with PSO; for larger population sizes, its final enhancements are all close to those of PSO, with no significant improvement. Combination-2 starts the optimization from a relatively high enhancement and additionally speeds up the iterative convergence. It achieves a stable speedup for different populations and enhancement targets, and once the population size reaches 70, the total time consumed by the combination-2 hybrid algorithm, including the neural network training, is lower than that of PSO. It should be noted, however, that for small populations and low target enhancements the reduction in image acquisitions is not significant, and the overall time consumption of the hybrid algorithm exceeds that of PSO once the image acquisition time required for neural network training is taken into account. In terms of optimization quality, combination-2 improves the final enhancement by up to 24.7%. With a large population size, combination-2 thus improves both convergence speed and optimization quality.

5. Conclusions

In conclusion, we have proposed a novel high-quality hybrid algorithm for optimizing the wavefront and focusing light through turbid media. The algorithm combines PSO and SLNN to make the best use of their complementary advantages: an SLNN trained with a small number of samples first finds a preliminary optimization result, and PSO then continues the optimization to reach the global optimum. The SLNN pre-optimization dramatically accelerates the convergence of PSO and improves the enhancement. With the hybrid algorithm, we experimentally realized light focusing with an enhancement of 131.2, 24.7% higher than that achieved by PSO, while the time consumed for iterative optimization was reduced by 54.9%. We believe this hybrid algorithm will be of significance to wavefront shaping technology, as well as to research fields such as biomedicine and particle manipulation.

Funding

National Natural Science Foundation of China (61875100, 61905128).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. W. Goodman, Speckle phenomena in optics: theory and applications (Roberts and Company Publishers, 2007).

2. I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309–2311 (2007).

3. I. M. Vellekoop, “Feedback-based wavefront shaping,” Opt. Express 23(9), 12189–12206 (2015).

4. P. Lai, L. Wang, J. W. Tay, and L. V. Wang, “Photoacoustically guided wavefront shaping for enhanced optical focusing in scattering media,” Nat. Photonics 9(2), 126–132 (2015).

5. O. Tzang, E. Niv, A. M. Caravaca-Aguirre, and R. Piestun, “Thermal expansion feedback for wave-front shaping,” Opt. Express 25(6), 6122–6131 (2017).

6. T. Zhao, S. Ourselin, T. Vercauteren, and W. Xia, “High-speed photoacoustic-guided wavefront shaping for focusing light in scattering media,” Opt. Lett. 46(5), 1165–1168 (2021).

7. G. Huang, D. Wu, J. Luo, Y. Huang, and Y. Shen, “Retrieving the optical transmission matrix of a multimode fiber using the extended Kalman filter,” Opt. Express 28(7), 9487–9500 (2020).

8. X. Tao, D. Bodington, M. Reinig, and J. Kubby, “High-speed scanning interferometric focusing by fast measurement of binary transmission matrix for channel demixing,” Opt. Express 23(11), 14168–14187 (2015).

9. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the Transmission Matrix in Optics: An Approach to the Study and Control of Light Propagation in Disordered Media,” Phys. Rev. Lett. 104(10), 100601 (2010).

10. H. Zhang, B. Zhang, and Q. Liu, “OAM-basis transmission matrix in optics: a novel approach to manipulate light propagation through scattering media,” Opt. Express 28(10), 15006–15015 (2020).

11. M. Cui and C. Yang, “Implementation of a digital optical phase conjugation system and its application to study the robustness of turbidity suppression by phase conjugation,” Opt. Express 18(4), 3444–3455 (2010).

12. J. V. Thompson, B. H. Hokr, and V. V. Yakovlev, “Optimization of focusing through scattering media using the continuous sequential algorithm,” J. Mod. Opt. 63(1), 80–84 (2016).

13. I. M. Vellekoop and A. P. Mosk, “Phase control algorithms for focusing light through turbid media,” Opt. Commun. 281(11), 3071–3080 (2008).

14. A. Drémeau and F. Krzakala, “Phase recovery from a Bayesian point of view: The variational approach,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2015), pp. 3661–3665.

15. Q. Feng, B. Zhang, Z. Liu, C. Lin, and Y. Ding, “Research on intelligent algorithms for amplitude optimization of wavefront shaping,” Appl. Opt. 56(12), 3240–3244 (2017).

16. B. Zhang, Z. Zhang, Q. Feng, Z. Liu, C. Lin, and Y. Ding, “Focusing light through strongly scattering media using genetic algorithm with SBR discriminant,” J. Opt. 20(2), 025601 (2018).

17. D. B. Conkey, A. N. Brown, A. M. Caravaca-Aguirre, and R. Piestun, “Genetic algorithm optimization for focusing through turbid media in noisy environments,” Opt. Express 20(5), 4840–4849 (2012).

18. B. Li, B. Zhang, Q. Feng, X. Cheng, Y. Ding, and Q. Liu, “Shaping the Wavefront of Incident Light with a Strong Robustness Particle Swarm Optimization Algorithm,” Chin. Phys. Lett. 35(12), 124201 (2018).

19. H. Huang, Z. Chen, C. Sun, J. Liu, and J. Pu, “Light Focusing through Scattering Media by Particle Swarm Optimization,” Chin. Phys. Lett. 32(10), 104202 (2015).

20. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Learning approach to optical tomography,” Optica 2(6), 517–522 (2015).

21. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017).

22. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5(7), 803–813 (2018).

23. A. Turpin, I. Vishniakou, and J. D. Seelig, “Light scattering control in transmission and reflection with neural networks,” Opt. Express 26(23), 30911–30929 (2018).

24. Y. Luo, S. Yan, H. Li, P. Lai, and Y. Zheng, “Focusing light through scattering media by reinforced hybrid algorithms,” APL Photonics 5(1), 016109 (2020).

25. D. Akbulut, T. J. Huisman, E. G. van Putten, W. L. Vos, and A. P. Mosk, “Focusing light through random photonic media by binary amplitude modulation,” Opt. Express 19(5), 4017–4029 (2011).

