
Spatially-resolved bending recognition based on a learning-empowered fiber specklegram sensor

Open Access

Abstract

Fiber specklegram sensors do not rely on complex fabrication processes or expensive interrogation schemes and thus provide an alternative to routinely used fiber sensing technologies. Most reported specklegram demodulation schemes focus on correlation calculations based on statistical properties or on classification according to features, resulting in limited measurement range and resolution. In this work, we propose and demonstrate a learning-empowered, spatially resolved method for fiber specklegram bending sensors. The method learns the evolution of the speckle patterns through a hybrid framework composed of a data dimension reduction algorithm and a regression neural network, which can simultaneously identify the curvature and the perturbed position from the specklegram, even for unlearned curvature configurations. Rigorous experiments verify the feasibility and robustness of the proposed scheme: the prediction accuracy for the perturbed position is 100%, and the average prediction errors for the curvature of learned and unlearned configurations are 7.79 × 10⁻⁴ m⁻¹ and 7.02 × 10⁻² m⁻¹, respectively. The proposed method promotes the application of fiber specklegram sensors in practical scenarios and provides insights into the interrogation of sensing signals by deep learning.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Curvature measurement is vital in mechanical engineering, architecture and the aerospace industry [1]. Fiber-optic curvature sensors offer many advantages over electrical sensor-based solutions, including small size, immunity to electromagnetic interference, high sensitivity and resistance to harsh environments [2]. Common fiber curvature sensors fall into three categories: fiber Bragg gratings (FBG) [3–5], long-period fiber gratings (LPFG) [6–9] and fiber interferometers [1,10–14]. Each of these schemes has its merits, but most rely on complex fabrication processes and expensive sensor interrogation schemes [15]. For example, fiber interferometers for curvature measurement are competitive due to their high sensitivity, but most reported interferometers require complex fabrication procedures, such as tapering [1,14], twisting [11], etching [10] or splicing [12,13]. In this context, fiber specklegram sensors are a promising alternative to the fiber sensing schemes above. Fiber specklegrams are patterns of bright and dark dots generated at the distal end of a fiber by the interference between numerous eigenmodes. Specklegram-based sensing schemes demodulate the signal by tracking the variation of the speckle pattern. Fiber specklegram sensors offer sensitivity comparable to other kinds of fiber sensors and can be implemented with a simple experimental setup, and thus have the potential to circumvent the shortcomings of traditional schemes [15]. Various techniques have been reported to analyze changes in speckle patterns, such as statistical analysis [16], morphological image processing [17], the normalized inner-product coefficient (NIPC) [18,19], and the zero-mean normalized cross-correlation coefficient (ZNCC) [15]. However, these techniques analyze speckle patterns statistically and predict the sensing parameter from a correlation coefficient. They cannot fully describe how the speckle pattern evolves under a specific perturbation, because much of the information carried by the numerous fiber modes is left unused.

Deep learning, which can mine and learn the principal features of given data using multiple layers of processing units, has become a hot research topic in recent years. Inspired by its outstanding achievements in engineering-related domains [20,21], deep learning has been applied to optics and photonics [22–24] and has proved applicable to the demodulation of sensing signals [25–27]. Most reported learning-based fiber specklegram sensing systems employ classification neural networks. Considering that most parameters in nature vary continuously, a classification neural network that can only distinguish discrete states may not be an efficient interrogation method, especially for sensing applications requiring high resolution [28–31]. In principle, this approach could be extended by training the classification neural network on a larger solution space or a dataset containing more configurations. However, statistically exploring such an extended solution space requires adding a large number of possible configurations to the training set, which then becomes too bloated to process effectively in terms of time cost and experimental effort. Furthermore, one cannot expect to learn all potential configurations of the fiber. Predicting unlearned configurations becomes possible if a neural network can mine and learn the evolution rules of the speckle patterns, and the regression neural network is a natural candidate. By learning the mapping between speckle patterns and disturbances, a regression neural network can identify unlearned configurations, circumventing the drawbacks of classification neural networks. In 2022, G. Li et al. applied a regression model based on a convolutional neural network architecture to successfully demodulate unlearned configurations [32]. Unfortunately, that scheme has unsatisfactory prediction accuracy and can only predict a single parameter, failing to take full advantage of the generalization capability of the regression model.

In this work, we demonstrate a learning-empowered, spatially resolved fiber specklegram curvature sensor for high-precision dual-parameter sensing using a hybrid framework constructed from the principal component analysis (PCA) method and a BP neural network. The PCA algorithm reduces the dimensionality of the collected samples, removing redundant information and noise while lowering the computational complexity. The BP neural network, a classical regression model, learns the evolution of the speckle patterns on the optimized dataset and fits the mapping between the specklegram and the configuration. The trained model can simultaneously identify the curvature and the perturbed position from the speckle pattern. Rigorous experiments verify the robustness and accuracy of the proposed scheme. The test results indicate that, for learned configurations, the prediction error of the curvature is 7.79 × 10⁻⁴ m⁻¹ and the prediction accuracy of the perturbed position is 100%. For configurations that have never been learned or seen, the prediction error for the curvature and the prediction accuracy for the perturbed position are 7.02 × 10⁻² m⁻¹ and 100%, respectively. Compared with reported learning-based fiber specklegram sensing systems, the proposed scheme further exploits the potential of neural networks and adapts them better to the sensing scenario, allowing spatially resolved identification of the bending state of the optical fiber and providing an instructive reference for solving sensing problems with deep learning.

2. Methods

2.1 Design principle

As light propagates through the fiber, the multimode interference effect distorts the incident beam, generating a speckle pattern at the distal end of the fiber. Although visually random, the propagation of the incident beam is a deterministic process when the optical fiber can be regarded as a linear medium [33]. Assuming that the incident light is coherent, the speckle field at the exit end of the fiber can be expressed as the superposition of all excited modes [34]:

$$A(x,y) = \sum_{m = 0}^{M} a_m(x,y)\exp [j\phi_m(x,y)],$$
where M is the number of eigenmodes, a_m is the amplitude distribution of the m-th mode, and ϕ_m is the phase distribution of the m-th mode. The intensity of the speckle field recorded by the camera can be expressed as:
$$I(x,y) = |A(x,y)|^2 = \sum_{n = 0}^{M}\sum_{m = 0}^{M} a_m a_n \exp [j(\phi_m - \phi_n)].$$

As seen, the speckle pattern is formed by the interference of the M excited modes in the multimode fiber (MMF), and any change in the mode propagation will cause a variation of the speckle pattern. When the fiber is bent, the change of bending radius induces phase differences and energy coupling between the excited modes and thus alters the speckle pattern. Therefore, the bending state of the fiber can be demodulated by analyzing the speckle pattern. The scheme proposed in this work is illustrated in Fig. 1. The speckle patterns formed at the exit end of the fiber are collected and stored as training samples. The PCA algorithm is then employed to reduce the dimensionality of the collected samples, lowering the computational complexity while removing redundant information and noise. PCA is a technique for exploring the structure of high-dimensional data, projecting data from a high-dimensional to a low-dimensional space while maximally preserving its intrinsic features [35,36]. In this work, the combination of curvature and perturbed position is defined as a configuration. The BP neural network learns the mapping between speckle patterns and configurations on the pre-processed dataset, and the trained model can simultaneously identify the corresponding curvature and perturbed position from the specklegram.
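To make the mode-superposition picture above concrete, the short sketch below (our illustration, not part of the reported method) builds a toy speckle intensity from the intensity expression above, using randomly placed Gaussian "modes" as stand-ins for the true fiber eigenmodes; perturbing the modal phases, as a bend would, visibly reshapes the pattern.

```python
import numpy as np

# Toy illustration of I = |sum_m a_m exp(j*phi_m)|^2.
# The "modes" below are random smooth fields, not true LP fiber modes.
rng = np.random.default_rng(0)
N, M = 128, 20                                    # grid size, number of modes
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)

centers = rng.uniform(-0.5, 0.5, size=(M, 2))     # fixed mode shapes
phases = rng.uniform(0.0, 2.0 * np.pi, size=M)    # fixed initial modal phases

def speckle(dphi=0.0):
    """Intensity pattern; dphi mimics a bend-induced per-mode phase shift."""
    field = np.zeros((N, N), dtype=complex)
    for m in range(M):
        a_m = np.exp(-((X - centers[m, 0])**2 + (Y - centers[m, 1])**2) / 0.1)
        field += a_m * np.exp(1j * (phases[m] + dphi * m))
    return np.abs(field)**2

I0 = speckle()         # unperturbed speckle
I1 = speckle(0.05)     # small modal phase change -> visibly different speckle
```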

Fig. 1. Overview of bending recognition scheme based on PCA-BP model.

The architecture of the BP neural network used in this work is shown in Fig. 2. BP neural network is a classical multilayer perceptron, which can effectively approximate the continuous function to be fitted through the forward propagation of data and the backward propagation of error [3739]. In this work, the structure of the BP neural network is 30-25-10-5, in which the input layer has 30 neurons, the hidden layer 1 has 25 neurons, the hidden layer 2 has 10 neurons, and the output layer has 5 neurons. The number of neurons in the input layer is equal to the dimensionality of the low-dimensional space constructed by the PCA algorithm, and the five neurons in the output layer represent the curvature corresponding to the five monitored positions respectively. Once the unknown samples are fed into the network, the trained model can output the curvature corresponding to each monitored position based only on the speckle pattern. Thus, the model proposed in this work is not only able to predict single point excitations, but is also suitable for demodulating perturbations applied to multiple locations simultaneously.
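For reference, a minimal PyTorch sketch of a 30-25-10-5 regression network of this kind is given below; the activation function, loss, and optimizer are our assumptions, since the text only specifies the layer sizes and the 0.01 initial learning rate.

```python
import torch
import torch.nn as nn

# 30 PCA components in, 5 curvature values out (one per monitored position P1-P5).
bp_net = nn.Sequential(
    nn.Linear(30, 25), nn.Sigmoid(),   # hidden layer 1 (activation assumed)
    nn.Linear(25, 10), nn.Sigmoid(),   # hidden layer 2 (activation assumed)
    nn.Linear(10, 5),                  # output: curvature at each position, in 1/m
)

criterion = nn.MSELoss()                                     # loss assumed
optimizer = torch.optim.SGD(bp_net.parameters(), lr=0.01)    # initial lr from the text
```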

Fig. 2. Architecture of the deep learning model for identifying bending status and perturbed positions.

2.2 Experimental setup and data acquisition

The optical configuration used to collect the data is depicted in Fig. 3(a). A solid-state laser (MGL-III, 532 nm, 50 mW) is employed as the illumination source. The light emitted from the laser is coupled into the proximal end of a 1.5-m-long MMF (step index, 62.5/125 µm core/cladding diameters) using a microscope objective (OBJ1). The intensity of the speckle pattern emerging from the output plane of the MMF is imaged by a second microscope objective (OBJ2) onto a charge-coupled device (CCD) camera (FLIR, GS3-U3-91S6M-C, 3376 × 2704 pixels). Five monitored positions (labeled P1-P5) are selected along the MMF, and a precise micrometer driver is used to apply local deformation. Specifically, the micrometer driver bends the fiber through a three-point contact, as shown in Fig. 3(b), resulting in a bell-shaped local deformation of the fiber and a corresponding change of the speckle pattern. The curvature of the bend is determined by the applied displacement of the micrometer driver and can be expressed as:

$$C = \frac{{2d}}{{{d^2} + \frac{{{L^2}}}{4}}},$$
where d is the applied displacement and L is the length of the monitored position.
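Plugging in the experimental values reported later in this section (monitored length L = 80 mm and a maximum displacement d = 6 mm, with all lengths assumed to be in metres) reproduces the 0 to 7.33 m⁻¹ curvature range quoted there; a one-line check:

```python
def curvature(d_m: float, L_m: float) -> float:
    """Bend curvature C = 2d / (d^2 + L^2/4), all lengths in metres."""
    return 2.0 * d_m / (d_m**2 + L_m**2 / 4.0)

print(curvature(0.006, 0.080))   # ~7.33 1/m at the maximum 6 mm displacement
```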

Fig. 3. Schematic of the experimental setup. OBJ: microscope objective (OBJ1: 20×, numerical aperture (NA) = 0.40; OBJ2: 40×, NA = 0.75); CCD: charge-coupled device camera; MMF: multimode fiber; P1-P5: perturbed positions.

To achieve learning-based spatially resolved bending recognition, speckle patterns need to be collected under different scenes. By changing the local deformation, a solution space and datasets containing a large number of possible configurations can be constructed. Specifically, the first step is to introduce the deformation at position P1 while keeping the other positions stationary. The local deformation at position P1 is varied using the precision micrometer driver, and the speckle patterns corresponding to the different curvatures are recorded. In this work, the applied displacement d ranges from 0 mm to 6 mm in steps of 0.1 mm, and the corresponding curvature ranges from 0 m⁻¹ to 7.33 m⁻¹. The length L of the monitored position is 80 mm, and the distance L2 between adjacent positions is 100 mm. A total of 61 groups of samples were collected under different conditions, covering 60 different curvatures plus the original (unperturbed) status. Twenty images were collected for each curvature, and the acquisition process was repeated five times to ensure the diversity of the data. It should be noted that the speckle pattern corresponding to the original state is the same for all monitored positions, so it is sufficient to collect the original state only once. Following the method described above, local deformations were applied to all monitored positions in turn and the corresponding specklegrams were collected. In total, 30,100 speckle patterns were collected (5 positions × 60 curvatures × 20 images × 5 repetitions = 30,000, plus 20 images × 5 repetitions = 100 for the shared original state).

The second step is to build a dataset from the collected speckle patterns. To evaluate the generalization ability of the model more rigorously, two different test sets were used in this work. Specifically, 10 of the 61 groups of samples collected when local deformation was applied within position P1 were randomly selected and assigned to Group B, while the remaining samples were assigned to Group A. The samples collected when the other positions were disturbed were then processed in the same way. Group A comprises 251 groups of samples from the 5 monitored positions, with a total of 25,100 speckle patterns. Group B comprises 50 groups of samples from the 5 monitored positions, with a total of 5,000 speckle patterns. The speckle patterns in Group A were divided into training set A and test set A in a 4:1 ratio, and the samples in Group B constitute test set B. Training set A provides the solution space in which the neural network learns the mapping between the speckle pattern and the bending status and perturbed position. Test set A characterizes the generalization ability of the trained model to learned bending statuses and perturbed positions. The samples in test set B are all collected from configurations that have never been seen or learned and are used to further test the feasibility and robustness of the proposed scheme.
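A sketch of this dataset bookkeeping is given below; the file names and array layout are hypothetical, and only the 4:1 split ratio and the group definitions come from the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# X_*: preprocessed specklegrams, y_*: 5-element curvature vectors in 1/m
# (placeholder file names for the data described in Section 2.2).
X_A, y_A = np.load("group_A_images.npy"), np.load("group_A_labels.npy")
X_B, y_B = np.load("group_B_images.npy"), np.load("group_B_labels.npy")

# Group A: 4:1 split into training set A and test set A (learned configurations).
X_trainA, X_testA, y_trainA, y_testA = train_test_split(
    X_A, y_A, test_size=0.2, random_state=0)

# Group B is held out entirely as test set B (unlearned configurations).
X_testB, y_testB = X_B, y_B
```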

3. Results and discussion

Before being processed by the PCA-BP model, the collected samples are cropped to a 2500 × 2500-pixel window centered on the speckle and then downsampled to 250 × 250 pixels. The processed speckle patterns are shown in Fig. 4(a), where the upper panel shows the speckle patterns collected when different local deformations are applied to the fiber within position P1, and the bottom panel depicts the differences between adjacent patterns. The speckle pattern is clearly susceptible to external perturbations, and there are significant differences between the specklegrams corresponding to different bending states. The linchpin of the proposed scheme is whether the evolution rules of the speckle pattern with respect to curvature and position can be learned simultaneously. Hence, it is necessary to investigate the variability of the speckle when the same perturbation is applied at different positions. The top panel of Fig. 4(b) shows the speckle patterns collected from the distal end of the fiber when the same curvature (2.49 m⁻¹) is applied at each of the five monitored positions, while the bottom panel shows the differences between adjacent patterns. Again, there are significant differences between the captured speckle patterns. In general, the prediction accuracy of the trained model increases with the difference between configurations. From these observations, it can be concluded that the speckle pattern is highly sensitive to external perturbations, responds in a spatially resolved manner, and differs sufficiently between configurations to be learned by the neural network.
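A minimal preprocessing sketch, assuming 8-bit grayscale frames and that the 10× downsampling (2500 → 250 pixels per side) is done by simple block averaging; the exact interpolation used in the experiment is not specified.

```python
import numpy as np

def preprocess(frame: np.ndarray, cy: int, cx: int) -> np.ndarray:
    """Crop a 2500x2500 window centred on (cy, cx) and downsample to 250x250."""
    half = 1250
    crop = frame[cy - half:cy + half, cx - half:cx + half].astype(np.float32)
    # 10x10 block averaging: (2500, 2500) -> (250, 250)
    return crop.reshape(250, 10, 250, 10).mean(axis=(1, 3))
```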

Fig. 4. (a) The speckle patterns collected when different local deformations are applied to the fiber within position P1 (upper panel) and the differences between adjacent speckle patterns (bottom panel). (b) The speckle patterns collected when the same local deformation is applied to the fiber within different positions (upper panel) and the differences between adjacent speckle patterns (bottom panel).

Next, the dimensionality of the low-dimensional space constructed by the PCA algorithm needs to be determined. The role of the PCA algorithm is to reduce the dimension of the sample space while preserving the abstract features of the specklegram as much as possible, which helps remove redundant information and improves the learning efficiency and convergence speed of the model. Inevitably, as the dimensionality is reduced, some helpful data features are lost, which affects the prediction accuracy of the model. A tradeoff therefore has to be made between convergence speed and model accuracy. In this work, the cumulative contribution rate, defined as the ratio of the features contained in the low-dimensional space to those contained in the original space, is used to determine the dimensionality. As shown in Fig. 5, when the dimension is 30, the features contained in the low-dimensional space account for 99.11% of the original sample space. Therefore, the dimension chosen in this work is 30, i.e., the dimension of each sample is reduced from 62,500 (250 × 250) to 30.
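The dimension choice can be reproduced with scikit-learn's PCA; a sketch (array names hypothetical) that picks the smallest dimensionality whose cumulative explained-variance ratio reaches 99%:

```python
import numpy as np
from sklearn.decomposition import PCA

# Flattened training specklegrams (hypothetical file from the split sketch above).
X_train = np.load("group_A_images.npy").reshape(-1, 250 * 250)

pca = PCA(n_components=100).fit(X_train)            # fit more components than needed
cumulative = np.cumsum(pca.explained_variance_ratio_)
n_dim = int(np.searchsorted(cumulative, 0.99)) + 1   # ~30 in this work (99.11%)
print(n_dim, cumulative[n_dim - 1])

pca30 = PCA(n_components=30).fit(X_train)            # final projection fed to the BP network
X_train_30 = pca30.transform(X_train)
```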

Fig. 5. The cumulative contribution rate as a function of the dimension.

The described work is implemented on a computer equipped with an i7-10857H CPU. First, the PCA algorithm is used to reduce the dimensionality of all samples in training set A, which takes about 10 seconds. Then, the BP neural network is trained on the optimized training set A, which takes 411 seconds. The initial learning rate is 0.01. In this work, the weights of similar models trained in other specklegram demodulation experiments were extracted and loaded onto the constructed model as initial values to alleviate the uncertainty caused by initial-value sensitivity. The trained model can simultaneously predict the curvature and the perturbed position from the collected speckle patterns. Test set A is employed to test the generalization ability of the trained model to the learned configurations, and the test results are presented in Figs. 6, 7, and 8. The demodulation speed of the trained model is 0.01 milliseconds per frame. Figure 6 depicts the prediction ability of the trained model for the perturbed positions. The trained model has excellent spatial discrimination ability, and the prediction accuracy of the perturbed position reaches 100%.
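For completeness, a compact scikit-learn version of the whole PCA-BP workflow is sketched below, reusing the hypothetical arrays from the split sketch in Section 2.2; the transfer of weights from a previously trained model is omitted, the activation and optimizer are assumptions, and reading out the perturbed position as the output whose predicted curvature departs most from zero is our choice of decoding step, not necessarily the one used here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

# X_*: preprocessed 250x250 frames, y_*: 5-element curvature targets in 1/m
# (hypothetical arrays X_trainA, y_trainA, X_testA, y_testA from the split sketch).
model = make_pipeline(
    PCA(n_components=30),
    MLPRegressor(hidden_layer_sizes=(25, 10), activation="logistic",
                 learning_rate_init=0.01, max_iter=2000, random_state=0),
)
model.fit(X_trainA.reshape(len(X_trainA), -1), y_trainA)

y_pred = model.predict(X_testA.reshape(len(X_testA), -1))   # (N, 5) curvatures in 1/m
position = np.argmax(np.abs(y_pred), axis=1) + 1             # decoded perturbed position P1-P5
```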

Fig. 6. The trained model is used to predict the perturbed positions of the samples contained in test set A.

Fig. 7. The trained model is utilized to identify the bending status of the samples contained in test set A.

Fig. 8. The absolute error between the predicted curvature and the true value on test set A.

Figure 7 compares the curvature predicted by the trained model on test set A with the true values. The points in Fig. 7 are clustered along the diagonal and the slope of the fitted curve is very close to 1, demonstrating that the trained model generalizes well to learned configurations. The error between the true and predicted curvature is within ±1 × 10⁻² m⁻¹ for 100% of the samples, and the proportions of prediction errors within ±1 × 10⁻³ m⁻¹ and ±1 × 10⁻⁴ m⁻¹ are 84.4% and 55.7%, respectively. Figure 8 shows the absolute error between the predicted and true curvatures. Note that, for convenience of analysis, the absolute error between the true value and the average prediction over all speckle patterns collected under the same configuration is reported. The average prediction error of the trained model on test set A is 7.79 × 10⁻⁴ m⁻¹. For learned configurations, the trained model thus exhibits high generalization ability and can accurately predict the curvature and perturbed position at the same time.
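These accuracy figures correspond to straightforward post-processing of the predictions; a hedged sketch (using the hypothetical y_pred and y_testA arrays from the pipeline sketch above, plus a hypothetical config_ids label array) of how the tolerance-band fractions and the per-configuration average error can be computed:

```python
import numpy as np

# Worst-case per-frame curvature error across the five outputs, in 1/m.
err = np.abs(y_pred - y_testA).max(axis=1)

for tol in (1e-2, 1e-3, 1e-4):                 # tolerance bands quoted in the text
    print(f"within ±{tol:g} 1/m: {100 * np.mean(err <= tol):.1f}%")

# Average error per configuration: predictions of frames sharing a configuration
# are averaged before comparison with the true value (config_ids is hypothetical).
cfgs = np.unique(config_ids)
per_cfg = [np.abs(y_pred[config_ids == c].mean(axis=0) -
                  y_testA[config_ids == c][0]).max() for c in cfgs]
print("average error:", np.mean(per_cfg), "1/m")
```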

The samples in test set A are all collected from learned configurations and mainly reflect the tolerance of the trained model to environmental noise. In practical applications, the model cannot be expected to learn all possible configurations, so its generalization ability to unlearned configurations determines its practicability and reliability. The samples in test set B are collected from configurations that have not been seen or learned by the model and are used to further test the generalization ability of the trained model. The test results are shown in Figs. 9, 10, and 11. Figure 9 depicts the ability of the trained model to recognize the perturbed positions. The trained model still accurately identifies the perturbed positions even though the test samples are taken from configurations the model has never seen. The prediction accuracy of the perturbed position again reaches 100%, which highlights the strength of the proposed scheme.

Fig. 9. The trained model is used to predict the perturbed positions of the samples contained in test set B.

Fig. 10. The trained model is utilized to identify the bending status of the samples contained in test set B.

Fig. 11. The absolute error between the predicted curvature and the true value on test set B.

The curvature predictions of the trained model for the samples in test set B are shown in Fig. 10. The points in Fig. 10 are again clustered along the diagonal and the slope of the fitted curve is close to 1, indicating good agreement between the predicted and true curvatures. The error between the true and predicted curvature is within ±0.3 m⁻¹ for 97.7% of the samples, and the proportions of prediction errors within ±0.2 m⁻¹ and ±0.1 m⁻¹ are 90.1% and 78.3%, respectively. The absolute error between the predicted and true curvature is shown in detail in Fig. 11. The average prediction error of the trained model on test set B is 7.02 × 10⁻² m⁻¹. Although the task is more challenging, the trained model still shows satisfactory spatial discrimination and bending recognition ability for unlearned configurations.

To quantitatively evaluate the contribution of the PCA technique, the same dataset was used to train a BP neural network without PCA preprocessing, and the prediction results of this BP model were compared with those of the proposed PCA-BP model. In the comparison experiments, the original-resolution specklegrams were not used directly as input samples for the BP model, because doing so would greatly increase the computational effort and require a huge amount of physical memory, making it difficult for the computer to complete the iterative computation. After weighing the resolution of the input samples against the computational effort, the input samples were downsampled to 32 × 32 pixels (so the input layer contains 1024 neurons), which is the largest input size the computer used in this work can handle. The training time of the BP model without PCA preprocessing is about 8 hours. The same test sets as before were used to evaluate the generalization ability of the trained BP model. The experimental results show that the average curvature demodulation error of the BP model without PCA preprocessing is 0.18 m⁻¹ for the learned samples, with a demodulation accuracy of 45.67% for the perturbed positions. For the unlearned samples, the average curvature demodulation error of the BP model is 1.16 m⁻¹, and the demodulation accuracy for the perturbed positions is 17.6%. In contrast, the curvature prediction errors of the PCA-BP model are 7.79 × 10⁻⁴ m⁻¹ and 7.02 × 10⁻² m⁻¹ for the learned and unlearned configurations, respectively, while the demodulation accuracy for the perturbed positions is 100% in both cases. The training time of the PCA-BP model is 411 seconds, so the PCA-BP model improves the training speed by approximately 70 times compared with the BP model without PCA preprocessing. In addition, the PCA-BP model shows significant advantages in the demodulation accuracy of both the perturbed position and the curvature.

The robustness of the proposed scheme was demonstrated by a long-term quantification of the system stability. In the stability test, the bending state of the fiber was kept constant for about 10 hours. A speckle pattern was collected from the distal end of the fiber every 5 minutes, and the Pearson correlation coefficient (PCC) was used to describe the correlation between these speckle patterns. The test results are shown in Fig. 12. Although the correlation between the speckle patterns decreases with time, it consistently remains above 97% for at least 10 hours, demonstrating the robustness of the proposed sensing system.
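The stability metric is the standard Pearson correlation between each later frame and the initial reference frame; a minimal sketch (frame file name hypothetical):

```python
import numpy as np

def pcc(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Pearson correlation coefficient between two speckle frames."""
    return float(np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1])

frames = np.load("stability_frames.npy")        # (n_frames, H, W), one frame per 5 min
reference = frames[0]
correlation = np.array([pcc(reference, f) for f in frames])
print(correlation.min())                        # stays above ~0.97 over 10 h in this test
```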

Fig. 12. Stability test results of the sensing system.

Most previously reported learning-based fiber specklegram sensing systems rely on classification neural networks, which demodulate the sensing signal by learning the relationship between speckle patterns and discrete categories [28–31]. For learned configurations, such schemes offer satisfactory accuracy and processing speed. However, classification-based schemes can only make predictions for a limited number of categories and cannot recognize configurations lying between two labeled states. Since it is impossible to learn all potential configurations, the resolution and measurement range of classification-based schemes are limited. Fiber specklegram sensors based on regression models have been reported and successfully applied to predict unlearned configurations [32,40]. However, these works do not fully exploit the potential of regression models and can only predict a single parameter. In contrast, the scheme presented in this work explores the feasibility of regression-model-based multiparameter measurement, demonstrating spatially resolved bending identification for unlearned configurations. In addition, the proposed PCA-BP model offers better prediction accuracy and robustness for unlearned configurations than previously reported work. For the CNN-based fiber specklegram bending sensor reported in Ref. [32], the error between the true and predicted curvature is within ±0.3 m⁻¹ for 94.7% of the samples, and the proportions of prediction errors within ±0.2 m⁻¹ and ±0.1 m⁻¹ are 77.6% and 42.7%, respectively. The PCA-BP model reported in this work delivers more robust and more accurate predictions. This improvement can be attributed to the hybrid framework, which circumvents the drawbacks of traditional schemes by combining the advantages of the PCA algorithm and the multilayer perceptron. The PCA algorithm removes the redundant information contained in the original samples and the system noise accumulated during data collection, making the evolution pattern of the speckle patterns more evident and easier to learn. The BP neural network improves the learning capacity by increasing the network depth, so that the configurations can be recognized accurately from the specklegrams. The transfer-learning strategy weakens the uncertainty caused by initial-value sensitivity and further enhances the learning ability of the model. Our work verifies the feasibility of learning-based multi-parameter sensing schemes and provides insights into using learning models to solve sensing problems.

The errors in the described scheme arise from two primary sources. On the one hand, mechanical vibrations, thermal convection, and other disturbances may introduce noise into the collected speckle patterns during data acquisition. On the other hand, the constructed model may not learn the evolution pattern of the specklegram sufficiently. One viable solution is to optimize the model architecture and training strategy, for example by dynamically adjusting the learning rate to improve the learning efficiency, introducing regularization to prevent overfitting, and increasing the depth of the network to improve the learning capability. Another possible improvement is to optimize the dataset so that the model can learn the impact of environmental disturbances on the speckle patterns. In addition, these disturbances can be further suppressed by using vibration isolation platforms and by packaging the optical fiber.
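The training-strategy improvements suggested above map directly onto standard options; for instance, in PyTorch, weight decay (L2 regularization) can be combined with a learning-rate scheduler, as sketched below under the same assumed 30-25-10-5 architecture (the data loader, validation routine, and all hyperparameters here are illustrative placeholders, not values used in this work).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Linear(30, 25), nn.Sigmoid(),
                    nn.Linear(25, 10), nn.Sigmoid(),
                    nn.Linear(10, 5))

# weight_decay adds L2 regularization; ReduceLROnPlateau lowers the learning rate
# whenever the validation loss stops improving.
optimizer = torch.optim.Adam(net.parameters(), lr=0.01, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=10)

for epoch in range(200):
    for x, y in train_loader:                   # hypothetical DataLoader of (PCA features, curvatures)
        optimizer.zero_grad()
        loss = F.mse_loss(net(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step(validate(net))               # hypothetical validation-loss function
```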

4. Conclusion

In conclusion, we propose a PCA-BP hybrid framework that simultaneously identifies the curvature and perturbed position of unlearned configurations. Compared with reported learning-based fiber specklegram sensing systems, the scheme described in this work explores the evolution of speckle patterns in the solution space without being bound to a limited number of categories, extending the measurement range and resolution. In addition, the proposed PCA-BP model further explores the feasibility of using regression models to sense multiple parameters simultaneously, and the learning-based model has been validated in a dual-parameter sensing application. The results show that, for learned configurations, the recognition accuracy of the trained model for the perturbed position reaches 100% and the average prediction error for the curvature is 7.79 × 10⁻⁴ m⁻¹; the proportions of samples with prediction errors within ±1 × 10⁻³ m⁻¹ and ±1 × 10⁻⁴ m⁻¹ are 84.4% and 55.7%, respectively. For unlearned configurations, the trained model identifies the perturbed positions with 100% accuracy; the prediction error is within ±0.1 m⁻¹ for 78.3% of the speckle patterns, and the average prediction error is 7.02 × 10⁻² m⁻¹. The scheme proposed in this work provides a convenient and powerful demodulation approach for learning-based fiber sensing systems and opens new avenues for using learning-based models to solve sensing problems.

Funding

Natural Science Foundation of Shanghai (22ZR1443100); National Natural Science Foundation of China (62075132).

Disclosures

The authors declare that there are no conflicts of interest related to this paper.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Z. Li, Y. X. Zhang, W. G. Zhang, L. X. Kong, Y. Yue, and T. Y. Yan, “Parallelized fiber Michelson interferometers with advanced curvature sensitivity plus abated temperature crosstalk,” Opt. Lett. 45(18), 4996–4999 (2020).
2. Y. Qian, Y. Zhao, Q.-L. Wu, and Y. Yang, “Review of salinity measurement technology based on optical fiber sensor,” Sens. Actuators, B 260, 86–105 (2018).
3. B. Koo and D. H. Kim, “Directional bending sensor based on triangular shaped fiber Bragg gratings,” Opt. Express 28(5), 6572–6581 (2020).
4. X. Yi, X. Chen, H. Fan, F. Shi, X. Cheng, and J. Qian, “Separation method of bending and torsion in shape sensing based on FBG sensors array,” Opt. Express 28(7), 9367–9383 (2020).
5. F. Zhu, Y. Zhang, Y. Qu, W. Jiang, H. Su, Y. Guo, and K. Qi, “Stress-insensitive vector curvature sensor based on a single fiber Bragg grating,” Opt. Fiber Technol. 54, 102133 (2020).
6. M. Lai, Y. Zhang, Z. Li, W. Zhang, H. Gao, L. Ma, H. Ma, and T. Yan, “High-sensitivity bending vector sensor based on γ-shaped long-period fiber grating,” Opt. Laser Technol. 142, 107255 (2021).
7. Z. Li, S. Liu, Z. Bai, C. Fu, Y. Zhang, Z. Sun, X. Liu, and Y. Wang, “Residual-stress-induced helical long period fiber gratings for sensing applications,” Opt. Express 26(18), 24114–24123 (2018).
8. W. Yi-Ping and R. Yun-Jiang, “A novel long period fiber grating sensor measuring curvature and determining bend-direction simultaneously,” IEEE Sens. J. 5(5), 839–843 (2005).
9. Y. S. Zhang, W. G. Zhang, L. Chen, Y. X. Zhang, S. Wang, L. Yu, Y. P. Li, P. C. Geng, T. Y. Yan, X. Y. Li, and L. X. Kong, “Concave-lens-like long-period fiber grating bidirectional high-sensitivity bending sensor,” Opt. Lett. 42(19), 3892–3895 (2017).
10. Y. Wei, T. Jiang, C. Liu, X. Zhao, L. Li, R. Wang, C. Shi, and C. Liu, “Sawtooth fiber MZ vector bending sensor available for multi parameter measurement,” J. Lightwave Technol. 40(17), 6037–6044 (2022).
11. K. Tian, Y. Xin, W. Yang, T. Geng, J. Ren, Y.-X. Fan, G. Farrell, E. Lewis, and P. Wang, “A curvature sensor based on twisted single-mode–multimode–single-mode hybrid optical fiber structure,” J. Lightwave Technol. 35(9), 1725–1731 (2017).
12. S. Wang, C. Shan, J. Jiang, K. Liu, X. Zhang, Q. Han, J. Lei, H. Xiao, and T. Liu, “Temperature-insensitive curvature sensor based on anti-resonant reflection guidance and Mach–Zehnder interferometer hybrid mechanism,” Appl. Phys. Express 12(10), 106503 (2019).
13. Y. Zhao, L. Cai, and X.-G. Li, “In-fiber modal interferometer for simultaneous measurement of curvature and temperature based on hollow core fiber,” Opt. Laser Technol. 92, 138–141 (2017).
14. Y. Zhao, A. Zhou, H. Guo, Z. Zheng, Y. Xu, C. Zhou, and L. Yuan, “An integrated fiber Michelson interferometer based on twin-core and side-hole fibers for multiparameter sensing,” J. Lightwave Technol. 36(4), 993–997 (2018).
15. E. Fujiwara, L. E. da Silva, T. D. Cabral, H. E. de Freitas, Y. T. Wu, and C. M. d. B. Cordeiro, “Optical fiber specklegram chemical sensor based on a concatenated multimode fiber structure,” J. Lightwave Technol. 37(19), 5041–5047 (2019).
16. W. B. Spillman Jr., B. R. Kline, L. B. Maurice, and P. L. Fuhr, “Statistical-mode sensor for fiber optic vibration sensing uses,” Appl. Opt. 28(15), 3166–3176 (1989).
17. P. Etchepareborda, A. Federico, and G. H. Kaufmann, “Sensitivity evaluation of dynamic speckle activity measurements using clustering methods,” Appl. Opt. 49(19), 3753–3761 (2010).
18. F. Feng, W. Chen, D. Chen, W. Lin, and S.-C. Chen, “In-situ ultrasensitive label-free DNA hybridization detection using optical fiber specklegram,” Sens. Actuators, B 272(1), 160–165 (2018).
19. S. Hu, H. Liu, B. Liu, W. Lin, H. Zhang, B. Song, and J. Wu, “Self-temperature compensation approach for fiber specklegram magnetic field sensor based on polarization specklegram analysis,” Meas. Sci. Technol. 33(11), 115101 (2022).
20. S. Minaee, N. Kalchbrenner, E. Cambria, N. Nikzad, M. Chenaghlu, and J. Gao, “Deep learning–based text classification: a comprehensive review,” ACM Comput. Surv. 54(3), 1–40 (2022).
21. S. Oprea, P. Martinez-Gonzalez, A. Garcia-Garcia, J. A. Castro-Vargas, S. Orts-Escolano, J. Garcia-Rodriguez, and A. Argyros, “A review on deep learning techniques for video prediction,” IEEE Trans. Pattern Anal. Mach. Intell. 44(6), 2806–2826 (2022).
22. G. Genty, L. Salmela, J. M. Dudley, D. Brunner, A. Kokhanovskiy, S. Kobtsev, and S. K. Turitsyn, “Machine learning and applications in ultrafast photonics,” Nat. Photonics 15(2), 91–101 (2020).
23. W. Ma, F. Cheng, Y. Xu, Q. Wen, and Y. Liu, “Probabilistic representation and inverse design of metamaterials based on a deep generative model with semi-supervised learning strategy,” Adv. Mater. 31(35), e1901111 (2019).
24. W. Ma, Z. Liu, Z. A. Kudyshev, A. Boltasseva, W. Cai, and Y. Liu, “Deep learning for the design of photonic structures,” Nat. Photonics 15(2), 77–90 (2020).
25. A. R. Cuevas, M. Fontana, L. Rodriguez-Cobo, M. Lomer, and J. M. Lopez-Higuera, “Machine learning for turning optical fiber specklegram sensor into a spatially-resolved sensing system. Proof of concept,” J. Lightwave Technol. 36(17), 3733–3738 (2018).
26. D. L. Smith, L. V. Nguyen, D. J. Ottaway, T. D. Cabral, E. Fujiwara, C. M. B. Cordeiro, and S. C. Warren-Smith, “Machine learning for sensing with a multimode exposed core fiber specklegram sensor,” Opt. Express 30(7), 10443–10455 (2022).
27. L. V. Nguyen, C. C. Nguyen, G. Carneiro, H. Ebendorff-Heidepriem, and S. C. Warren-Smith, “Sensing in the presence of strong noise by deep learning of dynamic multimode fiber interference,” Photonics Res. 9(4), B109–B117 (2021).
28. E. Fujiwara, Y. T. Wu, M. F. M. Santos, E. A. Schenkel, and C. K. Suzuki, “Optical fiber specklegram sensor for measurement of force myography signals,” IEEE Sens. J. 17(4), 951–958 (2017).
29. H. Li, H. Liang, Q. Hu, M. Wang, and Z. Wang, “Deep learning for position fixing in the micron scale by using convolutional neural networks,” Chin. Opt. Lett. 18(5), 050602 (2020).
30. Q. Liang, J. Tao, X. Wang, T. Wang, X. Gao, P. Zhou, B. Xu, C. Zhao, J. Kang, L. Wang, C. Shen, D. Wang, and Y. Li, “Demodulation of Fabry-Perot sensors using random speckles,” Opt. Lett. 47(18), 4806–4809 (2022).
31. Y. Liu, G. Li, Q. Qin, Z. Tan, M. Wang, and F. Yan, “Bending recognition based on the analysis of fiber specklegrams using deep learning,” Opt. Laser Technol. 131, 106424 (2020).
32. G. Li, Y. Liu, Q. Qin, X. Zou, M. Wang, and F. Yan, “Deep learning based optical curvature sensor through specklegram detection of multimode fiber,” Opt. Laser Technol. 149, 107873 (2022).
33. Y. Luo, S. Yan, H. Li, P. Lai, and Y. Zheng, “Towards smart optical focusing: deep learning-empowered dynamic wavefront shaping through nonstationary scattering media,” Photonics Res. 9(8), B262 (2021).
34. F. T. Yu, M. Wen, S. Yin, and C. M. Uang, “Submicrometer displacement sensing using inner-product multimode fiber speckle fields,” Appl. Opt. 32(25), 4685–4689 (1993).
35. M. Mrówczyńska, J. Sztubecki, and A. Greinert, “Compression of results of geodetic displacement measurements using the PCA method and neural networks,” Measurement 158, 107693 (2020).
36. A. Ramezankhani, F. Hosseini-Esfahani, P. Mirmiran, F. Azizi, and F. Hadaegh, “The association of priori and posteriori dietary patterns with the risk of incident hypertension: Tehran Lipid and Glucose Study,” J. Transl. Med. 19(1), 44 (2021).
37. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323(6088), 533–536 (1986).
38. B. Sang, “Application of genetic algorithm and BP neural network in supply chain finance under information sharing,” J. Comput. Appl. Math. 384, 113170 (2021).
39. Y. Song, L. Yue, Y. Wang, H. Di, F. Gao, S. Li, Y. Zhou, and D. Hua, “Research on BP network for retrieving extinction coefficient from Mie scattering signal of lidar,” Measurement 164, 108028 (2020).
40. S. Razmyar and M. T. Mostafavi, “Deep learning for estimating deflection direction of a multimode fiber from specklegram,” J. Lightwave Technol. 39(6), 1850–1857 (2021).
