
Modelling of fibre laser cutting via deep learning

Open Access

Abstract

Laser cutting is a materials processing technique used throughout academia and industry. However, defects such as striations can be formed while cutting, which can negatively affect the final quality of the cut. As the light-matter interactions that occur during laser machining are highly non-linear and difficult to model mathematically, there is interest in developing novel simulation methods for studying these interactions. Deep learning enables a data-driven approach to the modelling of complex systems. Here, we show that deep learning can be used to determine the scanning speed used for laser cutting, directly from microscope images of the cut surface. Furthermore, we demonstrate that a trained neural network can generate realistic predictions of the visual appearance of the laser cut surface, and hence can be used as a predictive visualisation tool.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Laser materials processing is a non-contact technique that has industrial applications at large scale [1] and small scale [2,3], including for energy storage [4], defence [5] and packaging [6]. The applications of lasers include cutting [7,8], welding [9], cladding [10], drilling [11] and marking [12]. Fibre lasers are especially attractive in materials processing as they are compact, energetically efficient and have good thermal properties [13]. High precision laser cutting is of particular importance due to its wide applicability across many areas of manufacturing. In general, the laser cutting process occurs in several stages. Firstly, the laser drills a small hole into the workpiece, with a gas-assisted jet blowing away the molten and vaporised material at the hole. The laser then drills in further, with molten and vaporised material blown away through the bottom. Once the workpiece has been fully pierced, the nozzle scans across the sample to commence the cutting process [14].

Cutting with continuous wave (CW) lasers can cause striations and welts to form on the surface, negatively impacting the quality of the cut [15]. The physical origins of these striations are not well understood, but proposed explanations include the unsteady nature of the laser cutting process, variations in gas pressure, laser power fluctuations, and possible effects arising within the molten material [10]. Modelling of laser cutting is challenging due to its highly non-linear nature. Arai [15] has characterised the origins of laser cutting defects using thermal models, while Bocksrocker et al. [16] have studied the correlation between laser parameters and the direction of melt flow. Miraoui et al. [17] have used analytical models of heat affected zones to characterise the laser cutting of steel. Each of these studies required specialised measuring equipment, such as high-speed and high-resolution cameras.

Deep learning has seen a significant level of interest recently, due to its ability to use a data-driven approach to model complex phenomena directly from observed data [18,19]. The tool used for machine learning is a neural network (NN), which acts as a universal function approximator [20]. Convolutional neural networks (CNNs) are a type of neural network that learns features from multi-dimensional datasets based on the spatial relationship between data points (pixels), making them effective for analysing images [21–25]. CNNs are typically effective at transforming an image into a numerical label. There are also CNNs that can generate a prediction of an image from numerical data, known as generators [26,27]. A generator learns to transform a random seed into the associated output. A separate discriminator network then determines whether a given output is an experimental measurement or a prediction. During training, the generator and discriminator compete against each other and therefore both improve at their assigned tasks. Neural networks that use this arrangement are called generative adversarial networks (GANs) [27–30]. A GAN whose generator takes an input based on experimental conditions is known as a conditional generative adversarial network (CGAN) [31]. As the exact patterns of striations produced in laser cutting are difficult to model, deep learning is a useful tool for predicting the appearance and trends of physical defects formed along the cut, with initial results already shown in the literature [15].

Deep learning has been applied to laser processes in recent years, such as laser cutting [32–36], welding [8,37–40], fabrication [41,42] and machining [43–51]. Deep learning has also been widely applied to image data [52–57]. Yilbas et al. [32] used deep learning algorithms to classify striations into normal, increasing, decreasing or cyclical patterns, using thermal modelling data as input. Santolini et al. [34] used multisensory numerical data from laser cutting parameters to estimate the quality of the cut. Similar demonstrations have been made with laser welding [37,38] and fabrication [41,42]. Mills et al. [48,50] have improved on this by analysing image data with deep learning. Here, we demonstrate the application of machine learning techniques for classifying laser cut surfaces from their defects and for making visual predictions of laser cutting outcomes under different laser parameters using microscope image data.

The two key objectives of this work were to develop i) a CNN that can classify samples of stainless steel cut by a fibre laser according to the cutting speed used, and ii) two CGANs that can accurately predict the appearance of stainless steel samples cut by a fibre laser. Section 2 contains a description of the laser cut surfaces and how they were imaged. Section 3 details how machine learning techniques were used to analyse the microscope image data collected from the laser cut surfaces. Sections 4 and 5 contain methods and results for visual feature classification and predictive visualisation respectively. Conclusions are given in Section 6.

2. Laser cutting and sample imaging

For the cutting, a 6 kW fibre laser was used, with multi-mode output delivered via a Ø100 µm fibre. The processing workstation was a TRUMPF TruLaser 1030 flatbed cutting machine fitted with a Precitec ProCutter cutting head with 2.0x magnification, using high-pressure nitrogen as the co-axial assist gas. Ten stainless steel samples were laser cut at speeds of 15–24 m/min with an N2 assist gas pressure of under 12 bar, a nozzle diameter of 2 mm and a beam diameter of 200 µm (1/e2). The cutting nozzle head had a stand-off distance of 1 mm, maintained using a capacitive height sensor, and a focal position that would have been 2 mm below the metal surface. Each sample had a length of 116.0 mm, a thickness of 2.0 mm and a width of 9.5 mm. As indicated by the schematic in Fig. 1(a), the edges of the cut samples were imaged using reflective microscopy on a GBS SmartWLI microscope (Omniscan) with a Nikon 5x objective lens. Images with a resolution of 3000 × 2000 pixels and a size of 3.4 × 2.8 mm were recorded along the length of each sample. Each sample was imaged at 12 locations, giving 12 images per sample and 120 unprocessed images in total. Fig. 1(b) shows an example microscope image, where the dark blue box highlights an example of the image data used for deep learning. As observed in the figure, the blue box is positioned over the regions of the sample that were in focus. The microscope illumination was from above, and hence brighter regions correspond to regions of high reflectivity on the sample while darker regions correspond to regions of low reflectivity.


Fig. 1. Flowchart showing the process of imaging stainless steel cut with a fibre laser and converting it into data appropriate for machine learning. The stages of the flow chart consist of a) laser cutting with a fibre laser and b) an image of the edge of a stainless steel rod cut by a fibre laser at a given scanning speed. In a) a stainless steel rod is cut from a 2 mm thick sheet by a fibre laser at 2 kW at 1075 nm, which is then imaged under a 5x microscope. In b) there is a 2.8 × 2.0 mm image of the laser cut stainless steel surface. Each image was cropped to remove unnecessary data. The red arrow shows the direction of the laser beam while the black arrow shows the direction of scan. The blue box indicates an image section used for machine learning.


Figure 2 contains examples of image sections from samples cut at different speeds, illustrating the variation in appearance and defects that occurs with different cutting speeds. Defects include striations, which have a regular, periodic and consistent spatial structure, and welts, round and elongated defects seen in the bottom half of Fig. 2(f)-(j), which appear more randomly. As seen in Fig. 2, angled striations are formed at the top of the sample, starting at the top right corner and ending at the bottom left corner of each image section. Welts are seen in the bottom half of image sections for speeds of 19 m/min and above. The vertical position in the image sections at which the welts start to appear increases with speed, which is consistent with previous studies [58].


Fig. 2. Examples of stainless steel samples cut by a fibre laser (experimental measurements) at a) 15, b) 16, c) 17, d) 18, e) 19, f) 20, g) 21, h) 22, i) 23, and j) 24 m/min.


3. Application of neural networks for classification and predictive visualisation

For neural network processing, 668 × 256 pixel image sections were randomly selected from larger images. Each section was separated into three 256 × 256 pixel strips that were then stacked to form a 3D matrix, equivalent to the approach used for the RGB channels of a colour image, producing a 256 × 256 × 3 matrix. This ensured that the input to the network was square (i.e. 256 × 256) and hence allowed the usage of well-established neural network architectures that make use of RGB channels (i.e. 256 × 256 × 3). To ensure continuity across the separated thirds, there was a 50-pixel overlap between the top and middle thirds of the image section and a 50-pixel overlap between the middle and bottom thirds, which minimised the boundary seen when generating image sections (three 256-pixel strips with two 50-pixel overlaps span exactly 3 × 256 − 2 × 50 = 668 pixels). Image sections were used to capture the maximum amount of vertical information, as this allowed for the best representation of striations, while retaining the ability to turn these image sections into 256 × 256 × 3 matrices. Using the full width of the sample would have limited the continuity between thirds and would have reduced the quality of the predictions.
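
As an illustration of this slicing, the following is a minimal NumPy sketch; the function names are ours, and the reassembly step, which averages the overlapping rows, is our assumption for how the seams between thirds could be softened.

```python
import numpy as np

STRIP, OVERLAP = 256, 50
STRIDE = STRIP - OVERLAP  # 206 pixels between strip origins

def section_to_channels(section: np.ndarray) -> np.ndarray:
    """Stack a 668 x 256 image section into a 256 x 256 x 3 matrix.
    Three 256-pixel strips with 50-pixel overlaps span exactly
    3*256 - 2*50 = 668 pixels; the strips become the three channels."""
    assert section.shape == (668, 256)
    strips = [section[i * STRIDE: i * STRIDE + STRIP, :] for i in range(3)]
    return np.stack(strips, axis=-1)

def channels_to_section(channels: np.ndarray) -> np.ndarray:
    """Inverse mapping: rebuild the 668 x 256 section, averaging the
    overlapping rows (an assumed choice) to soften the seams."""
    total = np.zeros((668, 256))
    count = np.zeros((668, 256))
    for i in range(3):
        total[i * STRIDE: i * STRIDE + STRIP, :] += channels[..., i]
        count[i * STRIDE: i * STRIDE + STRIP, :] += 1.0
    return total / count
```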

As stated in Section 1, the two key objectives were to develop i) a CNN that can classify samples of stainless steel cut by a fibre laser according to the cutting speed used, and ii) two CGANs that can accurately predict the appearance of stainless steel samples cut by a fibre laser. Samples cut at different scanning speeds have characteristic defects due to effects such as the propagation of heat and the flow of molten material during the cut. In i) the CNN used these defects to classify each image section by cutting speed. In ii) there were two networks used: a “predictive” network to predict the appearance of the laser cut edge from the laser cutting speed, and a “chaining” network to predict the appearance of the laser cut edge from an image section of an adjacent area of the laser cut edge. A CGAN capable of accurately predicting defects formed during the laser cutting process could be used to identify and optimise key parameters for a specific task and assist in understanding how these parameters influence the physical processes that occur in laser cutting. Predictions were made using an image section of the edge of a laser cut stainless steel sample as an input. These predicted image sections were then used as inputs to make further predictions on the appearance of subsequent portions of the laser cut edge. These processes are shown in Fig. 3.


Fig. 3. Diagram showing the functioning of the neural networks used for modelling of laser cutting data. a) A classification network used to predict the cutting speed of a laser cut sample from an image section from that sample. b) A predictive network used to predict the appearance of an image section of a laser cut sample based on the cutting speed of that sample. c) A chaining network used to predict the appearance of an image section from an image section of an adjacent location on the laser cut sample. d) A combination of b) and c) to predict the appearance of a section of the laser cut edge of stainless steel using predicted image sections from laser parameters.


In Fig. 3 there are diagrams showing the intended functioning of the neural networks used. The purpose of a network that can accurately predict the cutting speed from an image section of stainless steel, as in Fig. 3 a), is to determine whether each cutting speed produces defects unique to that cutting speed, such that the speeds can be distinguished from one another. Results for this are presented in Section 4. A network that predicts the appearance of defects produced during laser cutting based on laser cutting speed (Fig. 3 b) could then be used to identify correlations between cutting speed and the defects produced in cutting, as shown in Section 5.1. A network such as that in Fig. 3 c) could then use a predicted image section to predict the appearance of a subsequent image section of laser cut stainless steel without the use of experimental data as an input. Fig. 3 d) then shows how Fig. 3 b) and c) can be combined to predict the appearance along a longer section of the laser cut stainless steel edge, using only predicted data as an input. Results for this are shown in Section 5.2. As CGANs visualise the most likely outcome of laser cutting at the corresponding cutting speed, predictions made by CGANs are not expected to be identical to experimental data.

4. Feature identification

The purpose of this section is to demonstrate that a CNN can accurately determine laser cutting speeds from image sections of the cut edge. The CNN receives an image section of the laser-cut edge of a stainless steel sample as an input, with the output being a prediction of the speed at which that sample was cut. Each stainless steel sample was cut at a speed ranging from 15 m/min to 24 m/min. Microscope image data were collected for each sample, with the cutting speed contained in the filename for each image. During training, when presented with an image, the cutting speed was extracted from the filename and used as a label. In machine learning, a label is the term used to describe each category of data to be classified. 100,000 image sections such as those shown in Fig. 2 were produced using the method described in Section 3. Of those 100,000 image sections, 90,024 were used for training and 9,976 were used for testing. Training and testing samples were taken from different pools of larger images to prevent overlap.
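
The labelling step can be as simple as parsing the speed out of each filename. The sketch below assumes a hypothetical naming scheme (e.g. “steel_v18_site03.png”), since the paper does not specify the exact format.

```python
import re
from pathlib import Path

SPEEDS = list(range(15, 25))  # 15-24 m/min, mapped to class indices 0-9

def label_from_filename(path: Path) -> int:
    """Extract the cutting speed from a filename such as
    'steel_v18_site03.png' (hypothetical scheme) and return the
    corresponding integer class label for the classifier."""
    match = re.search(r"v(\d+)", path.stem)
    if match is None:
        raise ValueError(f"no cutting speed found in {path.name}")
    return SPEEDS.index(int(match.group(1)))

# e.g. label_from_filename(Path("steel_v18_site03.png")) -> 3
```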

The CNN used to classify the image sections had an architecture consisting of six convolutional and pooling layers followed by three fully connected layers. The final layer had ten nodes, one for each cutting speed. The output was therefore a vector with ten elements, each a probability value indicating the confidence of the CNN in assigning an image section to a given class, with the highest probability assigned to the class in which the CNN had the most confidence. The loss function used for the classification CNN was sparse categorical cross-entropy, with accuracy used as a metric. The optimizer used was Adam with a learning rate of 0.0001.
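
A minimal Keras sketch of such a classifier is shown below. The layer counts, loss, optimizer and learning rate follow the description above; the filter counts, kernel sizes and dense-layer widths are our assumptions, as the paper does not state them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier() -> tf.keras.Model:
    """Sketch of the classification CNN: six convolution + pooling
    stages followed by three fully connected layers, with a ten-way
    softmax output (one node per cutting speed)."""
    model = models.Sequential()
    model.add(layers.Input(shape=(256, 256, 3)))
    for filters in (16, 32, 64, 128, 256, 256):  # assumed filter counts
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(2))  # 256 -> 4 after six halvings
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation="relu"))  # assumed widths
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(10, activation="softmax"))
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```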

The CNN was trained for 10 epochs on the 90,024 training image sections. After training, the CNN was used to make predictions of the cutting speeds for the 9,976 test image sections. The classification accuracy of the predictions was 99.9%, with 9,967 correct predictions and 9 incorrect predictions out of 9,976 test image sections. Fig. 4 shows a confusion matrix comparing the predicted cutting speed of each image section with the experimental cutting speed of that image section. Darker regions indicate areas of higher correlation, and hence in the event of a perfect correlation, i.e. perfect accuracy, a single dark line along the diagonal of the matrix, going from top-left to bottom-right, would be seen.


Fig. 4. A confusion matrix comparing the cutting speeds predicted by the CNN to the experimental cutting speeds corresponding to those samples. For underlying values see Dataset 1, Ref. [59].


For the classification CNN, almost all values fall along the diagonal of the confusion matrix. This near-perfect correlation indicates that each cutting speed has unique defects that can be used for identification. As such, this indicates the possibility of modelling the appearance of the laser cutting process through NN analysis of image data. Given that NNs can associate defects with laser parameters, NNs could also be used to predict the parameters required for a desired laser cut edge, as well as showing potential for real-time monitoring of defects.

5. Predictive visualisation of laser cutting

Two modelling approaches were used for predictive visualisation and are covered in their respective sections. In both cases, a standard Pix2Pix architecture [60] was used for the CGAN, with a modified L1 loss. Instead of using the mean difference between the experimental and predicted image sections as a measure of performance, the CGAN used the sum of the mean difference over the left half of the image section and the mean difference over the right half. This was found to enhance the matching of defects between predicted and experimental image sections, as the defects seen in the left and right halves of each image section follow the same distribution. As with feature identification, 100,000 image sections, such as those seen in Fig. 2, were used for training and testing. In both cases, the CGANs were trained on data from samples with cutting speeds of 16 m/min, 17 m/min, 18 m/min, 19 m/min, 21 m/min and 22 m/min, and were tested on image sections of samples with cutting speeds of 15 m/min and 20 m/min. The testing speeds were chosen to be outside and inside the training speed range, respectively, to check for differences in predictability for speeds within and outside the experimental range. Whilst it would be possible to predict the appearance of samples at speeds that were used in training, such predictions would generally be considered not to be a true representation of the predictive capabilities of the network, as the network would have already observed examples of such images. For this reason, only speeds that were not used in training were used to evaluate the effectiveness of this neural network approach.
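
Read as a replacement for the standard Pix2Pix L1 term, this modified loss might be sketched as follows, assuming NHWC tensors and interpreting “mean difference” as the mean absolute difference, since it substitutes for an L1 loss:

```python
import tensorflow as tf

def split_half_l1(y_true: tf.Tensor, y_pred: tf.Tensor) -> tf.Tensor:
    """Modified L1 term: the mean absolute difference is computed
    separately over the left and right halves of the image section and
    the two means are summed, instead of one mean over the whole
    section. Tensors are assumed to be NHWC (batch, height, width,
    channels), so the halves are split along axis 2."""
    half = tf.shape(y_true)[2] // 2
    left = tf.reduce_mean(tf.abs(y_true[:, :, :half, :] - y_pred[:, :, :half, :]))
    right = tf.reduce_mean(tf.abs(y_true[:, :, half:, :] - y_pred[:, :, half:, :]))
    return left + right

# In Pix2Pix this term is weighted and added to the adversarial loss,
# e.g. total_gen_loss = gan_loss + 100.0 * split_half_l1(target, output)
```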

5.1 Parameter to image visualisation

The purpose of this section is to demonstrate that a CGAN can accurately predict the appearance of the edge of a laser cut stainless steel sample from the laser parameters used. For the predictive network, the input was a 768 × 256 matrix, where the pixel values in each third represented one of the input parameters. The top third corresponded to the cutting speed of the sample, the middle third corresponded to the position along the length of the sample, and the bottom third corresponded to the vertical and horizontal positions of the image section on the larger image. Images, such as the one shown in Fig. 1(b), were collected at multiple positions along the cut edge of the stainless steel samples shown in Fig. 1(a); these positions will be referred to as the imaging sites. For each imaging site it is possible to designate multiple cropped regions as a means of augmenting the data sent to the NN (an example of a cropped region is shown by the blue box in Fig. 1(b)). Cropped regions are referred to by the horizontal and vertical pixel coordinates of their lower left corner (relative to the bottom left of the overall site image). The predicted output was a 668 × 256 pixel image section corresponding to the predicted appearance of the laser cut edge at the positions specified by the input parameters. An example of this input is shown in Fig. 5.


Fig. 5. Example of an input for the modelling of laser cutting.

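A sketch of how such a conditional input (Fig. 5) could be assembled is given below. The thirds follow the description above; the normalisation ranges and the decision to split the bottom third between the horizontal and vertical crop coordinates are our assumptions.

```python
import numpy as np

def encode_input(speed, site_pos, crop_x, crop_y,
                 speed_max=24.0, length_max=116.0,
                 x_max=3000.0, y_max=2000.0) -> np.ndarray:
    """Assemble the 768 x 256 conditional input for the predictive
    network. Top third: cutting speed (m/min); middle third: imaging
    site position along the sample (mm); bottom third: crop coordinates
    (pixels) of the image section on the larger image. Normalisation
    ranges and the half-and-half split of the bottom third are assumed."""
    inp = np.zeros((768, 256), dtype=np.float32)
    inp[:256, :] = speed / speed_max
    inp[256:512, :] = site_pos / length_max
    inp[512:, :128] = crop_x / x_max  # horizontal crop coordinate
    inp[512:, 128:] = crop_y / y_max  # vertical crop coordinate
    return inp

# e.g. encode_input(speed=20, site_pos=58.0, crop_x=1200, crop_y=800)
```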

As shown in Fig. 6, predictions made by the predictive network were visually similar to the corresponding experimental image sections. Fig. 6(a) and Fig. 6(b) show experimental and predicted image sections of the laser-cut edge of a stainless steel sample cut at 15 m/min. Fig. 6(c) and Fig. 6(d) show experimental and predicted image sections of the laser-cut edge of a stainless steel sample cut at 20 m/min. During laser cutting, a molten front forms under irradiation from the laser, at an angle relative to the vertical irradiation of the laser. The steepness of this angle is related to the cutting speed, with slower speeds producing shallower angles and vice versa. At a steeper angle, the volume of molten material is larger at the bottom of the cut than for shallow angles. Given that this material is also heated, it will expand and possibly melt material further below as well, producing welts. Therefore, the number and size of welts is related to the cutting speed, and hence faster cutting speeds generally produce more welts. There are fewer welts in predicted image sections because these defects are generated more randomly than striations, owing to their lower position in the cut and to their origin in the light-gas-matter interaction. Welts therefore represent a more difficult feature to model, due to the greater variability of their appearance compared with striations and the complexity of the mechanism that generates them. Despite the reduced welt-feature reproduction at the speed of 20 m/min (Fig. 6(d)), it is clear that more welt-features are predicted than at the lower speed of 15 m/min (Fig. 6(b)), in good qualitative agreement with the fact that more welts are produced as the cutting speed is increased. Predicted image sections are not expected to exactly reproduce the appearance of experimental image sections, because CGANs predict the most likely appearance of an experimental image section. Striations and the presence of welts are reproduced, while individual welts are reproduced with the correct distribution. There is a boundary visible between each third of the predicted image sections, caused by the predictive network learning different defects for each third of the image section and, in some cases, not matching them up properly with the subsequent third. This is seen more clearly for predictions of v = 20 m/min, as the defects contained within the bottom third were more varied than for predictions of v = 15 m/min, which has very similar striations throughout the whole sample.


Fig. 6. a) Experimental and b) predicted image sections for stainless steel samples cut at 15 m/min, and c) experimental and d) predicted image sections for stainless steel samples cut at 20 m/min.


In this study, neural networks were trained to predict the overall appearance of the edge of laser-cut stainless steel for different cutting speeds. Features such as the angle and position of striations and other defects are learned; however, it is not expected that the NN will precisely predict the exact location of these defects. The cutting speed is one of many parameters that influence the appearance of the cut, with others including gas pressure and the position of the focusing lens. As such, the NN will make realistic predictions about the appearance of the laser cut edge based on the laser cutting conditions used. The objective of this study was to evaluate the potential for using neural networks to assist in predictive visualisation for fibre laser machining.

Figure 7 contrasts the statistical distributions of the experimental data with predictions made by the prediction network. Fig. 7(a) shows the distribution of dark areas for experimental and predicted image sections (for both v = 15 m/min and v = 20 m/min). Dark spots were chosen since the samples contained mostly bright areas, and therefore darker spots were easier to quantify. In both cases, the distributions of defects in the predicted image sections match the experimental ones, with both sets of image sections containing mostly dark spots of less than 0.03 mm. There is a larger divergence between experimental and predicted curves for 15 m/min than for 20 m/min. The dark and bright spots were consistent across all samples, as each sample was imaged under the same microscopy conditions, such as the illumination strength and the position of the sample. Furthermore, the reflectivity and topography of the sample defects depend on the laser cutting parameters, and therefore have a characteristic appearance. The dark and bright spots are thus a function of the laser cutting parameters and are systematically produced. Fig. 7(b) shows the statistical distribution of pixel values, indicating how bright a pixel is, for both experimental and predicted image sections (with v = 15 m/min and v = 20 m/min). For v = 15 m/min, the general shape of the curve for predicted image sections follows the same trend as the pixel distribution for experimental image sections, although it is slightly higher for bright spots and lower for darker spots. For v = 20 m/min, all critical points match between the predicted and experimental distributions. The bottom set of plots show histograms of the distribution of striation angles for both experimental and predicted image sections, with Fig. 7(c) corresponding to v = 15 m/min and Fig. 7(d) to v = 20 m/min. Each data point shows the number of image sections that contained striations at that angle.
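
The paper reports the dark-spot size distribution but not the measurement algorithm; a plausible sketch, assuming a simple intensity threshold and connected-component grouping, is:

```python
import numpy as np
from scipy import ndimage

def dark_spot_diameters(image: np.ndarray, threshold: float = 0.3,
                        mm_per_px: float = 3.4 / 3000) -> np.ndarray:
    """Group pixels darker than `threshold` (image normalised to 0-1)
    into connected regions and return each region's equivalent circular
    diameter in mm. The threshold value and the connected-component
    approach are assumptions; the pixel scale follows the 3.4 mm /
    3000 px images described in Section 2."""
    dark = image < threshold
    labels, n = ndimage.label(dark)
    areas = ndimage.sum(dark, labels, index=np.arange(1, n + 1))  # pixel counts
    diameters_px = 2.0 * np.sqrt(areas / np.pi)
    return diameters_px * mm_per_px
```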


Fig. 7. a) Plot showing the correlation between the size of dark spots on an image section and the percentage of the image section that they occupy, averaged over 500 image sections of samples cut at 15 m/min and 20 m/min. b) Plot showing the average pixel distributions for image sections of samples cut at 15 m/min and 20 m/min. c) and d) are histograms showing the distribution of angles for samples cut at c) 15 m/min and d) 20 m/min. The values for these results are available for a) in Dataset 2, Ref. [61], b) in Dataset 3, Ref. [62], c) in Dataset 4, Ref. [63] and d) in Dataset 5, Ref. [64].


To measure the striation angle, each image was rotated through 360° in increments of 0.1°. At each increment, pixel intensities were summed along the vertical direction, producing a 1-D vector containing the column sums. The angle that returned the 1-D vector with the highest standard deviation was taken to be the striation angle. All angles were measured clockwise from the vertical plane. For v = 15 m/min, the average angle measured for experimental image sections was around 25 degrees, and for predicted image sections the angle was around 30 degrees. For v = 20 m/min, the average angle in experimental image sections was 33.9 degrees and in predicted image sections was 33.6 degrees. There are experimental outliers on the edges of the distribution, caused by the presence of welts limiting the accuracy of the striation angle measurement. The peak value for experimental image sections reached 45 counts; however, the axis ranges chosen in Fig. 7(c) excluded this data point in order to show more clearly the shape and overlap of the distributions at lower counts. The larger difference between the experimental and predicted dark spots, pixel distributions and striation angles for 15 m/min than for 20 m/min is likely due to 15 m/min being outside the range of speeds the prediction network was trained on: defects produced at a 15 m/min cutting speed fall outside of the distribution of features used to train the prediction network.
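
This angle-measurement procedure can be implemented directly; the sketch below follows the description above (a coarse-to-fine search would be an obvious speed-up, since 3600 rotations per image is slow):

```python
import numpy as np
from scipy import ndimage

def striation_angle(image: np.ndarray, step: float = 0.1) -> float:
    """Rotate the image through 360 degrees in `step`-degree increments,
    sum pixel intensities down each column at every increment, and
    return the angle whose column-sum profile has the highest standard
    deviation. When the striations are aligned with the vertical axis,
    the column sums alternate strongly between dark and bright striation
    lines, maximising the standard deviation."""
    best_angle, best_std = 0.0, -np.inf
    for angle in np.arange(0.0, 360.0, step):
        rotated = ndimage.rotate(image, angle, reshape=False, order=1)
        profile = rotated.sum(axis=0)  # 1-D vector of column sums
        s = profile.std()
        if s > best_std:
            best_angle, best_std = angle, s
    return best_angle  # sign convention maps to clockwise-from-vertical
```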

5.2 Image to adjacent image visualisation

The purpose of this section is to demonstrate that a CGAN can accurately predict the appearance of the laser-cut edge of a stainless steel sample from an image section of an adjacent region on that sample. The chaining network is so-called because of the translational overlap between its inputs and outputs, allowing the network to “chain” by using its previous output as its next input. The initial input to the chaining network is a predicted image section, such as those produced in Section 5.1. The output of the CGAN is a 668 × 256 pixel image section, where the left half corresponds closely to the right half of the input image section and the right half is an extrapolation predicted by the neural network. The input and output of the CGAN therefore have a 50% overlap. The output may then be fed back into the chaining network as its next input, which produces the prediction for the next 50% of the image section. By chaining in this manner multiple times, it is possible to build up a prediction along a longer image section of the laser-cut edge. Image sections made by the prediction network in Section 5.1 were fed into the chaining network and used to predict image sections of the laser-cut edge for different cutting speeds, as shown in Fig. 8.
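
The chaining loop itself is simple. Below is a minimal sketch, under the assumptions that sections are stored as 668-row × 256-column arrays (so the left/right halves are 128-column halves) and that `generator` is a callable wrapping the trained chaining CGAN:

```python
import numpy as np

def chain(generator, seed: np.ndarray, n_steps: int) -> np.ndarray:
    """Repeatedly feed the chaining network its own output to build a
    long strip of the cut edge. Each output's left half overlaps the
    input's right half, so every step extends the strip by half a
    section (128 of 256 columns). `generator` is assumed to map one
    668 x 256 section to the next."""
    half = seed.shape[1] // 2
    strip, current = seed.copy(), seed
    for _ in range(n_steps):
        current = generator(current)  # predict the next 668 x 256 section
        strip = np.concatenate([strip, current[:, half:]], axis=1)
    return strip
```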


Fig. 8. Examples of predictions made by the chaining network with corresponding examples of experimental image data. The yellow highlighted boxes indicate the input to the chaining network, with the red boxes indicating predictions made from the adjacent predictions. a) Predicted image section for 15 m/min. b) Experimental image section for 15 m/min. c) Predicted image section for 20 m/min. d) Experimental image section for 20 m/min.


In Fig. 8 there are examples of predictions of a sample based on adjacent image sections. In a), regular striations have been predicted by the chaining network for 15 m/min. Comparing with b), regular striations are seen and the angles are partially reproduced. In c), the striation patterns for 20 m/min are more varied than for 15 m/min, with more welts at the bottom. In d), the striation patterns have been reproduced with matching angles, as well as welts seen at the bottom of the image section. As each CGAN predicts the most likely outcome for a given image section, it is expected that the predicted image sections will not exactly match the experimental image sections. The predictive network was used to predict the cut-face appearance from the laser machining parameters, as shown by the images in Fig. 6. The chaining network was used to predict the appearance of the cut surface based on an image of an adjacent image section to the left of the location to be predicted (i.e. it additionally accepts an image input and thereby allows the creation of image features that are consistent from one section to the next). This is achieved by overlapping the right half of the input image with the left half of the output image. Critically, the chaining network can function either with real experimental images as input or with images created by the predictive network.

6. Conclusion

In conclusion, a classification CNN and two predictive visualisation CGANs were used to model the appearance of image sections of samples of stainless steel cut by a fibre laser at speeds of 15 to 24 m/min in steps of 1 m/min. The classification CNN was able to classify image sections of samples by cutting speed, based on the physical defects of each sample, with an accuracy of 99.9%. Due to experimental variability, predicted image sections are not expected to reproduce experimental image sections exactly. The prediction network was able to reproduce the defects contained in experimental image sections, such as striations and welts. Predictions made using the chaining network were also successful in reproducing the defects that occurred during the cutting process.

The advantage of the methods proposed here over previous methods is that they demonstrate that NNs can accurately correlate defects produced in laser cutting with a single laser cutting parameter, without theoretical modelling. Once trained, neural networks can operate very quickly (20 ms for the CNN and 60 ms for the CGANs). It is therefore feasible that such modelling capability could be combined with real-time monitoring of laser cutting to provide predictive visualisation. Correlation of physical defects with laser parameters could then enable prediction of failures in the cutting process before they happen. More immediately, the capability to predict the visual appearance of laser-cut surfaces could be highly beneficial for parameter optimisation, such as for producing a desired appearance on the laser-cut edge.

Funding

Engineering and Physical Sciences Research Council (EP/N03368X/1, EP/T026197/1).

Acknowledgements

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X GPU used for this research.

Disclosures

The authors declare no conflict of interest related to this article.

Data availability

Data underlying the results presented in this paper are available in Dataset 1 [59], Dataset 2 [61], Dataset 3 [62], Dataset 4 [63] and Dataset 5 [64].

References

1. H. Booth, “Laser Processing in Industrial Solar Module Manufacturing,” J. Laser Micro/Nanoeng. 5(3), 183–191 (2010). [CrossRef]  

2. J. Francis and L. Bian, “Deep Learning for Distortion Prediction in Laser-Based Additive Manufacturing using Big Data,” Manuf. Lett. 20, 10–14 (2019). [CrossRef]  

3. J. Majumdar and I. Manna, “Laser processing of materials,” Sadhana 28(3-4), 495–562 (2003). [CrossRef]  

4. X. Zang, C. Jian, T. Zhu, Z. Fan, W. Wang, M. Wei, B. Li, M. Diaz, P. Ashby, Z. Lu, Y. Chu, Z. Wang, X. Ding, Y. Xie, J. Chen, J. Hohman, M. Sanghadasa, J. Grossman, and L. Lin, “Laser-sculptured ultrathin transition metal carbide layers for energy storage and energy harvesting applications,” Nat. Commun. 10(1), 1 (2019). [CrossRef]  

5. M. Eichhorn, “Pulsed 2 μm fiber lasers for direct and pumping applications in defence and security,” Proc. SPIE 7836, 78360B (2010). [CrossRef]

6. M. Sparkes and W. Steen, “Light” Industry: An Overview of the Impact of Lasers on Manufacturing, Elsevier, Coventry, UK (2018).

7. J. Pocorni, J. Powell, J. Frostevarg, and A. Kaplan, “Investigation of the piercing process in laser cutting of stainless steel,” J. Laser Appl. 29(2), 022201 (2017). [CrossRef]  

8. J. Fieret, M. Terry, and B. Ward, “Aerodynamic Interactions During Laser Cutting,” Proc. SPIE 668 (1986).

9. Y. Chen, B. Chen, Y. Yao, C. Tan, and J. Feng, “A spectroscopic method based on support vector machine and artificial neural network for fiber laser welding defects detection and classification,” NDT&E Int. 108, 102176 (2019). [CrossRef]  

10. L. Shepeleva, B. Medres, W. Kaplan, M. Bamberger, and A. Weisheit, “Laser cladding of turbine blades,” Surf. Coat. Technol. 125(1-3), 45–48 (2000). [CrossRef]  

11. V. Balasubramaniam, D. Rajkumar, P. Ranjithkumar, and C. Narayanan, “Comparative study of mechanical technologies over laser technology for drilling carbon fiber reinforced polymer materials,” Indian J. Eng. Mater. Sci. 27, 19–32 (2020).

12. J. Thieme, “Fiber Laser – New Challenges for the Materials Processing,” Laser Tech. J. 4(3), 58–60 (2007). [CrossRef]  

13. M. Zervas, “High Power Ytterbium-Doped Fiber Lasers — Fundamentals and Applications,” Int. J. Mod. Phys. B 28(12), 1442009 (2014). [CrossRef]  

14. J. Mesko, “The Effect of Selected Technological Parameters of Laser Cutting on the Cut Surface Roughness,” Tehn. Vjes. – Tech. Gaz. 25(4), (2018).

15. T. Arai, “Generation of Striations During Laser Cutting of Mild Steel,” SOP Trans. Appl. Phys. 2014(2), 81–95 (2014). [CrossRef]

16. O. Bocksrocker, P. Berger, B. Regaard, V. Rominger, and T. Graf, “Characterization of the melt flow direction and cut front geometry in oxygen cutting with a solid state laser,” J. Laser Appl. 29(2), 022202 (2017). [CrossRef]  

17. I. Miraoui, M. Boujelbene, and E. Bayraktar, “Analysis of Roughness and Heat Affected Zone of Steel Plates Obtained by Laser Cutting,” Adv. Mater. Res. (Durnten-Zurich, Switz.) 974, 169–173 (2014). [CrossRef]  

18. T. Mitchell, Machine Learning, McGraw-Hill, New York (1997).

19. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, MIT Press, Cambridge, MA, US (2016).

20. K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Netw. 2(5), 359–366 (1989). [CrossRef]

21. A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” NIPS (2012).

22. M. Zeiler and R. Fergus, “Visualizing and Understanding Convolutional Networks,” ArXiv, [online] (2013). Available at: http://export.arxiv.org/abs/1311.2901v3. [Accessed 15 March 2021].

23. K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks For Large-Scale Image Recognition,” ArXiv, (2015) [online] Available at: https://arxiv.org/abs//1409.1556v6. [Accessed 15 March 2021].

24. K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” ArXiv, [online] (2015). Available at: https://export.arxiv.org/abs/1512.03385v1. [Accessed 15 March 2021].

25. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going Deeper with Convolutions,” ArXiv, (2015) [online] Available at: https://arxiv.org/abs/1409.4842v1. [Accessed 15 March 2021].

26. I. Serban, R. Lowe, L. Charlin, and J. Pineau, “Generative Deep Neural Networks for Dialogue: A Short Review,” ArXiv, [online] (2016). Available from: https://arxiv.org/abs/1611.06216. [Accessed 24 September 2020].

27. I. Goodfellow, M. Mirza, B. Xu, D. Warde-Farley, and A. Courville, “Generative Adversarial Nets,” ArXiv, [online] (2014). Available from: https://arxiv.org/abs/1406.2661. [Accessed 24 September 2020].

28. M. Lucic, K. Kurach, M. Michalski, S. Gelly, and O. Bousquet, “Are GANs Created Equal? A Large-Scale Study,” ArXiv, [online] (2018). Available at: <https://arxiv.org/abs/1711.10337> [Accessed 20 March 2021].

29. J. Bao, D. Chen, F. Wen, H. Li, and G. Hua, “CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training,” ArXiv, [online] (2017). Available at: <https://arxiv.org/abs/1703.10155v1> [Accessed 20 March 2021].

30. I. Durugkar, I. Gemp, and S. Mahadevan, “Generative Multi-Adversarial Networks,” ArXiv, [online] (2017). Available at: <https://arxiv.org/abs/1611.01673> [Accessed 20 March 2021].

31. M. Mirza and S. Osindero, “Conditional Generative Adversarial Nets,” ArXiv, [online] (2014). Available at: <https://arxiv.org/abs/1411.1784v1> [Accessed 20 March 2021].

32. B. Yilbas, The Laser Cutting Process, Elsevier Science, San Diego (2017).

33. H. Injeyan and G. Goodno, High Power Laser Handbook, McGraw-Hill Professional, New York (2011).

34. G. Santolini, P. Rota, D. Gandolfi, and P. Bosetti, “Cut Quality Estimation in Industrial Laser Cutting Machines: A Machine Learning Approach,” IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, pp. 389–397, (2019).

35. H. Tercan, T. Khawli, U. Eppelt, C. Büscher, T. Meisen, and S. Jeschke, “Improving the laser cutting process design by machine learning techniques,” Prod. Eng. 11(2), 195–203 (2017). [CrossRef]  

36. Y. Madhukar, S. Mullick, and A. Nath, “An investigation on co-axial water-jet assisted fiber laser cutting of metal sheets,” Opt. Lasers Eng. 77, 203–218 (2016). [CrossRef]

37. C. Stadter, M. Schmoeller, L. von Rhein, and M. Zaeh, “Real-time prediction of quality characteristics in laser beam welding using optical coherence tomography and machine learning,” J. Laser Appl. 32(2), 022046 (2020). [CrossRef]  

38. K. Wasmer, T. Le-Quang, B. Meylan, F. Vakili-Farahani, M. Olbinado, A. Rack, and S. Shevchik, “Laser processing quality monitoring by combining acoustic emission and machine learning: a high-speed X-ray imaging approach,” Proc. CIRP 74, 654–658 (2018). [CrossRef]  

39. X. Zhen-ying, W. Rong, and Z. Rong, “Prediction of weld penetration status based on sparse representation in fiber laser welding,” Proc. SPIE 11343, 106 (2019). [CrossRef]  

40. C. Knaak, U. Thombansen, P. Abels, and M. Kröger, “Machine learning as a comparative tool to determine the relevance of signal features in laser welding,” Proc. CIRP 74, 623–627 (2018). [CrossRef]  

41. A. Anastasiou, E. Zacharaki, D. Alexandropoulos, K. Moustakas, and N. Vainos, “Machine learning based technique towards smart laser fabrication of CG,” Microelectron. Eng. 227, 111314 (2020). [CrossRef]  

42. X. Yao, S. Moon, and G. Bi, “A hybrid machine learning approach for additive manufacturing design feature recommendation,” Rapid Prototyp. J. 23(6), 983–997 (2017). [CrossRef]

43. D. Heath, J. Grant-Jacob, Y. Xie, B. Mackay, J. Baker, R. Eason, and B. Mills, “Machine learning for 3D simulated visualization of laser machining,” Opt. Express 26(17), 21574 (2018). [CrossRef]  

44. N. Sanner, N. Huot, E. Audouard, C. Larat, J. Huignard, and B. Loiseaux, “Programmable focal spot shaping of amplified femtosecond laser pulse,” Opt. Lett. 30(12), 1479 (2005). [CrossRef]

45. Y. Xie, D. J. Heath, J. A. Grant-Jacob, B. S. Mackay, M. D. T. McDonnell, M. Praeger, R. W. Eason, and B. Mills, “Deep learning for the monitoring and process control of femtosecond laser machining,” J. Phys. Photonics 1(3), 035002 (2019). [CrossRef]

46. M. D. T. McDonnell, J. A. Grant-Jacob, Y. Xie, M. Praeger, B. S. Mackay, R. W. Eason, and B. Mills, “Modelling laser machining of nickel with spatially shaped three pulse sequences using deep learning,” Opt. Express 28(10), 14627–14637 (2020). [CrossRef]

47. W. Feng, J. Guo, W. Yan, H. Wu, Y. Wan, and X. Wang, “Underwater laser micro-milling of fine-grained aluminium and the process modelling by machine learning,” J. Micromech. Microeng. 30(4), 045011 (2020). [CrossRef]

48. B. Mills, D. J. Heath, J. A. Grant-Jacob, Y. Xie, and R. W. Eason, “Image-based monitoring of femtosecond laser machining via a neural network,” J. Phys. Photonics 1(1), 015008 (2018). [CrossRef]

49. M. Zuric, O. Nottrodt, and P. Abels, “Multi-Sensor System for Real-Time Monitoring of Laser Micro-Structuring,” J. Laser Micro/Nanoeng. 14(3), 245–254 (2019). [CrossRef]

50. B. Mills, D. Heath, J. Grant-Jacob, and R. Eason, “Predictive capabilities for laser machining via a neural network,” Opt. Express 26(13), 1–9 (2018). [CrossRef]

51. B. S. MacKay, M. Praeger, J. Grant-Jacob, J. Kanczler, R. Eason, R. Oreffo, and B. Mills, “Modelling adult skeletal stem cell response to laser-machined topographies through deep learning,” Tissue Cell 67, 101442 (2020). [CrossRef]

52. D. Teixidor, M. Grzenda, A. Bustillo, and J. Ciurana, “Modeling pulsed laser micromachining of micro geometries using machine-learning techniques,” J. Intell Manuf. 26(4), 801–814 (2015). [CrossRef]  

53. B. MacKay, S. Blundell, O. Etter, Y. Xie, M. McDonnell, M. Praeger, J. Grant-Jacob, R. Eason, and B. Mills, “Automated 3D labelling of fibroblasts and endothelial cells in SEM-imaged placenta using deep learning,” Proc. Int. Joint Conf. Biom. Eng. Sys. Tech. 13, 2–46 (2020). [CrossRef]

54. D. Weichert, P. Link, A. Stoll, S. Rüping, S. Ihlenfeldt, and S. Wrobel, “A review of machine learning for the optimization of production processes,” Int. J. Adv. Manuf. Technol. 104(5-8), 1889–1902 (2019). [CrossRef]  

55. J. Grant-Jacob, M. Praeger, M. Loxham, R. W. Eason, and B. Mills, “Lensless imaging of pollen grains at three-wavelength using deep learning,” Environ. Res. Commun. (2020).

56. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018). [CrossRef]  

57. J. Grant-Jacob, B. Mackay, Y. Xie, D. J. Heath, M. Loxham, R. W. Eason, and B. Mills, “A neural lens for super-resolution biological imaging,” J. Phys. Commun. (2019).

58. U. Karanfil and U. Yalcin, “Real-time monitoring of high-power fibre-laser cutting for different types of materials,” Ukr. J. Phys. Opt. 20(2), 60–72 (2019). [CrossRef]  

59. Alexander F. Courtier, Michael McDonnell, Matt Praeger, James A. Grant-Jacob, Christophe Codemard, Paul Harrison, Ben Mills, and Michalis Zervas, Dataset 1, figshare (2021), https://doi.org/10.6084/m9.figshare.15050154.

60. P. Isola, J. Zhu, T. Zhou, and A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” ArXiv, [online] (2016). Available at: <https://arxiv.org/abs/1611.07004> [Accessed 20 March 2021].

61. Alexander F. Courtier, Michael McDonnell, Matt Praeger, James A. Grant-Jacob, Christophe Codemard, Paul Harrison, Ben Mills, and Michalis Zervas, Dataset 2, figshare (2021), https://doi.org/10.6084/m9.figshare.15050157.

62. Alexander F. Courtier, Michael McDonnell, Matt Praeger, James A. Grant-Jacob, Christophe Codemard, Paul Harrison, Ben Mills, and Michalis Zervas, Dataset 3, figshare (2021), https://doi.org/10.6084/m9.figshare.15050136.

63. Alexander F. Courtier, Michael McDonnell, Matt Praeger, James A. Grant-Jacob, Christophe Codemard, Paul Harrison, Ben Mills, and Michalis Zervas, Dataset 4, figshare (2021), https://doi.org/10.6084/m9.figshare.15050142.

64. Alexander F. Courtier, Michael McDonnell, Matt Praeger, James A. Grant-Jacob, Christophe Codemard, Paul Harrison, Ben Mills, and Michalis Zervas, Dataset 5, figshare (2021), https://doi.org/10.6084/m9.figshare.15050148.

Supplementary Material (5)

Name            Description
Dataset 1       Experimental and predicted cutting speeds for a confusion matrix
Dataset 2       Percentage of dark spots for v=15, 20 m/min, experimental and predicted.
Dataset 3       Distribution of angles for v=15 m/min, experimental and predicted.
Dataset 4       Distribution of angles for v=20 m/min, experimental and predicted.
Dataset 5       Distribution of pixel intensities for v=15, 20 m/min, experimental and predicted

