Abstract

Aberrations degrade the accuracy of quantitative, imaging-based measurements such as particle image velocimetry (PIV). Adaptive optical elements can in principle correct the wavefront distortions, but are limited by their technical specifications. Here we propose an actuator-free correction based on a multiple-input deep convolutional neural network, which uses an additional input from a wavefront sensor to correct time-varying distortions. It is applied to imaging flow velocimetry to conduct measurements through a fluctuating air-water phase boundary. The dataset for the neural network is generated by an experimental setup with a deformable mirror. The correction performance of the trained model is evaluated in terms of image quality, which is improved significantly, and flow measurement results, where the errors induced by the distortion from the fluctuating phase boundary are reduced by 82%. The technique has the potential to replace classical closed-loop adaptive optical systems where the performance of the actuators is not sufficient.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Adaptive optics (AO) is a widely used technique to improve the performance of optical systems by removing distorting effects from information-carrying light. It has been employed in different research areas, such as astronomical telescopes, ophthalmology, free-space optical communication and biological microscopy [1]. A typical AO system consists of a wavefront sensor, e.g., a Hartmann-Shack wavefront sensor (HSWFS), a wavefront corrector, e.g., a deformable mirror (DM) or a liquid-crystal-on-silicon spatial light modulator (LCoS-SLM), and a real-time control unit. As all optical and electronic components are connected in a feedback control loop, this setup is referred to as a closed-loop AO system.

Particle image velocimetry (PIV) [2] is widely employed in basic and applied investigations in fluid mechanics [3]. PIV is a camera-based optical measurement method to determine the velocity field in complex flows [4]. Seeding particles that follow the flow are added to the fluid. By using an illumination light sheet, the light scattered by these seeding particles is imaged onto a camera. The 2D velocity information is gained from cross-correlating two consecutive images. However, optical distortions, introduced by inhomogeneous refractive index fields [5] or by a fluctuating phase boundary, e.g., an open air-water interface occurring at water channels [6] or basins, lead to blurred particle images and uncertain particle position assignment, resulting in a degradation of velocity measurement accuracy [7]. While static wavefront aberrations can be easily corrected by calibration or algorithms [8,9], removing dynamic distortions is challenging.

In our previous research, adaptive optics has been used to overcome this aberration problem in flow measurement and other applications [10–18]. In Ref. [10], an adaptive optics system was first used to improve the properties of fluid flow measurement techniques in complex environments by correcting the optical distortion caused by a dynamic air-water interface in an interferometric velocity measurement technique. The Fresnel reflex of a phase boundary is used as a guide star for an adaptive optics system employed in flow measurement in [11], where two Fresnel guide stars are applied for interferometric velocity measurements. Both a transmission guide star (TGS) and a Fresnel guide star (FGS) are used for wavefront shaping in PIV to correct distortion with spatial light modulators in [12]. A sensor-less adaptive optics technique is employed for distortion correction in PIV in [13]. In [14], an adaptive optics element, i.e., a spatial light modulator, is used to generate a spiral phase mask which produces double images of seeding particle shadows in particle tracking velocimetry (PTV) for 3D measurement of flow fields. Multi-actuator adaptive lenses are used to correct aberrations in microscopy [15]. A Field Programmable System on Chip (FPSoC) based adaptive optics system is presented in [16] to correct time-varying aberrations in PIV. All these approaches are actuator-based AO systems, and the quality of correction depends strongly on the performance of the actuator, e.g., the deformable mirror [16] or the LCoS-SLM [12]. Optical distortions from a fluctuating phase boundary in flow measurements generally exhibit a large frequency range, high spatial frequencies and a large dynamic range caused by capillary waves. Actuator-based approaches are limited by the performance of the actuator for such wavefront distortions: deformable mirrors, for example, are limited in spatial frequency, and LCoS-SLMs are limited in speed. To overcome the limitation imposed by the wavefront corrector, an actuator-less AO system is required.

In recent years, the rapidly developing deep learning technique has been introduced to adaptive optics. It has been applied to many aspects of AO, for example, wavefront measurement [19,20], wavefront reconstruction [21], control models for sensor-less AO [22,23], post-processing of adaptive optics retinal images [24], and determination of aberration functions in microscopy [25]. Different deep-learning-based approaches have also been implemented in image-based flow measurement: for example, 3D particle fields are reconstructed by using a convolutional neural network (CNN) [26], the motion of a flow field is estimated by using different neural network architectures [27–29], particle streak velocimetry is implemented by using a simple CNN [30], and 3D flow measurement is achieved by combining a convolutional neural network with astigmatic PTV [31]. But none of them focus on distortion correction in flow measurements. When the optical path of an image-based flow measurement setup passes through an open air-water surface, optical distortions caused by the fluctuating phase boundary can significantly deteriorate the measurement accuracy: they cause inaccurate particle position assignment and induce significant measurement uncertainty, which can be regarded as an induced 'virtual flow'. Under such conditions, the optical distortion needs to be corrected, otherwise the measurement will be untrustworthy or even totally incorrect.

In this paper, we focus on a deep-learning-based approach to correct optical distortions occurring in PIV, which can also be regarded as a new concept of AO, i.e., actuator-less AO, where the distortions originate from a fluctuating phase boundary. With a deep-learning-based approach, a general function which approximates the light propagation through a complex medium, e.g., biological tissue, atmospheric turbulence, or a phase boundary in flow measurement, can be built up as a computational architecture by neural networks. Specifically, this function represents the actual physical process of the transmission from the input wavefront to the output image, and it can be learned from a dataset by a neural network. In our approach, we propose a deep-learning-based method using a specific convolutional neural network to restore degraded PIV images caused by wavefront aberrations. We call the proposed network the Multiple Input U-Net (MIUN).

The paper is organized in the following way: In Section 2 we analyze the distortion model in optical flow measurement and illustrate how we measure the distortion with a guide star (GS) and a Hartmann-Shack wavefront sensor (HSWFS). This is followed by the experiment for dataset generation, as well as a detailed explanation of the MIUN architecture and the training process. In Section 3, we evaluate the image correction performance of the MIUN on real PIV data obtained from a real experiment with different methods.

2. Method

2.1 Distortion from fluctuating phase boundary

In this section, we describe the distortion model for the air-water interface in camera-based flow measurement. As shown in Fig. 1, the PIV camera takes pictures of the particles vertically downward through a fluctuating phase boundary, which leads to refraction of the light; the particles are illuminated by a light sheet. The real particle image can be considered as a stationary and planar scene $I_g$. Because of the fluctuating air-water interface, each particle image frame $I$ on the PIV camera image plane is a distorted version of $I_g$ as follows:

$$I(\mathbf{x},t) =I_g(\mathbf{x}+\mathbf{w}(\mathbf{x},t),t)$$
where $\mathbf {w}(\mathbf {x},t)$ is the distortion function (warp function) at pixel $\mathbf {x}$ and time $t$. According to Ref. [33], the distortion function $\mathbf {w}(\mathbf {x},t)$ is related to the water surface height profile $h(\mathbf {x},t)$ at time $t$ by:
$$\mathbf{w}(\mathbf{x},t)=\alpha \nabla h(\mathbf{x},t)$$

Based on Snell’s law, it can be calculated that $\alpha =h_0(1-\frac {n'}{n})$, which shows that $\alpha$ is a constant that depends on the water surface reference height $h_0$ (when the air-water interface is at rest) and the refractive indices $n'$ and $n$ of the two media. From Eq. (1) and Eq. (2), we know that at any time instant the distortion function leads to local geometric distortions at each pixel, which depend on its location, the relative refractive index and the surface height.
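The warp of Eqs. (1) and (2) can be sketched numerically. The following is a minimal sketch, assuming nearest-neighbour resampling and hypothetical array names (`I_g` for the undistorted scene, `h` for the surface height profile):

```python
import numpy as np

def warp_image(I_g, h, alpha):
    """Distort a ground-truth image I_g through a surface height
    profile h: I(x) = I_g(x + alpha * grad h(x)), cf. Eqs. (1)-(2).
    Nearest-neighbour sampling keeps the sketch short."""
    H, W = I_g.shape
    dh_dy, dh_dx = np.gradient(h)           # surface slope (Eq. 2)
    yy, xx = np.mgrid[0:H, 0:W]
    # Sample the ground truth at the displaced positions (Eq. 1).
    ys = np.clip(np.round(yy + alpha * dh_dy).astype(int), 0, H - 1)
    xs = np.clip(np.round(xx + alpha * dh_dx).astype(int), 0, W - 1)
    return I_g[ys, xs]
```

A flat surface ($\nabla h = 0$) reproduces the image unchanged, while a uniform tilt shifts every pixel by the same offset, matching the dominant tip/tilt aberrations discussed later.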

It is challenging to correct such distortions when the shape of the air-water interface is unknown. Correction without knowledge of the distortion is similar to blind deconvolution, but the kernel, i.e., the unknown point spread function (PSF) of the optical imaging system, is spatially varying and larger than under normal conditions. Even with different neural networks, such geometric distortions cannot be corrected thoroughly [32].


Fig. 1. Distortion model and measurement principle for distorted phase from fluctuating air-water interface.


In order to obtain distortion information from the phase boundary, we apply a spatially distributed guide star technique, which was introduced in our previous work [12], and a Hartmann-Shack wavefront sensor (HSWFS) for the wavefront measurement. In general, a guide star carries wavefront information through a scattering medium by a focused light beam. However, for camera-based optical flow measurement like PIV, the distortion occurs on a 2D plane, so a spatially distributed guide star is needed to track the optical path length change within the imaging path. As shown in Fig. 1, there is a single phase boundary between the detector and the measuring object, which is the depth-of-interest (DOI) layer where the particles are illuminated by the light sheet. To obtain the shape of the fluctuating phase boundary, the wavefront through the boundary, i.e., the optical path length difference, needs to be measured by the HSWFS. Figure 2 shows the details of the measured wavefront aberrations from the fluctuating air-water interface. The principle of the HSWFS is shown in Fig. 2(a): it consists of a two-dimensional microlens array and a detector, which is a charge-coupled device (CCD) in our case. The input wavefront is first sampled by the microlens array; each microlens focuses its local portion of the wavefront onto the CCD, which is located in the focal plane of the microlens. The resulting Hartmannogram is a spot array image, as shown in Fig. 2(b). The average slope of each sampled local wavefront can be determined from the spot displacement relative to the reference position. The reference spot positions are the centroids of the spots estimated when the air-water surface is at rest, i.e., with the laser guide star passing through the optical setup without any distortion. The calibrated HSWFS can reconstruct the wavefront from the Hartmannogram; Figs. 2(c) and 2(d) show the wavefront measurement results, which are discussed in detail, together with the calibration process of the measurement setup, in the following sections.
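The centroid-based slope estimation described above can be sketched as follows. This is a simplified example under the assumption of a regular subaperture grid with a hypothetical `cell` pixels per microlens; it ignores the extended-dynamic-range handling of Ref. [35]:

```python
import numpy as np

def spot_displacements(hartmannogram, ref_centroids, cell):
    """Per-subaperture spot displacements (proportional to the local
    wavefront slope). ref_centroids[j, i] = (x, y) spot centroid
    recorded with the water surface at rest; cell = subaperture size
    in pixels."""
    ny, nx = ref_centroids.shape[:2]
    disp = np.zeros((ny, nx, 2))
    for j in range(ny):
        for i in range(nx):
            sub = hartmannogram[j*cell:(j+1)*cell, i*cell:(i+1)*cell]
            total = sub.sum()
            if total == 0:
                continue                     # no spot detected
            yy, xx = np.mgrid[0:cell, 0:cell]
            cx = (xx * sub).sum() / total + i * cell  # intensity centroid
            cy = (yy * sub).sum() / total + j * cell
            disp[j, i] = (cx - ref_centroids[j, i, 0],
                          cy - ref_centroids[j, i, 1])
    return disp
```

The resulting displacement tensor is exactly the quantity that later serves as the additional network input.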


Fig. 2. Measurement of distorted phase boundaries from fluctuating air-water interface by using Hartmann-Shack wavefront sensor.


The spatially distributed guide star is used here for wavefront distortion measurement; the measured wavefront is spatially related to the distorted image. Because a convolutional neural network maintains spatial structure during propagation, information from the measured wavefront can be used for deep-learning-based image distortion correction. The necessity of a second input for the convolutional neural network follows from its principle. Unlike other image translation problems, correction here requires features from both the distortion and the image. The features that a convolutional neural network can extract from a distorted image come from the image itself; with numerous images in flow measurement and dynamically changing distortions, different distortions can produce different distorted versions of one and the same real image, i.e., different network inputs share the same label. This causes the so-called curse of dimensionality [34] in deep learning and leads to failure of training. Consequently, a single-input neural network cannot solve this problem; our approach therefore provides additional, critical information to the network as a second input. Since spatially related distortion information can easily be obtained by wavefront measurement, we develop a multiple-input neural network architecture for image distortion correction. The additional network input is the spot displacements along the x and y directions, because they are highly spatially correlated with the distorted image.

2.2 Optical setup for distorted phase boundary measurement and data generation

Supervised learning approaches need a dataset with ground truth as labels to train the neural network. The training process can be considered as iteratively optimizing the weights of the network model by reducing the difference between predicted results and ground truth. Due to the difficulty of obtaining distorted PIV images together with corresponding undistorted PIV images through a real-time fluctuating phase boundary, we use a deformable mirror to reproduce the distortion of the water surface, and two PIV cameras, placed before and after the deformable mirror in a shared optical path, capture undistorted PIV images as ground truth and distorted PIV images, respectively, of the same object. The distorted phase boundary is measured by a laser guide star with the HSWFS. The real distortion from the fluctuating water surface is measured first; the measured distortion is then displayed on the deformable mirror for dataset generation. During dataset generation, the HSWFS measures the wavefront displayed by the deformable mirror, which also distorts the image on one of the PIV cameras. The measured spot displacements, which represent the distortion information, are used as an additional input for the neural network. The optical setup is shown in Fig. 3.


Fig. 3. Schematic of the optical setup for wavefront measurement and dataset generation. The air flow is switched on only for the phase boundary measurement, to excite the water surface; for dataset generation, the distortion is generated by the deformable mirror while the surface is at rest. PIV camera 1 captures undistorted particle images for the labels of the dataset, PIV camera 2 captures distorted images for the inputs of the dataset. The inset (red dashed box) shows the measurement location from a top view. HSWFS: Hartmann-Shack wavefront sensor; LP: longpass filter; SP: shortpass filter; BS: 50:50 beamsplitter; light sheet: generated by a laser source (660 nm) combined with a cylindrical lens.


In this setup, the measurement object of the two PIV cameras is a two-dimensional image plane in a water-filled basin with an open water surface. The size of the basin is 100 mm $\times$ 80 mm $\times$ 55 mm (L$\times$W$\times$D). In this basin a nozzle, connected to a pump, generates a water flow. The outlet of the nozzle is located at the center along the basin width, 32 mm away from the basin wall (i.e., the length of the nozzle inside the basin), and its inner diameter is 7.5 mm. The outlet center is 16.4 mm above the basin bottom, which is also the height of the laser sheet and the image plane. The depth of interest is 25 mm, i.e., the depth of the image plane when the air-water surface is at rest. Capillary waves on the water surface are excited by an air flow, generated by a nozzle connected to an air compressor, with a valve controlling the airflow velocity. The air flow nozzle, with an inner diameter of 7.76 mm, is positioned about 4 cm above the corner of the water basin, directed at the water surface at a 45$^{\circ}$ angle. It is worth mentioning that the airflow does not affect the bulk flow in the light-sheet-illuminated layer, and the water flow from the nozzle has no influence on the water surface. The laser guide star (561 nm) propagates upward through the window at the bottom of the water basin, and a laser light sheet, generated by a laser source (660 nm) combined with a cylindrical lens, enters the water basin through the side window to illuminate the seeding particles. The seeding particles are reflecting silver-coated hollow glass spheres (DANTEC Dynamics, S-HGS-10, diameter = 10 $\mu$m). The light from the laser guide star and the light scattered by the particles pass through the air-water surface; both are observed by a microscope objective (Plan Apo, WD = 34 mm, magnification = 2, NA = 0.055, infinity corrected). The center of the microscope objective projected onto the image plane is located roughly at the center of the water-flow nozzle, 20 mm away from the nozzle outlet. Beamsplitter BS1 then splits the light beam from the light sheet onto PIV camera 1 (the label camera for undistorted images), which captures particle images before the deformable mirror, without distortion. A longpass filter (cut-on wavelength: 600 nm) is placed in this imaging path to block the light from the laser guide star. The other part of the light beam passes through beamsplitter BS2 onto the deformable mirror (DM97-08, Alpao, France). The deformable mirror is based on a continuous reflective surface actuated by 97 magnetic actuators; with these 97 actuators, a combination of Zernike polynomials can be displayed. The distorted water surface is simulated by displaying a linear combination of Zernike polynomials whose coefficients come from the Hartmann-Shack wavefront measurement. After the deformable mirror and BS2, beamsplitter BS3 directs the particle light to PIV camera 2 (the input camera for distorted images), which captures PIV images distorted by the deformable mirror; another longpass filter (cut-on wavelength: 600 nm) is again used to block the guide star light. The remaining part of the light, which contains the guide star light, is directed onto an in-house developed HSWFS (microlens array: Thorlabs, MLA150-7AR-M), with a shortpass filter (cut-on wavelength: 600 nm) to block the light from the light sheet. In order to capture the undistorted PIV images, the distorted PIV images, and the Hartmannogram for the dataset, three identical cameras are used in the setup, triggered by pulses from a signal generator. Table 1 shows the detailed parameters of the setup.


Table 1. Details of the adaptive PIV technique

Using the optical setup, distortions from the real fluctuating air-water interface are measured first. The air flow is used to excite the water surface while the wavefront difference of the fluctuating phase boundary is measured. Figures 2(b) and 2(c) show one frame of the measured Hartmannogram and its reconstructed wavefront. Compared to wavefront distortion from atmospheric turbulence or other distortions common in AO systems, the distortion caused by the fluctuating phase boundary has a larger dynamic range. Typical HSWFS settings cannot meet the requirements on dynamic range and spatial sampling frequency at the same time. From Fig. 2(b) we can see that, for the HSWFS in this setup, some spots on the Hartmannogram lie beyond their corresponding detection areas, i.e., outside the HSWFS dynamic range. A special algorithm to expand the dynamic range, described in Ref. [35], is therefore used here. To analyze the features of the distortion from the fluctuating air-water interface, and also for dataset generation, 10,000 Hartmannograms are captured while the water surface is excited by the air flow. With the centroid estimation algorithm of [35] and the wavefront reconstruction algorithm, the 10,000 frames of distorted wavefront from the fluctuating phase boundary are measured and represented by a linear combination of Zernike polynomials under the Noll index [36]:

$$\Delta\phi=\sum_{m=1}^{n}a_mZ_m$$

The amplitude spectra of the first 10 Zernike orders of the measured wavefronts are shown in Fig. 2(d). The analysis of the amplitude spectra shows that the wavefront distortion caused by the fluctuating water surface is mainly composed of low-frequency, low-order aberrations, with high amplitudes at tip and tilt, while defocus and astigmatism also account for a large proportion.

The whole experimental process can be summarized in the flowchart of Fig. 4. First, the HSWFS and the correspondence between the HSWFS and the DM are calibrated. The reference wavefront for HSWFS calibration is measured first, with the water surface steady and the DM flat; the centroids of the spots under the reference wavefront are taken as the reference spot positions. In order to reconstruct the wavefront by the Zernike polynomials of Eq. (3), the reconstruction matrix of the HSWFS needs to be calibrated; it yields the coefficients of the different Zernike orders by a matrix multiplication with the spot-displacement vector from the HSWFS. In the calibration process, each Zernike polynomial is displayed on the DM, and the resulting spot displacements on the Hartmannogram are estimated and saved in a matrix; the (pseudo-)inverse of this matrix is the Zernike reconstruction matrix. After all needed Zernike polynomials have been displayed, the reconstruction matrix is obtained, and the wavefront represented by Eq. (3) can be calculated as the product of the spot displacements and the reconstruction matrix. In this setup, the effective spot number is 332, and only the first 12 Zernike orders are calculated, because the distortion from the phase boundary mainly consists of low-order Zernike polynomials according to the result in Fig. 2(d).
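The calibration step above amounts to stacking the slope responses of the individual Zernike modes and pseudo-inverting the result. A minimal sketch, with illustrative function names and assuming a linear slope response:

```python
import numpy as np

def calibrate_reconstructor(slope_responses):
    """slope_responses[k]: flattened spot-displacement vector measured
    while Zernike mode k (unit amplitude) is displayed on the DM.
    Returns the matrix mapping measured slopes to Zernike coefficients."""
    A = np.stack(slope_responses)            # (n_modes, 2 * n_spots)
    return np.linalg.pinv(A)                 # (2 * n_spots, n_modes)

def zernike_coefficients(slopes, R):
    """Coefficients a_m of Eq. (3) from a measured slope vector."""
    return slopes @ R
```

With 332 spots and 12 modes, the response matrix is 12 by 664, so the pseudo-inverse gives a least-squares fit of the coefficients to the measured slopes.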


Fig. 4. Flowchart showing the process of dataset generation, neural network training and PIV image correction.


After calibration, the wavefront distortion from the fluctuating air-water interface is measured: 10,000 Hartmannogram frames are taken and the wavefronts are reconstructed, as described above. Then the water surface is set to rest and the 10,000 measured wavefront frames are displayed on the deformable mirror. All measured wavefronts are displayed consecutively and cyclically to collect enough data, while particle images and Hartmannograms are recorded synchronously from the respective detectors. In this way, all data are taken while the air-water interface is stable, and the distortions are reproduced by the deformable mirror. By this process we obtain distorted PIV images, undistorted PIV images as ground truth, and the corresponding spot-displacement tensors.

The alignment of the three detectors needs to be clarified: mask patterns (calibration charts) are placed on the image plane, and based on the captured images of the mask patterns, the regions of interest (ROIs) of the three detectors are set. This procedure ensures that all detectors have the same field of view, i.e., the alignment of the three detectors. The ROI of the PIV image is selected by locating the center of the laminar flow at the center of the PIV image along the x axis. The ROI on the Hartmannogram is a 16 by 16 spot array corresponding to 256 by 256 pixels on the undistorted and distorted PIV images.

The size of the dataset is crucial for neural network training: the dataset should provide enough samples for learning. If it is too small in terms of sample distribution diversity, the model cannot learn well and may overfit. The dataset should therefore ensure distribution diversity, i.e., all features of the task should be included. 20,000 pairs of data are generated for neural network training and testing. This dataset size is chosen because the results in Fig. 2(d) show that 10,000 distortion frames provide enough distribution diversity in terms of distortions. The training dataset is then fed to the proposed multiple-input neural network for training. After training, the trained network can correct a distorted PIV image with the additional input of the spot-displacement tensor. The details of the network architecture and the training process are explained in the next section.
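The resulting triples can be wrapped for PyTorch training. The following is a sketch under the assumption that the arrays are already loaded into memory; the class and attribute names are illustrative:

```python
import torch
from torch.utils.data import Dataset

class PIVDistortionDataset(Dataset):
    """Pairs each distorted PIV frame and its spot-displacement tensor
    with the undistorted frame as label. Shapes follow the text:
    images 1x256x256, slope tensors 2x16x16 (channels first)."""
    def __init__(self, distorted, slopes, ground_truth):
        self.distorted = distorted          # (N, 1, 256, 256)
        self.slopes = slopes                # (N, 2, 16, 16)
        self.ground_truth = ground_truth    # (N, 1, 256, 256)

    def __len__(self):
        return len(self.distorted)

    def __getitem__(self, idx):
        return self.distorted[idx], self.slopes[idx], self.ground_truth[idx]
```

A standard `torch.utils.data.DataLoader` can then batch and shuffle these triples during training.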

2.3 Architecture of MIUN and training process

The architecture of the proposed neural network MIUN is illustrated in Fig. 5. The main structure is inspired by the popular network U-Net [37], which was proposed for biomedical image segmentation. The PIV image correction process can be categorized as an image-to-image translation problem. However, randomly distributed geometric distortions applied to the same scene must all lead to the same correction result, so simply using distorted PIV images as network input and undistorted images as labels makes training difficult. Different from the original U-Net, we therefore add an additional input to the network. The spot displacements from the HSWFS-measured Hartmannogram are spatially related to the distortion function applied to the image, i.e., the spot displacements represent the local geometric distortion in the sampled area. Based on this, we feed the spot-displacement tensor (16$\times$16$\times$2) calculated from the captured Hartmannogram to our network.


Fig. 5. Schematics and details of the proposed MIUN deep learning model.


The main input to the MIUN is the distorted PIV image (grayscale, 1 channel) with a size of 256$\times$256 pixels, captured by the input PIV camera of the optical setup. The additional input is the spot-displacement tensor of size 16$\times$16 with 2 channels: one channel for the displacements along the x coordinate and one for the displacements along the y coordinate.

The MIUN consists of three parts: downsampling blocks, which basically follow the contracting path of a typical convolutional network; upsampling blocks, which follow the expansive path of U-Net; and special concatenation steps for the additional slope-matrix input.

Each downsampling block consists of two repeated convolution layers with 3$\times$3 kernels, each followed by a LeakyReLU [38] activation function and a batch normalization layer [39] to accelerate training. Then a 2$\times$2 max pooling operation with stride 2 is applied for downsampling. Each downsampling block without additional input doubles the number of feature channels.
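A downsampling block of this kind can be sketched in PyTorch; the channel counts and module names are illustrative, not the exact MIUN configuration:

```python
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """Two 3x3 convolutions, each followed by LeakyReLU and batch
    normalization, then 2x2 max pooling with stride 2."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.BatchNorm2d(out_ch),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.BatchNorm2d(out_ch),
        )
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        skip = self.conv(x)   # feature map kept for the skip connection
        return self.pool(skip), skip
```

Stacking such blocks with doubled `out_ch` at each level reproduces the contracting path described above.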

In the upsampling path, the first operation is a 2$\times$2 upsampling layer (up-convolution), which halves the number of feature channels, followed by a concatenation with the corresponding cropped feature map from the downsampling blocks. This operation is important in both the original U-Net and our network because of the loss of detail information in every convolution. Then two 3$\times$3 convolution layers follow, each with a ReLU [40] activation function and a batch normalization operation.
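Correspondingly, an upsampling block could look like the following sketch (again with illustrative names; nearest-neighbour upsampling followed by a convolution stands in for the up-convolution):

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """2x2 upsampling that halves the channel count, concatenation
    with the skip feature map, then two 3x3 convolutions with ReLU
    and batch normalization."""
    def __init__(self, in_ch):
        super().__init__()
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(in_ch, in_ch // 2, kernel_size=3, padding=1),
        )
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, in_ch // 2, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.BatchNorm2d(in_ch // 2),
            nn.Conv2d(in_ch // 2, in_ch // 2, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.BatchNorm2d(in_ch // 2),
        )

    def forward(self, x, skip):
        x = self.up(x)                   # halve channels, double size
        x = torch.cat([x, skip], dim=1)  # re-inject skip features
        return self.conv(x)
```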

The significant difference between our network and the original U-Net is the additional input part. The main steps are similar to the downsampling part: two convolution layers with 3$\times$3 kernels, the first followed by a Tanh activation layer and the second by a LeakyReLU activation layer. We use a Tanh activation first because the negative values of the slope matrix still play an important role in the feature map; if LeakyReLU or ReLU were used here, most negative values would be suppressed, leading to a loss of distortion information. The feature map after the convolution operations is concatenated with the corresponding cropped feature map in the downsampling path; by this operation, the distortion information is fed into the network.
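The additional-input branch described here, with Tanh before LeakyReLU to preserve negative slope values, might be sketched as follows (channel counts are illustrative):

```python
import torch
import torch.nn as nn

class SlopeBranch(nn.Module):
    """Processes the 2-channel spot-displacement tensor. Tanh after the
    first convolution keeps the sign of the slopes; LeakyReLU follows
    the second convolution. The output is concatenated with the
    corresponding feature map in the downsampling path."""
    def __init__(self, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, out_ch, kernel_size=3, padding=1),
            nn.Tanh(),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
        )

    def forward(self, slopes, main_features):
        feat = self.conv(slopes)                        # (N, out_ch, 16, 16)
        return torch.cat([main_features, feat], dim=1)  # inject distortion info
```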

To emphasize the distortion information, the additional input is fed in twice. The feature map concatenated to the main downsampling path is also downsampled by a max pooling layer; after the same convolution operations, the cropped feature map is concatenated again with the corresponding feature map in the main path.

Compared to fully connected neural networks (FCNNs), a convolutional neural network maintains the spatial information and original structure of its input, and the spatial structure of the slope matrix is highly correlated with the image. This is why we also use convolution layers for the distortion-information input. The structure of the MIUN maintains the spatial structure of the different inputs, which are concatenated together.

At the final output layer, different from U-Net, which treats image segmentation as a classification problem, we use ReLU again to predict the pixel values. A dropout operation [41] is used in the upsampling path to avoid overfitting.

After testing different loss functions, it was found that the mean square error (MSE) and mean absolute error (MAE) loss functions have similar and best performance; the MSE loss function is chosen in this work. Correcting the distorted image means learning a complex function $\mathcal {F}$ which maps the distorted PIV image $I$ and the spot-displacement tensor $S$ to the ground truth PIV image $I_g$ as follows: $\mathcal {F}: \{I,S\} \rightarrow I_g$. The training process can be regarded as minimizing the loss between the corrected particle image and the ground truth image, learning from the training samples $\Omega$:

$$\operatorname*{arg\ min}_{\mathcal{F}} \sum_{\{I,S\}\in \Omega} ||\mathcal{F}\{I,S\} - I_g||_{2}^{2}$$

The symbol $||\cdots ||_{2}^{2}$ here represents the MSE loss function as follows:

$$loss =\frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M} ||\hat{I_g}(x,y)-I_g(x,y)||_{2}^{2}$$
where $\hat {I_g}$ is the corrected image from the mapping $\hat {I_g}=\mathcal {F}\{I,S\}$ and $N, M$ are the image dimensions. We apply the Adam optimizer [42] to update the learning rate during training. For training and testing of the proposed network, the 20,000 frames of data are split in the proportion 18:1:1 into training, validation and test sets. The training is performed on a GPU (NVIDIA GeForce RTX 2080Ti with 11 GB of memory). Given the GPU memory, the batch size is set to 4. The training and validation losses converge well after 300 epochs, which takes about 8 hours. The program is implemented in the PyTorch framework, version 1.6.
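Under these settings, one training step can be sketched as follows; the model and tensor names are placeholders, and any network taking an image and a slope tensor would fit:

```python
import torch
import torch.nn as nn

def train_step(model, optimizer, distorted, slopes, ground_truth):
    """One optimization step of Eq. (4): MSE between the corrected
    image F{I, S} and the ground truth I_g, updated with Adam."""
    optimizer.zero_grad()
    corrected = model(distorted, slopes)
    loss = nn.functional.mse_loss(corrected, ground_truth)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper's configuration this step would run over mini-batches of 4 from the 18,000-frame training split for 300 epochs.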

3. Results

3.1 Performance analysis by corrected PIV image quality

After training with the experimental dataset and network described above, the MIUN can be used directly to correct distorted PIV images. Figure 6 shows two representative frames of distorted PIV images and their restored results from the trained network. The image residuals between the distorted or corrected PIV images and the undistorted PIV image (ground truth) are also shown. From Fig. 6 it can be seen that the distortion mainly causes an inaccurate distribution of the particles in the PIV image, with a slight blur. After correction, the particles in the image are restored to their correct positions.

The performance of the trained MIUN is tested by measuring the corrected image quality over 1000 image pairs from the test dataset. We use the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM), which are widely used image quality metrics, for the assessment. As shown in Fig. 6, the PIV image quality degrades due to the optical distortions. Using the undistorted PIV image as reference (ground truth), the correction performance of the MIUN can be evaluated by PSNR and SSIM. PSNR is derived from the mean square error (MSE) and indicates the ratio of the maximum pixel intensity to the power of the distortion. The mathematical representation of the PSNR is as follows:

$$PSNR=10\log_{10}\left(\frac{MAX_{r}^{2}}{MSE}\right)=20\log_{10}\left(\frac{MAX_{r}}{\sqrt{MSE}}\right)$$
where $MSE$ is the mean square error between the reference image and the measured image, identical to the MSE loss function given in Eq. (5), and $MAX_{r}$ is the maximum signal value in the reference image. The SSIM is commonly used to quantify the similarity between an objective image and a reference image [43]. It combines local image structure, luminance, and contrast into a single local quality score and is defined as follows:
$$SSIM=\frac{(2\mu_o\mu_r+C_1)(2\sigma_{or}+C_2)}{(\mu_o^{2}+\mu_r^{2}+C_1)(\sigma_o^{2}+\sigma_r^{2}+C_2)}$$
where $\mu _o$, $\mu _r$ and $\sigma _o^{2}$, $\sigma _r^{2}$ respectively denote the mean value and variance of the objective image and the reference image, and $\sigma _{or}$ is the covariance between the objective image and the reference image. $C_1=(K_1 L)^{2}$ and $C_2=(K_2 L)^{2}$ are stabilizing constants, where $K_1$ and $K_2$ are usually set to 0.01 and 0.03, and $L$ is the dynamic range of the pixel values, which is 255 for the 8-bit grayscale images used here.
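The two metrics defined above can be computed as follows. This is a minimal NumPy sketch: the PSNR follows the equation directly, while the SSIM is evaluated globally over the whole image for brevity, whereas practical implementations (and presumably the assessment in this work) use a local sliding window.

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB between reference and measured image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, img, L=255.0, K1=0.01, K2=0.03):
    """Single-window (global) SSIM; a simplification of the usual
    locally windowed computation."""
    ref = ref.astype(float)
    img = img.astype(float)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mu_r, mu_o = ref.mean(), img.mean()
    var_r, var_o = ref.var(), img.var()          # sigma^2 terms
    cov = ((ref - mu_r) * (img - mu_o)).mean()   # sigma_or
    return ((2 * mu_o * mu_r + C1) * (2 * cov + C2)) / \
           ((mu_o ** 2 + mu_r ** 2 + C1) * (var_o + var_r + C2))
```

For identical images the global SSIM evaluates to exactly 1, and the PSNR diverges as the MSE tends to zero.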

 figure: Fig. 6.

Fig. 6. Two frames of undistorted-distorted PIV image pairs and their corrected results from the trained MIUN, with the corresponding measured wavefronts and image residuals. The image residuals represent the error between the undistorted and the distorted or corrected PIV image; they are calculated from the normalized grayscale images.


Figure 7 shows the improvement in image quality by the MIUN distortion correction: the mean PSNR improves from 21.6 dB to 31.1 dB after correction and the SSIM from 0.53 to 0.78. Analysing the relationship between the distorted PIV images and the root-mean-square (RMS) value of the wavefront shows that a higher RMS value, i.e. a larger average error over the entire reference wavefront, leads to stronger image degradation. The correction performance in terms of SSIM, however, remains almost constant. Another interesting phenomenon shown in Fig. 7 is that after correction the PSNRs are significantly improved, but their dispersion over the entire dataset increases compared to the distorted PIV images. This is because measurement uncertainties from the HSWFS are propagated into the model. It can also be seen that, on the test data, distortions larger than average slightly degrade the correction performance: when the dynamic range of the HSWFS is exceeded, the resulting error in the additional input causes a slight decrease in correction quality. In general, however, the MIUN yields a significant improvement of the image quality.

 figure: Fig. 7.

Fig. 7. Image quality assessment of PIV images before and after correction by trained neural network on test dataset; $\textrm{RMS}_{mean}=4.45$ $\mu \textrm{m}$; $\textrm{RMS}_{max}=11.45$ $\mu \textrm{m}$; $\textrm{PV}_{mean}=19.44$ $\mu \textrm{m}$; $\textrm{PV}_{max}=45.25$ $\mu \textrm{m}$.


3.2 Performance analysis by flow field velocity measurement

To evaluate the improvement of the flow measurement accuracy by the MIUN correction, we use the test dataset to calculate the flow field and compare the flow measurement results of the different cases, i.e. undistorted PIV images, distorted PIV images and PIV images corrected by the MIUN.

As described above, the flow is visualized by the PIV system, with reflecting seeding particles and a laser light sheet illuminating the particles in the measurement plane. For the flow measurement, the PIV camera synchronously records consecutive images with a time difference $\Delta t$. The images are divided into interrogation areas. The particle displacement $\Delta \vec{s}$ in each interrogation area is calculated by a cross-correlation algorithm [4]. The local two-dimensional velocity $\vec{v}$ is determined from the displacement vector $\Delta \vec{s}$ and the time difference $\Delta t$ in each interrogation area as $\vec{v}=\Delta \vec{s}/\Delta t$. From the frame rate set for all synchronized detectors, $\Delta t=0.02$ s follows in this setup. By repeating this process for all interrogation areas, a 2D flow vector field is obtained [44]. The PIV data processing is implemented with the open-source software PIVlab [45]. An FFT-based (Fast Fourier Transform) cross-correlation algorithm with multiple passes and deforming windows is used for the flow field estimation. The interrogation window size is set to 64 pixels for the first pass and 32 pixels for the second pass.
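The cross-correlation step above can be illustrated for a single interrogation window. This is a simplified single-pass sketch with integer-pixel accuracy (no window deformation or sub-pixel peak fitting, which PIVlab adds), not the software's actual implementation:

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Estimate the integer-pixel displacement between two interrogation
    windows via FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()   # remove mean intensity before correlating
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks beyond half the window size to negative displacements
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return np.array(shift)  # (dy, dx) in pixels

# velocity then follows from v = magnification * shift / dt with dt = 0.02 s
```

Applied to a window pair where the second frame is a pure shift of the first, the correlation peak recovers that shift directly.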

As mentioned above, the flow in the water basin is generated by a nozzle; the mean velocity of the laminar flow behind the nozzle is about $\bar{v}=2.4$ mm/s. From the water basin size and depth, the water parameters (approximately 10 $^{\circ}\textrm{C}$) and the hydraulic radius of the open channel flow, the Reynolds number is calculated as $\textrm{Re}=146.7$, showing that the measured flow is in the laminar regime. From the 1000 frames of test data, the trained MIUN predicted 1000 corrected PIV images. Figure 8 shows the flow measurement results of the undistorted, distorted and corrected cases. In Figs. 8(b)–8(d), the mean flow velocity is represented by white arrows and the local standard uncertainty $\sigma$ of the flow velocity in each interrogation area, evaluated over 1000 consecutive images from the test dataset, is represented by the background color. The measured standard uncertainty $\sigma$ consists of three parts [16]: the uncertainty of the PIV measurement $\sigma _{PIV}$, which comprises the random and systematic errors of the PIV system, the uncertainty $\sigma _{flow}$ caused by the instability of the flow, and the uncertainty $\sigma _{distortion}$ induced by the distortion from the fluctuating phase boundary, which can be considered a virtual flow. The combined uncertainty is calculated as [46]:

$$\sigma =\sqrt{\sigma_{PIV}^{2}+\sigma_{flow}^{2}+\sigma_{distortion}^{2}}$$
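Assuming the three contributions are uncorrelated, the quadrature sum above also lets one isolate the distortion-induced part from measured totals. A small numerical sketch (the function name is hypothetical; the reference uncertainty is taken to contain $\sigma_{PIV}$ and $\sigma_{flow}$ in quadrature):

```python
import math

def distortion_uncertainty(sigma_total, sigma_reference):
    """Isolate sigma_distortion from a combined standard uncertainty,
    assuming sigma_reference = sqrt(sigma_PIV^2 + sigma_flow^2) and
    uncorrelated contributions."""
    return math.sqrt(sigma_total ** 2 - sigma_reference ** 2)

# with the mean values reported in this section: 1.04 mm/s (undistorted
# reference), 1.58 mm/s (distorted), 1.13 mm/s (after MIUN correction)
sigma_dist_uncorrected = distortion_uncertainty(1.58, 1.04)
sigma_dist_corrected = distortion_uncertainty(1.13, 1.04)
```

The correction shrinks the isolated distortion term substantially, consistent with the reduction reported below in Section 3.2.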

The resulting mean flow velocity profiles along the x direction for the different cases are depicted in Fig. 8(a). The velocity profile in the undistorted case is close to a parabolic profile, because the field of view (FOV) of the PIV system is close to the nozzle exit, which leads to a higher velocity in the center area. The distorted flow velocity profile shows that the distortion induces a virtual reduction of the flow velocity; after correction by the MIUN, the velocity profile is restored. Although a slight deviation between the corrected profile and the reference (undistorted) profile remains, the improvement is significant. These differences are mainly caused by the limitations of the HSWFS in terms of spatial resolution, measurement uncertainty, dynamic range, etc. The mean velocity error in the range from 1.51 mm to 2.53 mm along the y direction, i.e. the deviation from the reference, is 11.34 % for the distorted PIV images and is reduced to 2.76 % after correction.

 figure: Fig. 8.

Fig. 8. Flow velocity profiles and PIV measurement results from undistorted, distorted and corrected PIV images; Figures (b)-(d) show the MIUN correction performance. The flow field is represented by white arrows and the local standard uncertainty by the background color.


In Fig. 8(b) the flow is measured from the undistorted PIV images, which serves as the reference to determine the measurement standard uncertainty of about 1.04 mm/s for the setup. This uncertainty consists of the PIV measurement uncertainty $\sigma _{PIV}$ and the uncertainty from the flow instability $\sigma _{flow}$. The local standard uncertainty is higher in the right central part of the FOV, because this region is close to the nozzle, where the water flow is faster and the particle concentration is lower than elsewhere. The particle concentration is strongly related to the PIV measurement uncertainty; a lower concentration leads to a higher measurement uncertainty. The standard uncertainty increases to 1.58 mm/s for the distorted PIV images in Fig. 8(c). After correction by the trained MIUN, the mean standard uncertainty is reduced to 1.13 mm/s, corresponding to a reduction of 82.19 % of the relative mean standard uncertainty. As summarized in Fig. 8, the MIUN is not able to completely correct the distortion; this is similar to the closed-loop AO approach, which achieved 77 % in our previous work [16], where a typical adaptive optics system with deformable mirror correction was employed for the same problem.

4. Conclusion

Time-varying distortions, caused by fluctuating phase boundaries between the measurement object and the detector, degrade the accuracy of quantitative, imaging-based measurements such as PIV. Such distortions lead to image degradation and consequently to velocity uncertainties. We proposed an actuator-less adaptive optics technique for image distortion correction using a deep learning method and wavefront measurement. For the first time, neural network-based imaging correction is applied in fluid flow metrology. Unlike typical adaptive optics systems based on a wavefront corrector, this method overcomes the limitations imposed by actuators. In traditional wavefront correction systems, the correction performance is determined by the performance of the actuator. For distortions from fast fluctuating phase boundaries, actuators such as deformable mirrors suffer from their limited bandwidth, stroke and speed. Since the correction is carried out with the neural network, these restrictions do not apply. After training, the neural network can correct distortions in real time.

The distortion model of the fluctuating phase boundary and its measurement principle were illustrated first. A spatially distributed guide star with a Hartmann-Shack wavefront sensor is used for the distortion measurement. The measured distortion serves as an additional input of the proposed multiple-input convolutional neural network, called MIUN. To our knowledge, this is the first time a multiple-input convolutional neural network has been created for an image translation problem. Unlike general deep learning approaches, which usually use synthetic datasets for model training, we designed an optical setup to generate the dataset. The correction performance was evaluated by assessing the image quality, and a comparison of flow measurement results was used to prove the correction performance of the proposed method. The mean relative velocity standard uncertainty that was additionally induced by the fluctuating aberrations could be reduced by up to 82 %. As a perspective, the Fresnel guide star presented in reference [12] can be adapted to this method to conduct the measurement in reflection based on the Fresnel reflex. This would allow measurements through a single optical access, which would be of further significance for industrial applications. The approach can then be used to measure the liquid flow inside droplets or gas flows within bubbles, which are characterized by an all-side open surface that has so far hindered undistorted optical measurements. The measurement of the flow inside droplets is important in the fuel cell industry, where water droplets [47] in the fuel cell need to be removed; they can be removed more efficiently with a better understanding of the inner flow field [48]. Optical distortion correction for these flow measurement applications is severely limited by the performance of actuators. Here, a deep learning-based approach without an actuator can bring a new perspective to these fields. Such novel laser-based flow measurements in bubbles and droplets can help to elucidate energy-saving potentials.

Funding

Deutsche Forschungsgemeinschaft (BU 2241/6-1); Deutscher Akademischer Austauschdienst (91741884); Allianz Industrie Forschung (21190 BG/2).

Disclosures

The authors declare no conflicts of interest.

References

1. R. K. Tyson, Principles of Adaptive Optics, vol. 4th edition (CRC Press, Boca Raton, 2016).

2. C. Tropea and A. L. Yarin, Springer handbook of experimental fluid mechanics (Springer Science & Business Media, 2007).

3. F. Durst, Fluid mechanics: an introduction to the theory of fluid flows (Springer Science & Business Media, 2008).

4. M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide (Springer, 2018).

5. C. Vanselow and A. Fischer, “Influence of inhomogeneous refractive index fields on particle image velocimetry,” Opt. Lasers Eng. 107, 221–230 (2018). [CrossRef]  

6. G. Gomit, L. Chatellier, D. Calluaud, and L. David, “Free surface measurement by stereo-refraction,” Exp. Fluids 54(6), 1540 (2013). [CrossRef]  

7. B. Böhm, C. Heeger, R. L. Gordon, and A. Dreizler, “New perspectives on turbulent combustion: Multi-parameter high-speed planar laser diagnostics,” Flow, Turbul. Combust. 86(3-4), 313–341 (2011). [CrossRef]  

8. D. L. Reuss, M. Megerle, and V. Sick, “Particle-image velocimetry measurement errors when imaging through a transparent engine cylinder,” Meas. Sci. Technol. 13(7), 1029–1035 (2002). [CrossRef]  

9. G. Minor, P. Oshkai, and N. Djilali, “Optical distortion correction for liquid droplet visualization using the ray tracing method: further considerations,” Meas. Sci. Technol. 18(11), L23–L28 (2007). [CrossRef]  

10. L. Büttner, C. Leithold, and J. Czarske, “Interferometric velocity measurements through a fluctuating gas-liquid interface employing adaptive optics,” Opt. Express 21(25), 30653–30663 (2013). [CrossRef]  

11. H. Radner, L. Büttner, and J. Czarske, “Interferometric velocity measurements through a fluctuating phase boundary using two fresnel guide stars,” Opt. Lett. 40(16), 3766–3769 (2015). [CrossRef]  

12. N. Koukourakis, B. Fregin, J. König, L. Büttner, and J. W. Czarske, “Wavefront shaping for imaging-based flow velocity measurements through distortions using a fresnel guide star,” Opt. Express 24(19), 22074–22087 (2016). [CrossRef]  

13. M. Teich, J. Grottke, H. Radner, L. Büttner, and J. W. Czarske, “Adaptive particle image velocimetry based on sharpness metrics,” J. Eur. Opt. Soc.-Rapid Publ. 14(1), 5 (2018). [CrossRef]  

14. M. Teich, M. Mattern, J. Sturm, L. Büttner, and J. W. Czarske, “Spiral phase mask shadow-imaging for 3d-measurement of flow fields,” Opt. Express 24(24), 27371–27381 (2016). [CrossRef]  

15. K. Philipp, F. Lemke, S. Scholz, U. Wallrabe, M. C. Wapler, N. Koukourakis, and J. W. Czarske, “Diffraction-limited axial scanning in thick biological tissue with an aberration-correcting adaptive lens,” Sci. Rep. 9(1), 9532 (2019). [CrossRef]  

16. H. Radner, J. Stange, L. Buttner, and J. Czarske, “Field programmable system-on-chip based control system for real-time distortion correction in optical imaging,” IEEE Trans. Ind. Electron. 68(4), 3370–3379 (2021). [CrossRef]  

17. R. Nauber, L. Büttner, and J. Czarske, “Measurement uncertainty analysis of field-programmable gate-array-based, real-time signal processing for ultrasound flow imaging,” J. Sens. Sens. Syst. 9(2), 227–238 (2020). [CrossRef]  

18. R. Kuschmierz, E. Scharf, N. Koukourakis, and J. W. Czarske, “Self-calibration of lensless holographic endoscope using programmable guide stars,” Opt. Lett. 43(12), 2997–3000 (2018). [CrossRef]  

19. Y. Nishizaki, M. Valdivia, R. Horisaki, K. Kitaguchi, M. Saito, J. Tanida, and E. Vera, “Deep learning wavefront sensing,” Opt. Express 27(1), 240–251 (2019). [CrossRef]  

20. Z. Li and X. Li, “Centroid computation for shack-hartmann wavefront sensor in extreme situations based on artificial neural networks,” Opt. Express 26(24), 31675–31692 (2018). [CrossRef]  

21. Z. Li, X. Li, and R. Liang, “Random two-frame interferometry based on deep learning,” Opt. Express 28(17), 24747–24760 (2020). [CrossRef]  

22. H. Ke, B. Xu, Z. Xu, L. Wen, P. Yang, S. Wang, and L. Dong, “Self-learning control for wavefront sensorless adaptive optics system through deep reinforcement learning,” Optik 178, 785–793 (2019). [CrossRef]  

23. Q. Tian, C. Lu, B. Liu, L. Zhu, X. Pan, Q. Zhang, L. Yang, F. Tian, and X. Xin, “Dnn-based aberration correction in a wavefront sensorless adaptive optics system,” Opt. Express 27(8), 10765–10776 (2019). [CrossRef]  

24. X. Fei, J. Zhao, H. Zhao, D. Yun, and Y. Zhang, “Deblurring adaptive optics retinal images using deep convolutional neural networks,” Biomed. Opt. Express 8(12), 5675–5687 (2017). [CrossRef]  

25. B. P. Cumming and M. Gu, “Direct determination of aberration functions in microscopy by an artificial neural network,” Opt. Express 28(10), 14511–14521 (2020). [CrossRef]  

26. X. Qu, Y. Song, Y. Jin, Z. Guo, Z. Li, and A. He, “3d particle field reconstruction method based on convolutional neural network for sapiv,” Opt. Express 27(8), 11413–11434 (2019). [CrossRef]  

27. S. Cai, J. Liang, Q. Gao, C. Xu, and R. Wei, “Particle image velocimetry based on a deep learning motion estimator,” IEEE Trans. Instrum. Meas. 69(6), 3538–3554 (2020). [CrossRef]  

28. Y. Lee, H. Yang, and Z. Yin, “Piv-dcnn: cascaded deep convolutional neural networks for particle image velocimetry,” Exp. Fluids 58(12), 171 (2017). [CrossRef]  

29. S. Cai, S. Zhou, C. Xu, and Q. Gao, “Dense motion estimation of particle images via a convolutional neural network,” Exp. Fluids 60(4), 73 (2019). [CrossRef]  

30. A. V. Grayver and J. Noir, “Particle streak velocimetry using ensemble convolutional neural networks,” Exp. Fluids 61(2), 38 (2020). [CrossRef]  

31. J. König, M. Chen, W. Rösing, D. Boho, P. Mäder, and C. Cierpka, “On the use of a cascaded convolutional neural network for three-dimensional flow measurements using astigmatic ptv,” Meas. Sci. Technol. 31(7), 074015 (2020). [CrossRef]  

32. Z. Li, Z. Murez, D. Kriegman, R. Ramamoorthi, and M. Chandraker, “Learning to see through turbulent water,” in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), (2018), pp. 512–520.

33. Y. Tian and S. G. Narasimhan, “Seeing through water: Image restoration using model-based tracking,” in 2009 IEEE 12th International Conference on Computer Vision, (2009), pp. 2303–2310.

34. E. Keogh and A. Mueen, Curse of Dimensionality (Springer US, Boston, MA, 2017), pp. 314–315.

35. Z. Gao, X. Li, and H. Ye, “Large dynamic range shack–hartmann wavefront measurement based on image segmentation and a neighbouring-region search algorithm,” Opt. Commun. 450, 190–201 (2019). [CrossRef]  

36. R. J. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am. 66(3), 207–211 (1976). [CrossRef]  

37. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, (Springer International Publishing, 2015), pp. 234–241.

38. A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proc. icml, (2013), p. 3.

39. S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167 (2015).

40. X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the fourteenth international conference on artificial intelligence and statistics, (2011), pp. 315–323.

41. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research 15, 1929–1958 (2014).

42. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

43. W. Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

44. F. Scarano, “Iterative image deformation methods in piv,” Meas. Sci. Technol. 13(1), R1–R19 (2002). [CrossRef]  

45. W. Thielicke and E. Stamhuis, “Pivlab–towards user-friendly, affordable and accurate digital particle image velocimetry in matlab,” J. Open Res. Software 2, e30 (2014). [CrossRef]  

46. A. Sciacchitano, “Uncertainty quantification in particle image velocimetry,” Meas. Sci. Technol. 30(9), 092001 (2019). [CrossRef]  

47. J. Breitenbach, I. V. Roisman, and C. Tropea, “From drop impact physics to spray cooling models: a critical review,” Exp. Fluids 59(3), 55 (2018). [CrossRef]  

48. S. Milles, M. Soldera, B. Voisiat, and A. F. Lasagni, “Fabrication of superhydrophobic and ice-repellent surfaces on pure aluminium using single and multiscaled periodic textures,” Sci. Rep. 9(1), 13944 (2019). [CrossRef]  


W. Thielicke and E. Stamhuis, “Pivlab–towards user-friendly, affordable and accurate digital particle image velocimetry in matlab,” J. Open Res. Software 2, e30 (2014).
[Crossref]

2013 (2)

2011 (1)

B. Böhm, C. Heeger, R. L. Gordon, and A. Dreizler, “New perspectives on turbulent combustion: Multi-parameter high-speed planar laser diagnostics,” Flow, Turbul. Combust. 86(3-4), 313–341 (2011).
[Crossref]

2007 (1)

G. Minor, P. Oshkai, and N. Djilali, “Optical distortion correction for liquid droplet visualization using the ray tracing method: further considerations,” Meas. Sci. Technol. 18(11), L23–L28 (2007).
[Crossref]

2004 (1)

W. Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

2002 (2)

F. Scarano, “Iterative image deformation methods in piv,” Meas. Sci. Technol. 13(1), R1–R19 (2002).
[Crossref]

D. L. Reuss, M. Megerle, and V. Sick, “Particle-image velocimetry measurement errors when imaging through a transparent engine cylinder,” Meas. Sci. Technol. 13(7), 1029–1035 (2002).
[Crossref]

1976 (1)

Ba, J.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

Bengio, Y.

X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the fourteenth international conference on artificial intelligence and statistics, (2011), pp. 315–323.

Böhm, B.

B. Böhm, C. Heeger, R. L. Gordon, and A. Dreizler, “New perspectives on turbulent combustion: Multi-parameter high-speed planar laser diagnostics,” Flow, Turbul. Combust. 86(3-4), 313–341 (2011).
[Crossref]

Boho, D.

J. König, M. Chen, W. Rösing, D. Boho, P. Mäder, and C. Cierpka, “On the use of a cascaded convolutional neural network for three-dimensional flow measurements using astigmatic ptv,” Meas. Sci. Technol. 31(7), 074015 (2020).
[Crossref]

Bordes, A.

X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the fourteenth international conference on artificial intelligence and statistics, (2011), pp. 315–323.

Bovik, A. C.

W. Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Breitenbach, J.

J. Breitenbach, I. V. Roisman, and C. Tropea, “From drop impact physics to spray cooling models: a critical review,” Exp. Fluids 59(3), 55 (2018).
[Crossref]

Brox, T.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, (Springer International Publishing, 2015), pp. 234–241.

Buttner, L.

H. Radner, J. Stange, L. Buttner, and J. Czarske, “Field programmable system-on-chip based control system for real-time distortion correction in optical imaging,” IEEE Trans. Ind. Electron. 68(4), 3370–3379 (2021).
[Crossref]

Büttner, L.

Cai, S.

S. Cai, J. Liang, Q. Gao, C. Xu, and R. Wei, “Particle image velocimetry based on a deep learning motion estimator,” IEEE Trans. Instrum. Meas. 69(6), 3538–3554 (2020).
[Crossref]

S. Cai, S. Zhou, C. Xu, and Q. Gao, “Dense motion estimation of particle images via a convolutional neural network,” Exp. Fluids 60(4), 73 (2019).
[Crossref]

Calluaud, D.

G. Gomit, L. Chatellier, D. Calluaud, and L. David, “Free surface measurement by stereo-refraction,” Exp. Fluids 54(6), 1540 (2013).
[Crossref]

Chandraker, M.

Z. Li, Z. Murez, D. Kriegman, R. Ramamoorthi, and M. Chandraker, “Learning to see through turbulent water,” in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), (2018), pp. 512–520.

Chatellier, L.

G. Gomit, L. Chatellier, D. Calluaud, and L. David, “Free surface measurement by stereo-refraction,” Exp. Fluids 54(6), 1540 (2013).
[Crossref]

Chen, M.

J. König, M. Chen, W. Rösing, D. Boho, P. Mäder, and C. Cierpka, “On the use of a cascaded convolutional neural network for three-dimensional flow measurements using astigmatic ptv,” Meas. Sci. Technol. 31(7), 074015 (2020).
[Crossref]

Cierpka, C.

J. König, M. Chen, W. Rösing, D. Boho, P. Mäder, and C. Cierpka, “On the use of a cascaded convolutional neural network for three-dimensional flow measurements using astigmatic ptv,” Meas. Sci. Technol. 31(7), 074015 (2020).
[Crossref]

Cumming, B. P.

Czarske, J.

H. Radner, J. Stange, L. Buttner, and J. Czarske, “Field programmable system-on-chip based control system for real-time distortion correction in optical imaging,” IEEE Trans. Ind. Electron. 68(4), 3370–3379 (2021).
[Crossref]

R. Nauber, L. Büttner, and J. Czarske, “Measurement uncertainty analysis of field-programmable gate-array-based, real-time signal processing for ultrasound flow imaging,” J. Sens. Sens. Syst. 9(2), 227–238 (2020).
[Crossref]

H. Radner, L. Büttner, and J. Czarske, “Interferometric velocity measurements through a fluctuating phase boundary using two fresnel guide stars,” Opt. Lett. 40(16), 3766–3769 (2015).
[Crossref]

L. Büttner, C. Leithold, and J. Czarske, “Interferometric velocity measurements through a fluctuating gas-liquid interface employing adaptive optics,” Opt. Express 21(25), 30653–30663 (2013).
[Crossref]

Czarske, J. W.

David, L.

G. Gomit, L. Chatellier, D. Calluaud, and L. David, “Free surface measurement by stereo-refraction,” Exp. Fluids 54(6), 1540 (2013).
[Crossref]

Djilali, N.

G. Minor, P. Oshkai, and N. Djilali, “Optical distortion correction for liquid droplet visualization using the ray tracing method: further considerations,” Meas. Sci. Technol. 18(11), L23–L28 (2007).
[Crossref]

Dong, L.

H. Ke, B. Xu, Z. Xu, L. Wen, P. Yang, S. Wang, and L. Dong, “Self-learning control for wavefront sensorless adaptive optics system through deep reinforcement learning,” Optik 178, 785–793 (2019).
[Crossref]

Dreizler, A.

B. Böhm, C. Heeger, R. L. Gordon, and A. Dreizler, “New perspectives on turbulent combustion: Multi-parameter high-speed planar laser diagnostics,” Flow, Turbul. Combust. 86(3-4), 313–341 (2011).
[Crossref]

Durst, F.

F. Durst, Fluid mechanics: an introduction to the theory of fluid flows (Springer Science & Business Media, 2008).

Fei, X.

Fischer, A.

C. Vanselow and A. Fischer, “Influence of inhomogeneous refractive index fields on particle image velocimetry,” Opt. Lasers Eng. 107, 221–230 (2018).
[Crossref]

Fischer, P.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, (Springer International Publishing, 2015), pp. 234–241.

Fregin, B.

Gao, Q.

S. Cai, J. Liang, Q. Gao, C. Xu, and R. Wei, “Particle image velocimetry based on a deep learning motion estimator,” IEEE Trans. Instrum. Meas. 69(6), 3538–3554 (2020).
[Crossref]

S. Cai, S. Zhou, C. Xu, and Q. Gao, “Dense motion estimation of particle images via a convolutional neural network,” Exp. Fluids 60(4), 73 (2019).
[Crossref]

Gao, Z.

Z. Gao, X. Li, and H. Ye, “Large dynamic range shack–hartmann wavefront measurement based on image segmentation and a neighbouring-region search algorithm,” Opt. Commun. 450, 190–201 (2019).
[Crossref]

Glorot, X.

X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the fourteenth international conference on artificial intelligence and statistics, (2011), pp. 315–323.

Gomit, G.

G. Gomit, L. Chatellier, D. Calluaud, and L. David, “Free surface measurement by stereo-refraction,” Exp. Fluids 54(6), 1540 (2013).
[Crossref]

Gordon, R. L.

B. Böhm, C. Heeger, R. L. Gordon, and A. Dreizler, “New perspectives on turbulent combustion: Multi-parameter high-speed planar laser diagnostics,” Flow, Turbul. Combust. 86(3-4), 313–341 (2011).
[Crossref]

Grayver, A. V.

A. V. Grayver and J. Noir, “Particle streak velocimetry using ensemble convolutional neural networks,” Exp. Fluids 61(2), 38 (2020).
[Crossref]

Grottke, J.

M. Teich, J. Grottke, H. Radner, L. Büttner, and J. W. Czarske, “Adaptive particle image velocimetry based on sharpness metrics,” J. Eur. Opt. Soc.-Rapid Publ. 14(1), 5 (2018).
[Crossref]

Gu, M.

Guo, Z.

Hannun, A. Y.

A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proc. icml, (2013), p. 3.

He, A.

Heeger, C.

B. Böhm, C. Heeger, R. L. Gordon, and A. Dreizler, “New perspectives on turbulent combustion: Multi-parameter high-speed planar laser diagnostics,” Flow, Turbul. Combust. 86(3-4), 313–341 (2011).
[Crossref]

Hinton, G.

N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research 15, 1929–1958 (2014).

Horisaki, R.

Ioffe, S.

S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167 (2015).

Jin, Y.

Kähler, C. J.

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide (Springer, 2018).

Ke, H.

H. Ke, B. Xu, Z. Xu, L. Wen, P. Yang, S. Wang, and L. Dong, “Self-learning control for wavefront sensorless adaptive optics system through deep reinforcement learning,” Optik 178, 785–793 (2019).
[Crossref]

Keogh, E.

E. Keogh and A. Mueen, Curse of Dimensionality (Springer US, Boston, MA, 2017), pp. 314–315.

Kingma, D. P.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

Kitaguchi, K.

Kompenhans, J.

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide (Springer, 2018).

König, J.

J. König, M. Chen, W. Rösing, D. Boho, P. Mäder, and C. Cierpka, “On the use of a cascaded convolutional neural network for three-dimensional flow measurements using astigmatic ptv,” Meas. Sci. Technol. 31(7), 074015 (2020).
[Crossref]

N. Koukourakis, B. Fregin, J. König, L. Büttner, and J. W. Czarske, “Wavefront shaping for imaging-based flow velocity measurements through distortions using a fresnel guide star,” Opt. Express 24(19), 22074–22087 (2016).
[Crossref]

Koukourakis, N.

Kriegman, D.

Z. Li, Z. Murez, D. Kriegman, R. Ramamoorthi, and M. Chandraker, “Learning to see through turbulent water,” in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), (2018), pp. 512–520.

Krizhevsky, A.

N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research 15, 1929–1958 (2014).

Kuschmierz, R.

Lasagni, A. F.

S. Milles, M. Soldera, B. Voisiat, and A. F. Lasagni, “Fabrication of superhydrophobic and ice-repellent surfaces on pure aluminium using single and multiscaled periodic textures,” Sci. Rep. 9(1), 13944 (2019).
[Crossref]

Lee, Y.

Y. Lee, H. Yang, and Z. Yin, “Piv-dcnn: cascaded deep convolutional neural networks for particle image velocimetry,” Exp. Fluids 58(12), 171 (2017).
[Crossref]

Leithold, C.

Lemke, F.

K. Philipp, F. Lemke, S. Scholz, U. Wallrabe, M. C. Wapler, N. Koukourakis, and J. W. Czarske, “Diffraction-limited axial scanning in thick biological tissue with an aberration-correcting adaptive lens,” Sci. Rep. 9(1), 9532 (2019).
[Crossref]

Li, X.

Li, Z.

Liang, J.

S. Cai, J. Liang, Q. Gao, C. Xu, and R. Wei, “Particle image velocimetry based on a deep learning motion estimator,” IEEE Trans. Instrum. Meas. 69(6), 3538–3554 (2020).
[Crossref]

Liang, R.

Liu, B.

Lu, C.

Maas, A. L.

A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proc. icml, (2013), p. 3.

Mäder, P.

J. König, M. Chen, W. Rösing, D. Boho, P. Mäder, and C. Cierpka, “On the use of a cascaded convolutional neural network for three-dimensional flow measurements using astigmatic ptv,” Meas. Sci. Technol. 31(7), 074015 (2020).
[Crossref]

Mattern, M.

Megerle, M.

D. L. Reuss, M. Megerle, and V. Sick, “Particle-image velocimetry measurement errors when imaging through a transparent engine cylinder,” Meas. Sci. Technol. 13(7), 1029–1035 (2002).
[Crossref]

Milles, S.

S. Milles, M. Soldera, B. Voisiat, and A. F. Lasagni, “Fabrication of superhydrophobic and ice-repellent surfaces on pure aluminium using single and multiscaled periodic textures,” Sci. Rep. 9(1), 13944 (2019).
[Crossref]

Minor, G.

G. Minor, P. Oshkai, and N. Djilali, “Optical distortion correction for liquid droplet visualization using the ray tracing method: further considerations,” Meas. Sci. Technol. 18(11), L23–L28 (2007).
[Crossref]

Mueen, A.

E. Keogh and A. Mueen, Curse of Dimensionality (Springer US, Boston, MA, 2017), pp. 314–315.

Murez, Z.

Z. Li, Z. Murez, D. Kriegman, R. Ramamoorthi, and M. Chandraker, “Learning to see through turbulent water,” in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), (2018), pp. 512–520.

Narasimhan, S. G.

Y. Tian and S. G. Narasimhan, “Seeing through water: Image restoration using model-based tracking,” in 2009 IEEE 12th International Conference on Computer Vision, (2009), pp. 2303–2310.

Nauber, R.

R. Nauber, L. Büttner, and J. Czarske, “Measurement uncertainty analysis of field-programmable gate-array-based, real-time signal processing for ultrasound flow imaging,” J. Sens. Sens. Syst. 9(2), 227–238 (2020).
[Crossref]

Ng, A. Y.

A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proc. icml, (2013), p. 3.

Nishizaki, Y.

Noir, J.

A. V. Grayver and J. Noir, “Particle streak velocimetry using ensemble convolutional neural networks,” Exp. Fluids 61(2), 38 (2020).
[Crossref]

Noll, R. J.

Oshkai, P.

G. Minor, P. Oshkai, and N. Djilali, “Optical distortion correction for liquid droplet visualization using the ray tracing method: further considerations,” Meas. Sci. Technol. 18(11), L23–L28 (2007).
[Crossref]

Pan, X.

Philipp, K.

K. Philipp, F. Lemke, S. Scholz, U. Wallrabe, M. C. Wapler, N. Koukourakis, and J. W. Czarske, “Diffraction-limited axial scanning in thick biological tissue with an aberration-correcting adaptive lens,” Sci. Rep. 9(1), 9532 (2019).
[Crossref]

Qu, X.

Radner, H.

H. Radner, J. Stange, L. Buttner, and J. Czarske, “Field programmable system-on-chip based control system for real-time distortion correction in optical imaging,” IEEE Trans. Ind. Electron. 68(4), 3370–3379 (2021).
[Crossref]

M. Teich, J. Grottke, H. Radner, L. Büttner, and J. W. Czarske, “Adaptive particle image velocimetry based on sharpness metrics,” J. Eur. Opt. Soc.-Rapid Publ. 14(1), 5 (2018).
[Crossref]

H. Radner, L. Büttner, and J. Czarske, “Interferometric velocity measurements through a fluctuating phase boundary using two fresnel guide stars,” Opt. Lett. 40(16), 3766–3769 (2015).
[Crossref]

Raffel, M.

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide (Springer, 2018).

Ramamoorthi, R.

Z. Li, Z. Murez, D. Kriegman, R. Ramamoorthi, and M. Chandraker, “Learning to see through turbulent water,” in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), (2018), pp. 512–520.

Reuss, D. L.

D. L. Reuss, M. Megerle, and V. Sick, “Particle-image velocimetry measurement errors when imaging through a transparent engine cylinder,” Meas. Sci. Technol. 13(7), 1029–1035 (2002).
[Crossref]

Roisman, I. V.

J. Breitenbach, I. V. Roisman, and C. Tropea, “From drop impact physics to spray cooling models: a critical review,” Exp. Fluids 59(3), 55 (2018).
[Crossref]

Ronneberger, O.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, (Springer International Publishing, 2015), pp. 234–241.

Rösing, W.

J. König, M. Chen, W. Rösing, D. Boho, P. Mäder, and C. Cierpka, “On the use of a cascaded convolutional neural network for three-dimensional flow measurements using astigmatic ptv,” Meas. Sci. Technol. 31(7), 074015 (2020).
[Crossref]

Saito, M.

Salakhutdinov, R.

N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research 15, 1929–1958 (2014).

Scarano, F.

F. Scarano, “Iterative image deformation methods in piv,” Meas. Sci. Technol. 13(1), R1–R19 (2002).
[Crossref]

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide (Springer, 2018).

Scharf, E.

Scholz, S.

K. Philipp, F. Lemke, S. Scholz, U. Wallrabe, M. C. Wapler, N. Koukourakis, and J. W. Czarske, “Diffraction-limited axial scanning in thick biological tissue with an aberration-correcting adaptive lens,” Sci. Rep. 9(1), 9532 (2019).
[Crossref]

Sciacchitano, A.

A. Sciacchitano, “Uncertainty quantification in particle image velocimetry,” Meas. Sci. Technol. 30(9), 092001 (2019).
[Crossref]

Sheikh, H. R.

W. Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Sick, V.

D. L. Reuss, M. Megerle, and V. Sick, “Particle-image velocimetry measurement errors when imaging through a transparent engine cylinder,” Meas. Sci. Technol. 13(7), 1029–1035 (2002).
[Crossref]

Simoncelli, E. P.

W. Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Soldera, M.

S. Milles, M. Soldera, B. Voisiat, and A. F. Lasagni, “Fabrication of superhydrophobic and ice-repellent surfaces on pure aluminium using single and multiscaled periodic textures,” Sci. Rep. 9(1), 13944 (2019).
[Crossref]

Song, Y.

Srivastava, N.

N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research 15, 1929–1958 (2014).

Stamhuis, E.

W. Thielicke and E. Stamhuis, “Pivlab–towards user-friendly, affordable and accurate digital particle image velocimetry in matlab,” J. Open Res. Software 2, e30 (2014).
[Crossref]

Stange, J.

H. Radner, J. Stange, L. Buttner, and J. Czarske, “Field programmable system-on-chip based control system for real-time distortion correction in optical imaging,” IEEE Trans. Ind. Electron. 68(4), 3370–3379 (2021).
[Crossref]

Sturm, J.

Sutskever, I.

N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research 15, 1929–1958 (2014).

Szegedy, C.

S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167 (2015).

Tanida, J.

Teich, M.

M. Teich, J. Grottke, H. Radner, L. Büttner, and J. W. Czarske, “Adaptive particle image velocimetry based on sharpness metrics,” J. Eur. Opt. Soc.-Rapid Publ. 14(1), 5 (2018).
[Crossref]

M. Teich, M. Mattern, J. Sturm, L. Büttner, and J. W. Czarske, “Spiral phase mask shadow-imaging for 3d-measurement of flow fields,” Opt. Express 24(24), 27371–27381 (2016).
[Crossref]

Thielicke, W.

W. Thielicke and E. Stamhuis, “Pivlab–towards user-friendly, affordable and accurate digital particle image velocimetry in matlab,” J. Open Res. Software 2, e30 (2014).
[Crossref]

Tian, F.

Tian, Q.

Tian, Y.

Y. Tian and S. G. Narasimhan, “Seeing through water: Image restoration using model-based tracking,” in 2009 IEEE 12th International Conference on Computer Vision, (2009), pp. 2303–2310.

Tropea, C.

J. Breitenbach, I. V. Roisman, and C. Tropea, “From drop impact physics to spray cooling models: a critical review,” Exp. Fluids 59(3), 55 (2018).
[Crossref]

C. Tropea and A. L. Yarin, Springer handbook of experimental fluid mechanics (Springer Science & Business Media, 2007).

Tyson, R. K.

R. K. Tyson, Principles of Adaptive Optics, vol. 4th edition (CRC Press, Boca Raton, 2016).

Valdivia, M.

Vanselow, C.

C. Vanselow and A. Fischer, “Influence of inhomogeneous refractive index fields on particle image velocimetry,” Opt. Lasers Eng. 107, 221–230 (2018).
[Crossref]

Vera, E.

Voisiat, B.

S. Milles, M. Soldera, B. Voisiat, and A. F. Lasagni, “Fabrication of superhydrophobic and ice-repellent surfaces on pure aluminium using single and multiscaled periodic textures,” Sci. Rep. 9(1), 13944 (2019).
[Crossref]

Wallrabe, U.

K. Philipp, F. Lemke, S. Scholz, U. Wallrabe, M. C. Wapler, N. Koukourakis, and J. W. Czarske, “Diffraction-limited axial scanning in thick biological tissue with an aberration-correcting adaptive lens,” Sci. Rep. 9(1), 9532 (2019).
[Crossref]

Wang, S.

H. Ke, B. Xu, Z. Xu, L. Wen, P. Yang, S. Wang, and L. Dong, “Self-learning control for wavefront sensorless adaptive optics system through deep reinforcement learning,” Optik 178, 785–793 (2019).
[Crossref]

Wapler, M. C.

K. Philipp, F. Lemke, S. Scholz, U. Wallrabe, M. C. Wapler, N. Koukourakis, and J. W. Czarske, “Diffraction-limited axial scanning in thick biological tissue with an aberration-correcting adaptive lens,” Sci. Rep. 9(1), 9532 (2019).
[Crossref]

Wei, R.

S. Cai, J. Liang, Q. Gao, C. Xu, and R. Wei, “Particle image velocimetry based on a deep learning motion estimator,” IEEE Trans. Instrum. Meas. 69(6), 3538–3554 (2020).
[Crossref]

Wen, L.

H. Ke, B. Xu, Z. Xu, L. Wen, P. Yang, S. Wang, and L. Dong, “Self-learning control for wavefront sensorless adaptive optics system through deep reinforcement learning,” Optik 178, 785–793 (2019).
[Crossref]

Wereley, S. T.

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide (Springer, 2018).

Willert, C. E.

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide (Springer, 2018).

Xin, X.

Xu, B.

H. Ke, B. Xu, Z. Xu, L. Wen, P. Yang, S. Wang, and L. Dong, “Self-learning control for wavefront sensorless adaptive optics system through deep reinforcement learning,” Optik 178, 785–793 (2019).
[Crossref]

Xu, C.

S. Cai, J. Liang, Q. Gao, C. Xu, and R. Wei, “Particle image velocimetry based on a deep learning motion estimator,” IEEE Trans. Instrum. Meas. 69(6), 3538–3554 (2020).
[Crossref]

S. Cai, S. Zhou, C. Xu, and Q. Gao, “Dense motion estimation of particle images via a convolutional neural network,” Exp. Fluids 60(4), 73 (2019).
[Crossref]

Xu, Z.

H. Ke, B. Xu, Z. Xu, L. Wen, P. Yang, S. Wang, and L. Dong, “Self-learning control for wavefront sensorless adaptive optics system through deep reinforcement learning,” Optik 178, 785–793 (2019).
[Crossref]

Yang, H.

Y. Lee, H. Yang, and Z. Yin, “Piv-dcnn: cascaded deep convolutional neural networks for particle image velocimetry,” Exp. Fluids 58(12), 171 (2017).
[Crossref]

Yang, L.

Yang, P.

H. Ke, B. Xu, Z. Xu, L. Wen, P. Yang, S. Wang, and L. Dong, “Self-learning control for wavefront sensorless adaptive optics system through deep reinforcement learning,” Optik 178, 785–793 (2019).
[Crossref]

Yarin, A. L.

C. Tropea and A. L. Yarin, Springer handbook of experimental fluid mechanics (Springer Science & Business Media, 2007).

Ye, H.

Z. Gao, X. Li, and H. Ye, “Large dynamic range shack–hartmann wavefront measurement based on image segmentation and a neighbouring-region search algorithm,” Opt. Commun. 450, 190–201 (2019).
[Crossref]

Yin, Z.

Y. Lee, H. Yang, and Z. Yin, “Piv-dcnn: cascaded deep convolutional neural networks for particle image velocimetry,” Exp. Fluids 58(12), 171 (2017).
[Crossref]

Yun, D.

Zhang, Q.

Zhang, Y.

Zhao, H.

Zhao, J.

Zhou, S.

S. Cai, S. Zhou, C. Xu, and Q. Gao, “Dense motion estimation of particle images via a convolutional neural network,” Exp. Fluids 60(4), 73 (2019).
[Crossref]

Zhou, W.

W. Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Zhu, L.

Biomed. Opt. Express (1)

Exp. Fluids (5)

Y. Lee, H. Yang, and Z. Yin, “Piv-dcnn: cascaded deep convolutional neural networks for particle image velocimetry,” Exp. Fluids 58(12), 171 (2017).
[Crossref]

S. Cai, S. Zhou, C. Xu, and Q. Gao, “Dense motion estimation of particle images via a convolutional neural network,” Exp. Fluids 60(4), 73 (2019).
[Crossref]

A. V. Grayver and J. Noir, “Particle streak velocimetry using ensemble convolutional neural networks,” Exp. Fluids 61(2), 38 (2020).
[Crossref]

G. Gomit, L. Chatellier, D. Calluaud, and L. David, “Free surface measurement by stereo-refraction,” Exp. Fluids 54(6), 1540 (2013).
[Crossref]

J. Breitenbach, I. V. Roisman, and C. Tropea, “From drop impact physics to spray cooling models: a critical review,” Exp. Fluids 59(3), 55 (2018).
[Crossref]

Flow, Turbul. Combust. (1)

B. Böhm, C. Heeger, R. L. Gordon, and A. Dreizler, “New perspectives on turbulent combustion: Multi-parameter high-speed planar laser diagnostics,” Flow, Turbul. Combust. 86(3-4), 313–341 (2011).
[Crossref]

IEEE Trans. Ind. Electron. (1)

H. Radner, J. Stange, L. Buttner, and J. Czarske, “Field programmable system-on-chip based control system for real-time distortion correction in optical imaging,” IEEE Trans. Ind. Electron. 68(4), 3370–3379 (2021).
[Crossref]

IEEE Trans. Instrum. Meas. (1)

S. Cai, J. Liang, Q. Gao, C. Xu, and R. Wei, “Particle image velocimetry based on a deep learning motion estimator,” IEEE Trans. Instrum. Meas. 69(6), 3538–3554 (2020).
[Crossref]

IEEE Trans. on Image Process. (1)

W. Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

J. Eur. Opt. Soc.-Rapid Publ. (1)

M. Teich, J. Grottke, H. Radner, L. Büttner, and J. W. Czarske, “Adaptive particle image velocimetry based on sharpness metrics,” J. Eur. Opt. Soc.-Rapid Publ. 14(1), 5 (2018).
[Crossref]

J. Open Res. Software (1)

W. Thielicke and E. Stamhuis, “Pivlab–towards user-friendly, affordable and accurate digital particle image velocimetry in matlab,” J. Open Res. Software 2, e30 (2014).
[Crossref]

J. Opt. Soc. Am. (1)

J. Sens. Sens. Syst. (1)

R. Nauber, L. Büttner, and J. Czarske, “Measurement uncertainty analysis of field-programmable gate-array-based, real-time signal processing for ultrasound flow imaging,” J. Sens. Sens. Syst. 9(2), 227–238 (2020).
[Crossref]

Journal of Machine Learning Research (1)

N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research 15, 1929–1958 (2014).

Meas. Sci. Technol. (5)

J. König, M. Chen, W. Rösing, D. Boho, P. Mäder, and C. Cierpka, “On the use of a cascaded convolutional neural network for three-dimensional flow measurements using astigmatic ptv,” Meas. Sci. Technol. 31(7), 074015 (2020).
[Crossref]

D. L. Reuss, M. Megerle, and V. Sick, “Particle-image velocimetry measurement errors when imaging through a transparent engine cylinder,” Meas. Sci. Technol. 13(7), 1029–1035 (2002).
[Crossref]

G. Minor, P. Oshkai, and N. Djilali, “Optical distortion correction for liquid droplet visualization using the ray tracing method: further considerations,” Meas. Sci. Technol. 18(11), L23–L28 (2007).
[Crossref]

A. Sciacchitano, “Uncertainty quantification in particle image velocimetry,” Meas. Sci. Technol. 30(9), 092001 (2019).
[Crossref]

F. Scarano, “Iterative image deformation methods in piv,” Meas. Sci. Technol. 13(1), R1–R19 (2002).
[Crossref]

Opt. Commun. (1)

Z. Gao, X. Li, and H. Ye, “Large dynamic range shack–hartmann wavefront measurement based on image segmentation and a neighbouring-region search algorithm,” Opt. Commun. 450, 190–201 (2019).
[Crossref]

Opt. Express (9)

L. Büttner, C. Leithold, and J. Czarske, “Interferometric velocity measurements through a fluctuating gas-liquid interface employing adaptive optics,” Opt. Express 21(25), 30653–30663 (2013).
[Crossref]

M. Teich, M. Mattern, J. Sturm, L. Büttner, and J. W. Czarske, “Spiral phase mask shadow-imaging for 3d-measurement of flow fields,” Opt. Express 24(24), 27371–27381 (2016).
[Crossref]

N. Koukourakis, B. Fregin, J. König, L. Büttner, and J. W. Czarske, “Wavefront shaping for imaging-based flow velocity measurements through distortions using a fresnel guide star,” Opt. Express 24(19), 22074–22087 (2016).
[Crossref]

Y. Nishizaki, M. Valdivia, R. Horisaki, K. Kitaguchi, M. Saito, J. Tanida, and E. Vera, “Deep learning wavefront sensing,” Opt. Express 27(1), 240–251 (2019).
[Crossref]

Z. Li and X. Li, “Centroid computation for shack-hartmann wavefront sensor in extreme situations based on artificial neural networks,” Opt. Express 26(24), 31675–31692 (2018).
[Crossref]

Z. Li, X. Li, and R. Liang, “Random two-frame interferometry based on deep learning,” Opt. Express 28(17), 24747–24760 (2020).
[Crossref]

B. P. Cumming and M. Gu, “Direct determination of aberration functions in microscopy by an artificial neural network,” Opt. Express 28(10), 14511–14521 (2020).
[Crossref]

X. Qu, Y. Song, Y. Jin, Z. Guo, Z. Li, and A. He, “3d particle field reconstruction method based on convolutional neural network for sapiv,” Opt. Express 27(8), 11413–11434 (2019).
[Crossref]

Q. Tian, C. Lu, B. Liu, L. Zhu, X. Pan, Q. Zhang, L. Yang, F. Tian, and X. Xin, "DNN-based aberration correction in a wavefront sensorless adaptive optics system," Opt. Express 27(8), 10765–10776 (2019).

Opt. Lasers Eng. (1)

C. Vanselow and A. Fischer, “Influence of inhomogeneous refractive index fields on particle image velocimetry,” Opt. Lasers Eng. 107, 221–230 (2018).

Optik (1)

H. Ke, B. Xu, Z. Xu, L. Wen, P. Yang, S. Wang, and L. Dong, “Self-learning control for wavefront sensorless adaptive optics system through deep reinforcement learning,” Optik 178, 785–793 (2019).

Sci. Rep. (2)

K. Philipp, F. Lemke, S. Scholz, U. Wallrabe, M. C. Wapler, N. Koukourakis, and J. W. Czarske, “Diffraction-limited axial scanning in thick biological tissue with an aberration-correcting adaptive lens,” Sci. Rep. 9(1), 9532 (2019).

S. Milles, M. Soldera, B. Voisiat, and A. F. Lasagni, “Fabrication of superhydrophobic and ice-repellent surfaces on pure aluminium using single and multiscaled periodic textures,” Sci. Rep. 9(1), 13944 (2019).

Other (12)

R. K. Tyson, Principles of Adaptive Optics, 4th ed. (CRC Press, Boca Raton, 2016).

C. Tropea and A. L. Yarin, Springer Handbook of Experimental Fluid Mechanics (Springer Science & Business Media, 2007).

F. Durst, Fluid Mechanics: An Introduction to the Theory of Fluid Flows (Springer Science & Business Media, 2008).

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle Image Velocimetry: A Practical Guide (Springer, 2018).

Z. Li, Z. Murez, D. Kriegman, R. Ramamoorthi, and M. Chandraker, “Learning to see through turbulent water,” in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), (2018), pp. 512–520.

Y. Tian and S. G. Narasimhan, “Seeing through water: Image restoration using model-based tracking,” in 2009 IEEE 12th International Conference on Computer Vision, (2009), pp. 2303–2310.

E. Keogh and A. Mueen, Curse of Dimensionality (Springer US, Boston, MA, 2017), pp. 314–315.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, (Springer International Publishing, 2015), pp. 234–241.

A. L. Maas, A. Y. Hannun, and A. Y. Ng, "Rectifier nonlinearities improve neural network acoustic models," in Proc. ICML, (2013), p. 3.

S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167 (2015).

X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the fourteenth international conference on artificial intelligence and statistics, (2011), pp. 315–323.


Figures (8)

Fig. 1. Distortion model and measurement principle for the distorted phase from a fluctuating air-water interface.
Fig. 2. Measurement of distorted phase boundaries from a fluctuating air-water interface using a Hartmann-Shack wavefront sensor.
Fig. 3. Schematic of the optical setup for wavefront measurement and dataset generation. The air flow is only switched on during phase boundary measurements to excite the water surface. For dataset generation, the distortion is generated by the deformable mirror while the surface is steady; PIV camera 1 captures undistorted particle images as the labels of the dataset, and PIV camera 2 captures distorted images as the inputs. The inset (red dashed box) shows the measurement location from a top view. HSWFS: Hartmann-Shack wavefront sensor; LP: long pass filter; SP: short pass filter; BS: 50:50 beamsplitter; light sheet: generated by a laser source (660 nm) combined with a cylindrical lens.
Fig. 4. Flowchart showing the process of dataset generation, neural network training and PIV image correction.
Fig. 5. Schematics and details of the proposed MIUN deep learning model.
Fig. 6. Two frames of undistorted-distorted PIV image pairs and their corrected results from the trained MIUN, with the corresponding measured wavefronts and image residuals. The image residual represents the error between the undistorted and the distorted (or corrected) PIV image and is calculated from the normalized grayscale images.
Fig. 7. Image quality assessment of PIV images before and after correction by the trained neural network on the test dataset; $\textrm{RMS}_{mean}=4.45~\mu\textrm{m}$; $\textrm{RMS}_{max}=11.45~\mu\textrm{m}$; $\textrm{PV}_{mean}=19.44~\mu\textrm{m}$; $\textrm{PV}_{max}=45.25~\mu\textrm{m}$.
Fig. 8. Flow velocity profile and PIV measurement results from undistorted, distorted, and corrected PIV images; figures (b)-(d) show the MIUN correction performance. The flow field is represented by white arrows and the local standard deviation by the background color.

Tables (1)

Table 1. Details of the adaptive PIV technique

Equations (8)


$I(\mathbf{x}, t) = I_g(\mathbf{x} + \mathbf{w}(\mathbf{x}, t), t)$

$\mathbf{w}(\mathbf{x}, t) = \alpha \nabla h(\mathbf{x}, t)$

$\Delta\phi = \sum_{m}^{n} a_m Z_m$

$\underset{\mathcal{F}}{\arg\min} \sum_{\{I, S\} \in \Omega} \left\| \mathcal{F}\{I, S\} - I_g \right\|_2^2$

$\mathrm{loss} = \frac{1}{NM} \sum_{x=1}^{N} \sum_{y=1}^{M} \left\| \hat{I}_g(x, y) - I_g(x, y) \right\|_2^2$

$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{MAX_r^2}{MSE}\right) = 20 \log_{10}\!\left(\frac{MAX_r}{\sqrt{MSE}}\right)$

$\mathrm{SSIM} = \dfrac{(2\mu_o\mu_r + C_1)(2\sigma_{or} + C_2)}{(\mu_o^2 + \mu_r^2 + C_1)(\sigma_o^2 + \sigma_r^2 + C_2)}$

$\sigma = \sqrt{\sigma_{PIV}^2 + \sigma_{flow}^2 + \sigma_{distortion}^2}$
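The PSNR and SSIM image-quality metrics listed here can be evaluated directly from a pair of reference and corrected images. A minimal NumPy sketch (function names and the single-window simplification are illustrative; the standard stabilizer constants $C_1=(0.01\,MAX_r)^2$, $C_2=(0.03\,MAX_r)^2$ are assumed):

```python
import numpy as np

def psnr(ref, out, max_val=1.0):
    """Peak signal-to-noise ratio: 10*log10(MAX_r^2 / MSE)."""
    mse = np.mean((ref - out) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, out, max_val=1.0):
    """SSIM computed over the whole image in a single window
    (the usual implementation averages over sliding windows)."""
    C1 = (0.01 * max_val) ** 2
    C2 = (0.03 * max_val) ** 2
    mu_o, mu_r = ref.mean(), out.mean()          # means
    var_o, var_r = ref.var(), out.var()          # variances
    cov = np.mean((ref - mu_o) * (out - mu_r))   # covariance sigma_or
    return ((2 * mu_o * mu_r + C1) * (2 * cov + C2)) / \
           ((mu_o**2 + mu_r**2 + C1) * (var_o + var_r + C2))
```

A per-window average, as in standard SSIM implementations, would apply `ssim_global` over sliding patches and take the mean.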
