data from finite element modeling. We deployed a point domain network consisting of a 5 × 5 grid of sensing locations to gather the simulation data, which increases the complexity of the surrounding mesh and substantially slows the collection of the reverberation pattern (reflected signals after 100 μs). As such, for practicality and computational efficiency, we limited the simulation data collection to the initial 100 μs. The simulation was conducted on a workstation equipped with a 3.1 GHz multi-core processor and a 4 GB dedicated graphics card; on average, each simulation run took approximately 40 min. To gather an adequate volume of source domain data (the simulation dataset), a data augmentation process was executed, resulting in the accumulation of 900 waveforms. It is noteworthy that differences in the reflection and trigger mechanisms between simulations and experiments, as observable in the figures, stem from variations in the interaction with adjacent substrates and boundary conditions, resulting in distinct reverberation patterns. Furthermore, while the simulation model logs the exact time of arrival, the experimental process depends on manual trigger thresholding.

t-SNE is a powerful technique for visualizing high-dimensional data by mapping each data point to a two- or three-dimensional space. While t-SNE was originally designed for static data, it has been adapted for use with time series data in some cases. Visualizing AE data can be challenging due to its complexity and high dimensionality; however, t-SNE can map time series data onto a low-dimensional space while preserving its underlying structure. To apply t-SNE to time series data, we first need to transform the sequential data into a set of fixed-length feature vectors that can be used as input to t-SNE. This can be done using techniques such as sliding windows or feature extraction methods such as Fourier or wavelet transforms. Once we have transformed the time series data into feature vectors, we can compute pairwise similarities between them using a Gaussian kernel:

$$p_{i,j} = \frac{\exp\left(-\lVert x_i - x_j \rVert^2 / 2\sigma^2\right)}{\sum_{k \neq l} \exp\left(-\lVert x_k - x_l \rVert^2 / 2\sigma^2\right)}$$

where $x_i$ and $x_j$ are two feature vectors, $\sigma$ is a parameter that controls the width of the Gaussian kernel, and $p_{i,j}$ is the probability that $x_i$ would pick $x_j$ as its neighbor if neighbors were picked in proportion to their probability density under a Gaussian centered at $x_i$. Next, we compute pairwise similarities between points in the low-dimensional map using a Student-t distribution:

$$q_{i,j} = \frac{\left(1 + \lVert y_i - y_j \rVert^2\right)^{-1}}{\sum_{k \neq l} \left(1 + \lVert y_k - y_l \rVert^2\right)^{-1}}$$

where $y_i$ and $y_j$ are two points in the low-dimensional map, and $q_{i,j}$ is the probability that $y_i$ would pick $y_j$ as its neighbor if neighbors were picked uniformly at random from all other points. Finally, t-SNE minimizes the difference between these two distributions using gradient descent on a cost function that measures their divergence:

$$\mathrm{KL}(P \parallel Q) = \sum_i \sum_j p_{i,j} \log \frac{p_{i,j}}{q_{i,j}}$$

We employed this t-SNE technique to enhance our understanding of the relationship between our simulation and experimental datasets. The two-dimensional plots generated by this method, depicted in Figure 5, show the similarities between AE signals collected from nine distinct zones.
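As a concrete illustration of this pipeline, the following is a minimal sketch in Python, assuming the AE waveforms are available as an (n_signals, n_samples) NumPy array with known zone labels. The file names, the FFT-magnitude feature choice, and the perplexity value are illustrative assumptions, not details taken from the study.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def fft_features(waveforms, n_fft_bins=128):
    """Convert equal-length waveforms into fixed-length feature vectors
    by keeping the magnitudes of the first n_fft_bins FFT coefficients
    (assumes n_samples is large enough to yield n_fft_bins bins)."""
    spectra = np.abs(np.fft.rfft(waveforms, axis=1))
    return spectra[:, :n_fft_bins]

# Hypothetical inputs:
# waveforms  - (n_signals, n_samples) array of AE time series
# zone_labels - integer zone index (1-9) for each signal
waveforms = np.load("ae_waveforms.npy")    # hypothetical file
zone_labels = np.load("zone_labels.npy")   # hypothetical file

features = fft_features(waveforms)

# perplexity controls the effective neighborhood size; sklearn's TSNE
# implements the Gaussian/Student-t formulation and KL objective above.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(features)

plt.scatter(embedding[:, 0], embedding[:, 1], c=zone_labels,
            cmap="tab10", s=10)
plt.xlabel("x-tsne")
plt.ylabel("y-tsne")
plt.show()
```

Plotting the embedding colored by zone, as above, reproduces the kind of cluster comparison shown in Figure 5.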
Figures 5a and 5b demonstrate that the experimental data from both the impact and PLB tests exhibit greater variability and less distinct clustering, suggesting more complexities and uncertainties in real-world scenarios. On the other hand, Figures 5c and 5d illustrate that the simulation data from both tests cluster more clearly, indicating the advantages of using controlled and predictable simulation data for improving AE source localization techniques. Nevertheless, it is important to recall that the simulation data might not encapsulate all the complexities and variations inherent in real-world scenarios. Therefore, further optimization of our proposed source localization techniques is necessary to incorporate more uncertainty factors, ensuring effectiveness across diverse real-world applications.

Deep Transfer Learning for Knowledge Transfer

This study investigates the effective application of transfer learning to new data, leveraging the insights obtained from pretrained models. A variety of deep learning models, including convolutional neural network (CNN), fully connected neural network (FCNN), Encoder, Residual Network (ResNet), Inception, and Multi-Layer Perceptron (MLP), were assessed for their ability to analyze simulated datasets and to extract underlying features using a layer-wise fine-tuning strategy. The methodology entailed signal acquisition from the simulated datasets, followed by data preprocessing, feature extraction via fine-tuned deep learning models, and finally classification based on acoustic emission source location. To scrutinize impact and PLB test simulations, six deep learning models with distinct architectures and capabilities were investigated. This strategy leads to a broader comprehension of the data, permitting the recognition of patterns and features that would be overlooked when using a single model. Detailed summaries of the architectures used for the networks mentioned are as follows (a minimal pretraining and fine-tuning sketch follows the list):
• CNN implements two convolutional blocks with 1D convolutions, instance normalization, and dropout. Each block comprises a Conv1D layer, followed by instance normalization, dropout, and max pooling. Hierarchical features are extracted from the input time series by the convolutional blocks; these features are then flattened and passed to a SoftMax classifier. The CNN model employs categorical cross-entropy loss and the Adam optimizer (Simonyan and Zisserman 2014).
• FCNN resembles the CNN architecture but replaces max pooling with global average pooling to minimize spatial information loss. The global average pooling layer compacts the spatial information into a 1D vector, and these compressed features are then passed to the SoftMax classifier (Zhang et al. 2017).
• ResNet uses residual blocks to circumvent the vanishing gradient issue. Residual blocks add the input directly to the output of the stacked convolutional layers, enabling direct gradient flow. It uses batch normalization and weight regularization (L2 regularization). Each residual block comprises two Conv1D layers followed by batch normalization and activation, with the output of the residual blocks average pooled and passed to the SoftMax classifier (He et al. 2015).
• Encoder resembles the CNN's convolutional blocks but employs Parametric Rectified Linear Unit (PReLU) activation and instance normalization. After the convolutional blocks, an attention mechanism is applied. This attention layer assigns weights to the feature maps, focusing on pertinent features. The attended features are flattened and passed to the SoftMax classifier (Vincent et al. 2008).
• MLP substitutes the convolutional layers with dense layers for time series classification. The input time series is flattened and sent to the dense layers. It uses two dense layers with dropout for regularization. The output dense layer utilizes SoftMax activation for the classification (Delashmit and Manry 2005).
• Inception utilizes an inception module with parallel branches of 1 × 1, 3 × 3, and 5 × 5 convolutions and max pooling. The outputs of the parallel branches are concatenated, forming the inception module. It employs batch normalization and dropout after the inception module. The features are flattened and passed to the SoftMax classifier (Zhang et al. 2022).

Figure 5. The two-dimensional t-SNE plot for: (a) impact test dataset; (b) PLB test dataset; (c) impact simulation dataset; and (d) PLB simulation dataset. (Each panel plots x-tsne vs. y-tsne, with points colored by zones 1 through 9.)
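Following the architecture summaries above, here is a minimal Keras sketch of the 1D-CNN variant together with the layer-wise fine-tuning step. The block structure (Conv1D, instance normalization, dropout, max pooling, then a SoftMax classifier trained with categorical cross-entropy and Adam) follows the description above; the filter counts, kernel size, dropout rate, learning rates, and the number of frozen layers are illustrative assumptions. GroupNormalization with groups=-1 is used here as an instance-normalization equivalent available in core Keras.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_len, n_classes):
    inputs = layers.Input(shape=(input_len, 1))
    x = inputs
    for filters in (64, 128):  # two convolutional blocks
        x = layers.Conv1D(filters, kernel_size=8, padding="same",
                          activation="relu")(x)
        x = layers.GroupNormalization(groups=-1)(x)  # instance normalization
        x = layers.Dropout(0.2)(x)
        x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Pretrain on the simulated (source-domain) waveforms, then fine-tune
# layer-wise on the experimental (target-domain) data by freezing the
# early layers and retraining the rest at a reduced learning rate.
model = build_cnn(input_len=1024, n_classes=9)  # 9 source-location zones
# model.fit(sim_x, sim_y, epochs=50, batch_size=32)   # source domain

for layer in model.layers[:-3]:  # how many layers to freeze is a design choice
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(exp_x, exp_y, epochs=20, batch_size=16)   # target domain
```

The same pretrain-then-freeze pattern applies to the other five architectures; only the body of build_cnn changes.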