Our research employs transfer learning, a technique that can enhance the performance of deep neural networks across a range of tasks. We illustrate its efficacy by applying it to deep-neural-network-based acoustic emission (AE) source localization. AE is a nondestructive testing method that uses the sound waves emitted by a material to identify and analyze defects, and AE source localization is the determination of the origin of those AE signals.

We first pretrained a deep neural network on a large simulation dataset, allowing the network to capture the general features of AE signals. We then fine-tuned the network on a smaller experimental dataset, enabling it to learn the specific features present in the experimental data. Concretely, our process first transfers layers from the pretrained model and then freezes their parameters. As new AE data is processed, it passes through these frozen layers before progressing through the trainable layers, allowing us to localize the AE source. Owing to the intrinsic connection between the simulation and experimental data, the feature extractor learned on the former can be applied to the latter, incorporated into our model as a nonadjustable set of layers.

We designate the high-level features extracted from these frozen layers as “bottleneck” features because they are highly condensed and sit at the constriction point preceding the classifiers (as illustrated in Figure 6). The applied deep learning architecture comprises one of six classifiers, each consisting of multiple fully connected layers following global pooling. This design enables a nonlinear mapping from bottleneck features to AE source localization. Additionally, a fusion layer amalgamates the extracted features, and an extra layer links the bottleneck features to location predictions.
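The frozen-extractor scheme described above can be sketched with a toy one-hidden-layer network, where the hidden layer stands in for the pretrained, frozen feature extractor producing “bottleneck” features, and only the head is trained on the new data. All shapes, data, and the optimizer below are illustrative assumptions, not the study’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: weights carried over from the simulation
# stage and frozen (never updated during training on experimental data).
W_frozen = rng.normal(size=(8, 16))          # input dim 8 -> bottleneck dim 16

def bottleneck(x):
    """Frozen layers: extract condensed (bottleneck) features."""
    return np.tanh(x @ W_frozen)             # no gradient is ever applied here

# Trainable head mapping bottleneck features to a localization output.
W_head = rng.normal(size=(16, 1)) * 0.1

# Toy "experimental" data: inputs and a scalar target per sample.
X = rng.normal(size=(64, 8))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

lr = 0.05
losses = []
W_frozen_before = W_frozen.copy()
for _ in range(200):
    feats = bottleneck(X)                    # pass through the frozen layers
    pred = feats @ W_head                    # then through the trainable layers
    err = pred - y
    losses.append(float((err ** 2).mean()))
    W_head -= lr * feats.T @ err / len(X)    # gradient step on the head only

assert np.array_equal(W_frozen, W_frozen_before)   # extractor stayed frozen
assert losses[-1] < losses[0]                      # head learned the mapping
```

The key point the sketch makes concrete is that gradient updates touch only `W_head`; the transferred layers act purely as a fixed feature extractor.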
During fine-tuning, the pretrained model’s weights serve as the initial values, and the model undergoes further training on the available target-domain data. As a consequence, the fine-tuned model can adapt to the target domain’s unique characteristics, offering superior performance to a model trained from scratch.

Results and Discussion

In this section, we compare the performance of various deep learning models with and without transfer learning on acoustic emission source localization tasks. We analyze the mean loss and loss range over 200 epochs for CNN, FCNN, Encoder, ResNet, MLP, and Inception architectures. These models were trained on two different datasets, the impact dataset and the PLB dataset, each containing distinct acoustic emission source localizations.

In the first scenario, we trained CNN models without transfer learning directly on the experimental dataset. Both models exhibit a similar pattern over the epochs, initially showing high loss values and gradually achieving a significant reduction in loss. However, the validation loss does not decrease as substantially, which may indicate overfitting: the models have learned the training data too well but struggle to generalize to new, unseen data. In the second scenario, we employed transfer learning, where the CNN models were first pretrained on a large simulated dataset before being fine-tuned on the experimental dataset. Both models begin with lower loss values than their counterparts without transfer learning, which can be attributed to the initial learning from the simulated dataset. Over 200 epochs, these models improve significantly. One model achieves a very low validation loss, suggesting excellent generalization capability, while the other has a slightly higher validation loss. The performance of the other models (FCNN, Encoder, MLP, Inception, and ResNet) is also compared with and without transfer learning.
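The observation that fine-tuned models begin training at lower loss follows directly from using pretrained weights as initial values. A minimal sketch with a linear model and least squares standing in for pretraining, and with all data and shapes assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = rng.normal(size=(8, 1))

# Source (simulation) task: plentiful data from a closely related mapping.
X_src = rng.normal(size=(1000, 8))
y_src = X_src @ true_w

# Target (experimental) task: scarce data, slightly shifted mapping.
X_tgt = rng.normal(size=(32, 8))
y_tgt = X_tgt @ (true_w + 0.1 * rng.normal(size=(8, 1)))

def mse(w, X, y):
    return float(((X @ w - y) ** 2).mean())

# "Pretraining": fit on the large simulated dataset (least squares here).
w_pre, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)

# From-scratch alternative: a random initialization.
w_scratch = rng.normal(size=(8, 1))

# Fine-tuning starts target-domain training from w_pre, so its initial
# loss on the target data is already far lower than the random start.
loss_finetune_start = mse(w_pre, X_tgt, y_tgt)
loss_scratch_start = mse(w_scratch, X_tgt, y_tgt)
assert loss_finetune_start < loss_scratch_start
```

Because the source and target mappings are related, the pretrained weights land close to the target optimum, which is exactly the head start the loss curves in the results reflect.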
Some models, such as the Encoder and MLP, exhibit significant improvements when transfer learning is applied, while others show minor or negligible differences. Interestingly, the ResNet model demonstrates good performance on both the impact and PLB datasets, with and without transfer learning, though its loss curve fluctuates more without transfer learning. Figures 7, 8, and 9 illustrate the mean loss and loss range for each model with and without transfer learning on the impact and PLB datasets. These visualizations provide a clear comparison of the models’ performances, highlighting the advantages of transfer learning in various cases.

Figure 6. Schematic and structure of knowledge transfer via deep transfer learning.

Materials Evaluation, July 2023

In summary, our findings suggest that transfer learning can significantly enhance the performance of deep neural networks on acoustic emission source localization tasks, particularly when high-quality training data is scarce. This highlights the utility of leveraging preexisting knowledge to expedite learning and bolster the model’s ability to generalize. However, not all models benefited from transfer learning. The Inception model’s performance was slightly affected, possibly due to the complexities inherent in its architecture. Intriguingly, the FCNN model performed better without transfer learning, indicating that its architecture might be more suited to direct learning from the training data. This observation underscores the need to consider the specificities of each model when applying transfer learning. The presented study evaluates performance on the test dataset. Our discussion is supplemented with statistical analysis.
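The overfitting symptom noted for the from-scratch models (training loss keeps falling while validation loss stalls or rises) can be flagged directly from the two loss curves. A minimal sketch with synthetic, illustrative curves; the comparison window is an arbitrary assumption, not a value from the study:

```python
import numpy as np

epochs = np.arange(200)
# Illustrative curves: training loss keeps decreasing, validation loss
# decays at first but then drifts upward (the overfitting signature).
train_loss = 3.0 * np.exp(-epochs / 30.0)
val_loss = 3.0 * np.exp(-epochs / 30.0) + epochs / 200.0

def overfit_epoch(train, val, window=10):
    """First epoch where validation loss rises while training loss falls."""
    for t in range(window, len(train)):
        if val[t] > val[t - window] and train[t] < train[t - window]:
            return t
    return None

t_of = overfit_epoch(train_loss, val_loss)
assert t_of is not None          # divergence detected in the synthetic curves
```

The same check applied to real loss histories gives a simple, reproducible criterion for the qualitative overfitting judgments made from the figures.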
Figure 7. Comparative analysis of mean loss and loss range with and without transfer learning for: (a) the CNN model on the impact test dataset; (b) the CNN model on the PLB test dataset; (c) the FCNN model on the impact test dataset; and (d) the FCNN model on the PLB test dataset. Each panel plots training or validation loss against epoch, showing the mean loss and loss range for the model with and without TL.
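The mean-loss and loss-range curves shown in Figures 7–9 can be reproduced from per-run loss histories. A minimal sketch, assuming loss histories stacked as rows (one row per repeated training run, one column per epoch); the synthetic decaying curves are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)
n_runs, n_epochs = 5, 200
epochs = np.arange(n_epochs)

# Fake per-run loss curves: exponential decay plus run-to-run noise.
histories = 4.0 * np.exp(-epochs / 50.0) + 0.2 * rng.random((n_runs, n_epochs))

mean_loss = histories.mean(axis=0)                        # solid "mean loss" curve
low, high = histories.min(axis=0), histories.max(axis=0)  # shaded "loss range" band

assert mean_loss.shape == (n_epochs,)
assert np.all((low <= mean_loss) & (mean_loss <= high))
```

Plotting `mean_loss` as a line and filling between `low` and `high` yields the mean-and-range presentation used in the figures.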