In summary, our findings suggest that transfer learning can significantly enhance the performance of deep neural networks on acoustic emission source localization tasks, particularly when high-quality training data is scarce. It highlights the utility of leveraging preexisting knowledge to expedite learning and bolster the model's ability to generalize. However, not all models benefited from transfer learning. The Inception model's performance was only slightly affected, possibly due to the complexities inherent in its architecture. Intriguingly, the FCNN model performed better without transfer learning, indicating that its architecture might be better suited to learning directly from the training data. This observation underscores the need to consider the specifics of each model when applying transfer learning. The presented study evaluates performance on the test dataset.

Figure 7. Comparative analysis of mean loss and range with and without the implementation of transfer learning for: (a) CNN model applied to the impact test dataset; (b) CNN model applied to the PLB test dataset; (c) FCNN model applied to the impact test dataset; and (d) FCNN model applied to the PLB test dataset.
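For readers who want to reproduce the transfer learning step discussed above, the following is a minimal sketch assuming a TensorFlow/Keras workflow. The file name, the choice of which layers to freeze, and the optimizer settings are illustrative assumptions and not the authors' implementation.

```python
import tensorflow as tf

# Load a model pretrained on the source dataset (hypothetical file name).
base_model = tf.keras.models.load_model("pretrained_source_model.h5")

# Freeze the early feature-extraction layers so that only the final layers
# adapt to the scarce target data (e.g., impact or PLB measurements).
for layer in base_model.layers[:-2]:
    layer.trainable = False

# Fine-tune on the target dataset with a small learning rate.
base_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="mse",
)
# base_model.fit(x_target, y_target, validation_split=0.2, epochs=200)
```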
Our discussion is supplemented with statistical metrics such as the minimum (the smallest value in the dataset), maximum (the largest value), median (the middle value when arranged in increasing order), first quartile (Q1: the middle value between the minimum and the median), and third quartile (Q3: the middle value between the median and the maximum). In analyzing the box plots illustrated in Figures 10 and 11, we added two measures to reduce overfitting (a minimal sketch of both is given after the Figure 8 caption below):
• Early stopping: By stopping training if the validation loss does not improve for 20 epochs, we prevent the model from overfitting to the training data. If the validation loss is no longer improving, continued training is unlikely to generalize better to new data.
• Restore best weights: By restoring the weights from the epoch with the best validation loss, we "roll back" the model to the point before overfitting started to occur. This gives us the model that generalizes best to new data.

Figure 8. Comparative analysis of mean loss and range with and without the implementation of transfer learning for: (a) Encoder model applied to the impact test dataset; (b) Encoder model applied to the PLB test dataset; (c) MLP model applied to the impact test dataset; and (d) MLP model applied to the PLB test dataset.
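The sketch below illustrates both points, again assuming a TensorFlow/Keras workflow with NumPy for the summary statistics; the loss values are placeholders and the monitored quantity ("val_loss") is an assumption, while the patience of 20 epochs and the restoration of the best weights follow the description above.

```python
import numpy as np
import tensorflow as tf

# Five-number summary of final validation losses across repeated runs
# (the quantities reported in the box plots). Placeholder values only.
validation_losses = np.array([0.42, 0.38, 0.51, 0.45, 0.40, 0.47])
summary = {
    "min": np.min(validation_losses),
    "Q1": np.percentile(validation_losses, 25),
    "median": np.median(validation_losses),
    "Q3": np.percentile(validation_losses, 75),
    "max": np.max(validation_losses),
}
print(summary)

# Early stopping: halt training if the validation loss has not improved for
# 20 consecutive epochs, and roll back to the weights of the best epoch.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=20,
    restore_best_weights=True,
)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=200, callbacks=[early_stop])
```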