and therefore evaluates the overall classifier rather than the performance at a single user-chosen threshold. The AUC is also a suitable metric for imbalanced data.

Tips: Evaluation metrics should be chosen based upon the bias and variance of the dataset. For an unbiased, well-balanced dataset, accuracy is often the most representative measure of model performance. In NDT/E, we are often concerned with the true positive rate, also known as the probability of (defect) detection or the recall of a defect. In other NDT/E scenarios, we may want to ensure that normal materials are not predicted as material defects (e.g., delaminations), in which case the false call rate (also known as the false positive rate) or the precision score may be more valuable. Note that the true positive and false positive rates are utilized in traditional NDT/E probability of detection assessment (Cherry and Knott 2022). In cases where we want a balance between the recall and precision scores, the F1 score becomes a valuable metric.

Conclusion

ML has significant potential to contribute to the NDT/E community. However, successful use of ML algorithms demands greater insight into their capabilities and intricacies. This sentiment also holds for those in the community building new datasets for ML practices. Understanding the basic capabilities of ML paradigms, navigating how bias and variance within the data affect the ML model, and establishing how performance will be measured will help the community create datasets that have the greatest impact.

ACKNOWLEDGMENTS
Work on this paper was partially funded by the United States Air Force under contract FA8650-18-C-5015.

AUTHORS
Joel B. Harley: Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611
Suhaib Zafar: Stellantis Chrysler Technology Center, Auburn Hills, MI 48326
Charlie Tran: Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611

CITATION
Materials Evaluation 81 (7): 43–47. https://doi.org/10.32548/2023.me-04358. ©2023 American Society for Nondestructive Testing.

REFERENCES
Belkin, M., D. Hsu, S. Ma, and S. Mandal. 2019. "Reconciling modern machine-learning practice and the classical bias-variance trade-off." Proceedings of the National Academy of Sciences of the United States of America 116 (32): 15849–54. https://doi.org/10.1073/pnas.1903070116.
Bishop, C. M. 2006. Pattern Recognition and Machine Learning. Springer New York.
Brunton, S. L., B. R. Noack, and P. Koumoutsakos. 2020. "Machine Learning for Fluid Mechanics." Annual Review of Fluid Mechanics 52 (1): 477–508. https://doi.org/10.1146/annurev-fluid-010719-060214.
Cherry, M., and C. Knott. 2022. "What is probability of detection?" Materials Evaluation 80 (12): 24–28. https://doi.org/10.32548/2022.me-04324.
Du, M., N. Liu, and X. Hu. 2019. "Techniques for interpretable machine learning." Communications of the ACM 63 (1): 68–77. https://doi.org/10.1145/3359786.
Lever, J., M. Krzywinski, and N. Altman. 2017. "Principal component analysis." Nature Methods 14 (7): 641–42. https://doi.org/10.1038/nmeth.4346.
Liu, C., J. B. Harley, M. Bergés, D. W. Greve, and I. J. Oppenheim. 2015. "Robust ultrasonic damage detection under complex environmental conditions using singular value decomposition." Ultrasonics 58:75–86. https://doi.org/10.1016/j.ultras.2014.12.005.
Mann, L. L., T. E. Matikas, P. Karpur, and S. Krishnamurthy. 1992. "Supervised backpropagation neural networks for the classification of ultrasonic signals from fiber microcracking in metal matrix composites." In IEEE 1992 Ultrasonics Symposium Proceedings. Tucson, AZ.
https://doi.org/10.1109/ULTSYM.1992.275983.
Martín, Ó., M. López, and F. Martín. 2007. "Artificial neural networks for quality control by ultrasonic testing in resistance spot welding." Journal of Materials Processing Technology 183 (2–3): 226–33. https://doi.org/10.1016/j.jmatprotec.2006.10.011.
Mehrabi, N., F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan. 2022. "A Survey on Bias and Fairness in Machine Learning." ACM Computing Surveys 54 (6): 1–35. https://doi.org/10.1145/3457607.
Miceli, M., J. Posada, and T. Yang. 2022. "Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power?" Proceedings of the ACM on Human-Computer Interaction 6: 1–14. https://doi.org/10.1145/3492853.
OpenAI. 2023. "GPT-4 Technical Report." arXiv:2303.08774. https://doi.org/10.48550/arXiv.2303.08774.
Saleem, M., and H. Gutierrez. 2021. "Using artificial neural network and non-destructive test for crack detection in concrete surrounding the embedded steel reinforcement." Structural Concrete 22 (5): 2849–67. https://doi.org/10.1002/suco.202000767.
Sikorska, J. Z., and D. Mba. 2008. "Challenges and obstacles in the application of acoustic emission to process machinery." Proceedings of the Institution of Mechanical Engineers, Part E: Journal of Process Mechanical Engineering 222 (1): 1–19. https://doi.org/10.1243/09544089JPME111.
Taheri, H., and S. Zafar. 2023. "Machine learning techniques for acoustic data processing in additive manufacturing in situ process monitoring: A review." Materials Evaluation 81 (7): 50–60.
Taheri, H., M. Gonzalez Bocanegra, and M. Taheri. 2022. "Artificial Intelligence, Machine Learning and Smart Technologies for Nondestructive Evaluation." Sensors (Basel) 22 (11): 4055. https://doi.org/10.3390/s22114055.
van der Maaten, L., and G. Hinton. 2008. "Visualizing Data using t-SNE." Journal of Machine Learning Research 9 (86): 2579–605.
Vejdannik, M., A. Sadr, V. H. C. de Albuquerque, and J. M. R. S. Tavares. 2019.
"Signal Processing for NDE." In Handbook of Advanced Nondestructive Evaluation, eds. N. Ida and N. Meyendorf. Springer. 1525–43. https://doi.org/10.1007/978-3-319-26553-7_53.
Xu, D., P. F. Liu, Z. P. Chen, J. X. Leng, and L. Jiao. 2020. "Achieving robust damage mode identification of adhesive composite joints for wind turbine blade using acoustic emission and machine learning." Composite Structures 236:111840. https://doi.org/10.1016/j.compstruct.2019.111840.
Yang, K., S. Kim, and J. B. Harley. 2022. "Guidelines for effective unsupervised guided wave compression and denoising in long-term guided wave structural health monitoring." Structural Health Monitoring. https://doi.org/10.1177/14759217221124689.

Figure 6. Receiver operating characteristic (ROC) curve. The ROC curve is obtained by plotting the true positive rate against the false positive rate at each classification threshold. The quality of the ROC curve can be summarized by the area under the curve (AUC).
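The metrics discussed in the tips above — recall (the probability of detection), the false call rate, precision, the F1 score, and the AUC summarized in Figure 6 — can be computed directly from predictions and scores. The following is an illustrative sketch only, not code from the authors; the function names are our own, and in practice libraries such as scikit-learn provide equivalent routines.

```python
import numpy as np

def ndt_metrics(y_true, y_pred):
    """Threshold-level metrics, named as in the text (1 = defect)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)     # defects correctly flagged
    fp = np.sum(~y_true & y_pred)    # false calls on normal material
    fn = np.sum(y_true & ~y_pred)    # missed defects
    tn = np.sum(~y_true & ~y_pred)   # normal material correctly passed
    recall = tp / (tp + fn)          # true positive rate / probability of detection
    false_call = fp / (fp + tn)      # false positive rate / false call rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "false_call_rate": false_call,
            "precision": precision, "f1": f1}

def roc_auc(y_true, scores):
    """Sweep the decision threshold over the scores and integrate TPR vs. FPR."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    thresholds = np.concatenate(([np.inf], np.sort(scores)[::-1]))
    tpr = np.array([np.mean(scores[y_true] >= t) for t in thresholds])
    fpr = np.array([np.mean(scores[~y_true] >= t) for t in thresholds])
    # trapezoidal area under the (fpr, tpr) curve
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))
```

A classifier that ranks every defect above every normal specimen yields an AUC of 1.0, while chance performance gives roughly 0.5, matching the "excellent" and "random" curves sketched in Figure 6.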