approach, continuous feedback could be provided to an adaptive weld controller using the regression output of MNS, or other discrete events could be derived from it. In either case, optimal use of MNS output in a production implementation would likely require calibration welds for each novel weld type (defined, for example, by sheet thicknesses, sheet materials, etc.) to determine an appropriate threshold or target value for MNS output. In future work, this ultrasound-based approach with AI-driven feedback will be rigorously compared against alternative feedback approaches (e.g., resistance-based feedback).

With respect to this AI-based approach, a more rigorous optimization of hyperparameters within the feasible space of architectures may yield better performance. In this study, the largest model possible (in terms of parameter count) was used based on feasibility study results; however, better performance might be achieved by incorporating various popular modules into the network (where feasible), for example, skip connections (He et al. 2016; Ronneberger et al. 2015), atrous spatial pyramid pooling (Chen et al. 2018), batch normalization (Ioffe and Szegedy 2015), and attention mechanisms such as the convolutional block attention module (Woo et al. 2018). Alternatively, other novel architectures, such as vision transformers (Liu et al. 2021), could be explored in future work. In addition, providing known welding parameters to the model as inputs (such as sheet thicknesses, sheet material encodings, force, welding cap face diameter, etc.) is another potential opportunity for improvement that could be investigated in the future.

Precise and continuous annotations of both the nugget and the stack throughout the weld would be ideal for deriving event timestamps and MNS curves. From the standpoint of dataset development, this would essentially be the same as labeling the M-scans for semantic segmentation of the nugget and stack boundaries, which is significantly more tedious and laborious than the proposed approach, and still subjective (though perhaps less subjective, as it is less abstract). One advantage of the proposed approach is that it required, at most, eight clicks per annotated M-scan (each of the four event timings, two nugget labels, and two stack labels) during data annotation, whereas semantic segmentation would conservatively require 20 clicks per segmented region to delineate each polygon, or 40 clicks in total between the nugget and stack regions, so the proposed approach yielded roughly a fivefold reduction in data preparation time. That said, semantic segmentation of the ultrasonic data is still a natural next step for this work, as in the case of Guo et al. (2023). Other works have demonstrated the potential for semantic segmentation in real-time ultrasonic inspection in both NDE and medical contexts (Fiorito et al. 2018; Hu et al. 2022; Shandiz and Tóth 2022). If it were found to be performant, generalizable, and still sufficiently fast for adaptive RSW (i.e., 1 ms per A-scan inference time in a production environment), semantic segmentation could yield more precise and continuous measurements, and consequently better feedback. This would be especially valuable if continuous feedback to an adaptive weld controller were preferred over discrete feedback, or perhaps required by a particular adaptive welding algorithm.
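To make the idea of deriving a discrete event from the MNS regression output more concrete, the following is a minimal Python sketch, assuming MNS is reported as a normalized per-A-scan value. The function name mns_feedback_event, the 0.65 threshold, and the five-frame hold are illustrative assumptions rather than values from this study; in practice, the threshold and hold duration would come from calibration welds for the specific weld type, as discussed above.

```python
# Illustrative sketch only: firing a discrete feedback event once a smoothed
# MNS curve has met a calibration-derived target for several consecutive A-scans.
import numpy as np

def mns_feedback_event(mns_curve, threshold=0.65, hold_frames=5):
    """Return the index of the first A-scan at which the smoothed MNS value
    has stayed at or above `threshold` for `hold_frames` consecutive frames,
    or None if the condition is never met during the weld."""
    mns = np.asarray(mns_curve, dtype=float)
    # Light moving-average smoothing to suppress frame-to-frame jitter
    kernel = np.ones(hold_frames) / hold_frames
    smoothed = np.convolve(mns, kernel, mode="same")
    above = smoothed >= threshold
    run = 0
    for i, flag in enumerate(above):
        run = run + 1 if flag else 0
        if run >= hold_frames:
            return i  # frame index at which the discrete event fires
    return None

# Example with a simulated MNS curve rising during nugget growth
t = np.linspace(0, 1, 200)
simulated_mns = np.clip(1.2 * t - 0.1, 0, 1)
print("Event fires at A-scan index:", mns_feedback_event(simulated_mns))
```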
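Similarly, the sketch below illustrates, using the Keras functional API, how known welding parameters could be supplied to a model as auxiliary inputs alongside the ultrasonic data. The input shapes, layer choices, parameter list, and the single MNS regression head are assumptions for illustration and do not represent the architecture evaluated in this work.

```python
# Hypothetical multi-input model: ultrasonic M-scan plus scalar welding parameters.
from tensorflow.keras import layers, Model

mscan_in = layers.Input(shape=(200, 256, 1), name="mscan")    # assumed M-scan size
params_in = layers.Input(shape=(4,), name="weld_params")      # e.g., thicknesses, material encoding, force, cap diameter

# Small convolutional feature extractor for the ultrasonic data
x = layers.Conv2D(16, 3, padding="same", activation="relu")(mscan_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# Fuse the scalar welding parameters with the learned ultrasonic features
merged = layers.Concatenate()([x, params_in])
out = layers.Dense(1, activation="sigmoid", name="mns")(merged)

model = Model(inputs=[mscan_in, params_in], outputs=out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```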
Conclusion

The investigated approach is not limited to ultrasonic NDE or resistance spot welding; such an approach could be applied to the interpretation of NDE data from a variety of other modalities for a variety of other joining methodologies. In all, the investigated approach is an exciting first step toward real-time interpretation of ultrasonic NDE data from RSW. It demonstrates the enormous potential of ultrasound-based process monitoring, backed by real-time interpretation using deep learning, for real-time adaptive feedback systems in modern manufacturing. Such NDE 4.0 systems are integral to Industry 4.0 and the ZDM paradigm, and this work brings zero-defect RSW closer to reality.

ACKNOWLEDGMENTS

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Collaborative Research and Development (CRD) grant CRDPJ 508935-17. It was also supported by the National Research Council Canada (NRC) Industrial Research Assistance Program (IRAP). The authors would also like to thank the Institute for Diagnostic Imaging Research at the University of Windsor, Canada, as well as Tessonics Inc. in Windsor, Canada, for providing experimental facilities, equipment, and materials for this research.

REFERENCES

Abadi, M., A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, et al. 2015. "TensorFlow: Large-scale machine learning on heterogeneous systems." https://www.tensorflow.org.
Shafiei Alavijeh, M., R. Scott, F. Seviaryn, and R. Gr. Maev. 2020. "NDE 4.0 compatible ultrasound inspection of butt-fused joints of medium-density polyethylene gas pipes, using chord-type transducers supported by customized deep learning models." Research in Nondestructive Evaluation 31 (5–6). https://doi.org/10.1080/09349847.2020.1841864.
Shafiei Alavijeh, M., R. Scott, F. Seviaryn, and R. Gr. Maev. 2021. "Using machine learning to automate ultrasound-based classification of butt-fused joints in medium-density polyethylene gas pipes." Journal of the Acoustical Society of America 150 (1): 561–72. https://doi.org/10.1121/10.0005656.
Cantero-Chinchilla, S., P. D. Wilcox, and A. J. Croxford. 2022. "Deep learning in automated ultrasonic NDE – developments, axioms and opportunities." NDT & E International 131. https://doi.org/10.1016/j.ndteint.2022.102703.
Chen, L.-C., Y. Zhu, G. Papandreou, F. Schroff, and H. Adam. 2018. "Encoder-decoder with atrous separable convolution for semantic image segmentation." In Computer Vision – ECCV 2018: 15th European Conference: 833–851. https://doi.org/10.1007/978-3-030-01234-2_49.
Chertov, A., and R. Gr. Maev. 2004. "Determination of resistance spot weld quality in real time using reflected acoustic waves. Comparison with through-transmission mode." 16th World Conference on NDT. Montreal, Canada.
Chollet, F., et al. 2015. Keras. Software available at https://keras.io.
Denisov, A. A., C. M. Shakarji, B. B. Lawford, R. G. Maev, and J. M. Paille. 2004. "Spot weld analysis with 2D ultrasonic arrays." Journal of Research of the National Institute of Standards and Technology 109 (2): 233–44. https://doi.org/10.6028/jres.109.015.
Dugmore, A. 2021. "New composites target EV applications." SAE International. https://www.sae.org/news/2021/08/new-composites-target-ev-applications.
El-Banna, M. 2006. "Dynamic resistance based intelligent resistance welding." Doctoral dissertation. Wayne State University.
Escobar, C. A., M. E. McGovern, and R. Morales-Menendez. 2021. "Quality 4.0: A review of big data challenges in manufacturing." Journal of Intelligent Manufacturing 32 (8): 2319–34. https://doi.org/10.1007/s10845-021-01765-4.
Fiorito, A. M., A. Østvik, E. Smistad, S. Leclerc, O. Bernard, and L. Lovstakken. 2018. "Detection of cardiac events in echocardiography using 3D convolutional recurrent neural networks." 2018 IEEE International Ultrasonics Symposium (IUS). Kobe, Japan: 1–4. https://doi.org/10.1109/ULTSYM.2018.8580137.
Psarommatis, F., F. Fraile, J. P. Mendonca, O. Meyer, O. Lazaro, and D. Kiritsis. 2023. "Zero defect manufacturing in the era of industry 4.0 for achieving sustainable and resilient manufacturing." Frontiers in Manufacturing Technology 3. https://doi.org/10.3389/fmtec.2023.1124624.
Psarommatis, F., J. Sousa, J. P. Mendonça, and D. Kiritsis. 2022. "Zero-defect manufacturing the approach for higher manufacturing sustainability in the era of industry 4.0: A position paper." International Journal of Production Research 60 (1): 73–91. https://doi.org/10.1080/00207543.2021.1987551.
Guo, Y., Z. Xiao, L. Geng, J. Wu, F. Zhang, Y. Liu, and W. Wang. 2019. "Fully convolutional neural network with GRU for 3D braided composite material flaw detection." IEEE Access: Practical Innovations, Open Solutions 7:151180–88. https://doi.org/10.1109/ACCESS.2019.2946447.
Guo, Y., Z. Xiao, and L. Geng. 2023. "Defect detection of 3D braided composites based on semantic segmentation." Journal of the Textile Institute. https://doi.org/10.1080/00405000.2022.2054103.
He, K., X. Zhang, S. Ren, and J. Sun. 2016. "Deep residual learning for image recognition." 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV: 770–778. https://doi.org/10.1109/CVPR.2016.90.
Hu, J., E. Smistad, I. M. Salte, H. Dalen, and L. Lovstakken. 2022. "Exploiting temporal information in echocardiography for improved image segmentation." 2022 IEEE International Ultrasonics Symposium (IUS). Venice, Italy: 1–4. https://doi.org/10.1109/IUS54386.2022.9958670.
Huang, L., X. Hong, Z. Yang, Y. Liu, and B. Zhang. 2022. "CNN-LSTM network-based damage detection approach for copper pipeline using laser ultrasonic scanning." Ultrasonics 121:106685. https://doi.org/10.1016/j.ultras.2022.106685.
Ioffe, S., and C. Szegedy. 2015. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." arXiv:1502.03167 [cs.LG]. https://doi.org/10.48550/ARXIV.1502.03167.
Kingma, D. P., and J. Ba. 2015. "Adam: A method for stochastic optimization." 3rd International Conference on Learning Representations, San Diego, CA.
Liu, Z., Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo. 2021. "Swin transformer: Hierarchical vision transformer using shifted windows." 2021 IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, QC, Canada: 9992–10002. https://doi.org/10.1109/ICCV48922.2021.00986.
Maev, R., F. Ewasyshyn, S. Titov, J. Paille, E. Maeva, A. Denisov, and F. Seviaryn. 2005. Method and apparatus for assessing the quality of spot welds. US Patent 7,775,415 B2, filed 14 June 2005, and issued 17 August 2010.
Maev, R. Gr., A. M. Chertov, J. M. Paille, and F. J. Ewasyshyn. 2013. Ultrasonic in-process monitoring and feedback of resistance spot weld quality. US Patent 9,296,062 B2, filed 10 June 2013, and issued 29 March 2016.
Maev, R. Gr., A. M. Chertov, W. Perez-Regalado, A. Karloff, A. Tchipilko, P. Lichaa, D. Clement, and T. Phan. 2014. "In-line inspection of resistance spot welds for sheet metal assembly." Welding Journal 93:58–62.
Maev, R. Gr., and A. M. Chertov. 2010. Electrode cap for ultrasonic testing. US Patent 8,381,591 B2, filed 18 March 2010, and issued 26 February 2013.
Maev, R. Gr., A. Chertov, R. Scott, D. Stocco, A. Ouellette, A. Denisov, and Y. Oberdoerfer. 2021. "NDE in the automotive sector." In Handbook of Nondestructive Evaluation 4.0. Springer Nature Switzerland AG.
Meyendorf, N., L. Bond, J. Curtis-Beard, S. Heilmann, S. Pal, R. Schallert, H. Scholz, and C. Wunderlich. 2017. "NDE 4.0—NDE for the 21st century—the internet of things and cyber physical systems will revolutionize NDE." 15th Asia Pacific Conference for Non-Destructive Testing (APCNDT 2017), Singapore.
Neugebauer, R., T. Wiener, and A. Zösch. 2013. "Quality control of resistance spot welding of high strength steels." Procedia CIRP 12:139–44. https://doi.org/10.1016/j.procir.2013.09.025.
Ouellette, A., A. C. Karloff, W. Perez-Regalado, A. M. Chertov, R. G. Maev, and P. Lichaa. 2013. "Real-time ultrasonic quality control monitoring in resistance spot welding: Today and tomorrow." Materials Evaluation 71 (7).
Perez-Regalado, W., A. Ouellette, A. M. Chertov, V. Leshchynsky, and R. G. Maev. 2013. "Joining dissimilar metals: A novel two-step process with ultrasonic monitoring." Materials Evaluation 71 (7): 828–33.
Reis, F. F., V. Furlanetto, and G. F. Batalha. 2016. "Resistance spot weld in vehicle structures using dynamic resistance adaptive control." SAE Technical Paper 2016-36-0303. https://doi.org/10.4271/2016-36-0303.
Ronneberger, O., P. Fischer, and T. Brox. 2015. "U-Net: Convolutional networks for biomedical image segmentation." Medical Image Computing and Computer-Assisted Intervention (MICCAI): 234–241. https://doi.org/10.1007/978-3-319-24574-4_28.
Runnemalm, A., and A. Appelgren. 2012. "Evaluation of non-destructive testing methods for automatic quality checking of spot welds." Report. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-5578.
Shandiz, A. H., and L. Tóth. 2022. "Improved processing of ultrasound tongue videos by combining convLSTM and 3D convolutional networks." Advances and Trends in Artificial Intelligence. Theory and Practices in Artificial Intelligence: 265–274. https://doi.org/10.1007/978-3-031-08530-7_22.
Shi, X., Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-C. Woo. 2015. "Convolutional LSTM network: A machine learning approach for precipitation nowcasting." In Proceedings of the 28th International Conference on Neural Information Processing Systems: 802–810.
Summerville, C., P. Compston, and M. Doolan. 2019. "A comparison of resistance spot weld quality assessment techniques." Procedia Manufacturing 29:305–12. https://doi.org/10.1016/j.promfg.2019.02.142.
Sung Hoon, J., N. Yang Woo, Y. Sanghyun, K. Si Eun, R. Gr. Maev, A. M. Chertov, D. R. Scott, and D. Stocco. 2020. System and method for resistance spot welding control. Korea Patent 10-2166234-0000. Korean Intellectual Property Office. Filed 28 January 2020, and issued 25 August 2020.
Taheri, H., M. Gonzalez Bocanegra, and M. Taheri. 2022. "Artificial intelligence, machine learning and smart technologies for nondestructive evaluation." Sensors (Basel) 22 (11): 4055. https://doi.org/10.3390/s22114055.
Virkkunen, I., T. Koskinen, O. Jessen-Juhler, and J. Rinta-aho. 2021. "Augmented ultrasonic data for machine learning." Journal of Nondestructive Evaluation 40 (1): 4. https://doi.org/10.1007/s10921-020-00739-5.
Virtanen, P., R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, et al. 2020. "SciPy 1.0: Fundamental algorithms for scientific computing in Python." Nature Methods 17:261–72. https://doi.org/10.1038/s41592-019-0686-2.
Woo, S., J. Park, J.-Y. Lee, and I. S. Kweon. 2018. "CBAM: Convolutional block attention module." Proceedings of the European Conference on Computer Vision (ECCV): 3–19. https://doi.org/10.48550/arXiv.1807.06521.
Zamiela, C., Z. Jiang, R. Stokes, Z. Tian, A. Netchaev, C. Dickerson, W. Tian, and L. Bian. 2023. "Deep multi-modal U-Net fusion methodology of thermal and ultrasonic images for porosity." Journal of Manufacturing Science and Engineering 145 (6). https://doi.org/10.1115/1.4056873.