However, I can suggest some relevant articles and sources related to artificial intelligence and nondestructive testing that you may find useful:

1. “Applications of artificial intelligence and machine learning in non-destructive testing” by A. T. Arulrajah, P. R. W. Nadeeka, and M. A. Ali. This is an academic research paper that provides an overview of the applications of AI and machine learning in NDT.
2. “Non-Destructive Testing and Inspection Market – Growth, Trends, COVID-19 Impact, and Forecasts (2021–2026)” by Mordor Intelligence. This is a market research report that provides insights into the trends and drivers in the NDT market, including the role of AI and machine learning.
3. “Artificial Intelligence in Nondestructive Testing: A Review” by M. Elsafi, S. Saeed, and S. M. Zubair. This is another academic research paper that provides a comprehensive review of the use of AI in NDT, including the benefits and challenges.
4. “Nondestructive Testing Using Artificial Intelligence: Trends and Opportunities” by S. K. Tiwari and R. K. Jain. This is a research article that discusses the use of AI in NDT, with a focus on the potential benefits and challenges.

I hope you find these resources helpful in your research on AI and NDT.

At first, this series of responses appears appropriate and honestly impressive. However, there is one major issue if you dig deeper: none of the suggested references in the last response are actually real publications. They are all fabrications.

Why does this happen? GPT-3 is a language model that uses deep neural networks to predict the next word in a sequence of text based on the context provided by the previous words. When a prompt has no precisely matching response, the LLM will still do its best to string together the most plausible text. Unfortunately, in instances where precise responses are expected, the output sometimes turns out to be nonsense.
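The next-word mechanism described above can be illustrated with a toy sketch. This is not GPT-3's actual architecture; the probability table and fallback list are invented stand-ins for a deep network's learned distribution, meant only to show why the model emits a plausible-sounding guess even when it has no precise match:

```python
import random

# Toy stand-in for a language model's learned next-word distribution.
# These probabilities are made up for illustration; a real LLM learns
# them from billions of words via a deep neural network.
NEXT_TOKEN_PROBS = {
    "ultrasonic": {"testing": 0.7, "inspection": 0.2, "waves": 0.1},
    "testing": {"methods": 0.5, "data": 0.3, "reports": 0.2},
}

def predict_next(context):
    """Predict the next word from the last word of the context."""
    dist = NEXT_TOKEN_PROBS.get(context[-1])
    if dist is None:
        # No precise match in what the model "knows": it still emits
        # its most plausible guess -- this is how fluent but fabricated
        # text (e.g., fake references) can arise.
        return random.choice(["quality", "analysis", "results"])
    return max(dist, key=dist.get)

print(predict_next(["ultrasonic"]))  # "testing"
```

The key point of the sketch: the function always returns *something* fluent, whether or not it actually knows the answer.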
Besides fake references, poor logic and unexpected responses are an issue for AI agents in 2023. The term hallucinations has been given to such fabricated responses (Alkaissi and McFarlane 2023). This issue becomes more critical when such tools are applied to technical and scientific work, like NDT, where specific details are of the utmost importance and where we can’t afford such errors and fabrications. On NDT.net, there is a thread highlighting the bad idea of “ChatGPT3 writing your inspection procedure” (Bisle 2023). Clearly, AI agents are not ready to be given large, complicated technical writing tasks and be expected to produce error-free content. From my perspective, this is OK. NDT technicians, engineers, and researchers should lead, and be responsible for, the quality of written procedures, reports, and scientific publications.

The other issue that the aforementioned interchange tried to highlight is the lack of any means to reference and verify where such content originated. I’ve written about the benefits and risks of AI for NDT in the past and was curious where ChatGPT was getting its material. While it is impressive that such AI agents can generate articulate responses to such questions, I do see an ethical issue. If these language models are being trained on material on the order of the content of the Library of Congress, shouldn’t they do a better job of providing the source material for their responses? To some degree, the current versions of these AI tools operate like efficient plagiarism agents, which is the antithesis of quality technical and scientific writing that depends on collegial citation.

The Future

These tools have come a long way in recent years and will only get better. While ChatGPT is based on GPT-3, OpenAI recently released GPT-4, which has received many positive reviews (Metz and Collins 2023).
(While there is a monthly charge to access GPT-4 directly, Microsoft Bing Chat does provide free limited access to GPT-4 today.) There are also a number of other promising AI tools to explore today, like Google’s Bard, DeepMind Sparrow, and Amazon Titan. In terms of knowledge capability, GPT-4 has been trained to be more precise, and OpenAI claims it can score a 1300 (out of 1600) on the SAT. So, training on a wider depth of material and taking more care with content selection will help. But, to some degree, if these AI agents are trained using the broad history of human writing, all of the positives and the negatives of our writing will be baked into these algorithms. The current black-box architecture will make it challenging to eliminate false or offensive responses.

Going forward, the most effective way of using such tools will be collaboratively. This will follow our general experience with the application of AI/ML for evaluating NDT data, where maintaining a critical role for human inspectors ensures NDT data quality and helps compensate for instances of poor AI performance. (See Lindgren 2023 on page 35 in this issue for more discussion on this topic.) Workers are already finding ways to leverage these tools effectively while doing their job. In a recent survey, over 40% of Americans said they were using generative AI technology at work
(Molla 2023). While new technologies certainly can cause disruptions, they may ultimately lead to more and better-quality work, much like the impact of the personal computer or the internet. University instructors are already striving to rethink how to integrate such tools into their curricula and promote best practices (Yang 2023). It is critical to understand how to create appropriate prompts for getting the best information, while also understanding the risk and quality issues of the output.

One of the biggest issues going forward concerns plagiarism, copyright concerns for human content providers, and how this technology could be better regulated. Artists and writers are beginning to take action to defend their intellectual property from so-called “fair use” (DelSignore 2023). Daniel Gervais, a professor at Vanderbilt Law School who specializes in intellectual property law, states that it hinges on the following: “What’s the purpose or nature of the use and what’s the impact on the market” (DelSignore 2023). Basically, it comes down to how you are using the output. Is it for research or commercial purposes? If commercial, one needs to be extremely careful. These questions and concerns are going to greatly impact the future of this technology, and how widely and rapidly it will be used.

The regulation of AI is expected to evolve rapidly and must address the safe application of this technology. To date, regulation is being led by the EU and China, while the US response has been fairly limited in scope. The White House’s Blueprint for an AI Bill of Rights highlights the need for better decision-making, including explanations: “Automated systems should provide explanations that are technically valid, meaningful, and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context” (Klein 2023).
But experts generally agree we have made almost no progress on explaining what is happening inside these LLMs (Klein 2023). There is a clear need to be able to comprehensively validate AI performance, but this appears to be greatly complicated by how complex these algorithms have become. Work on Explainable AI (a set of tools that help one understand and interpret the outputs generated by ML algorithms) is progressing, but it will take time to get there.

One consideration for our community: What if we created our own NDT chatbot, let’s say residing behind the ASNT login, trained using ASNT-copyrighted materials, for example, back issues of Materials Evaluation and maybe even handbooks? Based on what GPT-4 is doing, it is clear such a tool could pass a Level III exam. If done right, this could be a valuable resource for the community. Of course, we’d have to first ensure that the answers are consistently correct, just as we have reviewers ensure our handbooks and publications are as error-free as possible. I feel the technology would also need to produce the source(s) for its answer to the user, so we have a record to check and verify that the answer is correct. If poor responses are discovered, we must also have the means of correcting them.

While we can imagine all of the positive uses for such AI agents, they can just as easily be deployed for nefarious causes today. For example, these tools will likely improve the social engineering that is being used to fleece people of personal information and money through predatory emails, robocalls, and social media. It is critical to consider the trade-offs of organizing our body of knowledge into one easily accessible place.
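One common way to build a chatbot that can cite its sources is retrieval-augmented generation: retrieve the most relevant passage from a trusted corpus, then answer from it and return the citation alongside. The sketch below is a deliberately minimal illustration of that idea; the two-document corpus, the placeholder source labels, and the word-overlap scoring are all hypothetical (a real system would use vector embeddings and an LLM over the full ASNT library):

```python
# Minimal sketch of retrieval with source attribution. The corpus
# entries below are hypothetical placeholders, not real citations;
# word-overlap scoring stands in for embedding-based retrieval.
CORPUS = [
    {"source": "Hypothetical back issue, p. 12",
     "text": "eddy current testing detects surface cracks in conductive materials"},
    {"source": "Hypothetical handbook chapter",
     "text": "ultrasonic testing measures wall thickness using sound waves"},
]

def answer_with_source(question):
    """Return the best-matching passage plus its citation for verification."""
    q_words = set(question.lower().split())
    # Score each document by how many question words it shares.
    best = max(CORPUS, key=lambda d: len(q_words & set(d["text"].split())))
    return {"answer": best["text"], "source": best["source"]}

result = answer_with_source("How does ultrasonic testing work?")
print(result["source"])  # "Hypothetical handbook chapter"
```

The design point matches the text: because every answer carries a pointer back into the reviewed corpus, a user (or reviewer) has a record to check, and a wrong answer can be traced to, and corrected in, its source document.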
Ripi Singh has some very important insight on this going forward: “The ‘Vulnerable World Hypothesis’ is a topic that deserves our undivided attention at every ASNT conference as a single body of professionals committed to Creating a Safer World!® We can start with Generative AI as the first item on the list to be addressed, now” (Singh and Garg 2023).

While I don’t have all the answers and definitely can’t predict the future, I do want to encourage more discussion and feedback on this important topic within ASNT. This topic has been brought up in the ASNT AI/ML Committee recently, and we plan to explore possible guidance for the use of generative AI in NDT going forward. (As well, please feel free to share your thoughts with me at aldrin@computationaltools.com or get involved with the ASNT AI/ML Committee.)

AUTHOR
John C. Aldrin: Computational Tools, Gurnee, Illinois 60031, USA; aldrin@computationaltools.com

CITATION
Materials Evaluation 81 (7): 28–34
https://doi.org/10.32548/2023.me-04361
©2023 American Society for Nondestructive Testing

REFERENCES
Alkaissi, H., and S. I. McFarlane. 2023. “Artificial hallucinations in ChatGPT: Implications in scientific writing.” Cureus 15 (2). https://doi.org/10.7759/cureus.35179.
Bisle, W. 2023. “ChatGPT3 writing your inspection procedure?” NDT.net forum. 29 January 2023. https://www.ndt.net/forum/thread.php?msgID=84722.
DelSignore, P. 2023. “AI and the copyright problem: Making sense of generative AI copyright issues.” Medium. 4 March 2023. https://medium.com/geekculture/ai-and-the-copyright-problem-97da479a9ccd.
Kim, S.-G. 2023. “Using ChatGPT for language editing in scientific articles.” Maxillofacial Plastic and Reconstructive Surgery 45. https://doi.org/10.1186/s40902-023-00381-x.