Generative AI deployment poses a few additional challenges and vulnerabilities, such as:

– Instrumental convergence. This posits that an intelligent agent (human, non-human, or machine) with unbounded but apparently harmless goals can act in surprisingly harmful ways as it begins to pursue instrumental goals: goals pursued not for their own sake, but because they are believed to be necessary or useful steps toward some other desired outcome.

– Moloch effect. This is a game-theoretic concept characterized by the relentless pursuit of efficiency and optimization at the expense of human values and well-being. In modern society, it takes the form of a hyper-competitive global economy, where individuals and institutions are driven to maximize their productivity and profits, often at the expense of the environment, social justice, and individual freedoms.

– Bias. Almost all AI is biased, by the quality and quantity of its data as well as by its algorithms. Bias that reinforces bias, pushing outputs toward extremes and rendering results based on one's preferences, makes it particularly dangerous.

Up Until Now

We have been viewing generative AI as another tool that we can harness for productivity, comfort, and solving challenging scientific problems. We have held the viewpoint that AI will not replace your job, but the person using it will. Several diverse use cases that emerged with ChatGPT substantiate this viewpoint at the current state of technology.

However, we know technology is not static. Futurists, thought leaders, security marshals, and even fiction writers are showing us all sorts of possible scenarios. Crafted videos already show the dark side of innovation. The discussions on social media are raising additional questions and concerns: Where is it going? Can it take over humanity? Should we pause AI development for a few months and let the regulations catch up?

The true challenge in our current situation is that we have:

– no precedent to follow,
– no regulation to comply with, and
– tremendous opportunities motivating its use.

And this is compounded by the speed of innovation and the possibility of a multiplying effect when combined with other digital technologies such as IoT, 3D printing, and extended reality.

Outlook

Since there is no direct precedent, the question is: Can we learn from similar developments in the past? It turns out we might be able to, albeit with significant additional challenges. Here are a few to think about:

Should we treat it like nuclear energy?

Bill Gates believes⁶ that "AI is like nuclear energy—both promising and dangerous." Elon Musk is convinced⁷ that it is far more dangerous than nukes. There is little doubt that it can be easily weaponized and deployed by individuals or small groups, causing widespread destruction. This is clearly a Type-I vulnerability: "Easy Nukes."

The way to address this is through international norms, agreements, and regulatory frameworks to guide the responsible development and deployment of AI technologies, including collaboration between governments, industry, and academia to address AI safety concerns. We should not wait for a digital Hiroshima to happen. Is it time to put an "Artificial General Intelligence (AGI) Nonproliferation Treaty" in place?

The challenge is that, compared to nuclear technology, AI is much harder to enforce, as there is hardly any barrier to entry to the AI development world. Also, an AGI proliferates on its own, as part of its instrumental goals.

Should we treat it like publishing or the World Wide Web?
The paper publishing industry was the first disruption of the information sector, permitting rapid spread of knowledge across the globe through affordable paper copies of original manuscripts. Then came the internet, which made large amounts of information searchable and instantly accessible around the globe. Generative AI is taking this to the next level, democratizing knowledge, not just information. Generative AI combined with social media has the potential to create fakes indistinguishable from reality, with the potential to confuse and misguide the masses. This is a Type-II vulnerability: "Sensitive Innovation."

The way to address this is by encouraging transparency in AI development and implementation, as well as creating systems of accountability to ensure that AI systems are developed and used in ways that align with human values, intellectual property rights, and data sovereignty.

The challenge is that publishing was a standalone phenomenon with a high degree of traceability and no direct physical impact, whereas AI can interact with so many other technologies, diluting any accountability and traceability efforts while simultaneously amplifying its influence through control of physical devices and equipment. An AGI is an independent agent, after all.

Should we treat it like fossil fuels?

Fossil fuels revolutionized mobility and shrank the world. But over time, they have significantly contributed to climate change. This is the class of innovation that poses risks accumulating over time, risks that could lead to long-term harm or degradation of our environment, society, or global stability.
This is a Type-III vulnerability: "Gradually Destructive."

The way to address this is through technological resilience from the beginning: encouraging and funding research and development into technologies that can counter or mitigate the risks posed by other potentially harmful technologies, developing methods for verifying AI behavior, and ensuring the long-term stability of AI systems.

The challenge is that, compared to fossil fuels, the speed of change is three orders of magnitude faster, closer to that of nuclear energy.

Should we treat it like human cloning?

Human cloning is the process of creating a genetically identical copy of a human being. It is a highly controversial topic, both ethically and scientifically, raising several difficult questions about the nature of human identity and the role of science in shaping human life. As a result, human cloning is currently illegal in many countries around the world. In some respects, a combination of an AGI and a robot could be as useful, or as deadly, as a cloned human if it becomes misaligned with human values or acts autonomously in ways that could be detrimental to humanity. This is a Type-IV vulnerability: "Unforeseen Risks."

The way to address this is through raising awareness about AI safety and its implications among the public, policymakers, and industry leaders, while promoting education and training in AI and related disciplines to foster a knowledgeable and responsible workforce. This vulnerability supports the recent effort to put a hold on AI development.

The challenge, once again, is enforcement, given the low barrier to entry. In fact, if it gets into the dark web, it could be even worse. (Maybe it already is.) However, on a positive note, human cloning is one of our success stories: we vanquished Moloch and were able to ban cloning throughout the world.

Should we treat it like humanity's child?

Every analogy with technological innovation seems to provide some learning, yet poses a different set of challenges due to the speed and ease of AI development. We may have to combine all of them and still face unforeseen risks. How about a look at nature? When we raise a child, we instill certain values, morals, and discipline. If we do a good job, the children will take care of us when they become strong and we grow old. AI could be like that. When it gets stronger and smarter than humans, it will treat us based on how we groom it. Once again, the speed and spread are unbounded. This requires humanity to behave like a single parent, collaborating and self-regulating in our shared home, called Earth.

The challenge is twofold. First, bias seeps into this child's cultural fabric, with millions of teachers and parents trying to impose their own world experience. Unlike a human, the child can remember vast amounts of generationally collected history, which will further strengthen the bias. There is no known way in the current models to prune, as nature does with the cycle of life and death. This rapidly growing child will become an immortal thing, bringing another round of unforeseen risks. Second, Moloch-effect forces drive personal gain over restraint for the greater good, even though everyone knows that when everyone pursues it, no one wins. It is the famous prisoner's dilemma playing out at the civilizational scale, with the Nash equilibrium being catastrophic, as the sketch below illustrates.
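To make the game-theoretic claim concrete, here is a minimal Python sketch of the dilemma as a two-player game. The strategy names ("restrain" versus "race") and the payoff values are hypothetical, chosen only to satisfy the standard prisoner's-dilemma ordering, not drawn from any real analysis.

# Illustrative two-player prisoner's dilemma for the AI race.
# Payoff tuples are (player 0, player 1); higher is better.
payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: safe, shared benefit
    ("restrain", "race"):     (0, 5),  # the restrainer loses out to the racer
    ("race",     "restrain"): (5, 0),
    ("race",     "race"):     (1, 1),  # everyone races: worst stable outcome
}

strategies = ["restrain", "race"]

def best_response(opponent_move, player):
    """Return the strategy maximizing this player's payoff
    given the opponent's fixed move."""
    def payoff(my_move):
        pair = ((my_move, opponent_move) if player == 0
                else (opponent_move, my_move))
        return payoffs[pair][player]
    return max(strategies, key=payoff)

# A strategy pair is a Nash equilibrium when each move is a best
# response to the other: no player gains by deviating alone.
for a in strategies:
    for b in strategies:
        if best_response(b, 0) == a and best_response(a, 1) == b:
            print(f"Nash equilibrium: ({a}, {b}), payoffs {payoffs[(a, b)]}")
# Prints: Nash equilibrium: (race, race), payoffs (1, 1)

Running it finds a single equilibrium, (race, race), even though (restrain, restrain) would leave both players better off. That gap between the individually rational outcome and the collectively good one is precisely the Moloch dynamic described above.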
In the meantime, should we protect our family?

One might think that we should put a solid bar on the doors while the horse is still in, but we don't know how many barns are breeding new horses, as the barrier to entry is so low. Perhaps the way to look at it is "gated communities," "passport control," or a "cyber-firewall," where you control what gets into your own protected zone for your safety and security. As ASNT, we can consider how far we allow AI to become a part of the inspection ecosystem that helps us assure the quality and safety of critical infrastructure. This professional society, with its body of knowledge, is quite capable of regulating what becomes a tool, method, process, or guidance. Now is the time to pay attention to AI and debate how to nurture this baby.

Call to Action

The "Vulnerable World Hypothesis" is a topic that deserves our undivided attention across various sectors and communities, now. Initiating collaborations between industry, academia, and policymakers to address AI safety concerns and enhance our regulatory frameworks will help in developing a responsible approach to AI innovations. Not only should ASNT conferences offer a platform for these discussions, but other organizations and events should also prioritize AI safety and its implications. This resonates with ASNT's purpose: Creating a Safer World!®

AUTHOR'S NOTE ON USE OF AI FOR THIS ARTICLE

AI was not used to create this perspective or its content. Once the article was finalized, the authors used GPT-4 to review it using these prompts. System prompt: "You are the editor of a reputed industry magazine. There is special technical issue coming up, focusing on AI." User prompt: "Evaluate the following outlook article as the chief editor of the magazine." The feedback was overwhelmingly positive, with suggestions to (a) modify the title, (b) incorporate examples of AI, (c) expand the call to action, and (d) consider adding a conclusion section. AI also suggested revised sentences. We incorporated the first three suggestions, including the current title, as suggested by GPT-4. Outlook articles are meant to be forward looking with an open-ended perspective, without drawing a conclusion, so we left that one out. Once again, this demonstrates the need for and power of collaborating with AI. A word of caution: we were able to use AI to review this opinion article; however, we are not sure that AI can be used to review a research paper discussing breakthroughs in science for journal publication.
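For readers who want to try this kind of review step themselves, here is a minimal sketch using the OpenAI Python client. The note above does not say which interface the authors used, so the API call, the model name, and the input file name are our assumptions for illustration; the two prompts are quoted verbatim from the note.

# Minimal sketch of the review step described above, using the OpenAI
# Python client (pip install openai). The API call, model name, and
# file name are illustrative assumptions; the prompts are verbatim.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

article_text = open("outlook_article.txt").read()  # hypothetical file name

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are the editor of a reputed industry magazine. "
                    "There is special technical issue coming up, focusing on AI."},
        {"role": "user",
         "content": "Evaluate the following outlook article as the chief "
                    "editor of the magazine.\n\n" + article_text},
    ],
)
print(response.choices[0].message.content)  # the editorial feedback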