Materials Evaluation, July 2020, p. 877

materials at the radii are also observed in the TOF map in Figure 4. In addition, there are options to add uncalled indications as missed calls into the ADA report with comments. Lastly, features are also provided to support verification of calibration scans, detecting the file and matching the indication calls with expected calibration results. During this development program, feedback from the team and end users was critical in delivering the necessary capability.

Ensuring NDT 4.0 Reliability

A helpful model to represent the reliability of an inspection system has been introduced by Müller and others (Müller et al. 2000, 2014). The total reliability of an NDT system is defined by the intrinsic capability of the system, which provides an upper bound for the technique under ideal conditions, with contributions from application parameters, the effect of human factors, and organizational context that can degrade performance. While NDT 4.0 systems are expected to enhance POD performance through improved human factors (supporting ease of use) and repeatability in making complex calls with varying application parameters, NDT 4.0 capability must still be evaluated. NDT techniques, whether incorporating AI algorithms, manual inspector data review, or a mixed IA-based approach, require validation through a POD evaluation. Comprehensive POD evaluation procedures (US DOD 2009; ASTM 2015) have been developed to validate the reliability of NDT techniques, regardless of how the indication call is made. In prior work, a POD study was conducted to evaluate the capability of an ADA algorithm to detect cracks around holes in vertical riser aircraft structures (Aldrin et al. 2001). In the study, an ADA approach incorporating neural networks was compared with manual data review by inspectors.
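The hit/miss POD evaluation described above can be illustrated with a minimal sketch: fitting a logistic POD(a) curve to binary hit/miss calls versus flaw size by maximum likelihood, in the style of standard POD practice. The `fit_pod` function, the gradient-ascent settings, and the flaw-size data below are illustrative assumptions, not values from the cited study.

```python
import math

def fit_pod(sizes, hits, lr=0.5, iters=20000):
    """Fit POD(a) = 1/(1 + exp(-(b0 + b1*ln a))) to hit/miss data by
    maximum likelihood using plain gradient ascent (illustrative only)."""
    b0 = b1 = 0.0
    n = len(sizes)
    for _ in range(iters):
        g0 = g1 = 0.0
        for a, y in zip(sizes, hits):
            x = math.log(a)
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient of log-likelihood w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope on ln(size)
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def pod(a, b0, b1):
    """Estimated probability of detection for flaw size a."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * math.log(a))))

# Hypothetical hit/miss calls versus flaw size (mm) -- not data from the study
sizes = [0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0]
hits  = [0,   0,   0,   1,   0,   1,   1,   1,   1,   1]

b0, b1 = fit_pod(sizes, hits)
a50 = math.exp(-b0 / b1)  # flaw size at 50% POD
```

The same fitted curve can be interrogated for quantities such as a90 (the size detected with 90% probability), which is the figure of merit typically extracted from such studies.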
Results demonstrated that the automated neural network approach significantly improved detectability, false call rate, and inspection time relative to manual data interpretation (Aldrin et al. 2001). Other recent studies have also addressed the role of POD evaluation when human factors are involved (Bato et al. 2017). The greatest challenge with validation of NDT algorithms is ensuring that the algorithm is not overtrained but can handle the variability of practical NDT measurements outside of the laboratory. Testing algorithms with samples that are independent of the training data is critical. Model-assisted approaches for training (Fuchs et al. 2019) and validation (Aldrin et al. 2016b) will help provide a diversity of conditions beyond what is practical and cost-effective with experimental data alone. Because of these challenges, properly validating NDT techniques using IA is expected to be far easier than validating a purely AI-based technique. In the case of self-driving car technology, augmenting the driver experience with collaborative safety systems is much more straightforward to validate than fully validating an AI-only self-driving car. Recent accidents during the testing of self-driving cars indicate the care that is needed to properly and safely validate such fully automated systems when lives are at stake. Lastly, at this early stage in the application of AI and IA, there are currently no certification requirements for people who design and/or train such algorithms. However, as the field matures, such best practices should be shared throughout the community and included in accredited training programs. Over time, the potential value of implementing certification programs should be considered, possibly under the umbrella of NDT engineering.
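The need for independent test samples can be made concrete with a small sketch: scoring a detection algorithm separately on training and held-out data, where a large drop in detection rate on the independent set is a warning sign of overtraining. The `detector` callable, the sample structure, and the threshold value are hypothetical, used only to illustrate the bookkeeping.

```python
def evaluate(detector, samples):
    """Score a detection call on labeled samples: returns
    (detection rate on flawed samples, false call rate on clean samples)."""
    flawed = [s for s in samples if s["has_flaw"]]
    clean = [s for s in samples if not s["has_flaw"]]
    detection_rate = sum(1 for s in flawed if detector(s["signal"])) / len(flawed)
    false_call_rate = sum(1 for s in clean if detector(s["signal"])) / len(clean)
    return detection_rate, false_call_rate

def generalization_gap(detector, train_samples, test_samples):
    """Drop in detection rate from training data to independent test data;
    a large positive gap is a warning sign of overtraining."""
    det_train, _ = evaluate(detector, train_samples)
    det_test, _ = evaluate(detector, test_samples)
    return det_train - det_test

# Hypothetical amplitude-threshold call and toy labeled samples
detector = lambda signal: signal > 0.5
train = [{"signal": 0.9, "has_flaw": True},  {"signal": 0.8, "has_flaw": True},
         {"signal": 0.2, "has_flaw": False}, {"signal": 0.1, "has_flaw": False}]
test  = [{"signal": 0.6, "has_flaw": True},  {"signal": 0.4, "has_flaw": True},
         {"signal": 0.3, "has_flaw": False}, {"signal": 0.7, "has_flaw": False}]
```

In practice the independent set would come from different specimens, operators, and instruments than the training data, so that the gap reflects real-world variability rather than a random split of one laboratory data set.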
Conclusions and Recommendations

While an increasing use of automation and algorithms in NDT is expected over time, NDT inspectors will play a critical role in ensuring NDT 4.0 reliability. As a counterpoint to AI, IA was presented as an effective use of information technology to enhance human intelligence. Based on prior experience, this paper introduces a series of best practices for IA in NDT, highlighting how the operator should ideally interface with NDT data and algorithms. Algorithms clearly have great potential to help alleviate the burden of big data in NDT; however, it is important that operators are involved in both secondary indication review and the detection of rare-event indications not addressed well by typical algorithms. In addition, IA provides more flexibility with the application of AI. When applications are not a perfect fit for existing AI algorithms, a human user can adapt and leverage the benefits of AI appropriately. Future work should continue to address the validation of NDT techniques that leverage both humans and algorithms for data review and investigate appropriate process controls and software design to ensure optimal performance. Currently, AI algorithms are being developed primarily by engineers to perform very specific tasks, but there may soon come a time when AI tools are more adaptive and offer collaborative training. It is important for adaptive AI algorithms to maintain a core competency while also providing flexibility and learning capability. Care must be taken to avoid having an algorithm “evolve” to a poorer level of practice due to bad data, inadequate guidance, or deliberate sabotage. As with computer viruses today, proper design practices and FMEA are needed to ensure such algorithms are robust to varying conditions.
It is important to design these systems to periodically perform self-checks on standard data sets, similar to how inspectors must verify NDT systems and transducers using standardization procedures or periodically perform NDT examinations. Lastly, NDT 4.0–connected initiatives such as digital threads and digital twins are examples of how material systems can be better managed in the future (Kobryn et al. 2017; Lindgren 2017). The digital thread provides a means to track all digital information regarding the manufacturing and sustainment of a component and system, including the material state and any variance from original design parameters. The digital
twin concept provides a digital equivalent of a system and exercises the digital twin model through various use scenarios to evaluate individual performance and forecast possible emerging maintenance issues. NDT 4.0 systems are critical to achieving these digital thread and digital twin concepts, enabling an evolution in knowledge management for end users.

ACKNOWLEDGMENTS

The author would like to acknowledge support for portions of this work from the Air Force Research Laboratory (AFRL) under SBIR Phase II Contract FA8650-13-C-5180 and under Research Initiatives for Materials State Sensing (RIMSS) II. I would like to thank Eric Lindgren and John Welter of the AFRL and David Forsyth of TRI/Austin for their helpful technical discussions.

REFERENCES

Aldrin, J.C., J.D. Achenbach, G. Andrew, C. P’an, B. Grills, R.T. Mullis, F.W. Spencer, and M. Golis, 2001, “Case Study for the Implementation of an Automated Ultrasonic Technique to Detect Fatigue Cracks in Aircraft Weep Holes,” Materials Evaluation, Vol. 59, No. 11, pp. 1313–1319.

Aldrin, J.C., C.V. Kropas-Hughes, J. Knopp, J. Mandeville, D. Judd, and E. Lindgren, 2006, “Advanced Echo-Dynamic Measures for the Characterisation of Multiple Ultrasonic Signals in Aircraft Structures,” Insight, Vol. 48, No. 3, pp. 144–148.

Aldrin, J.C., D.S. Forsyth, and J.T. Welter, 2016a, “Design and Demonstration of Automated Data Analysis Algorithms for Ultrasonic Inspection of Complex Composite Panels with Bonds,” 42nd Annual Review of Progress in Quantitative Nondestructive Evaluation, AIP Conference Proceedings, Vol. 1706, No. 1, p. 020006.

Aldrin, J.C., C. Annis, H.A. Sabbagh, and E.A.
Lindgren, 2016b, “Best Practices for Evaluating the Capability of Nondestructive Evaluation (NDE) and Structural Health Monitoring (SHM) Techniques for Damage Characterization,” 42nd Annual Review of Progress in QNDE, Incorporating the 6th European-American Workshop on Reliability of NDE, AIP Conference Proceedings, Vol. 1706, p. 200002.

Aldrin, J.C., E.K. Oneida, E.B. Shell, H.A. Sabbagh, E. Sabbagh, R.K. Murphy, S. Mazdiyasni, E.A. Lindgren, and R.D. Mooers, 2017, “Model-Based Probe State Estimation and Crack Inverse Methods Addressing Eddy Current Probe Variability,” 43rd Annual Review of Progress in QNDE, AIP Conference Proceedings, Vol. 1806, No. 1, p. 110013.

Aldrin, J.C., and E.A. Lindgren, 2018, “The Need and Approach for Characterization - US Air Force Perspectives on Materials State Awareness,” 44th Annual Review of Progress in QNDE, AIP Conference Proceedings, Vol. 1949, No. 1, p. 020004.

Aldrin, J.C., E.A. Lindgren, and D. Forsyth, 2019, “Intelligence Augmentation in Nondestructive Evaluation,” 45th Annual Review of Progress in QNDE, AIP Conference Proceedings, Vol. 2012, No. 1, p. 020028.

Avatar Partners, 2017, “Vuforia Model Targets Application in Aircraft Maintenance,” available at vuforia.com/case-studies/avatar-partners.html.

ASTM, 2015, ASTM E3023-15, Standard Practice for Probability of Detection Analysis for â Versus a Data, ASTM International, West Conshohocken, PA.

Bainbridge, L., 1987, “Ironies of Automation,” in New Technology and Human Error, J. Rasmussen, K. Duncan, and J. Leplat (eds.), John Wiley & Sons, Chichester, UK, pp. 271–283.

Bato, M.R., A. Hor, A. Rautureau, and C. Bes, 2017, “Implementation of a Robust Methodology to Obtain the Probability of Detection (POD) Curves in NDT: Integration of Human and Ergonomic Factors,” Les Journées COFREND 2017, 30 June–1 July, Strasbourg, France.
Bertović, M., 2016a, “Human Factors in Non-Destructive Testing (NDT): Risks and Challenges of Mechanised NDT,” Dissertation, Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin, Germany.

Bertović, M., 2016b, “A Human Factors Perspective on the Use of Automated Aids in the Evaluation of NDT Data,” 42nd Annual Review of Progress in Quantitative Nondestructive Evaluation, AIP Conference Proceedings, Vol. 1706, No. 1, p. 020003.

Case, N., 2018, “How to Become a Centaur,” Journal of Design and Science, doi: 10.21428/61b2215c.

Cowen, T., 2013, Average Is Over: Powering America Beyond the Age of the Great Stagnation, Penguin Group, New York, NY.

Dudenhoeffer, D.D., D.E. Holcomb, B.P. Hallbert, R.T. Wood, L.J. Bond, D.W. Miller, J.M. O’Hara, E.L. Quinn, H.E. Garcia, S.A. Arndt, and J. Naser, 2007, “Technology Roadmap on Instrumentation, Control, and Human-Machine Interface to Support DOE Advanced Nuclear Energy Programs,” Report No. INL/EXT-06-11862, Idaho National Laboratory, Idaho Falls, ID.

Forsyth, D., J.C. Aldrin, and C.W. Magnuson, 2018, “Turning Nondestructive Testing Data into Useful Information,” Aircraft Airworthiness & Sustainment Conference, 23–26 April, Jacksonville, FL.

Fuchs, P., T. Kröger, T. Dierig, and C.S. Garbe, 2019, “Generating Meaningful Synthetic Ground Truth for Pore Detection in Cast Aluminum Parts,” Proceedings of Conference on Industrial Computed Tomography (iCT2019), Padua, Italy.

Fukushima, K., and S. Miyake, 1982, “Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Visual Pattern Recognition,” Competition and Cooperation in Neural Nets, pp. 267–285, Springer-Verlag, Berlin/Heidelberg, Germany.

Gerbert, P., 2018, “AI and the ‘Augmentation’ Fallacy,” MIT Sloan Management Review, available at sloanreview.mit.edu/article/ai-and-the-augmentation-fallacy/.
Hao, K., 2019, “When Algorithms Mess Up, the Nearest Human Gets the Blame,” MIT Technology Review, available at technologyreview.com/s/613578/ai-algorithms-liability-human-blame/.

Hinton, G.E., S. Osindero, and Y.-W. Teh, 2006, “A Fast Learning Algorithm for Deep Belief Nets,” Neural Computation, Vol. 18, No. 7, pp. 1527–1554.

Jahanzaib, I., and J. Jasperneite, 2013, “Scalability of OPC-UA Down to the Chip Level Enables ‘Internet of Things’,” Proceedings of the 11th IEEE International Conference on Industrial Informatics (INDIN), Bochum, Germany, pp. 500–505.

Jordon, H., 2018, “AFRL Viewing Aircraft Inspections through the Lens of Technology,” available at wpafb.af.mil/news/article-display/article/1603494/afrl-viewing-aircraft-inspections-through-the-lens-of-technology.

Kobryn, P., E. Tuegel, J. Zweber, and R. Kolonay, 2017, “Digital Thread and Twin for Systems Engineering: EMD to Disposal,” 55th AIAA Aerospace Sciences Meeting, 9–13 January, Grapevine, TX.

LeCun, Y., Y. Bengio, and G. Hinton, 2015, “Deep Learning,” Nature, Vol. 521, No. 7553, pp. 436–444.

Lewis-Kraus, G., 2016, “The Great A.I. Awakening,” The New York Times Magazine, available at nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html.

Lindgren, E.A., 2017, “Opportunities for Nondestructive Evaluation: Quantitative Characterization,” Materials Evaluation, Vol. 75, No. 7, pp. 862–869.

Lindgren, E.A., J.R. Mandeville, M.J. Concordia, T.J. MacInnis, J.J. Abel, J.C. Aldrin, F. Spencer, D.B. Fritz, P. Christiansen, R.T. Mullis, and R. Waldbusser, 2005, “Probability of Detection Results and Deployment of the Inspection of the Vertical Leg of the C-130 Center Wing Beam/Spar Cap,” 8th Joint DoD/FAA/NASA Conference on Aging Aircraft, 31 January–3 February, Palm Springs, CA.

Link, R., and N. Riess, 2018, “NDT 4.0 Significance and Implications to NDT Automated Magnetic Particle Testing as an Example,” 12th European Conference on Non-Destructive Testing (ECNDT 2018), 11–15 June, Gothenburg, Sweden.
Meier, J., I. Tsalicoglou, and R. Mennicke, 2017, “The Future of NDT with Wireless Sensors, AI and IoT,” 15th Asia Pacific Conference for Non-Destructive Testing, 13–17 November, Singapore.