UNCERTAINTY ANALYSIS AND
QUANTIFICATION FOR NONDESTRUCTIVE
EVALUATION
BY ZI LI AND YIMING DENG
Uncertainty analysis and quantification (UA&UQ) are redefining NDE—
enhancing reliability, enabling predictive maintenance, and advancing
automation through AI, digital twins, and real-time data for smarter,
safer, and more informed inspections.
Introduction
Nondestructive evaluation (NDE) is
widely used in aerospace, nuclear
energy, transportation, and civil infra-
structure, and is essential for structural
health monitoring, quality assurance,
and predictive maintenance [1, 2], while
ensuring and improving the safety, reli-
ability, and cost efficiency of complex
systems, materials, and infrastructure.
However, no NDE inspection is perfect.
Factors such as sensor imperfections,
simplified models, and uncontrolled
inspection conditions introduce devi-
ations between the ideal and the
measured response, deviations collectively referred to as uncertainty [3]. Generally, uncertainty
encompasses all known/unknown errors
and variations in the inspection process,
from data collection through final inter-
pretation. Since NDE is safety-critical, if
uncertainty is not properly addressed, it
can produce false positives (overestimat-
ing damage) or false negatives (missing
critical discontinuities), potentially
leading to unnecessary repairs or to critical damage going undetected until later [4, 5].
The performance of every NDE inspection is affected by many uncertainty
factors. For example, poor surface con-
ditions—such as rust, paint, or machin-
ing marks—can scatter or weaken the
inspection signal, hiding shallow discon-
tinuities or producing false indications
[6]. In addition, the choices of signal
processing methods may further amplify
noise or suppress weak echoes, dis-
torting the discontinuity response and
affecting decision-making [7]. Additional
uncertainty may arise from such factors
as equipment aging, probe geometry,
operator expertise, discontinuity mor-
phology, material properties, and envi-
ronmental variability. These factors can
distort signals, reducing confidence
in inspection results and collectively
increasing operational risks [3, 8].
Uncertainty analysis and quantifi-
cation (UA&UQ) provide a systematic
approach to identifying and managing
these uncertainties. Traditional UQ
methods for NDE rely on statistical tech-
niques such as probability-of-detection
(POD) curves, sizing-uncertainty
bounds, and risk-informed inspection
criteria. These are applied to yield con-
fidence bounds or intervals for reliable
discontinuity detectability and sizing
accuracy across diverse NDE applica-
tions [9]. Recent advancements have
extended beyond these conventional
approaches, improving the robustness,
interpretability, and applicability of UQ
in more complex inspection scenarios.
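To make the traditional approach concrete, the sketch below fits a hit/miss POD curve to synthetic inspection data and extracts a90 together with a bootstrap a90/95 value. The log-logistic model, the synthetic data, and the bootstrap settings are illustrative assumptions, not a reproduction of any particular published procedure.

```python
# A minimal sketch of hit/miss POD analysis on synthetic data, assuming a
# standard log-logistic POD model; crack sizes, parameters, and bootstrap
# settings are illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Synthetic inspection data: crack sizes (mm) and hit/miss outcomes.
sizes = rng.uniform(0.2, 3.0, 200)
true_pod = expit(4.0 * (np.log(sizes) - np.log(0.8)))
hits = rng.binomial(1, true_pod)

def neg_log_likelihood(theta, a, y):
    """Negative log-likelihood of a log-logistic POD model."""
    beta0, beta1 = theta
    p = expit(beta0 + beta1 * np.log(a))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def fit_a90(a, y):
    """Fit the POD curve and return a90, the size detected with 90% POD."""
    res = minimize(neg_log_likelihood, x0=[0.0, 1.0], args=(a, y))
    beta0, beta1 = res.x
    # Solve expit(beta0 + beta1 * log(a90)) = 0.9 for a90.
    return np.exp((np.log(0.9 / 0.1) - beta0) / beta1)

a90 = fit_a90(sizes, hits)

# Bootstrap the inspections to approximate a90/95: the size detectable
# with 90% POD at 95% confidence.
boot = []
for _ in range(500):
    idx = rng.integers(0, len(sizes), len(sizes))
    boot.append(fit_a90(sizes[idx], hits[idx]))
a90_95 = np.percentile(boot, 95)

print(f"a90 = {a90:.2f} mm, a90/95 = {a90_95:.2f} mm")
```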
Aldrin et al. [10] developed and
validated a model-based inversion
method to estimate crack length and
depth at multilayer fastener sites using
bolt-hole eddy current (BHEC) testing,
along with guidance on calculating the
95% lower uncertainty bound (LUS) to
support safety assessments. Knott et al.
[11] expanded traditional POD analysis
by creating a structured framework that
accounts for added uncertainties such
as material type, discontinuity shape,
gain settings, and inspector variability,
offering a clearer, step-by-step alterna-
tive to the standard transfer function
method. Additionally, statistical methods
are being applied to uncertainty quanti-
fication; for example, one approach uses
a numerical inversion algorithm to prob-
abilistically estimate fatigue crack size
from eddy current signals, incorporating
both the estimated value and its uncer-
tainty [12].
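The sketch below illustrates the general idea of probabilistic inversion in a minimal form: a hypothetical forward model maps crack depth to an eddy current signal amplitude, and Bayes' rule turns one noisy measurement into a posterior estimate with a credible interval. The forward model, noise level, and prior are assumptions made for illustration and are not taken from the referenced study.

```python
# A minimal sketch of probabilistic crack-size inversion: a hypothetical
# forward model plus a Gaussian likelihood yields a posterior over crack
# depth on a grid. All numbers are illustrative assumptions.
import numpy as np

def forward_model(depth_mm):
    """Hypothetical eddy current response (a.u.) as a function of crack depth."""
    return 0.15 * depth_mm + 0.02 * depth_mm**2

measured = 0.48          # observed signal amplitude (a.u.)
sigma_noise = 0.03       # assumed measurement noise std (a.u.)

# Uniform prior over plausible crack depths and a Gaussian likelihood.
depths = np.linspace(0.1, 5.0, 2000)
prior = np.ones_like(depths)
likelihood = np.exp(-0.5 * ((measured - forward_model(depths)) / sigma_noise) ** 2)

# Discrete posterior over the grid (normalized to sum to one).
posterior = prior * likelihood
posterior /= posterior.sum()

# Report the estimate together with its uncertainty (95% credible interval).
mean_depth = np.sum(depths * posterior)
cdf = np.cumsum(posterior)
lo = depths[np.searchsorted(cdf, 0.025)]
hi = depths[np.searchsorted(cdf, 0.975)]
print(f"depth = {mean_depth:.2f} mm (95% credible interval {lo:.2f} to {hi:.2f} mm)")
```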
Recent AI-enabled NDE systems
have boosted automation but also
introduced new uncertainty sources,
such as data-driven uncertainty from
limited or biased training sets and model
uncertainty stemming from the opaque
decision boundaries of deep networks
[13]. To mitigate these challenges, hybrid
UQ approaches are proposed, which
integrate classical probabilistic and
statistical tools with physics-informed
constraints to improve uncertainty
management in NDE applications and
provide more reliable discontinuity-
sizing metrics [14].
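As one concrete way of exposing the model uncertainty of a deep network, the sketch below applies Monte Carlo dropout (one of the mitigation strategies listed later in Table 1) to a toy defect classifier. The architecture, the input feature vector, and the number of stochastic forward passes are placeholders, not a published NDE pipeline.

```python
# A minimal sketch of Monte Carlo dropout for an AI-based NDE classifier,
# assuming a generic PyTorch model; everything here is a toy placeholder.
import torch
import torch.nn as nn

class DefectClassifier(nn.Module):
    """Toy classifier mapping a 32-point signal feature vector to a defect probability."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(32, 64), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

model = DefectClassifier()
model.train()  # keep dropout active at inference time (Monte Carlo dropout)

signal = torch.randn(1, 32)          # placeholder measurement features
with torch.no_grad():
    samples = torch.stack([model(signal) for _ in range(100)])

mean_prob = samples.mean().item()    # predictive defect probability
std_prob = samples.std().item()      # spread approximates model (epistemic) uncertainty
print(f"P(defect) = {mean_prob:.2f} +/- {std_prob:.2f}")
```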
With the adoption and prevalence
of digital transformation in NDE—also
known as NDE 4.0—AI-driven automa-
tion, real-time sensor networks, and
digital twin technologies are transform-
ing inspection processes. However,
these advancements also create a need
for standardized UA&UQ frameworks.
This tutorial presents a comprehen-
sive review of UA&UQ methodologies,
including probabilistic, statistical, simu-
lation-based, and AI-driven approaches.
It examines real-time UQ, digital twins,
and autonomous inspection systems
while exploring their practical applica-
tions across various NDE techniques and
industries.
Sources of Uncertainty in NDE
In engineering and science disciplines,
uncertainty is generally classified into
two broad categories: aleatoric and
epistemic uncertainties. Aleatoric
uncertainty, also known as stochastic
uncertainty, represents unknowns that
differ each time the same experiment
is performed. In contrast, epistemic
uncertainty, also known as systematic
uncertainty, originates from incomplete knowledge of the NDE measurement process.
This classification is foundational in
NDE-related uncertainty, with alea-
toric uncertainty often quantified using
probabilistic methods, and epistemic
uncertainty addressed through Bayesian
approaches or enhanced data collection
[13, 14].
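A minimal numerical illustration of this split, assuming a synthetic repeated-measurement scenario, is sketched below: the scatter between nominally identical inspections plays the role of aleatoric uncertainty, while the uncertainty of the estimated mean response shrinks as more data are collected, mirroring the epistemic component.

```python
# A minimal numerical illustration of the aleatoric/epistemic split, using
# synthetic repeated measurements; the signal model and noise level are
# assumed purely for demonstration.
import numpy as np

rng = np.random.default_rng(1)

true_amplitude = 1.00   # unknown "ideal" response of the test piece
noise_std = 0.05        # random scatter between nominally identical inspections

for n_repeats in (5, 50, 500):
    measurements = true_amplitude + rng.normal(0.0, noise_std, n_repeats)

    aleatoric = measurements.std(ddof=1)            # scatter that persists per inspection
    epistemic = aleatoric / np.sqrt(n_repeats)      # uncertainty of the estimated mean

    print(f"n={n_repeats:4d}  aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}")

# The aleatoric term stays near the noise level regardless of n, while the
# epistemic term shrinks as more data (knowledge) are collected.
```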
While these distinctions provide a
useful framework, they exhibit limita-
tions in capturing the full complexity
of real-world NDE scenarios. Aleatoric
uncertainty assumes randomness is
purely stochastic, overlooking systematic
patterns or biases that could be
modeled. Epistemic uncertainty, though
effective for identifying knowledge gaps,
may not fully account for persistent
model inaccuracies or measurement
errors, even with additional data.
In NDE, uncertainties often arise
from a combination of sources—such as
sensor noise, operator skill, and model
simplifications—that cannot be neatly
separated into aleatoric or epistemic cat-
egories. This oversimplification can lead
to underestimating total uncertainty,
thereby compromising the reliability
of inspection results. Furthermore, this
general classification cannot adequately address uncertainties introduced by emerging technologies, such as machine learning–based NDE systems, where data-driven and model-driven uncertainties interact in unpredictable ways, as discussed by Ceberio et al. [15]. For example, environmental factors like temperature fluctuations introduce variability that defies neat categorization as purely aleatoric or epistemic, and in learning-based systems the interplay between data-driven and model-driven uncertainties is one that traditional frameworks struggle to capture [16]. These gaps underscore the need
to move beyond conventional classifi-
cations to better reflect the realities of
modern NDE applications.
Li [14] proposed a more detailed classification and framework for UA and UQ to better understand and manage uncertainties in practical NDE applications. In this framework, NDE-related uncertainty
is reclassified into data uncertainty,
forward modeling uncertainty, and
inverse learning uncertainty. Table 1
lists these reclassified NDE uncertainty
sources along with commonly applied
UQ methods.
Data uncertainty in NDE encom-
passes both input parameters and
obtained measurements. It arises
from factors such as inconsistencies in
material properties, variations in dis-
continuity geometry, and sensor-related
errors. Multiphysics material proper-
ties—such as conductivity, permittiv-
ity, density, elasticity, and microstruc-
ture—influence energy interactions like
wave propagation and signal response,
introducing variability into NDE mea-
surements [15, 17, 18, 19]. Discontinuity
geometry—including size, shape, and
orientation—affects detection sensitivity
and characterization accuracy, with even
TABLE 1
Uncertainty sources and mitigation strategies for NDE applications

| Uncertainty category | Subcategory | Description | Mitigation strategies |
|---|---|---|---|
| Data | Material property variability | Variations in material properties (e.g., grain size, fatigue degradation) | Probabilistic calibration; uncertainty propagation models |
| Data | Defect geometry uncertainty | Defect size, shape, and orientation variations introduce uncertainty in forward and inverse models. | Stochastic modeling; Bayesian inference for defect estimation |
| Data | Measurement and sensor uncertainty | Measurement errors due to liftoff effect, sensor noise, operator variability, and environmental conditions | Signal processing techniques; adaptive filtering; sensor fusion |
| Modeling | Parametric uncertainty | Uncertainty in material constants, defect parameters, and boundary conditions in modeling | Bayesian calibration; stochastic FEM; sensitivity analysis |
| Modeling | Structural uncertainty | Approximations, numerical errors, and unmodeled physics in simulations impact accuracy. | Hybrid modeling approaches; perturbation methods |
| Learning | Overfitting | Limited training data reduces model generalization, causing false positives/negatives. | Bayesian neural networks (BNNs); Monte Carlo dropout |
| Learning | Hybrid model calibration issues | Discrepancies between AI predictions and physics-based models reduce predictive accuracy. | Physics-informed AI; hybrid modeling techniques |
| Learning | Data assimilation challenges | AI struggles to interpret complex relationships between sensor data and defect parameters. | Uncertainty-aware deep learning; adaptive feedback loops |
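As an example of the uncertainty propagation strategy listed in Table 1 under data uncertainty, the sketch below pushes assumed variability in conductivity and probe liftoff through a hypothetical eddy current forward model by Monte Carlo sampling; the model and the input distributions are illustrative only.

```python
# A minimal sketch of Monte Carlo uncertainty propagation: assumed input
# variability is sampled and pushed through a hypothetical forward model to
# quantify the resulting spread in the predicted signal.
import numpy as np

rng = np.random.default_rng(2)

def forward_model(conductivity_MSm, liftoff_mm, crack_depth_mm=1.0):
    """Hypothetical impedance-change response for a fixed crack depth."""
    return crack_depth_mm * 0.3 * np.sqrt(conductivity_MSm) * np.exp(-1.5 * liftoff_mm)

n_samples = 10_000
conductivity = rng.normal(35.0, 1.5, n_samples)   # MS/m, batch-to-batch variation
liftoff = rng.uniform(0.05, 0.30, n_samples)      # mm, probe placement variation

response = forward_model(conductivity, liftoff)

# Summarize the propagated data uncertainty in the predicted signal.
lo, hi = np.percentile(response, [2.5, 97.5])
print(f"signal = {response.mean():.3f} +/- {response.std():.3f} "
      f"(95% range {lo:.3f} to {hi:.3f})")
```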