It examines real-time UQ, digital twins, and autonomous inspection systems while exploring their practical applications across various NDE techniques and industries.
Sources of Uncertainty in NDE
In engineering and science disciplines, uncertainty is generally classified into two broad categories: aleatoric and epistemic. Aleatoric uncertainty, also known as stochastic uncertainty, represents unknowns that differ each time the same experiment is performed. In contrast, epistemic uncertainty, also known as systematic uncertainty, originates from a lack of knowledge about the NDE measurement. This classification is foundational in NDE-related uncertainty, with aleatoric uncertainty often quantified using probabilistic methods, and epistemic uncertainty addressed through Bayesian approaches or enhanced data collection [13, 14].
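To make the distinction concrete, a toy numerical sketch (synthetic data and a hypothetical gain calibration, not from the cited works): aleatoric uncertainty shows up as the spread across repeated runs of the same measurement, while epistemic uncertainty shows up as disagreement between equally plausible models of the instrument.

```python
import numpy as np

rng = np.random.default_rng(0)

# Aleatoric: spread across repeated runs of the same measurement.
# Hypothetical ultrasonic amplitude readings of one reflector (synthetic).
readings = 5.0 + rng.normal(0.0, 0.2, size=200)   # sensor noise differs each repeat
aleatoric_std = readings.std(ddof=1)

# Epistemic: disagreement between plausible calibration models.
# Several candidate gain factors, all consistent with our limited knowledge.
candidate_gains = [0.95, 1.00, 1.08]              # true gain is unknown
predictions = [g * readings.mean() for g in candidate_gains]
epistemic_std = np.std(predictions, ddof=1)

print(f"aleatoric std  ~ {aleatoric_std:.3f}")
print(f"epistemic std  ~ {epistemic_std:.3f}")
```

More repeats shrink the sampling error on the aleatoric estimate but not the aleatoric spread itself, whereas better calibration knowledge would shrink the epistemic term, which is the practical signature of the two categories.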
While these distinctions provide a useful framework, they exhibit limitations in capturing the full complexity of real-world NDE scenarios. Aleatoric uncertainty assumes randomness is purely stochastic, overlooking systematic patterns or biases that could be modeled. Epistemic uncertainty, though effective for identifying knowledge gaps, may not fully account for persistent model inaccuracies or measurement errors, even with additional data.
In NDE, uncertainties often arise from a combination of sources, such as sensor noise, operator skill, and model simplifications, that cannot be neatly separated into aleatoric or epistemic categories. This oversimplification can lead to underestimating total uncertainty, thereby compromising the reliability of inspection results. Environmental factors like temperature fluctuations, for example, introduce variability that defies neat categorization as purely aleatoric or epistemic. Furthermore, this general classification cannot adequately address uncertainties introduced by emerging technologies, such as machine learning–based NDE systems, where data-driven and model-driven uncertainties interact in unpredictable ways that traditional frameworks struggle to capture, as discussed by Ceberio et al. [15] and others [16]. These gaps underscore the need to move beyond conventional classifications to better reflect the realities of modern NDE applications.
Li [14] proposed a more detailed classification and framework for UA and UQ to better understand and manage uncertainties in practical NDE applications. In this scheme, NDE-related uncertainty is reclassified into data uncertainty, forward modeling uncertainty, and inverse learning uncertainty. Table 1 lists these reclassified NDE uncertainty sources along with commonly applied UQ methods.
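One of the UQ methods commonly paired with inverse problems in this scheme is Bayesian inference for defect estimation. A toy grid-based posterior update can sketch the idea; the linear forward model, the gain `k`, and the noise level below are hypothetical illustrations, not values from the cited framework.

```python
import numpy as np

# Grid-based Bayesian update for a single defect-size parameter.
# Hypothetical linear forward model: signal = k * size + noise.
k, noise_std = 2.0, 0.3
sizes = np.linspace(0.0, 5.0, 501)        # candidate defect sizes (mm)
prior = np.ones_like(sizes)               # flat prior over the grid
prior /= prior.sum()

observed = 4.1                            # one noisy measured signal
likelihood = np.exp(-0.5 * ((observed - k * sizes) / noise_std) ** 2)
posterior = prior * likelihood            # Bayes' rule, up to normalization
posterior /= posterior.sum()

map_size = sizes[np.argmax(posterior)]    # most probable defect size
post_mean = np.sum(posterior * sizes)
post_std = np.sqrt(np.sum(posterior * (sizes - post_mean) ** 2))
print(f"MAP size ~ {map_size:.2f} mm, posterior std ~ {post_std:.3f} mm")
```

The posterior spread (here roughly noise_std/k) is the quantified inverse-learning uncertainty; adding more measurements would multiply in further likelihood factors and tighten it.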
Data uncertainty in NDE encompasses both input parameters and obtained measurements. It arises from factors such as inconsistencies in material properties, variations in discontinuity geometry, and sensor-related errors. Multiphysics material properties, such as conductivity, permittivity, density, elasticity, and microstructure, influence energy interactions like wave propagation and signal response, introducing variability into NDE measurements [15, 17, 18, 19]. Discontinuity geometry, including size, shape, and orientation, affects detection sensitivity and characterization accuracy, with even
TABLE 1
Uncertainty sources and mitigation strategies for NDE applications

Data
  Material property variability: variations in material properties (e.g., grain size, fatigue degradation). Mitigation: probabilistic calibration; uncertainty propagation models.
  Defect geometry uncertainty: defect size, shape, and orientation variations introduce uncertainty in forward and inverse models. Mitigation: stochastic modeling; Bayesian inference for defect estimation.
  Measurement and sensor uncertainty: measurement errors due to liftoff effect, sensor noise, operator variability, and environmental conditions. Mitigation: signal processing techniques; adaptive filtering; sensor fusion.

Modeling
  Parametric uncertainty: uncertainty in material constants, defect parameters, and boundary conditions in modeling. Mitigation: Bayesian calibration; stochastic FEM; sensitivity analysis.
  Structural uncertainty: approximations, numerical errors, and unmodeled physics in simulations impact accuracy. Mitigation: hybrid modeling approaches; perturbation methods.

Learning
  Overfitting: limited training data reduces model generalization, causing false positives/negatives. Mitigation: Bayesian neural networks (BNNs); Monte Carlo dropout.
  Hybrid model calibration issues: discrepancies between AI predictions and physics-based models reduce predictive accuracy. Mitigation: physics-informed AI; hybrid modeling techniques.
  Data assimilation challenges: AI struggles to interpret complex relationships between sensor data and defect parameters. Mitigation: uncertainty-aware deep learning; adaptive feedback loops.
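As a minimal illustration of the uncertainty-propagation strategies in the Data rows of Table 1, the sketch below pushes assumed input distributions through a simple forward model by Monte Carlo sampling. The ultrasonic thickness relation d = v * t / 2 is standard, but the specific means and spreads are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical ultrasonic thickness gauge: d = v * t / 2.
# Material-property variability: sound velocity spread from microstructure.
v = rng.normal(5900.0, 50.0, n)        # m/s
# Measurement/sensor uncertainty: time-of-flight jitter from sensor noise.
t = rng.normal(3.39e-6, 2.0e-8, n)     # s

d = v * t / 2.0 * 1e3                  # propagated thickness samples, mm
print(f"thickness ~ {d.mean():.2f} mm +/- {d.std(ddof=1):.3f} mm")
```

The sample standard deviation combines both input spreads (matching the first-order sum of relative variances), so the report carries a defensible uncertainty band rather than a single thickness number.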
AUGUST 2025 • MATERIALS EVALUATION 25