It examines real-time UQ, digital twins, and autonomous inspection systems while exploring their practical applications across various NDE techniques and industries.
Sources of Uncertainty in NDE
In engineering and science disciplines, uncertainty is generally classified into two broad categories: aleatoric and epistemic uncertainties. Aleatoric uncertainty, also known as stochastic uncertainty, represents unknowns that differ each time the same experiment is performed. In contrast, epistemic uncertainty, also known as systematic uncertainty, originates from a lack of knowledge of the NDE measurement. This classification is foundational in NDE-related uncertainty, with aleatoric uncertainty often quantified using probabilistic methods, and epistemic uncertainty addressed through Bayesian approaches or enhanced data collection [13, 14].
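The distinction can be made concrete with a toy simulation (illustrative only; the gauge model, velocity prior, and noise levels below are invented, not drawn from the cited works). Repeating the same measurement reveals aleatoric spread, whereas uncertainty about a calibration constant such as sound velocity is epistemic: it would shrink with better knowledge, not with more repeats.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ultrasonic thickness gauge: true wall thickness is 10 mm.
true_thickness = 10.0  # mm

# Aleatoric: repeating the same measurement yields different values
# because of stochastic sensor noise.
repeats = true_thickness + rng.normal(0.0, 0.05, size=1000)
aleatoric_std = repeats.std()

# Epistemic: we are unsure of the sound velocity used to convert
# time-of-flight to thickness. Calibration would shrink this spread;
# repeating the same measurement would not.
velocity_guesses = rng.normal(5900.0, 60.0, size=1000)  # m/s, assumed prior
tof = 2 * true_thickness / 5900.0                       # fixed observed time-of-flight
thickness_given_velocity = velocity_guesses * tof / 2
epistemic_std = thickness_given_velocity.std()

print(f"aleatoric std ~ {aleatoric_std:.3f} mm")
print(f"epistemic std ~ {epistemic_std:.3f} mm")
```

Here both spreads happen to be similar in size, but only the second one is reducible by acquiring more knowledge about the system.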
While these distinctions provide a useful framework, they exhibit limitations in capturing the full complexity of real-world NDE scenarios. Aleatoric uncertainty assumes randomness is purely stochastic, overlooking systematic patterns or biases that could be modeled. Epistemic uncertainty, though effective for identifying knowledge gaps, may not fully account for persistent model inaccuracies or measurement errors, even with additional data.
In NDE, uncertainties often arise from a combination of sources—such as sensor noise, operator skill, and model simplifications—that cannot be neatly separated into aleatoric or epistemic categories. This oversimplification can lead to underestimating total uncertainty, thereby compromising the reliability of inspection results. Environmental factors like temperature fluctuations, for example, introduce variability that defies neat categorization as purely aleatoric or epistemic. Nor can this general classification adequately address uncertainties introduced by emerging technologies such as machine learning–based NDE systems, where data-driven and model-driven uncertainties interact in ways that traditional frameworks struggle to capture, as discussed by Ceberio et al. [15, 16]. These gaps underscore the need to move beyond conventional classifications to better reflect the realities of modern NDE applications.
Li [14] proposed a more detailed classification and framework for UA and UQ to better understand and manage uncertainties in practical NDE applications. Typically, NDE-related uncertainty is reclassified into data uncertainty, forward modeling uncertainty, and inverse learning uncertainty. Table 1 lists these reclassified NDE uncertainty sources along with commonly applied UQ methods.
TABLE 1. Uncertainty sources and mitigation strategies for NDE applications

| Uncertainty category | Subcategory | Description | Mitigation strategies |
|---|---|---|---|
| Data | Material property variability | Variations in material properties (e.g., grain size, fatigue degradation) | Probabilistic calibration; uncertainty propagation models |
| Data | Defect geometry uncertainty | Defect size, shape, and orientation variations introduce uncertainty in forward and inverse models. | Stochastic modeling; Bayesian inference for defect estimation |
| Data | Measurement and sensor uncertainty | Measurement errors due to liftoff effect, sensor noise, operator variability, and environmental conditions | Signal processing techniques; adaptive filtering; sensor fusion |
| Modeling | Parametric uncertainty | Uncertainty in material constants, defect parameters, and boundary conditions in modeling | Bayesian calibration; stochastic FEM; sensitivity analysis |
| Modeling | Structural uncertainty | Approximations, numerical errors, and unmodeled physics in simulations impact accuracy. | Hybrid modeling approaches; perturbation methods |
| Learning | Overfitting | Limited training data reduces model generalization, causing false positives/negatives. | Bayesian neural networks (BNNs); Monte Carlo dropout |
| Learning | Hybrid model calibration issues | Discrepancies between AI predictions and physics-based models reduce predictive accuracy. | Physics-informed AI; hybrid modeling techniques |
| Learning | Data assimilation challenges | AI struggles to interpret complex relationships between sensor data and defect parameters. | Uncertainty-aware deep learning; adaptive feedback loops |

AUGUST 2025 | MATERIALS EVALUATION | 25

Data uncertainty in NDE encompasses both input parameters and obtained measurements. It arises from factors such as inconsistencies in material properties, variations in discontinuity geometry, and sensor-related errors. Multiphysics material properties—such as conductivity, permittivity, density, elasticity, and microstructure—influence energy interactions like wave propagation and signal response, introducing variability into NDE measurements [15, 17, 18, 19]. Discontinuity geometry—including size, shape, and orientation—affects detection sensitivity and characterization accuracy, with even minor variations leading to significant measurement variabilities [20]. Sensor-related errors, such as calibration drift, noise, or environmental interference, further exacerbate uncertainty, particularly in challenging or extreme operational conditions. These factors often interact, creating compounded uncertainties that complicate data interpretation. For example, sensor inaccuracies may obscure true material variations, while geometric uncertainties can complicate signal analysis [21].
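As a minimal sketch of how independent error sources compound (the three source magnitudes below are invented for illustration), standard uncertainties from independent sources combine in quadrature, which a Monte Carlo check confirms:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Assumed independent error sources for one measurement (all in mm):
material_var = rng.normal(0, 0.08, n)  # material property variability
geometry_var = rng.normal(0, 0.05, n)  # discontinuity geometry
sensor_noise = rng.normal(0, 0.03, n)  # sensor/electronics noise

total = material_var + geometry_var + sensor_noise

# For independent sources, standard uncertainties add in quadrature
# (root-sum-square), as in GUM-style uncertainty budgets:
rss = np.sqrt(0.08**2 + 0.05**2 + 0.03**2)
print(f"quadrature: {rss:.4f} mm, Monte Carlo: {total.std():.4f} mm")
```

When sources are correlated or interact nonlinearly, as the text notes is common in NDE, this quadrature rule underestimates the total, which is one reason full probabilistic treatment is preferred.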
Such challenges highlight the need for robust methodologies, including statistical analysis and advanced signal processing, to mitigate uncertainty and enhance NDE reliability. Statistical analysis methods, such as high-dimensional data analytics, are widely used to manage large datasets and reduce measurement variability in structural health monitoring [22]. Sensor calibration and fusion techniques, including multi-view ultrasonic data integration, improve discontinuity detection reliability by combining complementary information from multiple sources [23]. Stochastic simulations and Bayesian inference methods are powerful tools for addressing these uncertainties, enabling more precise discontinuity parameter estimation. For instance, Li and Deng [24] demonstrated how Bayesian approximation and deep learning can quantify predictive uncertainty in damage classification, particularly in magnetic flux leakage (MFL) inspections, where discontinuity geometry plays a critical role.
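A minimal grid-based Bayesian sketch illustrates the idea, assuming a deliberately oversimplified linear forward model for MFL amplitude versus defect depth (the model, noise level, and prior range are all invented for illustration, not taken from [24]):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical forward model: MFL peak amplitude grows linearly with
# defect depth (arbitrary units). Real models are far more complex.
def forward(depth_mm):
    return 1.8 * depth_mm

true_depth = 2.5   # mm
noise_std = 0.4
data = forward(true_depth) + rng.normal(0, noise_std, size=20)  # 20 noisy readings

# Grid-based Bayesian inference over candidate depths.
depths = np.linspace(0.0, 5.0, 501)  # uniform prior on [0, 5] mm
log_like = np.array([
    -0.5 * np.sum((data - forward(d)) ** 2) / noise_std**2 for d in depths
])
post = np.exp(log_like - log_like.max())  # subtract max to avoid underflow
post /= post.sum()

mean = np.sum(depths * post)
std = np.sqrt(np.sum((depths - mean) ** 2 * post))
print(f"estimated depth: {mean:.2f} +/- {std:.2f} mm")
```

The posterior standard deviation is the quantified uncertainty on the defect parameter; more readings or a lower noise floor would tighten it.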
Signal processing and adaptive filtering also play a critical role in reducing electronic noise and operator-induced variability. Machine learning (ML) approaches, like those described by Huang et al. [25], can be used to decouple material parameters and reduce errors in property estimation to within 3.5%. Spatial domain linearization, explored by Wang et al. [26], transforms liftoff effects into predictable linear relationships, enhancing measurement accuracy.
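A least-mean-squares (LMS) adaptive noise canceller is one classic instance of such adaptive filtering. The sketch below is illustrative (the signal, noise path, and filter settings are assumed): it learns the path from a reference noise pickup to the measurement channel and subtracts the estimated noise.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
t = np.arange(n)

clean = np.sin(2 * np.pi * t / 200)        # stand-in for a defect signature
ref = rng.normal(0, 1, n)                  # reference noise pickup
noise = 0.6 * ref + 0.3 * np.roll(ref, 1)  # noise as seen at the probe
measured = clean + noise

# LMS adaptive noise canceller: learn the reference-to-probe noise path,
# then subtract the estimated noise from the measurement.
taps, mu = 4, 0.01
w = np.zeros(taps)
out = measured.copy()
for i in range(taps, n):
    x = ref[i - taps + 1:i + 1][::-1]  # most recent reference samples
    out[i] = measured[i] - w @ x       # cancelled output (also the LMS error)
    w += 2 * mu * out[i] * x           # stochastic gradient update

noisy_mse = np.mean((measured - clean) ** 2)
filt_mse = np.mean((out[1000:] - clean[1000:]) ** 2)
print(f"MSE before: {noisy_mse:.3f}  after LMS: {filt_mse:.3f}")
```

Because the defect signature is uncorrelated with the reference, the filter converges to cancel only the noise, leaving the signature intact.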
Model uncertainties arise during the forward modeling process in NDE, where simplified physical and mathematical models are used to simulate inspection scenarios and predict system responses. Because forward mathematical and physics models inevitably involve simplifications, assumptions, and approximations when simulating NDE inspection and data generation, discrepancies arise between model-predicted and experimentally measured responses. Specifically, two types of uncertainty are introduced: parametric uncertainty, which stems from uncertain input values like material properties or discontinuity dimensions, and structural uncertainty, which results from model assumptions, unmodeled physics, and numerical errors. The distinction between parametric and structural uncertainties is critical, as their interplay can complicate model calibration and validation. For instance, nonlinear interactions in complex systems can introduce biases during parameter tuning [27], while divergence between simplified models and real-world behavior—known as model discrepancy—poses additional challenges [28].
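Monte Carlo propagation of parametric uncertainty through a forward model can be sketched with a deliberately simple pulse-echo example, tof = 2 × thickness / velocity (the input distributions are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Toy forward model: pulse-echo time of flight through a plate.
# Uncertain inputs (distributions assumed for illustration):
thickness = rng.normal(10.0e-3, 0.1e-3, n)  # m
velocity = rng.normal(5900.0, 30.0, n)      # m/s

tof = 2 * thickness / velocity  # parametric uncertainty propagated

mean_us = tof.mean() * 1e6
std_us = tof.std() * 1e6
print(f"time of flight: {mean_us:.3f} +/- {std_us:.3f} us")

# First-order (linearized) cross-check: relative standard uncertainties
# of independent inputs add in quadrature.
rel = np.sqrt((0.1 / 10.0) ** 2 + (30.0 / 5900.0) ** 2)
print(f"linearized std: {2 * 10.0e-3 / 5900.0 * rel * 1e6:.3f} us")
```

Structural uncertainty is not captured here: if the true physics deviates from the assumed model (e.g., dispersion, mode conversion), no amount of input sampling will reveal that discrepancy.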
Several methods have been proposed to address parametric uncertainty. Bayesian calibration provides a robust framework by incorporating prior knowledge and updating beliefs with observed data, effectively quantifying input parameter uncertainty [29]. Stochastic finite element methods help propagate these uncertainties, especially when dealing with spatial variability. Global sensitivity analysis, such as that using polynomial chaos expansions, identifies the most influential parameters and helps focus modeling efforts [30]. Other techniques, including perturbation methods and fractional derivative models, offer localized assessments and help address model-form uncertainty [31]. Ultimately, advancing UQ in NDE requires integrating both parametric and structural uncertainty into calibration workflows, balancing model fidelity, computational efficiency, and predictive accuracy for real-world applications.
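Variance-based global sensitivity can be sketched without polynomial chaos machinery by estimating first-order Sobol' indices from plain Monte Carlo samples via binning (the toy model and its coefficients are assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Toy model with three uncertain inputs (coefficients assumed): the
# output is dominated by x1, weakly driven by x2, and ignores x3.
x1, x2, x3 = (rng.uniform(-1, 1, n) for _ in range(3))
y = 4.0 * x1 + 1.0 * x2 + 0.0 * x3

def first_order_index(x, y, bins=50):
    """Estimate the Sobol' first-order index Var(E[y|x]) / Var(y) by binning x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    return np.average((cond_means - y.mean()) ** 2, weights=counts) / y.var()

s1, s2, s3 = (first_order_index(x, y) for x in (x1, x2, x3))
print(f"S1={s1:.3f}  S2={s2:.3f}  S3={s3:.3f}")
```

The indices (analytically 16/17, 1/17, and 0 for this model) tell the analyst where to spend calibration effort: here, pinning down x1 matters far more than x2, and x3 can be ignored.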
Inverse characterization and learning uncertainty arise during the inverse NDE process, where discontinuity parameters such as size, shape, and depth are inferred from either model-generated or experimentally/field-obtained NDE data. Limited training data, sensor noise, and indirect measurements all contribute to challenges in discontinuity classification and sizing [32]. In recent years, as AI models are increasingly used for damage prediction, learning model–related uncertainty has become a major concern. Overfitting, often caused by insufficient data, can lead to false detections and misclassifications of damage.
Bayesian neural networks (BNNs) provide a robust framework for addressing model uncertainty in AI-based damage prediction by treating network parameters as probabilistic distributions rather than fixed values [33]. Hybrid model calibration issues occur when ML predictions conflict with physics-based models, reducing predictive accuracy. For example, Xiong et al. [34] embedded MFL governing equations into a neural-network loss function; the resulting physics-informed model achieved high-precision discontinuity quantification, illustrating the value of integrating domain knowledge with learning algorithms.
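Monte Carlo dropout is a common lightweight approximation to BNN inference: dropout is left on at prediction time and repeated stochastic passes yield a predictive mean and spread. A sketch (the tiny network and its weights are invented; in practice the weights come from training):

```python
import numpy as np

rng = np.random.default_rng(6)

# Tiny MLP with invented weights, standing in for a trained network.
W1 = rng.normal(0, 1, (8, 1)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (1, 8)); b2 = np.zeros(1)

def predict(x, drop_rate=0.3):
    """One stochastic forward pass with dropout left ON at inference."""
    h = np.maximum(0.0, W1 @ x + b1)        # ReLU hidden layer
    mask = rng.random(h.shape) > drop_rate  # Bernoulli dropout mask
    h = h * mask / (1.0 - drop_rate)        # inverted-dropout scaling
    return (W2 @ h + b2)[0]

# Monte Carlo dropout: repeat stochastic passes, read mean and spread.
x = np.array([0.5])
samples = np.array([predict(x) for _ in range(500)])
print(f"prediction {samples.mean():.3f} +/- {samples.std():.3f}")
```

The spread across passes serves as an uncertainty estimate: inputs far from the training distribution tend to produce larger disagreement between passes, flagging predictions that should not be trusted.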
Data assimilation challenges occur when combining noisy, incomplete, or inconsistent data, which further impacts AI-based NDE and makes discontinuity signature interpretation unreliable [35]. To enhance discontinuity estimation reliability and classification accuracy, robust data augmentation, physics-informed AI models, and adaptive learning frameworks are essential and have been investigated.
Uncertainty-aware deep learning approaches address noise interference, data heterogeneity, and nonlinear correlations by explicitly accounting for uncertainties in both data and models. For instance, Xu et al. [36] proposed an evidential multi-view deep learning method that dynamically fuses view-specific evidence, improving decision-making in industrial IoT scenarios. Additionally, adaptive feedback loops dynamically adjust models based on