failure probability by randomly sampling
crack size, toughness, pressure, and
geometry, and then running each set
through the failure assessment diagram
(FAD) model to calculate how many
cases fail [69]. Although MCS is easy to
apply, it still has limitations, including
slow convergence and sensitivity to
input distributions. Therefore,
careful input modeling is needed to
avoid biased results [66]. Future develop-
ments may focus on hybrid approaches,
combining Monte Carlo with other UQ
methods, such as polynomial chaos
expansion or Bayesian inference, to
further optimize performance.
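The sampling loop described above can be sketched in a few lines. The limit state below (an applied Mode-I stress intensity checked against fracture toughness, with geometry factor Y = 1) is a simplified stand-in for a full FAD assessment, and every distribution and the hoop-stress factor are hypothetical values chosen only for illustration:

```python
import numpy as np

# Minimal Monte Carlo sketch of a FAD-style failure check.
# All distributions and the geometry factor are hypothetical.
rng = np.random.default_rng(42)
n = 100_000

a = rng.lognormal(mean=np.log(0.02), sigma=0.3, size=n)  # crack depth, m
K_mat = rng.normal(60.0, 6.0, size=n)                    # toughness, MPa*sqrt(m)
p = rng.normal(10.0, 1.0, size=n)                        # internal pressure, MPa
stress = 20.0 * p                                        # hoop stress via an assumed geometry factor

K_applied = stress * np.sqrt(np.pi * a)                  # applied stress intensity, Y = 1
pf = np.mean(K_applied > K_mat)                          # fraction of sampled cases that fail
```

The estimate `pf` converges at the usual Monte Carlo rate of O(1/sqrt(n)), which is why small failure probabilities demand very large sample counts.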
BAYESIAN INFERENCE VIA MCMC
FOR ADAPTIVE NDE
Unlike traditional statistical methods,
which assume fixed probabilities,
Bayesian inference continuously refines
its predictions by incorporating prior
knowledge with real-time sensor mea-
surements. This approach is particularly
valuable for real-time monitoring and
adaptive inspection strategies, where
uncertainty evolves as new data is col-
lected. The core idea behind Bayesian
inference is to obtain a posterior distri-
bution that blends prior knowledge and
new evidence, which can be expressed as
P(θ|data), where θ represents the uncer-
tain parameters. Typically, Markov chain
Monte Carlo (MCMC), Hamiltonian
Monte Carlo, or sequential Monte Carlo
are applied to sample the posterior
distribution. For each posterior sample of
θ​ a forward model is run to obtain dis-
tributions of inspection outputs (such as
POD, sizing errors, or failure probability).
In practice, Bayesian inference in
NDE often employs MCMC methods to
sample posterior distributions, which
are particularly effective for quantifying
uncertainties in corrosion models and
structural parameters, even with sparse
or heterogeneous data [70]. The adapt-
ability of Bayesian methods is further
highlighted by Nabiyan et al. [71], who
dynamically estimated prediction error
statistics during model updating, making
the approach suitable for handling noisy
or incomplete data.
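A minimal sketch of the MCMC sampling described above, using a random-walk Metropolis-Hastings sampler on a one-parameter posterior. The "measurements" (e.g., repeated wall-thickness readings in mm), the noise level, and the prior are all synthetic placeholders standing in for sparse NDE data:

```python
import numpy as np

# Random-walk Metropolis-Hastings on a one-parameter posterior.
# Data, noise level, and prior are hypothetical stand-ins.
rng = np.random.default_rng(1)
data = np.array([7.8, 8.1, 7.9, 8.3, 8.0])   # synthetic thickness readings, mm
sigma = 0.2                                   # assumed measurement noise, mm

def log_posterior(theta):
    log_prior = -0.5 * (theta - 8.0) ** 2 / 1.0 ** 2       # prior: N(8, 1)
    log_like = -0.5 * np.sum((data - theta) ** 2) / sigma ** 2
    return log_prior + log_like

samples, theta = [], 8.0
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.1)                      # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(theta):
        theta = prop                                       # accept the move
    samples.append(theta)
post = np.array(samples[5_000:])                           # discard burn-in
```

The retained samples approximate the posterior P(θ|data); their mean and spread summarize the updated belief, and each sample could be pushed through a forward model to propagate the uncertainty to POD or failure probability.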
POLYNOMIAL CHAOS EXPANSION
FOR EFFICIENT UQ
Polynomial chaos expansion (PCE) is a computationally efficient method
to model uncertainty propagation in NDE
systems. Unlike MCS, PCE uses poly-
nomial expansions to approximate how
discontinuity geometry, sensor noise,
and material variability affect inspection
results. This approach reduces compu-
tational costs while maintaining high
accuracy. The process involves choosing
a polynomial basis from an orthogonal
family (e.g., Hermite, Legendre, Laguerre,
Jacobi), truncating the expansion order,
and generating training points using
sparse quadrature, Latin hypercube
sampling, or Sobol sequences (often tens
to hundreds of samples, not thousands).
Then, for each
training point, NDE-related simulations
are run—such as ultrasonic, eddy
current, or thermography simulations—
to compute PCE coefficients.
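The workflow above can be sketched for a single standardized Gaussian input using the probabilists' Hermite basis. The forward model here is a toy polynomial standing in for an NDE simulation (it is not a real ultrasonic or thermography solver), and the coefficients are fitted by least-squares regression on a few dozen random training points:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Toy forward model standing in for an NDE simulation of one
# standardized uncertain input xi ~ N(0, 1). Hypothetical response.
def forward_model(xi):
    return xi ** 2 + 0.5 * xi

rng = np.random.default_rng(0)
order = 3                                   # truncation order of the expansion
xi = rng.standard_normal(50)                # tens of training points, not thousands
y = forward_model(xi)

# Least-squares regression for the PCE coefficients c_0..c_3
V = hermevander(xi, order)                  # design matrix [He_0(xi) ... He_3(xi)]
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)

# Output statistics follow directly from the coefficients:
# mean = c_0, variance = sum over n >= 1 of c_n^2 * n!
mean = coeffs[0]
var = sum(c ** 2 * math.factorial(n) for n, c in enumerate(coeffs) if n > 0)
```

Once the coefficients are known, the output mean and variance come for free from orthogonality of the basis, which is the source of PCE's efficiency compared with brute-force MCS.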
Applications of PCE in NDE and
related fields highlight its versatility.
For instance, Chen et al. [72] showed its
effectiveness in quantifying uncertainties
in composite cylindrical shells, address-
ing both geometric and material variabil-
ity. Similarly, its adaptability in sub-THz
antenna design and multiscale resilience
analysis has been demonstrated [73, 74]. By
combining computational efficiency with
high fidelity, PCE emerges as a powerful
tool for UQ in NDE, enabling reliable dis-
continuity detection and material charac-
terization under uncertainty.
AI-Driven UQ Methods for NDE
Artificial intelligence (AI) has revo-
lutionized discontinuity detection,
classification, and localization through
deep learning models developed for
NDE. However, traditional AI-based
approaches often operate as black
boxes, meaning they lack the ability to
quantify prediction uncertainty, which
increases the risk of false detections or
overlooked discontinuities. AI-driven
UQ methodologies have the potential to
address this issue by integrating prob-
abilistic learning frameworks. Three
widely used approaches in NDE are
Bayesian neural networks (BNNs) [75],
Monte Carlo dropout (MC dropout) [76],
and ensemble learning [77], which are
compared in Figure 5.
Figure 5. Illustrative process of three AI-driven UQ methods: (a) Bayesian neural network (BNN), where uncertainty is modeled through weight distributions; (b) Monte Carlo (MC) dropout, where uncertainty is modeled via stochastic inference; (c) ensemble learning, where uncertainty comes from prediction variance across models.
Materials Evaluation, August 2025, p. 31
These techniques
combine the flexibility of machine
learning with the statistical rigor of
Bayesian inference, enabling models to
capture both data variability and model
uncertainty.
BAYESIAN NEURAL NETWORKS
FOR UNCERTAINTY
Unlike conventional neural networks,
which produce deterministic outputs,
BNNs treat network weights as prob-
ability distributions rather than fixed
values. A BNN builds on a standard
feedforward or convolutional neural
network by replacing each fixed weight
and bias with a probability distribu-
tion. Instead of learning a single “best”
value for each parameter, a BNN learns
a posterior distribution over parame-
ters given the training data, enabling
uncertainty-aware predictions critical for
safety-critical NDE applications [75].
Typically, a normal neural network
defines a deterministic mapping
y = f(x; w), where w is the vector of all
weights and biases. In a BNN, we define
a prior p(w) (commonly Gaussian) over
these parameters and combine it with a
likelihood p(D|w) derived from the loss
function. Bayesian inference then yields a
posterior p(w|D) ∝ p(D|w)p(w), which
is approximated via variational methods
or Monte Carlo techniques. At prediction
time, instead of a single forward pass, we
draw multiple weight samples w_t ~ p(w|D)
and compute:

(9) p(y|x, D) ≈ (1/T) Σ_{t=1}^T p(y|x, w_t)
This ensemble of outputs provides
both a mean prediction and a variance
that quantifies both epistemic uncer-
tainty (from limited data, reflected in
weight spread) and aleatoric uncertainty
(modeled via an explicit output-noise
term, if included).
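The predictive average in Eq. (9) can be sketched as follows. An already-fitted Gaussian approximate posterior over the weights of a toy one-layer model is assumed; the posterior mean and standard deviation are placeholder values, not the result of actual variational training:

```python
import numpy as np

# Predictive averaging over weight samples, Eq. (9).
# The Gaussian "posterior" parameters below are placeholders.
rng = np.random.default_rng(7)
w_mean = np.array([1.5, -0.4])          # assumed posterior mean of [slope, bias]
w_std = np.array([0.2, 0.1])            # assumed posterior std of [slope, bias]

def f(x, w):                            # tiny "network": a linear map
    return w[0] * x + w[1]

x, T = 0.8, 500
w_samples = rng.normal(w_mean, w_std, size=(T, 2))   # w_t ~ q(w) ≈ p(w|D)
preds = np.array([f(x, w) for w in w_samples])       # one forward pass per sample
y_mean, y_var = preds.mean(), preds.var()            # prediction + epistemic spread
```

The spread of `preds` reflects epistemic uncertainty carried by the weight distribution; an explicit output-noise term would have to be added to capture aleatoric uncertainty as well.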
BNNs offer advantages such as inter-
pretable uncertainty metrics and the
ability to integrate physical constraints,
as seen in material property prediction
[78] and bearing remaining useful life
estimation [79]. However, BNNs face
computational challenges due to the
intractability of exact posterior inference.
Comparative studies suggest hybrid
approaches, combining BNNs with
Monte Carlo dropout or deep ensembles,
could improve robustness and scalability
in NDE workflows [80].
MONTE CARLO DROPOUT FOR MODEL
UNCERTAINTY
MC dropout is another popular
AI-driven UQ technique that helps deep
learning models estimate uncertainty
by introducing dropout at inference
time [76]. Dropout, a regularization
method, randomly deactivates a subset
of neurons during each forward pass
to prevent overfitting in deep neural
networks. During the NN training
process, each forward pass randomly
“drops” a subset of neurons at the
dropout layer, which effectively samples
from a simpler, approximate weight
distribution q(w). By keeping dropout
active at test time and running several
forward passes (MC dropout), multiple
weight samples are drawn from q(w) and
the outputs are averaged. This process
approximates the full Bayesian posterior
p(w|D) without needing explicit priors
or complex inference. Additionally,
uncertainty metrics such as variance or
standard deviation can be derived from
the resulting prediction distribution.
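A hand-rolled sketch of MC dropout with a two-layer network in plain numpy. The weights here are random placeholders rather than a trained NDE model, and the input vector is hypothetical; the point is only the mechanics of keeping dropout active across repeated test-time passes:

```python
import numpy as np

# MC dropout with an untrained toy network (weights are placeholders).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
p_drop = 0.2

def forward(x, rng):
    h = np.maximum(x @ W1 + b1, 0.0)             # ReLU hidden layer
    mask = rng.uniform(size=h.shape) > p_drop    # dropout stays ON at test time
    h = h * mask / (1.0 - p_drop)                # inverted-dropout scaling
    return h @ W2 + b2

x = np.array([[0.5, -0.1, 0.3, 0.8]])            # one hypothetical feature vector
preds = np.array([forward(x, rng) for _ in range(100)])  # T = 100 stochastic passes
mean, std = preds.mean(), preds.std()            # prediction and its uncertainty
```

Each pass samples a different dropout mask, so the spread of `preds` is the uncertainty estimate; in a framework like PyTorch the same effect is obtained by leaving the dropout layers in training mode during inference.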
MC dropout has high adaptability
to various base NN architectures (e.g.,
CNNs [81] and ResNets [82]). It has been
widely applied for reliability assessment
in NDE tasks such as crack characteriza-
tion [83] and seismic data reconstruction
[84]. Yonekura et al. [85] also leveraged
MC dropout in a generative adversarial
network (GAN) framework for uncer-
tainty reduction in airfoil design.
DEEP ENSEMBLES FOR ROBUST UQ
The deep ensembles (DE) technique
is effective for uncertainty estimation.
Unlike single deep learning models,
deep ensembles do not require architec-
tural modifications or complex training
procedures, simplifying their implemen-
tation in practical scenarios. The tech-
nique combines outputs from multiple
independently trained deep learning
models, ensuring predictions account for
sensor noise, material inconsistencies,
and operational variations. Each network
in the ensemble learns a different
mapping from input data (e.g., ultrasonic
waveforms, thermographic images) to
outputs (e.g., discontinuity probability,
size estimate). At inference, an input is
passed through every member of the
ensemble; the mean of their outputs
serves as the final prediction, while
the variance captures epistemic uncer-
tainty—that is, the model’s “disagree-
ment” about unfamiliar or ambiguous
cases.
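The mechanism can be sketched with a small ensemble of linear models on synthetic data. Deep ensembles usually rely on independent random initializations of full networks; here bootstrap resampling of a toy dataset is used as a simple stand-in to decorrelate the members, and the data itself is illustrative, not an NDE dataset:

```python
import numpy as np

# Ensemble-based UQ sketch: 5 members fit on resampled synthetic data;
# variance across members gives the epistemic "disagreement".
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(30, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 0.1, size=30)    # hypothetical signal + noise

members = []
for seed in range(5):                               # 5 independently fit members
    r = np.random.default_rng(seed)
    idx = r.integers(0, 30, size=30)                # bootstrap resample
    Xi, yi = X[idx], y[idx]
    w, *_ = np.linalg.lstsq(np.c_[Xi, np.ones(30)], yi, rcond=None)
    members.append(w)                               # [slope, intercept]

x_new = np.array([0.4, 1.0])                        # query point (with bias term)
preds = np.array([m @ x_new for m in members])
mean, var = preds.mean(), preds.var()               # prediction + disagreement
```

Because each member is fit independently, both training and inference parallelize trivially, which is the practical appeal noted above.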
In NDE, deep ensembles have
demonstrated success in tasks such as
crack detection and material character-
ization, where reliable uncertainty esti-
mates are critical for decision-making
[77]. Their parallelizable nature allows
for efficient deployment, though compu-
tational resource requirements remain a
limitation compared to simpler methods
like MC dropout. In practice, Pyle et al.
[86] showed that DE achieved markedly
better calibration and anomaly detection
than MC dropout for ultrasonic crack
detection; adding spectral normalization
and residual connections to the ensemble
members further sharpened calibration
and boosted out-of-distribution detection.
Recent Advancements and Trends
in UA&UQ for NDE
With advancements in sensor technol-
ogy, AI-driven automation, and NDE 4.0,
industries are increasingly adopting
UQ. Sectors such as aerospace, nuclear
energy, oil and gas, and advanced man-
ufacturing leverage UQ to enhance dis-
continuity detection reliability, minimize
false alarms, and ensure structural
integrity.
Multi-Sensor Data Fusion
and Bayesian Inference for
Comprehensive Uncertainty
Assessment
Unlike single-sensor methods, the inte-
gration of multiple sensors in NDE
has revolutionized UQ by combining
different NDE techniques and leverag-
ing their complementary strengths to
reduce false positives and negatives,
improve spatial resolution, and increase