Notice that the transmitter i can be a virtual source instead
of a physical element if a subarray emission is considered
(Lockwood et al. 1998). When a wedge is interposed between
the transducer array and the test piece (as in the present case
of the rail flaw imaging prototype), the wave path in the wedge
must be taken into account in the beamforming algorithm.
Referring to Figure 4, following Snell’s law, the new backpropa-
gation TOF can be calculated by finding the point of refraction
at the wedge-medium interface (Sternini et al. 2019a, 2019b).
Considering that, in general, both L-waves and S-waves can propagate in the test medium, whereas only L-waves are considered in the wedge, there exist up to four wave mode combinations that can theoretically be utilized for imaging. Accordingly, the backpropagation time $\tau_{ij,yz}$ for each of the possible wave mode combinations can be calculated as:
$$\tau_{ij,yz}^{\,LLLL,\,LLSL,\,LSLL,\,LSSL} \;=\; \frac{d_{i,yz}^{(1)}}{c_w^{L}} \;+\; \frac{d_{i,yz}^{(2)}}{c_m^{L,S}} \;+\; \frac{d_{j,yz}^{(3)}}{c_m^{L,S}} \;+\; \frac{d_{j,yz}^{(4)}}{c_w^{L}} \qquad (2)$$
where
LLLL is an L-wave transmitted in the wedge + an L-wave refracted in the medium + an L-wave reflected in the medium + an L-wave received in the wedge,
LLSL is an L-wave transmitted in the wedge + an L-wave refracted in the medium + an S-wave reflected in the medium + an L-wave received in the wedge,
LSLL is an L-wave transmitted in the wedge + an S-wave refracted in the medium + an L-wave reflected in the medium + an L-wave received in the wedge,
LSSL is an L-wave transmitted in the wedge + an S-wave refracted in the medium + an S-wave reflected in the medium + an L-wave received in the wedge,
$c_m^{L,S}$ is the L-wave or S-wave velocity in the medium,
$c_w^{L}$ is the L-wave velocity in the wedge, and
$d_{i,yz}^{(1)}$, $d_{i,yz}^{(2)}$, $d_{j,yz}^{(3)}$, and $d_{j,yz}^{(4)}$ are the corresponding propagation distances of each ray path segment as identified in Figure 4.

Figure 4. Ray tracing scheme connecting one virtual transmit element Ti, the focal point P, and one receiver element Rj.
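For illustration, the following minimal sketch evaluates Equation 2 for the LSSL mode by locating the interface crossing point through Fermat's principle (equivalent to imposing Snell's law) and summing the four segment travel times. The flat interface at z = 0, the element and pixel coordinates, and the velocity values are assumptions made for this example, not the prototype's actual geometry or material data.

```python
# Minimal sketch of the refracted backpropagation time of Equation 2 for the
# LSSL mode (L-wave legs in the wedge, S-wave legs in the steel). The flat
# interface at z = 0, the coordinates, and the velocities are illustrative
# assumptions, not the prototype's actual geometry or material data.
import numpy as np
from scipy.optimize import minimize_scalar

C_W_L = 2330.0  # assumed L-wave velocity in the wedge (m/s)
C_M_S = 3230.0  # assumed S-wave velocity in the rail steel (m/s)

def refracted_time(elem, pixel, c_wedge, c_medium):
    """Two-leg travel time: element (in the wedge, z < 0) -> interface point
    (y, 0) -> pixel (in the medium, z > 0). Minimizing over the crossing
    ordinate y enforces Snell's law (Fermat's principle)."""
    def time_via(y):
        d_wedge = np.hypot(y - elem[0], elem[1])      # segment in the wedge
        d_medium = np.hypot(pixel[0] - y, pixel[1])   # segment in the medium
        return d_wedge / c_wedge + d_medium / c_medium
    lo, hi = min(elem[0], pixel[0]) - 0.1, max(elem[0], pixel[0]) + 0.1
    return minimize_scalar(time_via, bounds=(lo, hi), method="bounded").fun

def backprop_time_lssl(tx, rx, pixel):
    """tau_ij,yz for the LSSL mode: d(1)/c_w + d(2)/c_m on the transmit side
    plus d(3)/c_m + d(4)/c_w on the receive side."""
    return (refracted_time(tx, pixel, C_W_L, C_M_S) +
            refracted_time(rx, pixel, C_W_L, C_M_S))

# Example: elements 20 mm above the interface, focal point P at y = z = 30 mm.
tau = backprop_time_lssl(tx=(0.0, -0.020), rx=(0.010, -0.020), pixel=(0.030, 0.030))
print(f"LSSL backpropagation time: {tau * 1e6:.2f} us")
```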
It was previously shown that the compounding of multiple
wave modes can dramatically increase the array gain (Lanza
di Scalea et al. 2017; Sternini et al. 2019a, 2019b). In this paper,
only S-waves are considered in the rail steel because of the use
of the shear wedge that maximizes S-wave refractions.
In order to generate the final image, the raw waveforms
are analyzed via their Hilbert transform (analytic signal representation), as is customary in SAF (Frazier and O'Brien 1998).
Specifically, each waveform is decomposed into its in-phase
and phase-quadrature components, and the final image is built
by computing the modulus of these two contributions at each
pixel P(y, z).
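A minimal sketch of this envelope-based delay-and-sum computation for a single pixel is given below. It assumes the raw waveforms are stored as a (transmissions × receivers × samples) array, that the backpropagation delays of Equation 2 have already been evaluated for the pixel, and an illustrative sampling frequency; these are assumptions, not the prototype's actual data layout.

```python
# Minimal sketch of the Hilbert-transform (analytic signal) delay-and-sum for
# one pixel. The data layout and the sampling frequency are assumptions.
import numpy as np
from scipy.signal import hilbert

FS = 25e6  # assumed sampling frequency (Hz)

def saf_pixel_value(waveforms, tau, fs=FS):
    """Beamform a single pixel P(y, z).
    waveforms: (n_tx, n_rx, n_samples) raw RF traces
    tau:       (n_tx, n_rx) backpropagation times (s) for this pixel
    Returns the modulus of the coherently summed analytic signal."""
    analytic = hilbert(waveforms, axis=-1)                 # in-phase + j*quadrature
    idx = np.clip(np.round(tau * fs).astype(int), 0, waveforms.shape[-1] - 1)
    tx_ix, rx_ix = np.indices(idx.shape)                   # transmit/receive pair indices
    summed = analytic[tx_ix, rx_ix, idx].sum()             # coherent delay-and-sum
    return np.abs(summed)                                  # envelope at the pixel
```

Evaluating this sum over a grid of pixels yields the final SAF image.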
Sparse SAF and Emission Using Subarrays
The general SAF scheme in full matrix capture (FMC) mode
requires emitting from each individual element of the trans-
ducer array sequentially (one channel at a time) with the full
aperture acting in reception for each transmission. However,
utilizing all possible transmissions slows down the imaging
process and increases the computational burden. This is why, particularly in the medical imaging field, "sparse" transmission schemes are being considered to increase imaging speed without sacrificing image quality (Karaman et al. 1995).
Since imaging speed is inversely proportional to the number
of transmissions, the sparse SAF technique utilized in the rail
flaw imaging prototype employs only a subset of all possible
transmission events. In order to compensate for the limited
energy transmissible by a single element at high frame rates,
multiple elements (a subarray) are fired at once (Lockwood
et al. 1998). As shown in Figure 5a, for example, an 8-element array transmits only three defocused circular waves, using 3-element subapertures, in place of eight consecutive single-element firings. In each transmit event i, the acoustic fields of the phased subaperture elements superimpose to form a circular wavefront, so that the transmission of the 3-element subaperture can be modeled as a virtual element (point source)
placed behind the physical array. In the transmit beamform-
ing, an array of virtual elements replaces the physical transmit subapertures when the DAS ray paths are computed. As
shown in Figure 5b, each transmit beam can be properly time
delayed by calculating the ray path connecting the virtual array
element and the focus point P, so that the three transmitted
wave fronts are compounded coherently at an on-axis focus. By
adjusting the time delays, the synthetic focus can be achieved
at any point in the region of interest (ROI), such as an off-axis
location in Figure 5c. The ability to dynamically focus the
defocused beams at various locations ensures an acceptable
resolution of the SAF images throughout the ROI. This is par-
ticularly important for the imaging of rail flaws since the size of
the transverse-type defects can be fairly large compared to the
physical aperture of the array, thus occupying the full height
of the ROI. For the 64-element array in the imaging prototype,
the authors have found that using eight, 17-element subarrays
with a 9-element-wide pitch between virtual elements (the first
and last firings have to discard part of the subaperture that is
beyond the physical array element numbers) is a reasonable
compromise between imaging speed and image quality (reso-
lution and signal-to-noise ratio [SNR]).
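The sketch below illustrates the virtual-element concept: it computes the per-element firing delays that make a subaperture radiate the diverging wavefront of a point source behind the array, together with the transmit leg used in the DAS summation. The element pitch, virtual-source standoff, and velocity are illustrative assumptions, and the wedge refraction of Equation 2 is omitted here for clarity.

```python
# Minimal sketch of subarray emission with a virtual element (point source)
# behind the physical array, and of the transmit leg used in the delay-and-sum.
# Pitch, standoff, and velocity are illustrative assumptions, and the wedge of
# Figure 4 is omitted here for clarity.
import numpy as np

PITCH = 0.6e-3     # assumed element pitch (m)
C = 3230.0         # assumed beamforming velocity (m/s)
STANDOFF = 5.0e-3  # assumed depth of the virtual source behind the array (m)

def firing_delays(sub_element_y, virtual_y):
    """Per-element firing delays (s), referenced to the instant at which the
    virtual source at (virtual_y, -STANDOFF) is considered to emit. Elements
    farther from the virtual source fire later, so the subaperture radiates a
    diverging (defocused) circular wave."""
    dist = np.hypot(sub_element_y - virtual_y, STANDOFF)
    return dist / C

def transmit_time(virtual_y, pixel_y, pixel_z):
    """Transmit leg of the DAS ray path: virtual element -> focus point P(y, z)."""
    return np.hypot(pixel_y - virtual_y, pixel_z + STANDOFF) / C

# Example: a 3-element subaperture centered at y = 0 and a focus at (2 mm, 30 mm).
sub_y = np.array([-PITCH, 0.0, PITCH])
print("firing delays (ns):", firing_delays(sub_y, virtual_y=0.0) * 1e9)
print("transmit time (us):", transmit_time(0.0, 2e-3, 30e-3) * 1e6)
```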
Quasi Real-Time Rail Flaw Image Display in 3D
The prototype includes a GUI that has been specifically
designed for the rail flaw imaging application. After the setup
configuration of the multiplexer, the user starts the scanning
process by moving the probe along the transverse direction
of the rail (perpendicularly to the imaging Y-Z plane). The
parallel computation capability of the GPU in the host computer
achieves quasi real-time beamforming of the SAF images with
a frame rate of ~25 Hz using an eight-transmission modality
(Martin-Arguedas et al. 2012). The frame rate limit in the
system comes from the data transmission and conversion
hardware. The theoretical frame rate limit is much higher. As
shown in Figure 6, the quasi real-time 3D point cloud display
is created by compounding the beamformed 2D images at
each transverse position tracked by the encoder. The raw 2D
SAF image slices are displayed using a –30 dB threshold while
the 3D display highlights only the pixels with intensity above
the –15 dB threshold. To distinguish image slices of different
signal strengths in the volumetric compounding, each 2D
image is normalized by the maximum intensity value in the
total collection of 3D pixels. Such a normalization process calibrates the decibel levels of noisy image slices against those containing a strong reflection, suppressing any noise-only
pixels between different image slices. In the 3D display, the
algorithm performs this normalization adaptively by retain-
ing the maximum intensity value from the previous 2D image
and updating it if a larger maximum value is obtained. Notice
that the temporary display of the 3D point cloud is only for an
initial visualization of any strong reflections, including artifacts
that could affect the final size estimation. A post-processing
algorithm is needed to extract accurate quantitative informa-
tion regarding a possible internal flaw.
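A minimal sketch of this adaptive, running-maximum normalization and decibel thresholding is shown below. The –30 dB and –15 dB thresholds follow the text, while the streaming data structure, variable names, and the per-slice display normalization are assumptions made for this illustration.

```python
# Minimal sketch of the running-maximum normalization and dB thresholding used
# for the 3D point-cloud display. The streaming data structure is an assumption.
import numpy as np

DB_SLICE = -30.0   # display threshold for the raw 2D SAF slices (dB)
DB_VOLUME = -15.0  # display threshold for the compounded 3D point cloud (dB)

class PointCloudDisplay:
    def __init__(self):
        self.running_max = 0.0  # largest envelope value seen over all slices
        self.slices = []        # (encoder position, 2D envelope image)

    def add_slice(self, envelope_2d, encoder_position):
        """Accumulate one beamformed slice and update the running maximum."""
        self.running_max = max(self.running_max, float(envelope_2d.max()))
        self.slices.append((encoder_position, envelope_2d))

    def slice_display(self, envelope_2d):
        """Raw 2D slice view, thresholded at -30 dB relative to its own maximum."""
        db = 20.0 * np.log10(envelope_2d / envelope_2d.max() + 1e-12)
        return np.where(db >= DB_SLICE, db, np.nan)

    def volume_points(self):
        """Return (i, j, x, dB) points above the volume threshold, with every
        slice normalized by the running maximum over the slices collected so far."""
        points = []
        for x, img in self.slices:
            db = 20.0 * np.log10(img / self.running_max + 1e-12)
            i, j = np.nonzero(db >= DB_VOLUME)   # pixel indices in the Y-Z plane
            points.extend(zip(i, j, [x] * len(i), db[i, j]))
        return points
```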
Post-Processing of Volumetric SAF Images
Post-processing algorithms have been developed to further
analyze the volumetric SAF images in order to extract the final
size and shape of the flaw. The flowchart illustrating the steps
taken in post-processing is shown in Figure 7. Referring to the
schematic on the upper right, the SAF image slices are beam-
formed in the vertical plane, while the final plane of interest
is the transverse plane. To prepare for image processing, the
point cloud is first resized to high resolution through bilinear
interpolation and converted from the decibel level (–40 to
0 dB) to an 8-bit grayscale, as shown in Figure 7a with two
sample slices both in the vertical plane and the transverse
plane. The volumetric image first goes through a coupled
dilation-erosion operation, where the intensity of each pixel
is first increased and then decreased based on the inten-
sity distribution of the neighboring pixels in 3D. As shown
in Figure 7b, the coupled morphology process blurs the voids between the grating lobes that are caused by the Rayleigh diffraction limit of the beamformed ultrasonic waves. Following the
dilation and erosion operation, the volumetric image is flat-
tened to an identified noise level through filtering techniques,
as shown in Figure 7c. Each transverse plane slice is low-pass
Figure 6. GUI runtime window displaying (a) compounded 3D point cloud (–15 dB) and (b) raw 2D SAF image (–30 dB). The refreshing rate is 25 Hz using the improved SAF technique.

Figure 5. Subarray SAF technique for faster and more accurate images: (a) three defocused waves defined by the virtual elements are emitted independently by subarrays. Beamforming in transmission is performed by applying time delays corresponding to a synthetic focus on point P either at (b) on-axis positions or (c) off-axis positions.
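As a sketch of the first post-processing steps described above, the code below converts a volume from the –40 to 0 dB range to 8-bit grayscale and then applies a coupled 3D dilation-erosion (a grayscale closing). The structuring-element size and the placeholder volume are illustrative assumptions; the actual kernel used in the prototype is not specified in the text.

```python
# Minimal sketch of the first post-processing steps: mapping the -40 to 0 dB
# volume to 8-bit grayscale (Figure 7a) and applying the coupled 3D
# dilation-erosion, i.e., a grayscale closing (Figure 7b). The structuring-
# element size and the placeholder volume are illustrative assumptions.
import numpy as np
from scipy import ndimage

def db_to_uint8(volume_db, floor_db=-40.0):
    """Map the [floor_db, 0] dB range to 8-bit grayscale."""
    clipped = np.clip(volume_db, floor_db, 0.0)
    return ((clipped - floor_db) / (-floor_db) * 255.0).astype(np.uint8)

def coupled_dilation_erosion(volume_u8, size=(3, 3, 3)):
    """Each voxel is first increased (dilation) and then decreased (erosion)
    based on its 3D neighborhood, blurring the voids between grating lobes."""
    dilated = ndimage.grey_dilation(volume_u8, size=size)
    return ndimage.grey_erosion(dilated, size=size)

# Example with a random placeholder volume (depth x length x scan position).
volume_db = -40.0 * np.random.rand(64, 64, 32)
closed = coupled_dilation_erosion(db_to_uint8(volume_db))
```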