Nayar (1996) and Baleani et al. (2021). The wire-form is typi-
cally held in a fixed position using either a robot end-effector
or a test station, placing the wire-form within the path of the
telecentric optics. Images of the wire-form’s 2D silhouette are
then captured. To properly assess the critical 3D features of a
given wire-form shape, typically two to three perspective views
are necessary. Analysis is performed on the silhouette images
from the different perspectives to ensure the form is within the
accepted tolerance criteria.
However, repeatably acquiring reliable image data from the different
perspective views requires a high degree of positional control and
accurate alignment of the wire-form within the optical field of view
to avoid erroneous measurements. Automating alignment and positioning
can shorten this process, but each additional articulation position
adds to the time required for a full assessment. Having a methodology
that requires minimal part manipulation, uses hardware
capable of fast data acquisition and analysis, and provides all
the necessary perspectives to map the wire-form shape fully is,
therefore, critical to achieving 100% in-process part inspection.
Methodology for Rapid High-Fidelity 3D Wire-Form
Reconstruction
This section describes the sensor technology (laser-projected
structured light sensors), their technical capabilities, and key
considerations in data collection. Common application use
cases of these sensors are also discussed, along with a critical
aspect of this study: the simultaneous operation of multiple
sensors and the fusion of their data outputs.
Structured Light Sensors
Structured light imaging operates on principles similar to stereo
vision techniques. In stereo vision, a triangulation relationship
is formed between two camera sensors and the object being
imaged. In structured light imaging, a single camera sensor is
used in conjunction with a pattern-projection light source to
form a triangulation relationship with the object to be inspected.
The projected pattern illuminates the object while the camera
sensor captures an image of the pattern as deformed by the
object surface. If the camera sensor and the pattern projec-
tion light source have been calibrated, it is then possible to
reconstruct the 3D object surface by analyzing correspond-
ing features between the projected pattern and the pattern
observed in the captured image. Several factors impact the
overall dimensional accuracy of a 3D structured light sensor,
including how the pattern is controlled and projected onto the
object surface through modulation, the selected optics, and the
light source. For more in-depth discussion regarding the coding
and decoding of projected structured light patterns, how these
patterns are controlled through modulation systems, and the
pros and cons of strategies for 3D reconstruction, please refer to
Yang and Gu (2024) and Bell et al. (2016).
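To make the triangulation relationship concrete, the sketch below intersects one camera pixel ray with the plane of projector rays corresponding to a single decoded fringe column. This is a minimal illustration, not any vendor's algorithm; the intrinsic matrices `K_cam` and `K_proj` and the projector pose `R, t` are assumed to come from a prior calibration.

```python
import numpy as np

def triangulate(uv_cam, col_proj, K_cam, K_proj, R, t):
    """Intersect a camera pixel ray with the projector plane defined by
    one decoded fringe column (camera frame = world frame).

    R, t map projector-frame coordinates into the camera frame:
    x_cam = R @ x_proj + t, so t is the projector center in camera coords.
    """
    # Camera ray through the pixel, originating at the camera center (origin).
    d = np.linalg.inv(K_cam) @ np.array([uv_cam[0], uv_cam[1], 1.0])
    # The decoded column defines a plane of projector rays.  Two directions
    # spanning that plane (projector coords): rays through (col, 0) and (col, 1).
    Kp_inv = np.linalg.inv(K_proj)
    a = Kp_inv @ np.array([col_proj, 0.0, 1.0])
    b = Kp_inv @ np.array([col_proj, 1.0, 1.0])
    n = R @ np.cross(a, b)            # plane normal, rotated into camera frame
    # The plane passes through the projector center t:  n . (x - t) = 0.
    s = (n @ t) / (n @ d)             # ray parameter at the intersection
    return s * d                      # 3D point in the camera frame
```

In practice the decoded column comes from unwrapping the binary fringe sequence at each camera pixel; the intersection above is then evaluated per pixel to form the point cloud.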
Within industry, laser-based structured light projec-
tion sensors are used for a variety of applications. The most
prominent is identifying parts and their orientation in 3D space
for bin-picking robotic systems (Kratky 2019). This arrangement
typically has a stationary structured light sensor that scans the
bin and reports the position and orientation of parts for a robot
system to automatically collect them for production. Other
uses relate to applications that require tracking changes in
part geometry over time, such as providing adaptive feedback
in additive manufacturing processes (Garmendia et al. 2018).
Having a stationary sensor also makes it possible to track
surface changes due to evolving environmental conditions
(Zaidan et al. 2024) or to track dunnage on conveyors within
warehouses.
There are, however, application-specific considerations
that must be investigated thoroughly to ensure these sensors
operate as needed. Object surface finish and its response to the
wavelength of light from the structured light sensor can con-
siderably impact the ability to properly receive the deformed
projection pattern from the surface. This can have a dramatic
impact on the decoding analysis process for 3D reconstruc-
tion and may result in missing point cloud data. The overall
surface geometry of the object being investigated also needs to
be taken into consideration with regards to the experimental
setup. If the geometry prevents the light pattern from reflect-
ing back to the camera sensor, holes or gaps will appear in the
data (Gupta et al. 2011), which can only be filled by articulating
either the sensor or the object relative to each other (Park and
Kak 2008). This is commonly performed within industry when
reconstructing challenging components (Antolin-Urbaneja et
al. 2024). The drawback is that having more points of articula-
tion increases the time required to produce a final 3D render-
ing and necessitates more sophisticated point cloud analysis
algorithms to fuse data from different perspectives without
introducing additional error sources.
Common World Coordinate System (WCS) Calibration
If only a single structured light sensor unit is used, calibra-
tion is typically carried out using a 2D calibration target. The
target is placed at various positions and orientations relative
to the sensor—while remaining within its field of view—
and data is acquired. This process allows the camera and
pattern-projection light source to be calibrated relative to
each other and establishes an internal coordinate system for
the sensor. When more than one sensor is used, this method
can still be applied to provide a common WCS, provided all
sensors are fixed and have a clear view of the 2D calibration
target as it is positioned within the setup array’s effective field
of view (Kaiser et al. 2024). However, this approach generally
works only when all sensors are oriented roughly in the same
direction relative to the part being inspected.
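When two fixed sensors can both observe the same 2D target pose, their relative extrinsics follow directly from composing the two estimated target poses. A minimal sketch (function names are hypothetical; each `T_x_target` is assumed to come from that sensor's own target calibration):

```python
import numpy as np

def to_h(R, t):
    """Pack a 3x3 rotation and a translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def sensor_a_from_b(T_a_target, T_b_target):
    """Map sensor-B coordinates into sensor-A coordinates via the shared
    target pose: x_a = T_a_target @ inv(T_b_target) @ x_b."""
    return T_a_target @ np.linalg.inv(T_b_target)
```

Chaining such pairwise transforms breaks down once sensors face the part from opposing directions and no single target pose is visible to all of them, which motivates the 3D artifact approach described next.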
Due to the complex 3D surface of the wire-forms, fully
reconstructing a point cloud representation of the wire-form
apex requires multiple structured light sensor perspectives.
This can be accomplished through articulation, though at
the cost of inspection time and the need for advanced point-
cloud meshing strategies. However, in the interest of meeting
in-process monitoring cycle-time requirements, a 3D cal-
ibration artifact approach was used, similar to calibration
objects applied in CMM studies (Agapiou and Du 2007). Using
an array of 3D structures, each with controlled dimensions
thoroughly verified through external testing, allows multiple
sensors to be calibrated relative to the coordinate system
defined by the artifact, as long as it remains within their fields
of view.
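The per-sensor calibration this enables is, in essence, a rigid alignment of each sensor's measured sphere centers onto the externally verified artifact coordinates. A sketch of that step using the standard Kabsch/SVD solution (illustrative only; the production calibration routine may differ):

```python
import numpy as np

def rigid_align(measured, reference):
    """Least-squares rigid transform (Kabsch) taking sphere centers measured
    in one sensor's frame onto their matched artifact-frame coordinates.
    Returns R, t such that reference ~= R @ measured + t."""
    mc, rc = measured.mean(0), reference.mean(0)
    H = (measured - mc).T @ (reference - rc)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = rc - R @ mc
    return R, t
```

Running this once per sensor, against the same artifact, places every sensor's output in the artifact-defined WCS.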
There are several advantages to this methodology when
key aspects of the artifact design are considered upfront—most
notably, selecting materials that minimize environmental
thermal effects and their impact on the calibration process. By
using materials such as Invar, ceramics, and appropriate adhe-
sives—all having very low coefficients of thermal expansion—
the artifact structure can be reliably maintained and trusted
across a wider range of temperatures as commonly experi-
enced in different manufacturing plant environments. Once
the calibration artifact has been thoroughly validated using a
highly sensitive technique such as CMM testing, the very low
coefficients of thermal expansion of the materials in the artifact
design ensure that even minor sensor shifts due to facility
temperature changes can be corrected by referencing the
artifact-defined coordinate system. With scheduled downtime
allotted for sensor calibration, the sensors can maintain consis-
tent mapping to the artifact WCS without significant drift.
As a result, 3D point clouds from each structured light
sensor can be rapidly meshed, since all sensors are already
configured to the same coordinate system. This avoids the
need for more advanced meshing and shape-model matching
techniques—such as iterative closest point (ICP) registration
(Besl and McKay 1992)—as part of the data acquisition process,
limiting them instead to post-acquisition analysis and signifi-
cantly improving overall data-collection speed. This is the main
methodology applied in this paper.
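For context, the ICP registration deferred here to post-acquisition analysis iterates two steps: match each source point to its nearest target point, then solve the best rigid transform for those matches. A deliberately small, brute-force sketch (real implementations use k-d trees, outlier rejection, and point-to-plane error metrics):

```python
import numpy as np

def icp(source, target, iters=20):
    """Toy point-to-point ICP: nearest-neighbor matching followed by a
    Kabsch update each iteration.  Brute-force O(N*M) matching; for
    illustration only, not production point clouds."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Nearest target point for every source point.
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(1)]
        # Kabsch step on the current correspondences.
        sc, tc = src.mean(0), matched.mean(0)
        H = (src - sc).T @ (matched - tc)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = tc - R @ sc
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

Because the sensors here already share the artifact WCS, this refinement is optional rather than a prerequisite for meshing, which is what preserves the data-collection speed.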
Experimental Setup for Wire-Form 3D Measurements
This section describes the structured light sensor array developed
for 3D reconstruction measurements of the wire-form apex, along with
the calibration artifact processing required for accurate representation
of the wire-form as viewed by the optical system.
Structured Light Sensor Array
The full structured light sensor array used in this study is
shown in Figure 4. Five structured light 3D sensors are posi-
tioned around the set fixture location of the wire-form, such
that the apex of the hairpin geometry can be easily viewed by
all sensors. These sensors project laser-based binary fringe
patterns onto the object surface, which are then captured by
an internal camera sensor to decode and reconstruct the 3D
point cloud representation of the object. The fringe patterns
are controlled using a mechanical galvanometer within each
sensor. Each sensor can produce up to 3.2 million 3D points
per scan and maintains a scanning area of 118 mm × 78 mm
when positioned at the optimal working distance of 181 mm.
This results in a point-to-point spacing of 55 μm for each
individual sensor. Sensor acquisition settings—such as laser
power and exposure settings—were optimized in this study to
produce reliable 3D renderings of both the calibration artifact
and the wire-form while maintaining data acquisition times of
450 ms. Each sensor is triggered programmatically in sequence
to prevent laser projections from other sensors from interfering
with 3D reconstruction. It is also worth noting that the number
of sensors required for full 3D reconstruction of the wire-form
apex was reduced over time from five to only three or four,
depending on the part being inspected, improving overall
acquisition and analysis time.
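The quoted resolution figures are internally consistent: spreading 3.2 million points over the 118 mm × 78 mm scan area implies a mean point pitch of roughly √(area / N), close to the stated 55 µm spacing. As a quick check:

```python
# Mean point-to-point pitch implied by the quoted sensor specifications.
n_points = 3.2e6
area_mm2 = 118 * 78              # scan area at the 181 mm working distance
pitch_um = 1000 * (area_mm2 / n_points) ** 0.5
print(round(pitch_um, 1))        # ~53.6 um, i.e. about the quoted 55 um
```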
3D Calibration Artifact
Figure 5 shows the custom calibration artifact used in this
study to unify the coordinate systems of the structured light
sensors into a common WCS defined by the artifact. The
physical artifact consists of Grade 20 alumina (aluminum oxide) ceramic
spheres with diameters of 10 mm, 15 mm, and 20 mm. Figure 5a
shows a color-coded CAD drawing of the sphere arrangement,
where green represents 10 mm, pink represents 15 mm, and
blue represents 20 mm. The spheres rest on posts made from
Invar 36 nickel–iron alloy, set into a block of the same material.
The spheres are held in place using a two-component epoxy,
selected specifically for its ultra-low coefficient of thermal expan-
sion. The final sphere positions and diameters of the calibration
artifact were mapped in a controlled environment using a CMM
system to verify the true final assembly structure with micron-
level accuracy. This information was then translated into a rep-
resentative shape model for use in sensor calibration.
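Extracting each sphere's center and diameter from sensor or CMM point data is typically posed as a least-squares sphere fit. A minimal algebraic version is sketched below (illustrative; production metrology software generally adds outlier filtering and an iterative geometric refinement):

```python
import numpy as np

def fit_sphere(pts):
    """Linear least-squares sphere fit.  From |p - c|^2 = r^2 we get
    |p|^2 = 2 c.p + (r^2 - |c|^2), which is linear in c and the
    combined constant k = r^2 - |c|^2."""
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = sol[:3]
    r = np.sqrt(sol[3] + c @ c)
    return c, r
```

Comparing the fitted centers and radii against the CMM-verified values gives a direct check of each sensor's dimensional accuracy in the artifact WCS.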
Coordinate Measuring Machine (CMM) for Comparison
Stator hairpin wire-forms, the calibration artifact, and a sphere
planar test object were all measured and verified on a CMM
metrology system. Geometries were scanned using automated
pathing routes to position either a contact touch probe or a
Figure 4. The multisensor structured light array experimental setup for
performing 3D shape scans of electric motor wire-forms. The full setup
incorporated five sensors in total.
Materials Evaluation, January 2026