• Integration with data analysis
tools. Visual data captured by robotic
inspection systems can be integrated
with data analysis tools and software
for further analysis. Advanced
image-processing algorithms can
detect patterns, anomalies, or defects
in visual data, supporting predictive
maintenance and asset management
strategies. By leveraging the power
of data analytics, companies can
optimize maintenance schedules,
extend asset lifespan, and reduce
operational costs.
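As a rough illustration of the kind of image-processing step such analytics pipelines perform, the sketch below flags image patches whose brightness deviates strongly from the rest of the frame. The function name, patch size, and threshold are illustrative choices, not taken from any particular product.

```python
import numpy as np

def flag_anomalous_patches(image, patch=8, z_thresh=3.0):
    """Split a grayscale image into patches and flag those whose mean
    intensity deviates strongly from the image-wide patch statistics.
    A crude stand-in for the defect-detection step described above."""
    h, w = image.shape
    means, coords = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            means.append(image[y:y + patch, x:x + patch].mean())
            coords.append((y, x))
    means = np.asarray(means)
    mu, sigma = means.mean(), means.std() + 1e-9
    z = np.abs(means - mu) / sigma
    return [c for c, score in zip(coords, z) if score > z_thresh]

# Uniform plate with one dark "defect" patch
img = np.full((64, 64), 200.0)
img[16:24, 32:40] = 40.0   # simulated corrosion spot
print(flag_anomalous_patches(img))  # flags the patch at (16, 32)
```

Production systems use far more capable detectors (texture features, learned models), but the principle of scoring deviations from a baseline is the same.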
These advantages make robotic
visual inspection a valuable solution for
industries seeking efficient, accurate,
and safe methods of assessing confined
spaces and maintaining critical assets.
Key Technology: Localization and
Data Geotagging
Robotic localization technology for
autonomous operation and report-
ing is available and is used by both
drones and mobile robots on the plant
level. However, most modern localiza-
tion technology cannot be applied to
confined spaces due to the lack of GPS
reception, weakly textured surfaces, asset
size, and complex geometries.
Current robotic practice in GPS-restricted areas is simultaneous localization and mapping (SLAM) with lidar remote sensing. The lidar builds a point cloud of the environment while a mesh is stitched together as the robot moves. By comparing the point clouds and the mesh, the absolute distance between positions can be computed, and the robot can be located within the asset.
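The scan-to-scan comparison can be shown with a deliberately simplified sketch: if corresponding landmark points in two lidar scans are already matched (real SLAM pipelines establish correspondences via ICP or feature matching, omitted here), the robot's displacement follows from the mean shift of the landmarks.

```python
import numpy as np

def displacement_from_scans(scan_a, scan_b):
    """Estimate the robot's translation between two lidar scans of the
    same landmarks, assuming point correspondences are already known.
    In a static environment, the apparent shift of the landmarks is
    the negative of the robot's own motion."""
    shift = (scan_b - scan_a).mean(axis=0)   # mean landmark shift
    return -shift                            # robot motion

landmarks = np.array([[4.0, 1.0], [6.0, -2.0], [5.0, 3.0]])
scan_t0 = landmarks                          # first scan, robot at origin
scan_t1 = landmarks - np.array([1.0, 0.5])   # robot moved +1 m x, +0.5 m y
print(displacement_from_scans(scan_t0, scan_t1))  # recovers [1.0, 0.5]
```

Full SLAM additionally estimates rotation and refines the map, but this captures why comparing successive scans yields absolute displacement.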
Another much simpler approach is
to provide a 3D model of the asset as
input, measure the distance from the
robot to a specific point on the asset,
and compare this with the distance cal-
culated using the corresponding position
in the 3D model. To increase accuracy
and repeatability, additional navigation
sensors are integrated into the localiza-
tion process. These include an inertia
measurement unit (IMU), odometry
(distance measured by the driving
wheels), and kinematic constraints. All
this data is combined using a particle
filter and/or a Kalman filter. This allows
for the calculation of the robot’s 3D pose
(position and orientation within the
asset) and the specific location in the 3D
model where the inspection camera is
directed (Figure 2).
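A minimal one-dimensional Kalman update illustrates the fusion idea: odometry drives the prediction, and a lidar-derived absolute position corrects it, each weighted by its variance. The numbers and variable names below are illustrative; a real system fuses the full 3D pose.

```python
def kalman_fuse(x, p, odom_delta, q, z, r):
    """One predict/update cycle of a 1D Kalman filter, a minimal
    stand-in for the odometry + lidar fusion described above.
    x, p          : prior position estimate and its variance
    odom_delta, q : wheel-odometry increment and its noise variance
    z, r          : lidar-derived absolute position and its noise variance
    """
    # Predict: advance the state with odometry, inflating uncertainty
    x_pred = x + odom_delta
    p_pred = p + q
    # Update: blend in the lidar measurement, weighted by confidence
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 0.01                       # start at origin, fairly certain
x, p = kalman_fuse(x, p, odom_delta=1.0, q=0.04, z=1.1, r=0.05)
print(x, p)                            # estimate lands between odometry and lidar
```

Particle filters follow the same predict/update logic but represent the pose distribution with samples, which copes better with the multimodal ambiguities of confined spaces.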
As a result, the system can geotag all
images to the 3D model and store them
in a database along with the camera
settings at the time of capture, such
as zoom level, lighting, and resolution
(Figure 3).
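A geotag record of this kind could be modeled as follows; the field names and values are hypothetical, chosen only to show what a database entry linking an image to the 3D model might carry.

```python
from dataclasses import dataclass, asdict

@dataclass
class GeotaggedImage:
    """Hypothetical record schema tying a captured image to its
    position on the asset's 3D model plus the camera settings at
    the time of capture, as described above."""
    image_file: str
    model_xyz: tuple        # coordinates on the 3D model (m)
    zoom_level: float
    lighting_lux: int
    resolution: tuple       # (width, height) in pixels

tag = GeotaggedImage("weld_017.jpg", (2.4, 0.8, 5.1),
                     zoom_level=3.0, lighting_lux=800,
                     resolution=(4096, 3072))
print(asdict(tag))          # ready to store as one database row/document
```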
Notes—either as text or created with
a drawing editor—can be added to the
images during the inspection or later
when creating the documentation. These
annotated images are then stored in the
database. The inspection report can be
generated automatically using templates
(Figure 4).
A primary goal is to minimize the
time spent on-site inspecting the
asset. The planning of the inspection,
based on the inspection plan, can be
completed before the mission by uti-
lizing a 3D model and a virtual repre-
sentation of the robot and inspection
camera system. This can be achieved
through a sophisticated simulation tool,
which enables running the inspection
scenario to assess technical feasibility
Figure 2. The approach to calculate the 3D pose of a robotic system in a confined space and to localize the inspection camera view on the asset being observed: (a) robot with distance sensors (lidar), IMU, and odometry; (b) confined environment; (c) localization of robot and inspection data.

Figure 3. 3D digital twin software calculates the view cone of an inspection camera and automatically links the captured image with the correct asset coordinates. The data can be edited and amended with comments and sketches.

Materials Evaluation, July 2024, p. 51
and provides an opportunity to train
and rehearse the inspection (Figure 5).
Recommended Practices for
Robotics-Based Remote Visual
Inspection
Close visual inspection is a top priority
for robotic applications, but there are
discussions about whether robotics-
based remote visual inspection (RVI)
can fully replace close visual inspection
(CVI) performed by a human. Several
RVI limitations have been identified,
including the robot’s distance from
the inspection surface, limited viewing
angles, lack of tactile feedback, absence
of surface preparation or deployment
of inspection aids, and challenges with
artificial lighting. Due to these limita-
tions, it is advised not to claim robotics-
based RVI as a complete replacement for
human CVI. Instead, robotic inspection
should complement conventional CVI
by identifying areas that require further
examination.
Standards such as ASME V Article 9
[6] and BS EN 17637 [7] specify spatial
resolution requirements for CVI and
direct visual inspection (DVI), typi-
cally around 3 line pairs per millimeter
(lp/mm) under optimal viewing con-
ditions, based on human eye acuity.
Although ASME V Article 9 also ref-
erences the visibility of fine lines, this
is not considered a reliable measure
of spatial resolution. To comply with
ASME V Article 9, robotics-based RVI
images should demonstrate a spatial
resolution of approximately 3 lp/mm,
equivalent to that of CVI and DVI.
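The 3 lp/mm requirement translates directly into a limit on field-of-view width for a given sensor: Nyquist sampling needs at least two pixels per line pair. The quick calculation below is illustrative arithmetic only, not a calibrated camera assessment, and real lenses usually need a safety margin above Nyquist.

```python
def max_fov_width_mm(sensor_px_width, lp_per_mm=3.0, px_per_line_pair=2.0):
    """Largest field-of-view width (mm) at which a camera can still
    resolve the given spatial resolution, assuming ideal optics and
    Nyquist-limited sampling (two pixels per line pair)."""
    px_per_mm = lp_per_mm * px_per_line_pair   # pixels needed per mm
    return sensor_px_width / px_per_mm

# A 4096-pixel-wide sensor targeting 3 lp/mm can cover at most
# roughly 683 mm of surface per frame:
print(max_fov_width_mm(4096))
```

In practice this means either a narrow field of view, a close standoff distance, or optical zoom; it is one reason standoff distance appears among the RVI limitations listed above.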
The “HOIS Guidance on Image
Quality for UAV/UAS–Based External
Remote Visual Inspection in the Oil
& Gas Industry” [5] provides detailed
guidance on maintaining image
quality during uncrewed aerial vehicle
(UAV) inspections within the oil and
gas sector. Its goal is to ensure that
the images obtained are of sufficient
quality for engineering assessments
of component integrity, aiding asset
operators in making critical decisions
about continued operation. While the
HOIS guidance focuses exclusively on
FEATURE | ROBOTIC VT
[Figure 5 diagram: 3D CAD data for new or old assets (or a universal 2D drawing/sketch) feeds a 3D asset builder and 3D dynamic reconstruction into a 3D digital twin; planning and simulation (feasibility check, mission and inspection planning, training and rehearsal) precede mission execution, providing 3D spatial awareness at any time, geotagging of all data, 100% repeatability, no risk for inspectors, and automated reporting.]
Figure 5. Integrating planning and simulation, using the 3D virtual representation of the asset and the kinematic representation of the robot with
the camera and the cables.
Figure 4. 3D digital twin: (a) editor to amend and complete findings and recommendations, with the possibility to annotate inspection data; (b) automatically generated inspection report.