Semantic Mapping for Autonomous Subsea Intervention.
Guillem Vallicrosa, Khadidja Himri, Pere Ridao, Nuno Gracias. Published in: Sensors (Basel, Switzerland) (2021)
This paper presents a method to build a semantic map that assists an underwater vehicle-manipulator system in performing intervention tasks autonomously on a submerged man-made pipe structure. The method integrates feature-based simultaneous localization and mapping (SLAM) with 3D object recognition against a database of a priori known objects. The robot uses Doppler velocity log (DVL), pressure, and attitude and heading reference system (AHRS) sensors for navigation and is equipped with a laser scanner providing non-coloured 3D point clouds of the inspected structure in real time. The object recognition module recognises the pipes and objects within each scan and passes them to the SLAM module, which adds them to the map if they have not yet been observed; otherwise, it uses them to correct the map and the robot navigation. The SLAM provides a consistent map and drift-free navigation. Moreover, it provides a global identifier for every observed object instance together with its pipe connectivity. This information is fed back to the object recognition module, where it is used to estimate the object classes using Bayesian techniques over the set of object classes that are compatible in terms of pipe connectivity. This allows all previously available object observations to be fused, improving recognition. The outcome of the process is a semantic map made of pipes connected through valves, elbows, and tees, conforming to the real structure. Knowing the class and the position of objects will enable high-level manipulation commands in the near future.
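The Bayesian class-estimation step described above can be illustrated with a minimal sketch (not the authors' code): repeated, noisy recognitions of one object instance are fused recursively, with the candidate set first restricted to the classes compatible with the object's pipe connectivity. The class names, confusion-matrix values, and observation sequence below are illustrative assumptions.

```python
def bayes_class_update(prior, likelihood, observations, compatible):
    """Fuse class observations for one object instance.

    Classes outside `compatible` (ruled out by pipe connectivity)
    receive zero probability before fusion begins.
    """
    # Restrict the prior to connectivity-compatible classes.
    post = {c: (prior[c] if c in compatible else 0.0) for c in prior}
    for z in observations:
        # Bayes update: posterior ∝ P(observed z | true class c) * prior.
        post = {c: post[c] * likelihood[z][c] for c in post}
        total = sum(post.values())
        post = {c: p / total for c, p in post.items()}
    return post

# Assumed confusion model: P(observed class z | true class c).
likelihood = {
    "valve": {"valve": 0.7, "elbow": 0.2, "tee": 0.1},
    "elbow": {"valve": 0.2, "elbow": 0.6, "tee": 0.2},
    "tee":   {"valve": 0.1, "elbow": 0.2, "tee": 0.7},
}
prior = {"valve": 1 / 3, "elbow": 1 / 3, "tee": 1 / 3}

# Three scans observed this object; its connectivity shows only two
# joined pipes, so a tee (three connections) is ruled out before fusion.
post = bayes_class_update(prior, likelihood,
                          ["valve", "elbow", "valve"],
                          {"valve", "elbow"})
print(max(post, key=post.get))  # most probable class after fusion
```

In the paper's pipeline, the global object identifier maintained by the SLAM is what allows observations from different scans to be associated with the same instance and fused in this way.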