Multi-LiDAR Mapping for Scene Segmentation in Indoor Environments for Mobile Robots.
Pavel Gonzalez, Alicia Mora, Santiago Garrido, Ramón Barber, Luis Moreno
Published in: Sensors (Basel, Switzerland) (2022)
Nowadays, most mobile robot applications use two-dimensional LiDAR for indoor mapping, navigation, and low-level scene segmentation. However, maps built from a single data type are not enough in a six-degree-of-freedom world. Multi-LiDAR sensor fusion increases a robot's ability to map the surrounding environment at different levels, exploiting the benefits of several data types and offsetting the drawbacks of each individual sensor. This research introduces several techniques to achieve mapping and navigation in indoor environments. First, a scan-matching algorithm based on ICP with a distance-threshold association counter is used as a multi-objective-like fitness function. This fitness is then optimized with Harmony Search, without any initial guess or odometry. A global map is built during SLAM, reducing the accumulated error and yielding better results than odometry-only LiDAR matching. As a novelty, both algorithms are implemented for 2D and 3D mapping, and the resulting maps are overlapped to fuse geometric information at different heights. Finally, a room segmentation procedure is proposed that analyzes this information and avoids the occlusions that appear in 2D maps; its benefits are demonstrated with a door recognition system. Experiments are conducted in both simulated and real scenarios, confirming the performance of the proposed algorithms.
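The abstract describes scan matching driven by an association-count fitness and optimized with Harmony Search rather than gradient-based ICP iterations. The following is a minimal sketch of that idea, not the authors' implementation: a 2D rigid transform (x, y, theta) is searched so that the number of scan points finding a map correspondence within a distance threshold is maximized. All function names, parameter values, and bounds here are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def fitness(pose, scan, map_tree, dist_thresh=0.10):
    """Count scan points whose nearest map neighbor lies within dist_thresh (meters)."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    transformed = scan @ R.T + np.array([x, y])   # apply candidate rigid transform
    d, _ = map_tree.query(transformed)            # nearest-neighbor distances to the map
    return np.count_nonzero(d < dist_thresh)      # association counter

def harmony_search(scan, map_pts, bounds, hms=30, iters=2000, hmcr=0.9, par=0.3, bw=0.02):
    """Derivative-free search over (x, y, theta); no initial guess or odometry required."""
    map_tree = cKDTree(map_pts)
    lo, hi = np.array(bounds)[:, 0], np.array(bounds)[:, 1]
    # Initialize the harmony memory with random poses inside the search bounds.
    memory = np.random.uniform(lo, hi, size=(hms, 3))
    scores = np.array([fitness(p, scan, map_tree) for p in memory])
    for _ in range(iters):
        new = np.empty(3)
        for j in range(3):
            if np.random.rand() < hmcr:                # memory consideration
                new[j] = memory[np.random.randint(hms), j]
                if np.random.rand() < par:             # pitch adjustment
                    new[j] += np.random.uniform(-bw, bw) * (hi[j] - lo[j])
            else:                                      # random selection
                new[j] = np.random.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        score = fitness(new, scan, map_tree)
        worst = np.argmin(scores)
        if score > scores[worst]:                      # replace the worst harmony
            memory[worst], scores[worst] = new, score
    return memory[np.argmax(scores)]                   # best-aligned pose found
```

As a usage example, `bounds` could be `[(-1.0, 1.0), (-1.0, 1.0), (-np.pi, np.pi)]` for a local alignment, with `scan` and `map_pts` given as N x 2 arrays of points in meters; the same count-based fitness generalizes to 3D by extending the pose to six degrees of freedom.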