
CLC number: TP399

On-line Access: 2022-02-28

Received: 2020-07-18

Revision Accepted: 2022-04-22

Crosschecked: 2020-11-18


 ORCID:

Wei WEI

https://orcid.org/0000-0002-8998-045X

Xiaorui ZHU

https://orcid.org/0000-0003-1400-059X


Frontiers of Information Technology & Electronic Engineering 

Accepted manuscript available online (unedited version)


Novel robust simultaneous localization and mapping for long-term autonomous robots


Author(s):  Wei WEI, Xiaorui ZHU, Yi WANG

Affiliation(s):  School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen 518055, China; Lingnan Big Data Research Institute, Zhuhai 519000, China

Corresponding email(s):  weirui9003@gmail.com, xiaoruizhu@hit.edu.cn, wangyi601@aliyun.com

Key Words:  Simultaneous localization and mapping (SLAM); Long-term; Robustness; Light detection and ranging (LiDAR); Visual inertial LiDAR navigation (VILN)



Wei WEI, Xiaorui ZHU, Yi WANG. Novel robust simultaneous localization and mapping for long-term autonomous robots[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2000358

@article{FITEE.2000358,
title="Novel robust simultaneous localization and mapping for long-term autonomous robots",
author="Wei WEI, Xiaorui ZHU, Yi WANG",
journal="Frontiers of Information Technology & Electronic Engineering",
year="in press",
publisher="Zhejiang University Press & Springer",
doi="https://doi.org/10.1631/FITEE.2000358"
}

%0 Journal Article
%T Novel robust simultaneous localization and mapping for long-term autonomous robots
%A Wei WEI
%A Xiaorui ZHU
%A Yi WANG
%J Frontiers of Information Technology & Electronic Engineering
%P 234-245
%@ 2095-9184
%D in press
%I Zhejiang University Press & Springer
doi="https://doi.org/10.1631/FITEE.2000358"

TY - JOUR
T1 - Novel robust simultaneous localization and mapping for long-term autonomous robots
A1 - Wei WEI
A1 - Xiaorui ZHU
A1 - Yi WANG
JO - Frontiers of Information Technology & Electronic Engineering
SP - 234
EP - 245
SN - 2095-9184
Y1 - in press
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2000358
ER -


Abstract: 
Simultaneous localization and mapping (SLAM) is a fundamental task for mobile robots, and long-term robustness is an important property of a SLAM system. When vehicles or robots turn quickly or operate in certain scenarios, such as low-texture environments, long corridors, tunnels, or other environments with repetitive structure, most SLAM systems are prone to failure. In this paper, we propose a novel robust visual inertial light detection and ranging (LiDAR) navigation (VILN) SLAM system, consisting of stereo visual-inertial LiDAR odometry and visual-LiDAR loop closure. The proposed VILN SLAM system performs well with low drift in long-term experiments, even when the LiDAR or visual measurements are occasionally degraded in complex scenes. Extensive experimental results show that robustness is greatly improved in various scenarios compared with state-of-the-art SLAM systems.
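To make the odometry component described above more concrete, the following is a minimal, hypothetical Python sketch of a degradation-aware visual-inertial-LiDAR odometry step: LiDAR scan matching is preferred when its inlier ratio is high, stereo visual odometry is used when the scene has enough texture, and the IMU prediction carries the estimate through short degraded stretches (e.g., a tunnel). All function names, data layouts, and thresholds are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of a degradation-aware sensor-fusion step (not the paper's code).
from dataclasses import dataclass

import numpy as np


@dataclass
class Pose:
    """SE(3) pose stored as a 3x3 rotation matrix R and a 3-vector translation t."""
    R: np.ndarray
    t: np.ndarray

    def compose(self, dR, dt):
        # Apply a relative motion (dR, dt) expressed in the current body frame.
        return Pose(self.R @ dR, self.R @ dt + self.t)


def imu_predict(gyro, accel, dt):
    """Crude IMU propagation (biases and gravity ignored): relative rotation via Rodrigues' formula."""
    angle = np.linalg.norm(gyro) * dt
    axis = gyro / (np.linalg.norm(gyro) + 1e-12)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    dR = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    return dR, 0.5 * accel * dt ** 2


def fuse_step(pose, imu, stereo, lidar, min_tracked=80, min_inlier_ratio=0.6):
    """One odometry step: prefer LiDAR scan matching, fall back to stereo visual odometry,
    and keep the IMU-only prediction when both measurements are degraded."""
    dR, dt = imu_predict(imu["gyro"], imu["accel"], imu["dt"])
    if lidar is not None and lidar["inlier_ratio"] >= min_inlier_ratio:
        dR, dt = lidar["dR"], lidar["dt"]       # geometry-rich scene: trust the scan match
    elif stereo is not None and stereo["tracked"] >= min_tracked:
        dR, dt = stereo["dR"], stereo["dt"]     # texture-rich scene: trust stereo VO
    return pose.compose(dR, dt)

In a full system each modality would contribute residuals to a joint optimization rather than simply overwriting the estimate; the switch above only illustrates the kind of degradation handling the abstract refers to.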

Novel robust simultaneous localization and mapping for long-term autonomous robots

Wei WEI 1, Xiaorui ZHU 1,2, Yi WANG 1
1 School of Mechanical Engineering and Automation, Harbin Institute of Technology (Shenzhen), Shenzhen 518055, China
2 Lingnan Big Data Research Institute, Zhuhai 519000, China
Abstract: Simultaneous localization and mapping (SLAM) is a fundamental task for autonomous mobile robots, and long-term robustness is an important property of a SLAM system. When vehicles or robots rotate quickly or turn in certain scenarios, such as low-texture environments, long corridors, tunnels, or other environments with repetitive structure, most SLAM systems may fail. This paper proposes a novel robust visual inertial LiDAR navigation (VILN) SLAM system, consisting of stereo visual-inertial LiDAR odometry and visual-LiDAR loop closure. The proposed VILN SLAM system can run stably over the long term, even in complex scenes where LiDAR or visual measurements are occasionally degraded. Extensive experimental results show that, compared with state-of-the-art SLAM systems, the robustness of the VILN SLAM system is greatly improved in various scenarios.

Key words: Simultaneous localization and mapping (SLAM); Long-term; Robustness; LiDAR; Visual inertial LiDAR navigation (VILN)
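The loop-closure component named in the abstract can likewise be sketched. The fragment below is an illustrative assumption in the same spirit: a visual place-recognition descriptor proposes candidate keyframes, and a LiDAR-based geometric check must confirm a candidate before a loop constraint is accepted. The descriptor, the centroid-based stand-in for a scan-alignment fitness score, and the thresholds are hypothetical and not taken from the paper.

# Hypothetical sketch of visual-LiDAR loop-closure verification (not the paper's code).
import numpy as np


def visual_candidates(query_desc, keyframe_descs, max_distance=0.3):
    """Indices of keyframes whose global image descriptor lies close to the query descriptor."""
    dists = np.linalg.norm(keyframe_descs - query_desc, axis=1)
    return [i for i, d in enumerate(dists) if d < max_distance]


def lidar_fitness(scan_a, scan_b):
    """Stand-in for a scan-alignment fitness score in (0, 1]; a real system would run ICP or NDT."""
    return float(np.exp(-np.linalg.norm(scan_a.mean(axis=0) - scan_b.mean(axis=0))))


def try_close_loop(query_desc, query_scan, keyframes, min_fitness=0.8):
    """Accept a loop only when a visually similar keyframe also passes the geometric check."""
    if not keyframes:
        return None
    descs = np.stack([k["desc"] for k in keyframes])
    for idx in visual_candidates(query_desc, descs):
        if lidar_fitness(query_scan, keyframes[idx]["scan"]) >= min_fitness:
            return idx  # a loop constraint between the current pose and keyframe idx would be added
    return None

Requiring agreement between the appearance-based candidate and the geometric check is one common way to suppress false loop closures in repetitive environments such as long corridors.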



