
CLC number: TP181

On-line Access: 2017-01-20

Received: 2016-12-29

Revision Accepted: 2017-01-08

Crosschecked: 2017-01-10


 ORCID:

Jian-ru Xue

http://orcid.org/0000-0002-4994-9343


Frontiers of Information Technology & Electronic Engineering  2017 Vol.18 No.1 P.122-138

http://doi.org/10.1631/FITEE.1601873


A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars


Author(s):  Jian-ru Xue, Di Wang, Shao-yi Du, Di-xiao Cui, Yong Huang, Nan-ning Zheng

Affiliation(s):  Lab of Visual Cognitive Computing and Intelligent Vehicle, Xi’an Jiaotong University, Xi’an 710049, China

Corresponding email(s):   jrxue@xjtu.edu.cn

Key Words:  Visual perception, Self-localization, Mapping, Motion planning, Robotic car


Jian-ru Xue, Di Wang, Shao-yi Du, Di-xiao Cui, Yong Huang, Nan-ning Zheng. A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars[J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(1): 122-138.

@article{title="A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars",
author="Jian-ru Xue, Di Wang, Shao-yi Du, Di-xiao Cui, Yong Huang, Nan-ning Zheng",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="18",
number="1",
pages="122-138",
year="2017",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1601873"
}



Abstract: 
The perception systems of most state-of-the-art robotic cars differ markedly from the way a human driver understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual perception, while machine perception of traffic environments needs to fuse information from several different kinds of sensors to meet safety-critical requirements. Second, a robotic car requires nearly 100% correct perception results for autonomous driving, whereas an experienced human driver copes well with dynamic traffic environments in which machine perception easily produces noisy results. In this paper, we propose a vision-centered multi-sensor fusion framework for traffic environment perception in autonomous driving, which fuses camera, LIDAR, and GIS information consistently via both geometric and semantic constraints for efficient self-localization and obstacle perception. We also discuss the robust machine vision algorithms that have been successfully integrated into the framework, which span multiple levels of machine vision techniques, from training data collection, efficient sensor data processing, and low-level feature extraction to higher-level object recognition and environment mapping. The proposed framework has been tested extensively in real urban scenes with our self-developed robotic cars over eight years, and the empirical results validate its robustness and efficiency.
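
To make the fusion cycle described above concrete, here is a minimal sketch, assuming a simplified setting in which a camera-derived lane offset and a GIS lane prior jointly correct the pose estimate (the geometric constraint), while LIDAR returns are clustered into obstacle candidates. The names (Pose, correct_pose, cluster_obstacles, fuse_cycle) and the specific correction rule are hypothetical illustrations, not the authors' published implementation.

# Hypothetical sketch of one vision-centered fusion cycle: a camera lane cue and a
# GIS lane prior refine the vehicle pose, and LIDAR points become obstacle candidates.
# All names and the correction rule are illustrative assumptions, not the paper's API.
from dataclasses import dataclass
from math import cos, sin, hypot
from typing import List, Tuple

@dataclass
class Pose:
    x: float        # metres, map frame
    y: float
    heading: float  # radians

def lateral_error(image_lane_offset_m: float, gis_lane_offset_m: float) -> float:
    """Disagreement between the lane offset measured in the image and the offset
    predicted from the GIS lane geometry under the current pose."""
    return image_lane_offset_m - gis_lane_offset_m

def correct_pose(prior: Pose, error_m: float, gain: float = 0.5) -> Pose:
    """Shift the pose sideways (perpendicular to the heading) by a damped fraction
    of the lane-based lateral error."""
    return Pose(
        x=prior.x - gain * error_m * sin(prior.heading),
        y=prior.y + gain * error_m * cos(prior.heading),
        heading=prior.heading,
    )

def cluster_obstacles(points: List[Tuple[float, float]],
                      radius: float = 1.0) -> List[List[Tuple[float, float]]]:
    """Naive single-linkage clustering of 2D LIDAR returns into obstacle candidates."""
    clusters: List[List[Tuple[float, float]]] = []
    for p in points:
        for c in clusters:
            if any(hypot(p[0] - q[0], p[1] - q[1]) < radius for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def fuse_cycle(prior: Pose, image_lane_offset: float, gis_lane_offset: float,
               lidar_points: List[Tuple[float, float]]):
    """One perception cycle: vision + GIS refine the pose, LIDAR yields obstacles."""
    pose = correct_pose(prior, lateral_error(image_lane_offset, gis_lane_offset))
    obstacles = cluster_obstacles(lidar_points)
    return pose, obstacles

if __name__ == "__main__":
    pose, obstacles = fuse_cycle(
        prior=Pose(10.0, 5.0, 0.0),
        image_lane_offset=1.8,   # metres to the lane centre, measured from the image
        gis_lane_offset=1.5,     # metres to the lane centre, predicted from the GIS map
        lidar_points=[(12.0, 6.0), (12.3, 6.1), (20.0, -3.0)],
    )
    print(pose, len(obstacles), "obstacle candidates")

The damped correction (gain below 1) reflects the fact that any single sensor cue is noisy; in a full system this step would be a filtering or optimization update over many geometric and semantic constraints rather than a one-shot shift.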

A vision-centered multi-sensor fusion approach to self-localization and obstacle perception for robotic cars

Summary: Human driving and autonomous driving differ markedly in how the traffic environment is understood. First, humans understand traffic scenes mainly through vision, whereas machine perception must fuse multiple heterogeneous sources of sensing information to guarantee driving safety. Second, an experienced driver adapts easily to all kinds of dynamic traffic environments, but existing machine perception systems frequently output noisy perception results, while autonomous driving requires the perception results to be nearly 100% accurate. This paper proposes a vision-centered multi-sensor fusion framework for traffic environment perception of robotic cars, which fuses information from cameras, LIDAR, and geographic information systems (GIS) through geometric and semantic constraints to provide high-precision self-localization and accurate, robust obstacle perception. It further discusses the robust vision algorithms that have been successfully integrated into this framework, covering multiple levels from training data collection, sensor data processing, and low-level feature extraction to obstacle recognition and environment map building. The proposed framework has been deployed on our self-developed robotic cars and field-tested in a variety of real urban environments for eight years, and the experimental results validate the robustness and efficiency of the vision-centered multi-sensor fusion perception framework.

Keywords: Visual perception; self-localization; mapping; motion planning; robotic car


