
CLC number: TP242.6

On-line Access: 2020-05-18

Received: 2019-09-24

Revision Accepted: 2019-12-02

Crosschecked: 2020-04-01




Frontiers of Information Technology & Electronic Engineering  2020 Vol.21 No.5 P.675-692


A survey on multi-sensor fusion based obstacle detection for intelligent ground vehicles in off-road environments

Author(s):  Jin-wen Hu, Bo-yin Zheng, Ce Wang, Chun-hui Zhao, Xiao-lei Hou, Quan Pan, Zhao Xu

Affiliation(s):  Key Laboratory of Information Fusion Technology, Northwestern Polytechnical University, Xi'an 710072, China

Corresponding email(s):   hujinwen@nwpu.edu.cn, zhengboyin@mail.nwpu.edu.cn

Key Words:  Multi-sensor fusion, Obstacle detection, Off-road environment, Intelligent vehicle, Unmanned ground vehicle

Jin-wen Hu, Bo-yin Zheng, Ce Wang, Chun-hui Zhao, Xiao-lei Hou, Quan Pan, Zhao Xu. A survey on multi-sensor fusion based obstacle detection for intelligent ground vehicles in off-road environments[J]. Frontiers of Information Technology & Electronic Engineering, 2020, 21(5): 675-692.

@article{Hu2020survey,
title="A survey on multi-sensor fusion based obstacle detection for intelligent ground vehicles in off-road environments",
author="Jin-wen Hu, Bo-yin Zheng, Ce Wang, Chun-hui Zhao, Xiao-lei Hou, Quan Pan, Zhao Xu",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="21",
number="5",
pages="675-692",
year="2020",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1900518"
}

%0 Journal Article
%T A survey on multi-sensor fusion based obstacle detection for intelligent ground vehicles in off-road environments
%A Jin-wen Hu
%A Bo-yin Zheng
%A Ce Wang
%A Chun-hui Zhao
%A Xiao-lei Hou
%A Quan Pan
%A Zhao Xu
%J Frontiers of Information Technology & Electronic Engineering
%V 21
%N 5
%P 675-692
%@ 2095-9184
%D 2020
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.1900518

TY - JOUR
T1 - A survey on multi-sensor fusion based obstacle detection for intelligent ground vehicles in off-road environments
A1 - Jin-wen Hu
A1 - Bo-yin Zheng
A1 - Ce Wang
A1 - Chun-hui Zhao
A1 - Xiao-lei Hou
A1 - Quan Pan
A1 - Zhao Xu
JO - Frontiers of Information Technology & Electronic Engineering
VL - 21
IS - 5
SP - 675
EP - 692
SN - 2095-9184
Y1 - 2020
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1900518
ER -

With the development of sensor fusion technologies, intelligent ground vehicles have attracted extensive research, in which obstacle detection is a key aspect of autonomous driving. Obstacle detection is a complicated task that involves the diversity of obstacles, sensor characteristics, and environmental conditions. While on-road driver assistance systems and autonomous driving systems have been well researched, the methods developed for the structured roads of city scenes may fail in an off-road environment because of its uncertainty and diversity. A single type of sensor can hardly satisfy the needs of obstacle detection because of its sensing limitations in range, signal features, and working conditions, which motivates researchers and engineers to develop multi-sensor fusion and system integration methodologies. This survey aims at summarizing the main considerations for the onboard multi-sensor configuration of intelligent ground vehicles in off-road environments and providing users with a guideline for selecting sensors based on their performance requirements and application environments. State-of-the-art multi-sensor fusion methods and system prototypes are reviewed and associated with the corresponding heterogeneous sensor configurations. Finally, emerging technologies and challenges are discussed for future study.
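To make the fusion idea concrete: one common low-level scheme the survey's topic covers is projecting heterogeneous sensor evidence into a shared occupancy grid, where a cell is flagged as an obstacle if any sensor supports it. The sketch below is purely illustrative and not taken from the paper; the function name, grid dimensions, and thresholds are hypothetical choices for a toy LiDAR + camera setup.

```python
import numpy as np

def fuse_to_occupancy_grid(lidar_points, camera_confidence, grid_size=10,
                           cell=1.0, height_thresh=0.3, conf_thresh=0.5):
    """Toy occupancy-grid fusion (illustrative only).

    A cell is marked as an obstacle when EITHER sensor supports it:
    - LiDAR returns a point higher than height_thresh inside the cell, or
    - the camera's per-cell obstacle confidence exceeds conf_thresh.
    """
    grid = np.zeros((grid_size, grid_size), dtype=bool)
    # Rasterize LiDAR points: keep only returns above the ground threshold.
    for x, y, z in lidar_points:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < grid_size and 0 <= j < grid_size and z > height_thresh:
            grid[i, j] = True
    # OR-combine with thresholded camera evidence (same grid resolution assumed).
    grid |= camera_confidence > conf_thresh
    return grid

# Example: one tall LiDAR return, one low (ground) return, one camera detection.
points = [(2.5, 3.5, 0.8), (4.2, 1.1, 0.1)]
cam = np.zeros((10, 10))
cam[7, 7] = 0.9
occupied = fuse_to_occupancy_grid(points, cam)
```

In practice each sensor would contribute a probability rather than a hard flag, and the per-cell combination would use Bayesian updating or Dempster-Shafer evidence rules rather than a plain OR, but the grid-level structure is the same.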









Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - Journal of Zhejiang University-SCIENCE