CLC number: TP242
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2022-07-29
Shaopeng LIU, Guohui TIAN, Yongcheng CUI, Xuyang SHAO. A deep Q-learning network based active object detection model with a novel training algorithm for service robots[J]. Frontiers of Information Technology & Electronic Engineering, 2022, 23(11): 1673-1683.
@article{liu2022deep,
title="A deep Q-learning network based active object detection model with a novel training algorithm for service robots",
author="Shaopeng LIU, Guohui TIAN, Yongcheng CUI, Xuyang SHAO",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="23",
number="11",
pages="1673-1683",
year="2022",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2200109"
}
%0 Journal Article
%T A deep Q-learning network based active object detection model with a novel training algorithm for service robots
%A Shaopeng LIU
%A Guohui TIAN
%A Yongcheng CUI
%A Xuyang SHAO
%J Frontiers of Information Technology & Electronic Engineering
%V 23
%N 11
%P 1673-1683
%@ 2095-9184
%D 2022
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2200109
TY - JOUR
T1 - A deep Q-learning network based active object detection model with a novel training algorithm for service robots
A1 - Shaopeng LIU
A1 - Guohui TIAN
A1 - Yongcheng CUI
A1 - Xuyang SHAO
JO - Frontiers of Information Technology & Electronic Engineering
VL - 23
IS - 11
SP - 1673
EP - 1683
SN - 2095-9184
Y1 - 2022
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2200109
ER -
Abstract: This paper focuses on the problem of active object detection (AOD). AOD is important for service robots completing tasks in home environments: it guides a robot toward the target object through a sequence of appropriate moving actions. Most current AOD methods are based on reinforcement learning and suffer from low training efficiency and testing accuracy. Therefore, this paper proposes an AOD model based on a deep Q-learning network (DQN) together with a novel training algorithm. The DQN model is designed to fit the Q-values of the candidate actions, and consists of a state space, a feature extraction module, and a multilayer perceptron. In contrast to existing work, a novel memory-based training algorithm is designed for the proposed DQN model to improve training efficiency and testing accuracy. In addition, a method of generating the end state is presented to judge when to stop the AOD task during training. Extensive comparison experiments and ablation studies on an AOD dataset show that the presented method outperforms the competing methods and that the proposed training algorithm is more effective than the original training algorithm.
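The components named in the abstract can be illustrated with a short sketch. The following minimal PyTorch example is a hypothetical illustration, not the paper's implementation: the backbone, the action count, the feature dimension, the replay-memory size, and all hyperparameters are assumed for illustration only. It shows a DQN that maps an image-derived state through a feature extractor and a multilayer perceptron to per-action Q-values, plus one memory-based update step that samples stored transitions and zeroes the bootstrapped target at the end state.

import random
from collections import deque

import torch
import torch.nn as nn

N_ACTIONS = 5    # assumed number of discrete moving actions, incl. a stop action
FEAT_DIM = 512   # assumed feature dimension produced by the extractor
GAMMA = 0.9      # assumed discount factor

class DQN(nn.Module):
    """Fits the Q-values of the moving actions from an image-derived state."""
    def __init__(self):
        super().__init__()
        # Stand-in feature extractor; the paper's model uses a CNN backbone.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, FEAT_DIM), nn.ReLU(),
        )
        # Multilayer perceptron mapping features to per-action Q-values.
        self.mlp = nn.Sequential(
            nn.Linear(FEAT_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, x):
        return self.mlp(self.features(x))

memory = deque(maxlen=10_000)   # replay memory of (s, a, r, s', done) tensors
policy, target = DQN(), DQN()
target.load_state_dict(policy.state_dict())
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

def train_step(batch_size=32):
    """One memory-based update: sample stored transitions and regress
    Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a')."""
    if len(memory) < batch_size:
        return
    s, a, r, s2, done = map(torch.stack, zip(*random.sample(memory, batch_size)))
    q = policy(s).gather(1, a.view(-1, 1)).squeeze(1)
    with torch.no_grad():
        # Zero the future value when s' is the end state (done == 1).
        q_next = target(s2).max(dim=1).values * (1.0 - done)
    loss = nn.functional.smooth_l1_loss(q, r + GAMMA * q_next)
    opt.zero_grad()
    loss.backward()
    opt.step()

In this sketch a transition is appended as memory.append((s, torch.tensor(a), torch.tensor(r), s2, torch.tensor(float(done)))), where done marks the generated end state that tells the agent when to stop the AOD task.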