CLC number: TP391.4

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2021-02-03


 ORCID:

Liangliang Liu

https://orcid.org/0000-0002-7262-7502


Frontiers of Information Technology & Electronic Engineering  2021 Vol.22 No.5 P.709-719

http://doi.org/10.1631/FITEE.2000377


A partition approach for robust gait recognition based on gait template fusion


Author(s):  Kejun Wang, Liangliang Liu, Xinnan Ding, Kaiqiang Yu, Gang Hu

Affiliation(s):  College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China

Corresponding email(s):   heukejun@126.com, liuliangliang@hrbeu.edu.cn, dingxinnan@hrbeu.edu.cn, yukaiqiang@hrbeu.edu.cn, hugang@hrbeu.edu.cn

Key Words:  Gait recognition, Partition algorithms, Gait templates, Gait analysis, Gait energy image, Deep convolutional neural networks, Biometrics recognition, Pattern recognition


Kejun Wang, Liangliang Liu, Xinnan Ding, Kaiqiang Yu, Gang Hu. A partition approach for robust gait recognition based on gait template fusion[J]. Frontiers of Information Technology & Electronic Engineering, 2021, 22(5): 709-719.

@article{title="A partition approach for robust gait recognition based on gait template fusion",
author="Kejun Wang, Liangliang Liu, Xinnan Ding, Kaiqiang Yu, Gang Hu",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="22",
number="5",
pages="709-719",
year="2021",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2000377"
}


Abstract: 
Gait recognition has significant potential for remote human identification, but it is easily influenced by identity-unrelated factors such as clothing, carrying conditions, and view angles. Many gait templates have been proposed that can effectively represent gait features; each template has its own advantages and captures different prominent information. In this paper, gait template fusion is proposed to improve on classical representative gait templates (such as the gait energy image), which represent incomplete information and are sensitive to contour changes. We also present a partition method to reflect the different gait habits of different body parts of each pedestrian. The fused template is cropped into three parts (head, trunk, and leg regions) according to human body structure, and the three parts are then fed into a convolutional neural network to learn merged features. We present an extensive empirical evaluation on the CASIA-B dataset and compare the proposed method with existing ones. The results show good accuracy and robustness of the proposed method for gait recognition.
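The two building blocks described in the abstract can be sketched in a few lines: averaging an aligned silhouette sequence yields a gait energy image (GEI), and the template is then cropped into head, trunk, and leg regions before per-part feature learning. This is a minimal illustrative sketch, not the paper's implementation; the split ratios `head_ratio` and `trunk_ratio` are assumptions for demonstration, whereas the paper partitions according to human body structure.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a cycle of aligned binary silhouettes (T, H, W) into a GEI (H, W)."""
    return np.mean(np.asarray(silhouettes, dtype=np.float64), axis=0)

def partition_template(template, head_ratio=0.2, trunk_ratio=0.35):
    """Crop a gait template into head, trunk, and leg regions by row.

    The ratios here are illustrative assumptions, not the paper's values.
    """
    h = template.shape[0]
    head_end = int(h * head_ratio)
    trunk_end = int(h * (head_ratio + trunk_ratio))
    return template[:head_end], template[head_end:trunk_end], template[trunk_end:]

# Example: a 30-frame cycle of 64x44 binary silhouettes
frames = np.random.randint(0, 2, size=(30, 64, 44))
gei = gait_energy_image(frames)
head, trunk, legs = partition_template(gei)
```

In the full method, each region would be fed to its own convolutional branch and the resulting features merged for recognition.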





Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE