
CLC number: TP181

On-line Access: 2020-08-10

Received: 2019-07-28

Revision Accepted: 2019-12-20

Crosschecked: 2020-07-13


 ORCID:

Cheng-wei Wang

https://orcid.org/0000-0002-6514-2603

Gang Chen

https://orcid.org/0000-0002-7483-0045


Frontiers of Information Technology & Electronic Engineering  2020 Vol.21 No.8 P.1206-1216

http://doi.org/10.1631/FITEE.1900382


HAM: a deep collaborative ranking method incorporating textual information


Author(s):  Cheng-wei Wang, Teng-fei Zhou, Chen Chen, Tian-lei Hu, Gang Chen

Affiliation(s):  The Key Laboratory of Big Data Intelligent Computing of Zhejiang Province, Hangzhou 310027, China; State Key Laboratory of Computer Aided Design and Computer Graphics, Zhejiang University, Hangzhou 310027, China; College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China

Corresponding email(s):   rr@zju.edu.cn, zhoutengfei@zju.edu.cn, cc33@zju.edu.cn, htl@zju.edu.cn, cg@zju.edu.cn

Key Words:  Deep learning, Recommendation system, Highway network, Block coordinate descent


Cheng-wei Wang, Teng-fei Zhou, Chen Chen, Tian-lei Hu, Gang Chen. HAM: a deep collaborative ranking method incorporating textual information[J]. Frontiers of Information Technology & Electronic Engineering, 2020, 21(8): 1206-1216.

@article{title="HAM: a deep collaborative ranking method incorporating textual information",
author="Cheng-wei Wang, Teng-fei Zhou, Chen Chen, Tian-lei Hu, Gang Chen",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="21",
number="8",
pages="1206-1216",
year="2020",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1900382"
}

%0 Journal Article
%T HAM: a deep collaborative ranking method incorporating textual information
%A Cheng-wei Wang
%A Teng-fei Zhou
%A Chen Chen
%A Tian-lei Hu
%A Gang Chen
%J Frontiers of Information Technology & Electronic Engineering
%V 21
%N 8
%P 1206-1216
%@ 2095-9184
%D 2020
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.1900382

TY - JOUR
T1 - HAM: a deep collaborative ranking method incorporating textual information
A1 - Cheng-wei Wang
A1 - Teng-fei Zhou
A1 - Chen Chen
A1 - Tian-lei Hu
A1 - Gang Chen
JO - Frontiers of Information Technology & Electronic Engineering
VL - 21
IS - 8
SP - 1206
EP - 1216
SN - 2095-9184
Y1 - 2020
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1900382
ER -


Abstract: 
The recommendation task with a textual corpus aims to model customer preferences from both user feedback and item textual descriptions. It is highly desirable to explore a very deep neural network to capture the complicated nonlinear preferences. However, training a deeper recommender is not as effortless as simply adding layers. A deeper recommender suffers from the gradient vanishing/exploding issue and cannot be easily trained by gradient-based methods. Moreover, textual descriptions probably contain noisy word sequences. Directly extracting feature vectors from them can harm the recommender’s performance. To overcome these difficulties, we propose a new recommendation method named the HighwAy recoMmender (HAM). HAM explores a highway mechanism to make gradient-based training methods stable. A multi-head attention mechanism is devised to automatically denoise textual information. Moreover, a block coordinate descent method is devised to train a deep neural recommender. Empirical studies show that the proposed method outperforms state-of-the-art methods significantly in terms of accuracy.
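The highway mechanism the abstract refers to gates each layer between a nonlinear transform and an identity "carry" path, so gradients can flow unchanged through a deep stack. The following is a minimal NumPy sketch of that idea, not HAM's actual architecture; the layer width, depth, weight scale, and negative gate bias are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """One highway layer: y = t * h + (1 - t) * x.

    h is a nonlinear transform of the input and t is the transform gate;
    the (1 - t) * x carry path lets information and gradients pass through
    unchanged, which is what stabilizes training of very deep stacks."""
    h = np.tanh(x @ W_h + b_h)      # candidate transform
    t = sigmoid(x @ W_t + b_t)      # transform gate in (0, 1)
    return t * h + (1.0 - t) * x    # gated mix of transform and identity

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(4, d))
y = x
# A negative gate bias (b_t = -2) biases early layers toward the carry
# path, a common initialization trick for highway networks.
for _ in range(10):                 # a 10-layer highway stack
    y = highway_layer(y, 0.1 * rng.normal(size=(d, d)), np.zeros(d),
                      0.1 * rng.normal(size=(d, d)), -2.0 * np.ones(d))
print(y.shape)  # (4, 8)
```

Even at 10 layers the output stays well-scaled, because each layer is close to the identity until the gates learn otherwise.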

HAM: a deep collaborative ranking method incorporating textual information

Cheng-wei Wang (1,3), Teng-fei Zhou (3), Chen Chen (1,3), Tian-lei Hu (1,3), Gang Chen (2,3)
1 The Key Laboratory of Big Data Intelligent Computing of Zhejiang Province, Hangzhou 310027, China
2 State Key Laboratory of Computer Aided Design and Computer Graphics, Zhejiang University, Hangzhou 310027, China
3 College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China

Abstract: The recommendation task over a textual corpus aims to model user preferences by mining user feedback together with item textual descriptions. Researchers are eager to exploit deep neural networks to capture complicated nonlinear preferences. However, a recommender with a deeper network cannot be obtained by simply stacking layers: it suffers from the gradient vanishing/exploding problem, which prevents training by gradient-based methods. Moreover, item textual descriptions may contain noisy word sequences, and extracting features directly from them can degrade the recommender's performance. To address these problems, we propose a new ranking-oriented recommendation method based on a very deep neural network: the HighwAy recoMmender (HAM). First, we design a new neural recommendation framework based on highway networks, which effectively stabilizes the gradient flow of a deep recommender. Second, a multi-head attention encoder automatically denoises the textual information. Finally, a new block coordinate descent method trains the deeper recommender more effectively. Experimental results show that HAM outperforms state-of-the-art methods.
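The multi-head attention encoder used for denoising can be sketched in a few lines. This is a simplified self-attention over item word vectors: tokens that fit their context receive high attention weights, so noisy words are down-weighted in the pooled representation. Note that the learned query/key/value and output projections of full multi-head attention are omitted here for brevity; shapes and head count are illustrative assumptions, not HAM's actual encoder.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, num_heads):
    """Scaled dot-product self-attention over word vectors X (seq_len, d),
    split across `num_heads` subspaces and concatenated back together."""
    seq_len, d = X.shape
    assert d % num_heads == 0
    dk = d // num_heads
    heads = []
    for h in range(num_heads):
        Xh = X[:, h * dk:(h + 1) * dk]         # this head's subspace
        scores = Xh @ Xh.T / np.sqrt(dk)       # scaled dot-product scores
        A = softmax(scores, axis=-1)           # rows are attention weights
        heads.append(A @ Xh)                   # weighted sum of values
    return np.concatenate(heads, axis=-1)      # (seq_len, d)

X = np.random.default_rng(1).normal(size=(5, 8))  # 5 words, 8-dim embeddings
out = multi_head_self_attention(X, num_heads=2)
print(out.shape)  # (5, 8)
```

Averaging `out` over the sequence axis would give a single denoised item vector to feed into the recommender.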

Key words: Deep learning; Recommendation system; Highway network; Block coordinate descent
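Block coordinate descent, the training strategy named in the keywords, fixes all parameter blocks but one and solves (or descends on) that block's subproblem before moving to the next. A classic instance in recommendation is alternating ridge regression for matrix factorization, sketched below as a toy illustration; it is not HAM's training procedure, and the rank, regularizer, and step count are assumptions.

```python
import numpy as np

def bcd_matrix_factorization(R, k, steps=50, lam=0.1):
    """Block coordinate descent for R ~ U V^T: alternately fix one block
    (U or V) and solve the resulting ridge-regression subproblem in closed
    form. Each update minimizes the objective over its block, so the
    regularized loss decreases monotonically."""
    m, n = R.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(m, k))
    V = rng.normal(scale=0.1, size=(n, k))
    I = lam * np.eye(k)
    for _ in range(steps):
        U = R @ V @ np.linalg.inv(V.T @ V + I)    # update user block, V fixed
        V = R.T @ U @ np.linalg.inv(U.T @ U + I)  # update item block, U fixed
    return U, V

R = np.arange(12, dtype=float).reshape(3, 4)      # a rank-2 ratings matrix
U, V = bcd_matrix_factorization(R, k=2)
err = np.linalg.norm(R - U @ V.T)                 # small reconstruction error
print(round(err, 3))
```

In a deep recommender the same idea applies with gradient steps instead of closed-form solves: each block's subproblem is better conditioned than the joint problem, which is what makes deeper networks trainable.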


Reference

[1]Adomavicius G, Tuzhilin A, 2005. Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions. IEEE Trans Knowl Data Eng, 17(6):734-749.

[2]Bansal T, Belanger D, McCallum A, 2016. Ask the GRU: multi-task learning for deep text recommendations. Proc 10th ACM Conf on Recommender Systems, p.107-114.

[3]Bennett J, Lanning S, 2007. The Netflix prize. Proc KDD Cup and Workshop, p.35.

[4]Cai XY, Han J, Yang L, 2018. Generative adversarial network based heterogeneous bibliographic network representation for personalized citation recommendation. 32nd AAAI Conf on Artificial Intelligence, p.5747-5754.

[5]Chorowski JK, Bahdanau D, Serdyuk D, et al., 2015. Attention-based models for speech recognition. Proc 30th Int Conf on Neural Information Processing Systems, p.577-585.

[6]Devooght R, Bersini H, 2016. Collaborative filtering with recurrent neural networks. https://arxiv.org/abs/1608.07400

[7]Goodfellow I, Bengio Y, Courville A, 2016. Deep Learning. MIT Press, Cambridge, MA.

[8]Gopalan PK, Charlin L, Blei D, 2014. Content-based recommendations with Poisson factorization. Proc Advances in Neural Information Processing Systems, p.3176-3184.

[9]Grčar M, Mladenić D, Fortuna B, et al., 2005. Data sparsity issues in the collaborative filtering framework. Proc 7th Int Workshop on Knowledge Discovery on the Web, p.58-76.

[10]He XN, Liao LZ, Zhang HW, et al., 2017. Neural collaborative filtering. Proc 26th Int Conf on World Wide Web, p.173-182.

[11]Hsieh CK, Yang L, Cui Y, et al., 2017. Collaborative metric learning. Proc 26th Int Conf on World Wide Web, p.193-201.

[12]Jin M, Luo X, Zhu H, et al., 2018. Combining deep learning and topic modeling for review understanding in context-aware recommendation. Proc Conf of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, p.1605-1614.

[13]Kim D, Park C, Oh J, et al., 2016. Convolutional matrix factorization for document context-aware recommendation. Proc 10th ACM Conf on Recommender Systems, p.233-240.

[14]Kiros R, Zhu Y, Salakhutdinov RR, et al., 2015. Skip-thought vectors. Advances in Neural Information Processing Systems, p.3294-3302.

[15]Koren Y, 2008. Factorization meets the neighborhood: a multifaceted collaborative filtering model. Proc 14th ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining, p.426-434.

[16]Koren Y, Bell R, Volinsky C, 2009. Matrix factorization techniques for recommender systems. Computer, 42(8):30-37.

[17]Levy O, Goldberg Y, 2014. Neural word embedding as implicit matrix factorization. Proc 27th Int Conf on Neural Information Processing Systems, p.2177-2185.

[18]Linden G, Smith B, York J, 2003. Amazon.com recommendations: item-to-item collaborative filtering. IEEE Internet Comput, 7(1):76-80.

[19]Liu CH, Jin T, Hoi SCH, et al., 2017. Collaborative topic regression for online recommender systems: an online and Bayesian approach. Mach Learn, 106(5):651-670.

[20]McLaughlin MR, Herlocker JL, 2004. A collaborative filtering algorithm and evaluation metric that accurately model the user experience. Proc 27th Annual Int ACM SIGIR Conf on Research and Development in Information Retrieval, p.329-336.

[21]Mhaskar H, Liao Q, Poggio T, 2017. When and why are deep networks better than shallow ones? Proc 31st AAAI Conf on Artificial Intelligence, p.2343-2349.

[22]Neyshabur B, Bhojanapalli S, McAllester D, et al., 2017. Exploring generalization in deep learning. Proc 30th Conf on Advances in Neural Information Processing Systems, p.5947-5956.

[23]Paterek A, 2007. Improving regularized singular value decomposition for collaborative filtering. Proc KDD Cup and Workshop, p.5-8.

[24]Raghu M, Poole B, Kleinberg J, et al., 2017. On the expressive power of deep neural networks. Proc 34th Int Conf on Machine Learning, p.2847-2854.

[25]Rendle S, Freudenthaler C, Gantner Z, et al., 2009. BPR: Bayesian personalized ranking from implicit feedback. Proc 25th Conf on Uncertainty in Artificial Intelligence, p.452-461.

[26]Ruder S, 2017. An overview of multi-task learning in deep neural networks. https://arxiv.org/abs/1706.05098

[27]Salakhutdinov R, Mnih A, 2007. Probabilistic matrix factorization. Proc 20th Int Conf on Neural Information Processing Systems, p.1257-1264.

[28]Salakhutdinov R, Mnih A, Hinton G, 2007. Restricted Boltzmann machines for collaborative filtering. Proc 24th Int Conf on Machine Learning, p.791-798.

[29]Shoja BM, Tabrizi N, 2019. Customer reviews analysis with deep neural networks for e-commerce recommender systems. IEEE Access, 7:119121-119130.

[30]Srebro N, Rennie J, Jaakkola TS, 2004. Maximum-margin matrix factorization. Conf on Neural Information Processing Systems, p.1329-1336.

[31]Srivastava RK, Greff K, Schmidhuber J, 2015. Training very deep networks. Advances in Neural Information Processing Systems, p.2377-2385.

[32]Strub F, Mary J, 2015. Collaborative filtering with stacked denoising autoencoders and sparse inputs. NIPS Workshop on Machine Learning for e-Commerce, p.1-8.

[33]Vaswani A, Shazeer N, Parmar N, et al., 2017. Attention is all you need. 31st Conf on Neural Information Processing Systems, p.5998-6008.

[34]Wang C, Blei DM, 2011. Collaborative topic modeling for recommending scientific articles. Proc 17th ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining, p.448-456.

[35]Wang H, Wang NY, Yeung DY, 2015. Collaborative deep learning for recommender systems. Proc 21st ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining, p.1235-1244.

[36]Wang H, Shi XJ, Yeung DY, 2016. Collaborative recurrent autoencoder: recommend while learning to fill in the blanks. Proc 30th Int Conf on Neural Information Processing Systems, p.415-423.

[37]Wu Y, DuBois C, Zheng AX, et al., 2016. Collaborative denoising auto-encoders for top-N recommender systems. Proc 9th ACM Int Conf on Web Search and Data Mining, p.153-162.


Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - Journal of Zhejiang University-SCIENCE