
CLC number: TP391

On-line Access: 2020-04-21

Received: 2018-09-14

Revision Accepted: 2018-12-08

Crosschecked: 2019-08-12


ORCID: Dong-sheng LI, http://orcid.org/0000-0001-9743-2034


Frontiers of Information Technology & Electronic Engineering 

Accepted manuscript available online (unedited version)


An efficient parallel and distributed solution to nonconvex penalized linear SVMs


Author(s):  Lei Guan, Tao Sun, Lin-bo Qiao, Zhi-hui Yang, Dong-sheng Li, Ke-shi Ge, Xi-cheng Lu

Affiliation(s):  College of Computer, National University of Defense Technology, Changsha 410073, China; School of Computer Science, Fudan University, Shanghai 201203, China; Shanghai Key Laboratory of Data Science, Shanghai 201203, China

Corresponding email(s):  guanleics@gmail.com, nudtsuntao@163.com, dsli@nudt.edu.cn

Key Words:  Linear classification, Support vector machine (SVM), Nonconvex penalty, Alternating direction method of multipliers (ADMM), Parallel algorithm



Lei Guan, Tao Sun, Lin-bo Qiao, Zhi-hui Yang, Dong-sheng Li, Ke-shi Ge, Xi-cheng Lu. An efficient parallel and distributed solution to nonconvex penalized linear SVMs[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.1800566



Abstract: 
Support vector machines (SVMs) are recognized as a powerful tool for linear classification. When combined with a sparsity-inducing nonconvex penalty, an SVM can perform classification and variable selection simultaneously. However, nonconvex penalized SVMs generally cannot be solved globally and efficiently because of their nondifferentiability, nonconvexity, and nonsmoothness. Existing solutions to nonconvex penalized SVMs typically work in a serial fashion and therefore cannot fully exploit the parallel computing power of modern multi-core machines. Moreover, the fact that many real-world datasets are stored in a distributed manner urgently calls for a parallel and distributed solution to nonconvex penalized SVMs. To address this challenge, we propose an efficient algorithm based on the alternating direction method of multipliers (ADMM) that solves nonconvex penalized SVMs in a parallel and distributed way. We design several techniques to reduce the computation and synchronization costs of the proposed parallel algorithm. A time complexity analysis shows that the proposed parallel algorithm has low time complexity, and its convergence is guaranteed. Experimental evaluations on four LIBSVM benchmark datasets demonstrate the efficiency of the proposed parallel algorithm.
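The abstract does not reproduce the underlying optimization problem. For orientation, a nonconvex penalized linear SVM is commonly written as a hinge loss plus a separable nonconvex penalty P_lambda (e.g., the SCAD or MCP penalty), and a standard consensus-ADMM splitting over B data blocks makes the per-block updates fully parallel. The formulation below is a generic sketch of that standard splitting, not necessarily the exact splitting used in this paper.

% Generic nonconvex penalized linear SVM (P_lambda: a nonconvex penalty such as SCAD or MCP):
\min_{w\in\mathbb{R}^{d}}\ \frac{1}{n}\sum_{i=1}^{n}\bigl[1-y_i\,x_i^{\top}w\bigr]_{+}+\sum_{j=1}^{d}P_{\lambda}(|w_j|)

% Consensus splitting over B data blocks, with local hinge losses L_b and a global variable z:
\min_{\{w_b\},\,z}\ \sum_{b=1}^{B}L_b(w_b)+\sum_{j=1}^{d}P_{\lambda}(|z_j|)\quad\text{s.t.}\quad w_b=z,\ \ b=1,\dots,B

% Scaled-form ADMM updates; the B subproblems in the first line run in parallel:
w_b^{k+1}=\arg\min_{w_b}\ L_b(w_b)+\tfrac{\rho}{2}\bigl\|w_b-z^{k}+u_b^{k}\bigr\|_2^{2}
z^{k+1}=\arg\min_{z}\ \sum_{j}P_{\lambda}(|z_j|)+\tfrac{B\rho}{2}\bigl\|z-\bar{w}^{k+1}-\bar{u}^{k}\bigr\|_2^{2}
u_b^{k+1}=u_b^{k}+w_b^{k+1}-z^{k+1}

In this splitting only the z-update touches the nonconvex penalty, through its proximal (thresholding) operator, which is one reason such schemes remain cheap per iteration.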

An efficient parallel and distributed solution to nonconvex penalized linear SVMs


Lei Guan(1), Tao Sun(1), Lin-bo Qiao(1), Zhi-hui Yang(2,3), Dong-sheng Li(1), Ke-shi Ge(1), Xi-cheng Lu(1)
1 College of Computer, National University of Defense Technology, Changsha 410073, China
2 School of Computer Science, Fudan University, Shanghai 201203, China
3 Shanghai Key Laboratory of Data Science, Shanghai 201203, China

Abstract: Support vector machines (SVMs) are regarded as a powerful tool for linear classification. When combined with a sparsity-inducing nonconvex penalty, an SVM can perform classification and variable selection simultaneously. However, because of its nondifferentiable, nonconvex, and nonsmooth nature, a nonconvex penalized SVM is usually hard to solve globally and efficiently. Existing solutions to nonconvex penalized SVMs typically work in a serial fashion and therefore cannot fully exploit the parallel processing power of modern multi-core machines. Moreover, real-world data are often stored in a distributed manner, which urgently calls for a parallel and distributed approach to solving nonconvex penalized SVMs. To address this challenge, this paper proposes a parallel algorithm based on the alternating direction method of multipliers (ADMM) that solves nonconvex penalized SVMs efficiently. Effective techniques are employed to reduce the computation and synchronization costs of the parallel algorithm. A time complexity analysis shows that the proposed parallel algorithm has low complexity, and its convergence is guaranteed. Experiments on LIBSVM datasets demonstrate the effectiveness of the proposed parallel algorithm.

Key words: Linear classification; Support vector machine (SVM); Nonconvex penalty; Alternating direction method of multipliers (ADMM); Parallel algorithm
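To make the data-parallel structure described in the abstract concrete, the following is a minimal, hypothetical Python/NumPy sketch of consensus ADMM for a hinge-loss SVM with an MCP penalty. The function and parameter names (mcp_prox, local_update, rho, gamma, the four-block split) are illustrative assumptions, the local subproblems are solved only approximately by a few subgradient steps, and the sketch is not the authors' implementation.

# Hypothetical sketch: data-parallel consensus ADMM for a linear SVM with an
# MCP (minimax concave) penalty. Each worker holds one data block; the z-update
# applies the nonconvex proximal (firm-thresholding) operator; only z and the
# dual variables are synchronized between iterations.
import numpy as np


def mcp_prox(v, lam, gamma, alpha):
    """Firm thresholding: argmin_z alpha/2*(z - v)^2 + MCP_{lam,gamma}(z),
    assuming gamma > 1/alpha."""
    return np.where(np.abs(v) > gamma * lam, v,
                    np.sign(v) * np.maximum(np.abs(v) - lam / alpha, 0.0)
                    / (1.0 - 1.0 / (alpha * gamma)))


def local_update(Xb, yb, z, ub, rho, steps=20, lr=0.05):
    """Approximately solve the local subproblem
       min_w  mean(hinge(yb, Xb @ w)) + rho/2 * ||w - z + ub||^2
    with a few subgradient steps (a crude stand-in for an exact local solver)."""
    w = z - ub
    for _ in range(steps):
        margin = yb * (Xb @ w)
        active = margin < 1.0                      # points violating the margin
        grad = -(Xb[active].T @ yb[active]) / len(yb) + rho * (w - z + ub)
        w -= lr * grad
    return w


def consensus_admm_svm(blocks, d, lam=0.1, gamma=3.0, rho=1.0, iters=50):
    """blocks: list of (Xb, yb) pairs, one per worker/partition."""
    B = len(blocks)
    z = np.zeros(d)
    U = [np.zeros(d) for _ in range(B)]
    for _ in range(iters):
        # w-updates are independent; in a real system each runs on its own worker.
        W = [local_update(Xb, yb, z, U[b], rho) for b, (Xb, yb) in enumerate(blocks)]
        w_bar = np.mean(W, axis=0)
        u_bar = np.mean(U, axis=0)
        # z-update: nonconvex proximal step on the averaged iterate.
        z = mcp_prox(w_bar + u_bar, lam, gamma, alpha=B * rho)
        # Dual updates (scaled form).
        U = [U[b] + W[b] - z for b in range(B)]
    return z


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 50, 400
    w_true = np.zeros(d)
    w_true[:5] = 1.0                               # sparse ground truth
    X = rng.standard_normal((n, d))
    y = np.sign(X @ w_true + 0.1 * rng.standard_normal(n))
    blocks = [(X[i::4], y[i::4]) for i in range(4)]  # four data partitions
    w = consensus_admm_svm(blocks, d)
    print("nonzeros:", np.count_nonzero(np.abs(w) > 1e-6))

In an actual distributed deployment each (Xb, yb) block would reside on its own worker, and only z and the dual averages need to be exchanged per iteration; this exchange is the synchronization cost that the abstract says the proposed algorithm is designed to reduce.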



