Operations Research Transactions


From support vector machine to nonparallel support vector machine

SHAO Yuanhai1,*, YANG Kaili, LIU Mingzeng, WANG Zhen4, LI ChunNa5, CHEN WeiJie5

  1. School of Economics and Management, Hainan University, Haikou 570228, China; 2. College of Science, Zhejiang University of Technology, Hangzhou 310023, China; 3. School of Science, Dalian University of Technology, Panjin 124221, Liaoning, China; 4. School of Mathematical Sciences, Inner Mongolia University, Hohhot 010021, China; 5. Zhijiang College, Zhejiang University of Technology, Hangzhou 310024, China
  • Received: 2017-09-30 Online: 2018-06-15 Published: 2018-06-15
  • Corresponding author: SHAO Yuanhai, E-mail: shaoyuanhai21@163.com
  • Funding:

    Natural Science Foundation of Hainan Province (No. 118QN181), National Natural Science Foundation of China (Nos. 11501310, 61703370, 61603338), Natural Science Foundation of Zhejiang Province (Nos. LQ17F030003, LQ12A01020, LY18G010018), Natural Science Foundation of Inner Mongolia (No. 2015BS0606), Scientific Research Start-up Foundation of Hainan University (No. kyqd(sk)1804)



Abstract:

Nonparallel support vector machine (NSVM) is an extension of the support vector machine (SVM) and has been widely studied in recent years. An NSVM constructs a nonparallel supporting hyperplane for each class, which can capture differences in the data distributions of the classes and thus applies to a wider range of problems. However, the relationship between NSVM models and the SVM has rarely been studied, and to date no NSVM model degenerates into, or is equivalent to, the standard SVM. Starting from this point of view, we construct a new NSVM model. Our model not only reduces to the standard SVM, preserving the sparsity and kernel extensibility of the SVM, but also describes the distribution differences between classes, making it suitable for more general nonparallel-structured data. Finally, we compare our model with state-of-the-art SVMs and NSVMs on benchmark datasets and confirm the superiority of the proposed NSVM.

Key words: data mining, support vector machines, loss function, kernel learning, nonparallel support vector machines
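The nonparallel idea described in the abstract — one supporting hyperplane per class, each lying close to its own class and pushed away from the other, with prediction by the nearest plane — can be illustrated with a minimal least-squares twin-SVM-style sketch. This is an established NSVM variant used here purely for illustration, not the model proposed in this paper; the function names and the regularization constant are our own choices:

```python
import numpy as np

def fit_ls_twin_svm(A, B, c=1.0):
    """Fit two nonparallel hyperplanes x @ w + b = 0, one per class.

    Plane 1 is fit to pass close to class A (small ||A w1 + b1||) while
    keeping class B at roughly unit distance on one side; plane 2 swaps
    the roles. c trades off the two terms; a tiny ridge term keeps the
    normal equations well-conditioned.

    A, B: samples of the two classes, shape (m_A, d) and (m_B, d).
    Returns ((w1, b1), (w2, b2)).
    """
    eA = np.ones((A.shape[0], 1))
    eB = np.ones((B.shape[0], 1))
    E = np.hstack([A, eA])  # class-A data augmented with a bias column
    F = np.hstack([B, eB])  # class-B data augmented with a bias column
    ridge = 1e-8 * np.eye(E.shape[1])
    # Plane 1: minimize (1/2)||E u||^2 + (c/2)||F u + e||^2 in closed form.
    u1 = -c * np.linalg.solve(E.T @ E + c * F.T @ F + ridge, F.T @ eB)
    # Plane 2: roles swapped, class A pushed to the opposite side.
    u2 = c * np.linalg.solve(F.T @ F + c * E.T @ E + ridge, E.T @ eA)
    return (u1[:-1, 0], u1[-1, 0]), (u2[:-1, 0], u2[-1, 0])

def predict(X, plane1, plane2):
    """Assign each row of X to the class whose hyperplane is nearer."""
    (w1, b1), (w2, b2) = plane1, plane2
    d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)
    d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)
    return np.where(d1 <= d2, 0, 1)  # 0 -> class A, 1 -> class B
```

Unlike the single maximum-margin hyperplane of a standard SVM, the two planes here need not be parallel, which is exactly what lets this family of models describe classes with different distributions.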