Operations Research Transactions


Margin transfer-based multi-view support vector machine

TANG Jingjing1,2   TIAN Yingjie2,3,*   

  1. School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China; 2. Research Center on Fictitious Economy and Data Science, Chinese Academy of Sciences, Beijing 100190, China; 3. School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190, China
  • Received: 2017-11-15; Online: 2018-09-15; Published: 2018-09-15

Abstract:

Data obtained from multiple sources or from different feature subsets are called multi-view data. Multi-view learning is a machine learning research field that builds models on the knowledge contained in multiple views. Many studies have verified that exploiting multiple views can significantly improve a model's predictive performance, and accordingly many multi-view models and algorithms have been proposed. Existing multi-view learning models mainly follow two principles: the consensus principle and the complementarity principle. A typical SVM-based multi-view learning model, SVM-2K, extends the support vector machine (SVM) to multi-view learning by using the distance-minimization version of kernel canonical correlation analysis (KCCA). However, SVM-2K cannot fully exploit the complementary information among different feature views. In this paper, we propose a new margin transfer-based multi-view support vector machine, termed M^2SVM, which yields a model incorporating both principles for multi-view learning. Furthermore, we theoretically analyze the performance of M^2SVM from the viewpoint of the consensus principle. Comparisons with SVM-2K reveal that M^2SVM is more flexible and favorable than SVM-2K. Experimental results on 50 binary data sets demonstrate the effectiveness of the proposed method.
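To make the consensus idea behind SVM-2K-style models concrete, the following is a minimal sketch (not the actual SVM-2K or M^2SVM formulation): two linear SVMs, one per feature view, are trained jointly by subgradient descent on a hinge loss plus a coupling penalty on |f_A(x) - f_B(x)| that pushes the two views' predictions toward agreement. The function name, the toy data, and the simple penalty form are illustrative assumptions; the published models instead impose epsilon-insensitive consensus constraints in a quadratic program.

```python
import numpy as np

def train_two_view_svm(XA, XB, y, C=1.0, gamma=0.5, lr=0.01, epochs=200):
    """Hypothetical sketch: jointly train linear SVMs on two views with a
    consensus penalty gamma * mean |f_A(x) - f_B(x)| (simplified stand-in
    for SVM-2K's epsilon-insensitive similarity constraints)."""
    n = len(y)
    wA, bA = np.zeros(XA.shape[1]), 0.0
    wB, bB = np.zeros(XB.shape[1]), 0.0
    for _ in range(epochs):
        fA = XA @ wA + bA
        fB = XB @ wB + bB
        # hinge-loss subgradients for each view's SVM objective
        mA = y * fA < 1
        mB = y * fB < 1
        gwA = wA - C * (XA[mA].T @ y[mA]) / n
        gbA = -C * y[mA].sum() / n
        gwB = wB - C * (XB[mB].T @ y[mB]) / n
        gbB = -C * y[mB].sum() / n
        # consensus term: subgradient of mean |f_A - f_B|
        s = np.sign(fA - fB)
        gwA += gamma * (XA.T @ s) / n; gbA += gamma * s.sum() / n
        gwB -= gamma * (XB.T @ s) / n; gbB -= gamma * s.sum() / n
        wA -= lr * gwA; bA -= lr * gbA
        wB -= lr * gwB; bB -= lr * gbB
    return wA, bA, wB, bB

# toy binary data: two noisy views of the same underlying signal z
rng = np.random.default_rng(1)
z = rng.standard_normal(200)
y = np.where(z > 0, 1.0, -1.0)
XA = np.column_stack([z + 0.3 * rng.standard_normal(200),
                      rng.standard_normal(200)])
XB = np.column_stack([z + 0.3 * rng.standard_normal(200),
                      rng.standard_normal(200)])
wA, bA, wB, bB = train_two_view_svm(XA, XB, y)
# predict with the average of the two view-specific decision functions
pred = np.sign((XA @ wA + bA + XB @ wB + bB) / 2)
acc = (pred == y).mean()
```

Averaging the two decision functions at test time is one common way to combine views after consensus-regularized training; because each view carries an independently corrupted copy of the signal, the averaged classifier is typically more accurate than either single-view SVM alone.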

Key words: multi-view learning, consensus principle, complementarity principle, kernel canonical correlation analysis