(1) LIBSVM: http://www.csie.ntu.edu.tw/~cjlin/libsvm/
LIBSVM is integrated software for support vector classification (C-SVC, nu-SVC), regression (epsilon-SVR, nu-SVR), and distribution estimation (one-class SVM). It supports multi-class classification.
Since version 2.8, it implements an SMO-type algorithm proposed in this paper:
R.-E. Fan, P.-H. Chen, and C.-J. Lin. Working set selection using second order information for training SVM. Journal of Machine Learning Research 6, 1889-1918, 2005. You can also find pseudocode there.
Our goal is to help users from other fields easily use SVM as a tool. LIBSVM provides a simple interface where users can easily link it with their own programs. Main features of LIBSVM include different SVM formulations, efficient multi-class classification, cross validation for model selection, probability estimates, various kernels (including precomputed kernel matrices), weighted SVM for unbalanced data, C++ and Java sources, and interfaces for many languages and packages such as Python, R, MATLAB, Perl, Ruby, and Weka.
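As a quick start, here is a minimal sketch of calling LIBSVM from Python through its bundled svmutil interface. The import path assumes the PyPI distribution libsvm-official; older copies of LIBSVM import svmutil directly from the python/ subdirectory.

# Minimal LIBSVM usage sketch via the bundled Python interface (svmutil).
from libsvm.svmutil import svm_train, svm_predict

# Toy two-class problem: labels plus sparse feature dicts {index: value}.
y = [1, 1, -1, -1]
x = [{1: 0.9, 2: 0.8}, {1: 0.7, 2: 0.9},
     {1: -0.8, 2: -0.7}, {1: -0.9, 2: -0.6}]

# '-s 0' selects C-SVC, '-t 2' the RBF kernel; '-c' and '-g' set C and gamma.
model = svm_train(y, x, '-s 0 -t 2 -c 1 -g 0.5')

# svm_predict returns predicted labels, accuracy statistics, decision values.
labels, accuracy, values = svm_predict(y, x, model)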
LIBLINEAR is a linear classifier for data with millions of instances and features. It supports L2-regularized classifiers (L2-loss linear SVM, L1-loss linear SVM, and logistic regression) as well as L1-regularized classifiers.
Main features of LIBLINEAR include the same data format and usage as LIBSVM, multi-class classification (one-vs-the-rest and the method of Crammer and Singer), cross validation for model evaluation, probability estimates (logistic regression only), and weights for unbalanced data.
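A matching sketch for LIBLINEAR's Python interface (liblinearutil; the import path assumes the PyPI distribution liblinear-official):

# Minimal LIBLINEAR usage sketch via its Python interface (liblinearutil).
from liblinear.liblinearutil import train, predict

y = [1, 1, -1, -1]
x = [{1: 0.9, 2: 0.8}, {1: 0.7, 2: 0.9},
     {1: -0.8, 2: -0.7}, {1: -0.9, 2: -0.6}]

# '-s 2' selects L2-regularized L2-loss SVC; the model is linear, so no kernel.
model = train(y, x, '-s 2 -c 1')
labels, accuracy, values = predict(y, x, model)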
SVMlight is an implementation of Vapnik's Support Vector Machine [Vapnik, 1995] for the problems of pattern recognition, regression, and learning a ranking function. The optimization algorithms used in SVMlight are described in [Joachims, 2002a] and [Joachims, 1999a]. The algorithm has scalable memory requirements and can handle problems with many thousands of support vectors efficiently.
The software also provides methods for efficiently assessing generalization performance. It includes two estimation methods for both error rate and precision/recall: XiAlpha-estimates [Joachims, 2002a, Joachims, 2000b] can be computed at essentially no computational expense, but they are conservatively biased, while leave-one-out testing provides almost unbiased estimates. SVMlight exploits the fact that the results of most leave-one-outs (often more than 99%) are predetermined and need not be computed [Joachims, 2002a].
New in this version is an algorithm for learning ranking functions [Joachims, 2002c]. The goal is to learn a function from preference examples, so that it orders a new set of objects as accurately as possible. Such ranking problems naturally occur in applications like search engines and recommender systems.
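The idea behind the ranking module is the standard pairwise reduction: a preference "a should rank above b" becomes a classification example on the difference of the two feature vectors, so an ordinary linear SVM can learn the ranking function. A minimal numpy sketch of that transform (illustrative code, not SVMlight's own; pairwise_examples is a hypothetical helper):

import numpy as np

def pairwise_examples(X, relevance, qid):
    """Turn graded relevance judgments into classification examples.

    For each pair (i, j) within the same query where item i is preferred
    to item j, emit the difference vector x_i - x_j with label +1 (and
    the mirrored pair with label -1). A linear classifier w trained on
    these examples then orders new items by the score w . x.
    """
    diffs, labels = [], []
    n = len(relevance)
    for i in range(n):
        for j in range(n):
            if qid[i] == qid[j] and relevance[i] > relevance[j]:
                diffs.append(X[i] - X[j])
                labels.append(+1)
                diffs.append(X[j] - X[i])
                labels.append(-1)
    return np.array(diffs), np.array(labels)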
Furthermore, this version includes an algorithm for training large-scale transductive SVMs. The algorithm proceeds by solving a sequence of optimization problems that lower-bound the solution, using a form of local search. A detailed description of the algorithm can be found in [Joachims, 1999c]. A similar transductive learner, which can be thought of as a transductive version of k-Nearest Neighbor, is the Spectral Graph Transducer.
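As a rough illustration of that local search, the sketch below uses scikit-learn's LinearSVC as a stand-in for SVMlight's internal solver; only the pair-switching rule (swap a +1/-1 pair of guessed labels when their slacks satisfy xi_i + xi_j > 2) is taken from [Joachims, 1999c], and everything else is simplified:

import numpy as np
from sklearn.svm import LinearSVC  # stand-in solver; SVMlight uses its own QP code

def tsvm_sketch(X_lab, y_lab, X_unl, C=1.0, C_star_final=1.0, max_switches=1000):
    """Rough sketch of the transductive local search of [Joachims, 1999c].

    Train on the labeled data, guess labels for the unlabeled data, then
    repeatedly switch a (+1, -1) pair of guessed labels whose combined
    slack proves the switch lowers the objective, while gradually raising
    the weight C* of the unlabeled examples toward its final value.
    """
    clf = LinearSVC(C=C).fit(X_lab, y_lab)
    y_unl = np.where(clf.decision_function(X_unl) >= 0, 1, -1)
    C_star = 1e-3 * C_star_final
    while True:
        for _ in range(max_switches):
            X = np.vstack([X_lab, X_unl])
            y = np.concatenate([y_lab, y_unl])
            weight = np.concatenate([np.full(len(y_lab), C),
                                     np.full(len(y_unl), C_star)])
            clf = LinearSVC(C=1.0).fit(X, y, sample_weight=weight)
            # Slacks of the unlabeled examples under the current model.
            xi = np.maximum(0.0, 1.0 - y_unl * clf.decision_function(X_unl))
            pos = np.where((y_unl == 1) & (xi > 0))[0]
            neg = np.where((y_unl == -1) & (xi > 0))[0]
            if len(pos) == 0 or len(neg) == 0:
                break
            i = pos[np.argmax(xi[pos])]
            j = neg[np.argmax(xi[neg])]
            if xi[i] + xi[j] <= 2.0:  # no label switch lowers the objective
                break
            y_unl[i], y_unl[j] = -1, 1
        if C_star >= C_star_final:
            break
        C_star = min(2.0 * C_star, C_star_final)
    return clf, y_unl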
SVMlight can also train SVMs with cost models (see [Morik et al., 1999]).
The code has been used on a large range of problems, including text classification [Joachims, 1999c][Joachims, 1998a], image recognition tasks, bioinformatics, and medical applications. Many of these tasks have sparse instance vectors; this implementation exploits that sparsity, which leads to a very compact and efficient representation.
SVMstruct is a Support Vector Machine (SVM) algorithm for predicting multivariate or structured outputs. It performs supervised learning by approximating a mapping h: X --> Y using labeled training examples (x1,y1), ..., (xn,yn). Unlike regular SVMs, however, which consider only univariate predictions as in classification and regression, SVMstruct can predict complex objects y such as trees, sequences, or sets. Examples of problems with complex outputs are natural language parsing, sequence alignment in protein homology detection, and Markov models for part-of-speech tagging. The SVMstruct algorithm can also be used for linear-time training of binary and multi-class SVMs under the linear kernel [4].
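To make this concrete, the sketch below shows, in Python, the user-supplied pieces a structural SVM revolves around, specialized to the simple multiclass case; for trees or sequences the argmax is computed by dynamic programming instead of enumeration. Function names are illustrative, not SVMstruct's actual API:

import numpy as np

def joint_feature(x, y, num_classes):
    """Joint feature map Psi(x, y) for the multiclass special case:
    the input vector x is placed in the block of w reserved for class y."""
    psi = np.zeros(num_classes * len(x))
    psi[y * len(x):(y + 1) * len(x)] = x
    return psi

def predict(w, x, num_classes):
    """Structured prediction: h(x) = argmax over y of w . Psi(x, y)."""
    return max(range(num_classes),
               key=lambda y: w @ joint_feature(x, y, num_classes))

def most_violated_label(w, x, y_true, num_classes):
    """Separation oracle used during training: the label maximizing
    loss(y_true, y) + w . Psi(x, y), here with 0/1 loss."""
    return max(range(num_classes),
               key=lambda y: (y != y_true) + w @ joint_feature(x, y, num_classes))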
The 1-slack cutting-plane algorithm implemented in SVMstruct V3.10 uses a new but equivalent formulation of the structural SVM quadratic program and is several orders of magnitude faster than prior methods. The algorithm is described in [5]. The n-slack algorithm of SVMstruct V2.50 is described in [1][2]. The SVMstruct implementation is based on the SVMlight quadratic optimizer.
The current implementation borrows its structure from LIBSVM and adopts similar options. For the bound-constrained formulation for classification and regression, BSVM uses a decomposition method with a simple working set selection that leads to faster convergence on difficult cases. A special implementation of the optimization solver TRON allows BSVM to stably identify bounded variables.
GPDT is a C++ software designed to train large-scale Support Vector Machines (SVMs) for binary classification in both scalar and distributed memory parallel environments. It uses a popular problem decomposition technique [1, 2, 4, 6, 7] to split the SVM quadratic programming (QP) problem into a sequence of smaller QP subproblems, each one being solved by a suitable gradient projection method (GPM). The currently implemented GPMs are the Generalized Variable Projection Method (GVPM) [3] and the Dai-Fletcher method (DFGPM) [5].
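The core primitive of such a gradient projection method is projecting a point onto the feasible set of the QP subproblem, a box intersected with a single equality constraint. The numpy sketch below does this by plain bisection on the constraint multiplier, where GPDT's Dai-Fletcher solver uses a faster secant scheme; it assumes the feasible set is nonempty:

import numpy as np

def project_onto_feasible_set(z, y, C, b=0.0, tol=1e-10):
    """Project z onto {alpha : 0 <= alpha <= C, y @ alpha = b}, the feasible
    set of the SVM dual subproblem, with labels y in {+1, -1}.  The residual
    of the equality constraint is nondecreasing in the multiplier lam, so a
    bracketing search followed by bisection finds the projection.
    """
    def r(lam):
        return y @ np.clip(z + lam * y, 0.0, C) - b
    lo, hi = -1.0, 1.0
    while r(lo) > 0.0:   # bracket the root from below
        lo *= 2.0
    while r(hi) < 0.0:   # bracket the root from above
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if r(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return np.clip(z + 0.5 * (lo + hi) * y, 0.0, C)

A gradient projection iteration then repeats alpha = project_onto_feasible_set(alpha - eta * grad(alpha), y, C), with the step size eta chosen by a rule such as GVPM's alternation of the two Barzilai-Borwein step lengths.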
LSVM is a fast technique for training support vector machines (SVMs), based on a simple iterative approach. For example, it has been used to classify a dataset with 2 million points and 10 features in only 34 minutes on a 400 MHz Pentium II. For more information, see our paper Lagrangian Support Vector Machines.
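For the curious, here is a numpy sketch of the LSVM fixed-point iteration as stated in the paper. The paper applies the Sherman-Morrison-Woodbury identity so that only an (n+1) x (n+1) system is ever inverted; this simplified version solves the m x m system directly and is only meant to show the update rule:

import numpy as np

def lsvm_sketch(A, d, nu=1.0, alpha=None, iters=100):
    """Hedged sketch of the LSVM iteration (Mangasarian & Musicant, 2000).

    A: m x n data matrix; d: labels in {+1, -1}.  With Q = I/nu + H H',
    H = D [A, -e], the dual solution is found by iterating
        u <- Q^{-1} (e + ((Q u - e) - alpha u)_+),
    which converges linearly for 0 < alpha < 2/nu.
    """
    m = A.shape[0]
    e = np.ones(m)
    H = d[:, None] * np.hstack([A, -e[:, None]])   # H = D [A, -e]
    Q = np.eye(m) / nu + H @ H.T
    if alpha is None:
        alpha = 1.9 / nu                           # safely below 2/nu
    u = np.linalg.solve(Q, e)
    for _ in range(iters):
        u = np.linalg.solve(Q, e + np.maximum((Q @ u - e) - alpha * u, 0.0))
    # Recover the separating plane w'x = gamma; classify by sign(x'w - gamma).
    w = A.T @ (d * u)
    gamma = -e @ (d * u)
    return w, gamma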
SVMs are optimization-based tools for solving machine learning problems. For an introduction to SVMs, you may want to look at this tutorial.
The software is free for academic and research use. For commercial use, please contact Olvi Mangasarian or Dave Musicant.
Click here to download the software, which consists of MATLAB m-files.
If you publish any work based on LSVM, please cite both the software and the paper on which it is based. Here are recommended LaTeX bibliography entries:
@misc{lsvm,
author = "O.L. Mangasarian and D. R. Musicant",
title = {{LSVM Software:} Active Set Support Vector Machine Classification Software},
year = 2000,
institution = {Computer Sciences Department, University of Wisconsin, Madison},
note = { www.cs.wisc.edu/$\sim$musicant/lsvm/.}}
@techreport{mm:00,
author = "O. L. Mangasarian and David R. Musicant",
title = "Lagrangian Support Vector Machine Classification",
institution = "Data Mining Institute, Computer Sciences Department, University of Wisconsin",
month = {June},
year = 2000,
number = {00-06},
address = "Madison, Wisconsin",
note={ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/00-06.ps}}
For more information, contact:
Olvi L. Mangasarian
[email protected]
David R. Musicant
[email protected]
(14) ASVM: http://research.cs.wisc.edu/dmi/asvm/
ASVM is a fast technique for training linear support vector machines (SVMs), based on an active set approach that yields very fast running times. For example, it has been used to classify a dataset with 4 million points and 32 features in only 38 minutes on a 400 MHz Pentium II. For more information, see our paper Active Support Vector Machines.
SVMs are an optimization-based approach for solving machine learning problems. For an introduction to SVMs, you may want to look at this tutorial.
The software is free for academic use. For commercial use, please contact Dave Musicant.
Click here to download the software. The software consists of:
No additional software whatsoever is required to use these tools.
If you publish any work based on ASVM, please cite both the software and the paper on which it is based. Here are recommended LaTeX bibliography entries:
@misc{asvm,
author = "D. R. Musicant",
title = {{ASVM Software:} Active Set Support Vector Machine Classification Software},
year = 2000,
institution = {Computer Sciences Department, University of Wisconsin, Madison},
note = { www.cs.wisc.edu/$\sim$musicant/asvm/.}}
@techreport{mm:00,
author = "O. L. Mangasarian and David R. Musicant",
title = "Active Support Vector Machine Classification",
institution = "Data Mining Institute, Computer Sciences Department, University of Wisconsin",
month = {April},
year = 2000,
number = {00-04},
address = "Madison, Wisconsin",
note={ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/00-04.ps}}
For more information, contact:
David R. Musicant
[email protected]
Instead of a standard support vector machine, which classifies points by assigning them to one of two disjoint half-spaces, PSVM classifies points by assigning them to the closest of two parallel planes. For more information, see our paper Proximal Support Vector Machines.
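Because the usual inequality constraints become equalities, PSVM training collapses to a single small linear system. The numpy sketch below follows the closed form given in the paper (it is not the authors' MATLAB code):

import numpy as np

def psvm_sketch(A, d, nu=1.0):
    """Hedged sketch of the proximal SVM solve (Fung & Mangasarian, 2001).

    A: m x n data matrix; d: labels in {+1, -1}.  Training reduces to one
    (n+1) x (n+1) linear system:
        [w; gamma] = (I/nu + E'E)^{-1} E' D e,   with E = [A, -e].
    A point x is then classified by which of the two parallel planes
    x'w - gamma = +/-1 it is closer to, i.e. by sign(x'w - gamma).
    """
    m, n = A.shape
    E = np.hstack([A, -np.ones((m, 1))])
    sol = np.linalg.solve(np.eye(n + 1) / nu + E.T @ E, E.T @ d)  # E' D e = E' d
    w, gamma = sol[:n], sol[n]
    return w, gamma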
SVMs are an optimization-based approach for solving machine learning problems. For an introduction to SVMs, you may want to look at this tutorial.
The software is free for academic use. For commercial use, please contact Olvi Mangasarian.
Click here to download the software. The software consists of:
The only software needed to run these programs is MATLAB (www.mathworks.com).
(16) Linear SVM: http://linearsvm.com/
Linear SVM is a new, extremely fast machine learning (data mining) algorithm for solving multiclass classification problems from ultra-large data sets. It implements an original proprietary version of a cutting plane algorithm for designing a linear support vector machine. LinearSVM is a linearly scalable routine, meaning that it creates an SVM model in CPU time that scales linearly with the size of the training data set. Our comparisons with other known SVM models clearly show its superior performance when high accuracy is required. We would greatly appreciate it if you shared LinearSVM's performance on your data sets with us.