Every classifier must be initialized with a specific set of parameters. Two distinct methods are provided for the training (compute()) and testing (predict()) phases. Whenever possible, the real-valued prediction is stored in the realpred attribute.
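Because every class below shares this compute()/predict() interface, a single evaluation helper works for all of them. A minimal sketch (the training_error helper and the toy data are illustrative, not part of mlpy):
>>> import numpy as np
>>> import mlpy
>>> def training_error(clf, x, y):
...     clf.compute(x, y)                     # train (or store) the model
...     return np.mean(clf.predict(x) != y)   # fraction of misclassified samples
...
>>> x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
>>> y = np.array([1, -1, 1])
>>> training_error(mlpy.Knn(k=1), x, y)
0.0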
Support Vector Machines (SVM).
Described in [Vapnik95] and [Cristianini].
Example:
>>> import numpy as np
>>> import mlpy
>>> xtr = np.array([[1.0, 2.0, 3.0, 1.0], # first sample
... [1.0, 2.0, 3.0, 2.0], # second sample
... [1.0, 2.0, 3.0, 1.0]]) # third sample
>>> ytr = np.array([1, -1, 1]) # classes
>>> mysvm = mlpy.Svm() # initialize Svm class
>>> mysvm.compute(xtr, ytr) # compute SVM
1
>>> mysvm.predict(xtr) # predict SVM model on training data
array([ 1, -1, 1])
>>> xts = np.array([4.0, 5.0, 6.0, 7.0]) # test point
>>> mysvm.predict(xts) # predict SVM model on test point
-1
>>> mysvm.realpred # real-valued prediction
-5.5
>>> mysvm.weights(xtr, ytr) # compute weights on training data
array([ 0., 0., 0., 1.])
Initialize the Svm class.
Parameters:
    kernel type and training parameters (all optional; the bare mlpy.Svm() call above uses the defaults)
Compute the SVM model.
Parameters:
    training data (2d numpy array, one sample per row) and class labels (1d numpy array of integers, -1 or 1)
Returns:
    1 on success
Predict the SVM model on a test point or points.
Parameters:
    test data (1d numpy array for a single point, 2d numpy array for several points)
Returns:
    the predicted class(es): an integer (-1 or 1) or a 1d numpy array of integers
Attributes:
    realpred: the real-valued prediction(s)
Return feature weights.
Parameters:
    training data (2d numpy array) and class labels (1d numpy array of integers)
Returns:
    1d numpy array of feature weights, one per feature
Note
For the tr kernel (Terminated Ramp Kernel), see [Merler06].
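The Note above implies that the terminated-ramp kernel is selected at initialization. A minimal sketch, assuming a kernel keyword that accepts 'tr' (the keyword name is an assumption, not confirmed by this section):
>>> mysvm_tr = mlpy.Svm(kernel='tr')  # assumed keyword; Terminated Ramp Kernel [Merler06]
>>> mysvm_tr.compute(xtr, ytr)        # train on the same data as above
1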
k-Nearest Neighbor (KNN).
Example:
>>> import numpy as np
>>> import mlpy
>>> xtr = np.array([[1.0, 2.0, 3.1, 1.0], # first sample
... [1.0, 2.0, 3.0, 2.0], # second sample
... [1.0, 2.0, 3.1, 1.0]]) # third sample
>>> ytr = np.array([1, -1, 1]) # classes
>>> myknn = mlpy.Knn(k = 1) # initialize knn class
>>> myknn.compute(xtr, ytr) # compute knn
1
>>> myknn.predict(xtr) # predict knn model on training data
array([ 1, -1, 1])
>>> xts = np.array([4.0, 5.0, 6.0, 7.0]) # test point
>>> myknn.predict(xts) # predict knn model on test point
-1
>>> myknn.realpred # real-valued prediction
0.0
Initialize the Knn class.
Parameters:
    k: number of nearest neighbors (int)
Store the training data (x) and classes (y).
Parameters:
    training data (2d numpy array, one sample per row) and class labels (1d numpy array of integers)
Returns:
    1
Predict the KNN model on a test point or points.
Parameters:
    test data (1d numpy array for a single point, 2d numpy array for several points)
Returns:
    the predicted value(s) on success: an integer or a 1d numpy array of integers (-1 or 1) for binary classification, or an integer or a 1d numpy array of integers (1, ..., nclasses) for multiclass classification; 0 on success with non-unique classification; -2 otherwise
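Since 0 (non-unique classification) and -2 (failure) are returned in-band, calling code should check for these sentinels before trusting a label. A minimal sketch, continuing the session above:
>>> p = myknn.predict(xts)
>>> if p == -2:                            # prediction failed
...     print("error")
... elif p == 0:                           # tie among the k nearest neighbors
...     print("non-unique classification")
... else:
...     print("predicted class:", p)
...
predicted class: -1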
Fisher Discriminant Analysis (FDA).
Described in [Mika01].
Example:
>>> import numpy as np
>>> import mlpy
>>> xtr = np.array([[1.0, 2.0, 3.1, 1.0], # first sample
... [1.0, 2.0, 3.0, 2.0], # second sample
... [1.0, 2.0, 3.1, 1.0]]) # third sample
>>> ytr = np.array([1, -1, 1]) # classes
>>> myfda = mlpy.Fda() # initialize fda class
>>> myfda.compute(xtr, ytr) # compute fda
1
>>> myfda.predict(xtr) # predict fda model on training data
array([ 1, -1, 1])
>>> xts = np.array([4.0, 5.0, 6.0, 7.0]) # test point
>>> myfda.predict(xts) # predict fda model on test point
-1
>>> myfda.realpred # real-valued prediction
-42.51475717037367
>>> myfda.weights(xtr, ytr) # compute weights on training data
array([ 9.60629896, 9.77148463, 9.82027615, 11.58765243])
Initialize the Fda class.
Parameters:
    initialization parameters (all optional; the bare mlpy.Fda() call above uses the defaults)
Compute the Fda model.
Parameters:
    training data (2d numpy array, one sample per row) and class labels (1d numpy array of integers, -1 or 1)
Returns:
    1
Predict the Fda model on a test point or points.
Parameters:
    test data (1d numpy array for a single point, 2d numpy array for several points)
Returns:
    the predicted class(es): an integer (-1 or 1) or a 1d numpy array of integers
Attributes:
    realpred: the real-valued prediction(s)
Return feature weights.
Parameters:
    training data (2d numpy array) and class labels (1d numpy array of integers)
Returns:
    1d numpy array of feature weights, one per feature
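Because weights() returns one score per feature, a feature ranking follows in one line of numpy. Continuing the Fda session above:
>>> w = myfda.weights(xtr, ytr)  # one weight per feature
>>> np.argsort(w)[::-1]          # feature indices, largest weight first
array([3, 2, 1, 0])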
Spectral Regression Discriminant Analysis (SRDA).
Described in [Cai08].
Example:
>>> import numpy as np
>>> import mlpy
>>> xtr = np.array([[1.0, 2.0, 3.1, 1.0], # first sample
... [1.0, 2.0, 3.0, 2.0], # second sample
... [1.0, 2.0, 3.1, 1.0]]) # third sample
>>> ytr = np.array([1, -1, 1]) # classes
>>> mysrda = mlpy.Srda() # initialize srda class
>>> mysrda.compute(xtr, ytr) # compute srda
1
>>> mysrda.predict(xtr) # predict srda model on training data
array([ 1, -1, 1])
>>> xts = np.array([4.0, 5.0, 6.0, 7.0]) # test point
>>> mysrda.predict(xts) # predict srda model on test point
-1
>>> mysrda.realpred # real-valued prediction
-6.8283034257748758
>>> mysrda.weights(xtr, ytr) # compute weights on training data
array([ 0.10766721, 0.21533442, 0.51386623, 1.69331158])
Initialize the Srda class.
Parameters:
    initialization parameters (all optional; the bare mlpy.Srda() call above uses the defaults)
Compute the Srda model.
Parameters:
    training data (2d numpy array, one sample per row) and class labels (1d numpy array of integers, -1 or 1)
Returns:
    1
Predict the Srda model on a test point or points.
Parameters:
    test data (1d numpy array for a single point, 2d numpy array for several points)
Returns:
    the predicted class(es): an integer (-1 or 1) or a 1d numpy array of integers
Attributes:
    realpred: the real-valued prediction(s)
Return feature weights.
Parameters:
    training data (2d numpy array) and class labels (1d numpy array of integers)
Returns:
    1d numpy array of feature weights, one per feature
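For the discriminant-based classifiers in this section the examples suggest that the discrete prediction is simply the sign of realpred (Knn is the exception: its realpred above is 0.0). A quick check on the Srda session:
>>> mysrda.predict(xts)            # discrete prediction
-1
>>> int(np.sign(mysrda.realpred))  # sign of the real-valued prediction
-1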
Penalized Discriminant Analysis (PDA).
Described in [Ghosh03].
Example:
>>> import numpy as np
>>> import mlpy
>>> xtr = np.array([[1.0, 2.0, 3.1, 1.0], # first sample
... [1.0, 2.0, 3.0, 2.0], # second sample
... [1.0, 2.0, 3.1, 1.0]]) # third sample
>>> ytr = np.array([1, -1, 1]) # classes
>>> mypda = mlpy.Pda() # initialize pda class
>>> mypda.compute(xtr, ytr) # compute pda
1
>>> mypda.predict(xtr) # predict pda model on training data
array([ 1, -1, 1])
>>> xts = np.array([4.0, 5.0, 6.0, 7.0]) # test point
>>> mypda.predict(xts) # predict pda model on test point
-1
>>> mypda.realpred # real-valued prediction
-7.6106885609535624
>>> mypda.weights(xtr, ytr) # compute weights on training data
array([ 4.0468174 , 8.0936348 , 18.79228266, 58.42466988])
Initialize the Pda class.
Parameters:
    initialization parameters (all optional; the bare mlpy.Pda() call above uses the defaults)
Compute the Pda model.
Parameters:
    training data (2d numpy array, one sample per row) and class labels (1d numpy array of integers, -1 or 1)
Returns:
    1
Predict the Pda model on a test point or points.
Parameters:
    test data (1d numpy array for a single point, 2d numpy array for several points)
Returns:
    the predicted class(es): an integer (-1 or 1) or a 1d numpy array of integers
Attributes:
    realpred: the real-valued prediction(s)
Return feature weights.
Parameters:
    training data (2d numpy array) and class labels (1d numpy array of integers)
Returns:
    1d numpy array of feature weights, one per feature
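Weight scales differ between methods (compare the Fda, Srda and Pda weights above), so normalizing helps when comparing rankings across classifiers. A small sketch on the Pda weights:
>>> w = mypda.weights(xtr, ytr)
>>> (w / w.sum()).round(2)  # relative importance per feature
array([ 0.05,  0.09,  0.21,  0.65])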
Diagonal Linear Discriminant Analysis (DLDA).
Described in [Nasr09].
Example:
>>> import numpy as np
>>> import mlpy
>>> xtr = np.array([[1.1, 2.4, 3.1, 1.0], # first sample
...                 [1.2, 2.3, 3.0, 2.0], # second sample
...                 [1.3, 2.2, 3.5, 1.0], # third sample
...                 [1.4, 2.1, 3.2, 2.0]]) # fourth sample
>>> ytr = np.array([1, -1, 1, -1]) # classes
>>> mydlda = mlpy.Dlda(nf = 2) # initialize dlda class
>>> mydlda.compute(xtr, ytr) # compute dlda
1
>>> mydlda.predict(xtr) # predict dlda model on training data
array([ 1, -1, 1, -1])
>>> xts = np.array([4.0, 5.0, 6.0, 7.0]) # test point
>>> mydlda.predict(xts) # predict dlda model on test point
-1
>>> mydlda.realpred # real-valued prediction
-21.999999999999954
>>> mydlda.weights(xtr, ytr) # compute weights on training data
array([ 2.13162821e-14, 0.00000000e+00, 0.00000000e+00, 4.00000000e+00])
Initialize the Dlda class.
Parameters:
    nf (int), as set in the mlpy.Dlda(nf = 2) call above
Compute the Dlda model.
Parameters:
    training data (2d numpy array, one sample per row) and class labels (1d numpy array of integers, -1 or 1)
Returns:
    1
Predict the Dlda model on a test point or points.
Parameters:
    test data (1d numpy array for a single point, 2d numpy array for several points)
Returns:
    the predicted class(es): an integer (-1 or 1) or a 1d numpy array of integers
Attributes:
    realpred: the real-valued prediction(s)
Return feature weights.
Parameters:
    training data (2d numpy array) and class labels (1d numpy array of integers)
Returns:
    1d numpy array of feature weights, one per feature
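In the weights above only the last feature carries non-negligible weight; its index can be extracted directly, continuing the Dlda session:
>>> w = mydlda.weights(xtr, ytr)  # weights from the nf = 2 model above
>>> int(np.argmax(w))             # index of the dominant feature
3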
[Vapnik95] V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
[Cristianini] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press.
[Merler06] S. Merler and G. Jurman. Terminated Ramp - Support Vector Machine: a nonparametric data dependent kernel. Neural Networks, 19:1597-1611, 2006.
[Nasr09]
[Mika01] S. Mika, A. Smola and B. Scholkopf. An improved training algorithm for kernel fisher discriminants. Proceedings of AISTATS 2001, 2001.
[Cristianini02] N. Cristianini, J. Shawe-Taylor and A. Elisseeff. On Kernel-Target Alignment. Advances in Neural Information Processing Systems, Volume 14, 2002.
[Cai08] D. Cai, X. He and J. Han. SRDA: An Efficient Algorithm for Large-Scale Discriminant Analysis. IEEE Transactions on Knowledge and Data Engineering, 20(1):1-12, Jan. 2008.
[Ghosh03] D. Ghosh. Penalized discriminant methods for the classification of tumors from gene expression data. Biometrics, 59:992-1000, Dec. 2003.