Q: I am puzzled: shogun interfaces to so many languages, but which language and shogun interface should I use? A: That depends a lot on your taste. I personally consider the modular interfaces (python_modular, octave_modular) to be the "best", in the sense that they are very flexible and easily extensible. However, if you just want to train a single SVM with one or multiple kernels, any of the static interfaces is sufficient. And of course you should be using python http://www.python.org :-)
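For illustration, here is a minimal sketch of training an SVM via python_modular; the module and class names (shogun.Features.RealFeatures, shogun.Kernel.GaussianKernel, shogun.Classifier.LibSVM) follow the 0.x-era modular API and may differ in your version:

# Minimal python_modular sketch (assumes the 0.x-era shogun modular API;
# names may differ in other versions).
from numpy import array, float64
from shogun.Features import RealFeatures, Labels
from shogun.Kernel import GaussianKernel
from shogun.Classifier import LibSVM

# Toy data: columns are examples, rows are feature dimensions.
traindata = array([[-1.0, -1.2, 1.0, 1.1],
                   [-1.1, -0.9, 0.9, 1.2]], dtype=float64)
trainlabels = array([-1.0, -1.0, 1.0, 1.0], dtype=float64)

features = RealFeatures(traindata)
labels = Labels(trainlabels)
kernel = GaussianKernel(features, features, 1.0)  # kernel width 1.0

svm = LibSVM(10.0, kernel, labels)  # C=10.0
svm.train()
print(svm.classify().get_labels())  # predictions on the training data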
Q: I've found a bug, where should I report it? A: Either report it in our bug tracker http://trac.tuebingen.mpg.de/shogun or ask on the mailing list.
Q: Do I need CPLEX to use multiple kernel learning? A: No. As of version 0.7.0 you do not need CPLEX to learn the weights in front of the kernels. For standard 1-norm multiple kernel learning (MKL) a linear-program solver is required: either the GNU Linear Programming Kit (GLPK), version 4.29 or later, or CPLEX(tm); to enable CPLEX just make sure the cplex binary can be found in your PATH. For general p-norm MKL with p>1 neither solver is needed.
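To see what is being learned, here is a small conceptual sketch (plain numpy, not shogun code; the two base kernels and the weights are made up for illustration): MKL finds non-negative weights beta_k for a combined kernel K = sum_k beta_k * K_k, where in the 1-norm case the weights additionally sum to one.

# Conceptual sketch of a weighted kernel combination (numpy only, not shogun code).
import numpy as np

X = np.random.randn(5, 3)                        # 5 toy examples, 3 features
sqdist = ((X[:, None] - X[None, :])**2).sum(-1)  # pairwise squared distances

# Two base kernels: a linear one and a Gaussian one.
K1 = X.dot(X.T)
K2 = np.exp(-sqdist / 2.0)

# Hypothetical weights as 1-norm MKL might learn them:
# non-negative and summing to one.
beta = np.array([0.3, 0.7])
K = beta[0] * K1 + beta[1] * K2                  # the combined kernel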
Q: Is it multiple kernel learning when I use many kernels? A: No, a plain combination of features/kernels remains a plain concatenation. Only if you learn the kernel weights are you really doing MKL. In the static interfaces you can issue
[W]=sg('get_subkernel_weights')
and check whether all weights W are still 1.0.
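The same check from the static python interface might look as follows (a sketch, assuming a combined kernel has already been set up and a classifier trained via the sg command interface):

# Sketch for the static python interface (assumes a combined kernel was
# already created and a classifier trained via sg(...) commands).
from sg import sg

W = sg('get_subkernel_weights')
print(W)  # if every weight is still 1.0, no kernel weights were learned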
Q: Why does shogun not compile under windows? A: It requires cygwin, and the dependencies need to be compiled manually. As none of us uses windows and things (in external dependencies) break frequently, feel free to submit patches. If you are looking for a win32 port, there is none, but we take patches :-)
Q: How does shogun do its memory management? A: Like python, we use reference counting internally, i.e. an object holding a reference to another object should increase the reference count of the object it is referencing and decrease the count when it has finished using the object. It should be noted that loops (e.g., object A holding a reference to object B and vice versa) are not detected and may thus create memory leaks. However, this scenario can so far be easily avoided - just don't create a combined kernel that contains itself as a subkernel ;-)
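As a toy illustration of why such a loop leaks (hypothetical python classes, not shogun's actual implementation):

# Toy illustration of reference counting and the cycle problem
# (hypothetical classes, not shogun code).
class RefCounted(object):
    def __init__(self):
        self.refcount = 1   # the creator holds the first reference
        self.children = []

    def ref(self):
        self.refcount += 1
        return self

    def unref(self):
        self.refcount -= 1
        if self.refcount == 0:
            # release everything we were holding, then we can be freed
            for child in self.children:
                child.unref()
            self.children = []

a = RefCounted()
b = RefCounted()
a.children.append(b.ref())  # A references B
b.children.append(a.ref())  # B references A: a cycle

a.unref()
b.unref()
# Each object still has refcount 1 because the other keeps it alive:
# with pure reference counting the cycle is never collected.
print(a.refcount, b.refcount)  # -> 1 1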