
Package de.uni_stuttgart.isa.liquidsvm

liquidSVM contains bindings to Ingo Steinwart's liquidSVM implementation.


liquidSVM for Java

Welcome to the Java bindings for liquidSVM.

Summary:

Both liquidSVM and these bindings are provided under the AGPL 3.0 license.

API Usage Example

The API can be explored in the javadoc, but to give you a head start, consider the file liquidSVM_java/Example.java:

import de.uni_stuttgart.isa.liquidsvm.Config;
import de.uni_stuttgart.isa.liquidsvm.ResultAndErrors;
import de.uni_stuttgart.isa.liquidsvm.SVM;
import de.uni_stuttgart.isa.liquidsvm.SVM.LS;
import de.uni_stuttgart.isa.liquidsvm.LiquidData;

public class Example {

    public static void main(String[] args) throws java.io.IOException {
    
        String filePrefix = (args.length==0) ? "reg-1d" : args[0];
        
        // read comma separated training and testing data
        LiquidData data = new LiquidData(filePrefix);

        // Now train a least squares SVM on a 10x10 hyperparameter grid
        // and select the best parameters. The configuration displays
        // some progress information and specifies to use only two threads.
        SVM s = new LS(data.train, new Config().display(1).threads(2));

        // evaluate the selected SVM on the test features  
        double[] predictions = s.predict(data.testX);
        // or (since we have labels) do this and calculate the error
        ResultAndErrors result = s.test(data.test);
        
        System.out.println("Test error: " + result.errors[0][0]);
        for(int i=0; i<Math.min(result.result.length, 5); i++)
            System.out.println(predictions[i] + "==" + result.result[i][0]);

    }
}

The reg-1d data set is an artificial dataset provided by us.

Compile and run this:

javac -classpath liquidSVM.jar Example.java
java -Djava.library.path=. -cp .:liquidSVM.jar Example reg-1d

Using

Native Library Compilation

liquidSVM is implemented in C++, therefore a native library needs to be compiled and loaded into the Java process. Binaries for MacOS and Windows are included; however, if it is possible for you, we recommend compiling it on every machine to get full performance. Two prerequisites have to be fulfilled:

  1. the environment variable JAVA_HOME has to be set
  2. a Unix-type toolchain has to be available, including make and a compiler like gcc or clang.

Then, on the command line, you can use one of the following options:

make native
usually the fastest, but the resulting library is usually not portable to other machines.
make generic
should be portable to most machines, yet slower (roughly a factor of 2 to 4).
make debug
compiles with debugging activated (can be debugged e.g. with gdb).
make empty
no special compilation options activated.

To fulfill these prerequisites, here are some hints depending on your OS.

Linux

If echo $JAVA_HOME gives nothing, in many cases it suffices to issue

export JAVA_HOME=/usr/lib/jvm/default-java

This line can be put, e.g., into ~/.bashrc.

MacOS

The toolchain can be obtained by installing Xcode and then installing the optional command line tools from within it.

Usually JAVA_HOME can be found under

export JAVA_HOME=/Library/Java/JavaVirtualMachines/*/Contents/Home

Windows

To set JAVA_HOME correctly, use something like

set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_92

An easy way to install a Unix-type toolchain is Rtools:

https://cran.r-project.org/bin/windows/Rtools/Rtools33.exe

They should be usable without installing R. We assume here:

path=%RTOOLS%\bin;%RTOOLS%\gcc-4.6.3\bin;%path% 

where %RTOOLS% is the location where they were installed (e.g. C:\Rtools).

Overview of Configuration Parameters

display

This parameter determines the amount of output you see on the screen: the larger its value, the more you see. This can serve as a progress indication.

scale

If set to a true value, then for every feature in the training data a scaling is calculated so that its values lie in the interval [0, 1]. The training is then performed using these scaled values, and any testing data is scaled transparently as well.

Because SVMs are not scale-invariant, any data should be scaled for two main reasons: first, so that all features have the same weight, and second, to ensure that the default gamma parameters provided by liquidSVM remain meaningful.

If you have not scaled the data beforehand, this is an easy option.

threads

This parameter determines the number of cores used for computing the kernel matrices, the validation error, and the test error.

  • threads=0 (default) means that all physical cores of your CPU run one thread each.
  • threads=-1 means that all but one of the physical cores of your CPU run one thread each.
partition_choice

This parameter determines the way the input space is partitioned. This allows larger data sets for which the kernel matrix does not fit into memory.

  • partition_choice=0 (default) disables partitioning.
  • partition_choice=6 gives usually highest speed.
  • partition_choice=5 gives usually the best test error.
grid_choice

This parameter determines the size of the hyperparameter grid used during the training phase. Larger values correspond to larger grids. By default, a 10x10 grid is used. Exact descriptions are given in the next section.

adaptivity_control

This parameter determines whether an adaptive grid search heuristic is employed. Larger values lead to more aggressive strategies. The default adaptivity_control = 0 disables the heuristic.

random_seed

This parameter determines the seed for the random number generator. random_seed = -1 uses the internal timer to create the seed. All other values lead to repeatable behavior of the SVM.

folds

How many folds should be used.
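
To connect these parameters back to the API, the following sketch, modeled on the Example above, chains several of them onto a Config object. Only display() and threads() appear verbatim in the example above; the other chained setters (scale, grid_choice, random_seed, folds) are assumed names that simply mirror the parameter names, so please check the Config javadoc for the exact methods.

import de.uni_stuttgart.isa.liquidsvm.Config;
import de.uni_stuttgart.isa.liquidsvm.LiquidData;
import de.uni_stuttgart.isa.liquidsvm.ResultAndErrors;
import de.uni_stuttgart.isa.liquidsvm.SVM;
import de.uni_stuttgart.isa.liquidsvm.SVM.LS;

public class ConfigSketch {

    public static void main(String[] args) throws java.io.IOException {

        LiquidData data = new LiquidData("reg-1d");

        Config conf = new Config()
                .display(1)        // print some progress information
                .threads(2)        // use two threads for kernel, validation and test computations
                .scale(true)       // assumed setter: scale every feature to [0, 1]
                .grid_choice(1)    // assumed setter: use the larger 15x15 hyperparameter grid
                .random_seed(42)   // assumed setter: make the run repeatable
                .folds(5);         // assumed setter: use 5 folds for cross-validation

        SVM s = new LS(data.train, conf);
        ResultAndErrors result = s.test(data.test);
        System.out.println("Test error: " + result.errors[0][0]);
    }
}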

Specialized configuration parameters

Parameters for regression (least-squares, quantile, and expectile)

clipping
This parameter determines whether the decision functions should be clipped at the specified value. The value clipping = -1.0 leads to an adaptive clipping value, whereas clipping = 0 disables clipping.

Parameters for multiclass classification determine the multiclass strategy:

  • mc-type=0 : AvA with hinge loss.
  • mc-type=1 : OvA with least squares loss.
  • mc-type=2 : OvA with hinge loss.
  • mc-type=3 : AvA with least squares loss.
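
For illustration only, a multiclass run might look roughly like the sketch below. The nested class SVM.MC and the mc_type setter are assumptions modeled on the LS class above and on the other liquidSVM bindings; the data set name is also just a placeholder. Consult the javadoc for the actual names.

import de.uni_stuttgart.isa.liquidsvm.Config;
import de.uni_stuttgart.isa.liquidsvm.LiquidData;
import de.uni_stuttgart.isa.liquidsvm.SVM;

public class MultiClassSketch {

    public static void main(String[] args) throws java.io.IOException {

        // "banana-mc" is a placeholder name for a multiclass data set
        LiquidData data = new LiquidData("banana-mc");

        // SVM.MC and mc_type(...) are assumed API names (see text above);
        // mc_type=1 would select OvA with least squares loss
        SVM s = new SVM.MC(data.train, new Config().display(1).mc_type(1));

        double[] predictions = s.predict(data.testX);
        System.out.println("First prediction: " + predictions[0]);
    }
}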

Parameters for Neyman-Pearson Learning

class

The class on which the constraint is enforced.

constraint

The constraint on the false alarm rate. The script actually considers a couple of values around the value of constraint to give the user an informed choice.
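
A hedged sketch of how these two parameters might be passed in the Java bindings is given below; the nested class SVM.NPL and the setters npl_class and constraint are assumed names and need to be verified against the javadoc.

import de.uni_stuttgart.isa.liquidsvm.Config;
import de.uni_stuttgart.isa.liquidsvm.LiquidData;
import de.uni_stuttgart.isa.liquidsvm.SVM;

public class NplSketch {

    public static void main(String[] args) throws java.io.IOException {

        // "banana-bc" is a placeholder name for a binary classification data set
        LiquidData data = new LiquidData("banana-bc");

        // SVM.NPL, npl_class(...) and constraint(...) are assumed API names:
        // enforce a false alarm rate of at most 0.05 on class 1
        SVM s = new SVM.NPL(data.train, new Config().npl_class(1).constraint(0.05));

        double[] predictions = s.predict(data.testX);
        System.out.println("First prediction: " + predictions[0]);
    }
}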

Hyperparameter Grid

For Support Vector Machines two hyperparameters need to be determined: gamma, the bandwidth of the kernel, and lambda, the regularization parameter.

liquidSVM has a built-in cross-validation scheme to calculate validation errors for many values of these hyperparameters and then to choose the best pair. Since there are two parameters, this means we consider a two-dimensional grid.

For both parameters either specific values can be given or a geometrically spaced grid can be specified.

gamma_steps, min_gamma, max_gamma

specifies that there should be gamma_steps many geometrically spaced values in the interval between min_gamma and max_gamma

gammas

e.g. gammas=c(0.1,1,10,100) will use exactly these four gamma values

lambda_steps, min_lambda, max_lambda

specifies that there should be lambda_steps many geometrically spaced values in the interval between min_lambda and max_lambda

lambdas

e.g. lambdas=c(0.1,1,10,100) will use exactly these four lambda values

c_values

the classical cost parameter in front of the empirical error term, e.g. c_values=c(0.1,1,10,100) will use these four cost values (basically the inverse of lambdas)

Note that the min and max values are scaled according to the number of samples, the dimensionality of the data sets, the number of folds used, and the estimated diameter of the data set.
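
In the Java bindings these grid parameters would be supplied through the Config object. The sketch below is an assumption: the setter names simply mirror the parameter names above, and the exact signatures should be checked in the javadoc.

import de.uni_stuttgart.isa.liquidsvm.Config;

public class GridSketch {

    // explicit gamma and lambda values, corresponding to
    // gammas=c(0.1,1,10,100) and lambdas=c(0.001,0.01,0.1,1)
    public static Config explicitGrid() {
        return new Config()
                .gammas(new double[]{0.1, 1, 10, 100})        // assumed setter
                .lambdas(new double[]{0.001, 0.01, 0.1, 1});  // assumed setter
    }

    // a geometrically spaced 15x15 grid between the given bounds
    public static Config geometricGrid() {
        return new Config()
                .gamma_steps(15).min_gamma(0.1).max_gamma(10.0)         // assumed setters
                .lambda_steps(15).min_lambda(0.0001).max_lambda(0.01);  // assumed setters
    }
}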

Using grid_choice allows for some general choices of these parameters:

grid_choice   0       1        2
gamma_steps   10      15       20
lambda_steps  10      15       20
min_gamma     0.2     0.1      0.05
max_gamma     5.0     10.0     20.0
min_lambda    0.001   0.0001   0.00001
max_lambda    0.01    0.01     0.01

Using negative values of grid_choice, a grid with the listed gamma and lambda values is created:

grid_choice  -1
gammas       c(10.0, 5.0, 2.0, 1.0, 0.5, 0.25, 0.1, 0.05)
lambdas      c(1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001, 0.0000001)

grid_choice  -2
gammas       c(10.0, 5.0, 2.0, 1.0, 0.5, 0.25, 0.1, 0.05)
c_values     c(0.01, 0.1, 1, 10, 100, 1000, 10000)

Adaptive Grid

An adaptive grid search can be activated. The higher the values of MAX_LAMBDA_INCREASES and MAX_NUMBER_OF_WORSE_GAMMAS are set, the more conservative the search strategy is. The values can be freely modified.

ADAPTIVITY_CONTROL          1  2
MAX_LAMBDA_INCREASES        4  3
MAX_NUMBER_OF_WORSE_GAMMAS  4  3

Cells

A major issue with SVMs is that for larger sample sizes the kernel matrix does not fit into memory any more. Classically, this gives an upper limit for the class of problems that traditional SVMs can handle without a significant runtime increase. Furthermore, the time complexity is at least O(n²).

liquidSVM implements two major concepts to circumvent these issues. One is random chunks, which is well known in the literature. However, we prefer the new alternative of splitting the space into spatial cells and using local SVMs on every cell.

If you specify useCells=TRUE then the sample space X gets partitioned into a number of cells. The training is done first for cell 1, then for cell 2, and so on. Now, to predict the label for a value x ∈ X, liquidSVM first finds out to which cell this x belongs and then uses the SVM of that cell to predict a label for it.

If you run into memory issues, turn cells on: useCells=TRUE

This is quite performant, since the complexity in both time and memory is O(CELLSIZE × n), and this holds both for training and testing! It can also be shown that the quality of the solution is comparable, at least for moderate dimensions.
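
In the Java bindings the same effect would be achieved through the Config object. The sketch below assumes a setter named useCells analogous to the configuration key above; verify the exact method name in the javadoc.

import de.uni_stuttgart.isa.liquidsvm.Config;
import de.uni_stuttgart.isa.liquidsvm.LiquidData;
import de.uni_stuttgart.isa.liquidsvm.SVM;
import de.uni_stuttgart.isa.liquidsvm.SVM.LS;

public class CellsSketch {

    public static void main(String[] args) throws java.io.IOException {

        LiquidData data = new LiquidData("reg-1d");

        // useCells(true) is an assumed setter that turns on spatial cells,
        // keeping time and memory roughly at O(CELLSIZE x n)
        Config conf = new Config().display(1).useCells(true);

        SVM s = new LS(data.train, conf);
        System.out.println("Test error: " + s.test(data.test).errors[0][0]);
    }
}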

The cells can be configured using the partition_choice:

  1. This gives a partition into random chunks of size 2000

    VORONOI=c(1, 2000)

  2. This gives a partition into 10 random chunks

    VORONOI=c(2, 10)

  3. This gives a Voronoi partition into cells with radius not larger than 1.0. For its creation a subsample containing at most 50,000 samples is used.

    VORONOI=c(3, 1.0, 50000)

  4. This gives a Voronoi partition into cells with at most 2000 samples (approximately). For its creation a subsample containing at most 50,000 samples is used. A shrinking heuristic is used to reduce the number of cells.

    VORONOI=c(4, 2000, 1, 50000)

  5. This gives overlapping regions with at most 2000 samples (approximately). For its creation a subsample containing at most 50,000 samples is used. A stopping heuristic is used to stop the creation of regions if 0.5 * 2000 samples have not yet been assigned to a region.

    VORONOI=c(5, 2000, 0.5, 50000, 1)

  6. This splits the working sets into Voronoi cells as with PARTITION_TYPE=4. Unlike that case, the centers for the Voronoi partition are found by a recursive tree approach, which in many cases may be faster.

    VORONOI=c(6, 2000, 1, 50000, 2.0, 20, 4)

The value of the first parameter corresponds to NO_PARTITION, RANDOM_CHUNK_BY_SIZE, RANDOM_CHUNK_BY_NUMBER, VORONOI_BY_RADIUS, VORONOI_BY_SIZE, OVERLAP_BY_SIZE.

Weights

For the NPL and ROC scenarios the following weight grids are used:

NPL:
WEIGHT_STEPS=10
MIN_WEIGHT=0.001
MAX_WEIGHT=0.5
GEO_WEIGHTS=1

ROC:
WEIGHT_STEPS=9
MAX_WEIGHT=0.9
MIN_WEIGHT=0.1
GEO_WEIGHTS=0

More Advanced Parameters

The following parameters should only be employed by experienced users and are self-explanatory for them:

KERNEL

specifies the kernel to use, at the moment either GAUSS_RBF or POISSON

RETRAIN_METHOD

After training on grids and folds there are only solutions on folds. In order to construct a global solution, one can either retrain on the whole training data (SELECT_ON_ENTIRE_TRAIN_SET) or keep the (partial) solutions from training and combine them using voting (SELECT_ON_EACH_FOLD, the default).

store_solutions_internally

If this is true (the default in all applicable cases), then the solutions of the train phase are stored and can simply be reused in the select phase. If you slowly run out of memory during the train phase, consider disabling this. However, then in the select phase the best models have to be trained again.

For completeness, here are some values that usually get set by the learning scenario:

SVM_TYPE

KERNEL_RULE, SVM_LS_2D, SVM_HINGE_2D, SVM_QUANTILE, SVM_EXPECTILE_2D, SVM_TEMPLATE

LOSS_TYPE

CLASSIFICATION_LOSS, MULTI_CLASS_LOSS, LEAST_SQUARES_LOSS, WEIGHTED_LEAST_SQUARES_LOSS, PINBALL_LOSS, TEMPLATE_LOSS

VOTE_SCENARIO

VOTE_CLASSIFICATION, VOTE_REGRESSION, VOTE_NPL

KERNEL_MEMORY_MODEL

LINE_BY_LINE, BLOCK, CACHE, EMPTY

FOLDS_KIND

FROM_FILE, BLOCKS, ALTERNATING, RANDOM, STRATIFIED, RANDOM_SUBSET

WS_TYPE

FULL_SET, MULTI_CLASS_ALL_VS_ALL, MULTI_CLASS_ONE_VS_ALL, BOOT_STRAP
