Logistic-classifier SGD model

Logistic classification is an extension of logistic regression used to generate models for boolean (binary) and multi-class classification use cases. The model calculates the probability that a given datapoint belongs to a specified class. A threshold is then set, above which an item is deemed to belong to one class rather than another.

As with linear regression, the \(y\) values are assumed to depend on a linear combination of the input \(X\) values. The probability is calculated by the following formula:

\[P(y_i)= \frac{1}{1+\exp(-z_i)}\]

where \(z_i\) is the linear combination of the \(N\)-dimensional input \(X_i\) and the associated weights \(\theta\), defined by the function:

\[z_i= \sum_{n=1}^{N} \theta_{n} X_{i,n}\]
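As a minimal illustration of these two formulas, the probability for a single datapoint can be computed directly in q (the weights and datapoint below are invented for the example, not produced by the library):

```q
// Hypothetical values for illustration only
theta:0.06 -0.21    / weights
x:1.2 0.5           / one datapoint in 2 dimensions
z:sum theta*x       / z_i, the weighted sum over the dimensions
p:1%1+exp neg z     / P(y_i) = 1/(1+exp(-z_i))
```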

SGD can be used as a method of fitting the \(X\) data to the target variable \(y\) in order to determine the coefficient weights \(\theta\) that best represent this combination.
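SGD itself is not defined in this document. Purely as a sketch (a full-batch gradient step on the logistic loss, with illustrative names; not the library implementation), a single descent step on theta could be written:

```q
// Sketch only - one gradient step on the logistic loss
// X: float matrix (M rows, N columns), y: float labels (0f/1f), theta: N floats
sigmoid:{1%1+exp neg x}
sgdStep:{[alpha;theta;X;y]
  err:sigmoid[X mmu theta]-y;             / per-row prediction error
  theta-alpha*(flip[X] mmu err)%count y}  / move theta against the gradient
```

Repeating such steps over batches of the data, with learning rate alpha, drives theta toward weights that best fit the training data.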

Fit a logistic-classification stochastic gradient descent model:

fit[X;y;trend;paramDict]

Where
  • X is the input/training data of N dimensions
  • y is the output/target classification data
  • trend is whether a trend is to be accounted for (boolean)
  • paramDict is the configurable dictionary defining any modifications to be applied during the SGD fitting process (see Configurable parameters below for details).

The function returns a dictionary containing all information collected during the fitting of the model, along with prediction and update functionality.

The information collected during the fitting of the model is contained within modelInfo and includes:

parameter description
theta     The weights calculated during the process
iter      The number of iterations applied during the process
diff      The difference between the final theta values and the preceding values
trend     Whether or not a trend value was fitted during the process
paramDict The parameter dictionary used during the process

Prediction functionality is contained within the predict key. The function takes as argument the input/training data of \(N\) dimensions and returns the predicted classes.

Prediction probability functionality is contained within the predictProba key. The function takes as argument the input/training data of \(N\) dimensions and returns the predicted probability of each class. For binary classification, a single probability is returned indicating the probability of the positive class being predicted; for multiclass models a one-vs-rest approach is used.
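For the binary case, predict is effectively predictProba followed by a threshold. As an illustration with invented probabilities and a 0.5 threshold:

```q
q)probs:0.21 0.33 0.62 0.18   / hypothetical predictProba output
q)probs>=0.5                  / implied class labels at threshold 0.5
0010b
```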

The model contains two types of update function:

  • update, where models are updated assuming that the data given is suitable
  • updateSecure, where additional checks are applied to confirm the data is in the correct format, so that no 'model pollution' occurs

Both functions take two arguments:

  1. the input/training data of N dimensions
  2. the output/target classification data

and return a dictionary containing all information collected during the updating of a model, along with a prediction and update function.

If updateSecure is used, an error is returned when the data supplied is not appropriate.

During the update phase, the same model parameters are used as were applied during fitting, except that the maximum number of iterations is set to 1.
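Because each call performs at most one iteration, update suits streaming use, where the dictionary returned by one call feeds the next. A hypothetical sketch (logMdl and batches are assumed names; batches is a list of (X;y) pairs):

```q
// Hypothetical streaming loop: fold single-iteration updates over batches
updStep:{[mdl;batch] mdl[`update] . batch}
/ newMdl:updStep/[logMdl;batches]
```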

// Create data with strong correlation but also some noise
q)yClass:y<avg y

// Fit a logistic regression SGD
modelInfo   | `theta`iter`diff`trend`paramDict`inputType!(0.05981966 -0.2055255;100..
predict     | {[config;X]
predictProba| {[config;X]
update      | {[config;X;y]
updateSecure| {[config;secure;X;y]

// Information generated during the fitting of the model
theta    | 0.05981966 -0.2055255
iter     | 100
diff     | -0.00124056 -0.0009483654
trend    | 1b
paramDict| `alpha`maxIter`gTol`theta`k`seed`batchType`....

// Predict on new data
0 0 0 0 0 0...
0.2065456 0.3266713 0.2207807 0.2183085 0.3717741 0..

// Update the fitted model
q)yClassUpd:yUpd<avg yUpd
q)show logUpd:logMdl.update[Xupd;yClassUpd]
modelInfo   | `theta`iter`diff`trend`paramDict!(0.06008984 -0.2086289;1;-0.00027..
predict     | {[config;X]
update      | {[config;X;y]
updateSecure| {[config;secure;X;y]

theta    | 0.06008984 -0.2086289
iter     | 1
diff     | -0.0002701815 0.003103383
trend    | 1b
paramDict| `alpha`maxIter`gTol`theta`k`seed`batchType...
inputType| -9h

Configurable parameters

In the above function, the following are the optional configurable entries for paramDict:

key           type    default description
alpha         float   0.01    The learning rate applied
maxIter       integer 100     The maximum possible number of iterations before the run is terminated; this does not guarantee convergence
gTol          float   1e-5    If the difference in gradient falls below this value, the run is terminated
theta         float   0       The initial starting weights
k             integer *n      The number of batches used or random points chosen each iteration
seed          integer random  The random seed
batchType     symbol  shuffle The batch type (`single`shuffle`shuffleRep`nonShuffle`noBatch)
penalty       symbol  l2      The penalty/regularization term (`l1`l2`elasticNet)
lambda        float   0.001   The penalty term coefficient
l1Ratio       float   0.5     The elastic-net mixing parameter (only used if penalty `elasticNet is applied)
decay         float   0       The decay coefficient
p             float   0       The momentum coefficient
verbose       boolean 0b      Whether information about the fitting process is printed after every epoch
accumulation  boolean 0b      Whether the theta value after each epoch is returned as the output
thresholdFunc list    ()      The threshold function and value (optional) to apply when using updateSecure

In the above table *n is the length of the dataset.

A number of batch types can be applied when fitting a model using SGD. The supported types, and how each uses the k parameter, are explained below:

Option     Description
noBatch    No batching occurs and all datapoints are used (regular gradient descent)
nonShuffle The data is split into k batches with no shuffling applied
shuffle    The data is shuffled into k batches; each datapoint appears exactly once
shuffleRep The data is shuffled into k batches with replacement; datapoints can appear more than once and not all datapoints may be used
single     k random points are chosen each iteration
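To illustrate how k could partition the indices of m datapoints under some of these options (indices only; the library's internal batching may differ):

```q
q)m:10
q)k:3
q)(k;0N)#neg[m]?m   / shuffle: permute all indices, then cut into k batches
q)(k;0N)#til m      / nonShuffle: contiguous batches in the original order
q)k?m               / single: k random indices (with replacement)
```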