output - output layer classes

The file yann.layers.output.py contains the definitions for the output layers.

class yann.layers.output.classifier_layer(input, input_shape, id, num_classes=10, rng=None, input_params=None, borrow=True, activation='softmax', verbose=2)[source]

This class is the typical classifier layer. It should be called by the add_layer method in network class.

  • input – An input theano.tensor variable. Even theano.shared will work, as long as it is in the following shape: (mini_batch_size, height, width, channels).
  • verbose – similar to the rest of the toolbox.
  • input_shape – (mini_batch_size, features)
  • num_classes – number of classes to classify into
  • filter_shape – (<int>,<int>)
  • batch_norm – <bool> (Not active yet. Will be implemented in near future.)
  • rng – typically numpy.random.
  • borrow – theano borrow, typically True.
  • activation – String; takes options that are listed in activations. Needed for layers that use activations. Some activations also take support parameters; for instance, maxout takes maxout type and size, and softmax takes an optional temperature. Refer to the module activations to know more. Default is ‘softmax’.
  • input_params – Supply params or initializations from a pre-trained system.


Use classifier_layer.output and classifier_layer.output_shape from this class. L1 and L2 are also public and can also be used for regularization. The class also has in public w, b and alpha, which are also available as a list in params, another property of this class.


errors(y)[source]

This function returns a count of wrong predictions.

Parameters: y – datastreamer’s y variable, that has the labels.
Returns: number of wrong predictions.
Return type: theano variable
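The count that the layer computes can be sketched in plain numpy. This is an illustrative example only, not the yann API: `probs` stands in for the layer's softmax output, and `y` for the datastreamer's labels.

```python
import numpy as np

# Hypothetical stand-in for the layer's softmax output (3 samples, 3 classes).
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.4, 0.3]])
y = np.array([0, 1, 0])                 # true labels per sample

predictions = probs.argmax(axis=1)      # predicted class per sample
errors = int((predictions != y).sum())  # count of wrong predictions
print(errors)  # 1 (third sample is predicted as class 1, label is 0)
```

The real method builds the same comparison symbolically, so it can be compiled into a theano function by the network.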
get_params(borrow=True, verbose=2)[source]

This method returns the parameters of the layer in a numpy ndarray format.

  • borrow – Theano borrow, default is True.
  • verbose – As always


This is a slow method, because we are taking the values out of the GPU. Ordinarily, I should have used get_value( borrow = True ), but I can’t do this because some parameters are theano.tensor.var.TensorVariable, which need to be run through eval.

loss(y, type)[source]

This method will return the cost function of the classifier layer. This can be used by the optimizer module for instance to acquire a symbolic loss function.

  • y – symbolic theano.ivector variable of labels to calculate loss from.
  • type – options ‘nll’ - negative log likelihood, ‘cce’ - categorical cross entropy, ‘bce’ - binary cross entropy, ‘hinge’ - max-margin hinge loss.

Returns: loss value.
Return type: theano symbolic variable
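The ‘nll’ and ‘cce’ options above can be illustrated with a small numpy sketch. This is not the layer's code (the real method returns symbolic theano expressions); the variable names here are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical softmax output (2 samples, 3 classes) and integer labels.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
y = np.array([0, 1])

# 'nll' - mean negative log likelihood of the correct class
nll = -np.mean(np.log(probs[np.arange(len(y)), y]))

# 'cce' - categorical cross entropy against one-hot labels
one_hot = np.eye(probs.shape[1])[y]
cce = -np.mean(np.sum(one_hot * np.log(probs), axis=1))

# For softmax outputs with integer labels, the two coincide.
print(np.isclose(nll, cce))  # True
```

The ‘bce’ and ‘hinge’ options follow the same pattern with the binary cross entropy and max-margin formulas respectively.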

class yann.layers.output.objective_layer(id, loss, labels=None, objective='nll', L1=None, L2=None, l1_coeff=0.001, l2_coeff=0.001, verbose=2)[source]

This class is an objective layer. It is just a wrapper around a loss function. I need this because I am treating the objective as a loss layer.

  • loss – yann.network.layers.classifier_layer.loss() method, or some theano variable for other types of objective layers.
  • labels – theano.shared variable of labels.
  • objective

    'nll', 'cce', 'bce' or 'hinge' for classifier layers, or 'value', which will just use the supplied value as an objective and minimize it. Which options apply depends on the classifier layer being used.

    Each has its own options. This is usually a string.
  • L1 – Symbolic L1 of the weights, added together
  • L2 – Symbolic L2 of the weights, added together
  • l1_coeff – Coefficient to weight L1 by.
  • l2_coeff – Coefficient to weight L2 by.
  • verbose – Similar to the rest of the toolbox.


TODO: The loss method needs to change its input.


Use objective_layer.output from this class.
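What the objective layer computes can be sketched in plain numpy: the wrapped loss plus the weighted L1 and L2 regularizers. All names here are hypothetical stand-ins; the real layer combines symbolic theano variables.

```python
import numpy as np

loss = 0.5                           # some classifier loss value
w = np.array([0.1, -0.2, 0.3])       # a layer's weights

L1 = np.abs(w).sum()                 # L1 of the weights, added together
L2 = (w ** 2).sum()                  # L2 of the weights, added together
l1_coeff, l2_coeff = 0.001, 0.001    # the layer's default coefficients

# The objective the optimizer minimizes:
objective = loss + l1_coeff * L1 + l2_coeff * L2
print(round(objective, 6))  # 0.50074
```

In the network, L1 and L2 come from the public L1 and L2 attributes of the layers being regularized, summed across layers before being passed in.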