output layer classes¶

The file yann.layers.output.py contains the definitions for the output layers.

class yann.layers.output.classifier_layer(input, input_shape, id, num_classes=10, rng=None, input_params=None, borrow=True, activation='softmax', verbose=2)[source]¶

This class is the typical classifier layer. It should be called by the add_layer method in the network class.

Parameters:
  input – An input theano.tensor variable. Even theano.shared will work, as long as it is in the shape (mini_batch_size, height, width, channels).
  input_shape – (mini_batch_size, features)
  num_classes – number of classes to classify into.
  filter_shape – (<int>, <int>)
  batch_norm – <bool> (Not active yet. Will be implemented in the near future.)
  rng – typically numpy.random.
  borrow – theano borrow, typically True.
  activation – String, takes options that are listed in activations. Needed for layers that use activations. Some activations also take support parameters; for instance, maxout takes a maxout type and size, and softmax takes an optional temperature. Refer to the module activations to know more. Default is 'softmax'.
  input_params – Supply params or initializations from a pre-trained system.
  verbose – similar to the rest of the toolbox.
Notes

Use classifier_layer.output and classifier_layer.output_shape from this class. L1 and L2 are also public and can be used for regularization. The class also exposes w, b and alpha publicly; these are also collected in the list params, another property of this class.
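As a rough illustration of what the forward pass of a softmax classifier layer computes, here is a plain-numpy sketch (yann builds the equivalent computation symbolically in theano; the shapes and initialization below are made up for illustration, not yann's actual scheme):

```python
import numpy as np

# Hypothetical sizes for illustration only.
rng = np.random.RandomState(0)
mini_batch_size, features, num_classes = 4, 8, 10

x = rng.randn(mini_batch_size, features)      # flattened input batch
w = rng.randn(features, num_classes) * 0.01   # weights, analogous to classifier_layer.w
b = np.zeros(num_classes)                     # biases, analogous to classifier_layer.b

def softmax(z):
    """Row-wise softmax with max-subtraction for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Analogous to classifier_layer.output: per-class probabilities.
output = softmax(x @ w + b)                   # shape (mini_batch_size, num_classes)
predictions = output.argmax(axis=1)           # predicted class per sample
```

Each row of `output` sums to one, and `predictions` picks the highest-probability class per sample.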
errors(y)[source]¶

This function returns a count of wrong predictions.

Parameters: y – datastreamer’s y variable, which has the labels.
Returns: number of wrong predictions.
Return type: theano variable
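A plain-numpy analogue of what errors(y) counts (the real method returns the equivalent symbolic theano expression; the values below are made up):

```python
import numpy as np

predictions = np.array([2, 0, 1, 1])   # argmax of the layer's output
y = np.array([2, 1, 1, 0])             # true labels from the datastreamer

# Count the positions where prediction and label disagree.
wrong = int(np.sum(predictions != y))  # -> 2 wrong predictions
```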

get_params(borrow=True, verbose=2)[source]¶

This method returns the parameters of the layer in a numpy ndarray format.

Parameters:
  borrow – Theano borrow, default is True.
  verbose – As always.

Notes

This is a slow method, because the values are being taken out of the GPU. Ordinarily, get_value(borrow=True) would suffice, but that is not possible here because some parameters are theano.tensor.var.TensorVariable, which need to be run through eval().

loss(y, type)[source]¶

This method will return the cost function of the classifier layer. This can be used, for instance, by the optimizer module to acquire a symbolic loss function.

Parameters:
  y – symbolic theano.ivector variable of labels to calculate loss from.
  type – options: 'nll' – negative log likelihood, 'cce' – categorical cross entropy, 'bce' – binary cross entropy, 'hinge' – max-margin hinge loss.
Returns: loss value.
Return type: theano symbolic variable
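To make the first two `type` options concrete, here is a plain-numpy sketch of the quantities they name (the real method returns a symbolic theano expression; the probabilities and labels below are invented for illustration):

```python
import numpy as np

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])   # softmax output, one row per sample
y = np.array([0, 1])                  # integer labels, like a theano.ivector

# 'nll': mean negative log likelihood of the correct class.
nll = -np.mean(np.log(probs[np.arange(len(y)), y]))

# 'cce': categorical cross entropy against one-hot targets.
one_hot = np.eye(probs.shape[1])[y]
cce = -np.mean(np.sum(one_hot * np.log(probs), axis=1))
```

For one-hot integer labels over softmax outputs, the two reduce to the same number.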

class yann.layers.output.objective_layer(id, loss, labels=None, objective='nll', L1=None, L2=None, l1_coeff=0.001, l2_coeff=0.001, verbose=2)[source]¶

This class is an objective layer. It is just a wrapper around a loss function; I need this because I am making the objective a loss layer.

Parameters:
  loss – yann.network.layers.classifier_layer.loss() method, or some theano variable for other types of objective layers.
  labels – theano.shared variable of labels.
  objective – 'nll', 'cce', 'bce' or 'hinge' for classifier layers, or 'value', which will just use the supplied value as the objective and minimize it. Which options are available depends on the classifier layer being used; each has its own options. This is usually a string.
  L1 – Symbolic L1 of the weights, added together.
  L2 – Symbolic L2 of the weights, added together.
  l1_coeff – Coefficient to weight L1 by.
  l2_coeff – Coefficient to weight L2 by.
  verbose – Similar to the rest of the toolbox.
Todo

The loss method needs to change in input.

Notes

Use objective_layer.output from this class.
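A plain-numbers sketch of the quantity the objective layer wraps: the classifier's loss plus the L1 and L2 terms weighted by their coefficients (all values below are invented for illustration; in yann these are symbolic theano expressions):

```python
import numpy as np

loss = 0.5                                # e.g. the value of classifier_layer.loss(y, 'nll')
w = np.array([[0.3, -0.2],
              [0.1, 0.4]])                # a layer's weights

L1 = np.abs(w).sum()                      # L1 of the weights, added together
L2 = (w ** 2).sum()                       # L2 of the weights, added together
l1_coeff, l2_coeff = 0.001, 0.001         # the layer's default coefficients

# The regularized objective the optimizer would minimize.
objective = loss + l1_coeff * L1 + l2_coeff * L2
```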