activations - Definitions for all activation functions.

The file yann.core.activations.py contains the definitions of all the activation functions available.

You can import any of these functions and supply them as arguments to functions that take an activation as input. Refer to the mnist example in the modelzoo for how to do this; it uses several of the activations defined below.
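As a quick orientation, here is a minimal usage sketch of compiling one of these activations into a callable, assuming yann and Theano are installed:

    import theano
    import theano.tensor as T
    from yann.core.activations import Abs

    x = T.matrix('x')                     # symbolic input
    f = theano.function([x], Abs(x))      # compile the point-wise activation
    print(f([[-1.0, 2.0], [-3.0, 4.0]]))  # -> [[1. 2.] [3. 4.]]

All the point-wise activations below follow this same calling pattern.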

yann.core.activations.Abs(x)[source]

Absolute value Units.

Applies point-wise absolute value to the input supplied.

Parameters: x – could be a theano.tensor, a theano.shared, a numpy array, or a python list.
Returns: returns an absolute-value output of the same shape as the input.
Return type: same as input
yann.core.activations.Elu(x, alpha=1)[source]

Exponential Linear Units.

Applies a point-wise elu to the input supplied. alpha defaults to 1; supplying a different alpha scales the saturation value for negative inputs.

Notes

Reference: Clevert, Djork-Arné, Thomas Unterthiner, and Sepp Hochreiter. “Fast and accurate deep network learning by exponential linear units (ELUs).” arXiv preprint arXiv:1511.07289 (2015).
Parameters:
  • x – could be a theano.tensor, a theano.shared, a numpy array, or a python list.
  • alpha – should be a float. Default is 1.
Returns: returns a point-wise elu output of the same shape as the input.
Return type: same as input
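A quick numeric check, assuming the standard elu formulation from Clevert et al. (x for x > 0, alpha*(exp(x) - 1) otherwise):

    import theano
    import theano.tensor as T
    from yann.core.activations import Elu

    x = T.vector('x')
    f = theano.function([x], Elu(x, alpha=1))
    # negative inputs saturate towards -alpha; positive inputs pass through
    print(f([-2.0, -0.5, 0.0, 3.0]))   # approx [-0.86 -0.39  0.    3.  ]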

yann.core.activations.Maxout(x, maxout_size, input_size, type='maxout', dimension=1)[source]

Function performs the maxout activation; the meanout and mixedout variants are selected through the type argument. A shape sketch follows this entry.

Parameters:
  • x – could be a theano.tensor, a theano.shared, a numpy array, or a python list. The dimension being maxed out must be exactly divisible by maxout_size, so the window can stride through it evenly, and the second dimension must hold the channels to maxout from.
  • maxout_size – the size of the window to stride through.
  • input_size – the number of nodes in the input.
  • dimension – if 1, performs MLP-layer maxout; the input must be two-dimensional. If 2, performs CNN-layer maxout; the input must be four-dimensional.
  • type – if 'maxout', performs maxout [2]; if 'meanout' or 'mixedout', performs meanout or mixedout respectively from [1].
[1] Yu, Dingjun, et al. “Mixed Pooling for Convolutional Neural Networks.” Rough Sets and Knowledge Technology. Springer International Publishing, 2014. 364-375.
[2] Goodfellow, Ian, et al. “Maxout Networks.” Proceedings of the 30th International Conference on Machine Learning (ICML-13). 2013. arXiv preprint arXiv:1302.4389.
Returns:
  1. theano.tensor4 – output that could be provided as input to the next layer or to other convolutional layer options; the size of the output depends on the border mode and subsample operations performed.
  2. tuple – the number of feature maps after maxout is applied.
Return type: theano.tensor4
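For intuition, here is a minimal plain-Theano sketch of the MLP-style maxout (dimension=1) this performs; it reimplements the idea with consecutive windows for illustration and is not the yann code itself:

    import theano
    import theano.tensor as T

    input_size = 6     # nodes in the input layer
    maxout_size = 2    # window to take the maximum over

    x = T.matrix('x')  # shape: (batch, input_size)
    # group units into windows of maxout_size and keep the maximum of each
    windows = x.reshape((x.shape[0], input_size // maxout_size, maxout_size))
    f = theano.function([x], T.max(windows, axis=2))
    print(f([[1., 5., 2., 2., 0., -3.]]))   # -> [[5. 2. 0.]]

Note how the number of output nodes shrinks from input_size to input_size / maxout_size.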
yann.core.activations.ReLU(x, alpha=0)[source]

Rectified Linear Units.

Applies point-wise rectification to the input supplied. alpha defaults to 0; supplying a nonzero alpha makes this a leaky ReLU.

Notes

Reference: Nair, Vinod, and Geoffrey E. Hinton. “Rectified linear units improve restricted Boltzmann machines.” Proceedings of the 27th International Conference on Machine Learning (ICML-10). 2010.
Parameters:
  • x – could be a theano.tensor, a theano.shared, a numpy array, or a python list.
  • alpha – should be a float. Default is 0.
Returns: returns a point-wise rectified output of the same shape as the input.
Return type: same as input
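A short sketch of the leaky variant, assuming alpha is the slope applied to negative inputs as in the usual leaky-ReLU formulation:

    import theano
    import theano.tensor as T
    from yann.core.activations import ReLU

    x = T.vector('x')
    f = theano.function([x], ReLU(x, alpha=0.1))   # slope 0.1 for x < 0
    print(f([-2.0, 0.0, 3.0]))                     # approx [-0.2  0.   3. ]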

yann.core.activations.Sigmoid(x)[source]

Sigmoid Units.

Applies point-wise sigmoid to the input supplied.

Parameters: x – could be a theano.tensor, a theano.shared, a numpy array, or a python list.
Returns: returns a point-wise sigmoid output of the same shape as the input.
Return type: same as input
yann.core.activations.Softmax(x, temp=1)[source]

Softmax Units.

Applies row-wise softmax to the input supplied.

Parameters:
  • x – could be a theano.tensor, a theano.shared, a numpy array, or a python list.
  • temp – temperature of type float. Mainly used during distillation; ordinary softmax uses T=1.

Notes

Refer to [3] for details.

[3] Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. “Distilling the knowledge in a neural network.” arXiv preprint arXiv:1503.02531 (2015).
Returns: returns a row-wise softmax output of the same shape as the input.
Return type: same as input
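A sketch of the temperature argument, assuming temp divides the logits as in the distillation softmax of [3]:

    import theano
    import theano.tensor as T
    from yann.core.activations import Softmax

    x = T.matrix('x')
    sharp = theano.function([x], Softmax(x))          # T=1, ordinary softmax
    soft = theano.function([x], Softmax(x, temp=4))   # higher T flattens the distribution
    logits = [[1.0, 2.0, 5.0]]
    print(sharp(logits))   # peaked:  approx [[0.02 0.05 0.94]]
    print(soft(logits))    # flatter: approx [[0.20 0.26 0.54]]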
yann.core.activations.Squared(x)[source]

Squared Units.

Applies point-wise squaring to the input supplied.

Parameters: x – could be a theano.tensor, a theano.shared, a numpy array, or a python list.
Returns: returns a point-wise squared output of the same shape as the input.
Return type: same as input
yann.core.activations.Tanh(x)[source]

Tanh Units.

Applies point-wise hyperbolic tangent to the input supplied.

Parameters: x – could be a theano.tensor, a theano.shared, a numpy array, or a python list.
Returns: returns a point-wise hyperbolic tangent output of the same shape as the input.
Return type: same as input
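The remaining point-wise units (Sigmoid, Squared, Tanh) all follow the same calling pattern; a combined sketch, assuming yann and Theano are installed:

    import theano
    import theano.tensor as T
    from yann.core.activations import Sigmoid, Squared, Tanh

    x = T.vector('x')
    f = theano.function([x], [Sigmoid(x), Squared(x), Tanh(x)])
    s, q, t = f([-2.0, 0.0, 2.0])
    print(s)   # in (0, 1):  approx [0.12 0.5  0.88]
    print(q)   # squared:           [4.   0.   4.  ]
    print(t)   # in (-1, 1): approx [-0.96  0.    0.96]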