conv - Definitions for all convolution functions.
The file yann.core.conv.py contains the definitions for all the convolution functions available. It contains two functions for performing either 2D convolution (conv2d) or 3D convolution (conv3d). These functions are called by every convolution layer in yann.layers.py.
Todo
- Add 3D convolution support from theano.
- Add masked convolution support.
class yann.core.conv.convolver_2d(input, filters, subsample, filter_shape, image_shape, border_mode='valid', verbose=1)[source]
Class that performs convolution.
This class performs convolution. Its outputs can be probed using the convolution layer if needed. This keeps things simple.
Parameters:
- input – This variable should be either a theano.tensor4 (a reshaped theano.matrix also works) variable or the output of a previous layer which is a theano.tensor4, to be convolved with a theano.shared. The input should be of shape (batchsize, channels, height, width). For those who have tried pylearn2 or such, this is called bc01 format.
- filters – This variable should be a theano.shared variable of filter weights; it could even be a filter bank. filters should be of shape (nchannels, nkerns, filter_height, filter_width). nchannels is the number of input channels and nkerns is the number of kernels or output channels.
- subsample – Stride tuple of (int, int).
- filter_shape – This variable should be a tuple or an array: [nkerns, nchannels, filter_height, filter_width]
- image_shape – This variable should be a tuple or an array: [batchsize, channels, height, width]. image_shape[1] must be equal to filter_shape[1].
- border_mode – The input to this can be either 'same' or other theano defaults.
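To make the bc01 convention concrete, here is a minimal NumPy sketch of a 'valid' 2D convolution over a (batchsize, channels, height, width) input with a (nkerns, nchannels, filter_height, filter_width) filter bank. This is only an illustration of the shape convention the class expects, not the theano code the class actually wraps; the function name conv2d_valid is hypothetical.

```python
import numpy as np

def conv2d_valid(inputs, filters):
    """Plain-NumPy 'valid' 2D convolution in bc01 format.

    inputs:  (batchsize, channels, height, width)
    filters: (nkerns, nchannels, filter_height, filter_width)
    returns: (batchsize, nkerns, height - fh + 1, width - fw + 1)
    """
    b, c, h, w = inputs.shape
    nk, nc, fh, fw = filters.shape
    assert c == nc, "image_shape[1] must equal filter_shape[1]"
    out = np.zeros((b, nk, h - fh + 1, w - fw + 1))
    # True convolution flips the kernel; cross-correlation would not.
    flipped = filters[:, :, ::-1, ::-1]
    for i in range(h - fh + 1):
        for j in range(w - fw + 1):
            patch = inputs[:, :, i:i + fh, j:j + fw]   # (b, c, fh, fw)
            # Contract over channels and the filter window -> (b, nkerns)
            out[:, :, i, j] = np.tensordot(patch, flipped,
                                           axes=([1, 2, 3], [1, 2, 3]))
    return out

x = np.random.rand(2, 3, 8, 8)   # batchsize=2, channels=3
w_ = np.random.rand(4, 3, 3, 3)  # nkerns=4, 3x3 filters
y = conv2d_valid(x, w_)
print(y.shape)                   # (2, 4, 6, 6)
```

Note how the channel axis of the input must match the second axis of the filter bank, which is the image_shape[1] == filter_shape[1] constraint above.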
Notes
conv2d.out: output, which can be provided as input to the next layer or to other convolutional layer options. The size of the output depends on the border mode and the subsample operation performed. conv2d.out_shp: (int, int), a tuple (height, width) of all feature maps.
The options for the border_mode input, at the moment of writing this doc, are:
'valid' – apply the filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1.
'full' – apply the filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1.
'half' – pad the input with a symmetric border of filter rows // 2 rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of rows and columns, this leads to the output shape being equal to the input shape.
<int> – pad the input with a symmetric border of zeros of the given width, then perform a valid convolution.
(<int1>, <int2>) – pad the input with a symmetric border of int1 rows and int2 columns, then perform a valid convolution.
Refer to the theano documentation's convolution page for more details on this. Basically, cuDNN is used for 'same' because at the moment of writing this function, theano.conv2d doesn't support 'same' convolutions on the GPU. For everything else, the theano default will be used.
Todo
Implement border_mode = 'same' for the libgpuarray backend. As of now, only the CUDA backend is supported. Need to do something about this: with v0.10 of theano, I cannot use cuda.dnn for same convolution.
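The shape rules above reduce to one formula: pad the input, then apply a strided valid convolution. A small sketch of that arithmetic (the helper name conv_out_shp is hypothetical, not part of yann):

```python
def conv_out_shp(in_shp, filter_shp, subsample=(1, 1), border_mode='valid'):
    """Output (height, width) for one convolution, per border_mode.

    in_shp, filter_shp: (height, width) tuples.
    int / (int, int) modes mean symmetric zero padding followed
    by a valid convolution, as described above.
    """
    def pad_for(mode, f):
        if mode == 'valid':
            return 0            # no padding
        if mode == 'full':
            return f - 1        # filter may partly overlap
        if mode == 'half':
            return f // 2       # output == input for odd filters
        return mode             # an explicit int pad width

    if isinstance(border_mode, tuple):
        pads = border_mode
    else:
        pads = (pad_for(border_mode, filter_shp[0]),
                pad_for(border_mode, filter_shp[1]))
    # Strided valid convolution over the padded input.
    return tuple((i + 2 * p - f) // s + 1
                 for i, f, p, s in zip(in_shp, filter_shp, pads, subsample))

print(conv_out_shp((28, 28), (5, 5)))                      # (24, 24) 'valid'
print(conv_out_shp((28, 28), (5, 5), border_mode='full'))  # (32, 32)
print(conv_out_shp((28, 28), (5, 5), border_mode='half'))  # (28, 28)
print(conv_out_shp((28, 28), (5, 5), subsample=(2, 2)))    # (12, 12)
```

This is the quantity the class exposes as conv2d.out_shp.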
class yann.core.conv.deconvolver_2d(input, filters, subsample, filter_shape, image_shape, output_shape, border_mode='valid', verbose=1)[source]
Class that performs deconvolution.
This class performs deconvolution (transposed convolution).
Parameters:
- input – This variable should be either a theano.tensor4 (a reshaped theano.matrix also works) variable or the output of a previous layer which is a theano.tensor4, to be convolved with a theano.shared. The input should be of shape (batchsize, channels, height, width). For those who have tried pylearn2 or such, this is called bc01 format.
- filters – This variable should be a theano.shared variable of filter weights; it could even be a filter bank. filters should be of shape (nchannels, nkerns, filter_height, filter_width). nchannels is the number of input channels and nkerns is the number of kernels or output channels.
- subsample – Stride tuple of (int, int).
- filter_shape – This variable should be a tuple or an array: [nkerns, nchannels, filter_height, filter_width]
- image_shape – This variable should be a tuple or an array: [batchsize, channels, height, width]. image_shape[1] must be equal to filter_shape[1].
- output_shape – The required size of the output image. This variable should be a tuple.
- border_mode – The input to this can be either 'same' or other theano defaults.
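The output_shape requested here is not arbitrary: for the 'valid' mode the class supports, a transposed convolution must invert the forward 'valid' shape rule, i.e. out = (in - 1) * stride + filter. A short sketch of that consistency check (the helper name deconv_out_shp is hypothetical, not part of yann):

```python
def deconv_out_shp(in_shp, filter_shp, subsample=(1, 1)):
    """Output (height, width) of a 'valid' transposed convolution.

    Inverts the forward 'valid' rule out = (in - filter) // stride + 1,
    giving out = (in - 1) * stride + filter.
    """
    return tuple((i - 1) * s + f
                 for i, s, f in zip(in_shp, subsample, filter_shp))

# A (12, 12) feature map deconvolved with 5x5 filters at stride 2
# requests a (27, 27) output_shape...
print(deconv_out_shp((12, 12), (5, 5), subsample=(2, 2)))  # (27, 27)
# ...and the forward 'valid' convolution of that output maps back:
assert ((27 - 5) // 2 + 1, (27 - 5) // 2 + 1) == (12, 12)
```

With a stride greater than 1 several output sizes can map forward to the same input size, which is why the class asks the caller to pass output_shape explicitly rather than inferring it.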
Notes
conv2d.out: output, which can be provided as input to the next layer or to other convolutional layer options. The size of the output depends on the border mode and the subsample operation performed. conv2d.out_shp: (int, int), a tuple (height, width) of all feature maps.
The options for the border_mode input, at the moment of writing this doc, are:
'valid' – apply the filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1.
'full' – apply the filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1.
'half' – pad the input with a symmetric border of filter rows // 2 rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of rows and columns, this leads to the output shape being equal to the input shape.
<int> – pad the input with a symmetric border of zeros of the given width, then perform a valid convolution.
(<int1>, <int2>) – pad the input with a symmetric border of int1 rows and int2 columns, then perform a valid convolution.
Refer to the theano documentation's convolution page for more details on this. Basically, cuDNN is used for 'same' because at the moment of writing this function, theano.conv2d doesn't support 'same' convolutions on the GPU. For everything else, the theano default will be used.
Todo
Implement border_mode = 'same' and 'full' for the libgpuarray backend. As of now, only the CUDA backend is supported. Need to do something about this: with v0.10 of theano, I cannot use cuda.dnn for same convolution.
Right now deconvolution works only with border_mode = 'valid'.