[source]

Convolution1D

keras.layers.convolutional.Convolution1D(nb_filter, filter_length, init='glorot_uniform', activation=None, weights=None, border_mode='valid', subsample_length=1, W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True, input_dim=None, input_length=None)

Convolution operator for filtering neighborhoods of one-dimensional inputs. When using this layer as the first layer in a model, either provide the keyword argument input_dim (int, e.g. 128 for sequences of 128-dimensional vectors), or input_shape (tuple of integers, e.g. (10, 128) for sequences of 10 vectors of 128 dimensions each).

Example

# apply a convolution 1d of length 3 to a sequence with 10 timesteps,
# with 64 output filters
model = Sequential()
model.add(Convolution1D(64, 3, border_mode='same', input_shape=(10, 32)))
# now model.output_shape == (None, 10, 64)

# add a new conv1d on top
model.add(Convolution1D(32, 3, border_mode='same'))
# now model.output_shape == (None, 10, 32)

Arguments

  • nb_filter: Number of convolution kernels to use (dimensionality of the output).
  • filter_length: The extension (spatial or temporal) of each filter.
  • init: name of initialization function for the weights of the layer (see initializations), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don't pass a weights argument.
  • activation: name of activation function to use (see activations), or alternatively, elementwise Theano function. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
  • weights: list of numpy arrays to set as initial weights.
  • border_mode: 'valid', 'same' or 'full'. ('full' requires the Theano backend.)
  • subsample_length: factor by which to subsample output.
  • W_regularizer: instance of WeightRegularizer (e.g. L1 or L2 regularization), applied to the main weights matrix.
  • b_regularizer: instance of WeightRegularizer, applied to the bias.
  • activity_regularizer: instance of ActivityRegularizer, applied to the network output.
  • W_constraint: instance of the constraints module (e.g. maxnorm, nonneg), applied to the main weights matrix.
  • b_constraint: instance of the constraints module, applied to the bias.
  • bias: whether to include a bias (i.e. make the layer affine rather than linear).
  • input_dim: Number of channels/dimensions in the input. Either this argument or the keyword argument input_shape must be provided when using this layer as the first layer in a model.
  • input_length: Length of input sequences, when it is constant. This argument is required if you are going to connect Flatten then Dense layers downstream (without it, the shape of the dense outputs cannot be computed).

Input shape

3D tensor with shape: (samples, steps, input_dim).

Output shape

3D tensor with shape: (samples, new_steps, nb_filter). steps value might have changed due to padding.


[source]

AtrousConvolution1D

keras.layers.convolutional.AtrousConvolution1D(nb_filter, filter_length, init='glorot_uniform', activation=None, weights=None, border_mode='valid', subsample_length=1, atrous_rate=1, W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True)

Atrous convolution operator for filtering neighborhoods of one-dimensional inputs. A.k.a. dilated convolution or convolution with holes. When using this layer as the first layer in a model, either provide the keyword argument input_dim (int, e.g. 128 for sequences of 128-dimensional vectors), or input_shape (tuple of integers, e.g. (10, 128) for sequences of 10 vectors of 128 dimensions each).

Example

# apply an atrous convolution 1d of length 3 with atrous rate 2 to a sequence with 10 timesteps,
# with 64 output filters
model = Sequential()
model.add(AtrousConvolution1D(64, 3, atrous_rate=2, border_mode='same', input_shape=(10, 32)))
# now model.output_shape == (None, 10, 64)

# add a new atrous conv1d on top
model.add(AtrousConvolution1D(32, 3, atrous_rate=2, border_mode='same'))
# now model.output_shape == (None, 10, 32)

Arguments

  • nb_filter: Number of convolution kernels to use (dimensionality of the output).
  • filter_length: The extension (spatial or temporal) of each filter.
  • init: name of initialization function for the weights of the layer (see initializations), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don't pass a weights argument.
  • activation: name of activation function to use (see activations), or alternatively, elementwise Theano function. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
  • weights: list of numpy arrays to set as initial weights.
  • border_mode: 'valid', 'same' or 'full'. ('full' requires the Theano backend.)
  • subsample_length: factor by which to subsample output.
  • atrous_rate: Factor for kernel dilation. Also called filter_dilation elsewhere.
  • W_regularizer: instance of WeightRegularizer (e.g. L1 or L2 regularization), applied to the main weights matrix.
  • b_regularizer: instance of WeightRegularizer, applied to the bias.
  • activity_regularizer: instance of ActivityRegularizer, applied to the network output.
  • W_constraint: instance of the constraints module (e.g. maxnorm, nonneg), applied to the main weights matrix.
  • b_constraint: instance of the constraints module, applied to the bias.
  • bias: whether to include a bias (i.e. make the layer affine rather than linear).
  • input_dim: Number of channels/dimensions in the input. Either this argument or the keyword argument input_shape must be provided when using this layer as the first layer in a model.
  • input_length: Length of input sequences, when it is constant. This argument is required if you are going to connect Flatten then Dense layers downstream (without it, the shape of the dense outputs cannot be computed).

Input shape

3D tensor with shape: (samples, steps, input_dim).

Output shape

3D tensor with shape: (samples, new_steps, nb_filter). steps value might have changed due to padding.


[source]

Convolution2D

keras.layers.convolutional.Convolution2D(nb_filter, nb_row, nb_col, init='glorot_uniform', activation=None, weights=None, border_mode='valid', subsample=(1, 1), dim_ordering='default', W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True)

Convolution operator for filtering windows of two-dimensional inputs. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers, does not include the sample axis), e.g. input_shape=(3, 128, 128) for 128x128 RGB pictures.

Examples

# apply a 3x3 convolution with 64 output filters on a 256x256 image:
model = Sequential()
model.add(Convolution2D(64, 3, 3, border_mode='same', input_shape=(3, 256, 256)))
# now model.output_shape == (None, 64, 256, 256)

# add a 3x3 convolution on top, with 32 output filters:
model.add(Convolution2D(32, 3, 3, border_mode='same'))
# now model.output_shape == (None, 32, 256, 256)

Arguments

  • nb_filter: Number of convolution filters to use.
  • nb_row: Number of rows in the convolution kernel.
  • nb_col: Number of columns in the convolution kernel.
  • init: name of initialization function for the weights of the layer (see initializations), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don't pass a weights argument.
  • activation: name of activation function to use (see activations), or alternatively, elementwise Theano function. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
  • weights: list of numpy arrays to set as initial weights.
  • border_mode: 'valid', 'same' or 'full'. ('full' requires the Theano backend.)
  • subsample: tuple of length 2. Factor by which to subsample output. Also called strides elsewhere.
  • W_regularizer: instance of WeightRegularizer (e.g. L1 or L2 regularization), applied to the main weights matrix.
  • b_regularizer: instance of WeightRegularizer, applied to the bias.
  • activity_regularizer: instance of ActivityRegularizer, applied to the network output.
  • W_constraint: instance of the constraints module (e.g. maxnorm, nonneg), applied to the main weights matrix.
  • b_constraint: instance of the constraints module, applied to the bias.
  • dim_ordering: 'th' or 'tf'. In 'th' mode, the channels dimension (the depth) is at index 1; in 'tf' mode it is at index 3. It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".
  • bias: whether to include a bias (i.e. make the layer affine rather than linear).

Input shape

4D tensor with shape: (samples, channels, rows, cols) if dim_ordering='th' or 4D tensor with shape: (samples, rows, cols, channels) if dim_ordering='tf'.

Output shape

4D tensor with shape: (samples, nb_filter, new_rows, new_cols) if dim_ordering='th' or 4D tensor with shape: (samples, new_rows, new_cols, nb_filter) if dim_ordering='tf'. rows and cols values might have changed due to padding.


[source]

AtrousConvolution2D

keras.layers.convolutional.AtrousConvolution2D(nb_filter, nb_row, nb_col, init='glorot_uniform', activation=None, weights=None, border_mode='valid', subsample=(1, 1), atrous_rate=(1, 1), dim_ordering='default', W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True)

Atrous convolution operator for filtering windows of two-dimensional inputs. A.k.a. dilated convolution or convolution with holes. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers, does not include the sample axis), e.g. input_shape=(3, 128, 128) for 128x128 RGB pictures.

Examples

# apply a 3x3 convolution with atrous rate 2x2 and 64 output filters on a 256x256 image:
model = Sequential()
model.add(AtrousConvolution2D(64, 3, 3, atrous_rate=(2,2), border_mode='valid', input_shape=(3, 256, 256)))
# now the actual kernel size is dilated from 3x3 to 5x5 (3+(3-1)*(2-1)=5)
# thus model.output_shape == (None, 64, 252, 252)

Arguments

  • nb_filter: Number of convolution filters to use.
  • nb_row: Number of rows in the convolution kernel.
  • nb_col: Number of columns in the convolution kernel.
  • init: name of initialization function for the weights of the layer (see initializations), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don't pass a weights argument.
  • activation: name of activation function to use (see activations), or alternatively, elementwise Theano function. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
  • weights: list of numpy arrays to set as initial weights.
  • border_mode: 'valid', 'same' or 'full'. ('full' requires the Theano backend.)
  • subsample: tuple of length 2. Factor by which to subsample output. Also called strides elsewhere.
  • atrous_rate: tuple of length 2. Factor for kernel dilation. Also called filter_dilation elsewhere.
  • W_regularizer: instance of WeightRegularizer (e.g. L1 or L2 regularization), applied to the main weights matrix.
  • b_regularizer: instance of WeightRegularizer, applied to the bias.
  • activity_regularizer: instance of ActivityRegularizer, applied to the network output.
  • W_constraint: instance of the constraints module (e.g. maxnorm, nonneg), applied to the main weights matrix.
  • b_constraint: instance of the constraints module, applied to the bias.
  • dim_ordering: 'th' or 'tf'. In 'th' mode, the channels dimension (the depth) is at index 1; in 'tf' mode it is at index 3. It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".
  • bias: whether to include a bias (i.e. make the layer affine rather than linear).

Input shape

4D tensor with shape: (samples, channels, rows, cols) if dim_ordering='th' or 4D tensor with shape: (samples, rows, cols, channels) if dim_ordering='tf'.

Output shape

4D tensor with shape: (samples, nb_filter, new_rows, new_cols) if dim_ordering='th' or 4D tensor with shape: (samples, new_rows, new_cols, nb_filter) if dim_ordering='tf'. rows and cols values might have changed due to padding.


[source]

SeparableConvolution2D

keras.layers.convolutional.SeparableConvolution2D(nb_filter, nb_row, nb_col, init='glorot_uniform', activation=None, weights=None, border_mode='valid', subsample=(1, 1), depth_multiplier=1, dim_ordering='default', depthwise_regularizer=None, pointwise_regularizer=None, b_regularizer=None, activity_regularizer=None, depthwise_constraint=None, pointwise_constraint=None, b_constraint=None, bias=True)

Separable convolution operator for 2D inputs.

Separable convolutions consist of first performing a depthwise spatial convolution (which acts on each input channel separately), followed by a pointwise convolution which mixes together the resulting output channels. The depth_multiplier argument controls how many output channels are generated per input channel in the depthwise step.

Intuitively, separable convolutions can be understood as a way to factorize a convolution kernel into two smaller kernels, or as an extreme version of an Inception block.

When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers, does not include the sample axis), e.g. input_shape=(3, 128, 128) for 128x128 RGB pictures.

Theano warning

This layer is only available with the TensorFlow backend for the time being.
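
Example

A minimal usage sketch: the shapes below are illustrative and assume the default dim_ordering='tf' (channels last), since this layer currently requires the TensorFlow backend.

# apply a 3x3 separable convolution with 64 output filters on 128x128 RGB images
from keras.models import Sequential
from keras.layers.convolutional import SeparableConvolution2D
model = Sequential()
model.add(SeparableConvolution2D(64, 3, 3, border_mode='same', input_shape=(128, 128, 3)))
# now model.output_shape == (None, 128, 128, 64)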

Arguments

  • nb_filter: Number of convolution filters to use.
  • nb_row: Number of rows in the convolution kernel.
  • nb_col: Number of columns in the convolution kernel.
  • init: name of initialization function for the weights of the layer (see initializations), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don't pass a weights argument.
  • activation: name of activation function to use (see activations), or alternatively, elementwise Theano function. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
  • weights: list of numpy arrays to set as initial weights.
  • border_mode: 'valid' or 'same'.
  • subsample: tuple of length 2. Factor by which to subsample output. Also called strides elsewhere.
  • depth_multiplier: how many output channels to use per input channel for the depthwise convolution step.
  • depthwise_regularizer: instance of WeightRegularizer (e.g. L1 or L2 regularization), applied to the depthwise weights matrix.
  • pointwise_regularizer: instance of WeightRegularizer (e.g. L1 or L2 regularization), applied to the pointwise weights matrix.
  • b_regularizer: instance of WeightRegularizer, applied to the bias.
  • activity_regularizer: instance of ActivityRegularizer, applied to the network output.
  • depthwise_constraint: instance of the constraints module (e.g. maxnorm, nonneg), applied to the depthwise weights matrix.
  • pointwise_constraint: instance of the constraints module (e.g. maxnorm, nonneg), applied to the pointwise weights matrix.
  • b_constraint: instance of the constraints module, applied to the bias.
  • dim_ordering: 'th' or 'tf'. In 'th' mode, the channels dimension (the depth) is at index 1; in 'tf' mode it is at index 3. It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".
  • bias: whether to include a bias (i.e. make the layer affine rather than linear).

Input shape

4D tensor with shape: (samples, channels, rows, cols) if dim_ordering='th' or 4D tensor with shape: (samples, rows, cols, channels) if dim_ordering='tf'.

Output shape

4D tensor with shape: (samples, nb_filter, new_rows, new_cols) if dim_ordering='th' or 4D tensor with shape: (samples, new_rows, new_cols, nb_filter) if dim_ordering='tf'. rows and cols values might have changed due to padding.


[source]

Deconvolution2D

keras.layers.convolutional.Deconvolution2D(nb_filter, nb_row, nb_col, output_shape, init='glorot_uniform', activation=None, weights=None, border_mode='valid', subsample=(1, 1), dim_ordering='default', W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True)

Transposed convolution operator for filtering windows of two-dimensional inputs. The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution. [1]

When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers, does not include the sample axis), e.g. input_shape=(3, 128, 128) for 128x128 RGB pictures.

To pass the correct output_shape to this layer, one could use a test model to predict and observe the actual output shape.

Examples

# apply a 3x3 transposed convolution with stride 1x1 and 3 output filters on a 12x12 image:
model = Sequential()
model.add(Deconvolution2D(3, 3, 3, output_shape=(None, 3, 14, 14), border_mode='valid', input_shape=(3, 12, 12)))
# Note that you will have to change the output_shape depending on the backend used.

# we can predict with the model and print the shape of the array.
dummy_input = np.ones((32, 3, 12, 12))
# For TensorFlow dummy_input = np.ones((32, 12, 12, 3))
preds = model.predict(dummy_input)
print(preds.shape)
# Theano GPU: (32, 3, 13, 13)
# Theano CPU: (32, 3, 14, 14)
# TensorFlow: (32, 14, 14, 3)

# apply a 3x3 transposed convolution with stride 2x2 and 3 output filters on a 12x12 image:
model = Sequential()
model.add(Deconvolution2D(3, 3, 3, output_shape=(None, 3, 25, 25), subsample=(2, 2), border_mode='valid', input_shape=(3, 12, 12)))
model.summary()

# we can predict with the model and print the shape of the array.
dummy_input = np.ones((32, 3, 12, 12))
# For TensorFlow dummy_input = np.ones((32, 12, 12, 3))
preds = model.predict(dummy_input)
print(preds.shape)
# Theano GPU: (32, 3, 25, 25)
# Theano CPU: (32, 3, 25, 25)
# TensorFlow: (32, 25, 25, 3)

Arguments

  • nb_filter: Number of transposed convolution filters to use.
  • nb_row: Number of rows in the transposed convolution kernel.
  • nb_col: Number of columns in the transposed convolution kernel.
  • output_shape: Output shape of the transposed convolution operation. Tuple of integers (nb_samples, nb_filter, nb_output_rows, nb_output_cols). Formula for the calculation of the output shape [1], [2]: o = s (i - 1) + a + k - 2p, where a is in {0, ..., s - 1}
    • where: i - input size (rows or cols), k - kernel size (nb_row or nb_col), s - stride (subsample for rows or cols respectively), p - padding size, a - user-specified quantity used to distinguish between the s different possible output sizes. Because a is not specified explicitly and Theano and TensorFlow use different values, it is better to use a dummy input and observe the actual output shape of a layer as specified in the examples (see also the worked check after this list).
  • init: name of initialization function for the weights of the layer (see initializations), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don't pass a weights argument.
  • activation: name of activation function to use (see activations), or alternatively, elementwise Theano/TensorFlow function. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
  • weights: list of numpy arrays to set as initial weights.
  • border_mode: 'valid', 'same' or 'full'. ('full' requires the Theano backend.)
  • subsample: tuple of length 2. Factor by which to oversample output. Also called strides elsewhere.
  • W_regularizer: instance of WeightRegularizer (e.g. L1 or L2 regularization), applied to the main weights matrix.
  • b_regularizer: instance of WeightRegularizer, applied to the bias.
  • activity_regularizer: instance of ActivityRegularizer, applied to the network output.
  • W_constraint: instance of the constraints module (e.g. maxnorm, nonneg), applied to the main weights matrix.
  • b_constraint: instance of the constraints module, applied to the bias.
  • dim_ordering: 'th' or 'tf'. In 'th' mode, the channels dimension (the depth) is at index 1; in 'tf' mode it is at index 3. It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".
  • bias: whether to include a bias (i.e. make the layer affine rather than linear).
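
As a worked check of the output-shape formula above, take the second example (i = 12, k = 3, s = 2, p = 0 for 'valid' padding) and assume a = 0: o = 2 * (12 - 1) + 0 + 3 - 2 * 0 = 25, which matches output_shape=(None, 3, 25, 25).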

Input shape

4D tensor with shape: (samples, channels, rows, cols) if dim_ordering='th' or 4D tensor with shape: (samples, rows, cols, channels) if dim_ordering='tf'.

Output shape

4D tensor with shape: (samples, nb_filter, new_rows, new_cols) if dim_ordering='th' or 4D tensor with shape: (samples, new_rows, new_cols, nb_filter) if dim_ordering='tf'. rows and cols values might have changed due to padding.

References

  • [1] A guide to convolution arithmetic for deep learning
  • [2] Transposed convolution arithmetic
  • [3] Deconvolutional Networks


[source]

Convolution3D

keras.layers.convolutional.Convolution3D(nb_filter, kernel_dim1, kernel_dim2, kernel_dim3, init='glorot_uniform', activation=None, weights=None, border_mode='valid', subsample=(1, 1, 1), dim_ordering='default', W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True)

Convolution operator for filtering windows of three-dimensional inputs. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers, does not include the sample axis), e.g. input_shape=(3, 10, 128, 128) for 10 frames of 128x128 RGB pictures.
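
Example

A minimal usage sketch: the shapes below are illustrative, and dim_ordering='th' is passed explicitly so the channel axis comes first.

# apply a 3x3x3 convolution with 32 output filters on 10 frames of 64x64 RGB video
from keras.models import Sequential
from keras.layers.convolutional import Convolution3D
model = Sequential()
model.add(Convolution3D(32, 3, 3, 3, border_mode='same', dim_ordering='th', input_shape=(3, 10, 64, 64)))
# now model.output_shape == (None, 32, 10, 64, 64)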

Arguments

  • nb_filter: Number of convolution filters to use.
  • kernel_dim1: Length of the first dimension in the convolution kernel.
  • kernel_dim2: Length of the second dimension in the convolution kernel.
  • kernel_dim3: Length of the third dimension in the convolution kernel.
  • init: name of initialization function for the weights of the layer (see initializations), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don't pass a weights argument.
  • activation: name of activation function to use (see activations), or alternatively, elementwise Theano function. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
  • weights: list of Numpy arrays to set as initial weights.
  • border_mode: 'valid', 'same' or 'full'. ('full' requires the Theano backend.)
  • subsample: tuple of length 3. Factor by which to subsample output. Also called strides elsewhere.
    • Note: 'subsample' is implemented by slicing the output of conv3d with strides=(1,1,1).
  • W_regularizer: instance of WeightRegularizer (e.g. L1 or L2 regularization), applied to the main weights matrix.
  • b_regularizer: instance of WeightRegularizer, applied to the bias.
  • activity_regularizer: instance of ActivityRegularizer, applied to the network output.
  • W_constraint: instance of the constraints module (e.g. maxnorm, nonneg), applied to the main weights matrix.
  • b_constraint: instance of the constraints module, applied to the bias.
  • dim_ordering: 'th' or 'tf'. In 'th' mode, the channels dimension (the depth) is at index 1; in 'tf' mode it is at index 4. It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".
  • bias: whether to include a bias (i.e. make the layer affine rather than linear).

Input shape

5D tensor with shape: (samples, channels, conv_dim1, conv_dim2, conv_dim3) if dim_ordering='th' or 5D tensor with shape: (samples, conv_dim1, conv_dim2, conv_dim3, channels) if dim_ordering='tf'.

Output shape

5D tensor with shape: (samples, nb_filter, new_conv_dim1, new_conv_dim2, new_conv_dim3) if dim_ordering='th' or 5D tensor with shape: (samples, new_conv_dim1, new_conv_dim2, new_conv_dim3, nb_filter) if dim_ordering='tf'. new_conv_dim1, new_conv_dim2 and new_conv_dim3 values might have changed due to padding.


[source]

Cropping1D

keras.layers.convolutional.Cropping1D(cropping=(1, 1))

Cropping layer for 1D input (e.g. temporal sequence). It crops along the time dimension (axis 1).

Arguments

  • cropping: tuple of int (length 2). How many units should be trimmed off at the beginning and end of the cropping dimension (axis 1).

Input shape

3D tensor with shape (samples, axis_to_crop, features)

Output shape

3D tensor with shape (samples, cropped_axis, features)
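
Example

A minimal usage sketch with illustrative shapes:

# trim 2 timesteps from the start and 3 from the end of 10-step sequences
from keras.models import Sequential
from keras.layers.convolutional import Cropping1D
model = Sequential()
model.add(Cropping1D(cropping=(2, 3), input_shape=(10, 16)))
# now model.output_shape == (None, 5, 16)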


[source]

Cropping2D

keras.layers.convolutional.Cropping2D(cropping=((0, 0), (0, 0)), dim_ordering='default')

Cropping layer for 2D input (e.g. picture). It crops along spatial dimensions, i.e. width and height.

Arguments

  • cropping: tuple of tuple of int (length 2). How many units should be trimmed off at the beginning and end of the 2 cropping dimensions (rows, cols).
  • dim_ordering: 'th' or 'tf'. In 'th' mode, the channels dimension (the depth) is at index 1; in 'tf' mode it is at index 3. It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".

Input shape

4D tensor with shape: (samples, depth, first_axis_to_crop, second_axis_to_crop)

Output shape

4D tensor with shape: (samples, depth, first_cropped_axis, second_cropped_axis)

Examples

# Crop the input 2D images or feature maps
model = Sequential()
model.add(Cropping2D(cropping=((2, 2), (4, 4)), input_shape=(3, 28, 28)))
# now model.output_shape == (None, 3, 24, 20)
model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Cropping2D(cropping=((2, 2), (2, 2))))
# now model.output_shape == (None, 64, 20, 16)


[source]

Cropping3D

keras.layers.convolutional.Cropping3D(cropping=((1, 1), (1, 1), (1, 1)), dim_ordering='default')

Cropping layer for 3D data (e.g. spatial or spatio-temporal).

Arguments

  • cropping: tuple of tuple of int (length 3). How many units should be trimmed off at the beginning and end of the 3 cropping dimensions (kernel_dim1, kernel_dim2, kernel_dim3).
  • dim_ordering: 'th' or 'tf'. In 'th' mode, the channels dimension (the depth) is at index 1; in 'tf' mode it is at index 4. It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".

Input shape

5D tensor with shape: (samples, depth, first_axis_to_crop, second_axis_to_crop, third_axis_to_crop)

Output shape

5D tensor with shape: (samples, depth, first_cropped_axis, second_cropped_axis, third_cropped_axis)
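
Example

A minimal usage sketch: the shapes below are illustrative, and dim_ordering='th' is passed explicitly so the depth/channel axis comes first.

# crop one unit from both ends of each of the three spatial axes
from keras.models import Sequential
from keras.layers.convolutional import Cropping3D
model = Sequential()
model.add(Cropping3D(cropping=((1, 1), (1, 1), (1, 1)), dim_ordering='th', input_shape=(3, 10, 10, 10)))
# now model.output_shape == (None, 3, 8, 8, 8)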


[source]

UpSampling1D

keras.layers.convolutional.UpSampling1D(length=2)

Repeat each temporal step length times along the time axis.

Arguments

  • length: integer. Upsampling factor.

Input shape

3D tensor with shape: (samples, steps, features).

Output shape

3D tensor with shape: (samples, upsampled_steps, features).
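
Example

A minimal usage sketch with illustrative shapes:

# repeat each timestep twice along the time axis
from keras.models import Sequential
from keras.layers.convolutional import UpSampling1D
model = Sequential()
model.add(UpSampling1D(length=2, input_shape=(10, 16)))
# now model.output_shape == (None, 20, 16)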


[source]

UpSampling2D

keras.layers.convolutional.UpSampling2D(size=(2, 2), dim_ordering='default')

Repeat the rows and columns of the data by size[0] and size[1] respectively.

Arguments

  • size: tuple of 2 integers. The upsampling factors for rows and columns.
  • dim_ordering: 'th' or 'tf'. In 'th' mode, the channels dimension (the depth) is at index 1; in 'tf' mode it is at index 3. It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".

Input shape

4D tensor with shape: (samples, channels, rows, cols) if dim_ordering='th' or 4D tensor with shape: (samples, rows, cols, channels) if dim_ordering='tf'.

Output shape

4D tensor with shape: (samples, channels, upsampled_rows, upsampled_cols) if dim_ordering='th' or 4D tensor with shape: (samples, upsampled_rows, upsampled_cols, channels) if dim_ordering='tf'.
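
Example

A minimal usage sketch: the shapes below are illustrative, and dim_ordering='th' is passed explicitly so the channel axis comes first.

# double the rows and columns of 8x8 feature maps with 3 channels
from keras.models import Sequential
from keras.layers.convolutional import UpSampling2D
model = Sequential()
model.add(UpSampling2D(size=(2, 2), dim_ordering='th', input_shape=(3, 8, 8)))
# now model.output_shape == (None, 3, 16, 16)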


[source]

UpSampling3D

keras.layers.convolutional.UpSampling3D(size=(2, 2, 2), dim_ordering='default')

Repeat the first, second and third dimension of the data by size[0], size[1] and size[2] respectively.

Arguments

  • size: tuple of 3 integers. The upsampling factors for dim1, dim2 and dim3.
  • dim_ordering: 'th' or 'tf'. In 'th' mode, the channels dimension (the depth) is at index 1; in 'tf' mode it is at index 4. It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".

Input shape

5D tensor with shape: (samples, channels, dim1, dim2, dim3) if dim_ordering='th' or 5D tensor with shape: (samples, dim1, dim2, dim3, channels) if dim_ordering='tf'.

Output shape

5D tensor with shape: (samples, channels, upsampled_dim1, upsampled_dim2, upsampled_dim3) if dim_ordering='th' or 5D tensor with shape: (samples, upsampled_dim1, upsampled_dim2, upsampled_dim3, channels) if dim_ordering='tf'.
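
Example

A minimal usage sketch: the shapes below are illustrative, and dim_ordering='th' is passed explicitly so the channel axis comes first.

# double each of the three spatial dimensions
from keras.models import Sequential
from keras.layers.convolutional import UpSampling3D
model = Sequential()
model.add(UpSampling3D(size=(2, 2, 2), dim_ordering='th', input_shape=(3, 4, 4, 4)))
# now model.output_shape == (None, 3, 8, 8, 8)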


[source]

ZeroPadding1D

keras.layers.convolutional.ZeroPadding1D(padding=1)

Zero-padding layer for 1D input (e.g. temporal sequence).

Arguments

  • padding: int, or tuple of int (length 2), or dictionary.
    • If int: How many zeros to add at the beginning and end of the padding dimension (axis 1).
    • If tuple of int (length 2) How many zeros to add at the beginning and at the end of the padding dimension, in order '(left_pad, right_pad)'.
    • If dictionary: should contain the keys {'left_pad', 'right_pad'}. If any key is missing, default value of 0 will be used for the missing key.

Input shape

3D tensor with shape (samples, axis_to_pad, features)

Output shape

3D tensor with shape (samples, padded_axis, features)
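
Example

A minimal usage sketch with illustrative shapes:

# pad 2 zeros at the start and 3 at the end of the time axis
from keras.models import Sequential
from keras.layers.convolutional import ZeroPadding1D
model = Sequential()
model.add(ZeroPadding1D(padding=(2, 3), input_shape=(10, 16)))
# now model.output_shape == (None, 15, 16)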


[source]

ZeroPadding2D

keras.layers.convolutional.ZeroPadding2D(padding=(1, 1), dim_ordering='default')

Zero-padding layer for 2D input (e.g. picture).

Arguments

  • padding: tuple of int (length 2), or tuple of int (length 4), or dictionary.
    • If tuple of int (length 2): How many zeros to add at the beginning and end of the 2 padding dimensions (rows and cols).
    • If tuple of int (length 4): How many zeros to add at the beginning and at the end of the 2 padding dimensions (rows and cols), in the order '(top_pad, bottom_pad, left_pad, right_pad)'.
    • If dictionary: should contain the keys {'top_pad', 'bottom_pad', 'left_pad', 'right_pad'}. If any key is missing, default value of 0 will be used for the missing key.
  • dim_ordering: 'th' or 'tf'. In 'th' mode, the channels dimension (the depth) is at index 1; in 'tf' mode it is at index 3. It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".

Input shape

4D tensor with shape: (samples, channels, rows, cols) if dim_ordering='th' or 4D tensor with shape: (samples, rows, cols, channels) if dim_ordering='tf'.

Output shape

4D tensor with shape: (samples, channels, padded_rows, padded_cols) if dim_ordering='th' or 4D tensor with shape: (samples, padded_rows, padded_cols, channels) if dim_ordering='tf'.
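
Example

A minimal usage sketch: the shapes below are illustrative, and dim_ordering='th' is passed explicitly so the channel axis comes first.

# pad 1 row of zeros at the top and bottom, and 2 columns of zeros on the left and right
from keras.models import Sequential
from keras.layers.convolutional import ZeroPadding2D
model = Sequential()
model.add(ZeroPadding2D(padding=(1, 2), dim_ordering='th', input_shape=(3, 28, 28)))
# now model.output_shape == (None, 3, 30, 32)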


[source]

ZeroPadding3D

keras.layers.convolutional.ZeroPadding3D(padding=(1, 1, 1), dim_ordering='default')

Zero-padding layer for 3D data (spatial or spatio-temporal).

Arguments

  • padding: tuple of int (length 3). How many zeros to add at the beginning and end of the 3 padding dimensions (axes 3, 4 and 5). Currently only symmetric padding is supported.
  • dim_ordering: 'th' or 'tf'. In 'th' mode, the channels dimension (the depth) is at index 1; in 'tf' mode it is at index 4. It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".

Input shape

5D tensor with shape: (samples, depth, first_axis_to_pad, second_axis_to_pad, third_axis_to_pad)

Output shape

5D tensor with shape: (samples, depth, first_padded_axis, second_padded_axis, third_padded_axis)
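
Example

A minimal usage sketch: the shapes below are illustrative, and dim_ordering='th' is passed explicitly so the depth/channel axis comes first.

# pad one plane of zeros at both ends of each of the three padded axes
from keras.models import Sequential
from keras.layers.convolutional import ZeroPadding3D
model = Sequential()
model.add(ZeroPadding3D(padding=(1, 1, 1), dim_ordering='th', input_shape=(3, 10, 10, 10)))
# now model.output_shape == (None, 3, 12, 12, 12)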