LocallyConnected1D
```python
keras.layers.local.LocallyConnected1D(nb_filter, filter_length, init='glorot_uniform', activation=None, weights=None, border_mode='valid', subsample_length=1, W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True, input_dim=None, input_length=None)
```
The LocallyConnected1D layer works similarly to the Convolution1D layer, except that weights are unshared; that is, a different set of filters is applied at each different patch of the input.
When using this layer as the first layer in a model, either provide the keyword argument input_dim (int, e.g. 128 for sequences of 128-dimensional vectors), or input_shape (tuple of integers, e.g. input_shape=(10, 128) for sequences of 10 vectors of 128 dimensions).
Also, note that this layer can only be used with a fully-specified input shape (None dimensions are not allowed).
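To make the unshared-weights distinction concrete, here is a small parameter-count sketch in plain Python (no Keras required). The formulas are inferred from the layer description on this page, assuming border_mode='valid' and subsample_length=1:

```python
# Parameter counts for shared (Convolution1D) vs. unshared (LocallyConnected1D)
# weights. These formulas are a sketch inferred from the layer description
# above, assuming border_mode='valid' and subsample_length=1.

def conv1d_params(input_dim, nb_filter, filter_length, bias=True):
    # Convolution1D shares one filter bank across all patches of the input.
    kernel = filter_length * input_dim * nb_filter
    return kernel + (nb_filter if bias else 0)

def locally_connected1d_params(steps, input_dim, nb_filter, filter_length, bias=True):
    # LocallyConnected1D keeps a separate filter bank (and bias) per output step.
    new_steps = steps - filter_length + 1
    kernel = new_steps * filter_length * input_dim * nb_filter
    return kernel + (new_steps * nb_filter if bias else 0)

# For a (10, 32) input with 64 filters of length 3:
shared = conv1d_params(32, 64, 3)                     # 3*32*64 + 64 = 6208
unshared = locally_connected1d_params(10, 32, 64, 3)  # 8*(3*32*64) + 8*64 = 49664
```

The unshared variant multiplies the kernel cost by the number of output positions, which is why locally connected layers are far more parameter-hungry than their convolutional counterparts.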
Example
```python
from keras.models import Sequential
from keras.layers.local import LocallyConnected1D

# apply an unshared weight convolution 1d of length 3 to a sequence with
# 10 timesteps, with 64 output filters
model = Sequential()
model.add(LocallyConnected1D(64, 3, input_shape=(10, 32)))
# now model.output_shape == (None, 8, 64)
# add a new conv1d on top
model.add(LocallyConnected1D(32, 3))
# now model.output_shape == (None, 6, 32)
```
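The output lengths in the comments above follow the standard 'valid' convolution arithmetic. A minimal sketch (assumed from the example; Keras computes this internally):

```python
def lc1d_output_steps(steps, filter_length, subsample_length=1):
    # border_mode='valid': every filter position must fit inside the input
    return (steps - filter_length) // subsample_length + 1

# matches the model.output_shape comments in the example above
first = lc1d_output_steps(10, 3)      # 10 timesteps -> 8
second = lc1d_output_steps(first, 3)  # 8 timesteps -> 6
```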
Arguments
- nb_filter: Dimensionality of the output.
- filter_length: The extension (spatial or temporal) of each filter.
- init: name of initialization function for the weights of the layer (see initializations), or alternatively, a Theano function to use for weight initialization. This parameter is only relevant if you don't pass a weights argument.
- activation: name of activation function to use (see activations), or alternatively, an elementwise Theano function. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
- weights: list of numpy arrays to set as initial weights.
- border_mode: Only 'valid' is supported. Use ZeroPadding1D to achieve the same output length as the input.
- subsample_length: factor by which to subsample output.
- W_regularizer: instance of WeightRegularizer (eg. L1 or L2 regularization), applied to the main weights matrix.
- b_regularizer: instance of WeightRegularizer, applied to the bias.
- activity_regularizer: instance of ActivityRegularizer, applied to the network output.
- W_constraint: instance of the constraints module (eg. maxnorm, nonneg), applied to the main weights matrix.
- b_constraint: instance of the constraints module, applied to the bias.
- bias: whether to include a bias (i.e. make the layer affine rather than linear).
- input_dim: Number of channels/dimensions in the input. Either this argument or the keyword argument input_shape must be provided when using this layer as the first layer in a model.
- input_length: Length of input sequences, when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed).
Input shape
3D tensor with shape: (samples, steps, input_dim).
Output shape
3D tensor with shape: (samples, new_steps, nb_filter). The steps value might have changed due to padding.
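Since border_mode only supports 'valid', preserving the sequence length requires padding the input explicitly (e.g. with ZeroPadding1D). The required amount is simple shape arithmetic; the left/right split for even filter lengths is an assumption:

```python
def same_length_padding(filter_length):
    # A 'valid' convolution maps steps -> steps - filter_length + 1, so
    # filter_length - 1 total padding steps restore the original length.
    total = filter_length - 1
    left = total // 2
    right = total - left  # extra step goes on the right for even filter lengths
    return left, right

steps, filter_length = 10, 3
left, right = same_length_padding(filter_length)  # (1, 1)
padded_steps = steps + left + right
out_steps = padded_steps - filter_length + 1
assert out_steps == steps  # sequence length preserved
```

With filter_length=3 this corresponds to placing a ZeroPadding1D(padding=1) layer in front of the locally connected layer.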
LocallyConnected2D
```python
keras.layers.local.LocallyConnected2D(nb_filter, nb_row, nb_col, init='glorot_uniform', activation=None, weights=None, border_mode='valid', subsample=(1, 1), dim_ordering='default', W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True)
```
The LocallyConnected2D layer works similarly to the Convolution2D layer, except that weights are unshared; that is, a different set of filters is applied at each different patch of the input.
When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers, does not include the sample axis), e.g. input_shape=(3, 128, 128) for 128x128 RGB pictures.
Also, note that this layer can only be used with a fully-specified input shape (None dimensions are not allowed).
Examples
```python
from keras.models import Sequential
from keras.layers.local import LocallyConnected2D

# apply a 3x3 unshared weights convolution with 64 output filters on a 32x32 image:
model = Sequential()
model.add(LocallyConnected2D(64, 3, 3, input_shape=(3, 32, 32)))
# now model.output_shape == (None, 64, 30, 30)
# notice that this layer will consume (30*30)*(3*3*3*64) + (30*30)*64 parameters
# add a 3x3 unshared weights convolution on top, with 32 output filters:
model.add(LocallyConnected2D(32, 3, 3))
# now model.output_shape == (None, 32, 28, 28)
```
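The parameter count quoted in the comment above can be verified with plain arithmetic; the helper below just expands that expression (one filter bank plus bias per output position):

```python
def lc2d_params(rows, cols, channels, nb_filter, nb_row, nb_col, bias=True):
    # border_mode='valid' output grid, one filter bank per output position
    new_rows = rows - nb_row + 1
    new_cols = cols - nb_col + 1
    positions = new_rows * new_cols
    kernel = positions * (nb_row * nb_col * channels * nb_filter)
    return kernel + (positions * nb_filter if bias else 0)

# first layer in the example: (30*30)*(3*3*3*64) + (30*30)*64 parameters
params = lc2d_params(32, 32, 3, 64, 3, 3)  # 1612800
```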
Arguments
- nb_filter: Number of convolution filters to use.
- nb_row: Number of rows in the convolution kernel.
- nb_col: Number of columns in the convolution kernel.
- init: name of initialization function for the weights of the layer (see initializations), or alternatively, a Theano function to use for weight initialization. This parameter is only relevant if you don't pass a weights argument.
- activation: name of activation function to use (see activations), or alternatively, an elementwise Theano function. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
- weights: list of numpy arrays to set as initial weights.
- border_mode: Only 'valid' is supported. Use ZeroPadding2D to achieve the same output shape as the input.
- subsample: tuple of length 2. Factor by which to subsample output. Also called strides elsewhere.
- W_regularizer: instance of WeightRegularizer (eg. L1 or L2 regularization), applied to the main weights matrix.
- b_regularizer: instance of WeightRegularizer, applied to the bias.
- activity_regularizer: instance of ActivityRegularizer, applied to the network output.
- W_constraint: instance of the constraints module (eg. maxnorm, nonneg), applied to the main weights matrix.
- b_constraint: instance of the constraints module, applied to the bias.
- dim_ordering: 'th' or 'tf'. In 'th' mode, the channels dimension (the depth) is at index 1; in 'tf' mode it is at index 3.
- bias: whether to include a bias (i.e. make the layer affine rather than linear).
Input shape
4D tensor with shape: (samples, channels, rows, cols) if dim_ordering='th', or 4D tensor with shape: (samples, rows, cols, channels) if dim_ordering='tf'.
Output shape
4D tensor with shape: (samples, nb_filter, new_rows, new_cols) if dim_ordering='th', or 4D tensor with shape: (samples, new_rows, new_cols, nb_filter) if dim_ordering='tf'. The rows and cols values might have changed due to padding.
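new_rows and new_cols follow the same 'valid' arithmetic as the 1D case, applied per spatial axis with the subsample strides. A sketch of the shape computation (sample axis omitted; the formulas are inferred from the example above, not taken from the Keras source):

```python
def lc2d_output_shape(input_shape, nb_filter, nb_row, nb_col,
                      subsample=(1, 1), dim_ordering='th'):
    # border_mode='valid' per spatial axis: (size - kernel) // stride + 1
    if dim_ordering == 'th':
        channels, rows, cols = input_shape
    else:  # 'tf': channels last
        rows, cols, channels = input_shape
    new_rows = (rows - nb_row) // subsample[0] + 1
    new_cols = (cols - nb_col) // subsample[1] + 1
    if dim_ordering == 'th':
        return (nb_filter, new_rows, new_cols)
    return (new_rows, new_cols, nb_filter)

# matches the example above: 3x32x32 'th' input, 64 filters, 3x3 kernel
shape = lc2d_output_shape((3, 32, 32), 64, 3, 3)  # (64, 30, 30)
```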