Keras has two types of models: Sequential, a linear stack of layers, and Graph, a directed acyclic graph of layers.
Using the Sequential model
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
model = Sequential()
model.add(Dense(2, init='uniform', input_dim=64))
model.add(Activation('softmax'))
model.compile(optimizer='sgd', loss='mse')
'''
Train the model for 3 epochs, in batches of 16 samples,
on data stored in the Numpy array X_train,
and labels stored in the Numpy array y_train:
'''
model.fit(X_train, y_train, nb_epoch=3, batch_size=16, verbose=1)
'''
What you will see with verbose=1:
Train on 37800 samples, validate on 4200 samples
Epoch 0
37800/37800 [==============================] - 7s - loss: 0.0385
Epoch 1
37800/37800 [==============================] - 8s - loss: 0.0140
Epoch 2
10960/37800 [=======>......................] - ETA: 4s - loss: 0.0109
'''
model.fit(X_train, y_train, nb_epoch=3, batch_size=16, verbose=2)
'''
What you will see with verbose=2:
Train on 37800 samples, validate on 4200 samples
Epoch 0
loss: 0.0190
Epoch 1
loss: 0.0146
Epoch 2
loss: 0.0049
'''
'''
Demonstration of the show_accuracy argument
'''
model.fit(X_train, y_train, nb_epoch=3, batch_size=16, verbose=2, show_accuracy=True)
'''
Train on 37800 samples, validate on 4200 samples
Epoch 0
loss: 0.0190 - acc.: 0.8750
Epoch 1
loss: 0.0146 - acc.: 0.8750
Epoch 2
loss: 0.0049 - acc.: 1.0000
'''
'''
Demonstration of the validation_split argument
'''
model.fit(X_train, y_train, nb_epoch=3, batch_size=16,
validation_split=0.1, show_accuracy=True, verbose=1)
'''
Train on 37800 samples, validate on 4200 samples
Epoch 0
37800/37800 [==============================] - 7s - loss: 0.0385 - acc.: 0.7258 - val. loss: 0.0160 - val. acc.: 0.9136
Epoch 1
37800/37800 [==============================] - 8s - loss: 0.0140 - acc.: 0.9265 - val. loss: 0.0109 - val. acc.: 0.9383
Epoch 2
10960/37800 [=======>......................] - ETA: 4s - loss: 0.0109 - acc.: 0.9420
'''
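'''
Once trained, the model can be evaluated and used for inference with the
evaluate and predict methods documented below. A minimal sketch, assuming
held-out Numpy arrays X_test and y_test:
'''
score = model.evaluate(X_test, y_test, batch_size=16)  # loss on the held-out data
proba = model.predict(X_test, batch_size=16)  # Numpy array of output predictions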
Using the Graph model
from keras.models import Graph
# graph model with one input and two outputs
graph = Graph()
graph.add_input(name='input', input_shape=(32,))
graph.add_node(Dense(16), name='dense1', input='input')
graph.add_node(Dense(4), name='dense2', input='input')
graph.add_node(Dense(4), name='dense3', input='dense1')
graph.add_output(name='output1', input='dense2')
graph.add_output(name='output2', input='dense3')
graph.compile(optimizer='rmsprop', loss={'output1':'mse', 'output2':'mse'})
history = graph.fit({'input':X_train, 'output1':y_train, 'output2':y2_train}, nb_epoch=10)
# graph model with two inputs and one output
graph = Graph()
graph.add_input(name='input1', input_shape=(32,))
graph.add_input(name='input2', input_shape=(32,))
graph.add_node(Dense(16), name='dense1', input='input1')
graph.add_node(Dense(4), name='dense2', input='input2')
graph.add_node(Dense(4), name='dense3', input='dense1')
graph.add_output(name='output', inputs=['dense2', 'dense3'], merge_mode='sum')
graph.compile(optimizer='rmsprop', loss={'output':'mse'})
history = graph.fit({'input1':X_train, 'input2':X2_train, 'output':y_train}, nb_epoch=10)
predictions = graph.predict({'input1':X_test, 'input2':X2_test}) # {'output':...}
Model API documentation
Sequential
keras.layers.containers.Sequential(layers=[])
Linear stack of layers.
Inherits from containers.Sequential.
Methods
compile(optimizer, loss, class_mode=None, sample_weight_mode=None)
Configure the learning process.
Arguments
- optimizer: str (name of optimizer) or optimizer object. See optimizers.
- loss: str (name of objective function) or objective function. See objectives.
- class_mode: deprecated argument; it is set automatically starting with Keras 0.3.3.
- sample_weight_mode: if you need to do timestep-wise sample weighting (2D weights), set this to "temporal". None defaults to sample-wise weights (1D).
- kwargs: for the Theano backend, these are passed into K.function. Ignored for the TensorFlow backend.
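Instead of a string name, you can also pass a configured optimizer object; a minimal sketch (the SGD hyperparameter values are illustrative):
from keras.optimizers import SGD
sgd = SGD(lr=0.01, momentum=0.9, nesterov=True)  # configured optimizer instance
model.compile(optimizer=sgd, loss='mse')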
evaluate(X, y, batch_size=128, show_accuracy=False, verbose=1, sample_weight=None)
Compute the loss on some input data, batch by batch.
Arguments
- X: input data, as a numpy array.
- y: labels, as a numpy array.
- batch_size: integer.
- show_accuracy: boolean.
- verbose: verbosity mode, 0 or 1.
- sample_weight: sample weights, as a numpy array.
evaluate_generator(generator, val_samples, show_accuracy=False, verbose=1)
Evaluates the model on a generator. With every yield, the generator should return the same kind of data as accepted by evaluate.
Arguments
- generator: generator yielding tuples (X, y), or tuples (X, y, sample_weight), of the kind accepted by evaluate.
- val_samples: total number of samples to generate from generator to use in validation.
- show_accuracy: whether to display accuracy in logs.
- verbose: verbosity mode, 0 (silent), 1 (per-batch logs), or 2 (per-epoch logs).
fit(X, y, batch_size=128, nb_epoch=100, verbose=1, callbacks=[], validation_split=0.0, validation_data=None, shuffle=True, show_accuracy=False, class_weight=None, sample_weight=None)
Train the model for a fixed number of epochs.
Returns a history object. Its history attribute is a record of training loss values at successive epochs, as well as validation loss values (if applicable).
Arguments
- X: data, as a numpy array.
- y: labels, as a numpy array.
- batch_size: int. Number of samples per gradient update.
- nb_epoch: int.
- verbose: 0 for no logging to stdout, 1 for progress bar logging, 2 for one log line per epoch.
- callbacks: list of keras.callbacks.Callback instances. List of callbacks to apply during training. See callbacks.
- validation_split: float (0. < x < 1). Fraction of the data to use as held-out validation data.
- validation_data: tuple (X, y) to be used as held-out validation data. Will override validation_split.
- shuffle: boolean or str (for 'batch'). Whether to shuffle the samples at each epoch. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks.
- show_accuracy: boolean. Whether to display class accuracy in the logs to stdout at each epoch.
- class_weight: dictionary mapping classes to a weight value, used for scaling the loss function (during training only).
- sample_weight: list or numpy array of weights for the training samples, used for scaling the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or, in the case of temporal data, a 2D array with shape (samples, sequence_length) to apply a different weight to every timestep of every sample. In this case you should make sure to specify sample_weight_mode="temporal" in compile(), as in the sketch below.
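As a concrete illustration of timestep-wise weighting, a minimal sketch (the shapes and arrays are hypothetical, and the model is assumed to produce a temporal output):
import numpy as np
model.compile(optimizer='sgd', loss='mse', sample_weight_mode='temporal')
# one weight per timestep of every sample: shape (nb_samples, sequence_length)
weights = np.ones((1000, 10))
model.fit(X_train, y_train, batch_size=16, nb_epoch=3, sample_weight=weights)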
fit_generator(generator, samples_per_epoch, nb_epoch, verbose=1, show_accuracy=False, callbacks=[], validation_data=None, nb_val_samples=None, class_weight=None, nb_worker=1, nb_val_worker=None)
Fit a model on data generated batch-by-batch by a Python generator. The generator is run in parallel to the model, for efficiency, and can be run by multiple workers at the same time. For instance, this allows you to do real-time data augmentation on images on CPU in parallel to training your model on GPU.
Arguments
- generator: a Python generator, yielding either (X, y) or (X, y, sample_weight). The generator is expected to loop over its data indefinitely. An epoch finishes when samples_per_epoch samples have been seen by the model. The output of the generator must be a tuple of either 2 or 3 numpy arrays. If the output tuple has two elements, they are assumed to be (input_data, target_data). If it has three elements, they are assumed to be (input_data, target_data, sample_weight). All arrays should contain the same number of samples.
- samples_per_epoch: integer, number of samples to process before starting a new epoch.
- nb_epoch: integer, total number of iterations on the data.
- verbose: verbosity mode, 0, 1, or 2.
- show_accuracy: boolean. Whether to display accuracy (only relevant for classification problems).
- callbacks: list of callbacks to be called during training.
- validation_data: tuple of 2 or 3 numpy arrays, or a generator. If 2 elements, they are assumed to be (input_data, target_data); if 3 elements, (input_data, target_data, sample_weight). If a generator, it is called at the end of every epoch until at least nb_val_samples examples have been obtained; these examples are then used for validation.
- nb_val_samples: number of samples to use from the validation generator at the end of every epoch.
- class_weight: dictionary mapping class indices to a weight for the class.
- nb_worker: integer, number of workers to use for running the generator (in parallel to model training). With multiple workers, the processing order of batches generated by the model will be non-deterministic, and you should protect any thread-unsafe operation done by the generator with a Python mutex (see the sketch after the example below).
- nb_val_worker: same as nb_worker, except for validation data. Has no effect if there is no validation data or if the validation data is not a generator. If nb_val_worker is None, it defaults to nb_worker.
Returns
A History object.
Examples
def generate_arrays_from_file(path):
while 1:
f = open(path)
for line in f:
# create numpy arrays of input data
# and labels, from each line in the file
x, y = process_line(line)
yield x, y
f.close()
model.fit_generator(generate_arrays_from_file('/my_file.txt'),
samples_per_epoch=10000, nb_epoch=10)
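When nb_worker > 1, several threads advance the generator concurrently, which plain Python generators do not support. One common pattern (not part of the Keras API) is a locked wrapper that serializes calls to next(); a minimal sketch:
import threading

class ThreadSafeIter(object):
    # wrap a generator so that concurrent next() calls are serialized
    def __init__(self, it):
        self.it = it
        self.lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):
        with self.lock:
            return next(self.it)

    next = __next__  # Python 2 compatibility

model.fit_generator(ThreadSafeIter(generate_arrays_from_file('/my_file.txt')),
                    samples_per_epoch=10000, nb_epoch=10, nb_worker=4)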
load_weights(filepath)
Load all layer weights from an HDF5 save file.
predict(X, batch_size=128, verbose=0)
Generate output predictions for the input samples batch by batch.
Arguments
- X: the input data, as a numpy array.
- batch_size: integer.
- verbose: verbosity mode, 0 or 1.
Returns
A numpy array of predictions.
predict_classes(X, batch_size=128, verbose=1)
Generate class predictions for the input samples batch by batch.
Arguments
- X: the input data, as a numpy array.
- batch_size: integer.
- verbose: verbosity mode, 0 or 1.
Returns
A numpy array of class predictions.
predict_on_batch(X)
Returns predictions for a single batch of samples.
predict_proba(X, batch_size=128, verbose=1)
Generate class probability predictions for the input samples batch by batch.
Arguments
- X: the input data, as a numpy array.
- batch_size: integer.
- verbose: verbosity mode, 0 or 1.
Returns
A numpy array of probability predictions.
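For a multi-class model, predict_classes corresponds, in effect, to taking the argmax of the probabilities returned by predict_proba; a quick illustration (X_test is assumed as in the earlier examples):
proba = model.predict_proba(X_test, batch_size=16)      # shape (nb_samples, nb_classes)
classes = model.predict_classes(X_test, batch_size=16)  # shape (nb_samples,)
# classes[i] matches proba[i].argmax() for a multi-class output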
save_weights(filepath, overwrite=False)
Dump all layer weights to an HDF5 file.
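A typical save/restore roundtrip, assuming a model with an identical architecture on reload (the filename is illustrative):
model.save_weights('weights.h5', overwrite=True)  # dump weights to HDF5
model.load_weights('weights.h5')  # restore into a model with the same architecture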
test_on_batch(X, y, accuracy=False, sample_weight=None)
Returns the loss over a single batch of samples, or a tuple (loss, accuracy) if accuracy=True.
Arguments: see the fit method.
train_on_batch(X, y, accuracy=False, class_weight=None, sample_weight=None)
Single gradient update over one batch of samples.
Returns the loss over the data, or a tuple (loss, accuracy) if accuracy=True.
Arguments: see the fit method.
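For fine-grained control you can drive training yourself, batch by batch; a minimal sketch, assuming X_train and y_train as above:
batch_size = 16
for e in range(3):  # epochs
    for i in range(0, len(X_train), batch_size):
        loss = model.train_on_batch(X_train[i:i + batch_size],
                                    y_train[i:i + batch_size])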
add(layer)
Defined by Sequential.
clear_previous(reset_weights=True)
Defined by Layer.
count_params()
Defined by Layer.
get_config(verbose=0)
Defined by Layer.
get_input(train=False)
Defined by Layer.
get_output(train=False)
Defined by Layer.
get_output_mask(train=None)
Defined by Layer.
get_weights()
Defined by Layer.
reset_states()
Defined by Sequential.
set_input()
Defined by Sequential.
set_input_shape(input_shape)
Defined by Layer.
set_previous(layer, reset_weights=True)
Defined by Layer.
set_weights(weights)
Defined by Layer.
summary()
Defined by Model.
supports_masked_input()
Defined by Layer.
to_json()
Defined by Model.
to_yaml()
Defined by Model.
Graph
keras.layers.containers.Graph()
Arbitrary connection graph. It can have any number of inputs and outputs, with each output trained with its own loss function. The quantity being optimized by a Graph model is the sum of all loss functions over the different outputs.
Inherits from containers.Graph.
Methods
compile(optimizer, loss, sample_weight_modes={}, loss_weights={})
Configure the learning process.
Arguments
- optimizer: str (name of optimizer) or optimizer object. See optimizers.
- loss: dictionary mapping the name(s) of the output(s) to a loss function (string name of objective function or objective function. See objectives).
- sample_weight_modes: optional dictionary mapping certain output names to a sample weight mode ("temporal" and None are the only supported modes). If you need to do timestep-wise loss weighting on one of your graph outputs, you will need to set the sample weight mode for this output to "temporal".
- loss_weights: dictionary you can pass to specify a weight coefficient for each loss function (in a multi-output model). If no loss weight is specified for an output, the weight for this output's loss will be considered to be 1.
- kwargs: for the Theano backend, these are passed into K.function. Ignored for the TensorFlow backend.
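For the two-output graph from the overview above, loss_weights lets you scale each output's contribution to the total loss; a minimal sketch (the coefficient values are illustrative):
graph.compile(optimizer='rmsprop',
              loss={'output1': 'mse', 'output2': 'mse'},
              loss_weights={'output1': 2.0, 'output2': 0.5})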
evaluate(data, batch_size=128, show_accuracy=False, verbose=0, sample_weight={})
Compute the loss on some input data, batch by batch.
Returns the loss over the data, or a tuple (loss, accuracy) if show_accuracy=True.
Arguments: see the fit method.
evaluate_generator(generator, nb_val_samples, show_accuracy=False, verbose=1)
Evaluates the model on a generator. With every yield, the generator should return the same kind of data as accepted by evaluate.
If show_accuracy=True, it returns a tuple (loss, accuracy); otherwise it returns the loss value.
Arguments
- generator: generator yielding dictionaries of the kind accepted by evaluate, or tuples of such dictionaries and associated dictionaries of sample weights.
- nb_val_samples: total number of samples to generate from generator to use in validation.
- show_accuracy: whether to log accuracy. Can only be used if your Graph has a single output (otherwise "accuracy" is ill-defined).
Other arguments are the same as for fit.
fit(data, batch_size=128, nb_epoch=100, verbose=1, callbacks=[], validation_split=0.0, validation_data=None, shuffle=True, show_accuracy=False, class_weight={}, sample_weight={})
Train the model for a fixed number of epochs.
Returns a history object. Its history attribute is a record of training loss values at successive epochs, as well as validation loss values (if applicable).
Arguments
- data: dictionary mapping input names and output names to appropriate numpy arrays. All arrays should contain the same number of samples.
- batch_size: int. Number of samples per gradient update.
- nb_epoch: int.
- verbose: 0 for no logging to stdout, 1 for progress bar logging, 2 for one log line per epoch.
- callbacks: list of keras.callbacks.Callback instances. List of callbacks to apply during training. See callbacks.
- validation_split: float (0. < x < 1). Fraction of the data to use as held-out validation data.
- validation_data: dictionary mapping input names and output names to appropriate numpy arrays to be used as held-out validation data. All arrays should contain the same number of samples. Will override validation_split.
- shuffle: boolean. Whether to shuffle the samples at each epoch.
- show_accuracy: whether to log accuracy. Can only be used if your Graph has a single output (otherwise "accuracy" is ill-defined).
- class_weight: dictionary mapping output names to class weight dictionaries.
- sample_weight: dictionary mapping output names to numpy arrays of sample weights.
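A minimal sketch of fit with explicit held-out data, reusing the one-input, two-output graph from the overview (the *_val arrays are hypothetical):
history = graph.fit({'input': X_train, 'output1': y_train, 'output2': y2_train},
                    nb_epoch=10,
                    validation_data={'input': X_val, 'output1': y_val, 'output2': y2_val})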
fit_generator(generator, samples_per_epoch, nb_epoch, verbose=1, show_accuracy=False, callbacks=[], validation_data=None, nb_val_samples=None, class_weight={}, nb_worker=1, nb_val_worker=None)
Fit a model on data generated batch-by-batch by a Python generator. The generator is run in parallel to the model, for efficiency, and can be run by multiple workers at the same time. For instance, this allows you to do real-time data augmentation on images on CPU in parallel to training your model on GPU.
Arguments
- generator: a generator. The output of the generator must be either a dictionary mapping input and output names to numpy arrays, or a tuple of dictionaries (input_data, sample_weight). All arrays should contain the same number of samples. The generator is expected to loop over its data indefinitely. An epoch finishes when samples_per_epoch samples have been seen by the model.
- samples_per_epoch: integer, number of samples to process before going to the next epoch.
- nb_epoch: integer, total number of iterations on the data.
- verbose: verbosity mode, 0, 1, or 2.
- show_accuracy: whether to log accuracy. Can only be used if your Graph has a single output (otherwise "accuracy" is ill-defined).
- callbacks: list of callbacks to be called during training.
- validation_data: dictionary mapping input names and output names to appropriate numpy arrays to be used as held-out validation data, or a generator yielding such dictionaries. All arrays should contain the same number of samples. If a generator, it will be called until more than nb_val_samples examples have been generated at the end of every epoch; these examples will then be used as the validation data.
- nb_val_samples: number of samples to use from the validation generator at the end of every epoch.
- class_weight: dictionary mapping class indices to a weight for the class.
- nb_worker: integer, number of workers to use for running the generator (in parallel to model training). With multiple workers, the processing order of batches generated by the model will be non-deterministic, and you should protect any thread-unsafe operation done by the generator with a Python mutex.
- nb_val_worker: same as nb_worker, except for validation data. Has no effect if there is no validation data or if the validation data is not a generator. If None, it defaults to nb_worker.
Returns
A History object.
Examples
def generate_arrays_from_file(path):
while 1:
f = open(path)
for line in f:
# create numpy arrays of input data
# and labels, from each line in the file
x1, x2, y = process_line(line)
yield {'input_1': x1, 'input_2': x2, 'output': y}
f.close()
graph.fit_generator(generate_arrays_from_file('/my_file.txt'),
samples_per_epoch=10000, nb_epoch=10)
load_weights(filepath)
Load weights from an HDF5 file.
predict(data, batch_size=128, verbose=0)
Generate output predictions for the input samples batch by batch.
Arguments: see the fit method.
predict_on_batch(data)
Generate predictions for a single batch of samples.
save_weights(filepath, overwrite=False)
Save weights from all layers to an HDF5 file.
test_on_batch(data, accuracy=False, sample_weight={})
Test the network on a single batch of samples.
If accuracy=True, it returns a tuple (loss, accuracy); otherwise it returns the loss value.
Arguments: see the fit method.
train_on_batch(data, accuracy=False, class_weight={}, sample_weight={})
Single gradient update on a batch of samples.
Returns the loss over the data, or a tuple (loss, accuracy) if accuracy=True.
Arguments: see the fit method.
add_input(name, input_shape=None, batch_input_shape=None, dtype='float')
Defined by Graph.
add_node(layer, name, input=None, inputs=[], merge_mode='concat', concat_axis=-1, dot_axes=-1, create_output=False)
Defined by Graph.
add_output(name, input=None, inputs=[], merge_mode='concat', concat_axis=-1, dot_axes=-1)
Defined by Graph.
add_shared_node(layer, name, inputs=[], merge_mode=None, concat_axis=-1, dot_axes=-1, outputs=[], create_output=False)
Defined by Graph.
clear_previous(reset_weights=True)
Defined by Layer.
count_params()
Defined by Layer.
get_config(verbose=0)
Defined by Layer.
get_input(train=False)
Defined by Layer.
get_output(train=False)
Defined by Layer.
get_output_mask(train=None)
Defined by Layer.
get_weights()
Defined by Layer.
reset_states()
Defined by Graph.
set_input_shape(input_shape)
Defined by Layer.
set_previous(layer, connection_map={}, reset_weights=True)
Defined by Layer.
set_weights(weights)
Defined by Layer.
summary()
Defined by Model.
supports_masked_input()
Defined by Layer.
to_json()
Defined by Model.
to_yaml()
Defined by Model.
Model
keras.models.Model()
Abstract base model class.