Keras FAQ: Frequently Asked Keras Questions
- How should I cite Keras?
- How can I run Keras on GPU?
- What does "sample", "batch", "epoch" mean?
- How can I save a Keras model?
- Why is the training loss much higher than the testing loss?
- How can I obtain the output of an intermediate layer?
- How can I use Keras with datasets that don't fit in memory?
- How can I interrupt training when the validation loss isn't decreasing anymore?
- How is the validation split computed?
- Is the data shuffled during training?
- How can I record the training / validation loss / accuracy at each epoch?
- How can I "freeze" Keras layers?
- How can I use stateful RNNs?
- How can I remove a layer from a Sequential model?
- How can I use pre-trained models in Keras?
- How can I use HDF5 inputs with Keras?
- Where is the Keras configuration file stored?
How should I cite Keras?
Please cite Keras in your publications if it helps your research. Here is an example BibTeX entry:
@misc{chollet2015keras,
  title={Keras},
  author={Chollet, Fran\c{c}ois and others},
  year={2015},
  publisher={GitHub},
  howpublished={\url{https://github.com/fchollet/keras}},
}
How can I run Keras on GPU?
If you are running on the TensorFlow or CNTK backends, your code will automatically run on GPU if any available GPU is detected.
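As a quick sanity check (a minimal sketch, assuming the TensorFlow backend), you can list the devices TensorFlow has detected; any available GPU will show up with device type GPU:
from tensorflow.python.client import device_lib
# prints the devices TensorFlow can see; a GPU entry means Keras will run on it
print(device_lib.list_local_devices())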
If you are running on the Theano backend, you can use one of the following methods:
Method 1: use Theano flags.
THEANO_FLAGS=device=gpu,floatX=float32 python my_keras_script.py
The name 'gpu' might have to be changed depending on your device's identifier (e.g. gpu0, gpu1, etc).
Method 2: set up your .theanorc: Instructions
Method 3: manually set theano.config.device, theano.config.floatX at the beginning of your code:
import theano
theano.config.device = 'gpu'
theano.config.floatX = 'float32'
What does "sample", "batch", "epoch" mean?
Below are some common definitions that are necessary to know and understand to correctly utilize Keras:
- Sample: one element of a dataset.
- Example: one image is a sample in a convolutional network
- Example: one audio file is a sample for a speech recognition model
- Batch: a set of N samples. The samples in a batch are processed independently, in parallel. If training, a batch results in only one update to the model.
- A batch generally approximates the distribution of the input data better than a single input. The larger the batch, the better the approximation; however, it is also true that the batch will take longer to process and will still result in only one update. For inference (evaluate/predict), it is recommended to pick a batch size that is as large as you can afford without going out of memory (since larger batches will usually result in faster evaluation/prediction).
- Epoch: an arbitrary cutoff, generally defined as "one pass over the entire dataset", used to separate training into distinct phases, which is useful for logging and periodic evaluation.
- When using validation_data or validation_split with the fit method of Keras models, evaluation will be run at the end of every epoch.
- Within Keras, there is the ability to add callbacks specifically designed to be run at the end of an epoch. Examples of these are learning rate changes and model checkpointing (saving).
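As a concrete illustration of how these terms relate (a minimal sketch with made-up data shapes): training 1000 samples with batch_size=100 performs 10 weight updates per epoch, repeated here for 10 epochs.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
# made-up dataset: 1000 samples, 20 features each
x = np.random.random((1000, 20))
y = np.random.randint(2, size=(1000, 1))
model = Sequential()
model.add(Dense(1, activation='sigmoid', input_dim=20))
model.compile(optimizer='rmsprop', loss='binary_crossentropy')
# one epoch = one pass over all 1000 samples,
# i.e. 1000 / 100 = 10 batches = 10 parameter updates per epoch
model.fit(x, y, batch_size=100, epochs=10)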
How can I save a Keras model?
It is not recommended to use pickle or cPickle to save a Keras model.
You can use model.save(filepath) to save a Keras model into a single HDF5 file which will contain:
- the architecture of the model, allowing you to re-create the model
- the weights of the model
- the training configuration (loss, optimizer)
- the state of the optimizer, allowing you to resume training exactly where you left off.
You can then use keras.models.load_model(filepath) to reinstantiate your model. load_model will also take care of compiling the model using the saved training configuration (unless the model was never compiled in the first place).
Example:
from keras.models import load_model
model.save('my_model.h5')  # creates an HDF5 file 'my_model.h5'
del model # deletes the existing model
# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
If you only need to save the architecture of a model, and not its weights or its training configuration, you can do:
# save as JSON
json_string = model.to_json()
# save as YAML
yaml_string = model.to_yaml()
The generated JSON / YAML files are human-readable and can be manually edited if needed.
You can then build a fresh model from this data:
# model reconstruction from JSON:
from keras.models import model_from_json
model = model_from_json(json_string)
# model reconstruction from YAML
from keras.models import model_from_yaml
model = model_from_yaml(yaml_string)
If you need to save the weights of a model, you can do so in HDF5 with the code below.
Note that you will first need to install HDF5 and the Python library h5py, which do not come bundled with Keras.
model.save_weights('my_model_weights.h5')
Assuming you have code for instantiating your model, you can then load the weights you saved into a model with the same architecture:
model.load_weights('my_model_weights.h5')
If you need to load weights into a different architecture (with some layers in common), for instance for fine-tuning or transfer-learning, you can load weights by layer name:
model.load_weights('my_model_weights.h5', by_name=True)
For example:
"""
Assume original model looks like this:
    model = Sequential()
    model.add(Dense(2, input_dim=3, name='dense_1'))
    model.add(Dense(3, name='dense_2'))
    ...
    model.save_weights(fname)
"""
# new model
model = Sequential()
model.add(Dense(2, input_dim=3, name='dense_1')) # will be loaded
model.add(Dense(10, name='new_dense')) # will not be loaded
# load weights from first model; will only affect the first layer, dense_1.
model.load_weights(fname, by_name=True)
Why is the training loss much higher than the testing loss?
A Keras model has two modes: training and testing. Regularization mechanisms, such as Dropout and L1/L2 weight regularization, are turned off at testing time.
Besides, the training loss is the average of the losses over each batch of training data. Because your model is changing over time, the loss over the first batches of an epoch is generally higher than over the last batches. On the other hand, the testing loss for an epoch is computed using the model as it is at the end of the epoch, resulting in a lower loss.
How can I obtain the output of an intermediate layer?
One simple way is to create a new Model that will output the layers that you are interested in:
from keras.models import Model
model = ... # create the original model
layer_name = 'my_layer'
intermediate_layer_model = Model(inputs=model.input,
                                 outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(data)
Alternatively, you can build a Keras function that will return the output of a certain layer given a certain input, for example:
from keras import backend as K
# with a Sequential model
get_3rd_layer_output = K.function([model.layers[0].input],
                                  [model.layers[3].output])
layer_output = get_3rd_layer_output([x])[0]
Similarly, you could build a Theano or TensorFlow function directly.
Note that if your model has a different behavior in training and testing phase (e.g. if it uses Dropout, BatchNormalization, etc.), you will need to pass the learning phase flag to your function:
get_3rd_layer_output = K.function([model.layers[0].input, K.learning_phase()],
                                  [model.layers[3].output])
# output in test mode = 0
layer_output = get_3rd_layer_output([x, 0])[0]
# output in train mode = 1
layer_output = get_3rd_layer_output([x, 1])[0]
How can I use Keras with datasets that don't fit in memory?
You can do batch training using model.train_on_batch(x, y) and model.test_on_batch(x, y). See the models documentation.
Alternatively, you can write a generator that yields batches of training data and use the method model.fit_generator(data_generator, steps_per_epoch, epochs).
You can see batch training in action in our CIFAR10 example.
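For instance, here is a minimal sketch of such a generator; the file names and number of batches are hypothetical, the point is simply that the generator loops forever and yields one (inputs, targets) batch at a time:
import numpy as np

def data_generator(num_batches):
    # loop indefinitely, as expected by fit_generator
    while True:
        for i in range(num_batches):
            # hypothetical layout: one .npz file per batch, with 'x' and 'y' arrays
            batch = np.load('batch_%d.npz' % i)
            yield batch['x'], batch['y']

model.fit_generator(data_generator(num_batches=100),
                    steps_per_epoch=100, epochs=10)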
How can I interrupt training when the validation loss isn't decreasing anymore?
You can use an EarlyStopping callback:
from keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='val_loss', patience=2)
model.fit(x, y, validation_split=0.2, callbacks=[early_stopping])
Find out more in the callbacks documentation.
How is the validation split computed?
If you set the validation_split argument in model.fit to e.g. 0.1, then the validation data used will be the last 10% of the data. If you set it to 0.25, it will be the last 25% of the data, etc. Note that the data isn't shuffled before extracting the validation split, so the validation is literally just the last x% of samples in the input you passed.
The same validation set is used for all epochs (within the same call to fit).
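For example, with 1000 samples and validation_split=0.25, the last 250 samples (in the order you passed them) are held out as the validation set:
# a minimal sketch: the final 25% of x and y is used for validation,
# the first 75% for training
model.fit(x, y, validation_split=0.25, epochs=10, batch_size=32)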
Is the data shuffled during training?
Yes, if the shuffle argument in model.fit is set to True (which is the default), the training data will be randomly shuffled at each epoch.
Validation data is never shuffled.
How can I record the training / validation loss / accuracy at each epoch?
The model.fit method returns a History callback, which has a history attribute containing the lists of successive losses and other metrics.
hist = model.fit(x, y, validation_split=0.2)
print(hist.history)
How can I "freeze" Keras layers?
To "freeze" a layer means to exclude it from training, i.e. its weights will never be updated. This is useful in the context of fine-tuning a model, or using fixed embeddings for a text input.
You can pass a trainable argument (boolean) to a layer constructor to set a layer to be non-trainable:
frozen_layer = Dense(32, trainable=False)
Additionally, you can set the trainable property of a layer to True or False after instantiation. For this to take effect, you will need to call compile() on your model after modifying the trainable property. Here's an example:
from keras.layers import Input, Dense
from keras.models import Model

x = Input(shape=(32,))
layer = Dense(32)
layer.trainable = False
y = layer(x)
frozen_model = Model(x, y)
# in the model below, the weights of `layer` will not be updated during training
frozen_model.compile(optimizer='rmsprop', loss='mse')
layer.trainable = True
trainable_model = Model(x, y)
# with this model the weights of the layer will be updated during training
# (which will also affect the above model since it uses the same layer instance)
trainable_model.compile(optimizer='rmsprop', loss='mse')
frozen_model.fit(data, labels) # this does NOT update the weights of `layer`
trainable_model.fit(data, labels) # this updates the weights of `layer`
How can I use stateful RNNs?
Making an RNN stateful means that the states for the samples of each batch will be reused as initial states for the samples in the next batch.
When using stateful RNNs, it is therefore assumed that:
- all batches have the same number of samples
- If x1 and x2 are successive batches of samples, then x2[i] is the follow-up sequence to x1[i], for every i.
To use statefulness in RNNs, you need to:
- explicitly specify the batch size you are using, by passing a batch_size argument to the first layer in your model. E.g. batch_size=32 for a 32-samples batch of sequences of 10 timesteps with 16 features per timestep.
- set stateful=True in your RNN layer(s).
- specify shuffle=False when calling fit().
To reset the states accumulated:
- use model.reset_states() to reset the states of all layers in the model
- use layer.reset_states() to reset the states of a specific stateful RNN layer
Example:
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

x = np.random.random((32, 21, 16))  # this is our input data, of shape (32, 21, 16) (random placeholder)
# we will feed it to our model in sequences of length 10
model = Sequential()
model.add(LSTM(32, input_shape=(10, 16), batch_size=32, stateful=True))
model.add(Dense(16, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
# we train the network to predict the 11th timestep given the first 10:
model.train_on_batch(x[:, :10, :], np.reshape(x[:, 10, :], (32, 16)))
# the state of the network has changed. We can feed the follow-up sequences:
model.train_on_batch(x[:, 10:20, :], np.reshape(x[:, 20, :], (32, 16)))
# let's reset the states of the LSTM layer:
model.reset_states()
# another way to do it in this case:
model.layers[0].reset_states()
Note that the methods predict, fit, train_on_batch, predict_classes, etc. will all update the states of the stateful layers in a model. This allows you to do not only stateful training, but also stateful prediction.
How can I remove a layer from a Sequential model?
You can remove the last added layer in a Sequential model by calling .pop():
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=784))
model.add(Dense(32, activation='relu'))
print(len(model.layers)) # "2"
model.pop()
print(len(model.layers)) # "1"
How can I use pre-trained models in Keras?
Code and pre-trained weights are available for the following image classification models:
- Xception
- VGG16
- VGG19
- ResNet50
- Inception v3
They can be imported from the module keras.applications:
from keras.applications.xception import Xception
from keras.applications.vgg16 import VGG16
from keras.applications.vgg19 import VGG19
from keras.applications.resnet50 import ResNet50
from keras.applications.inception_v3 import InceptionV3
model = VGG16(weights='imagenet', include_top=True)
For a few simple usage examples, see the documentation for the Applications module.
For a detailed example of how to use such a pre-trained model for feature extraction or for fine-tuning, see this blog post.
The VGG16 model is also the basis for several Keras example scripts.
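As an illustration of feature extraction with one of these models (a minimal sketch; the image path 'elephant.jpg' is a hypothetical placeholder):
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image
import numpy as np

# include_top=False drops the fully-connected classifier,
# keeping only the convolutional feature extractor
model = VGG16(weights='imagenet', include_top=False)

img = image.load_img('elephant.jpg', target_size=(224, 224))  # hypothetical image file
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)  # add a batch dimension
x = preprocess_input(x)

features = model.predict(x)  # e.g. shape (1, 7, 7, 512) with channels_last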
How can I use HDF5 inputs with Keras?
You can use the HDF5Matrix class from keras.utils.io_utils. See the HDF5Matrix documentation for details.
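For instance (a minimal sketch, assuming a hypothetical HDF5 file 'input/file.hdf5' that contains a dataset named 'x_data'):
from keras.utils.io_utils import HDF5Matrix

# slices are read from disk lazily, so the full array never needs to fit in memory
x_data = HDF5Matrix('input/file.hdf5', 'x_data')
model.predict(x_data)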
You can also directly use an HDF5 dataset:
import h5py
with h5py.File('input/file.hdf5', 'r') as f:
    x_data = f['x_data']
    model.predict(x_data)
Where is the Keras configuration file stored?
The default directory where all Keras data is stored is:
$HOME/.keras/
Note that Windows users should replace $HOME with %USERPROFILE%.
In case Keras cannot create the above directory (e.g. due to permission issues), /tmp/.keras/
is used as a backup.
The Keras configuration file is a JSON file stored at $HOME/.keras/keras.json. The default configuration file looks like this:
{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}
It contains the following fields:
- The image data format to be used as default by image processing layers and utilities (either channels_last or channels_first).
- The epsilon numerical fuzz factor to be used to prevent division by zero in some operations.
- The default float data type.
- The default backend. See the backend documentation.
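These values can also be queried programmatically through the backend module, for example:
from keras import backend as K

print(K.image_data_format())  # e.g. 'channels_last'
print(K.epsilon())            # e.g. 1e-07
print(K.floatx())             # e.g. 'float32'
print(K.backend())            # e.g. 'tensorflow'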
Likewise, cached dataset files, such as those downloaded with get_file(), are stored by default in $HOME/.keras/datasets/.