Serving TensorFlow Models in Docker#
In this tutorial, we will
- train a convolutional neural network (CNN) for multi-class classification on Zalando’s Fashion-MNIST dataset,
- use image augmentation to make the model generalise better, and
- serve the model with TensorFlow Serving in a Docker container.
The source code files for this tutorial are located in examples/3-tensorflow-serving-docker/.
In the first sections, we will have a look at the Fashion-MNIST dataset and set up and train a convolutional neural network. If you are just interested in the part about serving a model in Docker, please skip ahead to the final section Serving the Model in Docker.
The Fashion-MNIST Dataset#
Zalando’s Fashion-MNIST dataset is one of the standard benchmarking datasets in computer vision, designed to be a drop-in replacement for the original MNIST dataset of handwritten digits. Fashion-MNIST consists of greyscale images of different items of clothing. It includes 60 000 images for training and 10 000 for validation and testing, 28 pixels in height and 28 pixels in width, divided into 10 classes indicating the type of clothing:
0: t-shirt, top
1: trouser
2: pullover
3: dress
4: coat
5: sandal
6: shirt
7: sneaker
8: bag
9: ankle boot
TensorFlow and Keras provide the Fashion-MNIST dataset as numpy.ndarrays containing the 28x28 pixel values and the labels. They can be loaded with
import tensorflow as tf
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (val_images, val_labels) = \
    fashion_mnist.load_data()
print(type(train_images), type(train_labels))
print(train_images.shape, train_labels.shape)
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
<class 'numpy.ndarray'> <class 'numpy.ndarray'>
(60000, 28, 28) (60000,)
Each image is a 28x28 array of values between 0 and 255, with the background filled with zeros and the object itself given with values larger than 0:
import numpy as np
print(np.array2string(train_images[0], max_line_width=150))
[[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 13 73 0 0 1 4 0 0 0 0 1 1 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 3 0 36 136 127 62 54 0 0 0 1 3 4 0 0 3]
[ 0 0 0 0 0 0 0 0 0 0 0 0 6 0 102 204 176 134 144 123 23 0 0 0 0 12 10 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 155 236 207 178 107 156 161 109 64 23 77 130 72 15]
[ 0 0 0 0 0 0 0 0 0 0 0 1 0 69 207 223 218 216 216 163 127 121 122 146 141 88 172 66]
[ 0 0 0 0 0 0 0 0 0 1 1 1 0 200 232 232 233 229 223 223 215 213 164 127 123 196 229 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 183 225 216 223 228 235 227 224 222 224 221 223 245 173 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 193 228 218 213 198 180 212 210 211 213 223 220 243 202 0]
[ 0 0 0 0 0 0 0 0 0 1 3 0 12 219 220 212 218 192 169 227 208 218 224 212 226 197 209 52]
[ 0 0 0 0 0 0 0 0 0 0 6 0 99 244 222 220 218 203 198 221 215 213 222 220 245 119 167 56]
[ 0 0 0 0 0 0 0 0 0 4 0 0 55 236 228 230 228 240 232 213 218 223 234 217 217 209 92 0]
[ 0 0 1 4 6 7 2 0 0 0 0 0 237 226 217 223 222 219 222 221 216 223 229 215 218 255 77 0]
[ 0 3 0 0 0 0 0 0 0 62 145 204 228 207 213 221 218 208 211 218 224 223 219 215 224 244 159 0]
[ 0 0 0 0 18 44 82 107 189 228 220 222 217 226 200 205 211 230 224 234 176 188 250 248 233 238 215 0]
[ 0 57 187 208 224 221 224 208 204 214 208 209 200 159 245 193 206 223 255 255 221 234 221 211 220 232 246 0]
[ 3 202 228 224 221 211 211 214 205 205 205 220 240 80 150 255 229 221 188 154 191 210 204 209 222 228 225 0]
[ 98 233 198 210 222 229 229 234 249 220 194 215 217 241 65 73 106 117 168 219 221 215 217 223 223 224 229 29]
[ 75 204 212 204 193 205 211 225 216 185 197 206 198 213 240 195 227 245 239 223 218 212 209 222 220 221 230 67]
[ 48 203 183 194 213 197 185 190 194 192 202 214 219 221 220 236 225 216 199 206 186 181 177 172 181 205 206 115]
[ 0 122 219 193 179 171 183 196 204 210 213 207 211 210 200 196 194 191 195 191 198 192 176 156 167 177 210 92]
[ 0 0 74 189 212 191 175 172 175 181 185 188 189 188 193 198 204 209 210 210 211 188 188 194 192 216 170 0]
[ 2 0 0 0 66 200 222 237 239 242 246 243 244 221 220 193 191 179 182 182 181 176 166 168 99 58 0 0]
[ 0 0 0 0 0 0 0 40 61 44 72 41 35 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]
While it is possible to use just these arrays to train a neural network classifier, let’s go a step further and use tf.keras.preprocessing.image.ImageDataGenerator to create a flow of images for training and validation. With tf.keras.preprocessing.image.ImageDataGenerator it is possible to flow images from numpy.ndarrays, but also to load and flow them from directories or from pandas.DataFrames containing the image filepaths (a short sketch of the latter is shown after the next code block). At the same time, tf.keras.preprocessing.image.ImageDataGenerator can prepare the labels, batch the images and the labels, and apply image augmentation. In image augmentation, transformations are applied to the images randomly within configurable ranges, including, but not limited to,
flipping them horizontally or vertically
rotating
zooming
shearing
The randomness and the effective increase in the number of training images reduce the likelihood of overtraining, and the widened range of positions, orientations and appearances of the objects in the images can help make the model more applicable to a larger variety of images during inference. We can set up the generator and obtain the flow iterator with
import numpy as np
import tensorflow as tf

def get_generator(images, labels, batch_size=32, **kwargs):
    datagen = tf.keras.preprocessing.image.ImageDataGenerator(
        horizontal_flip=True,
        rotation_range=20,
        rescale=1/255)
    gen = datagen.flow(
        x = np.expand_dims(images, axis=3),
        y = tf.keras.utils.to_categorical(labels, num_classes=10),
        batch_size = batch_size,
        **kwargs
    )
    return gen
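As mentioned above, the same generator class can also flow images from a pandas.DataFrame of file paths instead of from arrays. The following is only a minimal sketch with a hypothetical dataframe and image folder, not part of this tutorial’s pipeline:
import pandas as pd
import tensorflow as tf

# hypothetical dataframe with image filenames and string labels
df = pd.DataFrame({
    'filename': ['tshirt-1.jpg', 'sandal-1.jpg'],
    'class': ['tshirt', 'sandal']
})

datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255)
gen = datagen.flow_from_dataframe(
    dataframe = df,
    directory = 'images/test',    # hypothetical folder containing the files
    x_col = 'filename',
    y_col = 'class',
    target_size = (28, 28),
    color_mode = 'grayscale',
    class_mode = 'categorical',
    batch_size = 2
)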
Coming back to get_generator above: note that we also rescaled the images by a factor of 1/255 to restrict the values to the range between 0 and 1. This is a customary input normalisation, since neural networks generally perform better with inputs of the order of one. We also turned the simple integer class labels into one-hot encoded label vectors using tf.keras.utils.to_categorical().
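As a quick illustration of what tf.keras.utils.to_categorical() does, an integer label such as 9 (ankle boot) becomes a vector of length 10 with a one at index 9 and zeros elsewhere:
import tensorflow as tf

# one-hot encode the integer labels 9 (ankle boot) and 0 (t-shirt/top)
print(tf.keras.utils.to_categorical([9, 0], num_classes=10))
# [[0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
#  [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]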
We can then instantiate a generator, e.g. for the training set, and retrieve the first batch of augmented images
import helpers

n = 7
train_gen = helpers.get_generator(train_images, train_labels, batch_size=n,
                                  shuffle=False)
# next() returns a tuple of (augmented images, one-hot labels)
train_images_augmented = next(train_gen)
Here, we set shuffle=False because we want to preserve the order of the images for comparing the augmented to the original images.
So let’s proceed to plot a few images in two rows - the original images in the upper row and the corresponding augmented images in the lower row. The result is shown in Figure 1.
import matplotlib as mpl
import matplotlib.pyplot as plt
import spellbook as sb

fig = plt.figure(figsize=(7,2))
grid = mpl.gridspec.GridSpec(nrows=2, ncols=n, wspace=0.1, hspace=0.1)
for i in range(n):
    # upper row: original image
    ax = plt.Subplot(fig, grid[0,i])
    fig.add_subplot(ax)
    ax.set_axis_off()
    plt.imshow(train_images[i], cmap='gray')
    # lower row: corresponding augmented image
    ax = plt.Subplot(fig, grid[1,i])
    fig.add_subplot(ax)
    ax.set_axis_off()
    plt.imshow(train_images_augmented[0][i], cmap='gray')
sb.plot.save(fig, 'images.png')

Figure 1: The first few training images before and after augmentation#
As we can see, the clothes are flipped and rotated as specified.
Training a Neural Network Classifier#
We are now going to set up a neural network for classifying the Fashion-MNIST images according to their respective labels.
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters=30, kernel_size=(3,3), activation='relu',
                           input_shape=(28,28,1)),
    tf.keras.layers.MaxPool2D(pool_size=(2,2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(units=50, activation='relu'),
    tf.keras.layers.Dense(units=10, activation='softmax')
])
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 30) 300
max_pooling2d (MaxPooling2D (None, 13, 13, 30) 0
)
flatten (Flatten) (None, 5070) 0
dense (Dense) (None, 50) 253550
dense_1 (Dense) (None, 10) 510
=================================================================
Total params: 254,360
Trainable params: 254,360
Non-trainable params: 0
_________________________________________________________________
The network begins with a 2D convolutional layer with 30 filters of 3x3 pixels each, followed by a max-pooling layer. Since each filter has 3x3=9 weights and a bias, the 30 filters correspond to a total of 300 trainable parameters. Sliding a 3x3 pixel filter across 28x28 pixel images yields 26x26 pixel images, and applying 2x2 pixel max-pooling cuts the image size down to 13x13 pixels. The flattening layer turns the 2D pixel arrays into a vector and feeds them to the final part of the network, consisting of a dense layer with 50 nodes and the output layer. Since we turned the labels into one-hot encoded label vectors, we use a dense output layer with 10 nodes, i.e. one node per class, and softmax activation. This ensures that the 10 outputs of the last layer sum up to unity, which at least numerically corresponds to the properties expected of discrete probabilities. However, as long as a classifier is not calibrated, one cannot be sure that its outputs really give the probabilities for the different classes.
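As a quick cross-check of the parameter counts quoted in the summary, the arithmetic can be reproduced by hand:
# Conv2D: 30 filters, each with 3x3 weights and one bias
conv_params = 30 * (3 * 3 + 1)                       # = 300
# after the 3x3 convolution: 26x26 images, after 2x2 max-pooling: 13x13
flattened = 13 * 13 * 30                             # = 5070
# dense layers: one weight per input per node, plus one bias per node
dense_params = flattened * 50 + 50                   # = 253550
output_params = 50 * 10 + 10                         # = 510
print(conv_params + dense_params + output_params)    # 254360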
This network is deliberately simple because the main focus of this tutorial is on serving the model in Docker rather than on achieving the best possible performance. Improved performance could be achieved by adding more filters and more convolutional layers, followed by a larger dense network, while keeping overtraining in check, e.g. by using dropout layers.
Instead of pursuing this approach and the correspondingly larger computational cost, we will keep it fast and simple and proceed to configure the model with an appropriate loss and metrics and finally train it for just 3 epochs:
model.compile(
    loss = 'categorical_crossentropy',
    optimizer = tf.keras.optimizers.Adam(),
    metrics = [
        tf.keras.metrics.CategoricalCrossentropy(),
        tf.keras.metrics.CategoricalAccuracy(name='accuracy')
    ]
)

# generators for training and validation; the batch size of 64 is an assumption
# inferred from the 938 steps per epoch in the training log below
batch_size = 64
train_gen = helpers.get_generator(train_images, train_labels, batch_size=batch_size)
val_gen = helpers.get_generator(val_images, val_labels, batch_size=batch_size)

epochs = 3
history = model.fit(train_gen, epochs=epochs, validation_data=val_gen,
    callbacks = [
        tf.keras.callbacks.CSVLogger(filename='fmnist-model-history.csv'),
        sb.train.ModelSavingCallback(foldername='fmnist-model')
    ]
)
Out:
Epoch 1/3
938/938 [==============================] - 56s 59ms/step - loss: 0.5996 - categorical_crossentropy: 0.5996 - accuracy: 0.7853 - val_loss: 0.4732 - val_categorical_crossentropy: 0.4732 - val_accuracy: 0.8255
Epoch 2/3
938/938 [==============================] - 55s 58ms/step - loss: 0.4286 - categorical_crossentropy: 0.4286 - accuracy: 0.8444 - val_loss: 0.4286 - val_categorical_crossentropy: 0.4286 - val_accuracy: 0.8442
Epoch 3/3
938/938 [==============================] - 55s 58ms/step - loss: 0.3829 - categorical_crossentropy: 0.3829 - accuracy: 0.8601 - val_loss: 0.4052 - val_categorical_crossentropy: 0.4052 - val_accuracy: 0.8517
2021-06-20 13:17:43.055082: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
Training finished after 3 epochs: Saved model to folder 'fmnist-model'
Passing 'categorical_crossentropy' sets an instance of tf.keras.losses.CategoricalCrossentropy configured with the default values as the loss function. This is the appropriate loss function when using one-hot encoded labels. Likewise, we are using the tf.keras.metrics.CategoricalCrossentropy metric.
Finally, the model is trained with model.fit, passing instances of tf.keras.callbacks.CSVLogger for saving the metrics to a *.csv file during training and spellbook.train.ModelSavingCallback for saving the model during and at the end of the training. As we can see from the output, a classification accuracy of about 85% is achieved during validation, which doesn’t seem too great, but is enough for our purposes in this tutorial.
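The per-epoch metrics written by tf.keras.callbacks.CSVLogger can be inspected after training, for example with pandas - a small sketch, assuming the fmnist-model-history.csv file produced by the call above:
import pandas as pd

# CSVLogger writes one row per epoch with the training and validation metrics
history = pd.read_csv('fmnist-model-history.csv')
print(history[['epoch', 'accuracy', 'val_accuracy']])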
Evaluating the Model#
But before we go on, let’s have a slightly closer look at the model’s performance. We begin by loading the saved model, preparing an image generator for the validation dataset and using the model to calculate the predictions:
model = tf.keras.models.load_model('fmnist-model')
val_gen = helpers.get_generator(val_images, val_labels, batch_size, shuffle=False)
val_predictions = model.predict(val_gen)
val_predicted_labels = np.argmax(val_predictions, axis=1)
Again we use shuffle=False so that the order of the predictions corresponds to the order of the original labels in val_labels.
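Since the predictions are aligned with the labels, the overall validation accuracy can be cross-checked directly from these arrays - a small sketch using numpy:
# fraction of validation images whose predicted class matches the true label
accuracy = np.mean(val_predicted_labels == val_labels)
print(f'validation accuracy: {accuracy:.3f}')
This should roughly reproduce the validation accuracy of about 85% reported during training.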
We can then go on to determine and plot the confusion matrix with spellbook.plot.plot_confusion_matrix() - one version with the absolute datapoint counts, shown in Figure 2, and one version normalised across each true class, shown in Figure 3:
class_ids = list(helpers.label_dict.keys())
class_names = list(helpers.label_dict.values())

val_confusion = tf.math.confusion_matrix(
    val_labels, val_predicted_labels, num_classes=len(class_ids))

sb.plot.save(
    sb.plot.plot_confusion_matrix(
        confusion_matrix = val_confusion,
        class_names = class_names,
        class_ids = class_ids,
        fontsize = 9.0,
        fontsize_annotations = 'x-small'
    ),
    filename = 'fmnist-model-confusion.png'
)
sb.plot.save(
    sb.plot.plot_confusion_matrix(
        confusion_matrix = val_confusion,
        class_names = class_names,
        class_ids = class_ids,
        normalisation = 'norm-true',
        crop = False,
        figsize = (6.4, 4.8),
        fontsize = 9.0,
        fontsize_annotations = 'x-small'
    ),
    filename = 'fmnist-model-confusion-norm-true.png'
)
Figure 2: Absolute datapoint counts#
Figure 3: Relative frequencies normalised in each true category#
We can see that by and large at least 75% of the items of each category are correctly classified, except for shirts which are most often confused with t-shirts/tops (15.8%) and to a lesser extent coats (8.7%) and pullovers (7.4%).
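The relative frequencies quoted above can also be obtained directly from the confusion matrix by normalising each row, i.e. each true class, to its total number of validation images - a small sketch based on the val_confusion tensor computed above:
confusion = val_confusion.numpy()
# normalise each row (true class) to relative frequencies
confusion_norm = confusion / confusion.sum(axis=1, keepdims=True)
# e.g. the rate at which true shirts (class 6) are predicted as t-shirts/tops (class 0)
print(np.round(confusion_norm[6, 0], 3))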
Making Predictions About Other Pictures#
While evaluating and benchmarking a model’s performance is still part of the development process, the eventual interest is of course in deploying and using the model in production to obtain predictions for images that are not part of the training and validation sets.
To simulate this, I downloaded a few random images of different pieces of clothing and loaded them into numpy.ndarrays using tf.keras.preprocessing.image.load_img() and tf.keras.preprocessing.image.img_to_array().
import numpy as np
import tensorflow as tf
import helpers
model = tf.keras.models.load_model('fmnist-model')
test_images = helpers.load_images(helpers.tshirts)
# test_images = helpers.load_images(helpers.sandals)
# test_images = helpers.load_images(helpers.sneakers)
When loading the images, it is important to size the arrays in accordance with the model’s architecture and the images used during training and validation. Therefore, in this example, we choose a target size of 28x28 pixels and use the 'grayscale' colour mode:
from typing import List, Union

import numpy as np
import tensorflow as tf

tshirts = [
    'images/test/tshirt-1.jpg', 'images/test/tshirt-2.png',
    'images/test/tshirt-3.jpg', 'images/test/tshirt-4.png'
]
sandals = [f'images/test/sandal-{i}.jpg' for i in range(1, 5)]
sneakers = [f'images/test/sneaker-{i}.jpg' for i in range(1, 5)]

def load_images(images: Union[str, List[str]]):
    if isinstance(images, str): images = [images]
    array = np.empty(shape=(len(images), 28, 28, 1))
    for i, image in enumerate(images):
        img = tf.keras.preprocessing.image.load_img(
            path = image,
            color_mode = 'grayscale',
            target_size = (28, 28))
        array[i] = tf.keras.preprocessing.image.img_to_array(img=img)
    # invert the greyscale values so that, as in Fashion-MNIST, the background is
    # dark (0) and the object bright, then rescale to the range [0, 1]
    return (255 - array) / 255
Once the images are loaded, the model.predict function can be used to apply the model to the data:
predictions = model.predict(test_images)
for prediction in predictions:
    print('prediction:', prediction,
          '-> predicted class:', np.argmax(prediction))
Out:
prediction: [5.1984423e-01 4.8716479e-06 2.3578823e-04 4.8502727e-04 7.7355535e-06
1.2823233e-06 4.7876367e-01 1.8715612e-06 6.5485924e-04 6.3797620e-07] -> predicted class: 0
prediction: [2.1379247e-02 2.8888881e-04 5.3068418e-03 4.2415579e-04 2.2449503e-05
5.5147725e-04 5.9862167e-02 2.1699164e-07 9.1194308e-01 2.2140094e-04] -> predicted class: 8
prediction: [8.70128453e-01 1.32829140e-04 1.44031774e-02 2.06211829e-04
8.69949628e-03 4.22267794e-06 1.03566416e-01 5.67484494e-05
2.79841595e-03 4.09945960e-06] -> predicted class: 0
prediction: [9.6439028e-01 7.7955298e-07 1.6881671e-02 6.1920066e-03 2.3770966e-05
5.2960712e-07 1.2419004e-02 1.9606419e-09 9.1841714e-05 1.0729976e-09] -> predicted class: 0
As we can see, three of the four t-shirt images are correctly classified, while the second one is mistaken for a bag - which is perhaps as much as can be expected with such a simple and only extremely briefly trained model.
Serving the Model in Docker#
Finally, let’s see how we can serve the model inside a Docker container using TensorFlow Serving. To do that, first install Docker and get the tensorflow/serving image from Docker Hub:
$ docker pull tensorflow/serving
When started, a container created from this image will run tensorflow_model_server and expose the REST API on port 8501. The model to serve can be specified via the MODEL_NAME and MODEL_BASE_PATH environment variables. By default, MODEL_BASE_PATH=/models and MODEL_NAME=model.
Creating a Custom Image for the Model#
To serve our own model, we first need to create a custom Docker image based on the tensorflow/serving image.
First, create a container from the tensorflow/serving image and start it:
$ sudo docker run -d --name tf-serving-base tensorflow/serving
Out:
4f9109df18bcace745d108dd1fba0659b65db62e5f4104f99ca4ed5536d194c6
This creates a container named tf-serving-base and returns the container ID. We can verify that the container is running with
$ sudo docker ps
Out:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f9109df18bc tensorflow/serving "/usr/bin/tf_serving…" 2 minutes ago Up 2 minutes 8500-8501/tcp tf-serving-base
Next, we have to create a folder for the model inside the container. This folder will later hold one subfolder for each version of the model so that the model can be seamlessly updated.
$ sudo docker exec tf-serving-base mkdir /models/fmnist-model
Now we can copy the saved model over to the container, creating the first version subfolder at the same time:
$ sudo docker cp fmnist-model tf-serving-base:/models/fmnist-model/1
Instead of copying the model, we can also mount it into the container from the host filesystem when we start the container. While this will work when developing locally, it is of course not a good idea when the container is to be sent elsewhere.
Note
If no version folder is created and the model is copied or mounted into the model folder (/models/fmnist-model/) directly, the model cannot be served and the following message will be shown:
Out:
2021-06-17 19:17:06.689731: W tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:268] No versions of servable fmnist-model found under base path /models/fmnist-model. Did you forget to name your leaf directory as a number (eg. '/1/')?
Once this is done, we can create a new image tf-serving-fmnist from this container, specifying the name of the model folder in the environment variable MODEL_NAME
$ sudo docker commit --change "ENV MODEL_NAME fmnist-model" tf-serving-base tf-serving-fmnist
Out:
sha256:f34fefc2ee4ccc5d0b8a9ab756a48062fe23bcd4bc493b1fe165f66d7bfd3318
and double-check the list of images
$ sudo docker images
Out:
REPOSITORY TAG IMAGE ID CREATED SIZE
tf-serving-fmnist latest f34fefc2ee4c 57 seconds ago 411MB
tensorflow/serving latest e874bf5e4700 5 weeks ago 406MB
Finally, we can stop the tf-serving-base container by doing either
$ sudo docker stop tf-serving-base
or
$ sudo docker kill tf-serving-base
which can readily be verified with sudo docker ps.
Serving and Querying Our Own Model#
We can create a container from the tf-serving-fmnist image and run it with
$ sudo docker run -p 8501:8501 -e MODEL_NAME=fmnist-model -t tf-serving-fmnist
In case we didn’t copy the model into the container, we have to mount it from the host filesystem with
$ sudo docker run -p 8501:8501 \
--mount \
type=bind,\
source=/home/daniel/Computing/Programming/spellbook/examples/3-model-internal-image-preprocessing-pipeline/fmnist-model/,\
target=/models/fmnist-model/1 \
-e MODEL_NAME=fmnist-model \
-t \
tf-serving-fmnist
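Before sending prediction requests, it can be useful to check that the model has been loaded successfully. TensorFlow Serving’s REST API provides a model status endpoint for this; the following is a small sketch querying it with the requests library, assuming the container is running and port 8501 is published as above:
import requests

# query TensorFlow Serving's model status endpoint
response = requests.get('http://localhost:8501/v1/models/fmnist-model')
print(response.text)
The reply lists the available model versions and their state, e.g. AVAILABLE once the model has been loaded.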
We can then query the model and obtain its predictions for some images by means of a POST request, either from the command line using curl or from a Python script using the requests library.
The example script 5-request.py does both - it prints out a curl command that can be copied, pasted and run in a terminal, and it also submits a request directly from the Python code and prints out the resulting predictions:
import json

import numpy as np
import requests

import helpers

test_images = helpers.load_images(helpers.tshirts)
# test_images = helpers.load_images(helpers.sandals)
# test_images = helpers.load_images(helpers.sneakers)

data = json.dumps({
    'signature_name': 'serving_default',
    'instances': test_images.tolist()   # *either* 'instances'
    # 'inputs': test_images.tolist()    # *or* 'inputs'
})

print('------------------------------------------------------------')
print("for querying the served model from the terminal with 'curl',"
      " use the following command\n")
print("curl -d '{}' -X POST {}".format(
    data, 'http://localhost:8501/v1/models/fmnist-model:predict'))
print('\n------------------------------------------------------------')

headers = {'content-type': 'application/json'}
json_response = requests.post(
    'http://localhost:8501/v1/models/fmnist-model:predict',
    headers=headers,
    data=data
)
print(json_response.text)

predictions = json.loads(json_response.text)['predictions']   # for 'instances'
# predictions = json.loads(json_response.text)['outputs']     # for 'inputs'
for i, prediction in enumerate(predictions):
    print('prediction {}: {} -> predicted class: {}'.format(
        i, prediction, np.argmax(prediction)))
For using a specific version of the model, e.g. version 2, the URL http://localhost:8501/v1/models/fmnist-model/versions/2:predict can be used.
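For instance, the request from the script above can be pointed at a specific version like this (just a sketch, assuming a second model version has been copied to /models/fmnist-model/2 in the container):
json_response = requests.post(
    'http://localhost:8501/v1/models/fmnist-model/versions/2:predict',
    headers=headers,   # reusing the headers and data prepared in the script above
    data=data
)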
Once it has been created, the container can be stopped and started again with
$ sudo docker stop <CONTAINER-ID>
and
$ sudo docker start <CONTAINER-ID>
If a name was specified for the container with --name <NAME> when creating it with the docker run command, then this name can be used to refer to the container instead of the ID.