
Getting Started with Deep Learning in Keras and Tensorflow

victor


I bet you’ve heard a lot about deep learning recently, but if you haven’t tried it yet, you should give it a shot after reading this article. You are expected to have some basic knowledge of Python to follow the code below, and I will do my best to explain all the tools used in the article.

We will use the Keras framework, a high-level neural-network API that can run on top of TensorFlow or several other backends. We will also use NumPy, a Python library for manipulating multidimensional arrays that is handy for scientific computation.

Setting up the environment

There are several ways you can run the examples described below: you can install TensorFlow and Keras locally (for example, with pip install tensorflow keras) and run the code in a Python shell or script, or you can use a hosted notebook environment such as Google Colab, which comes with both preinstalled.
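Whichever option you choose, a quick sanity check confirms that the libraries are importable (the versions printed will depend on your installation):

import keras
import tensorflow

print(keras.__version__)       # your installed Keras version
print(tensorflow.__version__)  # your installed TensorFlow version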

The iconic exercise for getting started with deep learning, and with neural networks in general, is to run a classification task on the MNIST dataset and recognize hand-written digits. There are several reasons I would like to stick with it:

  • it was one of the first tasks to show that Convolutional Neural Networks work;
  • it is included in the Keras framework and can be easily downloaded and used;
  • it is quick and easy to get started with and doesn’t require a lot of network traffic or space on the computer.

Preparing the data

Let’s begin. First, we load the dataset:

from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

train_images and train_labels are the training data; test_images and test_labels are the test data. Each image is represented as a NumPy array of pixels, and each pixel has a value from 0 to 255. Labels are the corresponding numbers from 0 to 9. To check that, run the following code (>>> represents the Python prompt). Let’s examine the training data first:

>>> len(train_labels)
60000

>>> train_images.shape
(60000, 28, 28)

>>> train_labels
array([5, 0, 4, ..., 5, 6, 8], dtype=uint8)

Next, examine test data:

>>> len(test_labels)
10000

>>> test_images.shape
(10000, 28, 28)

>>> test_labels
array([7, 2, 1, ..., 4, 5, 6], dtype=uint8)

Specifying a neural network

The data is loaded and verified; it’s time to specify a network. Every network consists of layers (it’s just easier to think about a bunch of neurons when you represent them as layers). Each layer of the network transforms the data in a certain way and extracts a more useful representation out of it. Think about deep learning as a stack of such layers. For a linear stack of layers a Sequential model is the best fit, but if you want to build a custom architecture, you can use the Functional API. In our example, we specify a Sequential model and add two fully connected (Dense) layers to our network:

from keras import models
from keras import layers

network = models.Sequential()
# hidden layer: 512 neurons with ReLU activation, taking flattened 784-pixel images
network.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,)))
# output layer: one neuron per digit class; softmax turns outputs into probabilities
network.add(layers.Dense(10, activation='softmax'))

The first layer consists of 512 neurons, and the second layer is a softmax layer that returns a probability of the analyzed image belonging to each class (digits from 0 to 9), which is why it consists of only ten neurons.
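You can double-check the architecture and the parameter counts with the built-in summary method:

# the first layer has 28 * 28 * 512 + 512 = 401,920 trainable parameters,
# the second has 512 * 10 + 10 = 5,130
network.summary()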

To compile the network, we need to specify an optimizer, a loss function, and metrics to monitor during training and testing. An optimizer updates the network parameters based on the training data and the loss function. The loss function measures how well the network performs. Metrics measure the performance of the network too, but they are not used in the training process itself. Let’s choose the Adam optimizer (one of the variants of stochastic gradient descent) for updating the network parameters. The loss function in our case is categorical cross-entropy, because each image belongs to exactly one of 10 categories. And we will look at the accuracy metric to see how well the network performs. We are ready to compile the network:

network.compile(optimizer='adam',
                loss='categorical_crossentropy',
                metrics=['accuracy'])

Adjusting the data

Before training on the data, we need to adjust the training and test data to fit our network. That is, instead of each image being a 28-by-28 matrix of integer values between 0 and 255, we turn it into a vector of size 28 * 28 with floating-point values between 0 and 1:

train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255

test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
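A quick check confirms the new shape and value range:

>>> train_images.shape
(60000, 784)
>>> train_images.max()
1.0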

There is one more step to adjust the data: convert the numerical labels into a categorical format, where a label of 5 is represented as a 10-dimensional vector of all zeros except for a 1 at the index corresponding to 5 (also known as one-hot encoding). We will use the Keras utility to_categorical:

from keras.utils import to_categorical

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
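For example, the first training label, which we saw earlier was 5, now looks like this (the exact printed dtype may vary between Keras versions):

>>> train_labels[0]
array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])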

Running the experiments

Everything is ready to start training by calling the fit method and specifying the training images and labels, the number of epochs (how many times to repeat learning on the whole training set), and batch_size (the number of images in one training batch):

>>> network.fit(train_images, train_labels, epochs=10, batch_size=64)
Epoch 1/10
60000/60000 [==============================] - 12s - loss: 0.2421 - acc: 0.9125
Epoch 2/10
50432/60000 [========================>.....] - ETA: 2s - loss: 0.0981 - acc: 0.9701
...

After 10 epochs we get a training accuracy of 99.1%, and it is time to check the accuracy on the test set (the network hasn’t seen it yet):

>>> test_loss, test_accuracy = network.evaluate(test_images, test_labels)
>>> print('test accuracy:', test_accuracy)
test accuracy: 0.9795

It is almost 98%, and it is a great result. In fact, the test accuracy is the most important number to check, because it shows how well the network generalizes to previously unseen data. A big gap between training and test accuracy would mean that the network has most likely overfitted, but that doesn’t seem to be the case here.
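To use the trained network on new data, call the predict method; below is a minimal sketch that classifies the first test image (np.argmax picks the class with the highest predicted probability):

import numpy as np

predictions = network.predict(test_images[:1])  # probabilities for the first test image
print(predictions.shape)                        # (1, 10): one probability per digit class
print(np.argmax(predictions[0]))                # the predicted digit; 7 if classified correctly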

Final notes

That’s it! We’ve just built and trained a neural network in less than 30 lines of code. Isn’t it amazing? I hope you enjoyed the article and will give it a try. To understand each line of the code profoundly and to master deep learning, I recommend starting with the book Deep Learning with Python and the Keras tutorials.

