Easy Hyperparameter Tuning in Neural Networks using Keras Tuner

Deepanshi Last Updated : 17 Aug, 2021
6 min read

This article was published as a part of the Data Science Blogathon

In the last few articles, we discussed Neural Networks, how they work, and their practical implementation in Python on the MNIST dataset. Continuing in the same direction, in this article we will look at how to tune the hyperparameters of a Neural Network to find the values that give the highest training and testing accuracy, since we don’t want our model to overfit the data.

I would highly suggest going through the Implementation of ANN on MNIST data blog to understand this one better.

 

What are Hyperparameters?

Hyperparameters are values we provide to the model to improve its performance. They are not learned automatically during the training phase but have to be set explicitly.

Hyperparameters play a major role in the performance of the model and should be chosen so that the model’s accuracy improves. In a Neural Network, some hyperparameters are the number of hidden layers, the number of neurons in each hidden layer, the activation functions, the learning rate, the dropout ratio, the number of epochs, and many more. In this article, we are going to use the simplest possible way of tuning hyperparameters: Keras Tuner.

We will use the Fashion MNIST clothing classification problem, one of the most common datasets for learning about Neural Networks. But before moving on to the implementation, there are some prerequisites for using Keras Tuner. The following are required:

  • Python 3.6+
  • Tensorflow 2.0+ (I had Tensorflow 2.1.0 on my system, but it still didn’t work, so I had to upgrade it to 2.6.0)

Some Frequently asked questions(FAQs):

1. How to check the Tensorflow version?

#use this command
import tensorflow
print(tensorflow.__version__)

2. How to upgrade Tensorflow?

#Use the following command 
pip install --upgrade tensorflow --user

3. What to do if it still does not work?
–> Use Google Colab

Let’s move on to the problem statement now. The Fashion MNIST dataset contains images of clothing such as T-shirts, trousers, pullovers, dresses, coats, and sandals, with a total of 10 labels.

#importing necessary libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.datasets import fashion_mnist
#loading the data
(X_train,y_train),(X_test,y_test)=fashion_mnist.load_data()
#visualizing the dataset
for i in range(25):
    # define subplot
    plt.subplot(5, 5, i+1)
    # plot raw pixel data
    plt.imshow(X_train[i], cmap=plt.get_cmap('gray'))
# show the figure
plt.show()
#normalizing the images
X_train=X_train/255
X_test=X_test/255
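
Before building the model, it can help to sanity-check the data. Here is a small hedged sketch; note that the label names below follow the standard Fashion MNIST convention and are not defined in the dataset object itself:

#checking shapes and mapping a numeric label to a clothing name
print(X_train.shape, X_test.shape)   #(60000, 28, 28) (10000, 28, 28)
class_names=['T-shirt/top','Trouser','Pullover','Dress','Coat',
             'Sandal','Shirt','Sneaker','Bag','Ankle boot']
print(class_names[y_train[0]])       #name of the first training image's label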

In the last MNIST digit classification example, we flattened the dataset before building the model, but here we will do it inside the model-building code itself. The model-building code is explained in detail in the last article, so kindly refer to that.

Model Building

model=Sequential([
    #flattening the images
    Flatten(input_shape=(28,28)),
    #adding first hidden layer
    Dense(256,activation='relu'),
    #adding second hidden layer
    Dense(128,activation='relu'),
    #adding third hidden layer
    Dense(64,activation='relu'),
    #adding output layer
    Dense(10,activation='softmax')
])
#compiling the model
model.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
#fitting the model
model.fit(X_train,y_train,epochs=10)
#evaluating the model
model.evaluate(X_test,y_test)

We have built the basic ANN model and obtained the training and testing accuracy printed by the fit( ) and evaluate( ) calls above. There is a noticeable gap between the accuracies and losses of the training and test sets: the loss on the training data is lower but increases on the test data, which is a sign of overfitting and can lead to wrong predictions on unseen data.
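
To see that gap as numbers rather than eyeballing the training logs, a minimal sketch reusing the fitted model from above:

#comparing train and test metrics to check for overfitting
train_loss,train_acc=model.evaluate(X_train,y_train,verbose=0)
test_loss,test_acc=model.evaluate(X_test,y_test,verbose=0)
print(f"train accuracy: {train_acc:.4f}, test accuracy: {test_acc:.4f}")
print(f"train loss: {train_loss:.4f}, test loss: {test_loss:.4f}")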

Now let’s tune the Hyperparameters to get the values that can help in improving the model. We will be optimizing the following Hyperparameters in the model:

  • Number of hidden layers
  • Number of neurons in each hidden layer
  • Learning rate
  • Activation Function

But first, we need to install the Keras Tuner.

#use this command to install Keras Tuner
pip install keras-tuner
#importing the required libraries
from tensorflow import keras
from keras_tuner import RandomSearch

Next, we define the function to build an ANN model, where the hyperparameters are the number of neurons in the hidden layer and the learning rate.

def build_model(hp):          #hp means hyperparameters
    model=Sequential()
    model.add(Flatten(input_shape=(28,28)))
    #providing range for number of neurons in a hidden layer
    model.add(Dense(units=hp.Int('num_of_neurons',min_value=32,max_value=512,step=32),
                                    activation='relu'))
    #output layer
    model.add(Dense(10,activation='softmax'))
    #compiling the model; the learning rate is tuned with hp.Choice
    model.compile(optimizer=keras.optimizers.Adam(hp.Choice('learning_rate',values=[1e-2, 1e-3, 1e-4])),
                  loss='sparse_categorical_crossentropy',metrics=['accuracy'])
    return model

In the above code, we have defined the function build_model(hp), where hp stands for hyperparameters. While adding the hidden layer we use the hp.Int( ) function, which takes an integer range and tests values from it during tuning. We have provided the range for neurons from 32 to 512 with a step size of 32, so the model will be tested with 32, 64, 96, 128, …, 512 neurons.

Then we have added the output layer. While compiling the model, the Adam optimizer is used with different values of the learning rate, which is the next hyperparameter for tuning. The hp.Choice( ) function is used, which will test each trial with one of the three learning rate values provided.
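
We also listed the activation function among the hyperparameters to tune, although the code above fixes it to ‘relu’. A hedged sketch of tuning it as well with another hp.Choice( ) call, reusing the imports from earlier (the candidate activations here are illustrative choices, not from the original run):

#a sketch: tuning the activation function as well with hp.Choice
def build_model_act(hp):
    model=Sequential()
    model.add(Flatten(input_shape=(28,28)))
    #candidate activations are illustrative assumptions
    act=hp.Choice('activation',values=['relu','tanh','sigmoid'])
    model.add(Dense(units=hp.Int('num_of_neurons',min_value=32,max_value=512,step=32),
                    activation=act))
    #output layer
    model.add(Dense(10,activation='softmax'))
    model.compile(optimizer=keras.optimizers.Adam(hp.Choice('learning_rate',values=[1e-2, 1e-3, 1e-4])),
                  loss='sparse_categorical_crossentropy',metrics=['accuracy'])
    return model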

#feeding the model and parameters to Random Search
tuner=RandomSearch(build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=3,
    directory='tuner1',
    project_name='Clothing')

The code above uses the Random Search hyperparameter optimizer. The following arguments are provided to RandomSearch. The first is the model-building function, build_model. The objective is val_accuracy, which means the tuner searches for the hyperparameters that give the best validation accuracy. Next, max_trials and executions_per_trial are provided, which are 5 and 3 respectively in our case, meaning the model will be trained 15 (5*3) times during the search. The directory and project name are provided to save the results of every trial.

#this tells us how many hyperparameters we are tuning
#in our case it's 2 = neurons, learning rate
tuner.search_space_summary()
#fitting the tuner on train dataset
tuner.search(X_train,y_train,epochs=10,validation_data=(X_test,y_test))

The above code will run 5 trials with 3 executions each and will print the details of each trial, including the best validation accuracy achieved by the model.

We can also check the summary of all the trials and the hyperparameters chosen for the best accuracy using the code below. The best accuracy is achieved using 416 neurons in the hidden layer and 0.0001 as the learning rate.

tuner.results_summary()
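
Besides the printed summary, Keras Tuner also exposes the winning configuration programmatically via get_best_hyperparameters( ) and get_best_models( ); a short sketch:

#retrieving the best hyperparameters and the best model from the search
best_hp=tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hp.get('num_of_neurons'),best_hp.get('learning_rate'))
best_model=tuner.get_best_models(num_models=1)[0]
best_model.evaluate(X_test,y_test)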

That’s how we perform tuning for Neural Networks using Keras Tuner.

Let’s tune some more parameters in the next code. Here we also provide a range for the number of hidden layers in the model, which is between 2 and 20.

def build_model(hp):                 #hp means hyperparameters
    model=Sequential()
    model.add(Flatten(input_shape=(28,28)))
    #providing the range for hidden layers  
    for i in range(hp.Int('num_of_layers',2,20)):         
        #providing range for number of neurons in hidden layers
        model.add(Dense(units=hp.Int('num_of_neurons'+ str(i),min_value=32,max_value=512,step=32),
                                    activation='relu'))
    model.add(Dense(10,activation='softmax'))    #output layer
    #compiling the model
    model.compile(optimizer=keras.optimizers.Adam(hp.Choice('learning_rate',values=[1e-2, 1e-3, 1e-4])),   #tuning learning rate
                  loss='sparse_categorical_crossentropy',metrics=['accuracy'])
    return model
#feeding the model and parameters to Random Search
tuner=RandomSearch(build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=3,
    directory='project',
    project_name='Clothing')
#tells us how many hyperparameters we are tuning
#in our case it's 3 = layers, neurons, learning rate
tuner.search_space_summary()
#fitting the tuner
tuner.search(X_train,y_train,epochs=10,validation_data=(X_test,y_test))

The code below prints the summary of all trials and the best accuracy of the model. This time we got 0.89 as the validation accuracy.

tuner.results_summary()
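
If you prefer to retrain the winning configuration from scratch rather than reuse the checkpointed weights, a minimal sketch using tuner.hypermodel.build( ):

#rebuilding the best model from its hyperparameters and retraining it
best_hp=tuner.get_best_hyperparameters(num_trials=1)[0]
model=tuner.hypermodel.build(best_hp)
model.fit(X_train,y_train,epochs=10,validation_data=(X_test,y_test))
model.evaluate(X_test,y_test)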

 

Endnotes:

This was the simplest possible way to tune hyperparameters in a Neural Network. Please refer to the official documentation of Keras Tuner for more details: https://keras.io/keras_tuner/

 

About the Author:

I am Deepanshi Dhingra, currently working as a Data Science Researcher, with knowledge of Analytics, Exploratory Data Analysis, Machine Learning, and Deep Learning. Feel free to connect with me on LinkedIn for any feedback and suggestions.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
