Landmark Detection with Deep Learning

Shikha Gupta Last Updated : 03 Dec, 2021
7 min read

This article was published as a part of the Data Science Blogathon

When we look back at our childhood diaries, we want to recognize all the places we visited at least once as children, but we no longer remember their names. Most people in India visit various temples, yet they often forget what those temples are called. Sometimes this puts us in an awkward spot where we cannot even tell our peers, "Oh yes, I have visited this place too." We even forget who built a particular monument. The solution to these problems is landmark detection, which helps us recall the names of these places. Do you know how landmark detection works? In this blog, we are going to create a deep learning project on landmark detection with Python.

What is Landmark Detection?

Landmark detection is the task of recognizing famous human-made sculptures, buildings, and monuments inside an image. You can compare it with Google's well-known landmark detection feature, which is used by Google Maps.

At the end of this blog, you will be able to create your own landmark detector like Google using the Keras library of Deep learning.

Dataset

Our task is to build a neural network in Python that recognizes the landmarks inside images. The most critical task for any project is choosing an appropriate dataset for model training; for this deep learning project we use a Kaggle dataset. The dataset consists of image URLs that are publicly available online and contains three CSV files: test, train, and index images. The test images are the ones on which the trained model must recognize landmarks and predict their labels. The training images come with landmark labels and are used to train the model for accurate landmark recognition. The index images are used for the image retrieval task. You can download the dataset from https://www.kaggle.com/google/google-landmarks-dataset.
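
Because the Kaggle CSVs only contain image URLs, the pictures themselves have to be downloaded before we can train on them. The snippet below is a minimal sketch of one way to do this with pandas and requests; the column names id and url, and the three-character folder layout, are assumptions chosen to match the way images are read later in this tutorial, so adjust them to your copy of the CSV.

import os
import pandas as pd
import requests

urls = pd.read_csv("train.csv")

def download_image(row, base_path="./"):
    # Store each image as <c1>/<c2>/<c3>/<id>.jpg, where c1-c3 are the first
    # three characters of the image id (the layout assumed by the loading code below)
    fname = row["id"] + ".jpg"
    folder = os.path.join(base_path, fname[0], fname[1], fname[2])
    os.makedirs(folder, exist_ok=True)
    target = os.path.join(folder, fname)
    if os.path.exists(target):
        return
    resp = requests.get(row["url"], timeout=10)
    if resp.status_code == 200:
        with open(target, "wb") as f:
            f.write(resp.content)

# Quick smoke test on the first five rows
for _, row in urls.head(5).iterrows():
    download_image(row)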


Google Landmark Detection with Keras

Step One: Import the libraries

To begin with this task, our first step is to import all the required Python libraries that we need to create our deep learning model for landmark detection:

import numpy as np
import pandas as pd
import keras
import cv2
from matplotlib import pyplot as plt
import os
import random
from PIL import Image

Step Two: Import dataset

So after importing libraries our next task is to import the landmark datasets containing images:

samples = 20000
df = pd.read_csv("train.csv")
df = df.loc[:samples,:]  # keep the first 20,001 rows (loc slicing is end-inclusive)
num_classes = len(df["landmark_id"].unique())
num_data = len(df)
base_path = "./"  # assumed root folder that holds the downloaded images; adjust to your setup

Now we’ll check the size of the training data and the number of unique classes present in the training data:

print("Size of training data:", df.shape)
print("Number of unique classes:", num_classes)

Output

Size of training data: (20001, 2)

Number of unique classes: 1020

There are 20,001 training samples belonging to around 1,020 different classes, which gives us on average 19.6 images per class. However, the samples may not be spread evenly across classes, so let's observe the distribution of samples by class:

data = pd.DataFrame(df['landmark_id'].value_counts())
#index the data frame
data.reset_index(inplace=True) 
data.columns=['landmark_id','count']
print(data.head(10))
print(data.tail(10))

Output

      landmark_id  count
0            1924    944
1              27    504
2             454    254
3            1346    244
4            1127    201
5             870    193
6            2185    177
7            1101    162
8             389    140
9             219    139

      landmark_id  count
1010          499      2
1011         1942      2
1012          875      2
1013         2297      2
1014          611      2
1015         1449      2
1016         1838      2
1017          604      2
1018          374      2
1019          991      2

As we can see, the 10 most frequent landmarks range from 139 to 944 data points each, while the 10 least frequent landmarks have only 2 data points each.

print(data['count'].describe())  # statistical summary of the distribution
plt.hist(data['count'], 100, range=(0, 944), label='test')  # histogram of the distribution
plt.xlabel("Amount of images")
plt.ylabel("Occurrences")

Output

count    1020.000000
mean       19.608824
std        41.653684
min         2.000000
25%         5.000000
50%         9.000000
75%        21.000000
max       944.000000
Name: count, dtype: float64
Text(0, 0.5, 'Occurrences')

[Histogram: distribution of the number of images per landmark class]

After observing the above histogram, we can conclude that the vast majority of classes have only a handful of images.

print("Amount of classes with less than or equal to five datapoints:", (data['count'].between(0,5)).sum()) 
print("Amount of classes between five and 10 datapoints:", (data['count'].between(5,10)).sum())
n = plt.hist(df["landmark_id"],bins=df["landmark_id"].unique())
freq_info = n[0]
plt.xlim(0,data['landmark_id'].max())
plt.ylim(0,data['count'].max())
plt.xlabel('Landmark ID')
plt.ylabel('Number of images')

Output

Amount of classes with less than or equal to five data points: 322

Amount of classes between five and 10 data points: 342

Text(0, 0.5, 'Number of images')

[Histogram: number of images per landmark ID]

The above graph depicts that around 50% of the 1020 classes have fewer than 10 images, which can create a problem while training a classifier.

There are also a few outlier classes with far more images than the rest, which means the model may have a better chance of producing a correct "guess" for these classes simply because they dominate the data.
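
A common way to soften such an imbalance is to weight the loss by class frequency. The training loop later in this article does not actually use class weights (it only defines a weight_classes flag), but here is a minimal sketch, using scikit-learn and the df loaded above, of how per-class weights could be computed; the resulting dictionary can be passed to Keras fit via class_weight, or used to build per-sample weights for train_on_batch.

from sklearn.utils.class_weight import compute_class_weight

# One weight per landmark_id, inversely proportional to its frequency in df
classes = np.sort(df["landmark_id"].unique())
weights = compute_class_weight(class_weight="balanced",
                               classes=classes,
                               y=df["landmark_id"])
class_weight = dict(zip(classes, weights))

# Rare landmarks receive large weights, frequent landmarks small ones
print("Weight of the most frequent class (1924):", round(class_weight[1924], 3))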

Training Model

Now, our task is to train a deep learning model in Python that detects landmarks, working along the same lines as the Google landmark detection model.

from sklearn.preprocessing import LabelEncoder
lencoder = LabelEncoder()
lencoder.fit(df["landmark_id"])

def encode_label(lbl):
    return lencoder.transform(lbl)

def decode_label(lbl):
    return lencoder.inverse_transform(lbl)

def get_image_from_number(num):
    # Images are stored as <c1>/<c2>/<c3>/<id>.jpg, where c1-c3 are the
    # first three characters of the image id
    fname, label = df.loc[num,:]
    fname = fname + ".jpg"
    f1 = fname[0]
    f2 = fname[1]
    f3 = fname[2]
    path = os.path.join(f1, f2, f3, fname)
    im = cv2.imread(os.path.join(base_path, path))
    return im, label

print("4 sample images from random classes:")
fig = plt.figure(figsize=(16, 16))
for i in range(1, 5):
    # Pick a random third-level folder, then a random image inside it
    a = random.choices(os.listdir(base_path), k=3)
    folder = base_path + '/' + a[0] + '/' + a[1] + '/' + a[2]
    random_img = random.choice(os.listdir(folder))
    img = np.array(Image.open(folder + '/' + random_img))
    fig.add_subplot(1, 4, i)
    plt.imshow(img)
    plt.axis('off')
plt.show()

[Four sample images drawn from random classes]

from keras.applications import VGG19
from keras.layers import Dense, BatchNormalization
from keras import Sequential

# Training parameters
learning_rate = 0.0001
momentum      = 0.09
loss_function = "sparse_categorical_crossentropy"

# VGG19 architecture trained from scratch (weights=None means no pretrained weights)
source_model = VGG19(weights=None)

model = Sequential()
for layer in source_model.layers[:-1]:  # copy every layer except the final prediction layer
    if layer == source_model.layers[-25]:
        model.add(BatchNormalization())  # normalize the raw pixel inputs before the first conv block
    model.add(layer)
model.add(Dense(num_classes, activation="softmax"))  # new prediction layer sized to our 1,020 classes
model.summary()

rms = keras.optimizers.RMSprop(learning_rate=learning_rate, momentum=momentum)
model.compile(optimizer=rms,
              loss=loss_function,
              metrics=["accuracy"])
print("Model compiled!\n")

Output

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
batch_normalization (BatchNo (None, 224, 224, 3)       12
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
block3_conv4 (Conv2D)        (None, 56, 56, 256)       590080
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
block4_conv4 (Conv2D)        (None, 28, 28, 512)       2359808
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
block5_conv4 (Conv2D)        (None, 14, 14, 512)       2359808
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
flatten (Flatten)            (None, 25088)             0
fc1 (Dense)                  (None, 4096)              102764544
fc2 (Dense)                  (None, 4096)              16781312
dense (Dense)                (None, 1020)              4178940
=================================================================
Total params: 143,749,192
Trainable params: 143,749,186
Non-trainable params: 6
_________________________________________________________________

#Function used to process the data, fitted into a data generator.
def get_image_from_number(num, df):
    fname, label = df.iloc[num,:]
    fname = fname + ".jpg"
    f1 = fname[0]
    f2 = fname[1]
    f3 = fname[2]
    path = os.path.join(f1,f2,f3,fname)
    im = cv2.imread(os.path.join(base_path,path))
    return im, label
def image_reshape(im, target_size):
    return cv2.resize(im, target_size)
def get_batch(dataframe,start, batch_size):
    image_array = []
    label_array = []
    end_img = start+batch_size
    if end_img > len(dataframe):
        end_img = len(dataframe)
    for idx in range(start, end_img):
        n = idx
        im, label = get_image_from_number(n, dataframe)
        im = image_reshape(im, (224, 224)) / 255.0
        image_array.append(im)
        label_array.append(label)
    label_array = encode_label(label_array)
    return np.array(image_array), np.array(label_array)
batch_size = 16
epoch_shuffle = True
weight_classes = True
epochs = 15
# Split train data up into 80% and 20% validation
train, validate = np.split(df.sample(frac=1), [int(.8*len(df))])
print("Training on:", len(train), "samples")
print("Validation on:", len(validate), "samples")
for e in range(epochs):
    print("Epoch: ", str(e+1) + "/" + str(epochs))
    if epoch_shuffle:
        train = train.sample(frac = 1)
    for it in range(int(np.ceil(len(train)/batch_size))):
        X_train, y_train = get_batch(train, it*batch_size, batch_size)
        model.train_on_batch(X_train, y_train)
model.save("Model.h5")

Output

Training on: 16000 samples

Validation on: 4001 samples

Epoch: 1/15

Epoch: 2/15

Epoch: 3/15

Epoch: 4/15

Epoch: 5/15

Epoch: 6/15

Epoch: 7/15

Epoch: 8/15

Epoch: 9/15

Epoch: 10/15

Epoch: 11/15

Epoch: 12/15

Epoch: 13/15

Epoch: 14/15

Epoch: 15/15

Now we are done with model training. Our next step is to test the model; let's see the results of our trained landmark detection model on the validation split:

### Test on the validation set
batch_size = 16
errors = 0
good_preds = []
bad_preds = []
for it in range(int(np.ceil(len(validate)/batch_size))):
    X_train, y_train = get_batch(validate, it*batch_size, batch_size)
    result = model.predict(X_train)
    cla = np.argmax(result, axis=1)
    for idx, res in enumerate(result):
        print("Class:", cla[idx], "- Confidence:", np.round(res[cla[idx]],2), "- GT:", y_train[idx])
        if cla[idx] != y_train[idx]:
            errors = errors + 1
            bad_preds.append([batch_size*it + idx, cla[idx], res[cla[idx]]])
        else:
            good_preds.append([batch_size*it + idx, cla[idx], res[cla[idx]]])
print("Errors: ", errors, "Acc:", np.round(100*(len(validate)-errors)/len(validate),2))
# Show the five most confident correct predictions
good_preds = np.array(good_preds)
good_preds = np.array(sorted(good_preds, key=lambda x: x[2], reverse=True))
fig = plt.figure(figsize=(16, 16))
for i in range(1, 6):
    n = int(good_preds[i,0])
    img, lbl = get_image_from_number(n, validate)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    fig.add_subplot(1, 5, i)
    plt.imshow(img)
    lbl2 = np.array([int(good_preds[i,1])])  # predicted class index as a 1-D array
    sample_cnt = list(df.landmark_id).count(lbl)
    plt.title("Label: " + str(lbl) + "\nClassified as: " + str(decode_label(lbl2)) + "\nSamples in class " + str(lbl) + ": " + str(sample_cnt))
    plt.axis('off')
plt.show()

[Five of the most confident correct predictions on the validation set]
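
Finally, to try the trained detector on a fresh photo, you can reload the saved Model.h5 and push a single image through it. The snippet below is a minimal sketch assuming a hypothetical file my_landmark.jpg and reusing the same 224x224 preprocessing and label encoder defined above.

from keras.models import load_model

model = load_model("Model.h5")

# Preprocess one image exactly like the training batches
img = cv2.imread("my_landmark.jpg")        # hypothetical file name, replace with your own photo
img = cv2.resize(img, (224, 224)) / 255.0
img = np.expand_dims(img, axis=0)          # add the batch dimension

pred = model.predict(img)
class_idx = np.argmax(pred, axis=1)
landmark_id = decode_label(class_idx)      # map the class index back to the original landmark_id
print("Predicted landmark_id:", landmark_id[0],
      "- confidence:", np.round(pred[0, class_idx[0]], 2))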

Conclusion

You can see from the model output how monument images are classified according to their classes and labels. The project uses the Keras deep learning library to build a convolutional network and train the model. Hope you liked the blog; if you have any doubts, please drop them in the comments.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.

My full name is Shikha Gupta. I am pursuing a B.Tech in computer science from Banasthali Vidhyapeeth, Rajasthan.
I am from East Champaran in Bihar.
My areas of interest include deep learning, NLP, Java, data structures, DBMS, and many more.
