In today’s fast-paced world of local food delivery, ensuring customer satisfaction is key for companies. Major players like Zomato and Swiggy dominate this industry. Customers expect fresh food; if they receive spoiled items, they expect a refund or a discount voucher. However, manually determining food freshness is cumbersome for both customers and company staff. One solution is to automate this process using deep learning models. Such a model can predict food freshness, so that only the complaints it flags are reviewed by employees for final validation, while complaints for food the model confirms as fresh are dismissed automatically. In this article, we will build a Food Quality Detector using Deep Learning.
Deep Learning, a subset of artificial intelligence, offers significant utility in this context. Specifically, Convolutional Neural Networks (CNNs) can be trained on food images to discern their freshness. The accuracy of our model hinges largely on the quality of the dataset. Ideally, real food images from users’ chatbot complaints in hyperlocal food delivery apps would greatly enhance accuracy. However, lacking access to such data, we rely on a widely used dataset known as the “Fresh and Rotten Classification dataset,” available on Kaggle. To explore the complete deep-learning code, simply click the “Copy & Edit” button provided here.
Deep Learning, a subset of Artificial Intelligence, builds models by training neural networks on large datasets. These networks loosely mimic the functionality of the human brain and are especially effective on spatial data such as images.
In the context of food quality detection, training deep learning models on extensive sets of food images is essential for accurately distinguishing good-quality food items from bad ones. We can also tune hyperparameters based on the data being fed to the model in order to make it more accurate.
Integrating this feature into hyperlocal food delivery apps offers several benefits. The model avoids bias towards specific customers and predicts accurately, thereby reducing complaint resolution time. Additionally, we can employ it during the order packing process to inspect food quality before delivery, ensuring customers consistently receive fresh food.
To build this feature end to end, we need to follow several steps: obtaining and cleaning the dataset, training the deep learning model, evaluating its performance, tuning hyperparameters, and finally saving the model in H5 format. After that, we can implement the frontend using React and the backend using Python’s Django framework, which will handle image upload and processing.
Before going deep into data preprocessing and model building, it’s crucial to understand the dataset. As discussed earlier, we will be using a dataset from Kaggle named Fresh and Rotten Food Classification. The dataset is split into two main folders, Train and Test, used for training and testing respectively. Under the Train folder, there are 9 sub-folders of fresh fruits and vegetables and 9 sub-folders of rotten fruits and vegetables.
Key Features of Dataset
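One quick way to get a feel for the dataset is to count how many images each class folder contains. The short sketch below is not part of the original walkthrough; it assumes the dataset is mounted at the same Kaggle path used in the rest of this article.

import os

# Assumed location of the Kaggle dataset (same path used later in this article)
dataset_base_dir = '/kaggle/input/fresh-and-stale-classification/dataset'
train_dir = os.path.join(dataset_base_dir, 'Train')

# Print the number of images available for each class sub-folder
for category in sorted(os.listdir(train_dir)):
    folder = os.path.join(train_dir, category)
    if os.path.isdir(folder):
        print(f"{category}: {len(os.listdir(folder))} images")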
In this section, we will first load the images using the ‘tensorflow.keras.preprocessing.image.load_img‘ function and visualize them using the matplotlib library. Preprocessing these images for model training is really important: it involves cleaning and organizing the images to make them suitable for the model.
import os
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import load_img

def visualize_sample_images(dataset_dir, categories):
    # Show the first image from each requested category side by side
    n = len(categories)
    fig, axs = plt.subplots(1, n, figsize=(20, 5))
    for i, category in enumerate(categories):
        folder = os.path.join(dataset_dir, category)
        image_file = os.listdir(folder)[0]
        img_path = os.path.join(folder, image_file)
        img = load_img(img_path)
        axs[i].imshow(img)
        axs[i].set_title(category)
    plt.tight_layout()
    plt.show()

dataset_base_dir = '/kaggle/input/fresh-and-stale-classification/dataset'
train_dir = os.path.join(dataset_base_dir, 'Train')
categories = ['freshapples', 'rottenapples', 'freshbanana', 'rottenbanana']
visualize_sample_images(train_dir, categories)
Now let’s load the images into generators. We will resize all images to the same height and width of 180 pixels, apply data augmentation, and reserve 20% of the training images for validation.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

batch_size = 32
img_height = 180
img_width = 180

# Rescale pixel values, apply augmentation, and hold out 20% of the data for validation
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest',
    validation_split=0.2)

# Labels are inferred from the sub-folder names inside train_dir
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary',
    subset='training')

validation_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary',
    subset='validation')
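As a quick sanity check (not part of the original walkthrough), you can confirm which label index Keras assigned to each sub-folder and how many images landed in each split:

# Label indices are inferred from the sub-folder names, in alphabetical order
print(train_generator.class_indices)
print("Training images:", train_generator.samples)
print("Validation images:", validation_generator.samples)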
Now let’s build the deep learning model using the Sequential API from ‘tensorflow.keras’. We will add 3 convolution layers and use the Adam optimizer. Before delving into the practical part, let’s first understand what the terms ‘Sequential Model‘, ‘Adam Optimizer‘, and ‘Convolution Layer‘ mean.
The sequential model comprises a stack of layers, offering a fundamental structure in Keras. It’s ideal for scenarios where your neural network features a single input tensor and a single output tensor. You add layers in the sequential order of execution, making it suitable for constructing straightforward models with stacked layers. This simplicity makes the sequential model highly useful and easier to implement.
Adam stands for ‘Adaptive Moment Estimation’. It is an optimization algorithm that serves as an alternative to plain stochastic gradient descent and updates the network weights iteratively. Adam is beneficial because it maintains a separate, adaptive learning rate for each network weight, which helps when gradients are noisy or sparse.
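In the code below we simply pass optimizer='adam', which uses Keras’ default settings. If you want to control the learning rate explicitly, for instance while tuning hyperparameters, a minimal sketch is shown here; the value 1e-4 is purely illustrative, not a tuned choice.

from tensorflow.keras.optimizers import Adam

# Illustrative only: an Adam instance with an explicit learning rate
optimizer = Adam(learning_rate=1e-4)
# model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])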
The convolution layer is the main building block of Convolutional Neural Networks (CNNs) and is mainly used for processing spatial data such as images. It applies a convolution operation to its input and passes the result to the next layer.
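To make this concrete, here is a small standalone sketch: a single Conv2D layer with 32 filters of size 3x3, applied to a 180x180 RGB image with no padding, produces 32 feature maps of size 178x178.

import tensorflow as tf
from tensorflow.keras.layers import Conv2D

# A dummy batch containing one 180x180 RGB image
dummy_batch = tf.random.uniform((1, 180, 180, 3))

# One convolution layer with 32 filters of size 3x3 (default 'valid' padding)
conv_layer = Conv2D(32, (3, 3), activation='relu')
feature_maps = conv_layer(dummy_batch)
print(feature_maps.shape)  # (1, 178, 178, 32)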
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    # Three convolution + pooling blocks extract increasingly abstract image features
    Conv2D(32, (3, 3), activation='relu', input_shape=(img_height, img_width, 3)),
    MaxPooling2D(2, 2),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Flatten(),
    Dense(512, activation='relu'),
    Dropout(0.5),                    # dropout to reduce overfitting
    Dense(1, activation='sigmoid')   # single output probability: fresh vs. rotten
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

epochs = 10
history = model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // batch_size)
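Before moving on to testing, one optional way to evaluate how training went (not shown in the original walkthrough) is to plot the accuracy and loss curves stored in the history object returned by model.fit:

import matplotlib.pyplot as plt

# Plot training vs. validation accuracy and loss recorded during model.fit
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.plot(history.history['accuracy'], label='train accuracy')
ax1.plot(history.history['val_accuracy'], label='validation accuracy')
ax1.set_xlabel('epoch')
ax1.legend()
ax2.plot(history.history['loss'], label='train loss')
ax2.plot(history.history['val_loss'], label='validation loss')
ax2.set_xlabel('epoch')
ax2.legend()
plt.tight_layout()
plt.show()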
Now let’s test the model by giving it a new food image and see how accurately it classifies the image as fresh or rotten.
from tensorflow.keras.preprocessing import image
import numpy as np

def classify_image(image_path, model):
    # Load and preprocess the image exactly as during training
    img = image.load_img(image_path, target_size=(img_height, img_width))
    img_array = image.img_to_array(img)
    img_array = np.expand_dims(img_array, axis=0)
    img_array /= 255.0

    predictions = model.predict(img_array)
    if predictions[0] > 0.5:
        print("Rotten")
    else:
        print("Fresh")

image_path = '/kaggle/input/fresh-and-stale-classification/dataset/Train/rottenoranges/Screen Shot 2018-06-12 at 11.18.28 PM.png'
classify_image(image_path, model)
As we can see, the model has predicted correctly. Since we gave it a rotten orange image as input, the model correctly classified it as Rotten.
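As mentioned earlier, the trained model should be saved in H5 format so that the backend can load it later. A minimal sketch, with a file name chosen here purely for illustration:

# Save the trained model in HDF5 (.h5) format; the file name is illustrative
model.save('food_quality_model.h5')

# It can later be reloaded, for example inside the backend, like this:
# from tensorflow.keras.models import load_model
# model = load_model('food_quality_model.h5')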
For the frontend (React) and backend (Django) code, you can find my complete implementation on GitHub here: Link
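The repository linked above contains the full React frontend and Django backend. As a rough illustration of how the backend might accept an uploaded image and run the saved model on it, here is a minimal hedged sketch of a Django view; the view name, form field name, model file name, and temporary path are all hypothetical and not taken from the actual repository.

# views.py -- hypothetical sketch, not the repository's actual code
import numpy as np
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

MODEL = load_model('food_quality_model.h5')  # assumed model file name
IMG_SIZE = (180, 180)                        # same size used during training

@csrf_exempt
def classify_food(request):
    # Expect a POST request with the image in a form field named 'food_image'
    if request.method == 'POST' and request.FILES.get('food_image'):
        uploaded = request.FILES['food_image']
        # Write the upload to a temporary file so Keras can read it from disk
        tmp_path = '/tmp/' + uploaded.name
        with open(tmp_path, 'wb') as f:
            for chunk in uploaded.chunks():
                f.write(chunk)
        img = image.load_img(tmp_path, target_size=IMG_SIZE)
        img_array = np.expand_dims(image.img_to_array(img), axis=0) / 255.0
        score = float(MODEL.predict(img_array)[0][0])
        label = 'Rotten' if score > 0.5 else 'Fresh'
        return JsonResponse({'label': label, 'score': score})
    return JsonResponse({'error': 'POST an image in the "food_image" field'}, status=400)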
In conclusion, to automate food quality complaints in Hyperlocal Delivery apps, we propose building a deep learning model integrated with a web app. However, due to the limited training data, the model may not accurately detect every food image. This implementation serves as a foundational step towards a larger solution. Access to real-time user-uploaded images within these apps would significantly enhance the accuracy of our model.