In a rapidly evolving technological landscape, understanding human identity through artificial intelligence is more crucial than ever. Picture a bustling city where cameras capture images and accurately detect age and gender. This article takes you through age and gender detection using image processing and deep learning techniques. By exploring the UTK dataset, you’ll learn how to build a convolutional neural network with Keras and TensorFlow, covering essential steps such as data preprocessing and model training. Whether you’re a budding data scientist or a seasoned professional, this guide will equip you with the skills to leverage AI to understand demographics, paving the way for innovative marketing and customer engagement applications.
Image processing is the enhancement and analysis of pictures taken from camera sources such as satellites and aircraft, as well as images captured in everyday life. Processing an image for analysis requires many different techniques and algorithms, and digitally formed pictures need to be carefully examined and studied.
Image processing has two main stages, each followed by simpler steps. The first is image enhancement, in which programs are used to improve an image and produce a higher-quality picture. The second, and most widely pursued, is extracting information from an image. Dividing an image into distinct parts is called segmentation.
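To make the idea of segmentation a little more concrete, here is a minimal sketch that splits a grayscale image into foreground and background regions by thresholding with OpenCV; the file name face.jpg is only a placeholder.
import cv2

# Load a grayscale image (placeholder file name) and segment it into two regions by thresholding
img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)

# Otsu's method chooses a threshold automatically; pixels above it become foreground (255)
_, segmented = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("face_segmented.jpg", segmented)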
Locating the information available in the pictures is essential, and the image data must be transformed and adjusted for detection purposes.
Depending on the problem at hand, different procedures are required. In a facial identification strategy, the expressions that faces contain hold a great deal of information; whenever one individual interacts with another, many such cues are exchanged.
These cues help in estimating certain attributes. Age estimation is a multi-class problem in which the years are grouped into classes, for example by binning raw ages into broad groups as sketched below. Individuals of different ages have different facial features, so it is hard to group the pictures accurately.
To identify the age and gender of several faces, a sequence of steps is followed. Features are extracted by a convolutional network, the image is assigned to one of the age classes based on the trained models, and the extracted features are processed further and passed to the training framework.
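As a small illustration of treating age as a multi-class problem, the following sketch groups raw ages into a few broad classes with pandas; the bin edges and class names are arbitrary choices for demonstration, not the ones used later in the model.
import pandas as pd

# Example ages and arbitrary bin edges, purely for illustration
ages = pd.Series([3, 15, 27, 41, 58, 72])
age_classes = pd.cut(ages, bins=[0, 12, 25, 45, 65, 120],
                     labels=['child', 'youth', 'adult', 'middle-aged', 'senior'])
print(age_classes)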
The UTK dataset comprises age, gender, ethnicity, image names, and pixel data in .csv format. Age and gender detection from images has been researched for a long time, and over the years different methodologies have been used to handle the problem. Here, we begin the task of recognizing age and gender using the Python programming language.
Keras is the high-level interface for the TensorFlow library. Use Keras if you need a deep learning library that allows simple and quick prototyping (through ease of use, modularity, and extensibility). It supports convolutional networks and recurrent networks, as well as combinations of the two, and it runs seamlessly on both CPU and GPU.
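As a quick illustration of how little code Keras needs for prototyping, here is a minimal, stand-alone sketch of a tiny convolutional classifier; the layer sizes are arbitrary and unrelated to the model built later in this article.
from tensorflow.keras import layers, models

# A tiny CNN defined in a few lines, just to show how quickly a model can be prototyped
demo = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(16, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])
demo.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
demo.summary()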
The data set can be downloaded from AGE, GENDER AND ETHNICITY (FACE DATA) CSV | Kaggle.
We will import the necessary libraries, load the age and gender dataset, and perform initial data processing to prepare the data for analysis. We’ll also visualize key demographic features, such as gender and ethnicity distributions, to gain insights into the dataset before developing the model.
#Import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df=pd.read_csv("age_gender.csv")
df1= pd.DataFrame(df)
# Plot the gender distribution (0 = Male, 1 = Female)
plt.figure(figsize=(10,7))
ax = df1.gender.value_counts().plot.bar(title='Gender')
ax.set_xlabel('Gender (0 = Male, 1 = Female)')
ax.set_ylabel('Count')
# Plot the ethnicity distribution; sort by ethnicity code so the labels line up with the bars
plt.figure(figsize=(10,7))
labels = ['White', 'Black', 'Indian', 'Asian', 'Hispanic']
ax = df1.ethnicity.value_counts().sort_index().plot.bar()
ax.set_xticklabels(labels)
ax.set_title('Ethnicity')
## Converting pixels into numpy array
df1['pixels'] = df1['pixels'].apply(lambda x: np.reshape(np.array(x.split(), dtype="float32"), (48,48)))
print(df1.head())
# Label mappings used for display (gender: 0 = Male, 1 = Female)
gender_values_to_labels = ['Male', 'Female']
eth_values_to_labels = ['White', 'Black', 'Indian', 'Asian', 'Hispanic']

def plot_data(rows, cols, lower_value, upper_value):
    fig = plt.figure(figsize=(cols*3, rows*4))
    for i in range(1, cols*rows + 1):
        k = np.random.randint(lower_value, upper_value)  # pick a random sample
        fig.add_subplot(rows, cols, i)  # adding sub plot
        gender = gender_values_to_labels[df1.gender[k]]
        ethnicity = eth_values_to_labels[df1.ethnicity[k]]
        age = df1.age[k]
        im = df1.pixels[k]
        plt.imshow(im, cmap='gray')
        plt.axis('off')
        plt.title(f'Gender:{gender}\nAge:{age}\nEthnicity:{ethnicity}')
    plt.tight_layout()
    plt.show()

plot_data(rows=1, cols=7, lower_value=0, upper_value=len(df1))
Keras is an open-source neural network library written in Python, capable of running on top of Theano, TensorFlow, or CNTK, and developed by Google engineer François Chollet. It is easy to understand, extensible, and particularly suitable for fast experimentation with deep neural networks.
First, we will import all the libraries required for the dataset. We will convert the columns into arrays using np.array with dtype float. We will then split the dataset into X_train, X_test, Y_train, and Y_test. Finally, we will build and train the model and test its predictions.
In detail, we first read the CSV file containing five columns (age, ethnicity, gender, img_name, and pixels) using the pandas read_csv function, and inspect the first five rows with the DataFrame.head() method. We convert the pixels column into NumPy arrays, reshaping each entry to 48 × 48 with a lambda function and casting the values to float. The pixel values are then divided by 255 to normalize them.
We assign a variable to the first row of the pixels column and display it with matplotlib to verify that the image renders correctly.
This section imports the necessary libraries for data handling, image processing, and neural network operations. The dataset (age_gender.csv) is loaded, and the pixel data is reshaped and normalized for further analysis and model training. This preprocessing step ensures the images are ready for the machine learning model to process.
# Import the libraries needed for data handling, image processing, and the neural network
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cv2
from PIL import Image
import tensorflow as tf
from tensorflow import keras

# Load the dataset and inspect the first rows
df = pd.read_csv("age_gender.csv")
df.head()
df1 = pd.DataFrame(df)

# Convert the space-separated pixel strings into 48x48 float arrays and normalize to [0, 1]
df1['pixels'] = df1.pixels.apply(lambda x: np.reshape(np.array(x.split(' '), dtype='float32'), (48, 48)))
df1['pixels'] = df1['pixels'] / 255

# Display the first image to verify the conversion
im = df1['pixels'][0]
plt.imshow(im, cmap='gray')
plt.axis('off')
Now, the pixel values of the images are converted into floating-point format and reshaped for model compatibility. Additionally, age and gender values are extracted and stored for later use in model training and validation. The reshaping process prepares the data for efficient processing and ensures proper image data handling for deep learning applications.
# Copy the per-image pixel arrays into a single NumPy tensor
X = np.zeros(shape=(23705, 48, 48))
for i in range(len(df1["pixels"])):
    X[i] = df1["pixels"][i]
X.dtype  # dtype('float64')

# Age
ag = df1['age']
ag = ag.astype(float)
ag = np.array(ag)
ag.shape
Output:
(23705,)
This section will explain how the gender data is processed and combined with the age data:
# Gender Data Preparation
g = df1['gender']
g = np.array(g)
g.shape  # (23705,)

# Combining age and gender into a single label array
labels_f = []
i = 0
while i < len(ag):
    label = []
    label.append([ag[i]])  # Age
    label.append([g[i]])   # Gender
    labels_f.append(label)
    i += 1

# Convert the list into an array
labels_f = np.array(labels_f)
labels_f.shape
Output:
(23705, 2, 1)
This section focuses on splitting the dataset into training and testing sets:
# Splitting the Data for Training and Testing
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Add a channel dimension so each image matches the model's (48, 48, 1) input
X = X.reshape(len(X), 48, 48, 1)

# Split the images and the combined [age, gender] labels
X_train, X_test, Y_train, Y_test = train_test_split(X, labels_f, test_size=0.25)

# Displaying the shape of the train and test sets
print(X_test.shape)
print(X_train.shape)
print(Y_test.shape)
print(Y_train.shape)

# Reorder the labels into [gender, age] to match the model's two outputs
Y_train_2 = [Y_train[:, 1], Y_train[:, 0]]
Y_test_2 = [Y_test[:, 1], Y_test[:, 0]]
Here, the dataset is split into training and testing subsets using sklearn, and the shapes of the resulting data are printed for validation. The training and test labels are then reordered into the [gender, age] format expected by the model's two outputs.
To build a robust age and gender detection model, we use a Convolutional Neural Network (CNN) architecture, which is especially effective for image classification tasks. CNNs are designed to recognize spatial patterns in images, making them ideal for distinguishing age-related and gender-specific features within facial images. The architecture we developed includes several layers of convolutional and max-pooling operations, followed by fully connected dense layers. This structure allows the model to progressively extract features like edges, textures, and facial patterns, crucial for accurate age and gender classification.
The model architecture begins with four convolutional layers with increasing filter sizes (32, 64, 128, and 256), each followed by max-pooling layers to reduce dimensionality and retain essential features. After flattening the data, two dense layers are used to capture complex patterns before branching out to two outputs: one for predicting gender and another for predicting age. We used the ReLU activation function for intermediate layers to introduce non-linearity, and sigmoid and ReLU activations for the final output layers, tuned to our binary (gender) and regression (age) tasks.
Now, we will develop the convolutional neural network model using Keras, build the architecture, and train it on our dataset.
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten,BatchNormalization
from tensorflow.keras.layers import Dense, MaxPooling2D,Conv2D
from tensorflow.keras.layers import Input,Activation,Add
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam
import tensorflow as tf
def Convolution(input_tensor, filters):
    # 3x3 convolution with L2 regularization, light dropout, and ReLU activation
    x = Conv2D(filters=filters, kernel_size=(3, 3), padding='same', strides=(1, 1), kernel_regularizer=l2(0.001))(input_tensor)
    x = Dropout(0.1)(x)
    x = Activation('relu')(x)
    return x

def model(input_shape):
    inputs = Input((input_shape))
    # Four convolution + max-pooling stages with increasing filter counts
    conv_1 = Convolution(inputs, 32)
    maxp_1 = MaxPooling2D(pool_size=(2, 2))(conv_1)
    conv_2 = Convolution(maxp_1, 64)
    maxp_2 = MaxPooling2D(pool_size=(2, 2))(conv_2)
    conv_3 = Convolution(maxp_2, 128)
    maxp_3 = MaxPooling2D(pool_size=(2, 2))(conv_3)
    conv_4 = Convolution(maxp_3, 256)
    maxp_4 = MaxPooling2D(pool_size=(2, 2))(conv_4)
    flatten = Flatten()(maxp_4)
    # Two dense branches: one for gender, one for age
    dense_1 = Dense(64, activation='relu')(flatten)
    dense_2 = Dense(64, activation='relu')(flatten)
    drop_1 = Dropout(0.2)(dense_1)
    drop_2 = Dropout(0.2)(dense_2)
    output_1 = Dense(1, activation="sigmoid", name='sex_out')(drop_1)  # gender (binary classification)
    output_2 = Dense(1, activation="relu", name='age_out')(drop_2)     # age (regression)
    model = Model(inputs=[inputs], outputs=[output_1, output_2])
    model.compile(loss=["binary_crossentropy", "mae"], optimizer="Adam",
                  metrics=["accuracy"])
    return model

Model = model((48, 48, 1))
Model.summary()

# Callback used during training; a simple early-stopping callback is assumed here
callback_list = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)

History = Model.fit(X_train, Y_train_2, batch_size=64, validation_data=(X_test, Y_test_2), epochs=5, callbacks=[callback_list])
Model.evaluate(X_test, Y_test_2)
pred = Model.predict(X_test)
pred[1]
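Since fit returns a History object, a quick way to check how training went is to plot the recorded metrics. The sketch below is a minimal example that assumes the model was trained as above; the exact metric names stored in History.history depend on the output layer names (sex_out, age_out).
import matplotlib.pyplot as plt

# Plot every metric recorded during training (losses and accuracies for both outputs)
plt.figure(figsize=(10, 6))
for name, values in History.history.items():
    plt.plot(values, label=name)
plt.xlabel('Epoch')
plt.ylabel('Value')
plt.legend()
plt.show()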
We will now see how to test the model on a specific image from the dataset, predict the age and gender, and visualize the image using matplotlib. This is part of the model evaluation or testing phase.
# Plot the image and test the model's predictions
def test_image(ind, X, Model):
    plt.imshow(X[ind].squeeze(), cmap='gray')  # Plot the image (dropping the channel dimension)
    image_test = X[ind]

    # Predict using the trained model
    pred_1 = Model.predict(np.array([image_test]))

    # Mapping prediction results for gender (0 = Male, 1 = Female, as labelled in the dataset)
    sex_f = ['Male', 'Female']

    # Get predicted age and gender
    age = int(np.round(pred_1[1][0][0]))
    sex = int(np.round(pred_1[0][0][0]))

    # Output predicted values
    print("Predicted Age: " + str(age))
    print("Predicted Sex: " + sex_f[sex])

# Example usage of the function
test_image(1980, X, Model)
The task of recognizing age and gender is, nonetheless, an inherently difficult problem, more so than many other computer vision tasks. The main reason for this difficulty lies in the data needed to train such systems. While general object detection tasks can often draw on hundreds of thousands or even millions of images for training, datasets with age and gender labels are considerably smaller, usually in the thousands or, at best, the tens of thousands. The model trained here did not achieve a particularly high accuracy rate, and further improvement of the model and training procedure is required.
Q1. What are the applications of age and gender prediction?
A. Age and gender prediction finds applications in various fields, including targeted advertising, market research, customer segmentation, and personalized user experiences. It helps businesses tailor their products and services to specific demographics and analyze consumer behaviour. Additionally, it aids in age- and gender-based content recommendations, social media marketing strategies, and public health initiatives, allowing for more effective and targeted campaigns.
Q2. How accurate are age and gender prediction algorithms?
A. Age and gender prediction algorithms can be reasonably accurate, but their performance can vary depending on the dataset used for training and the specific model employed. While they often achieve high accuracy rates, occasional misclassifications may occur due to factors like diverse appearances, age-related changes, and cultural variations. Continuous improvements in technology and data quality can further enhance their precision.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.