Finding and tracking the positions of important body joints or keypoints in an image or video sequence is the task of pose detection, commonly referred to as pose estimation or keypoint detection. It seeks to comprehend and depict the positioning and spatial arrangement of people or other subjects in a scene. Pose detection plays a crucial role in various sectors, including robotics and automation, gaming, security and surveillance, as well as sports and fitness monitoring. By offering insightful information on human movement and spatial relationships, it enables a wide range of applications involving human-computer interaction, analysis, animation, healthcare, security, and robotics. In this article, we are going to study some exciting pose detection algorithms built with modern computer vision techniques, and understand how, as beginners, we can use them in a real environment. These algorithms are OpenPose, PoseNet, and MoveNet.
We will also examine two new models recently contributed by Google that utilize the MobileNet V2 architecture: MoveNet Lightning and MoveNet Thunder.
The OpenPose pose detection model has a multi-stage architecture designed to find and estimate the keypoints of numerous people in an image or video. The model first passes the input image through a convolutional backbone that extracts feature maps representing various aspects of the image, including shapes, colours, and textures. In the subsequent stages, the model focuses on two outputs: confidence maps that indicate where individual body parts are likely to be, and Part Affinity Fields, a set of maps that encode the potential linkages between body parts, such as the connection between the wrist and elbow or the shoulder and hip.
To determine each person's actual pose, the model employs a greedy parsing algorithm that reads these affinity maps, establishes the relationships between body parts, and assembles a complete skeletal model of every pose.
These steps enable the OpenPose model to detect and track the poses of several people in real time with accuracy and efficiency.
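To make this concrete, here is a minimal sketch of running a pre-trained OpenPose COCO model through OpenCV's DNN module. It assumes you have already downloaded the Caffe definition and weight files; the file names below are the commonly distributed ones and may differ in your setup.
import cv2
# Assumed file names for the publicly distributed OpenPose COCO model
net = cv2.dnn.readNetFromCaffe('pose_deploy_linevec.prototxt', 'pose_iter_440000.caffemodel')
image = cv2.imread('people.jpg')  # hypothetical input image
h, w = image.shape[:2]
# Normalise pixel values and resize to the network's expected input size
blob = cv2.dnn.blobFromImage(image, 1.0 / 255, (368, 368), (0, 0, 0), swapRB=False, crop=False)
net.setInput(blob)
output = net.forward()  # part confidence maps followed by Part Affinity Fields
# Read out the single most confident location for each of the 18 COCO body parts
for part in range(18):
    heatmap = output[0, part, :, :]
    _, conf, _, point = cv2.minMaxLoc(heatmap)
    x, y = int(w * point[0] / output.shape[3]), int(h * point[1] / output.shape[2])
    if conf > 0.1:
        cv2.circle(image, (x, y), 5, (0, 255, 0), -1)
Note that this single-peak readout recovers at most one person per body part; the full OpenPose parsing step uses the Part Affinity Fields to group detections into multiple skeletons.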
The convolutional neural network (CNN) serves as the foundation for the design of the PoseNet pose detection model. To extract useful information, it takes an input image and runs it through several layers of convolutional processing. These convolutional layers help capture the image's numerous patterns and structures. The single-person pose estimation method used by PoseNet focuses on estimating the pose keypoints of one individual. Rather than regressing coordinates in a single step, the network predicts a low-resolution heatmap for each keypoint along with offset vectors, which are then decoded into the 2D (x, y) coordinates of body joints such as the wrists, elbows, knees, and ankles.
Pose estimation is quick and easy thanks to the PoseNet architecture's simplicity, making it ideal for applications with constrained processing resources, such as web browsers or smartphones. It offers a quick and simple approach for determining a person's pose in an image or video.
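To illustrate the decoding step, here is a minimal NumPy sketch that turns heatmaps and offset vectors into keypoint coordinates. The tensor layout assumed here (17 keypoints; the first 17 offset channels hold y offsets and the last 17 hold x offsets; an output stride of 32) follows a common PoseNet convention, but check the documentation of the specific model you load.
import numpy as np
def decode_keypoints(heatmaps, offsets, output_stride=32):
    # heatmaps: (H, W, 17) keypoint score maps from the network
    # offsets: (H, W, 34) offset vectors, assumed y-first channel layout
    h, w, num_kp = heatmaps.shape
    keypoints = []
    for k in range(num_kp):
        # Grid cell with the highest score for this keypoint
        y, x = np.unravel_index(np.argmax(heatmaps[:, :, k]), (h, w))
        # Refine the coarse grid position with the learned offset vector
        y_img = y * output_stride + offsets[y, x, k]
        x_img = x * output_stride + offsets[y, x, k + num_kp]
        keypoints.append((y_img, x_img, heatmaps[y, x, k]))
    return keypoints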
The architecture of the MoveNet pose detection model is also constructed using a deep convolutional neural network (CNN). It employs a mobile-friendly design that is optimized to operate on embedded systems and other devices with limited resources. MoveNet uses a single-person pose estimation approach with the goal of estimating a person's pose keypoints. It starts with a lightweight backbone network, followed by keypoint detection and keypoint association stages. The backbone network processes the input image and isolates significant features. The keypoint detection stage predicts the confidence scores and precise positions of body keypoints, and the keypoint association step then refines them by taking into account their dependencies and spatial relationships.
The MoveNet design balances efficiency and precision, making it appropriate for real-time pose estimation on devices with constrained computational power. In a number of applications, such as fitness tracking, augmented reality, and gesture-based interaction, it offers a practical method for identifying and tracking human poses.
Google created the Lightning and Thunder pose detection models as specialised variants within the MoveNet family, both released in 2021. Lightning is designed for lightning-fast pose estimation: it uses model compression techniques and architectural optimisations to reduce computing requirements and achieve very fast inference times, making it ideal for applications with strict latency limits. Thunder, on the other hand, prioritises accuracy: it uses a larger input resolution and a heavier backbone to identify and track a person's pose more precisely, at the cost of somewhat slower inference.
Both the Lightning and Thunder models distinguish themselves from competing methods by providing pose estimation that is efficient and suited to particular use cases: Lightning for very fast inference on a wide range of devices, and Thunder for improved accuracy. These models demonstrate Google's commitment to developing pose detection technology to meet a range of application needs.
We need to follow a few steps in order to use the MoveNet Lightning model for pose detection on an image. To begin, ensure that you have installed the necessary software libraries and dependencies; for MoveNet this means TensorFlow (or its lightweight runtime, TensorFlow Lite) together with helpers such as NumPy and OpenCV. Next, load the MoveNet Lightning model weights, typically available in a pre-trained format. Once the model is loaded, preprocess the input image by scaling it to the appropriate input size and applying any necessary normalization. Feed the model the preprocessed image, then run forward inference to get the results. The output consists of predicted keypoints for various body parts, which MoveNet returns as normalised (y, x, confidence) triplets.
Finally, perform any necessary post-processing on the keypoints, such as linking keypoints to create skeleton representations or applying confidence thresholds. This post-processing phase improves the results of pose estimation. By following these steps, you can use the MoveNet Lightning model to estimate and analyze the poses of people within an image.
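Putting these steps together, here is a minimal sketch of single-image inference with the MoveNet Lightning TFLite model. The model file name matches the one used later in this article; the input image path is a hypothetical placeholder.
import cv2
import numpy as np
import tensorflow as tf
# Load the pre-trained MoveNet Lightning TFLite model
interpreter = tf.lite.Interpreter(model_path='lite-model_movenet_singlepose_lightning_3.tflite')
interpreter.allocate_tensors()
image = cv2.imread('person.jpg')  # hypothetical input image path
# Pad and resize to the 192x192 input expected by Lightning, then cast to float32
input_image = tf.image.resize_with_pad(np.expand_dims(image, axis=0), 192, 192)
input_image = tf.cast(input_image, dtype=tf.float32)
# Run forward inference
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.set_tensor(input_details[0]['index'], np.array(input_image))
interpreter.invoke()
# Shape (1, 1, 17, 3): 17 keypoints, each a normalised (y, x, confidence) triplet
keypoints_with_scores = interpreter.get_tensor(output_details[0]['index'])
print(keypoints_with_scores)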
A detailed explanation of detecting a pose on an input image is given here.
Let us now build a MoveNet Lightning pipeline for implementing real-time pose detection on video data.
First of all, import the necessary libraries.
import tensorflow as tf
import numpy as np
from matplotlib import pyplot as plt
import cv2
Then comes the most important step: loading our MoveNet Lightning single-pose model. "Single pose" here means that the model detects the pose of only one individual, whereas its other version, termed multi-pose, detects the poses of multiple people in a frame.
interpreter = tf.lite.Interpreter(model_path='lite-model_movenet_singlepose_lightning_3.tflite')
interpreter.allocate_tensors()
You can download this model from TensorFlow Hub. Now, we shall define the connections between keypoints for the model. Keypoints are unique areas or landmarks on the human body that are identified and monitored in the context of pose detection models like MoveNet. These keypoints stand in for important joints and body parts, enabling a thorough comprehension of the body's pose. Frequently used keypoints include the wrists, elbows, shoulders, hips, knees, and ankles, in addition to the head, eyes, nose, and ears.
# Connections between keypoints; the letters are matplotlib-style colour labels
# ('m' magenta for left-side links, 'c' cyan for right-side links, 'y' yellow for cross-body links)
EDGES = {
    (0, 1): 'm',
    (0, 2): 'c',
    (1, 3): 'm',
    (2, 4): 'c',
    (0, 5): 'm',
    (0, 6): 'c',
    (5, 7): 'm',
    (7, 9): 'm',
    (6, 8): 'c',
    (8, 10): 'c',
    (5, 6): 'y',
    (5, 11): 'm',
    (6, 12): 'c',
    (11, 12): 'y',
    (11, 13): 'm',
    (13, 15): 'm',
    (12, 14): 'c',
    (14, 16): 'c'
}
# Function for drawing keypoints
def draw_keypoints(frame, keypoints, confidence_threshold):
    y, x, c = frame.shape
    # Scale the normalised (y, x) keypoint coordinates up to the frame size
    shaped = np.squeeze(np.multiply(keypoints, [y, x, 1]))
    for kp in shaped:
        ky, kx, kp_conf = kp
        # Draw only keypoints whose confidence clears the threshold
        if kp_conf > confidence_threshold:
            cv2.circle(frame, (int(kx), int(ky)), 4, (0, 255, 0), -1)
The keypoint dictionary for the above set of keypoints is:
nose: 0, left_eye: 1, right_eye: 2, left_ear: 3, right_ear: 4, left_shoulder: 5, right_shoulder: 6, left_elbow: 7, right_elbow: 8, left_wrist: 9, right_wrist: 10, left_hip: 11, right_hip: 12, left_knee: 13, right_knee: 14, left_ankle: 15, right_ankle: 16
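We also need a helper that draws the connections defined in EDGES. The sketch below follows the same conventions as draw_keypoints; for simplicity it uses a fixed line colour rather than the per-edge colour labels.
# Function for drawing connections between keypoints
def draw_connections(frame, keypoints, edges, confidence_threshold):
    y, x, c = frame.shape
    shaped = np.squeeze(np.multiply(keypoints, [y, x, 1]))
    for edge, colour in edges.items():
        p1, p2 = edge
        y1, x1, c1 = shaped[p1]
        y2, x2, c2 = shaped[p2]
        # Draw a limb only if both of its endpoints clear the confidence threshold
        if (c1 > confidence_threshold) and (c2 > confidence_threshold):
            cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)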
Now, after drawing the connections, let us see how to capture video through the OpenCV library.
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow('MoveNet Lightning', frame)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
cv2.VideoCapture is an important OpenCV function for reading input video. Here, 0 specifies that the video is captured from the laptop's primary camera, while 1 is typically used with an external webcam. To use a custom input video, simply pass the path of the video file as a string. The cv2.waitKey call refreshes the display and lets you quit the loop by pressing 'q'.
While estimating the pose through computer vision, padding, frame processing, and resizing are very important. They provide functionalities such as preserving the aspect ratio of the frame, matching the model's expected input resolution (192×192 for MoveNet Lightning), and avoiding distortion of body proportions. A complete per-frame pipeline is sketched below.
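Tying everything together, here is a minimal sketch of the complete real-time loop. It reuses the interpreter, EDGES, draw_keypoints, and draw_connections defined above; the 0.4 confidence threshold is an assumption you can tune.
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    # Pad and resize the frame to the 192x192 input expected by Lightning
    img = tf.image.resize_with_pad(np.expand_dims(frame, axis=0), 192, 192)
    input_image = tf.cast(img, dtype=tf.float32)
    # Run inference on the preprocessed frame
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    interpreter.set_tensor(input_details[0]['index'], np.array(input_image))
    interpreter.invoke()
    keypoints_with_scores = interpreter.get_tensor(output_details[0]['index'])
    # Render the detections on the original full-resolution frame
    draw_connections(frame, keypoints_with_scores, EDGES, 0.4)
    draw_keypoints(frame, keypoints_with_scores, 0.4)
    cv2.imshow('MoveNet Lightning', frame)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()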
Refer to the detailed code here.
So, what have we learnt from this article? Let us review some important takeaways.
The MoveNet pose detection model, particularly its speed-optimised variant MoveNet Lightning, is a potent method for precisely predicting human poses in real-time applications. These models use deep convolutional neural networks to detect and track keypoints that accurately represent different body parts. Thanks to their simplicity and efficiency, they are ideal for deployment on devices with limited resources, such as mobile phones and embedded systems. MoveNet models provide a flexible solution for a variety of applications, including fitness tracking, augmented reality, gesture-based interaction, and crowd analysis, and the model family covers both single-person and multi-person pose estimation. They have made significant contributions to the field of pose detection and demonstrate how computer vision technology continues to improve human-computer interaction and movement understanding.
Q1. What is pose detection?
A. Pose detection is the process of determining and tracking an individual's body pose in still or moving pictures by applying computer vision algorithms. It entails locating important joints or keypoints on the body and determining their locations and orientations.
Q2. How do pose detection algorithms work?
A. Pose detection algorithms use convolutional neural networks (CNNs), a type of deep learning model, to locate important human body keypoints. These keypoints are then connected to form skeletal representations that reveal details of the body's posture, position, and movement.
Q3. What are the applications of pose detection?
A. Pose detection has a wide range of uses, including augmented reality, robotics, animation, surveillance, motion analysis, sports analysis, and healthcare. It makes it possible to recognize gestures, track activities, animate characters, guide rehabilitation exercises, monitor security, and more.
Q4. What challenges does pose detection face?
A. Pose detection has difficulty identifying the keypoints of several people in crowded environments due to occlusions (body parts that are covered or overlapped), fluctuating lighting conditions, and complex poses. Real-time performance, precision, and robustness are other crucial considerations.
Q5. What are some popular pose detection models?
A. OpenPose, MoveNet, PoseNet, and AlphaPose are a few of the more well-known pose detection models. These models make use of deep learning methods and are widely used in computer vision for pose detection applications.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.