This article was published as a part of the Data Science Blogathon.
In this article, we are going to design a vehicle counter system using OpenCV in Python that can track any moving object using the idea of Euclidean distance tracking and contour detection.
You might have worked on computer vision before. Have you ever thought of tracking a single object wherever it goes?
Object tracking is the methodology used to follow the same object across frames, wherever it goes.
There are multiple techniques to implement object tracking using OpenCV. A tracker can be either a Single Object Tracker, which follows one object throughout the video, or a Multiple Object Tracker, which follows several objects at once.
In this article, we will build a Multiple Object Tracker, since our goal is to count the number of vehicles that pass within a time frame.
Tracking algorithms vary in complexity and computational cost.
In this article, we will use the Centroid Tracking Algorithm to build our tracker.
Step 1. Calculate the centroid of each detected object using its bounding box coordinates.
Step 2. For every new frame, do the same: compute the centroid from the bounding box coordinates and assign an id to every bounding box detected. Then compute the Euclidean distance between every possible pair of centroids.
Step 3. We assume that the same object moves the minimum distance between frames, which means the pair of centroids with the minimum distance in subsequent frames is considered to be the same object.
Step 4. Now assign the existing id to the moved centroid, indicating it is the same object.
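The centroid and distance computations in the steps above can be sketched in a few lines of pure Python. The bounding boxes here are made-up values purely for illustration:

```python
import math

def centroid(x, y, w, h):
    # Center of a bounding box given as (top-left x, top-left y, width, height)
    return ((x + x + w) // 2, (y + y + h) // 2)

# Hypothetical bounding boxes of the same car in two subsequent frames
prev_box = (100, 50, 40, 30)   # frame t
curr_box = (110, 52, 40, 30)   # frame t+1

c_prev = centroid(*prev_box)   # (120, 65)
c_curr = centroid(*curr_box)   # (130, 67)

# Euclidean distance between the two centroids
distance = math.hypot(c_curr[0] - c_prev[0], c_curr[1] - c_prev[1])
print(round(distance, 2))  # about 10.2 px, well below a 25 px threshold -> same object
```

Because the distance is small, the two boxes would be matched as the same vehicle; a box far away would fail the threshold and receive a new id.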
We will use the frame subtraction technique to capture moving objects: F(t+1) - F(t) => moved object.
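Frame subtraction itself can be illustrated with NumPy alone. The frames below are tiny synthetic arrays rather than real video, so the effect is easy to see:

```python
import numpy as np

# Two tiny synthetic grayscale frames: a bright "object" moves one pixel right
frame_t = np.zeros((5, 5), dtype=np.uint8)
frame_t1 = np.zeros((5, 5), dtype=np.uint8)
frame_t[2, 1] = 255   # object at column 1 in frame t
frame_t1[2, 2] = 255  # object at column 2 in frame t+1

# Equivalent of cv2.absdiff: |F(t+1) - F(t)|, nonzero only where motion occurred
diff = np.abs(frame_t1.astype(np.int16) - frame_t.astype(np.int16)).astype(np.uint8)

# Both the old and the new positions light up in the difference image
print(np.transpose(np.nonzero(diff)))  # [[2 1] [2 2]]
```

Static background pixels cancel out, which is exactly why the difference image isolates motion.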
Object tracking is becoming more robust thanks to growing computational resources and ongoing research. There are several major use cases where object tracking is used extensively.
We have built a class EuclideanDistTracker for object tracking, combining all the steps we learned. It includes all the mathematical calculations behind the Euclidean distance tracker.
import math

class EuclideanDistTracker:
    def __init__(self):
        # Stores the center positions of the tracked objects: {id: (x, y)}
        self.center_points = {}
        # Counter used to assign a unique id to each new object
        self.id_count = 0

    def update(self, objects_rect):
        objects_bbs_ids = []
        for rect in objects_rect:
            x, y, w, h = rect
            center_x = (x + x + w) // 2
            center_y = (y + y + h) // 2

            # Check whether this centroid matches an already-tracked object
            same_object_detected = False
            for id, pt in self.center_points.items():
                distance = math.hypot(center_x - pt[0], center_y - pt[1])
                if distance < 25:
                    self.center_points[id] = (center_x, center_y)
                    objects_bbs_ids.append([x, y, w, h, id])
                    same_object_detected = True
                    break

            # New object detected: assign it a new id
            if same_object_detected is False:
                self.center_points[self.id_count] = (center_x, center_y)
                objects_bbs_ids.append([x, y, w, h, self.id_count])
                self.id_count += 1

        # Clean up the dictionary, keeping only the ids still in use
        new_center_points = {}
        for obj_bb_id in objects_bbs_ids:
            _, _, _, _, object_id = obj_bb_id
            center = self.center_points[object_id]
            new_center_points[object_id] = center
        self.center_points = new_center_points.copy()
        return objects_bbs_ids
You can download all the source code used in this article using this link. To avoid any mistakes, I suggest you download the tracker file from the link.
Save the above code in a Python file named tracker.py. We will import our tracker class while working on detection; the file tracker.py can also be downloaded using this link.
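To see what the tracker returns without running the full video pipeline, here is a condensed, self-contained sketch of the same matching logic. MiniTracker is a hypothetical, stripped-down stand-in for the EuclideanDistTracker class above, and the boxes are made-up values:

```python
import math

class MiniTracker:
    # Condensed version of the centroid tracker: match by nearest centroid
    def __init__(self):
        self.center_points = {}
        self.id_count = 0

    def update(self, rects):
        out = []
        for x, y, w, h in rects:
            cx, cy = (x + x + w) // 2, (y + y + h) // 2
            matched = False
            for obj_id, (px, py) in self.center_points.items():
                if math.hypot(cx - px, cy - py) < 25:
                    self.center_points[obj_id] = (cx, cy)
                    out.append([x, y, w, h, obj_id])
                    matched = True
                    break
            if not matched:
                self.center_points[self.id_count] = (cx, cy)
                out.append([x, y, w, h, self.id_count])
                self.id_count += 1
        return out

t = MiniTracker()
first = t.update([(100, 50, 40, 30)])    # new object -> gets id 0
second = t.update([(110, 52, 40, 30)])   # moved ~10 px -> keeps id 0
third = t.update([(300, 200, 40, 30)])   # far away -> new object, id 1
print(first, second, third)
```

Each call returns entries in the form [x, y, w, h, object_id], and a box that moves only a short distance keeps its id across calls.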
update → accepts an array of bounding box coordinates and returns a list of entries in the form [x, y, w, h, object_id].
object_id → the id assigned to that particular bounding box.
Importing the necessary packages along with our tracker class which we recently made:
import cv2
import numpy as np
from tracker import EuclideanDistTracker

tracker = EuclideanDistTracker()
cap = cv2.VideoCapture('highway.mp4')
ret, frame1 = cap.read()
ret, frame2 = cap.read()
We are using a highway video (highway.mp4) as our sample footage.
cap.read() → reads one frame and returns a boolean success flag along with the frame.
In this section, we will detect moving objects by reading two subsequent frames and feeding the detections to our tracker object.
The tracker object takes the coordinates of the bounding box detected around each moving object. We will filter out noise by enforcing a minimum contour area.
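The minimum-area filter can be sketched on its own. Here the contour area is approximated by the bounding-box area w * h (a simplification of cv2.contourArea for axis-aligned boxes), and the box values are made up for illustration:

```python
MIN_AREA = 300  # same threshold used in the main loop below

# Hypothetical detections as (x, y, w, h) bounding boxes
candidates = [
    (10, 10, 5, 5),    # area 25  -> noise, dropped
    (40, 60, 30, 20),  # area 600 -> kept
    (80, 20, 50, 15),  # area 750 -> kept
]

detections = [box for box in candidates if box[2] * box[3] >= MIN_AREA]
print(len(detections))  # 2 boxes survive the filter
```

Tiny contours (flickering pixels, compression noise) fall below the threshold and never reach the tracker.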
while cap.isOpened():
    # Difference between two subsequent frames highlights the moving pixels
    diff = cv2.absdiff(frame1, frame2)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Binarize the difference image (20 is a typical threshold value)
    _, threshold = cv2.threshold(blur, 20, 255, cv2.THRESH_BINARY)
    dilated = cv2.dilate(threshold, np.ones((3, 3), np.uint8), iterations=1)
    contours, _ = cv2.findContours(dilated, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    detections = []
    for contour in contours:
        (x, y, w, h) = cv2.boundingRect(contour)
        # Filter out small contours (noise)
        if cv2.contourArea(contour) < 300:
            continue
        detections.append([x, y, w, h])

    # Match the current detections to the tracked objects
    boxes_ids = tracker.update(detections)
    for box_id in boxes_ids:
        x, y, w, h, id = box_id
        cv2.putText(frame1, str(id), (x, y - 15), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
        cv2.rectangle(frame1, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('frame', frame1)

    frame1 = frame2
    ret, frame2 = cap.read()
    if not ret or cv2.waitKey(30) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
cv2.absdiff → computes the absolute difference between two subsequent frames, i.e., it detects the changes between them.
boxes_ids → returned by the tracker; it contains the (x, y, w, h) coordinates of each bounding box along with the id associated with it.
Output Frame:
In this article, we talked about object detection and tracking using OpenCV, and we used a Euclidean distance tracker to follow our objects.
Trying out deep learning-based trackers such as YOLO with DeepSORT promises better results, but they are computationally expensive.
The centroid tracker also doesn't take the camera angle into consideration; to counter this problem, we would need to apply a bird's-eye-view transform before calculating distances.
We built a vehicle counter system using the concept we discussed in this article.
I hope you enjoyed reading this article!
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.