Labeling, or annotating, images has long been one of the more demanding steps in computer vision. This article explores the teamwork of LabelImg and Detectron, a powerful duo that pairs precise annotation with efficient model building. LabelImg, easy to use and accurate, handles the careful annotation work, laying a solid foundation for reliable object detection.
As we explore LabelImg and grow comfortable drawing bounding boxes, we move seamlessly to Detectron. This robust framework takes our labeled data and puts it to work training advanced models. Together, LabelImg and Detectron make object detection approachable for everyone, whether you're a beginner or an expert. Come along as each annotated image helps us unlock the full power of visual information.
This article was published as a part of the Data Science Blogathon.
1. Create a Virtual Environment:
conda create -p ./venv python=3.8 -y
This command creates a conda environment at the path ./venv using Python 3.8.
2. Activate the Virtual Environment:
conda activate ./venv
Activate the virtual environment to isolate the installation of LabelImg.
1. Install LabelImg:
pip install labelImg
Install LabelImg within the activated virtual environment.
2. Launch LabelImg:
labelImg
If you encounter errors while launching LabelImg, I’ve prepared a zip archive containing the virtual environment (venv) for your convenience.
1. Download the Zip Archive:
2. Create a LabelImg Folder:
3. Extract the venv Folder:
4. Activate the Virtual Environment:
conda activate ./venv
This process ensures you have a pre-configured virtual environment ready to use with LabelImg. The provided zip archive encapsulates the necessary dependencies, allowing a smoother experience without worrying about potential installation issues.
Now, proceed with the earlier steps for installing and using LabelImg within this activated virtual environment.
1. Annotate Images in PascalVOC Format:
Open your image directory in LabelImg, draw a bounding box around each object, assign a class label, and save. Each saved annotation produces a Pascal VOC .xml file like this one:
<annotation>
<folder>train</folder>
<filename>0a8a68ee-f587-4dea-beec-79d02e7d3fa4___RS_Early.B 8461.JPG</filename>
<path>/home/suyodhan/Documents/Blog /label/train/0a8a68ee-f587-4dea-beec-79d02e7d3fa4___RS_Early.B 8461.JPG</path>
<source>
<database>Unknown</database>
</source>
<size>
<width>256</width>
<height>256</height>
<depth>3</depth>
</size>
<segmented>0</segmented>
<object>
<name>Potato___Early_blight</name>
<pose>Unspecified</pose>
<truncated>0</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>12</xmin>
<ymin>18</ymin>
<xmax>252</xmax>
<ymax>250</ymax>
</bndbox>
</object>
</annotation>
This XML structure follows the Pascal VOC annotation format, commonly used for object detection datasets. This format provides a standardized representation of annotated data for training computer vision models. If you have additional images with annotations, you can continue to generate similar XML files for each annotated object in the respective images.
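Before converting anything, it can help to read one of these files back programmatically, both to sanity-check your labels and to see exactly which fields the converter below relies on. Here is a minimal sketch using Python's built-in xml.etree.ElementTree; the annotation filename is just an illustrative example:
import xml.etree.ElementTree as ET

# Hypothetical path: point this at any .xml file saved by LabelImg.
tree = ET.parse("train/0a8a68ee-f587-4dea-beec-79d02e7d3fa4___RS_Early.B 8461.xml")
root = tree.getroot()

# Each <object> element holds one labeled bounding box.
for obj in root.findall("object"):
    name = obj.find("name").text
    box = obj.find("bndbox")
    xmin, ymin = int(box.find("xmin").text), int(box.find("ymin").text)
    xmax, ymax = int(box.find("xmax").text), int(box.find("ymax").text)
    print(f"{name}: ({xmin}, {ymin}) -> ({xmax}, {ymax})")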
Object detection models often require annotations in specific formats to train and evaluate effectively. While Pascal VOC is a widely used format, specific frameworks like Detectron prefer COCO annotations. To bridge this gap, we introduce a versatile Python script, voc2coco.py, designed to convert Pascal VOC annotations to the COCO format seamlessly.
#!/usr/bin/python
# Uses only the Python standard library, so no extra packages are required.
import os
import json
import xml.etree.ElementTree as ET
import glob
START_BOUNDING_BOX_ID = 1
PRE_DEFINE_CATEGORIES = None
# If necessary, pre-define category and its id
# PRE_DEFINE_CATEGORIES = {"aeroplane": 1, "bicycle": 2, "bird": 3, "boat": 4,
# "bottle":5, "bus": 6, "car": 7, "cat": 8, "chair": 9,
# "cow": 10, "diningtable": 11, "dog": 12, "horse": 13,
# "motorbike": 14, "person": 15, "pottedplant": 16,
# "sheep": 17, "sofa": 18, "train": 19, "tvmonitor": 20}
def get(root, name):
vars = root.findall(name)
return vars
def get_and_check(root, name, length):
vars = root.findall(name)
if len(vars) == 0:
raise ValueError("Can not find %s in %s." % (name, root.tag))
if length > 0 and len(vars) != length:
raise ValueError(
"The size of %s is supposed to be %d, but is %d."
% (name, length, len(vars))
)
if length == 1:
vars = vars[0]
return vars
def get_filename_as_id(filename):
    """Use the image filename (without extension) as the COCO image id."""
    filename = filename.replace("\\", "/")
    return os.path.splitext(os.path.basename(filename))[0]
def get_categories(xml_files):
"""Generate category name to id mapping from a list of xml files.
Arguments:
xml_files {list} -- A list of xml file paths.
Returns:
dict -- category name to id mapping.
"""
classes_names = []
for xml_file in xml_files:
tree = ET.parse(xml_file)
root = tree.getroot()
        for member in root.findall("object"):
            classes_names.append(member.find("name").text)
classes_names = list(set(classes_names))
classes_names.sort()
return {name: i for i, name in enumerate(classes_names)}
def convert(xml_files, json_file):
json_dict = {"images": [], "type": "instances", "annotations": [], "categories": []}
if PRE_DEFINE_CATEGORIES is not None:
categories = PRE_DEFINE_CATEGORIES
else:
categories = get_categories(xml_files)
bnd_id = START_BOUNDING_BOX_ID
for xml_file in xml_files:
tree = ET.parse(xml_file)
root = tree.getroot()
path = get(root, "path")
if len(path) == 1:
filename = os.path.basename(path[0].text)
elif len(path) == 0:
filename = get_and_check(root, "filename", 1).text
else:
raise ValueError("%d paths found in %s" % (len(path), xml_file))
        ## Use the filename (minus extension) as a unique image id.
        image_id = get_filename_as_id(filename)
size = get_and_check(root, "size", 1)
width = int(get_and_check(size, "width", 1).text)
height = int(get_and_check(size, "height", 1).text)
image = {
"file_name": filename,
"height": height,
"width": width,
"id": image_id,
}
json_dict["images"].append(image)
## Currently we do not support segmentation.
# segmented = get_and_check(root, 'segmented', 1).text
# assert segmented == '0'
for obj in get(root, "object"):
category = get_and_check(obj, "name", 1).text
if category not in categories:
new_id = len(categories)
categories[category] = new_id
category_id = categories[category]
bndbox = get_and_check(obj, "bndbox", 1)
xmin = int(get_and_check(bndbox, "xmin", 1).text) - 1
ymin = int(get_and_check(bndbox, "ymin", 1).text) - 1
xmax = int(get_and_check(bndbox, "xmax", 1).text)
ymax = int(get_and_check(bndbox, "ymax", 1).text)
assert xmax > xmin
assert ymax > ymin
o_width = abs(xmax - xmin)
o_height = abs(ymax - ymin)
ann = {
"area": o_width * o_height,
"iscrowd": 0,
"image_id": image_id,
"bbox": [xmin, ymin, o_width, o_height],
"category_id": category_id,
"id": bnd_id,
"ignore": 0,
"segmentation": [],
}
json_dict["annotations"].append(ann)
bnd_id = bnd_id + 1
for cate, cid in categories.items():
cat = {"supercategory": "none", "id": cid, "name": cate}
json_dict["categories"].append(cat)
    # os.makedirs(os.path.dirname(json_file), exist_ok=True)
    # Write the assembled COCO dictionary to disk as JSON.
    with open(json_file, "w") as json_fp:
        json.dump(json_dict, json_fp)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(
description="Convert Pascal VOC annotation to COCO format."
)
parser.add_argument("xml_dir", help="Directory path to xml files.", type=str)
parser.add_argument("json_file", help="Output COCO format json file.", type=str)
args = parser.parse_args()
xml_files = glob.glob(os.path.join(args.xml_dir, "*.xml"))
# If you want to do train/test split, you can pass a subset of xml files to convert function.
print("Number of xml files: {}".format(len(xml_files)))
convert(xml_files, args.json_file)
print("Success: {}".format(args.json_file))
The voc2coco.py script keeps the conversion simple by relying solely on Python’s standard library, with xml.etree.ElementTree handling the XML parsing. Before diving into usage, let’s explore its key components:
1. Dependencies: Everything the script imports (os, json, glob, and xml.etree.ElementTree) ships with Python, so no additional installation is needed.
2. Configuration: START_BOUNDING_BOX_ID sets the id of the first annotation, and the optional PRE_DEFINE_CATEGORIES dictionary lets you pin category names to fixed ids across runs.
3. Functions: Small helpers (get, get_and_check) fetch and validate XML elements, get_categories builds the category-name-to-id mapping, and convert assembles and writes the final COCO dictionary.
Executing the script is straightforward: run it from the command line, providing the path to your Pascal VOC XML files and the desired output path for the COCO format JSON file. Here’s an example:
python voc2coco.py /path/to/xml/files /path/to/output/output.json
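As the comment near the end of the script notes, you can also produce separate train/validation JSON files by importing convert and passing it subsets of the XML files. A minimal sketch, assuming the script sits next to your code; the annotations folder and the 80/20 split ratio are illustrative assumptions:
import glob
import os
import random

from voc2coco import convert  # the __main__ guard makes the script importable

xml_files = glob.glob(os.path.join("annotations", "*.xml"))  # hypothetical folder
random.shuffle(xml_files)
split = int(0.8 * len(xml_files))  # illustrative 80/20 split

convert(xml_files[:split], "train.json")
convert(xml_files[split:], "val.json")
One caveat: when PRE_DEFINE_CATEGORIES is None, each call derives category ids from its own subset of files, so set PRE_DEFINE_CATEGORIES when splitting to guarantee the train and validation files share the same ids.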
Output:
The script outputs a well-structured COCO format JSON file containing essential information about images, annotations, and categories.
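With the JSON in hand, you can quickly sanity-check it and, if Detectron2 is your training framework, register it as a dataset using register_coco_instances, Detectron2's standard helper for COCO-style JSON. The dataset name and image directory below are assumptions, so adjust them to your project:
import json

# Quick sanity check on the converted file.
with open("output.json") as f:
    coco = json.load(f)
print(len(coco["images"]), "images,", len(coco["annotations"]), "annotations")
print("categories:", [c["name"] for c in coco["categories"]])

# Register the dataset with Detectron2 (requires detectron2 to be installed).
from detectron2.data.datasets import register_coco_instances
register_coco_instances("my_dataset_train", {}, "output.json", "path/to/images")
Once registered, the dataset name can be referenced in a Detectron2 config just like any built-in dataset.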
Wrapping up our journey through object detection with LabelImg and Detectron, it’s crucial to recognize the variety of annotation tools catering to enthusiasts and professionals alike. LabelImg, as an open-source gem, offers versatility and accessibility, making it a top choice.
Beyond free tools, paid solutions like VGG Image Annotator (VIA), RectLabel, and Labelbox step in for complex tasks and large projects. These platforms bring advanced features and scalability, albeit with a financial investment, ensuring efficiency in high-stakes endeavors.
Our exploration emphasizes choosing the right annotation tool based on project specifics, budget, and sophistication level. Whether sticking to LabelImg’s openness or investing in paid tools, the key is alignment with your project’s scale and goals. In the evolving field of computer vision, annotation tools continue to diversify, providing options for projects of all sizes and complexities.
1. LabelImg Documentation:
2. Detectron Framework Documentation:
3. VGG Image Annotator (VIA) Guide:
4. RectLabel Documentation:
5. Labelbox Learning Center:
Q: What is LabelImg, and what sets it apart from other annotation tools?
A: LabelImg is an open-source image annotation tool for object detection tasks. Its user-friendly interface and versatility set it apart. Unlike some tools, LabelImg allows precise bounding box annotation, making it a preferred choice for those new to object detection.
Q: Are there paid alternatives to free annotation tools like LabelImg?
A: Yes, several paid annotation tools, such as VGG Image Annotator (VIA), RectLabel, and Labelbox, offer advanced features and scalability. While free tools like LabelImg are excellent for basic tasks, paid solutions are tailored for more complex projects, providing collaboration features and enhanced efficiency.
Q: Why convert annotations from Pascal VOC to COCO format?
A: Converting annotations to the format a framework expects, such as COCO for Detectron, is crucial for compatibility. It ensures consistent class labeling and seamless integration into the training pipeline, facilitating the creation of accurate object detection models.
Q: What role does Detectron play in object detection?
A: Detectron is a robust object detection framework streamlining the model training process. It plays a crucial role in handling annotated data, preparing it for training, and optimizing the overall efficiency of object detection models.
Q: Are paid annotation tools only worthwhile for enterprise-level projects?
A: While paid annotation tools are often associated with enterprise-level tasks, they can also benefit small-scale projects. The decision depends on the specific requirements, budget constraints, and the desired level of sophistication for annotation tasks.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.