Annotation Mastery: Seamless Detectron Integration with LabelImg

suyodhanj6 Last Updated : 06 Dec, 2023
8 min read

Introduction

Labeling, or annotating, images has long been one of the more challenging steps in computer vision. Our exploration delves into the teamwork of LabelImg and Detectron, a powerful duo that combines precise annotation with efficient model building. LabelImg, which is easy to use and accurate, leads in careful annotation, laying a solid foundation for reliable object detection.

As we explore LabelImg and get better at drawing bounding boxes, we move seamlessly to Detectron. This robust framework consumes our labeled data, making it useful for training advanced models. Together, LabelImg and Detectron make object detection approachable for everyone, whether you’re a beginner or an expert. Come along as each labeled image helps us unlock the full power of visual information.


Learning Objectives

  • Getting Started with LabelImg.
  • Environment Setup and LabelImg Installation.
  • Understanding LabelImg and Its Functionality.
  • Converting VOC or Pascal Data to COCO Format for Object Detection.

This article was published as a part of the Data Science Blogathon.

Flowchart

Flowchart of Seamless Detectron Integration with LabelImg

Setting Up Your Environment

1. Create a Virtual Environment:

conda create -p ./venv python=3.8 -y

This command creates a virtual environment at ./venv (the -p flag specifies the path) using Python 3.8.

2. Activate the Virtual Environment: 

conda activate ./venv

Activate the virtual environment to isolate the installation of LabelImg.
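To confirm the environment is active and running the expected interpreter, a quick check is:

python --version

This should report Python 3.8.x; if it doesn’t, the environment was not activated correctly.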

Installing and Using LabelImg

1. Install LabelImg:

pip install labelImg

Install LabelImg within the activated virtual environment.

2. Launch LabelImg:

labelImg
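Running labelImg with no arguments opens an empty workspace. Per the LabelImg README, you can optionally pass an image directory and a predefined classes file on the command line (the paths below are placeholders):

labelImg ./images ./data/predefined_classes.txt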

Troubleshooting: If You Encounter Errors Running the Script

If you encounter errors while running the script, I’ve prepared a zip archive containing the virtual environment (venv) for your convenience.

1. Download the Zip Archive:

  • Download the venv.zip archive from the provided link.

2. Create a LabelImg Folder:

  • Create a new folder named LabelImg on your local machine.

3. Extract the venv Folder:

  • Extract the contents of the venv.zip archive into the LabelImg folder.

4. Activate the Virtual Environment:

  • Open your command prompt or terminal.
  • Navigate to the LabelImg folder.
  • Run the following command to activate the virtual environment:
conda activate ./venv

This process ensures you have a pre-configured virtual environment ready to use with LabelImg. The provided zip archive encapsulates the necessary dependencies, allowing a smoother experience without worrying about potential installation issues.

Now, proceed with the earlier steps for installing and using LabelImg within this activated virtual environment.

Annotation Workflow with LabelImg

1. Annotate Images in PascalVOC Format:

  • Launch LabelImg (installed earlier).
  • Click ‘Change default saved annotation folder’ in Menu/File.
  • Click ‘Open Dir’ to select the image directory.
  • Use ‘Create RectBox’ to annotate objects in the image.
  • Save the annotations to the specified folder.

Inside the generated .xml file:

<annotation>
	<folder>train</folder>
	<filename>0a8a68ee-f587-4dea-beec-79d02e7d3fa4___RS_Early.B 8461.JPG</filename>
	<path>/home/suyodhan/Documents/Blog /label/train/0a8a68ee-f587-4dea-beec-79d02e7d3fa4___RS_Early.B 8461.JPG</path>
	<source>
		<database>Unknown</database>
	</source>
	<size>
		<width>256</width>
		<height>256</height>
		<depth>3</depth>
	</size>
	<segmented>0</segmented>
	<object>
		<name>Potato___Early_blight</name>
		<pose>Unspecified</pose>
		<truncated>0</truncated>
		<difficult>0</difficult>
		<bndbox>
			<xmin>12</xmin>
			<ymin>18</ymin>
			<xmax>252</xmax>
			<ymax>250</ymax>
		</bndbox>
	</object>
</annotation>

This XML structure follows the Pascal VOC annotation format, commonly used for object detection datasets. This format provides a standardized representation of annotated data for training computer vision models. If you have additional images with annotations, you can continue to generate similar XML files for each annotated object in the respective images.
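Before converting anything, it can be useful to sanity-check a few annotations programmatically. The following is a minimal sketch using Python’s built-in xml.etree.ElementTree (the same module the conversion script below relies on); the file path is a placeholder:

import xml.etree.ElementTree as ET

# Parse a single Pascal VOC annotation file (placeholder path).
root = ET.parse("train/0a8a68ee-f587-4dea-beec-79d02e7d3fa4___RS_Early.B 8461.xml").getroot()

for obj in root.findall("object"):
    name = obj.find("name").text
    box = obj.find("bndbox")
    xmin, ymin, xmax, ymax = (int(box.find(tag).text) for tag in ("xmin", "ymin", "xmax", "ymax"))
    print(name, (xmin, ymin, xmax, ymax))
# For the sample above this prints: Potato___Early_blight (12, 18, 252, 250)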

Converting Pascal VOC Annotations to COCO Format: A Python Script

Object detection models often require annotations in specific formats to train and evaluate effectively. While Pascal VOC is a widely used format, frameworks such as Detectron expect COCO-style annotations. To bridge this gap, we introduce a versatile Python script, voc2coco.py, designed to convert Pascal VOC annotations to the COCO format seamlessly.

#!/usr/bin/python
# Convert Pascal VOC XML annotations into a single COCO-format JSON file.
# Uses only the Python standard library (xml.etree.ElementTree, json, glob, os).

import os
import json
import xml.etree.ElementTree as ET
import glob

START_BOUNDING_BOX_ID = 1
PRE_DEFINE_CATEGORIES = None
# If necessary, pre-define category and its id
#  PRE_DEFINE_CATEGORIES = {"aeroplane": 1, "bicycle": 2, "bird": 3, "boat": 4,
#  "bottle":5, "bus": 6, "car": 7, "cat": 8, "chair": 9,
#  "cow": 10, "diningtable": 11, "dog": 12, "horse": 13,
#  "motorbike": 14, "person": 15, "pottedplant": 16,
#  "sheep": 17, "sofa": 18, "train": 19, "tvmonitor": 20}


def get(root, name):
    vars = root.findall(name)
    return vars


def get_and_check(root, name, length):
    vars = root.findall(name)
    if len(vars) == 0:
        raise ValueError("Can not find %s in %s." % (name, root.tag))
    if length > 0 and len(vars) != length:
        raise ValueError(
            "The size of %s is supposed to be %d, but is %d."
            % (name, length, len(vars))
        )
    if length == 1:
        vars = vars[0]
    return vars


def get_filename_as_int(filename):
    # This variant keeps the base filename (without extension) as a string, so
    # non-numeric names such as the UUID-style filenames above work as image ids.
    filename = filename.replace("\\", "/")
    return os.path.splitext(os.path.basename(filename))[0]


def get_categories(xml_files):
    """Generate category name to id mapping from a list of xml files.
    
    Arguments:
        xml_files {list} -- A list of xml file paths.
    
    Returns:
        dict -- category name to id mapping.
    """
    classes_names = []
    for xml_file in xml_files:
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall("object"):
            classes_names.append(member[0].text)
    classes_names = list(set(classes_names))
    classes_names.sort()
    return {name: i for i, name in enumerate(classes_names)}


def convert(xml_files, json_file):
    json_dict = {"images": [], "type": "instances", "annotations": [], "categories": []}
    if PRE_DEFINE_CATEGORIES is not None:
        categories = PRE_DEFINE_CATEGORIES
    else:
        categories = get_categories(xml_files)
    bnd_id = START_BOUNDING_BOX_ID
    for xml_file in xml_files:
        tree = ET.parse(xml_file)
        root = tree.getroot()
        path = get(root, "path")
        if len(path) == 1:
            filename = os.path.basename(path[0].text)
        elif len(path) == 0:
            filename = get_and_check(root, "filename", 1).text
        else:
            raise ValueError("%d paths found in %s" % (len(path), xml_file))
        ## Derive the image id from the base filename (kept as a string by get_filename_as_int)
        image_id = get_filename_as_int(filename)
        size = get_and_check(root, "size", 1)
        width = int(get_and_check(size, "width", 1).text)
        height = int(get_and_check(size, "height", 1).text)
        image = {
            "file_name": filename,
            "height": height,
            "width": width,
            "id": image_id,
        }
        json_dict["images"].append(image)
        ## Currently we do not support segmentation.
        #  segmented = get_and_check(root, 'segmented', 1).text
        #  assert segmented == '0'
        for obj in get(root, "object"):
            category = get_and_check(obj, "name", 1).text
            if category not in categories:
                new_id = len(categories)
                categories[category] = new_id
            category_id = categories[category]
            bndbox = get_and_check(obj, "bndbox", 1)
            xmin = int(get_and_check(bndbox, "xmin", 1).text) - 1
            ymin = int(get_and_check(bndbox, "ymin", 1).text) - 1
            xmax = int(get_and_check(bndbox, "xmax", 1).text)
            ymax = int(get_and_check(bndbox, "ymax", 1).text)
            assert xmax > xmin
            assert ymax > ymin
            o_width = abs(xmax - xmin)
            o_height = abs(ymax - ymin)
            ann = {
                "area": o_width * o_height,
                "iscrowd": 0,
                "image_id": image_id,
                "bbox": [xmin, ymin, o_width, o_height],
                "category_id": category_id,
                "id": bnd_id,
                "ignore": 0,
                "segmentation": [],
            }
            json_dict["annotations"].append(ann)
            bnd_id = bnd_id + 1

    for cate, cid in categories.items():
        cat = {"supercategory": "none", "id": cid, "name": cate}
        json_dict["categories"].append(cat)

    # os.makedirs(os.path.dirname(json_file), exist_ok=True)
    with open(json_file, "w") as json_fp:
        json.dump(json_dict, json_fp)


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(
        description="Convert Pascal VOC annotation to COCO format."
    )
    parser.add_argument("xml_dir", help="Directory path to xml files.", type=str)
    parser.add_argument("json_file", help="Output COCO format json file.", type=str)
    args = parser.parse_args()
    xml_files = glob.glob(os.path.join(args.xml_dir, "*.xml"))

    # If you want to do train/test split, you can pass a subset of xml files to convert function.
    print("Number of xml files: {}".format(len(xml_files)))
    convert(xml_files, args.json_file)
    print("Success: {}".format(args.json_file))

Script Overview

The voc2coco.py script simplifies the conversion process using Python’s built-in xml.etree.ElementTree module, so no third-party parsing libraries are required. Before diving into usage, let’s explore its key components:

1. Dependencies:

  • The script relies only on the Python standard library (xml.etree.ElementTree, json, glob), so nothing extra needs to be installed.

2. Configuration:

  • Optionally pre-define categories using the PRE_DEFINE_CATEGORIES variable. Uncomment and modify this section according to your dataset.

3. Functions:

  • get, get_and_check, get_filename_as_int: Helper functions for XML parsing.
  • get_categories: Generates a category name to ID mapping from a list of XML files.
  • convert: The main conversion function processes XML files and generates COCO format JSON.
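As a quick preview (with a hypothetical path), you can call get_categories directly to see the category-name-to-id mapping the script will derive before running the full conversion:

import glob
from voc2coco import get_categories  # assumes voc2coco.py is in the current directory

xml_files = glob.glob("label/train/*.xml")
print(get_categories(xml_files))  # e.g. {'Potato___Early_blight': 0}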

How to Use

Executing the script is straightforward: run it from the command line, providing the path to your Pascal VOC XML files and the desired output path for the COCO format JSON file. Here’s an example:

python voc2coco.py /path/to/xml/files /path/to/output/output.json
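As noted in the script’s comments, you can also do a train/validation split by passing subsets of XML files to convert. Here is a minimal sketch under hypothetical paths, fixing the category mapping once so both splits share the same ids:

import glob
import os
import random

import voc2coco  # assumes voc2coco.py sits next to this script

xml_files = glob.glob(os.path.join("label", "*.xml"))
random.seed(42)
random.shuffle(xml_files)
split = int(0.8 * len(xml_files))

# Derive the category mapping once so train.json and val.json use identical ids.
voc2coco.PRE_DEFINE_CATEGORIES = voc2coco.get_categories(xml_files)

voc2coco.convert(xml_files[:split], "train.json")
voc2coco.convert(xml_files[split:], "val.json")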

Output:

The script outputs a well-structured COCO format JSON file containing essential information about images, annotations, and categories.
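For the single annotation shown earlier, the generated JSON would look roughly like this (the image id is the base filename, and the bbox is [x, y, width, height] after the 1-pixel offset applied by the script):

{
  "images": [{"file_name": "0a8a68ee-f587-4dea-beec-79d02e7d3fa4___RS_Early.B 8461.JPG",
              "height": 256, "width": 256,
              "id": "0a8a68ee-f587-4dea-beec-79d02e7d3fa4___RS_Early.B 8461"}],
  "type": "instances",
  "annotations": [{"area": 56153, "iscrowd": 0,
                   "image_id": "0a8a68ee-f587-4dea-beec-79d02e7d3fa4___RS_Early.B 8461",
                   "bbox": [11, 17, 241, 233], "category_id": 0,
                   "id": 1, "ignore": 0, "segmentation": []}],
  "categories": [{"supercategory": "none", "id": 0, "name": "Potato___Early_blight"}]
}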

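To close the loop with Detectron, here is a rough sketch (assuming Detectron2 is installed; the dataset name and image directory are hypothetical) of registering the converted JSON so it can be used for training:

from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import register_coco_instances

# Register the converted COCO JSON with Detectron2 (hypothetical name and paths).
register_coco_instances("leaf_train", {}, "output.json", "label/train")

# The dataset can now be referenced in a training config, e.g. cfg.DATASETS.TRAIN = ("leaf_train",)
dataset_dicts = DatasetCatalog.get("leaf_train")
print(len(dataset_dicts), "images registered")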

Conclusion

Wrapping up our journey through object detection with LabelImg and Detectron, it’s crucial to recognize the variety of annotation tools catering to enthusiasts and professionals. LabelImg, as an open-source gem, offers versatility and accessibility, making it a top choice.

Beyond LabelImg, tools such as the free VGG Image Annotator (VIA) and commercial platforms like RectLabel and Labelbox step in for complex tasks and large projects. The commercial platforms bring advanced features and scalability, albeit with a financial investment, ensuring efficiency in high-stakes endeavors.

Our exploration emphasizes choosing the right annotation tool based on project specifics, budget, and sophistication level. Whether sticking to LabelImg’s openness or investing in paid tools, the key is alignment with your project’s scale and goals. In the evolving field of computer vision, annotation tools continue to diversify, providing options for projects of all sizes and complexities.

Key Takeaways

  • LabelImg’s intuitive interface and advanced features make it a versatile open-source tool for precise image annotation, ideal for those entering object detection.
  • Other tools, from the free VGG Image Annotator (VIA) to paid options like RectLabel and Labelbox, cater to complex annotation tasks and large-scale projects, offering advanced features and scalability.
  • The critical takeaway is choosing the right annotation tool based on project needs, budget, and desired sophistication, ensuring efficiency and success in object detection endeavors.

Resources for Further Learning:

1. LabelImg Documentation:

  • Explore the official documentation for LabelImg to gain in-depth insights into its features and functionalities.
  • LabelImg Documentation

2. Detectron Framework Documentation:

  • Dive into the documentation of Detectron, the powerful object detection framework, to understand its capabilities and usage.
  • Detectron Documentation

3. VGG Image Annotator (VIA) Guide:

  • If you’re interested in exploring VIA, the VGG Image Annotator, refer to the comprehensive guide for detailed instructions.
  • VIA User Guide

4. RectLabel Documentation:

  • Learn more about RectLabel, a paid annotation tool, by referring to its official documentation for guidance on usage and features.
  • RectLabel Documentation

5. Labelbox Learning Center:

  • Discover educational resources and tutorials in the Labelbox Learning Center to enhance your understanding of this annotation platform.
  • Labelbox Learning Center

Frequently Asked Questions

Q1: What is LabelImg, and how does it differ from other annotation tools?

A: LabelImg is an open-source image annotation tool for object detection tasks. Its user-friendly interface and versatility set it apart. Unlike some tools, LabelImg allows precise bounding box annotation, making it a preferred choice for those new to object detection.

Q2: Are there alternative paid annotation tools, and how do they compare to free options?

A: Yes, several other annotation tools, such as the free VGG Image Annotator (VIA) and paid options like RectLabel and Labelbox, offer advanced features and scalability. While free tools like LabelImg are excellent for basic tasks, paid solutions are tailored for more complex projects, providing collaboration features and enhanced efficiency.

Q3: What is the significance of converting annotations to the COCO format?

A: Converting Pascal VOC annotations to COCO format is crucial for compatibility with frameworks like Detectron. It ensures consistent class labeling and seamless integration into the training pipeline, facilitating the creation of accurate object detection models.

Q4: How does Detectron contribute to efficient model training in object detection?

A: Detectron is a robust object detection framework streamlining the model training process. It plays a crucial role in handling annotated data, preparing it for training, and optimizing the overall efficiency of object detection models.

Q5: Can I use paid annotation tools for small-scale projects, or are they mainly for enterprise-level tasks?

A: While paid annotation tools are often associated with enterprise-level tasks, they can also benefit small-scale projects. The decision depends on the specific requirements, budget constraints, and the desired level of sophistication for annotation tasks.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion. 

As a Data Scientist, I leverage my expertise in statistical analysis, machine learning, and data visualization to derive insights and make informed decisions. I have experience working with various programming languages, databases, and machine learning frameworks, enabling me to tackle complex data problems and deliver actionable results. I am a collaborative problem-solver who can work with stakeholders to deliver scalable and secure data solutions.
