When it comes to face recognition, researchers are constantly pushing the boundaries of accuracy and scalability. However, a significant challenge arises from the exponential growth of identities set against the finite capacity of GPU memory. Previous studies have primarily focused on refining loss functions for facial feature extraction networks, with softmax-based loss functions driving advancements in face recognition performance. Nevertheless, bridging the widening gap between the escalating number of identities and the limitations of GPU memory has proven increasingly challenging. In this article, we explore Partial FC, a strategy for face recognition at massive scale.
The softmax loss and its variants have been widely adopted as objectives for face recognition tasks. These functions make global feature-to-class comparisons during the multiplication between the embedding features and the linear transformation matrix.
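As a minimal sketch of that computation (PyTorch, with deliberately small illustrative sizes; production systems typically apply margin-based variants such as ArcFace on normalized features):

import torch
import torch.nn.functional as F

# Illustrative sizes; real deployments push C into the millions.
d, C = 512, 100_000
W = torch.randn(d, C) * 0.01             # linear transformation matrix (one column per class)
embeddings = torch.randn(32, d)          # a mini-batch of face embeddings

logits = embeddings @ W                  # global feature-to-class comparison, shape (32, C)
labels = torch.randint(0, C, (32,))
loss = F.cross_entropy(logits, labels)   # softmax loss over all C classes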
However, when dealing with a massive number of identities in the training set, the cost of storing and computing the final linear matrix often exceeds the capabilities of current GPU hardware. This can result in training failures.
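As a rough back-of-the-envelope illustration: with a typical 512-dimensional embedding and 10 million identities, the weight matrix alone holds 512 × 10^7 ≈ 5.1 billion parameters, about 20 GB in fp32, before accounting for its gradients and optimizer state.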
Researchers have explored various techniques to alleviate this bottleneck. Each has its own set of trade-offs and limitations.
HF-softmax employs a dynamic selection process for active class centers within each mini-batch. This selection is facilitated through the construction of a random hash forest in the embedding space, enabling the retrieval of approximate nearest class centers for each feature. However, all class centers must still be stored in RAM, and the retrieval step itself adds non-trivial computational overhead.
Softmax Dissection, on the other hand, divides the softmax loss into intra-class and inter-class objectives, reducing redundant computation for the inter-class component. While effective, this approach is limited in its adaptability and versatility, as it applies only to specific softmax-based loss functions.
Both of these methods operate on the principle of data parallelism during multi-GPU training. Despite attempting to approximate the softmax loss function with a subset of class centers, they still incur significant inter-GPU communication costs for gradient averaging and SGD synchronization. Additionally, the selection of class centers is constrained by the memory capacity of individual GPUs, further restricting their scalability.
The ArcFace paper introduced model parallelism, which partitions the softmax weight matrix across different GPUs and calculates the full-class softmax loss with minimal communication overhead. This approach successfully trained 1 million identities using eight GPUs on a single machine.
The model parallel approach partitions the softmax weight matrix W ∈ R^(d×C) into k sub-matrices w_i of size d × (C/k), where d is the embedding feature dimension and C is the number of classes. Each sub-matrix w_i is then placed on the i-th GPU.
To calculate the final softmax outputs, each GPU independently computes the numerators e^(w_i^T · X), where X is the input feature. The denominator Σ_{j=1}^{C} e^(w_j^T · X) requires gathering information from all other GPUs, which is done by first calculating the local sum on each GPU and then communicating the local sums to compute the global sum.
This approach significantly reduces inter-GPU communication compared to naive data parallelism, as only the local sums need to be communicated instead of the gradients for the entire weight matrix W.
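The pattern can be sketched as follows (a minimal illustration, assuming a torch.distributed process group is already initialized; the function name and shapes are my own, not from the paper):

import torch
import torch.distributed as dist

def model_parallel_softmax_denominator(x, w_local):
    # x:       (batch, d) embedding features, replicated on every GPU
    # w_local: (d, C/k)   this GPU's shard of the weight matrix W
    logits_local = x @ w_local                        # (batch, C/k) local logits
    # Use the global maximum for numerical stability before exponentiating.
    local_max = logits_local.max(dim=1, keepdim=True).values
    dist.all_reduce(local_max, op=dist.ReduceOp.MAX)
    # Each GPU sums the exponentials of its own logits...
    denom = torch.exp(logits_local - local_max).sum(dim=1, keepdim=True)
    # ...and only this (batch, 1) local sum crosses the wire, not the
    # gradients of the full weight matrix W.
    dist.all_reduce(denom, op=dist.ReduceOp.SUM)
    return denom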
For more details on the ArcFace loss function, please see my previous blog (ArcFace Loss Function for Deep Face Recognition), where I explain it in detail.
While model parallelism mitigates the memory burden of storing the weight matrix W, it introduces a new bottleneck: the storage of the predicted logits.
The predicted logits are intermediate values computed during the forward pass, and their storage requirement scales with the total batch size across all GPUs. As the number of identities and GPUs grows, the memory consumed by the logits can quickly exceed GPU capacity.
This limitation restricts the scalability of the model parallel approach, even with an increasing number of GPUs.
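A quick calculation shows why simply adding GPUs does not help. With a per-GPU batch size n and k GPUs, each GPU holds a logit matrix of shape (n·k) × (C/k), i.e. n·C entries, which is independent of k. As a rough illustration, with n = 64 and C = 10 million identities, that is 6.4 × 10^8 fp32 values, roughly 2.56 GB per GPU, no matter how many GPUs are added.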
To overcome the limitations of previous approaches, the authors of the "Partial FC" paper propose a groundbreaking solution.
Partial FC (Fully Connected)
Partial FC introduces a softmax approximation algorithm that maintains state-of-the-art accuracy while using only a fraction (e.g., 10%) of the class centers. By carefully selecting a subset of class centers during training, it significantly reduces memory and computational requirements, enabling the training of face recognition models with an unprecedented number of identities.
The key to Partial FC's effectiveness lies in how it selects the class centers for each iteration. Two strategies are proposed: completely random sampling, where a fraction of the class centers is sampled uniformly at random each iteration, and Positive Plus Randomly Negative (PPRN), where the centers of the identities present in the current mini-batch (the positive centers) are always kept and the remaining slots are filled with randomly sampled negative centers.
According to the research, PPRN outperforms the completely random approach, especially at lower sampling rates. This is because PPRN ensures that the gradients carry both the direction pushing samples away from negative centers and the intra-class clustering objective.
By splitting the softmax weight matrix across multiple GPUs and partitioning the input samples across these GPUs, Partial FC ensures that each GPU only processes a subset of the identities. This ingenious approach not only tackles the memory bottleneck but also minimizes the costly inter-GPU communication required for gradient synchronization.
Partial FC is easy to use: the paper provides clear instructions and code for adding it to your projects. The authors also released Glint360K, a massive, high-quality dataset for training models with Partial FC. With these tools, anyone can unlock the power of large-scale face recognition.
import torch

def sample(self, labels, index_positive):
    # Select a subset of class centers for this iteration (PPRN sampling).
    with torch.no_grad():
        # Positive centers: the unique identities in this mini-batch that
        # live on this GPU (labels are already in local index space).
        positive = torch.unique(labels[index_positive], sorted=True).cuda()
        if self.num_sample - positive.size(0) >= 0:
            # Score every local center randomly, then lift the positive
            # centers above any random draw so top-k always keeps them.
            perm = torch.rand(size=[self.num_local]).cuda()
            perm[positive] = 2.0
            # The remaining slots are filled by randomly chosen negatives.
            index = torch.topk(perm, k=self.num_sample)[1].cuda()
            index = index.sort()[0].cuda()
        else:
            # More positives than the sampling budget: keep only positives.
            index = positive
        self.weight_index = index
        # Remap labels into the compacted index space of the sampled centers.
        labels[index_positive] = torch.searchsorted(index, labels[index_positive])
    return self.weight[self.weight_index]
The code block above implements the center-sampling step of Partial FC in PyTorch. For reference, you can explore my repository, which is adapted from the InsightFace repository.
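For context, here is a hedged sketch of how sample might be called inside a training step. The names fc, backbone, and class_start are illustrative assumptions about the surrounding class (which tracks the range of classes stored on this GPU); only the call pattern matters:

import torch.nn.functional as F

# `fc` is assumed to own sample(), weight (num_local x d), num_local,
# num_sample, and class_start; all of these names are hypothetical here.
features = backbone(images)                      # (batch, d) embeddings
labels = labels.clone().cuda()
# Which labels fall inside this GPU's slice of the classes?
index_positive = (labels >= fc.class_start) & (labels < fc.class_start + fc.num_local)
labels[~index_positive] = -1
labels[index_positive] -= fc.class_start         # shift into local index space
sub_weight = fc.sample(labels, index_positive)   # (num_sample, d) sampled centers
logits = F.linear(F.normalize(features), F.normalize(sub_weight))  # cosine logits vs. the sampled subset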
Partial FC is a game-changer in face recognition. It lets you train models with far more identities than ever before, rethinking how such models scale by balancing memory, speed, and accuracy. Keep an eye on Partial FC; it is set to reshape the field of large-scale face recognition.