A Parallel Implementation of Computing Mean Average Precision

19 Jun 2022 · Beinan Wang

Mean Average Precision (mAP) has been widely used for evaluating the quality of object detectors, but an efficient implementation is still absent. Current implementations can only count true positives (TPs) and false positives (FPs) for one class at a time by looping through every detection of that class sequentially. Not only are these approaches inefficient, but they are also inconvenient for reporting validation mAP during training. We propose a parallelized alternative that processes mini-batches of detected bounding boxes (DTBBs) and ground-truth bounding boxes (GTBBs) as inference proceeds, so that mAP can be calculated immediately after inference finishes. Loops and control statements in sequential implementations are replaced with extensive use of broadcasting, masking, and indexing. All operators involved are supported by popular machine learning frameworks such as PyTorch and TensorFlow. As a result, our implementation is much faster and fits easily into typical training routines. A PyTorch version of our implementation is available at https://github.com/bwangca/fast-map.
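
To illustrate the kind of loop-free tensor operations the abstract describes, below is a minimal PyTorch sketch, not the authors' implementation (which lives in the linked repository): it computes a pairwise IoU matrix between detected and ground-truth boxes with broadcasting, then uses a mask instead of a per-detection loop to flag candidate true positives. The helper name pairwise_iou, the (x1, y1, x2, y2) box format, and the 0.5 IoU threshold are illustrative assumptions, not details taken from the paper.

    # Minimal sketch, assuming boxes in (x1, y1, x2, y2) format; the helper
    # name `pairwise_iou` and the 0.5 threshold are illustrative, not taken
    # from the paper or the fast-map repository.
    import torch

    def pairwise_iou(dtbb: torch.Tensor, gtbb: torch.Tensor) -> torch.Tensor:
        """dtbb: (N, 4) detections, gtbb: (M, 4) ground truths; returns an (N, M) IoU matrix."""
        # Intersection corners via broadcasting: (N, 1, 2) against (1, M, 2).
        lt = torch.maximum(dtbb[:, None, :2], gtbb[None, :, :2])  # top-left corners
        rb = torch.minimum(dtbb[:, None, 2:], gtbb[None, :, 2:])  # bottom-right corners
        wh = (rb - lt).clamp(min=0)                                # zero out empty overlaps
        inter = wh[..., 0] * wh[..., 1]                            # (N, M) intersection areas

        area_d = (dtbb[:, 2] - dtbb[:, 0]) * (dtbb[:, 3] - dtbb[:, 1])  # (N,)
        area_g = (gtbb[:, 2] - gtbb[:, 0]) * (gtbb[:, 3] - gtbb[:, 1])  # (M,)
        union = area_d[:, None] + area_g[None, :] - inter
        return inter / union.clamp(min=1e-9)

    # Usage: flag detections whose best overlap with any ground truth clears
    # an IoU threshold, using a mask instead of a per-detection loop.
    dt = torch.tensor([[0., 0., 10., 10.], [5., 5., 15., 15.]])
    gt = torch.tensor([[0., 0., 10., 10.]])
    iou = pairwise_iou(dt, gt)              # shape (2, 1)
    candidate_tp = (iou >= 0.5).any(dim=1)  # tensor([True, False])

Because every step is a batched tensor operation, the same computation runs on GPU mini-batches without any per-detection Python loop, which is the general style of vectorization the paper relies on.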
