SSD: Single Shot MultiBox Detector
Liu, Wei and Anguelov, Dragomir and Erhan, Dumitru and Szegedy, Christian and Reed, Scott E. and Fu, Cheng-Yang and Berg, Alexander C.
European Conference on Computer Vision - 2016 via Local Bibsonomy
SSD aims to solve the major problem with most state-of-the-art object detectors of its time, namely Faster R-CNN and the like: speed. These detectors share the same methodology:
- Train two separate networks: a Region Proposal Network (RPN) and a classifier, which predict the bounding box and the object class separately.
- During inference, run the test image at several scales so that objects of different sizes are detected, to account for scale invariance.

This makes the networks extremely slow. Faster R-CNN operates at **7 FPS with 73.2% mAP**, while SSD achieves **59 FPS with 74.3% mAP** on the VOC 2007 dataset.
#### Methodology
SSD uses a single network to predict both the object class and the bounding box. However, it does not do so directly; it uses a mechanism for choosing ROIs and trains end-to-end to predict the class and the box offsets for each ROI.
##### ROI selection
Borrowing from Faster R-CNN, SSD uses the concept of anchor boxes to generate ROIs from convolutional feature maps. For each location in a feature map, k default boxes with different aspect ratios are centred on that location, so a feature map of resolution m × n yields *mnk* ROIs for a single feature layer. SSD uses multiple feature layers (with differing resolutions) to generate such ROIs, primarily to capture objects at different scales. Because earlier layers in a deep conv net tend to capture only low-level features, SSD starts predicting from a certain depth onwards and uses the feature layers from there on.
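To make the default-box construction concrete, here is a minimal Python sketch (not the authors' Caffe code) that generates (cx, cy, w, h) default boxes, normalised to [0, 1], for one square feature layer. The scale schedule follows the paper; the reduced aspect-ratio set and the function names are my own choices for brevity:

```python
import itertools
import math

def layer_scales(m, s_min=0.2, s_max=0.9):
    """Scales of the m feature layers, spaced linearly between s_min and s_max."""
    return [s_min + (s_max - s_min) * k / (m - 1) for k in range(m)]

def default_boxes(fmap_size, scale, next_scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """(cx, cy, w, h) default boxes centred on every cell of an
    fmap_size x fmap_size feature map, in normalised image coordinates."""
    boxes = []
    for i, j in itertools.product(range(fmap_size), repeat=2):
        cx, cy = (j + 0.5) / fmap_size, (i + 0.5) / fmap_size
        for ar in aspect_ratios:
            boxes.append((cx, cy, scale * math.sqrt(ar), scale / math.sqrt(ar)))
        # extra box for aspect ratio 1 with the intermediate scale sqrt(s_k * s_{k+1})
        s_prime = math.sqrt(scale * next_scale)
        boxes.append((cx, cy, s_prime, s_prime))
    return boxes
```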
##### ROI labelling
Any ROI that matches a ground-truth box of some class with a Jaccard overlap greater than 0.5, after applying the appropriate transforms, is labelled positive. Given that the feature maps are at different resolutions and the default boxes have different aspect ratios, this is not entirely trivial: SSD uses simple scaling and the aspect ratios to map the default boxes at each feature-map location to the ground-truth coordinates before computing the Jaccard overlap.
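Here is a minimal sketch of the matching step, assuming the default boxes and ground-truth boxes have already been converted to the same corner-format image coordinates (the paper additionally matches each ground truth to its single best default box regardless of the threshold; that detail is omitted here):

```python
def jaccard(box_a, box_b):
    """IoU of two (xmin, ymin, xmax, ymax) boxes in the same coordinate frame."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def label_default_boxes(defaults, ground_truths, threshold=0.5):
    """Return, for each default box, the index of a matched ground truth
    (Jaccard overlap > threshold) or -1 for background."""
    labels = []
    for d in defaults:
        overlaps = [jaccard(d, g) for g in ground_truths]
        best = max(overlaps) if overlaps else 0.0
        labels.append(overlaps.index(best) if best > threshold else -1)
    return labels
```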
##### ROI classification
SSD uses small 3×3 convolution kernels to predict, for each ROI, the 4 offsets (centre-x, centre-y, width, height) relative to the default box, along with confidence scores for each class. So if there are c classes (including background), there are (c+4) filters per default box in the convolution that looks at the ROIs.
In summary, we have convolution kernels that look at ROIs (the default boxes around each location in a feature map layer) and generate (c+4) scores per ROI. Multiple feature map layers with different resolutions are used to generate such ROIs. Some ROIs are positive and some negative, depending on their Jaccard overlap with the ground-truth box after it has been scaled appropriately to account for the resolution difference between the input image and the feature map. A small sketch of such a prediction head follows the figure below.
Here's how it looks:
![](https://i.imgur.com/HOhsPZh.png)
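To illustrate how the (c+4)k filters are wired up, here is a small PyTorch sketch of a single prediction head (PyTorch and the `SSDHead` name are my own choices for illustration; the official implementation is in Caffe, and it uses separate convolutions for the class scores and the offsets rather than one combined layer):

```python
import torch.nn as nn

class SSDHead(nn.Module):
    """One 3x3 conv per feature layer producing, for each of the k default boxes
    at every location, c class scores and 4 box offsets."""
    def __init__(self, in_channels, num_classes, k):
        super().__init__()
        self.num_classes, self.k = num_classes, k
        self.conv = nn.Conv2d(in_channels, k * (num_classes + 4), kernel_size=3, padding=1)

    def forward(self, fmap):                       # fmap: (N, in_channels, m, n)
        out = self.conv(fmap)                      # (N, k*(c+4), m, n)
        N, _, m, n = out.shape
        out = out.permute(0, 2, 3, 1).reshape(N, m * n * self.k, self.num_classes + 4)
        cls_scores = out[..., :self.num_classes]   # class confidences per default box
        box_offsets = out[..., self.num_classes:]  # offsets relative to the default box
        return cls_scores, box_offsets
```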
##### Training
For each ROI, a combined loss is calculated as a weighted sum of the localisation error and the classification error. The details are best explained in the figure below.
![](https://i.imgur.com/zEDuSgi.png)
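For reference, the overall objective from the paper, averaged over the $N$ matched default boxes, is

$$L(x, c, l, g) = \frac{1}{N}\big(L_{conf}(x, c) + \alpha\, L_{loc}(x, l, g)\big)$$

where $L_{loc}$ is a Smooth-L1 loss between the predicted box offsets $l$ and the ground-truth offsets $g$ (encoded relative to the default boxes), $L_{conf}$ is a softmax loss over the class confidences $c$, $x$ are the matching indicators, and the weight $\alpha$ is set to 1 by cross-validation.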
##### Inference
At inference time, a small confidence threshold (0.01 in the paper) is first used to filter out most predictions; Non-Maximum Suppression (NMS) with a Jaccard overlap of 0.45 per class is then applied to the remaining candidate ROIs, and the top 200 detections per image are kept.
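A minimal greedy per-class NMS sketch (reusing the `jaccard` helper from the matching sketch above; the function name and list-based implementation are illustrative, not the paper's code):

```python
def nms(boxes, scores, iou_threshold=0.45, top_k=200):
    """Keep the highest-scoring boxes, discarding any box whose Jaccard overlap
    with an already-kept box exceeds iou_threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order and len(keep) < top_k:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if jaccard(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```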
For a deeper understanding of the intuitions behind the paper and the results obtained, please consider reading the full paper.
The open-sourced code is available at this [GitHub repo](https://github.com/weiliu89/caffe/tree/ssd).