We provide a collection of detection models pre-trained on the [COCO
dataset](http://mscoco.org) and the [Kitti dataset](http://www.cvlibs.net/datasets/kitti/).
These models can be useful for out-of-the-box inference if you are interested
in categories already in COCO (e.g., humans, cars, etc.). They are also useful
for initializing your models when training on novel datasets.

In the table below, we list each such pre-trained model including:

* a model name that corresponds to a config file that was used to train this
  model in the `samples/configs` directory,
* a download link to a tar.gz file containing the pre-trained model,
* model speed --- we report running time in ms per 600x600 image (including all
  pre- and post-processing), but please be aware that these timings depend
  highly on one's specific hardware configuration (these timings were performed
  using an Nvidia GeForce GTX TITAN X card) and should be treated more as
  relative timings in many cases,
* detector performance on a subset of the COCO validation set. Here, higher is
  better, and we only report bounding box mAP rounded to the nearest integer,
* Output types (currently only `Boxes`)
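
The speed column above describes end-to-end wall-clock time per image, not just the forward pass. As a minimal sketch of that kind of measurement (plain Python; `run_model` is a hypothetical stand-in for a real detector, not part of this repository):

```python
# Sketch: end-to-end per-image timing, as described for the speed column --
# the clock covers preprocessing, inference, and postprocessing together.
# `run_model` is a hypothetical placeholder, not an actual detection model.
import time

def run_model(image):
    # Placeholder for preprocessing + inference + postprocessing.
    return [pixel * 0 for pixel in image]

image = list(range(600 * 600))  # stand-in for a 600x600 image
start = time.perf_counter()
run_model(image)
elapsed_ms = (time.perf_counter() - start) * 1000.0
print(f"{elapsed_ms:.1f} ms per image")
```

As the text notes, absolute numbers from such a measurement vary with hardware; comparing models on the same machine is the intended use.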
Inside the un-tar'ed directory, you will find:
* a frozen graph proto with weights baked into the graph as constants
  (`frozen_inference_graph.pb`) to be used for out-of-the-box inference
  (try this out in the Jupyter notebook!)
* a config file (`pipeline.config`) which was used to generate the graph. These
  directly correspond to a config file in the
  [samples/configs](https://github.com/tensorflow/models/tree/master/research/object_detection/samples/configs)
  directory, but often with a modified score threshold. In the case of the
  heavier Faster R-CNN models, we also provide a version of the model that uses
  a highly reduced number of proposals for speed.
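
As a hedged illustration of this layout, the members of a model archive can be listed without extracting it using only the standard library. The archive below is a locally created stand-in mirroring the files named above (`example_model` is a hypothetical name, not a real download):

```python
# Sketch: inspecting a model tar.gz without extracting it. The archive is a
# locally built stand-in -- normally `archive_path` would point at the tar.gz
# downloaded from the table above.
import io
import tarfile

def list_members(archive_path):
    """Return the member names inside a .tar.gz archive."""
    with tarfile.open(archive_path, "r:gz") as tar:
        return tar.getnames()

# Build a stand-in archive mirroring the layout described above.
with tarfile.open("example_model.tar.gz", "w:gz") as tar:
    for name in ("example_model/frozen_inference_graph.pb",
                 "example_model/pipeline.config"):
        info = tarfile.TarInfo(name)  # empty placeholder member
        tar.addfile(info, io.BytesIO(b""))

print(list_members("example_model.tar.gz"))
```

In practice you would simply run `tar -xzf` on the downloaded file and look inside the resulting directory.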

Some remarks on frozen inference graphs:

* If you try to evaluate the frozen graph, you may find performance numbers for
  some of the models to be slightly lower than what we report in the tables
  below. This is because we discard detections with scores below a threshold
  (typically 0.3) when creating the frozen graph. This corresponds effectively
  to picking a point on the precision-recall curve of a detector (and
  discarding the part past that point), which negatively impacts standard mAP
  metrics.
* Our frozen inference graphs are generated using the
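
The score-threshold remark above can be made concrete with a small sketch (plain Python; the detections are hypothetical, not output from any real model): filtering by a fixed score truncates the precision-recall curve, so low-confidence detections that would have contributed at high recall are simply gone.

```python
# Sketch: score-threshold filtering of the kind applied when the frozen
# graphs are created (threshold typically 0.3). The (box, score) pairs are
# hypothetical illustration data.
SCORE_THRESHOLD = 0.3

detections = [
    {"box": (0.1, 0.1, 0.4, 0.4), "score": 0.92},
    {"box": (0.5, 0.5, 0.9, 0.9), "score": 0.31},
    {"box": (0.2, 0.6, 0.3, 0.8), "score": 0.12},  # dropped by the threshold
]

kept = [d for d in detections if d["score"] >= SCORE_THRESHOLD]
print(len(kept))  # prints 2 -- the 0.12-score detection is discarded
```

Any true positives among the discarded low-score detections can no longer be matched during evaluation, which is why mAP measured on the frozen graph can come out slightly below the reported numbers.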