
Commit 730070c: Update README.md

E2E-AD authored Oct 20, 2023 · 1 parent 37d1491

Showing 1 changed file (README.md) with 6 additions and 1 deletion.
This repository is an official implementation of the technical report [AD-MLP](https://arxiv.org/pdf/2305.10430.pdf).
> \*: equal contribution, <sup>&#8224;</sup>: corresponding author.
>
## Claim
We would like to thank @PointsCoder (see https://github.com/E2E-AD/AD-MLP/issues/4) for finding the mistake in the data we used for training. In short, we mistakenly packaged and used real trajectory data that was intended only for performance debugging. We have fixed the issue and updated the related links and numbers.
In general, the new numbers differ slightly from those reported in the previous technical report. Under open-loop evaluation, the L2 error remains superior (0.23 -> 0.29, vs. VAD-Base's 0.37), while the collision rate has increased slightly (0.12 -> 0.19) and is now slightly higher than that of VAD (0.19 vs. 0.14).
However, we argue that the fundamental conclusion remains unchanged: superior open-loop evaluation performance can be achieved on the nuScenes dataset with only past motion information and a simple model. This may pose challenges for measuring and comparing different methods under this evaluation protocol.
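The idea above — a plain MLP that plans from past ego motion alone, with no perception input — can be sketched as follows. This is an illustrative sketch only, not the repository's actual model: the layer sizes, input feature layout (21-dimensional flattened ego history), and 6-waypoint horizon are assumptions for the example.

```python
import torch
import torch.nn as nn

class SimplePlanner(nn.Module):
    """Hypothetical MLP planner: past ego motion in, future waypoints out.

    Layer widths and input dimension are illustrative, not the repo's config.
    """

    def __init__(self, in_dim: int = 21, hidden: int = 512, horizon: int = 6):
        super().__init__()
        self.horizon = horizon
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, horizon * 2),  # (x, y) offset per future waypoint
        )

    def forward(self, past_motion: torch.Tensor) -> torch.Tensor:
        # past_motion: (B, in_dim) flattened ego history; no camera/LiDAR input.
        return self.net(past_motion).view(-1, self.horizon, 2)

planner = SimplePlanner()
traj = planner(torch.randn(8, 21))
print(traj.shape)  # torch.Size([8, 6, 2])
```

The point of the sketch is how little machinery is involved: no BEV features, no map, no agent queries — only the ego vehicle's own recent motion.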

## News
* 2023.05.18: Paper is released on [arxiv](https://arxiv.org/pdf/2305.10430.pdf)!
* 2023.05.17: Code / Models are released!
To verify the performance on the nuScenes Dataset, we provide the pretrained models.
Two versions of the evaluation metrics are provided: online and offline. The offline version uses pre-stored ground truth and is far faster than the online one. The code defaults to offline.
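A minimal sketch of the offline open-loop L2 metric described above, computed against pre-stored ground-truth trajectories. The array shapes, the 2 Hz / 3 s horizon, and the per-horizon averaging convention are assumptions for illustration (averaging conventions differ between codebases), not the repository's exact implementation.

```python
import numpy as np

def avg_l2(pred: np.ndarray, gt: np.ndarray) -> dict:
    """pred, gt: (N, 6, 2) trajectories, 6 waypoints over 3 s at 2 Hz (assumed)."""
    dist = np.linalg.norm(pred - gt, axis=-1)  # (N, 6) per-waypoint L2 distance
    # Report the mean error at the 1 s, 2 s and 3 s waypoints (indices 1, 3, 5).
    return {f"L2_{t}s": float(dist[:, 2 * t - 1].mean()) for t in (1, 2, 3)}

pred = np.zeros((4, 6, 2))
gt = np.ones((4, 6, 2))
print(avg_l2(pred, gt))  # every horizon: sqrt(2), about 1.414 m
```

Because the ground truth is loaded from disk once, a sweep over many predictions is just array arithmetic, which is why the offline variant is much faster than re-querying the dataset online.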

* Training:
We upload the training code in the pytorch/ folder. Additional files required for training are in [Baidu Netdisk](https://pan.baidu.com/s/1dv7pyrPKdh-g-3rLOqwCRQ?pwd=yvu6). To configure the nuScenes dataset and the environment, refer to https://github.com/mit-han-lab/bevfusion. Start the training process with train.py under the folder:
```
python train.py
```
