Questions about testing my own video data #8

Open
yavon818 opened this issue Aug 21, 2023 · 1 comment

Comments

@yavon818

Thanks for your great work! I wonder if I can use your pretrained model to test my own video data directly, or whether I should fine-tune the model on my video data first to get the right results.

@RaymondWang987
Owner

> Thanks for your great work! I wonder if I can use your pretrained model to test my own video data directly, or whether I should fine-tune the model on my video data first to get the right results.

It depends on the domain gap between your test data and our VDW training set. The VDW dataset is highly diverse, so the VDW pre-trained NVDS can directly produce satisfactory results on many scenes without fine-tuning (e.g., the zero-shot evaluations on the Sintel and DAVIS datasets in our experiments). However, if the scenes in your test data are highly different from, or rarely appear in, the VDW dataset (e.g., the surgical datasets in issue #6), performance might degrade due to the domain gap between the training and test data. In this case, you can fine-tune NVDS for better performance.

Overall, you can first try the VDW pre-trained NVDS on your test scenarios. If the performance is not satisfactory for your usage, you should fine-tune NVDS. In most cases, fine-tuning improves quantitative performance on the target closed domain but reduces the model's generalization.
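For readers unsure what "fine-tune NVDS" involves in practice, the generic pattern is: load the pre-trained weights, then continue training on your in-domain frames with a small learning rate. The sketch below is illustrative only and does not use the actual NVDS training code; `TinyDepthNet`, `finetune`, and the random tensors are all stand-ins for the real model, loader, and data.

```python
# Illustrative fine-tuning sketch with hypothetical names; the real NVDS
# pipeline (model, losses, data loading) differs. This only demonstrates
# the generic domain-adaptation loop discussed above.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Tiny stand-in for a pre-trained video depth model such as NVDS."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def finetune(model, frames, gt_depth, steps=20, lr=1e-4):
    """Continue training a pre-trained model on in-domain frames."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(frames), gt_depth)
        loss.backward()
        opt.step()
    return loss.item()

torch.manual_seed(0)
model = TinyDepthNet()             # in practice: load NVDS pre-trained weights here
frames = torch.rand(2, 3, 32, 32)  # a tiny batch of your video frames (toy data)
depth = torch.rand(2, 1, 32, 32)   # corresponding ground-truth depth (toy data)

before = nn.L1Loss()(model(frames), depth).item()
after = finetune(model, frames, depth)
print(after < before)  # the fitting loss should drop after fine-tuning
```

A small learning rate and few steps are typical when adapting pre-trained weights, since aggressive updates on a narrow domain are exactly what erodes the generalization mentioned above.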
