
Code of the CVPR 2022 paper "HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation"


HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation

This repository is the official implementation of HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation.

architecture

Prerequisites

# Set up with Anaconda
conda env create -f hop_env.yaml
conda activate hop

Quick Start

  1. Download the processed data and pretrained models. Please follow the instructions below to prepare the data in the corresponding directories.

  2. Run Pre-training

    bash run/pretrain.bash

    The trained model will be saved under result/.

    You can also train the model using only the processed PREVALENT data by setting --prevalent_only to True in pretrain.bash.
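As a sketch of the flag toggle above, the following edits a script in place before launching. The file name pretrain_example.bash and its contents are hypothetical stand-ins; only the --prevalent_only flag name comes from this README.

```shell
# Hypothetical illustration of flipping the PREVALENT-only flag in a
# launch script before running it. The script contents are stand-ins;
# only the --prevalent_only flag name is taken from the README.
printf 'python pretrain.py --prevalent_only False\n' > pretrain_example.bash
sed -i 's/--prevalent_only False/--prevalent_only True/' pretrain_example.bash
cat pretrain_example.bash
# prints: python pretrain.py --prevalent_only True
```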

  3. Run finetuning

    • Please check here for the experiment setup and for applying HOP.

Citation

If you use or discuss HOP, please cite our paper:

@InProceedings{Qiao2022HOP,
    author    = {Qiao, Yanyuan and Qi, Yuankai and Hong, Yicong and Yu, Zheng and Wang, Peng and Wu, Qi},
    title     = {HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {15418-15427}
}
