
Imitation Learning algorithms and Co-training for Mobile ALOHA

This repo contains the implementations of ACT, Diffusion Policy, and VINN, together with two simulated environments: Transfer Cube and Bimanual Insertion. You can train and evaluate the policies in simulation or on real hardware; for real-robot use you will also need to install Mobile ALOHA. This repo is forked from the ACT repo.

Updates:

You can find all scripted/human demos for the simulated environments here.

Repo Structure

  • imitate_episodes.py Train and Evaluate ACT
  • policy.py An adaptor for ACT policy
  • detr Model definitions of ACT, modified from DETR
  • sim_env.py Mujoco + DM_Control environments with joint space control
  • ee_sim_env.py Mujoco + DM_Control environments with EE space control
  • scripted_policy.py Scripted policies for sim environments
  • constants.py Constants shared across files
  • utils.py Utils such as data loading and helper functions
  • visualize_episodes.py Save videos from a .hdf5 dataset (see the loading sketch after this list)
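
Each recorded episode is a plain .hdf5 file, so it is easy to inspect outside the provided scripts. Below is a minimal loading sketch; the keys shown (/observations/qpos, /observations/images/<camera>, /action) match what record_sim_episodes.py writes, but check utils.py for the authoritative loader if your files differ.

    # Sketch: inspect one recorded episode (assumes the hdf5 layout
    # written by record_sim_episodes.py, as described above).
    import h5py

    with h5py.File('data/sim_transfer_cube_scripted/episode_0.hdf5', 'r') as f:
        qpos = f['/observations/qpos'][()]      # (T, 14) joint positions for both arms
        actions = f['/action'][()]              # (T, 14) commanded joint targets
        cams = list(f['/observations/images'].keys())
        frame0 = f['/observations/images/' + cams[0]][0]   # (H, W, 3) uint8
        print('episode length:', qpos.shape[0], 'cameras:', cams, 'frame:', frame0.shape)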

Installation

    # Install miniconda (https://docs.conda.io/projects/miniconda/en/latest/)
    $ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    $ bash Miniconda3-latest-Linux-x86_64.sh

    # Create and activate the conda environment
    $ conda env create --name=aloha --file=conda_env.yaml
    $ conda activate aloha
    $ mkdir -p data/sim_transfer_cube_scripted/
    # Use EGL so MuJoCo can render headlessly
    $ export MUJOCO_GL=egl
    $ pip install wandb

    # Install robomimic (diffusion-policy branch) and diffusers, needed for Diffusion Policy
    $ cd ..
    $ git clone [email protected]:ARISE-Initiative/robomimic.git
    $ cd robomimic && git checkout diffusion-policy-mg
    $ pip install -e .
    $ pip install diffusers
    $ cd ../act-plus-plus

    # 1. Try out dataset creation and model training on sim_transfer_cube_scripted
    $ export WANDB_DISABLED=true
    $ python3 record_sim_episodes.py --task_name sim_transfer_cube_scripted --dataset_dir ./data/sim_transfer_cube_scripted --num_episodes 50
    $ python3 imitate_episodes.py --task_name sim_transfer_cube_scripted --ckpt_dir ./checkpoints/sim_transfer_cube_scripted --policy_class ACT --kl_weight 10 --chunk_size 100 --hidden_dim 512 --batch_size 8 --dim_feedforward 3200 --lr 1e-5 --seed 0 --num_steps 2000

    # 2. Try out dataset creation and model training on sim_insertion_scripted
    $ python3 record_sim_episodes.py --task_name sim_insertion_scripted --dataset_dir ./data/sim_insertion_scripted --num_episodes 50
    $ python3 imitate_episodes.py --task_name sim_insertion_scripted --ckpt_dir ./checkpoints/sim_insertion_scripted --policy_class ACT --kl_weight 10 --chunk_size 100 --hidden_dim 512 --batch_size 8 --dim_feedforward 3200 --lr 1e-5 --seed 0 --num_steps 2000
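
Before moving on to longer runs, it can be worth confirming that headless MuJoCo rendering works (this is what MUJOCO_GL=egl enables). Below is a minimal sanity-check sketch; it assumes make_sim_env from this repo's sim_env.py and a 'top' camera in the simulated tasks, so adjust if your setup differs.

    # Headless-rendering sanity check (a sketch; assumes MUJOCO_GL=egl is set
    # and that sim_env.make_sim_env and the 'top' camera exist as in this repo).
    import matplotlib
    matplotlib.use('Agg')                       # no display needed
    import matplotlib.pyplot as plt
    from sim_env import make_sim_env

    env = make_sim_env('sim_transfer_cube')     # joint-space sim environment
    ts = env.reset()
    frame = ts.observation['images']['top']     # (480, 640, 3) uint8 RGB frame
    plt.imsave('sanity_check.png', frame)
    print('rendered', frame.shape, '-> sanity_check.png')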

Example Usages

To set up a new terminal, run:

conda activate aloha
cd <path to act repo>

Simulated experiments (LEGACY table-top ALOHA environments)

We use the sim_transfer_cube_scripted task in the examples below. Another option is sim_insertion_scripted. To generate 50 episodes of scripted data, run:

python3 record_sim_episodes.py --task_name sim_transfer_cube_scripted --dataset_dir <data save dir> --num_episodes 50

You can add the flag --onscreen_render to see real-time rendering. To visualize an episode after it has been collected, run:

python3 visualize_episodes.py --dataset_dir <data save dir> --episode_idx 0

Note: to visualize data from the Mobile ALOHA hardware, use visualize_episodes.py from https://github.com/MarkFzp/mobile-aloha

To train ACT:

# Transfer Cube task
python3 imitate_episodes.py --task_name sim_transfer_cube_scripted --ckpt_dir <ckpt dir> --policy_class ACT --kl_weight 10 --chunk_size 100 --hidden_dim 512 --batch_size 8 --dim_feedforward 3200 --lr 1e-5 --seed 0 --num_steps 2000
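
The two most task-specific flags here are --chunk_size (how many future actions each forward pass predicts) and --kl_weight (the weight on the CVAE's KL term). Schematically, the per-batch objective ACT trains with looks like the sketch below; this is a simplified illustration, see policy.py for the actual implementation.

    # Simplified sketch of the ACT (CVAE) training objective:
    # L1 reconstruction over the predicted action chunk, plus a weighted
    # KL term pulling the latent posterior toward the unit-Gaussian prior.
    import torch
    import torch.nn.functional as F

    def act_loss_sketch(pred_actions, gt_actions, mu, logvar, kl_weight=10.0):
        # pred_actions, gt_actions: (batch, chunk_size, action_dim)
        l1 = F.l1_loss(pred_actions, gt_actions)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q || N(0, I)), averaged
        return l1 + kl_weight * kl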

To evaluate the policy, run the same command but add --eval. This loads the best validation checkpoint. The success rate should be around 90% for transfer cube, and around 50% for insertion. To enable temporal ensembling, add flag --temporal_agg. Videos will be saved to <ckpt_dir> for each rollout. You can also add --onscreen_render to see real-time rendering during evaluation.
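
For reference, --temporal_agg replaces open-loop chunk execution with a weighted average over all still-valid chunk predictions for the current timestep. Below is a minimal numpy sketch of the scheme; the exponential weights with k=0.01 follow the ACT implementation, while the variable names here are illustrative.

    import numpy as np

    def temporal_ensemble(chunks, t, k=0.01):
        # chunks[s] is the (chunk_size, action_dim) prediction made at timestep s,
        # so chunks[s][t - s] is that chunk's guess for the current timestep t.
        candidates = [chunks[s][t - s] for s in range(len(chunks))
                      if 0 <= t - s < len(chunks[s])]
        w = np.exp(-k * np.arange(len(candidates)))   # oldest prediction weighted highest
        w /= w.sum()
        return (w[:, None] * np.stack(candidates)).sum(axis=0)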

For real-world data, where behaviors are harder to model, train for at least 5000 steps, or 3-4 times as long as it took for the loss to plateau. Please refer to the tuning tips for more info.

TL;DR: if your ACT policy is jerky or pauses in the middle of an episode, just train for longer! Success rate and smoothness can keep improving well after the loss plateaus.
