
Head to "Fast Demo" section for immediate demo CSE 597

LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding

arXiv GitHub

Official PyTorch implementation of our CVPR 2023 paper, LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding.

Abstract

Humans excel at acquiring knowledge through observation. For example, we can learn to use new tools by watching demonstrations. This skill is fundamental for intelligent systems to interact with the world. A key step to acquire this skill is to identify what part of the object affords each action, which is called affordance grounding. In this paper, we address this problem and propose a framework called LOCATE that can identify matching object parts across images, to transfer knowledge from images where an object is being used (exocentric images used for learning), to images where the object is inactive (egocentric ones used to test). To this end, we first find interaction areas and extract their feature embeddings. Then we learn to aggregate the embeddings into compact prototypes (human, object part, and background), and select the one representing the object part. Finally, we use the selected prototype to guide affordance grounding. We do this in a weakly supervised manner, learning only from image-level affordance and object labels. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods by a large margin on both seen and unseen objects.
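As a rough illustration of the prototype idea described above (a toy sketch with made-up tensors, not the code in this repository), one could cluster local embeddings into a few prototypes and keep the one most similar to an egocentric object feature:

# Illustrative sketch only (not the paper's implementation): cluster local
# feature embeddings into a few compact prototypes with k-means, then pick
# the prototype closest to a reference (e.g. egocentric object) embedding.
import torch
import torch.nn.functional as F

def kmeans_prototypes(features, k=3, iters=10):
    # features: (N, D) local embeddings; returns (k, D) prototypes
    centers = features[torch.randperm(features.size(0))[:k]].clone()
    for _ in range(iters):
        assign = torch.cdist(features, centers).argmin(dim=1)  # nearest center per embedding
        for c in range(k):
            mask = assign == c
            if mask.any():
                centers[c] = features[mask].mean(dim=0)
    return centers

def select_prototype(prototypes, reference):
    # highest cosine similarity to the reference embedding
    sims = F.cosine_similarity(prototypes, reference.unsqueeze(0), dim=1)
    return prototypes[sims.argmax()]

# toy usage with random tensors standing in for real features
feats = torch.randn(200, 64)   # e.g. patch features from interaction regions
ref = torch.randn(64)          # e.g. an egocentric object feature
protos = kmeans_prototypes(feats, k=3)
part_proto = select_prototype(protos, ref)
print(part_proto.shape)        # torch.Size([64])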

Usage

1. Requirements

The code is tested with PyTorch 1.12.1, Python 3.7, and CUDA 11.6.

pip install -r requirements.txt
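To verify that your environment matches the tested versions, a quick sanity check (just a sketch; the expected version numbers are the ones listed above):

# quick environment check (sketch only)
import sys
import torch

print(sys.version.split()[0])      # expected: 3.7.x
print(torch.__version__)           # expected: 1.12.1 (possibly with a +cu116 suffix)
print(torch.version.cuda)          # expected: 11.6
print(torch.cuda.is_available())   # True if a CUDA device is visible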

2. Dataset

Download the AGD20K dataset from [ Google Drive | Baidu Pan (code: g23n) ].

3. Train and Test

Our pretrained model can be downloaded from Google Drive. Run the following commands to start training or testing:

python train.py --data_root <PATH_TO_DATA>
python test.py --data_root <PATH_TO_DATA> --model_file <PATH_TO_MODEL>
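If you prefer to script these runs (for example, to evaluate both splits in one go), a minimal Python wrapper is sketched below; the paths are placeholders, and it uses only the flags mentioned in this README (--data_root, --model_file, and the --divide Unseen option noted under Fast Demo):

# hypothetical wrapper around the commands above (paths are placeholders)
import subprocess

DATA_ROOT = "./AGD20K"            # assumption: dataset extracted here
MODEL_FILE = "./best_model.pth"   # assumption: downloaded checkpoint path

def run_test(divide=None):
    cmd = ["python", "test.py", "--data_root", DATA_ROOT, "--model_file", MODEL_FILE]
    if divide is not None:
        cmd += ["--divide", divide]        # e.g. "Unseen", see the Fast Demo note
    subprocess.run(cmd, check=True)

run_test()            # seen split (default)
run_test("Unseen")    # unseen split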

Fast Demo

The fastest option is to run the 597FinalProject.ipynb notebook on Google Colab. The pretrained weights and the weights from our custom runs are located in this Google Drive folder: 597 Project. Note: to run the unseen split, don't forget to append "--divide Unseen" to the command.

Citation

@inproceedings{li:locate:2023,
  title     = {LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding},
  author    = {Li, Gen and Jampani, Varun and Sun, Deqing and Sevilla-Lara, Laura},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2023}
}

Acknowledgement

This repo is based on Cross-View-AG, dino-vit-features, and dino. Thanks for their great work!
