Research project at AI·Robotics Institute, KIST
This project retrieves shapes from ModelNet40 based on user text input and visualizes them. Users can enter text in various languages, and the number of retrieved shapes (k) can be specified arbitrarily.
Once a user enters text, it is refined by a GPT-2 model fine-tuned with a few-shot (20-shot) approach. The refined text is then encoded by the CLIP model, producing a cosine-similarity vector via dot products with the ModelNet40 shape embeddings.
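At its core, the retrieval step is a cosine-similarity ranking between the CLIP text embedding and the precomputed shape embeddings. A minimal sketch of that step (the function name and array layout are illustrative assumptions, not the project's actual code):

```python
import numpy as np

def retrieve_top_k(query_emb: np.ndarray, shape_embs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k shapes most similar to the (already
    refined and CLIP-encoded) query embedding."""
    q = query_emb / np.linalg.norm(query_emb)                          # L2-normalize the query
    db = shape_embs / np.linalg.norm(shape_embs, axis=1, keepdims=True)
    sims = db @ q                                                      # cosine similarity = dot product of unit vectors
    return np.argsort(-sims)[:k]                                       # indices of the top-k shapes
```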
Optionally, depending on the settings, reranking is performed over the top k × 3 candidates based on the similarity between the keyword features (derived from each model's name) and the user-input features, yielding more accurate retrieval results.
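A sketch of how such a reranking pass could look, assuming a precomputed `keyword_embs` matrix holding the CLIP embeddings of each shape's model-name keyword (all names here are hypothetical):

```python
import numpy as np

def rerank(query_emb: np.ndarray, shape_embs: np.ndarray,
           keyword_embs: np.ndarray, k: int) -> np.ndarray:
    """Re-order the top k*3 candidates by the similarity between the
    model-name keyword embeddings and the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    db = shape_embs / np.linalg.norm(shape_embs, axis=1, keepdims=True)
    candidates = np.argsort(-(db @ q))[: k * 3]            # initial shortlist of k*3 shapes
    kw = keyword_embs[candidates]
    kw = kw / np.linalg.norm(kw, axis=1, keepdims=True)
    reranked = candidates[np.argsort(-(kw @ q))]           # re-sort shortlist by keyword similarity
    return reranked[:k]
```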
| Component | Model |
|---|---|
| Prompt Engineering Model | GPT-2 |
| Text Encoder | OpenCLIP ViT-bigG-14 |
| Shape Encoder | PointBERT |
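For reference, encoding a query with the OpenCLIP text encoder listed above might look like the sketch below. The pretrained weight tag is an assumption; check the open_clip model registry for the weights the project actually uses.

```python
import torch
import open_clip

# Load the ViT-bigG-14 text/image model; the pretrained tag is an assumption.
model, _, _ = open_clip.create_model_and_transforms(
    "ViT-bigG-14", pretrained="laion2b_s39b_b160k")
tokenizer = open_clip.get_tokenizer("ViT-bigG-14")

with torch.no_grad():
    tokens = tokenizer(["a wooden chair with armrests"])
    text_emb = model.encode_text(tokens)                       # shape (1, d)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)  # unit-normalize for cosine similarity
```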
Prepare the dataset:

```bash
python prepare_dataset.py --download-dir {directory path to download dataset}
```

Run the retrieval pipeline:

```bash
python main.py
```
A demo is available in `demo.ipynb`. It receives a user input and retrieves the k corresponding shapes.