This is the source code for our proposed Masked Autoencoders in 3D Point Cloud Representation Learning (MAE3D).
python >= 3.7
pytorch >= 1.7.0
numpy
scikit-learn
einops
h5py
tqdm
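The dependency list above can be captured in a requirements file. A sketch (exact pinned versions beyond those listed are not specified by this repository; Python >= 3.7 is an interpreter requirement, not a pip package):

```
# requirements.txt (sketch; follows the list above)
torch>=1.7.0
numpy
scikit-learn
einops
h5py
tqdm
```

Install with "pip install -r requirements.txt" under a Python >= 3.7 environment.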
If this is your first time running "pointnet2_ops_lib", you need to run
pip install pointnet2_ops_lib/.
The main datasets used in our project are ShapeNet and ModelNet40; you can download them here: ShapeNet and ModelNet40.
Then place them under "./data".
Our model achieves an accuracy of 93.4% on ModelNet40. The pre-trained model can be found here.
Move "model_cls.t7" to "./checkpoints/mask_ratio_0.7/exp_shapenet55_block/models"; you can then restore our model and evaluate it on ModelNet40 by running
python main_cls.py --exp_name exp_shapenet55_block --mask_ratio 0.7 --eval True
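For reference, creating the expected checkpoint directory and moving the downloaded weights into place can be sketched as follows (assuming "model_cls.t7" has been downloaded into the repository root):

```shell
# Create the checkpoint directory expected by main_cls.py
mkdir -p checkpoints/mask_ratio_0.7/exp_shapenet55_block/models

# Move the downloaded pre-trained weights into place (if present)
if [ -f model_cls.t7 ]; then
  mv model_cls.t7 checkpoints/mask_ratio_0.7/exp_shapenet55_block/models/
fi
```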
You should download the ShapeNet dataset first, and then simply run
python main_pretrain.py --exp_name exp_shapenet55_block --mask_ratio 0.7
If you want to visualize all reconstructed point clouds (this will take a long time), run
python main_pretrain.py --exp_name exp_shapenet55_block --mask_ratio 0.7 --visualize True
You should download the ModelNet40 dataset first, and then simply run
python main_cls.py --exp_name exp_shapenet55_block --mask_ratio 0.7 --pretrained True --finetune True
You should download the ModelNet40 dataset first, and then simply run
python main_cls.py --exp_name exp_shapenet55_block --mask_ratio 0.7 --pretrained True --linear_classifier True
If you find our work useful, please consider citing:
@article{jiang2023masked,
title={Masked autoencoders in 3d point cloud representation learning},
author={Jiang, Jincen and Lu, Xuequan and Zhao, Lizhi and Dazeley, Richard and Wang, Meili},
journal={IEEE Transactions on Multimedia},
year={2023},
publisher={IEEE}
}