After solving the installation problem by using a Docker image based on CUDA 10 and making some changes to the C++ code of the neural renderer, the code starts, but when I try to use entrypoint_predict I get the following error:
python3.8 entrypoint_predict.py --options ./experiments/default/resnet.yml --checkpoint ./datasets/data/pretrained/resnet50-19c8e357.pth --folder ./datasets/imgs --name test1
=> creating logs/test1
=> creating checkpoints/test1/default_resnet_0505062026
=> creating summary/test1/default_resnet_0505062026
{'checkpoint': './datasets/data/pretrained/resnet50-19c8e357.pth',
'checkpoint_dir': 'checkpoints/test1/default_resnet_0505062026',
'dataset': {'camera_c': array([111.5, 111.5]),
'camera_f': array([248., 248.]),
'mesh_pos': array([ 0. , 0. , -0.8]),
'name': 'shapenet_demo',
'normalization': True,
'num_classes': 13,
'predict': {'folder': './datasets/imgs'},
'shapenet': {'num_points': 9000,
'resize_with_constant_border': False},
'subset_eval': 'test_tf',
'subset_train': 'train_tf'},
'log_dir': 'logs/test1',
'log_level': 'info',
'loss': {'weights': {'chamfer': array([1., 1., 1.]),
'chamfer_opposite': 0.55,
'constant': 1.0,
'edge': 0.1,
'laplace': 0.5,
'move': 0.033,
'normal': 0.00016,
'reconst': 0.0}},
'model': {'align_with_tensorflow': False,
'backbone': 'resnet50',
'coord_dim': 3,
'gconv_activation': True,
'hidden_dim': 192,
'last_hidden_dim': 192,
'name': 'pixel2mesh',
'z_threshold': 0},
'name': 'test1',
'num_gpus': 8,
'num_workers': 16,
'optim': {'adam_beta1': 0.9,
'lr': 0.0001,
'lr_factor': 0.3,
'lr_step': array([30, 70, 90]),
'name': 'adam',
'sgd_momentum': 0.9,
'wd': 1e-06},
'pin_memory': True,
'summary_dir': 'summary/test1/default_resnet_0505062026',
'test': {'batch_size': 8,
'dataset': array([], dtype=float64),
'shuffle': False,
'summary_steps': 50,
'weighted_mean': False},
'train': {'batch_size': 8,
'checkpoint_steps': 10000,
'num_epochs': 110,
'shuffle': True,
'summary_steps': 50,
'test_epochs': 1,
'use_augmentation': True},
'version': 'default_resnet_0505062026'}
=> creating summary writer
Using GPUs: [0, 1, 2, 3, 4, 5, 6, 7]
Loading datasets: shapenet_demo
Running model initialization...
Traceback (most recent call last):
  File "entrypoint_predict.py", line 39, in <module>
    main()
  File "entrypoint_predict.py", line 34, in main
    predictor = Predictor(options, logger, writer)
  File "/home/vrai/Pixel2Mesh/functions/predictor.py", line 20, in __init__
    super().__init__(options, logger, writer, training=False, shared_model=shared_model)
  File "/home/vrai/Pixel2Mesh/functions/base.py", line 55, in __init__
    self.init_fn(shared_model=shared_model)
  File "/home/vrai/Pixel2Mesh/functions/predictor.py", line 42, in init_fn
    self.renderer = MeshRenderer(self.options.dataset.camera_f, self.options.dataset.camera_c,
  File "/home/vrai/Pixel2Mesh/utils/vis/renderer.py", line 36, in __init__
    self.renderer = nr.Renderer(camera_mode='projection',
  File "/usr/local/lib/python3.8/dist-packages/neural_renderer-1.1.3-py3.8-linux-x86_64.egg/neural_renderer/renderer.py", line 34, in __init__
    raise ValueError('You need to provide a valid (batch_size)x3x4 projection matrix')
ValueError: You need to provide a valid (batch_size)x3x4 projection matrix
I use the default configuration in the default folder, and I also tested with the configuration in the baseline folder after changing the baseline from 8 GPUs to 1 (I know that 1 is not enough, but I am only testing on my personal PC).
You mentioned you are using 1 GPU, so why does your log show: Using GPUs: [0, 1, 2, 3, 4, 5, 6, 7]?
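I'm not sure exactly how Pixel2Mesh builds that device list, but if the goal is a single GPU, setting num_gpus to 1 in the YAML and/or hiding the other devices before CUDA is initialized should help. A minimal sketch (just an illustration, not code from the repo):

import os

# Hide every GPU except device 0; this must run before the first torch.cuda call,
# e.g. at the very top of entrypoint_predict.py, or be exported in the shell instead.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
print(torch.cuda.device_count())  # expected: 1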
Try to find out the matrix shape in /usr/local/lib/python3.8/dist-packages/neural_renderer-1.1.3-py3.8-linux-x86_64.egg/neural_renderer/renderer.py and trace where it comes from, to see why it's invalid.
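For reference, the error is raised because the Renderer in projection mode expects a (batch_size)x3x4 projection matrix (the P argument in that neural_renderer fork, if I remember correctly). A minimal sketch of how such a matrix could be assembled from the intrinsics printed in the config above (camera_f/camera_c); R and t here are placeholders just to get the shape right, so do check what utils/vis/renderer.py actually passes:

import numpy as np

# Sketch only: build P = K @ [R | t] from the intrinsics in the config dump.
fx, fy = 248.0, 248.0        # dataset.camera_f
cx, cy = 111.5, 111.5        # dataset.camera_c
K = np.array([[fx, 0., cx],
              [0., fy, cy],
              [0., 0., 1.]], dtype=np.float32)
R = np.eye(3, dtype=np.float32)            # identity rotation, placeholder
t = np.zeros((3, 1), dtype=np.float32)     # zero translation, placeholder
P = (K @ np.concatenate([R, t], axis=1))[None]   # shape (1, 3, 4)
print(P.shape)   # this is the shape the ValueError is asking for

If the MeshRenderer call ends up passing no P at all (for example because it was written against a fork whose projection mode takes K, R, t separately), that would explain the ValueError.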