Fix bugs of rendering on v100/a100.
linjing7 committed Apr 16, 2023
1 parent e5506c6 commit 7d569a5
Showing 5 changed files with 10 additions and 20 deletions.
18 changes: 2 additions & 16 deletions README.md
@@ -50,21 +50,7 @@ This repo is official **[PyTorch](https://pytorch.org)** implementation of [One-
- Python packages:

```shell
-pip install -r requirements.txt
-```
-
-- mmcv-full:
-
-```shell
-pip install openmim
-mim install mmcv-full==1.7.1
-```
-
-- mmpose:
-
-```shell
-cd main/transformer_utils
-python setup.py develop
+bash install.sh
```

## 3. Quick demo
@@ -285,7 +271,7 @@ You can zip the `predictions` folder into `predictions.zip` and submit it to the

* `RuntimeError: Subtraction, the '-' operator, with a bool tensor is not supported. If you are trying to invert a mask, use the '~' or 'logical_not()' operator instead.`: Go to [here](https://github.com/mks0601/I2L-MeshNet_RELEASE/issues/6#issuecomment-675152527)

-* `TypeError: startswith first arg must be bytes or a tuple of bytes, not str.`: Go to [here](https://github.com/mcfletch/pyopengl/issues/27). It seems that this solution only works for RTX3090. If it works for V100 or A100 in your case, please tell me in the issue :)
+* `TypeError: startswith first arg must be bytes or a tuple of bytes, not str.`: Go to [here](https://github.com/mcfletch/pyopengl/issues/27).
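
For context on the first troubleshooting item above, the bool-tensor error comes from negating or subtracting a boolean mask with `-`, which recent PyTorch versions reject. A minimal sketch of the supported alternatives (the tensor name is illustrative, not code from this repo):

```python
import torch

mask = torch.tensor([True, False, True])

# inverted = 1 - mask   # raises the RuntimeError above on recent PyTorch
inverted = ~mask                   # supported: bitwise not on a bool tensor
inverted_alt = mask.logical_not()  # equivalent and more explicit
```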

### Acknowledgement

5 changes: 5 additions & 0 deletions install.sh
@@ -0,0 +1,5 @@
+#!/bin/bash
+pip install openmim
+mim install mmcv-full==1.7.1
+pip install -r requirements.txt
+cd main/transformer_utils && python setup.py install
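
For reference, a minimal way to use the new script (assuming a working CUDA/PyTorch environment is already set up):

```shell
# Run from the repository root; the script cd's into main/transformer_utils itself.
bash install.sh
# Optional sanity check that mmcv-full was installed:
python -c "import mmcv; print(mmcv.__version__)"
```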
2 changes: 1 addition & 1 deletion main/test.py
@@ -10,7 +10,7 @@ def parse_args():
    parser.add_argument('--exp_name', type=str, default='output/test')
    parser.add_argument('--test_batch_size', type=int, default=32)
    parser.add_argument('--encoder_setting', type=str, default='osx_l', choices=['osx_b', 'osx_l'])
-    parser.add_argument('--decoder_setting', type=str, default='normal', choices=['normal', 'wo_face_decoder', 'wo_decoder'])
+    parser.add_argument('--decoder_setting', type=str, default='wo_face_decoder', choices=['normal', 'wo_face_decoder', 'wo_decoder'])
    parser.add_argument('--testset', type=str, default='EHF')
    parser.add_argument('--agora_benchmark', action='store_true')
    parser.add_argument('--pretrained_model_path', type=str, default='../pretrained_models/osx_l.pth.tar')
2 changes: 1 addition & 1 deletion main/train.py
@@ -13,7 +13,7 @@ def parse_args():
    parser.add_argument('--end_epoch', type=int, default=14)
    parser.add_argument('--train_batch_size', type=int, default=32)
    parser.add_argument('--encoder_setting', type=str, default='osx_l', choices=['osx_b', 'osx_l'])
-    parser.add_argument('--decoder_setting', type=str, default='normal', choices=['normal', 'wo_face_decoder', 'wo_decoder'])
+    parser.add_argument('--decoder_setting', type=str, default='wo_face_decoder', choices=['normal', 'wo_face_decoder', 'wo_decoder'])
    parser.add_argument('--agora_benchmark', action='store_true')
    parser.add_argument('--pretrained_model_path', type=str, default='../pretrained_models/osx_l.pth.tar')
    args = parser.parse_args()
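Because both scripts now default `--decoder_setting` to `wo_face_decoder`, the previous behaviour has to be requested explicitly. A hedged example of overriding the new default from the `main/` directory (paths are the defaults shown above; any other flags, e.g. GPU selection, are omitted):

```shell
# Train / evaluate with the full decoder instead of the new wo_face_decoder default
python train.py --encoder_setting osx_l --decoder_setting normal
python test.py --encoder_setting osx_l --decoder_setting normal \
    --pretrained_model_path ../pretrained_models/osx_l.pth.tar
```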
3 changes: 1 addition & 2 deletions main/transformer_utils/mmpose/core/visualization/image.py
@@ -17,13 +17,12 @@
    has_trimesh = False

try:
-    os.environ['PYOPENGL_PLATFORM'] = 'osmesa'
+    os.environ['PYOPENGL_PLATFORM'] = 'egl'
    import pyrender
    has_pyrender = True
except (ImportError, ModuleNotFoundError):
    has_pyrender = False
-

def imshow_bboxes(img,
                  bboxes,
                  labels=None,
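The core of the rendering fix is the platform switch above: `osmesa` falls back to software rendering, while `egl` gives pyrender a headless GPU context, which is what resolves the failures reported on V100/A100 machines. A minimal standalone check that the EGL backend initializes (not code from this repo; assumes `pyrender`, `trimesh`, `numpy`, and an EGL-capable driver are installed):

```python
import os

# Must be set before pyrender / PyOpenGL are imported.
os.environ['PYOPENGL_PLATFORM'] = 'egl'

import numpy as np
import pyrender
import trimesh

# Render a simple mesh offscreen; success means the headless EGL context works.
scene = pyrender.Scene()
scene.add(pyrender.Mesh.from_trimesh(trimesh.creation.icosphere()))
camera_pose = np.eye(4)
camera_pose[2, 3] = 3.0  # move the camera back along +z
scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3.0), pose=camera_pose)
renderer = pyrender.OffscreenRenderer(viewport_width=256, viewport_height=256)
color, depth = renderer.render(scene)
renderer.delete()
print(color.shape)  # (256, 256, 3) when rendering succeeds
```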
