Commit

remove USE_CUDA (huggingface#7861)
stas00 authored Oct 19, 2020
1 parent ea1507f commit 4eb61f8
Showing 6 changed files with 16 additions and 20 deletions.
2 changes: 0 additions & 2 deletions .github/workflows/self-push.yml
@@ -59,7 +59,6 @@ jobs:
TF_FORCE_GPU_ALLOW_GROWTH: "true"
# TF_GPU_MEMORY_LIMIT: 4096
OMP_NUM_THREADS: 1
-USE_CUDA: yes
run: |
source .env/bin/activate
python -m pytest -n 2 --dist=loadfile -s ./tests/
@@ -110,7 +109,6 @@ jobs:
TF_FORCE_GPU_ALLOW_GROWTH: "true"
# TF_GPU_MEMORY_LIMIT: 4096
OMP_NUM_THREADS: 1
-USE_CUDA: yes
run: |
source .env/bin/activate
python -m pytest -n 2 --dist=loadfile -s ./tests/
4 changes: 0 additions & 4 deletions .github/workflows/self-scheduled.yml
@@ -57,7 +57,6 @@ jobs:
TF_FORCE_GPU_ALLOW_GROWTH: "true"
OMP_NUM_THREADS: 1
RUN_SLOW: yes
-USE_CUDA: yes
run: |
source .env/bin/activate
python -m pytest -n 1 --dist=loadfile -s ./tests/
@@ -67,7 +66,6 @@ jobs:
TF_FORCE_GPU_ALLOW_GROWTH: "true"
OMP_NUM_THREADS: 1
RUN_SLOW: yes
-USE_CUDA: yes
run: |
source .env/bin/activate
pip install -r examples/requirements.txt
@@ -120,7 +118,6 @@ jobs:
TF_FORCE_GPU_ALLOW_GROWTH: "true"
OMP_NUM_THREADS: 1
RUN_SLOW: yes
-USE_CUDA: yes
run: |
source .env/bin/activate
python -m pytest -n 1 --dist=loadfile -s ./tests/
@@ -130,7 +127,6 @@ jobs:
TF_FORCE_GPU_ALLOW_GROWTH: "true"
OMP_NUM_THREADS: 1
RUN_SLOW: yes
-USE_CUDA: yes
run: |
source .env/bin/activate
pip install -r examples/requirements.txt
8 changes: 4 additions & 4 deletions docs/source/testing.rst
@@ -22,12 +22,12 @@ How transformers are tested

* `self-hosted (push) <https://github.com/huggingface/transformers/blob/master/.github/workflows/self-push.yml>`__: runs fast tests on GPU only on commits on ``master``. It only runs if a commit on ``master`` has updated the code in one of the following folders: ``src``, ``tests``, ``.github`` (to prevent running on added model cards, notebooks, etc.)

-* `self-hosted runner <https://github.com/huggingface/transformers/blob/master/.github/workflows/self-scheduled.yml>`__: runs slow tests on ``tests`` and ``examples``:
+* `self-hosted runner <https://github.com/huggingface/transformers/blob/master/.github/workflows/self-scheduled.yml>`__: runs normal and slow tests on GPU in ``tests`` and ``examples``:

.. code-block:: bash
-RUN_SLOW=1 USE_CUDA=1 pytest tests/
-RUN_SLOW=1 USE_CUDA=1 pytest examples/
+RUN_SLOW=1 pytest tests/
+RUN_SLOW=1 pytest examples/
The results can be observed `here <https://github.com/huggingface/transformers/actions>`__.
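The ``RUN_SLOW`` switch in the commands above is an ordinary truthy environment flag. As a hedged illustration (the real helper in ``testing_utils.py`` is ``parse_flag_from_env``; this standalone version only approximates its behavior):

```python
import os

def parse_flag_from_env(key, default=False):
    """Interpret an env var like RUN_SLOW=1/yes/true as a boolean flag.

    Approximation of the helper of the same name in testing_utils.py;
    the accepted truthy spellings here are illustrative.
    """
    value = os.environ.get(key)
    if value is None:
        return default
    return value.lower() in ("1", "y", "yes", "t", "true")

os.environ["RUN_SLOW"] = "1"
print(parse_flag_from_env("RUN_SLOW"))   # flag set -> True
```

Unset or `0`-valued variables fall back to the ``default`` argument, which is why plain ``pytest tests/`` skips the slow tests.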

@@ -393,7 +393,7 @@ On a GPU-enabled setup, to test in CPU-only mode add ``CUDA_VISIBLE_DEVICES=""``
CUDA_VISIBLE_DEVICES="" pytest tests/test_logging.py
-or if you have multiple gpus, you can tell which one to use in this test session, e.g. to use only the second gpu if you have gpus ``0`` and ``1``, you can run:
+or if you have multiple gpus, you can specify which one is to be used by ``pytest``. For example, to use only the second gpu if you have gpus ``0`` and ``1``, you can run:

.. code-block:: bash
4 changes: 2 additions & 2 deletions scripts/fsmt/tests-to-run.sh
@@ -2,5 +2,5 @@

# these scripts need to be run before any changes to FSMT-related code - it should cover all bases

-USE_CUDA=0 RUN_SLOW=1 pytest --disable-warnings tests/test_tokenization_fsmt.py tests/test_configuration_auto.py tests/test_modeling_fsmt.py examples/seq2seq/test_fsmt_bleu_score.py
-USE_CUDA=1 RUN_SLOW=1 pytest --disable-warnings tests/test_tokenization_fsmt.py tests/test_configuration_auto.py tests/test_modeling_fsmt.py examples/seq2seq/test_fsmt_bleu_score.py
+CUDA_VISIBLE_DEVICES="" RUN_SLOW=1 pytest --disable-warnings tests/test_tokenization_fsmt.py tests/test_configuration_auto.py tests/test_modeling_fsmt.py examples/seq2seq/test_fsmt_bleu_score.py
+RUN_SLOW=1 pytest --disable-warnings tests/test_tokenization_fsmt.py tests/test_configuration_auto.py tests/test_modeling_fsmt.py examples/seq2seq/test_fsmt_bleu_score.py
10 changes: 6 additions & 4 deletions src/transformers/testing_utils.py
@@ -187,8 +187,10 @@ def require_torch_tpu(test_case):


if _torch_available:
-# Set the USE_CUDA environment variable to select a GPU.
-torch_device = "cuda" if parse_flag_from_env("USE_CUDA") else "cpu"
+# Set env var CUDA_VISIBLE_DEVICES="" to force cpu-mode
+import torch
+
+torch_device = "cuda" if torch.cuda.is_available() else "cpu"
else:
torch_device = None
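With ``USE_CUDA`` gone, device selection keys off ``torch.cuda.is_available()``, which itself honors ``CUDA_VISIBLE_DEVICES``. A minimal sketch of the new logic, with an import guard standing in for the ``_torch_available`` check so it also runs where torch is not installed:

```python
# Sketch of the selection logic after this commit; the import guard here
# stands in for the `if _torch_available:` branch in testing_utils.py.
try:
    import torch
    # torch.cuda.is_available() returns False when CUDA_VISIBLE_DEVICES=""
    # is exported, so CPU-only runs no longer need a dedicated flag.
    torch_device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    torch_device = None  # mirrors the `else: torch_device = None` branch

print(torch_device)
```

The practical effect: GPU use is now opt-out (via ``CUDA_VISIBLE_DEVICES=""``) rather than opt-in (via ``USE_CUDA=1``).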

@@ -485,9 +487,9 @@ def tearDown(self):
def mockenv(**kwargs):
"""this is a convenience wrapper, that allows this:
-@mockenv(USE_CUDA=True, USE_TF=False)
+@mockenv(RUN_SLOW=True, USE_TF=False)
def test_something():
-use_cuda = os.getenv("USE_CUDA", False)
+run_slow = os.getenv("RUN_SLOW", False)
use_tf = os.getenv("USE_TF", False)
"""
return unittest.mock.patch.dict(os.environ, kwargs)
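Since ``mockenv`` is just a thin wrapper over ``unittest.mock.patch.dict``, it works both as a decorator (as in the docstring above) and as a context manager. A self-contained sketch; note that values patched into ``os.environ`` must be strings, so the example uses ``"1"`` rather than ``True``:

```python
import os
import unittest
import unittest.mock

def mockenv(**kwargs):
    """Temporarily set environment variables for one test (string values only)."""
    return unittest.mock.patch.dict(os.environ, kwargs)

class EnvTest(unittest.TestCase):
    @mockenv(RUN_SLOW="1", USE_TF="0")
    def test_env_is_patched(self):
        # inside the decorated test, the patched values are visible
        self.assertEqual(os.environ["RUN_SLOW"], "1")
        self.assertEqual(os.environ["USE_TF"], "0")
```

``patch.dict`` restores ``os.environ`` on exit, so keys added for the test disappear once it returns.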
8 changes: 4 additions & 4 deletions tests/test_skip_decorators.py
@@ -23,10 +23,10 @@
# the following 4 should be run. But since we have different CI jobs running
# different configs, all combinations should get covered
#
-# USE_CUDA=1 RUN_SLOW=1 pytest -rA tests/test_skip_decorators.py
-# USE_CUDA=0 RUN_SLOW=1 pytest -rA tests/test_skip_decorators.py
-# USE_CUDA=0 RUN_SLOW=0 pytest -rA tests/test_skip_decorators.py
-# USE_CUDA=1 RUN_SLOW=0 pytest -rA tests/test_skip_decorators.py
+# RUN_SLOW=1 pytest -rA tests/test_skip_decorators.py
+# RUN_SLOW=1 CUDA_VISIBLE_DEVICES="" pytest -rA tests/test_skip_decorators.py
+# RUN_SLOW=0 pytest -rA tests/test_skip_decorators.py
+# RUN_SLOW=0 CUDA_VISIBLE_DEVICES="" pytest -rA tests/test_skip_decorators.py

import os
import unittest
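The invocations above exercise env-driven skip decorators such as ``slow``. A standalone sketch of the pattern (the helper below is illustrative and only approximates the decorator of the same name in ``testing_utils.py``):

```python
import os
import unittest

def slow(test_case):
    """Skip the test unless RUN_SLOW is set to a truthy value (illustrative)."""
    run_slow = os.environ.get("RUN_SLOW", "0").lower() in ("1", "y", "yes", "true")
    # skipUnless marks the test as skipped when the condition is False
    return unittest.skipUnless(run_slow, "test is slow; set RUN_SLOW=1 to enable")(test_case)

class DecoratorTest(unittest.TestCase):
    @slow
    def test_heavy_path(self):
        self.assertTrue(True)
```

The decorator is evaluated at import time, which is why the combinations above are run as separate pytest sessions rather than toggled within one.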
