Moved examples to examples/torch dir and tests to tests/torch dir (#725)
* Moved examples to examples/torch dir

* Moved tests to tests/torch dir

* Update paths

* Fixed tests

* Added init
l-bat authored Jun 3, 2021
1 parent 7e77f7d commit aafe4bf
Showing 771 changed files with 513 additions and 482 deletions.
2 changes: 1 addition & 1 deletion .gitignore
@@ -108,4 +108,4 @@ ENV/
*.tar

# object detection eval results
examples/object_detection/eval/
examples/torch/object_detection/eval/
8 changes: 4 additions & 4 deletions README.md
@@ -65,9 +65,9 @@ For more details about the framework architecture, refer to the [NNCFArchitectur
For a quicker start with NNCF-powered compression, you can also try the sample scripts, each of which provides a basic training pipeline for classification, semantic segmentation, or object detection.

To run the samples please refer to the corresponding tutorials:
- [Image Classification sample](examples/classification/README.md)
- [Object Detection sample](examples/object_detection/README.md)
- [Semantic Segmentation sample](examples/semantic_segmentation/README.md)
- [Image Classification sample](examples/torch/classification/README.md)
- [Object Detection sample](examples/torch/object_detection/README.md)
- [Semantic Segmentation sample](examples/torch/semantic_segmentation/README.md)

## Third-party repository integration
NNCF may be straightforwardly integrated into training/evaluation pipelines of third-party repositories.
@@ -116,7 +116,7 @@ Use one of the Dockerfiles in the [docker](./docker) directory to build an image

**NOTE**: If you want to use sample training scripts provided in the NNCF repository under `examples`, you should install the corresponding Python package dependencies:
```
pip install -r examples/requirements.txt
pip install -r examples/torch/requirements.txt
```

## Contributing
2 changes: 1 addition & 1 deletion docs/compression_algorithms/Quantization.md
@@ -223,7 +223,7 @@ For automatic mixed-precision selection it's recommended to use the following te

Note that the optimizer parameters are model-specific; this template contains optimal ones for ResNet-like models.

Here's an [example](../../examples/classification/configs/mixed_precision/squeezenet1_1_imagenet_mixed_int_hawq.json) of
Here's an [example](../../examples/torch/classification/configs/mixed_precision/squeezenet1_1_imagenet_mixed_int_hawq.json) of
using the template in the full configuration file.
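
By analogy with the classification sample commands shown later in this commit, a run that picks up the linked mixed-precision config from the `examples/torch/classification` folder might look roughly as follows (the `--pretrained` flag is an assumption carried over from the other quantization examples):

```
python main.py -m train --config configs/mixed_precision/squeezenet1_1_imagenet_mixed_int_hawq.json --data /data/imagenet/ --pretrained
```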

This template uses the `plateau` scheduler. Though it usually leads to a lot of epochs of tuning for achieving a good
2 changes: 0 additions & 2 deletions examples/common/models/__init__.py

This file was deleted.

File renamed without changes.
@@ -16,7 +16,7 @@ This sample demonstrates a DL model compression in case of an image-classificati
To work with the sample you should install the corresponding Python package dependencies

```
pip install -r examples/requirements.txt
pip install -r examples/torch/requirements.txt
```

## Quantize FP32 Pretrained Model
@@ -30,7 +30,7 @@ To prepare the ImageNet dataset, refer to the following [tutorial](https://githu
#### Run Classification Sample

- If you did not install the package, add the repository root folder to the `PYTHONPATH` environment variable.
- Go to the `examples/classification` folder.
- Go to the `examples/torch/classification` folder.
- Run the following command to start compression with fine-tuning on GPUs:
```
python main.py -m train --config configs/quantization/mobilenet_v2_imagenet_int8.json --data /data/imagenet/ --log-dir=../../results/quantization/mobilenet_v2_int8/
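# A hypothetical follow-up step: evaluate the fine-tuned checkpoint with the same script.
# The `-m test` mode and `--resume` flag are assumptions about the sample's CLI and may differ.
python main.py -m test --config configs/quantization/mobilenet_v2_imagenet_int8.json --data /data/imagenet/ --resume <path_to_compressed_checkpoint>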
@@ -88,7 +88,7 @@ To export a model to the OpenVINO IR and run it using the Intel® Deep Learning
#### Binarization
As an example of NNCF convolution binarization capabilities, you may use the configs in `examples/classification/configs/binarization` to binarize ResNet18. Use the same steps/command line parameters as for quantization (for best results, specify `--pretrained`), except for the actual binarization config path.
As an example of NNCF convolution binarization capabilities, you may use the configs in `examples/torch/classification/configs/binarization` to binarize ResNet18. Use the same steps/command line parameters as for quantization (for best results, specify `--pretrained`), except for the actual binarization config path.
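
Assuming the binarization configs follow the same naming scheme as the quantization ones, a run could look like the sketch below; the exact file name under `configs/binarization` is an assumption and should be checked against the directory contents:

```
python main.py -m train --config configs/binarization/resnet18_imagenet_binarization_xnor.json --data /data/imagenet/ --pretrained
```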
### Results for binarization
|Model|Compression algorithm|Dataset|PyTorch compressed accuracy|NNCF config file|PyTorch Checkpoint|
File renamed without changes.
@@ -1,6 +1,6 @@
import torch.backends.cudnn as cudnn

from examples.common.model_loader import load_resuming_model_state_dict_and_checkpoint_from_path
from examples.torch.common.model_loader import load_resuming_model_state_dict_and_checkpoint_from_path
from nncf.torch.utils import manual_seed


@@ -36,23 +36,23 @@
from torchvision.models import InceptionOutputs

from nncf.common.utils.tensorboard import prepare_for_tensorboard
from examples.common.argparser import get_common_argument_parser
from examples.common.example_logger import logger
from examples.common.execution import ExecutionMode, get_execution_mode, \
from examples.torch.common.argparser import get_common_argument_parser
from examples.torch.common.example_logger import logger
from examples.torch.common.execution import ExecutionMode, get_execution_mode, \
prepare_model_for_execution, start_worker
from examples.common.model_loader import load_model
from examples.common.optimizer import get_parameter_groups, make_optimizer
from examples.common.sample_config import SampleConfig, create_sample_config
from examples.common.utils import configure_logging, configure_paths, create_code_snapshot, \
from examples.torch.common.model_loader import load_model
from examples.torch.common.optimizer import get_parameter_groups, make_optimizer
from examples.torch.common.sample_config import SampleConfig, create_sample_config
from examples.torch.common.utils import configure_logging, configure_paths, create_code_snapshot, \
print_args, make_additional_checkpoints, get_name, is_staged_quantization, \
is_pretrained_model_requested, log_common_mlflow_params, SafeMLFLow, MockDataset, configure_device
from examples.common.utils import write_metrics
from examples.torch.common.utils import write_metrics
from nncf import create_compressed_model
from nncf.api.compression import CompressionStage
from nncf.torch.dynamic_graph.graph_tracer import create_input_infos
from nncf.torch.initialization import register_default_init_args, default_criterion_fn
from nncf.torch.utils import safe_thread_call, is_main_process
from examples.classification.common import set_seed, load_resuming_checkpoint
from examples.torch.classification.common import set_seed, load_resuming_checkpoint

model_names = sorted(name for name in models.__dict__
if name.islower() and not name.startswith("__")
@@ -99,7 +99,7 @@ def main(argv):
if not is_staged_quantization(config):
start_worker(main_worker, config)
else:
from examples.classification.staged_quantization_worker import staged_quantization_main_worker
from examples.torch.classification.staged_quantization_worker import staged_quantization_main_worker
start_worker(staged_quantization_main_worker, config)


File renamed without changes.
@@ -25,20 +25,20 @@
from torchvision.models import InceptionOutputs

from nncf.common.utils.tensorboard import prepare_for_tensorboard
from examples.classification.main import create_data_loaders, validate, AverageMeter, accuracy, get_lr, \
from examples.torch.classification.main import create_data_loaders, validate, AverageMeter, accuracy, get_lr, \
create_datasets, inception_criterion_fn
from examples.common.example_logger import logger
from examples.common.execution import ExecutionMode, prepare_model_for_execution
from examples.common.model_loader import load_model
from examples.common.utils import configure_logging, print_args, make_additional_checkpoints, get_name, \
from examples.torch.common.example_logger import logger
from examples.torch.common.execution import ExecutionMode, prepare_model_for_execution
from examples.torch.common.model_loader import load_model
from examples.torch.common.utils import configure_logging, print_args, make_additional_checkpoints, get_name, \
is_pretrained_model_requested, log_common_mlflow_params, SafeMLFLow, configure_device
from nncf.torch.binarization.algo import BinarizationController
from nncf.api.compression import CompressionStage
from nncf.torch.initialization import register_default_init_args, default_criterion_fn
from nncf.torch.model_creation import create_compressed_model
from nncf.torch.quantization.algo import QuantizationController
from nncf.torch.utils import is_main_process
from examples.classification.common import set_seed, load_resuming_checkpoint
from examples.torch.classification.common import set_seed, load_resuming_checkpoint


class KDLossCalculator:
File renamed without changes.
@@ -11,7 +11,7 @@
limitations under the License.
"""

from examples.common.sample_config import CustomArgumentParser
from examples.torch.common.sample_config import CustomArgumentParser
from nncf.common.hardware.config import HWConfigType


@@ -18,7 +18,7 @@
from torch import distributed as dist
from torch.utils.data import Sampler

from examples.common.example_logger import logger
from examples.torch.common.example_logger import logger


def configure_distributed(config):
File renamed without changes.
@@ -13,7 +13,7 @@

import torch
import torch.multiprocessing as mp
from examples.common.sample_config import SampleConfig
from examples.torch.common.sample_config import SampleConfig


class ExecutionMode:
@@ -16,12 +16,12 @@
import torchvision.models
from functools import partial

import examples.common.models as custom_models
from examples.common.example_logger import logger
import examples.common.restricted_pickle_module as restricted_pickle_module
import examples.torch.common.models as custom_models
from examples.torch.common.example_logger import logger
import examples.torch.common.restricted_pickle_module as restricted_pickle_module
from nncf import load_state
from nncf.torch.utils import safe_thread_call
from examples.classification.models.mobilenet_v2_32x32 import MobileNetV2For32x32
from examples.torch.classification.models.mobilenet_v2_32x32 import MobileNetV2For32x32


def load_model(model, pretrained=True, num_classes=1000, model_params=None,
2 changes: 2 additions & 0 deletions examples/torch/common/models/__init__.py
@@ -0,0 +1,2 @@
from examples.torch.common.models.segmentation import *
from examples.torch.common.models.classification import *
@@ -17,7 +17,7 @@
import torch.nn as nn
import torch

from examples.common.example_logger import logger
from examples.torch.common.example_logger import logger

class InitialBlock(nn.Module):
"""The initial block is composed of two branches:
@@ -26,7 +26,7 @@
import torch
import torch.nn.functional as F

from examples.common.example_logger import logger
from examples.torch.common.example_logger import logger
from nncf.torch.utils import is_tracing_state


@@ -22,7 +22,7 @@

from nncf.torch.utils import is_tracing_state

from examples.common.example_logger import logger
from examples.torch.common.example_logger import logger

class UNet(nn.Module):
def __init__(
File renamed without changes.
File renamed without changes.
File renamed without changes.
8 changes: 4 additions & 4 deletions examples/common/utils.py → examples/torch/common/utils.py
@@ -27,14 +27,14 @@
from PIL import Image
import torch.utils.data as data

from examples.common.distributed import configure_distributed
from examples.common.execution import ExecutionMode, get_device
from examples.common.sample_config import SampleConfig
from examples.torch.common.distributed import configure_distributed
from examples.torch.common.execution import ExecutionMode, get_device
from examples.torch.common.sample_config import SampleConfig
from torch.utils.tensorboard import SummaryWriter
import mlflow
import torch

from examples.common.example_logger import logger as default_logger
from examples.torch.common.example_logger import logger as default_logger
from nncf.torch.utils import is_main_process

# pylint: disable=import-error
@@ -14,7 +14,7 @@ This sample demonstrates DL model compression capabilities for object detection
To work with the sample you should install the corresponding Python package dependencies

```
pip install -r examples/requirements.txt
pip install -r examples/torch/requirements.txt
```

## Quantize FP32 pretrained model
@@ -25,7 +25,7 @@ This scenario demonstrates quantization with fine-tuning of SSD300 on VOC datase

#### Run object detection sample
- If you did not install the package then add the repository root folder to the `PYTHONPATH` environment variable
- Navigate to the `examples/object_detection` folder
- Navigate to the `examples/torch/object_detection` folder
- Run the following command to start compression with fine-tuning on GPUs:
`python main.py -m train --config configs/ssd300_vgg_int8_voc.json --data <path_to_dataset> --log-dir=../../results/quantization/ssd300_int8`
It may take a few epochs to get the baseline accuracy results.
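
As a hedged sketch, the same script should also be able to evaluate the resulting checkpoint; the `-m test` mode and `--resume` flag are assumptions about the sample's CLI and may differ:

```
python main.py -m test --config configs/ssd300_vgg_int8_voc.json --data <path_to_dataset> --resume <path_to_compressed_checkpoint>
```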
File renamed without changes.
@@ -16,9 +16,9 @@
import cv2
import numpy as np
import torch
from examples.object_detection.datasets.coco import COCODataset
from examples.object_detection.datasets.voc0712 import VOCDetection, VOCAnnotationTransform
from examples.object_detection.utils.augmentations import SSDAugmentation
from examples.torch.object_detection.datasets.coco import COCODataset
from examples.torch.object_detection.datasets.voc0712 import VOCDetection, VOCAnnotationTransform
from examples.torch.object_detection.utils.augmentations import SSDAugmentation
from nncf.torch.dynamic_graph.graph_tracer import create_input_infos

VOC_MEAN = (0.406, 0.456, 0.485)
@@ -21,7 +21,7 @@
import torch
import torch.utils.data as data

from examples.common.example_logger import logger
from examples.torch.common.example_logger import logger

COCO_CLASSES = ( # always index 0
"person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light",
@@ -23,7 +23,7 @@
from torch import distributed as dist
from torch.nn import functional as F

from examples.common.example_logger import logger
from examples.torch.common.example_logger import logger
from nncf.torch.utils import is_main_process, is_dist_avail_and_initialized, get_world_size


@@ -15,7 +15,7 @@
from torch import nn
from torch.nn import functional as F

from examples.object_detection.layers import DetectionOutput, PriorBox
from examples.torch.object_detection.layers import DetectionOutput, PriorBox
from nncf.torch.dynamic_graph.context import no_nncf_trace


@@ -20,27 +20,27 @@
import torch
import torch.utils.data as data

from examples.common import restricted_pickle_module
from examples.common.model_loader import load_resuming_model_state_dict_and_checkpoint_from_path
from examples.common.sample_config import create_sample_config, SampleConfig
from examples.torch.common import restricted_pickle_module
from examples.torch.common.model_loader import load_resuming_model_state_dict_and_checkpoint_from_path
from examples.torch.common.sample_config import create_sample_config, SampleConfig
from torch.optim.lr_scheduler import ReduceLROnPlateau

from examples.common.argparser import get_common_argument_parser
from examples.common.distributed import DistributedSampler
from examples.common.example_logger import logger
from examples.common.execution import get_execution_mode
from examples.common.execution import prepare_model_for_execution, start_worker
from examples.torch.common.argparser import get_common_argument_parser
from examples.torch.common.distributed import DistributedSampler
from examples.torch.common.example_logger import logger
from examples.torch.common.execution import get_execution_mode
from examples.torch.common.execution import prepare_model_for_execution, start_worker
from nncf.api.compression import CompressionStage
from nncf.torch.initialization import register_default_init_args
from examples.common.optimizer import get_parameter_groups, make_optimizer
from examples.common.utils import get_name, make_additional_checkpoints, configure_paths, \
from examples.torch.common.optimizer import get_parameter_groups, make_optimizer
from examples.torch.common.utils import get_name, make_additional_checkpoints, configure_paths, \
create_code_snapshot, is_on_first_rank, configure_logging, print_args, is_pretrained_model_requested, \
log_common_mlflow_params, SafeMLFLow, configure_device
from examples.common.utils import write_metrics
from examples.object_detection.dataset import detection_collate, get_testing_dataset, get_training_dataset
from examples.object_detection.eval import test_net
from examples.object_detection.layers.modules import MultiBoxLoss
from examples.object_detection.model import build_ssd
from examples.torch.common.utils import write_metrics
from examples.torch.object_detection.dataset import detection_collate, get_testing_dataset, get_training_dataset
from examples.torch.object_detection.eval import test_net
from examples.torch.object_detection.layers.modules import MultiBoxLoss
from examples.torch.object_detection.model import build_ssd
from nncf import create_compressed_model, load_state
from nncf.torch.dynamic_graph.graph_tracer import create_input_infos
from nncf.torch.utils import is_main_process
@@ -11,8 +11,8 @@
limitations under the License.
"""

from examples.object_detection.models.ssd_mobilenet import build_ssd_mobilenet
from examples.object_detection.models.ssd_vgg import build_ssd_vgg
from examples.torch.object_detection.models.ssd_mobilenet import build_ssd_mobilenet
from examples.torch.object_detection.models.ssd_vgg import build_ssd_vgg


def build_ssd(net_name, cfg, ssd_dim, num_classes, config):
@@ -14,9 +14,9 @@
import torch
import torch.nn as nn

from examples.common import restricted_pickle_module
from examples.common.example_logger import logger
from examples.object_detection.layers.modules.ssd_head import MultiOutputSequential, SSDDetectionOutput
from examples.torch.common import restricted_pickle_module
from examples.torch.common.example_logger import logger
from examples.torch.object_detection.layers.modules.ssd_head import MultiOutputSequential, SSDDetectionOutput
from nncf.torch.checkpoint_loading import load_state


@@ -15,10 +15,10 @@
import torch
import torch.nn as nn

from examples.common import restricted_pickle_module
from examples.common.example_logger import logger
from examples.object_detection.layers import L2Norm
from examples.object_detection.layers.modules.ssd_head import MultiOutputSequential, SSDDetectionOutput
from examples.torch.common import restricted_pickle_module
from examples.torch.common.example_logger import logger
from examples.torch.object_detection.layers import L2Norm
from examples.torch.object_detection.layers.modules.ssd_head import MultiOutputSequential, SSDDetectionOutput
from nncf.torch.checkpoint_loading import load_state

BASE_NUM_OUTPUTS = {
File renamed without changes.
File renamed without changes.
@@ -14,7 +14,7 @@ This sample demonstrates DL model compression capabilities for semantic segmenta
To work with the sample you should install the corresponding Python package dependencies

```
pip install -r examples/requirements.txt
pip install -r examples/torch/requirements.txt
```

## Quantize FP32 pretrained model
@@ -25,7 +25,7 @@ This scenario demonstrates quantization with fine-tuning of UNet on Mapillary Vi

#### Run semantic segmentation sample
- If you did not install the package then add the repository root folder to the `PYTHONPATH` environment variable
- Navigate to the `examples/segmentation` folder
- Navigate to the `examples/torch/segmentation` folder
- Run the following command to start compression with fine-tuning on GPUs:
`python main.py -m train --config configs/unet_mapillary_int8.json --data <path_to_dataset> --weights <path_to_fp32_model_checkpoint>`
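
By analogy with the other samples, a trained checkpoint can presumably also be exported to ONNX with the same script; the `-m export` mode and `--to-onnx` flag are assumptions about the sample's CLI and may differ:

```
python main.py -m export --config configs/unet_mapillary_int8.json --resume <path_to_compressed_checkpoint> --to-onnx=<path_to_output_onnx>
```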
