
Commit d2c610a
Update guide and reference of NAS (microsoft#1972)
ultmaster authored Feb 8, 2020
1 parent d838895 commit d2c610a
Showing 28 changed files with 1,119 additions and 586 deletions.
4 changes: 0 additions & 4 deletions docs/en_US/NAS/CDARTS.md
@@ -46,16 +46,12 @@ bash run_retrain_cifar.sh
.. autoclass:: nni.nas.pytorch.cdarts.CdartsTrainer
    :members:

    .. automethod:: __init__

.. autoclass:: nni.nas.pytorch.cdarts.RegularizedDartsMutator
    :members:

.. autoclass:: nni.nas.pytorch.cdarts.DartsDiscreteMutator
    :members:

    .. automethod:: __init__

.. autoclass:: nni.nas.pytorch.cdarts.RegularizedMutatorParallel
    :members:
```
6 changes: 4 additions & 2 deletions docs/en_US/NAS/DARTS.md
@@ -43,8 +43,10 @@ python3 retrain.py --arc-checkpoint ./checkpoints/epoch_49.json
.. autoclass:: nni.nas.pytorch.darts.DartsTrainer
    :members:

    .. automethod:: __init__

.. autoclass:: nni.nas.pytorch.darts.DartsMutator
    :members:
```
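
For quick reference, here is a minimal sketch of driving the trainer. `model`, `dataset_train`, and `dataset_valid` are placeholders the user supplies, and the keyword names are assumptions taken from the DARTS example script, not a definitive API:

```python
import torch.nn as nn
import torch.optim as optim
from nni.nas.pytorch.darts import DartsTrainer

# Sketch only: `model` must be a network containing mutables;
# `dataset_train` / `dataset_valid` are ordinary PyTorch datasets.
trainer = DartsTrainer(
    model,
    loss=nn.CrossEntropyLoss(),
    metrics=lambda output, target: {"acc": (output.argmax(1) == target).float().mean().item()},
    optimizer=optim.SGD(model.parameters(), lr=0.025, momentum=0.9),
    num_epochs=50,
    dataset_train=dataset_train,
    dataset_valid=dataset_valid,
)
trainer.train()                                # one-shot search
trainer.export("./checkpoints/epoch_49.json")  # dump the chosen architecture
```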

## Limitations

* DARTS doesn't support DataParallel, and it needs to be customized to support DistributedDataParallel.
4 changes: 0 additions & 4 deletions docs/en_US/NAS/ENAS.md
@@ -37,10 +37,6 @@ python3 search.py -h
.. autoclass:: nni.nas.pytorch.enas.EnasTrainer
    :members:

    .. automethod:: __init__

.. autoclass:: nni.nas.pytorch.enas.EnasMutator
    :members:

    .. automethod:: __init__
```
311 changes: 311 additions & 0 deletions docs/en_US/NAS/NasGuide.md

Large diffs are not rendered by default.

173 changes: 0 additions & 173 deletions docs/en_US/NAS/NasInterface.md

This file was deleted.

109 changes: 109 additions & 0 deletions docs/en_US/NAS/NasReference.md
@@ -0,0 +1,109 @@
# NAS Reference

```eval_rst
.. contents::
```

## Mutables

```eval_rst
.. autoclass:: nni.nas.pytorch.mutables.Mutable
:members:
.. autoclass:: nni.nas.pytorch.mutables.LayerChoice
:members:
.. autoclass:: nni.nas.pytorch.mutables.InputChoice
:members:
.. autoclass:: nni.nas.pytorch.mutables.MutableScope
:members:
```

### Utilities

```eval_rst
.. autofunction:: nni.nas.pytorch.utils.global_mutable_counting
```

## Mutators

```eval_rst
.. autoclass:: nni.nas.pytorch.base_mutator.BaseMutator
    :members:

.. autoclass:: nni.nas.pytorch.mutator.Mutator
    :members:
```
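
To make the contract concrete, below is a hedged sketch of a minimal custom mutator. It mirrors what `RandomMutator` (next section) provides out of the box; attribute names such as `mutable.length` and `mutable.n_candidates` are assumptions based on the 1.x mutable API:

```python
import torch
import torch.nn.functional as F
from nni.nas.pytorch.mutator import Mutator
from nni.nas.pytorch.mutables import LayerChoice, InputChoice

class MyRandomMutator(Mutator):
    """Toy mutator: samples one architecture uniformly at random on every reset()."""

    def sample_search(self):
        # Map each mutable's key to a boolean selection mask.
        result = {}
        for mutable in self.mutables:
            if isinstance(mutable, LayerChoice):
                index = torch.randint(high=mutable.length, size=(1,))
                result[mutable.key] = F.one_hot(index, num_classes=mutable.length).view(-1).bool()
            elif isinstance(mutable, InputChoice):
                result[mutable.key] = torch.randint(high=2, size=(mutable.n_candidates,)).bool()
        return result

    def sample_final(self):
        # This toy mutator has no notion of "best", so reuse random sampling.
        return self.sample_search()
```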

### Random Mutator

```eval_rst
.. autoclass:: nni.nas.pytorch.random.RandomMutator
    :members:
```
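
A possible usage pattern, e.g. uniform path sampling when training a supernet; `model`, `train_loader`, `criterion`, and `optimizer` are placeholders:

```python
from nni.nas.pytorch.random import RandomMutator

mutator = RandomMutator(model)  # `model`: a supernet containing mutables
for x, y in train_loader:
    mutator.reset()             # sample a fresh sub-architecture for this batch
    loss = criterion(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```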

### Utilities

```eval_rst
.. autoclass:: nni.nas.pytorch.utils.StructuredMutableTreeNode
    :members:
```

## Trainers

### Trainer

```eval_rst
.. autoclass:: nni.nas.pytorch.base_trainer.BaseTrainer
    :members:

.. autoclass:: nni.nas.pytorch.trainer.Trainer
    :members:
```

### Retrain

```eval_rst
.. autofunction:: nni.nas.pytorch.fixed.apply_fixed_architecture

.. autoclass:: nni.nas.pytorch.fixed.FixedArchitecture
    :members:
```
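
For instance, a retrain step might look like the following sketch, assuming the search phase exported `./checkpoints/epoch_49.json`:

```python
from nni.nas.pytorch.fixed import apply_fixed_architecture

model = Net()  # hypothetical: the same network definition used during search
# Fix every mutable to the choice recorded in the exported JSON, after which
# the model trains like an ordinary PyTorch module.
apply_fixed_architecture(model, "./checkpoints/epoch_49.json")
```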

### Distributed NAS

```eval_rst
.. autofunction:: nni.nas.pytorch.classic_nas.get_and_apply_next_architecture
.. autoclass:: nni.nas.pytorch.classic_nas.mutator.ClassicMutator
:members:
```
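
A trial for the distributed (classic) mode could be sketched as follows; `Net`, `train`, and `test` are hypothetical helpers standing in for an ordinary PyTorch training script:

```python
import nni
from nni.nas.pytorch.classic_nas import get_and_apply_next_architecture

model = Net()  # hypothetical network containing LayerChoice / InputChoice
# Receive the next architecture from the tuner and fix the model to it.
get_and_apply_next_architecture(model)
train(model)                          # hypothetical ordinary training loop
nni.report_final_result(test(model))  # hypothetical evaluation, reported to the tuner
```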

### Callbacks

```eval_rst
.. autoclass:: nni.nas.pytorch.callbacks.Callback
:members:
.. autoclass:: nni.nas.pytorch.callbacks.LRSchedulerCallback
:members:
.. autoclass:: nni.nas.pytorch.callbacks.ArchitectureCheckpoint
:members:
.. autoclass:: nni.nas.pytorch.callbacks.ModelCheckpoint
:members:
```
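
Callbacks are handed to a trainer at construction time. A sketch following the pattern of the DARTS example; `model` is a placeholder and the trainer line is indicative only:

```python
import torch.optim as optim
from nni.nas.pytorch.callbacks import ArchitectureCheckpoint, LRSchedulerCallback

optimizer = optim.SGD(model.parameters(), lr=0.025, momentum=0.9)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)
callbacks = [
    LRSchedulerCallback(scheduler),           # step the LR schedule once per epoch
    ArchitectureCheckpoint("./checkpoints"),  # dump the current architecture per epoch
]
# trainer = DartsTrainer(..., callbacks=callbacks)
```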

### Utilities

```eval_rst
.. autoclass:: nni.nas.pytorch.utils.AverageMeterGroup
:members:
.. autoclass:: nni.nas.pytorch.utils.AverageMeter
:members:
.. autofunction:: nni.nas.pytorch.utils.to_device
```
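
As an illustration, `AverageMeterGroup` keeps running averages of several named metrics at once, which is the pattern the built-in trainers use for logging. A small sketch:

```python
from nni.nas.pytorch.utils import AverageMeterGroup

meters = AverageMeterGroup()
for step in range(100):
    # update() takes a dict of the latest values; an average accumulates per key
    meters.update({"loss": 0.5, "acc": 0.9})
print(meters)  # prints the running averages of all tracked metrics
```
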
28 changes: 14 additions & 14 deletions docs/en_US/NAS/Overview.md
@@ -6,11 +6,7 @@ However, it takes great efforts to implement NAS algorithms, and it is hard to r

With this motivation, our ambition is to provide a unified architecture in NNI, to accelerate innovation on NAS, and to apply state-of-the-art algorithms to real-world problems faster.

With [the unified interface](./NasInterface.md), there are two modes for architecture search. [One](#supported-one-shot-nas-algorithms) is the so-called one-shot NAS, where a super-net is built based on the search space and one-shot training is used to generate a well-performing child model. [The other](./NasInterface.md#classic-distributed-search) is the classic search approach, where each child model in the search space runs as an independent trial, and its performance result is sent to the tuner, which then generates a new child model.

* [Supported One-shot NAS Algorithms](#supported-one-shot-nas-algorithms)
* [Classic Distributed NAS with NNI experiment](./NasInterface.md#classic-distributed-search)
* [NNI NAS Programming Interface](./NasInterface.md)
With the unified interface, there are two modes for architecture search. [One](#supported-one-shot-nas-algorithms) is the so-called one-shot NAS, where a super-net is built based on the search space and one-shot training is used to generate a well-performing child model. [The other](#supported-distributed-nas-algorithms) is the classic search approach, where each child model in the search space runs as an independent trial, and its performance result is sent to the tuner, which then generates a new child model.

## Supported One-shot NAS Algorithms

@@ -33,28 +29,32 @@ Here are some common dependencies to run the examples. PyTorch needs to be above
* PyTorch 1.2+
* git

## Use NNI API
## Supported Distributed NAS Algorithms

|Name|Brief Introduction of Algorithm|
|---|---|
| [SPOS](SPOS.md) | [Single Path One-Shot Neural Architecture Search with Uniform Sampling](https://arxiv.org/abs/1904.00420) constructs a simplified supernet trained with a uniform path sampling method, and applies an evolutionary algorithm to efficiently search for the best-performing architectures. |

NOTE: we are trying to support various NAS algorithms with a unified programming interface, and it is in a very experimental stage. This means the current programming interface may be updated in the future.
```eval_rst
.. Note:: SPOS is a two-stage algorithm, whose first stage is one-shot and whose second stage is distributed, leveraging the result of the first stage as a checkpoint.
```

### Programming interface
## Use NNI API

The programming interface for designing and searching a model is often demanded in two scenarios.

1. When designing a neural network, there may be multiple operation choices on a layer, sub-model, or connection, and it is undetermined which one, or which combination, performs best. So, users need an easy way to express the candidate layers or sub-models (see the sketch after this list).
2. When applying NAS to a neural network, users need a unified way to express the architecture search space, so that trial code does not have to be updated for different search algorithms.
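
As a concrete illustration of the first scenario, here is a minimal sketch of a model that expresses candidate operations with `LayerChoice` and an optional skip connection with `InputChoice`; the channel sizes are arbitrary placeholders:

```python
import torch.nn as nn
from nni.nas.pytorch.mutables import LayerChoice, InputChoice

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Candidate operations for one layer; the search decides which to keep.
        self.conv = LayerChoice([
            nn.Conv2d(16, 16, 3, padding=1),
            nn.Conv2d(16, 16, 5, padding=2),
        ])
        # Choose at most one of two candidate inputs (e.g. a skip connection).
        self.skip = InputChoice(n_candidates=2, n_chosen=1)

    def forward(self, x):
        out = self.conv(x)
        return self.skip([out, x])
```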

The NNI proposed API is [here](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/nas/pytorch), and [here](https://github.com/microsoft/nni/tree/master/examples/nas/naive) is an example NAS implementation based on the proposed interface.
[Here](./NasGuide.md) is the user guide for getting started with NAS on NNI.

## Reference and Feedback

[1]: https://arxiv.org/abs/1802.03268
[2]: https://arxiv.org/abs/1707.07012
[3]: https://arxiv.org/abs/1806.09055
[4]: https://arxiv.org/abs/1806.10282
[5]: https://arxiv.org/abs/1703.01041

## **Reference and Feedback**
* To [report a bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md) for this feature in GitHub;
* To [file a feature or improvement request](https://github.com/microsoft/nni/issues/new?template=enhancement.md) for this feature in GitHub;
* To know more about [Feature Engineering with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/FeatureEngineering/Overview.md);
* To know more about [Model Compression with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/Compressor/Overview.md);
* To know more about [Hyperparameter Tuning with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/Tuner/BuiltinTuner.md);
* To [file a feature or improvement request](https://github.com/microsoft/nni/issues/new?template=enhancement.md) for this feature in GitHub.
6 changes: 0 additions & 6 deletions docs/en_US/NAS/SPOS.md
@@ -93,17 +93,11 @@ By default, it will use `architecture_final.json`. This architecture is provided
.. autoclass:: nni.nas.pytorch.spos.SPOSEvolution
    :members:

    .. automethod:: __init__

.. autoclass:: nni.nas.pytorch.spos.SPOSSupernetTrainer
    :members:

    .. automethod:: __init__

.. autoclass:: nni.nas.pytorch.spos.SPOSSupernetTrainingMutator
    :members:

    .. automethod:: __init__
```

## Known Limitations