# 🚀 VGG Example

This example documents a toy VGG model (simplified from the VGG architecture), trained on the GPU. The dataset used is CIFAR-10.

We want to train the model to discriminate between 2 labels (rather than the 10 available in the CIFAR-10 dataset): label 8 (ships) or label 5 (dogs).

Here is a subset of the images we find for label 8 (ships) vs label 5 (dogs).
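
In CIFAR-10 the class indices are fixed, so restricting the dataset to these two classes amounts to filtering on the label value. Here is a minimal Swift sketch of that filtering; the `Sample` type is a hypothetical stand-in for illustration, not the actual GrAIExamples data structure:

```swift
// Minimal sketch of filtering CIFAR-10 down to 2 classes.
// `Sample` is a hypothetical type for illustration, not the
// actual GrAIExamples data structure.
struct Sample {
    let pixels: [UInt8] // 32x32 RGB image, 3072 bytes
    let label: Int      // CIFAR-10 class index, in 0...9
}

let shipLabel = 8 // "ship" in CIFAR-10
let dogLabel = 5  // "dog" in CIFAR-10

func filterTwoClasses(_ dataset: [Sample]) -> [Sample] {
    dataset.filter { $0.label == shipLabel || $0.label == dogLabel }
}
```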

## Setup

This example has some Python dependencies. In order to run the example, we first have to set up the environment:

```bash
conda create --name graiexamples python=3.9
conda activate graiexamples
cd Tests/GrAIExamples/Base
pip install -e .
```

Now, let us run the tests from Xcode or from the command line (here with compiler optimization):

```bash
swift test -c release --filter GrAIExamples
```

It is finally possible to clean the environment 🌍

```bash
conda deactivate
conda env remove --name graiexamples
```

## Steps

  1. Dump the training and testing datasets.
  2. Dump some images corresponding to the 2 labels we want to discriminate between.
  3. Evaluate a random model on the testing dataset: observe poor performance.
  4. Train a model on the training dataset.
  5. Evaluate the trained model on the testing dataset: observe improved performance (steps 3 to 5 are sketched below).
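
As a rough illustration of steps 3 to 5, here is a minimal Swift sketch of the evaluate / train / evaluate pattern. It reuses the hypothetical `Sample` type from the sketch above; `Classifier` and its methods are also assumptions for illustration, not the actual GrAIdient API:

```swift
// Hypothetical sketch of the evaluate -> train -> evaluate pattern.
// None of these names come from the actual GrAIdient / GrAIExamples
// API; they are assumptions for illustration.
protocol Classifier {
    func predict(_ pixels: [UInt8]) -> Int
    mutating func trainStep(on batch: ArraySlice<Sample>)
}

// Fraction of samples whose predicted label matches the ground truth.
func accuracy(of model: Classifier, on samples: [Sample]) -> Double {
    let nbCorrect = samples.filter { model.predict($0.pixels) == $0.label }.count
    return Double(nbCorrect) / Double(samples.count)
}

func run(model: inout Classifier, trainSet: [Sample], testSet: [Sample]) {
    // 3. A randomly initialized model should sit near chance (~50%).
    print("random model accuracy:", accuracy(of: model, on: testSet))

    // 4. Train on the training dataset, one mini-batch at a time.
    let batchSize = 64
    for start in stride(from: 0, to: trainSet.count, by: batchSize) {
        let end = min(start + batchSize, trainSet.count)
        model.trainStep(on: trainSet[start..<end])
    }

    // 5. The trained model should now do clearly better than chance.
    print("trained model accuracy:", accuracy(of: model, on: testSet))
}
```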

## Benchmarks

To benchmark the time performance of the VGG model, look at VGGBenchmark and rename _test_TrainVGG and _test_EvalVGG to test_TrainVGG and test_EvalVGG.
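
The leading underscore keeps these benchmarks disabled by default: XCTest only discovers instance methods whose names begin with test. A minimal compiling sketch of that convention (the class name and empty bodies are placeholders, not the actual VGGBenchmark code):

```swift
import XCTest

// Sketch of the naming convention used to enable / disable benchmarks.
// The class name and empty bodies are placeholders, not the real
// VGGBenchmark implementation.
final class VGGBenchmarkSketch: XCTestCase {
    // Skipped by XCTest: the name does not start with "test".
    func _test_TrainVGG() throws {}

    // Discovered and run by `swift test`.
    func test_EvalVGG() throws {}
}
```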

The test test_TrainVGG measures the time spent training the VGG model for 20 steps.

The test test_EvalVGG measures the time spent running the VGG model in inference for 20 steps.

Note that for both tests, the input data is random, generated once and then kept fixed.
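
Both benchmarks come down to timing a fixed number of steps. Below is a minimal sketch of such a measurement using only Foundation; the `timeSteps` helper and its dummy workload are assumptions for illustration, not the actual benchmark code:

```swift
import Foundation

// Minimal sketch: time `nbSteps` executions of a workload closure.
// `timeSteps` is a hypothetical helper, not the actual benchmark code;
// the real tests would run one training or one inference step here.
func timeSteps(_ nbSteps: Int = 20, _ step: () -> Void) -> Double {
    let start = Date()
    for _ in 0..<nbSteps {
        step()
    }
    return Date().timeIntervalSince(start)
}

// Example usage with a dummy CPU-bound workload.
let elapsed = timeSteps {
    _ = (0..<1_000_000).reduce(0, &+)
}
print("elapsed for 20 steps: \(elapsed) s")
```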