Commit 9e89693

Update README.md
1 parent 3d9458e

1 file changed: README.md (+9, -9)

@@ -41,7 +41,7 @@ Experimental results on few-shot learning datasets with ResNet-12 backbone (Same
| GCN | 68.20 | 84.64 | [Coming Soon]() |
| FEAT | **70.80** | **84.79** | [Coming Soon]() |

## Prerequisites

The following packages are required to run the scripts:

@@ -53,32 +53,32 @@ The following packages are required to run the scripts:

- Pre-trained weights: download the [pre-trained weights](https://drive.google.com/open?id=14Jn1t9JxH-CxjfWy4JmVpCxkC9cDqqfE) of the encoder if needed; the same weights are also available as a [zip file](https://drive.google.com/file/d/1XcUZMNTQ-79_2AkNG3E04zh6bDYnPAMY/view?usp=sharing).

## Dataset

### MiniImageNet Dataset

The MiniImageNet dataset is a subset of ImageNet with 100 classes and 600 examples per class. We follow the [previous setup](https://github.com/twitter/meta-learning-lstm) and use 64 classes as SEEN categories, with 16 and 20 classes as two sets of UNSEEN categories for model validation and evaluation, respectively.
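
To make the SEEN/UNSEEN protocol concrete, here is a minimal sketch of sampling one N-way K-shot task from a split. The class and image counts follow the numbers above; the function itself is illustrative, not the repository's sampler.

```python
import random

# Illustrative episodic sampler under the 64/16/20 class split described
# above; classes and images are integer ids standing in for real data.
def sample_task(classes, way=5, shot=1, query=15):
    task = {}
    for c in random.sample(classes, way):
        idx = random.sample(range(600), shot + query)  # disjoint support/query
        task[c] = {"support": idx[:shot], "query": idx[shot:]}
    return task

train_classes = list(range(64))  # the 64 SEEN categories
episode = sample_task(train_classes)
```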

### CUB Dataset

The [Caltech-UCSD Birds (CUB) 200-2011 dataset](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html) was originally designed for fine-grained classification. It contains 11,788 images of birds across 200 species. On CUB, we randomly sample 100 species as SEEN classes, and two further sets of 50 species each serve as the UNSEEN validation and evaluation sets. We crop all images with the given bounding boxes before training. We only test CUB with the ConvNet backbone in our work.
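
As a concrete illustration of the cropping step, here is a minimal sketch assuming CUB's standard metadata layout (`images.txt` maps an image id to a relative path; `bounding_boxes.txt` stores `id x y width height`). The paths and file handling are assumptions, not this repository's preprocessing code.

```python
import os
from PIL import Image

# Sketch of cropping CUB images to their bounding boxes before training.
# Assumes the standard CUB-200-2011 metadata files; paths are illustrative.
def crop_cub_images(root="CUB_200_2011", out_dir="cub_cropped"):
    with open(os.path.join(root, "images.txt")) as f:
        id_to_path = dict(line.split() for line in f if line.strip())
    with open(os.path.join(root, "bounding_boxes.txt")) as f:
        for line in f:
            if not line.strip():
                continue
            img_id, x, y, w, h = line.split()
            x, y, w, h = (float(v) for v in (x, y, w, h))
            img = Image.open(os.path.join(root, "images", id_to_path[img_id]))
            dst = os.path.join(out_dir, id_to_path[img_id])
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            img.crop((x, y, x + w, y + h)).save(dst)  # (left, top, right, bottom)
```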

### TieredImageNet Dataset

[TieredImageNet](https://github.com/renmengye/few-shot-ssl-public) is a large-scale dataset with more categories: it contains 351, 97, and 160 categories for model training, validation, and evaluation, respectively. The dataset can also be downloaded from [here](https://github.com/kjunelee/MetaOptNet). We only test TieredImageNet with the ResNet backbone in our work.

Check [this](https://github.com/Sha-Lab/FEAT/blob/master/data/README.md) for details of data downloading and preprocessing.

## Code Structures

To reproduce our experiments with FEAT, please use **train_fsl.py**. The code has four parts (sketched as a directory tree after the list):
- `model`: The main files of the code, including the few-shot learning trainer, the dataloader, the network architectures, and baseline and comparison models.
- `data`: Images and splits for the datasets.
- `saves`: The pre-trained weights of different networks.
- `checkpoints`: Where trained models are saved.
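
Put together, the layout looks roughly like this (a sketch showing only the parts named above):

```
FEAT/
├── train_fsl.py    # entry point for training and evaluation
├── model/          # trainer, dataloader, architectures, baselines
├── data/           # images and split files
├── saves/          # pre-trained weights
└── checkpoints/    # trained models are written here
```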

## Model Training and Evaluation

Please use **train_fsl.py** and follow the instructions below. FEAT meta-learns the embedding adaptation process so that all training instance embeddings in a task are adapted, based on their contextual task information, using a Transformer. The script automatically evaluates the model on the meta-test set with 10,000 tasks after the given number of epochs.
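
For intuition only, here is a conceptual sketch of the embedding-adaptation idea, not the repository's implementation: the instance embeddings of a task attend to one another through a Transformer layer, and class prototypes are then computed from the adapted embeddings. The shapes and module choices are assumptions.

```python
import torch
import torch.nn as nn

# Conceptual sketch of set-to-set embedding adaptation (not the repo's code):
# support embeddings attend to each other, then the adapted embeddings are
# averaged per class into prototypes. way/shot/dim are illustrative.
way, shot, dim = 5, 1, 64
support = torch.randn(way * shot, dim)  # backbone embeddings for one task
adapt = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
adapted = adapt(support.unsqueeze(0)).squeeze(0)       # task-conditioned embeddings
prototypes = adapted.view(way, shot, dim).mean(dim=1)  # one prototype per class
```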

## Arguments

train_fsl.py takes the following command-line options (details are in `model/utils.py`):

**Task Related Arguments**
@@ -149,7 +149,7 @@ The train_fsl.py takes the following command line options (details are in the `m

Running the command without arguments trains the models with the default hyper-parameter values; loss changes are recorded in a TensorBoard file.
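
To inspect the recorded loss curves, point TensorBoard at the directory where the trainer writes its event files (the directory name below is an assumption; check the trainer's output path):

```
tensorboard --logdir ./checkpoints
```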

## Training scripts for FEAT

For example, to train the 1-shot/5-shot 5-way FEAT model with the ConvNet backbone on MiniImageNet:
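
The command itself is cut off in this hunk. As a representative sketch, assuming `train_fsl.py` exposes flags such as `--model_class`, `--backbone_class`, `--dataset`, `--way`, and `--shot` (see `model/utils.py` for the authoritative names and defaults):

```
python train_fsl.py --model_class FEAT --backbone_class ConvNet --dataset MiniImageNet --way 5 --shot 1 --query 15
```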