update CVPR 2021 and AAAI 2021 codes
zkcys001 committed Mar 20, 2021
1 parent 8ef892c commit 49e850f
Showing 16 changed files with 318 additions and 504 deletions.
52 changes: 37 additions & 15 deletions README.md
@@ -30,7 +30,8 @@ repository will be released upon the paper published.
| [SpCL](https://github.com/open-mmlab/OpenUnReID/) NIPS'2020 submission| ResNet50 | imagenet | 78.2 | 90.5 | 96.6 | 97.8 | ~3h |
| [strong_baseline](https://github.com/open-mmlab/OpenUnReID/) | ResNet50 | imagenet | 75.6 | 90.9 | 96.6 | 97.8 | ~3h |
| [Our stronger_baseline](https://github.com/JDAI-CV/fast-reid) | ResNet50 | DukeMTMC | 77.4 | 91.0 | 96.4 | 97.7 | ~3h |
| [Our stronger_baseline + memory bank] | ResNet50 | DukeMTMC | 79.4 | 92.5 | 97.5 | 98.5 | ~5h / 60 epochs |
| [Our stronger_baseline + GLT (Kmeans)] | ResNet50 | DukeMTMC | 79.5 | 92.7 | 96.7 | 97.9 | ~35h |
| [Our stronger_baseline + uncertainty (DBSCAN)] | ResNet50 | DukeMTMC | 80.5 | 93.0 | 97.3 | 98.2 | ~5h |

#### Market-1501 -> DukeMTMC-reID

@@ -42,7 +43,6 @@ repository will be released upon the paper published.
| [SpCL](https://github.com/open-mmlab/OpenUnReID/) NIPS'2020 submission | ResNet50 | imagenet | 70.4 | 83.8 | 91.2 | 93.4 | ~3h |
| [strong_baseline](https://github.com/open-mmlab/OpenUnReID/) | ResNet50 | imagenet | 60.4 | 75.9 | 86.2 | 89.8 | ~3h |
| [Our stronger_baseline](https://github.com/JDAI-CV/fast-reid) | ResNet50 | Market1501 | 66.7 | 80.0 | 89.2 | 92.2 | ~3h |
| [Our stronger_baseline + memory bank](https://github.com/JDAI-CV/fast-reid) | ResNet50 | Market1501 | 69.7 | 82.5 | 90.5 | 92.9 | ~5h / 60 epochs |

## Requirements

@@ -51,23 +51,21 @@ repository will be released upon the paper published.
```shell
git clone https://github.com/zkcys001/UDAStrongBaseline/
cd UDAStrongBaseline
python setup.py install
pip install -r requirements.txt
pip install faiss-gpu==1.6.3
```
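If you want to confirm that faiss-gpu installed correctly, a quick check like the following can help (a hypothetical snippet, not part of the repository):

```python
# Build a tiny exact-L2 index on GPU 0 to verify the faiss-gpu install.
import faiss
import numpy as np

d = 128                                        # feature dimension (arbitrary)
xb = np.random.random((1000, d)).astype('float32')

index = faiss.IndexFlatL2(d)                   # exact L2 index on CPU
res = faiss.StandardGpuResources()             # GPU resources (faiss-gpu only)
gpu_index = faiss.index_cpu_to_gpu(res, 0, index)
gpu_index.add(xb)
D, I = gpu_index.search(xb[:5], 4)             # distances/ids of the 4 nearest neighbours
print(I.shape)                                 # expected: (5, 4)
```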

### Prepare Datasets

Download the person datasets [DukeMTMC-reID](https://arxiv.org/abs/1609.01775), [Market-1501](https://drive.google.com/file/d/0B8-rUzbwVRk0c054eEozWG9COHM/view), and [MSMT17](https://arxiv.org/abs/1711.08565), then unzip them under a directory laid out like:
```
./data
├── dukemtmc
│   └── DukeMTMC-reID
├── market1501
│   └── Market-1501-v15.09.15
├── msmt17
│   └── MSMT17_V1
```
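A minimal check that the layout above is in place (an assumption-based sketch; adjust `root` if your data lives elsewhere):

```python
# Verify that each dataset directory exists under ./data.
import os

expected = {
    'dukemtmc': 'DukeMTMC-reID',
    'market1501': 'Market-1501-v15.09.15',
    'msmt17': 'MSMT17_V1',
}
root = './data'
for parent, child in expected.items():
    path = os.path.join(root, parent, child)
    print(f"{path}: {'found' if os.path.isdir(path) else 'MISSING'}")
```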

@@ -83,25 +81,48 @@ ImageNet-pretrained models for **ResNet-50** will be automatically downloaded in

We utilize 4 GPUs for training. **Note that**

### Stronger Baseline:

#### Stage I: Pretrain Model on Source Domain
To train the baseline on the source domain, run the command for your source dataset:
```shell
sh scripts/pretrain_market1501.sh
sh scripts/pretrain_dukemtmc.sh
```

#### Stage II: End-to-end training with clustering

Utilizing the baseline or uncertainty model(s) with the DBSCAN clustering algorithm:

```shell
sh scripts/dbscan_baseline_market2duke.sh
sh scripts/dbscan_baseline_duke2market.sh
```
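For orientation, the clustering step inside these scripts assigns pseudo-identities on the target domain. The sketch below uses scikit-learn DBSCAN on a plain cosine-distance matrix; this is an illustrative assumption, not the repository's exact pipeline (which may cluster a re-ranked distance matrix instead):

```python
# Rough sketch of DBSCAN pseudo-labelling on target-domain features (toy data).
import numpy as np
from sklearn.cluster import DBSCAN

feats = np.random.randn(2000, 2048).astype('float32')       # target-domain features (toy)
feats /= np.linalg.norm(feats, axis=1, keepdims=True)        # L2-normalise

dist = np.clip(1.0 - feats @ feats.T, 0.0, None)             # cosine distance, clipped >= 0
pseudo = DBSCAN(eps=0.6, min_samples=4, metric='precomputed').fit_predict(dist)

n_clusters = len(set(pseudo)) - (1 if -1 in pseudo else 0)
print(n_clusters, 'clusters;', int((pseudo == -1).sum()), 'outliers (label -1, usually discarded)')
```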

### GLT (group-aware label transfer, CVPR 2021):

#### Stage I: Pretrain Model on Source Domain
To train the GLT model in the source domain, run this command:
```shell
sh scripts/pretrain_market1501.sh
```

#### Stage II: End-to-end training with clustering
Utilizing the GLT model with the K-means clustering algorithm:
```shell
sh scripts/GLT_kmeans_duke2market.sh
```
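As a rough sketch of the K-means variant of that step (the cluster count of 500 below is an illustrative assumption, not the script's setting):

```python
# K-means pseudo-labelling: every target sample receives a cluster id as its pseudo-identity.
import numpy as np
from sklearn.cluster import KMeans

feats = np.random.randn(2000, 2048).astype('float32')        # target-domain features (toy)
pseudo = KMeans(n_clusters=500, n_init=10, random_state=0).fit_predict(feats)
print(pseudo.min(), pseudo.max())                            # pseudo-identities in [0, 499]
```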

### Uncertainty (AAAI 2021):

#### Stage I: Pretrain Model on Source Domain

To train the uncertainty model in the source domain, run this command:
```shell
sh pretrain_uncertainty_market1501.sh
```

#### Stage II: End-to-end training with clustering
Utilizing the uncertainty model with the DBSCAN clustering algorithm:
```shell
sh scripts/dbscan_uncertainty_duke2market.sh
```

## Acknowledgement

@@ -119,6 +140,7 @@ If you find this code useful for your research, please use the following BibTeX
journal={AAAI},
year={2021}
}
@article{zheng2021labeltransfer,
title={Group-aware Label Transfer for Domain Adaptive Person Re-identification},
author={Zheng, Kecheng and Liu, Wu and Mei, Tao and Luo, Jiebo and Zha, Zheng-Jun},
5 changes: 4 additions & 1 deletion UDAsbs/evaluation_metrics/ranking.py
@@ -109,7 +109,10 @@ def mean_ap(distmat, query_ids=None, gallery_ids=None,
        y_true = matches[i, valid]
        y_score = -distmat[i][indices[i]][valid]
        if not np.any(y_true): continue
        try:
            aps.append(average_precision_score(y_true, y_score))
        except:
            import ipdb; ipdb.set_trace()
    if len(aps) == 0:
        raise RuntimeError("No valid query")
    return np.mean(aps)
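For non-interactive runs, a hedged alternative to the ipdb fallback above is to warn and skip the offending query; the helper below is hypothetical, not the committed code:

```python
# Skip queries whose AP cannot be computed and warn, instead of opening a debugger.
import warnings
import numpy as np
from sklearn.metrics import average_precision_score

def safe_average_precision(y_true, y_score, query_idx):
    try:
        return average_precision_score(y_true, y_score)
    except ValueError as err:                      # e.g. degenerate inputs
        warnings.warn(f"skipping query {query_idx}: {err}")
        return None

# toy usage
aps = [safe_average_precision(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.4]), 0)]
aps = [ap for ap in aps if ap is not None]
print(np.mean(aps))
```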
181 changes: 1 addition & 180 deletions UDAsbs/loss/triplet.py
@@ -1,180 +1 @@
from __future__ import absolute_import

import torch
from torch import nn
import torch.nn.functional as F


def euclidean_dist(x, y):
    m, n = x.size(0), y.size(0)
    xx = torch.pow(x, 2).sum(1, keepdim=True).expand(m, n)
    yy = torch.pow(y, 2).sum(1, keepdim=True).expand(n, m).t()
    dist = xx + yy
    dist.addmm_(1, -2, x, y.t())
    dist = dist.clamp(min=1e-12).sqrt()  # for numerical stability
    return dist


def cosine_dist(x, y):
    bs1, bs2 = x.size(0), y.size(0)
    frac_up = torch.matmul(x, y.transpose(0, 1))
    frac_down = (torch.sqrt(torch.sum(torch.pow(x, 2), 1))).view(bs1, 1).repeat(1, bs2) * \
                (torch.sqrt(torch.sum(torch.pow(y, 2), 1))).view(1, bs2).repeat(bs1, 1)
    cosine = frac_up / frac_down
    return 1 - cosine


from functools import reduce


def _batch_hard(mat_distance, mat_similarity, indice=False):
    # mat_similarity=reduce(lambda x, y: x * y, mat_similaritys)
    # mat_similarity=mat_similaritys[0]*mat_similaritys[1]*mat_similaritys[2]*mat_similaritys[3]
    sorted_mat_distance, positive_indices = torch.sort(mat_distance + (-9999999.) * (1 - mat_similarity), dim=1,
                                                       descending=True)
    hard_p = sorted_mat_distance[:, 0]
    hard_p_indice = positive_indices[:, 0]
    sorted_mat_distance, negative_indices = torch.sort(mat_distance + (9999999.) * (mat_similarity), dim=1,
                                                       descending=False)
    hard_n = sorted_mat_distance[:, 0]
    hard_n_indice = negative_indices[:, 0]
    if (indice):
        return hard_p, hard_n, hard_p_indice, hard_n_indice
    return hard_p, hard_n


class TripletLoss(nn.Module):
    '''
    Compute Triplet loss augmented with Batch Hard
    Details can be seen in 'In defense of the Triplet Loss for Person Re-Identification'
    '''

    def __init__(self, margin, normalize_feature=False):
        super(TripletLoss, self).__init__()
        self.margin = margin
        self.normalize_feature = normalize_feature
        self.margin_loss = nn.MarginRankingLoss(margin=margin).cuda()

    def forward(self, emb, label):
        if self.normalize_feature:
            # equal to cosine similarity
            emb = F.normalize(emb)
        mat_dist = euclidean_dist(emb, emb)
        # mat_dist = cosine_dist(emb, emb)
        assert mat_dist.size(0) == mat_dist.size(1)
        N = mat_dist.size(0)
        mat_sim = label.expand(N, N).eq(label.expand(N, N).t()).float()

        dist_ap, dist_an = _batch_hard(mat_dist, mat_sim)
        assert dist_an.size(0) == dist_ap.size(0)
        y = torch.ones_like(dist_ap)
        loss = self.margin_loss(dist_an, dist_ap, y)
        prec = (dist_an.data > dist_ap.data).sum() * 1. / y.size(0)
        return loss, prec


def logsumexp(value, weight=1, dim=None, keepdim=False):
    """Numerically stable implementation of the operation
    value.exp().sum(dim, keepdim).log()
    """
    # TODO: torch.max(value, dim=None) threw an error at time of writing
    if dim is not None:
        m, _ = torch.max(value, dim=dim, keepdim=True)
        value0 = value - m
        if keepdim is False:
            m = m.squeeze(dim)
        return m + torch.log(torch.sum(weight * torch.exp(value0),
                                       dim=dim, keepdim=keepdim))
    else:
        m = torch.max(value)
        sum_exp = torch.sum(weight * torch.exp(value - m))

        return m + torch.log(sum_exp)


class SoftTripletLoss_uncer(nn.Module):

    def __init__(self, margin=None, normalize_feature=False, uncer_mode=0):
        super(SoftTripletLoss_uncer, self).__init__()
        self.margin = margin
        self.normalize_feature = normalize_feature
        self.uncer_mode = uncer_mode

    def forward(self, emb1, emb2, label, uncertainty):
        if self.normalize_feature:
            # equal to cosine similarity
            emb1 = F.normalize(emb1)
            emb2 = F.normalize(emb2)

        mat_dist = euclidean_dist(emb1, emb1)
        assert mat_dist.size(0) == mat_dist.size(1)
        N = mat_dist.size(0)

        # mat_sims=[]
        # for label in labels:
        #     mat_sims.append(label.expand(N, N).eq(label.expand(N, N).t()).float())
        # mat_sim=reduce(lambda x, y: x + y, mat_sims)
        mat_sim = label.expand(N, N).eq(label.expand(N, N).t()).float()
        dist_ap, dist_an, ap_idx, an_idx = _batch_hard(mat_dist, mat_sim, indice=True)
        assert dist_an.size(0) == dist_ap.size(0)
        triple_dist = torch.stack((dist_ap, dist_an), dim=1)
        triple_dist = F.log_softmax(triple_dist, dim=1)

        # mat_dist_ref = euclidean_dist(emb2, emb2)
        # dist_ap_ref = torch.gather(mat_dist_ref, 1, ap_idx.view(N,1).expand(N,N))[:,0]
        # dist_an_ref = torch.gather(mat_dist_ref, 1, an_idx.view(N,1).expand(N,N))[:,0]
        # triple_dist_ref = torch.stack((dist_ap_ref, dist_an_ref), dim=1)
        # triple_dist_ref = F.softmax(triple_dist_ref, dim=1).detach()

        # torch.gather
        if self.uncer_mode == 0:
            uncer_ap_ref = torch.gather(uncertainty, 0, ap_idx) + uncertainty
            uncer_an_ref = torch.gather(uncertainty, 0, an_idx) + uncertainty
        elif self.uncer_mode == 1:
            uncer_ap_ref = max(torch.gather(uncertainty, 0, ap_idx), uncertainty)
            uncer_an_ref = max(torch.gather(uncertainty, 0, an_idx), uncertainty)
        else:
            uncer_ap_ref = min(torch.gather(uncertainty, 0, ap_idx), uncertainty)
            uncer_an_ref = min(torch.gather(uncertainty, 0, an_idx), uncertainty)
        uncer = torch.stack((uncer_ap_ref, uncer_an_ref), dim=1).detach() / 2.0

        loss = (-uncer * triple_dist).mean(0).sum() - triple_dist[:, 1].mean()
        return loss


class SoftTripletLoss(nn.Module):

    def __init__(self, margin=None, normalize_feature=False):
        super(SoftTripletLoss, self).__init__()
        self.margin = margin
        self.normalize_feature = normalize_feature

    def forward(self, emb1, emb2, label):
        if self.normalize_feature:
            # equal to cosine similarity
            emb1 = F.normalize(emb1)
            emb2 = F.normalize(emb2)

        mat_dist = euclidean_dist(emb1, emb1)
        assert mat_dist.size(0) == mat_dist.size(1)
        N = mat_dist.size(0)

        mat_sim = label.expand(N, N).eq(label.expand(N, N).t()).float()

        dist_ap, dist_an, ap_idx, an_idx = _batch_hard(mat_dist, mat_sim, indice=True)
        assert dist_an.size(0) == dist_ap.size(0)
        triple_dist = torch.stack((dist_ap, dist_an), dim=1)
        triple_dist = F.log_softmax(triple_dist, dim=1)
        if (self.margin is not None):
            loss = (- self.margin * triple_dist[:, 0] - (1 - self.margin) * triple_dist[:, 1]).mean()
            return loss

        mat_dist_ref = euclidean_dist(emb2, emb2)
        dist_ap_ref = torch.gather(mat_dist_ref, 1, ap_idx.view(N, 1).expand(N, N))[:, 0]
        dist_an_ref = torch.gather(mat_dist_ref, 1, an_idx.view(N, 1).expand(N, N))[:, 0]
        triple_dist_ref = torch.stack((dist_ap_ref, dist_an_ref), dim=1)
        triple_dist_ref = F.softmax(triple_dist_ref, dim=1).detach()

        loss = (- triple_dist_ref * triple_dist).mean(0).sum()
        return loss
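For reference, a minimal usage sketch of the loss classes above, assuming they are importable from `UDAsbs.loss.triplet` and that a CUDA device is available (TripletLoss builds its MarginRankingLoss with `.cuda()`); the batch below is toy data:

```python
import torch
from UDAsbs.loss.triplet import TripletLoss, SoftTripletLoss

feats = torch.randn(8, 2048, device='cuda', requires_grad=True)    # toy embeddings
ema_feats = torch.randn(8, 2048, device='cuda')                    # e.g. mean-teacher features
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3], device='cuda')     # 4 identities, 2 images each

hard_loss, prec = TripletLoss(margin=0.3)(feats, labels)            # batch-hard margin loss + precision
soft_loss = SoftTripletLoss(margin=None)(feats, ema_feats, labels)  # soft targets from ema_feats
(hard_loss + soft_loss).backward()
```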
6 changes: 4 additions & 2 deletions UDAsbs/models/__init__.py
@@ -1,10 +1,12 @@
from __future__ import absolute_import

from .resnet import *

from .resnet_multi import resnet50_multi, resnet50_multi_sbs

__factory = {
    'resnet50_sbs': resnet50,
    'resnet50_multi': resnet50_multi,
    'resnet50_multi_sbs': resnet50_multi_sbs
}
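A short sketch of how a name-keyed factory like this is typically consumed; the `create` helper below is illustrative, and the constructor arguments of the resnet variants are not shown in this diff:

```python
# Dispatch a backbone constructor by name from the factory dict.
from UDAsbs import models

def create(name, *args, **kwargs):
    if name not in models.__factory:
        raise KeyError(f"Unknown model: {name!r}; choices: {sorted(models.__factory)}")
    return models.__factory[name](*args, **kwargs)
```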



