
Commit c8c9fa1

Author: Qian Liu
Message: add loss
Parent: 7efa1b2

File tree

README.md
train.py

2 files changed: +5, -3 lines

README.md

Lines changed: 2 additions & 1 deletion
@@ -42,7 +42,7 @@ Now we have supported:
 - Running the code on Google Colab with Free GPU. Check [Here](https://github.com/layumi/Person_reID_baseline_pytorch/tree/master/colab) (Thanks to @ronghao233)
 - [DG-Market](https://github.com/NVlabs/DG-Net#dg-market) (10x Large Synethic Dataset from Market **CVPR 2019 Oral**)
 - [Swin Transformer](https://github.com/microsoft/Swin-Transformer) / [EfficientNet](https://github.com/lukemelas/EfficientNet-PyTorch) / [HRNet](https://github.com/HRNet)
-- Circle Loss (**CVPR 2020 Oral**), Triplet Loss, Contrastive Loss, Sphere Loss, Lifted Loss and Instance Loss
+- Circle Loss, Triplet Loss, Contrastive Loss, Sphere Loss, Lifted Loss, Arcface, Cosface and Instance Loss
 - Float16 to save GPU memory based on [apex](https://github.com/NVIDIA/apex)
 - Part-based Convolutional Baseline(PCB)
 - Multiple Query Evaluation

@@ -203,6 +203,7 @@ I do not optimize the hyper-parameters. You are free to tune them for better per
 | CE + Sphere [[Paper]](https://openaccess.thecvf.com/content_cvpr_2017/papers/Liu_SphereFace_Deep_Hypersphere_CVPR_2017_paper.pdf) | 92.01% | 79.39% | `python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_sphere100 --sphere --total 100; python test.py --name warm5_s1_b32_lr8_p0.5_sphere100` |
 | CE + Triplet [[Paper]](https://arxiv.org/pdf/1703.07737) | 92.40% | 79.71% | `python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_triplet100 --triplet --total 100; python test.py --name warm5_s1_b32_lr8_p0.5_triplet100` |
 | CE + Lifted [[Paper]](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Song_Deep_Metric_Learning_CVPR_2016_paper.pdf)| 91.78% | 79.77% | `python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_lifted100 --lifted --total 100; python test.py --name warm5_s1_b32_lr8_p0.5_lifted100` |
+| CE + Instance [[Paper]](https://zdzheng.xyz/files/TOMM20.pdf) | 92.49% | 80.51% | `python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_instance100_gamma32 --instance --ins_gamma 32 --total 100 ; python test.py --name warm5_s1_b32_lr8_p0.5_instance100_gamma32`|
 | CE + Contrast [[Paper]](https://zdzheng.xyz/files/TOMM18.pdf) | 92.28% | 81.42% | `python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_contrast100 --contrast --total 100; python test.py --name warm5_s1_b32_lr8_p0.5_contrast100`|
 | CE + Circle [[Paper]](https://arxiv.org/abs/2002.10857) | 92.46% | 81.70% | `python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_circle100 --circle --total 100 ; python test.py --name warm5_s1_b32_lr8_p0.5_circle100` |
 | CE + Contrast + Sphere | 92.79% | 82.02% | `python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_cs100 --contrast --sphere --total 100; python test.py --name warm5_s1_b32_lr8_p0.5_cs100`|
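
The "CE + X" rows above all follow the same recipe: an identity cross-entropy term on the classifier logits plus one or more metric losses computed on the embedding. Below is a minimal sketch of the "CE + Contrast + Sphere" combination, reusing the same pytorch-metric-learning constructors that appear in the train.py diff further down; the tensor shapes and the 751 identity classes (Market-1501) are illustrative assumptions, not values taken from this commit.

```python
# Sketch only, not the repository's train.py: combine CE on identity logits
# with metric losses on the embedding ff, as the README table's "CE + X" rows do.
import torch
import torch.nn as nn
from pytorch_metric_learning import losses

batch_size, num_classes, embed_dim = 32, 751, 512            # assumed shapes
logits = torch.randn(batch_size, num_classes, requires_grad=True)  # classifier output
ff = torch.randn(batch_size, embed_dim, requires_grad=True)        # embedding features
labels = torch.randint(0, num_classes, (batch_size,))

criterion = nn.CrossEntropyLoss()
criterion_contrast = losses.ContrastiveLoss(pos_margin=0, neg_margin=1)
criterion_sphere = losses.SphereFaceLoss(num_classes=num_classes,
                                         embedding_size=embed_dim, margin=4)

loss = criterion(logits, labels)                          # CE on identity logits
loss = loss + criterion_contrast(ff, labels)              # "+ Contrast"
loss = loss + criterion_sphere(ff, labels) / batch_size   # "+ Sphere", scaled per batch
loss.backward()
```

The per-batch division on the sphere term mirrors train.py; this commit applies the same scaling to the new instance term.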

train.py

Lines changed: 3 additions & 2 deletions
@@ -63,6 +63,7 @@
 parser.add_argument('--cosface', action='store_true', help='use CosFace loss' )
 parser.add_argument('--contrast', action='store_true', help='use contrast loss' )
 parser.add_argument('--instance', action='store_true', help='use instance loss' )
+parser.add_argument('--ins_gamma', default=32, type=int, help='gamma for instance loss')
 parser.add_argument('--triplet', action='store_true', help='use triplet loss' )
 parser.add_argument('--lifted', action='store_true', help='use lifted loss' )
 parser.add_argument('--sphere', action='store_true', help='use sphere loss' )

@@ -216,7 +217,7 @@ def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
     if opt.contrast:
         criterion_contrast = losses.ContrastiveLoss(pos_margin=0, neg_margin=1)
     if opt.instance:
-        criterion_instance = InstanceLoss(gamma=8)
+        criterion_instance = InstanceLoss(gamma = opt.ins_gamma)
     if opt.sphere:
         criterion_sphere = losses.SphereFaceLoss(num_classes=opt.nclasses, embedding_size=512, margin=4)
     for epoch in range(num_epochs):

@@ -283,7 +284,7 @@ def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
                     if opt.contrast:
                         loss += criterion_contrast(ff, labels) #/now_batch_size
                     if opt.instance:
-                        loss += criterion_instance(ff, labels)
+                        loss += criterion_instance(ff, labels) /now_batch_size
                     if opt.sphere:
                         loss += criterion_sphere(ff, labels)/now_batch_size
                 elif opt.PCB: # PCB
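
Two behavioral changes land in train.py: the InstanceLoss gamma now comes from the new --ins_gamma flag (default 32, replacing the hard-coded 8), and the instance term is divided by now_batch_size, matching the scaling already applied to the sphere term. Assuming gamma is applied as a multiplicative scale on similarity logits before a softmax (a common convention; the actual InstanceLoss implementation is not part of this diff), the snippet below illustrates why raising it from 8 to 32 sharpens the resulting distribution.

```python
# Hypothetical illustration only: treating gamma as an inverse-temperature
# scale on cosine similarities. This is an assumption about what --ins_gamma
# controls, not the repository's InstanceLoss code.
import torch
import torch.nn.functional as F

sims = torch.tensor([0.9, 0.7, 0.2])        # cosine similarities to three candidates
for gamma in (8, 32):                       # previous hard-coded value vs. new default
    probs = F.softmax(gamma * sims, dim=0)  # larger gamma -> sharper distribution
    print(gamma, [round(p, 4) for p in probs.tolist()])
# gamma=8  -> roughly [0.83, 0.17, 0.003]
# gamma=32 -> roughly [0.998, 0.002, 0.0]
```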
