[Person Re-Identification] Reproducing fast-reid (20210119, v1.0.0)

Reference code:

https://github.com/JDAI-CV/fast-reid/tree/v1.0.0

0. Environment

ubuntu16.04
cuda9.0
python3.6
torch==1.1.0
torchvision==0.3.0
Cython
yacs
tensorboard
future
termcolor
sklearn
tqdm
opencv-python==4.1.0.25
matplotlib
scikit-image
numpy==1.16.4
faiss-gpu==1.6.3
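
For convenience, the pinned packages above can be installed in one pass. A minimal sketch, assuming pip under Python 3.6 (the torch==1.1.0 / torchvision==0.3.0 wheels must be the cuda9.0 builds):

# Install the pinned dependencies listed above
pip install torch==1.1.0 torchvision==0.3.0
pip install Cython yacs tensorboard future termcolor sklearn tqdm \
    opencv-python==4.1.0.25 matplotlib scikit-image numpy==1.16.4 faiss-gpu==1.6.3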

Install apex (do not install it directly via pip):

git clone https://www.github.com/nvidia/apex
cd apex
# Alternative: build with CUDA/C++ extensions (requires a CUDA toolkit matching torch)
# python setup.py install --cuda_ext --cpp_ext
pip install -v --no-cache-dir ./
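
A quick import check to confirm the installation is usable (apex exposes mixed precision through apex.amp):

# Should print "apex OK" if apex installed correctly
python -c "from apex import amp; print('apex OK')"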

1. Preparing the Data

Reference: https://blog.csdn.net/qq_35975447/article/details/106664593

The dataset directory structure is as follows:

fast-reid
	datasets
		Market-1501-v15.09.15
			bounding_box_train
			bounding_box_test
			query
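
As a sanity check, the three Market-1501 folders should contain the dataset's standard image counts; a quick way to verify:

ls datasets/Market-1501-v15.09.15/bounding_box_train | wc -l   # expect 12936
ls datasets/Market-1501-v15.09.15/bounding_box_test | wc -l    # expect 19732
ls datasets/Market-1501-v15.09.15/query | wc -l                # expect 3368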

2. Modifications and Training

The modifications are the same as in https://blog.csdn.net/qq_35975447/article/details/112482765; once they are applied, training can be started directly:

CUDA_VISIBLE_DEVICES="0,1" python ./tools/train_net.py --config-file='./configs/Market1501/sbs_R101-ibn.yml'

3. Knowledge Distillation Training

The Market1501 dataset is used here; change the yaml configs to point at it, as sketched below:
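
A minimal sketch of the change, assuming the FastDistill configs default to a different dataset; fast-reid selects datasets through the DATASETS.NAMES / DATASETS.TESTS config keys:

DATASETS:
  NAMES: ("Market1501",)
  TESTS: ("Market1501",)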

1) Train the teacher model:

CUDA_VISIBLE_DEVICES='0,1' python ./projects/FastDistill/train_net.py --config-file ./projects/FastDistill/configs/sbs_r101ibn.yml  --num-gpus 2

This teacher uses no Non-local blocks, and the image size is only 128x256; its results are listed in the comparison table in 7) below.

2) Train r34 alone (the student baseline):

CUDA_VISIBLE_DEVICES='0' python ./projects/FastDistill/train_net.py --config-file ./projects/FastDistill/configs/sbs_r34.yml  --num-gpus 1

3) KD loss (JS divergence), with r101 as the teacher training the student:

CUDA_VISIBLE_DEVICES='0,1' python ./projects/FastDistill/train_net.py --config-file ./projects/FastDistill/configs/kd-sbs_r101ibn-sbs_r34.yml  --num-gpus 2

4) KD loss, with the student initialized from the r34 weights trained in 2) and r101 as the teacher; the trailing MODEL.WEIGHTS KEY VALUE pair overrides that config option from the command line:

CUDA_VISIBLE_DEVICES='0,1' python ./projects/FastDistill/train_net.py --config-file ./projects/FastDistill/configs/kd-sbs_r101ibn-sbs_r34.yml  --num-gpus 2 MODEL.WEIGHTS projects/FastDistill/logs/market1501/r34/model_best.pth

5) KD loss + overhaul distillation, with r101 as the teacher; MODEL.META_ARCHITECTURE DistillerOverhaul switches the meta-architecture, and the explicit --dist-url avoids a port clash with any other running distributed job:

CUDA_VISIBLE_DEVICES='0,1' python ./projects/FastDistill/train_net.py --config-file ./projects/FastDistill/configs/kd-sbs_r101ibn-sbs_r34.yml  --num-gpus 2 --dist-url 'tcp://127.0.0.1:49153' MODEL.META_ARCHITECTURE DistillerOverhaul

6) KD loss + overhaul distillation, with the student initialized from the pretrained r34 and r101 as the teacher:

CUDA_VISIBLE_DEVICES='0,1' python ./projects/FastDistill/train_net.py --config-file ./projects/FastDistill/configs/kd-sbs_r101ibn-sbs_r34.yml  --num-gpus 2 --dist-url 'tcp://127.0.0.1:49153' MODEL.WEIGHTS projects/FastDistill/logs/market1501/r34/model_best.pth MODEL.META_ARCHITECTURE DistillerOverhaul

7) Comparison on the Market1501 dataset:

Model                     Rank@1    mAP
R101_ibn (teacher)        95.52%    88.75%
R34 (student)             91.95%    79.60%
JS Div                    94.74%    86.85%
JS Div + R34              94.71%    86.60%
JS Div + Overhaul         94.80%    87.39%
JS Div + Overhaul + R34   95.19%    87.33%

Whether or not the pretrained r34 weights are used to initialize the student makes little difference; in practice, knowledge distillation does not require pretraining the student beforehand. The table also shows that distillation is clearly effective: plain JS-divergence distillation lifts the R34 student from 79.60% to 86.85% mAP and from 91.95% to 94.74% Rank@1.


Reposted from blog.csdn.net/qq_35975447/article/details/112803615