PyTorch multi-GPU training
In the end I never fully got PyTorch multi-GPU training to work: the forward pass is usable, but backpropagation errors out. It did run successfully at one point, though I'm not sure exactly how. These notes reflect the current state; I'll come back and fill in more when I get the chance.
PyTorch supports multi-GPU training, and the official documentation (PyTorch 0.3.0) gives some guidance on data parallelism. Unfortunately the write-up is brief rather than thorough, but what it does say is clear enough, and it recommends using DataParallel.
When training with multiple GPUs in PyTorch, the two main things to get right are the forward pass and the backward pass.
Forward pass:
- net = Net() # Net is a custom network class
- device_ids = [2, 4, 5]
- cudnn.benchmark = True
- net = net.cuda(device_ids[0])
- net = nn.DataParallel(net, device_ids=device_ids) # re-wrap the model with DataParallel
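As a minimal runnable sketch of this setup (assuming a toy Net with a single conv layer, GPUs 2, 4 and 5 being available, and the PyTorch 0.3-era Variable API used throughout this post):

```python
import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
from torch.autograd import Variable

class Net(nn.Module):
    """Toy stand-in for the custom network class."""
    def __init__(self):
        super(Net, self).__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

device_ids = [2, 4, 5]
cudnn.benchmark = True                  # let cuDNN pick the fastest algorithms
net = Net().cuda(device_ids[0])         # parameters must live on device_ids[0]
net = nn.DataParallel(net, device_ids=device_ids)

img = Variable(torch.randn(6, 3, 32, 32)).cuda(device_ids[0])
predicted = net(img)                    # batch of 6 is split 2/2/2 across the GPUs
print(predicted.size())                 # gathered output: batch dimension matches img
```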
Backward pass:
- lr = 1e-2
- momentum = 0.9
- weight_decay = 1e-3
- param = get_param(net, lr) # get_param is a custom helper that builds the parameter groups
- optimizer = optim.SGD(param, momentum=momentum, weight_decay=weight_decay) # set up PyTorch's stochastic gradient descent
- loss = nn.MSELoss()
- optimizer = nn.DataParallel(optimizer, device_ids=device_ids) # wrap the optimizer in DataParallel as well
Using DataParallel:
- img = Variable(img, requires_grad=True).cuda(device_ids[0]) # input image batch
- gt = Variable(gt_heatmap).cuda(device_ids[0]) # ground truth
- predicted = net(img) # net is a DataParallel object; img is split into 3 chunks along the batch axis (batch_size/3 each), and once the 3 parallel forward passes finish the outputs are gathered back along the same axis, so predicted has the same batch dimension as img
- l = loss(predicted, gt) # criterion(input, target) argument order
- # compute gradient and do SGD step
- optimizer.zero_grad()
- l.backward() # backward pass; gradients flow back through the DataParallel module
- optimizer.module.step() # the optimizer sits inside DataParallel, so it must first be unwrapped back to the plain optim.SGD object before its update method can be called. A few other places need the same treatment; see below:
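For reference, the conventional pattern leaves the optimizer unwrapped, so no .module indirection is needed at all; only the model goes through DataParallel. A minimal sketch of one training step under that pattern (reusing net, img, and gt from the snippets above):

```python
import torch.nn as nn
import torch.optim as optim

# Conventional pattern: only the model is wrapped in DataParallel.
optimizer = optim.SGD(net.parameters(), lr=1e-2,
                      momentum=0.9, weight_decay=1e-3)
criterion = nn.MSELoss()

optimizer.zero_grad()
predicted = net(img)            # scatter -> parallel forward -> gather
l = criterion(predicted, gt)    # criterion(input, target) argument order
l.backward()                    # gradients are accumulated on device_ids[0]
optimizer.step()                # plain step(); no .module unwrapping
```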
The corresponding way to update the learning rate:
- for param_lr in optimizer.module.param_groups: # again, go through .module
-     param_lr['lr'] /= 2
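Wrapped as a small helper (the halve-every-10-epochs schedule is just an assumption for illustration; drop the .module if the optimizer is not wrapped in DataParallel):

```python
def halve_lr(optimizer, epoch, every=10):
    # Halve the learning rate of every parameter group once per `every` epochs.
    if epoch > 0 and epoch % every == 0:
        for param_group in optimizer.module.param_groups:
            param_group['lr'] /= 2
```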
Loading saved network parameters:
Loading saved parameters also needs care, because every key in the saved state dict has a module. prefix added. One option is to re-map the saved parameters by index, like this:
- model_dict = net.state_dict()
- vgg_19_key = list(vgg_19.keys()) # vgg_19 is the loaded, saved state dict
- model_key = list(model_dict.keys())
- from collections import OrderedDict
- vgg_dict = OrderedDict()
- param_num = len(vgg_19_key) # number of saved parameter tensors to copy
- for i in range(param_num):
-     vgg_dict[model_key[i]] = vgg_19[vgg_19_key[i]] # match saved values to model keys by position
- model_dict.update(vgg_dict)
- net.load_state_dict(model_dict)
Alternatively, simply strip the extra module. prefix from the OrderedDict keys, like this:
- trans_param = OrderedDict()
- for item, value in saved_state.items():
-     name = '.'.join(item.split('.')[1:]) # drop the leading 'module.'
-     trans_param[name] = value
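Putting the prefix-stripping approach together (the checkpoint path is a placeholder):

```python
import torch
from collections import OrderedDict

saved_state = torch.load('checkpoint.pth')   # placeholder path; keys look like 'module.conv.weight'

trans_param = OrderedDict()
for item, value in saved_state.items():
    name = '.'.join(item.split('.')[1:])     # drop the leading 'module.'
    trans_param[name] = value

# Load into the unwrapped model; for a DataParallel-wrapped net use net.module.
net.module.load_state_dict(trans_param)
```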
Additional notes (from another write-up):
pytorch-multi-gpu
1. nn.DataParallel
model = nn.DataParallel(model.cuda(1), device_ids=[1,2,3,4,5])
criteria = nn.Loss() # placeholder for a loss; observed memory when the criterion is moved with i. .cuda(1): 20G-21G, ii. .cuda(): 18.5G-12.7G, iii. no call: 16.5G-12.7G; all take almost the same time per batch
data = data.cuda(1)
label = label.cuda(1)
out = model(data)
Or:
model = nn.DataParallel(model, device_ids=[1,2,3,4,5]).cuda(1)
note:
- the original module is available as model.module
- if device_ids[0] uses much more memory than the others, place data and label on another device in the list and gather the outputs there (output_device); see the sketch below
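One way to act on that note: nn.DataParallel accepts an output_device argument, so the gathered outputs (and therefore the loss computation) can be placed on a GPU other than device_ids[0]. A sketch reusing model and criteria from above, with arbitrary device IDs and random placeholder tensors:

```python
import torch
from torch.autograd import Variable

model = nn.DataParallel(model.cuda(1), device_ids=[1, 2, 3], output_device=2)

data = Variable(torch.randn(6, 3, 32, 32)).cuda(1)   # inputs on device_ids[0] as usual
label = Variable(torch.randn(6, 8, 32, 32)).cuda(2)  # targets on the gather device
out = model(data)                                    # out lives on GPU 2, easing pressure on GPU 1
l = criteria(out, label)
```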
2. New API:
torch.version.cuda            # CUDA version PyTorch was built against
torch.cuda.get_device_name(0) # name of GPU 0
------------------- errors -------------------
1. Triggered by:
data = data.cuda()
RuntimeError: Assertion `THCTensor_(checkGPU)(state, 4, input, target, output, total_weight)' failed. Some of weight/gradient/input tensors are located on different GPUs. Please move them to a single one. at /b/wheel/pytorch-src/torch/lib/THCUNN/generic/SpatialClassNLLCriterion.cu:46
Cause: .cuda() with no argument puts the data on GPU 0, while the model's weights live on other GPUs.
2. Triggered by:
nn.DataParallel(model.cuda(), device_ids=[1,2,3,4,5])
result = self.forward(*input, **kwargs)
File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 60, in forward
replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 65, in replicate
return replicate(module, device_ids)
File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 12, in replicate
param_copies = Broadcast(devices)(*params)
File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 18, in forward
outputs = comm.broadcast_coalesced(inputs, self.target_gpus)
File "/anaconda3/lib/python3.6/site-packages/torch/cuda/comm.py", line 52, in broadcast_coalesced
raise RuntimeError('all tensors must be on devices[0]')
RuntimeError: all tensors must be on devices[0]
Cause: model.cuda() put the parameters on GPU 0, but device_ids[0] is GPU 1; move the model with model.cuda(1) before wrapping.
3. Triggered by (model still on the CPU):
nn.DataParallel(model, device_ids=[1,2,3,4,5])
out = model(data, train_seqs.index(name))
File "/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 60, in forward
replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
File "/data1/ailab_view/wenyulv/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 65, in replicate
return replicate(module, device_ids)
File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 12, in replicate
param_copies = Broadcast(devices)(*params)
File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 14, in forward
raise TypeError('Broadcast function not implemented for CPU tensors')
TypeError: Broadcast function not implemented for CPU tensors
Cause: the model was never moved onto a GPU, so its parameters are CPU tensors and DataParallel cannot broadcast them.
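All three errors trace back to the same invariant: before DataParallel scatters anything, the model's parameters and the inputs must all sit on device_ids[0]. A sketch of a setup that avoids each of them (reusing model, data, and label from the error context, with the device IDs from the tracebacks):

```python
device_ids = [1, 2, 3, 4, 5]

model = model.cuda(device_ids[0])       # avoids error 3: parameters must not stay on the CPU
model = nn.DataParallel(model, device_ids=device_ids)

data = data.cuda(device_ids[0])         # avoids errors 1 and 2: plain .cuda() would use GPU 0,
label = label.cuda(device_ids[0])       # which is not device_ids[0] here
out = model(data)
```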
------------------- references -------------------
1. https://github.com/GunhoChoi/Kind_PyTorch_Tutorial/blob/master/09_GAN_LayerName_MultiGPU/GAN_LayerName_MultiGPU.py
2. http://pytorch.org/docs/master/nn.html#dataparallel