torch - printing a model's gradient updates

Print all parameters (this dumps every parameter in the model, a long and dizzying list)

# weighted multi-task loss (lw holds the per-loss weights)
train_loss = lw[0] * loss0 + lw[1] * loss1 + lw[2] * loss2

# inspect every parameter before the backward pass
for name, parms in model.named_parameters():
    print('\nBefore backward\n')
    print('-->name:', name)
    print('-->para:', parms)
    print('-->requires_grad:', parms.requires_grad)
    print('-->grad_value:', parms.grad)   # None until backward() has run
    print("===========================")

train_loss.backward()

# inspect every parameter after the backward pass
for name, parms in model.named_parameters():
    print('\nAfter backward\n')
    print('-->name:', name)
    print('-->para:', parms)
    print('-->requires_grad:', parms.requires_grad)
    print('-->grad_value:', parms.grad)   # now holds the accumulated gradient
    print("===========================")

Print the first layer's parameters only (in torch's autograd, only leaf nodes with requires_grad=True have their gradient values stored in the .grad attribute during .backward(); for other tensors this can be customized via retain_grad)

train_loss = lw[0] * loss0 + lw[1] * loss1 + lw[2] * loss2

# inspect one specific parameter before the backward pass
for name, parms in model.named_parameters():
    if name == 'module.invertible_model.operations.0.F.conv1.weight':
        print('\nBefore backward\n')
        print('-->name:', name)
        # print('-->para:', parms)
        print('-->requires_grad:', parms.requires_grad)
        print('-->grad_value:', parms.grad)
        print("===========================")

train_loss.backward()

# inspect the same parameter after the backward pass
for name, parms in model.named_parameters():
    if name == 'module.invertible_model.operations.0.F.conv1.weight':
        print('\nAfter backward\n')
        print('-->name:', name)
        # print('-->para:', parms)
        print('-->requires_grad:', parms.requires_grad)
        print('-->grad_value:', parms.grad)
        print("===========================")


Reposted from blog.csdn.net/mr1217704159/article/details/121786034