Why call contiguous() before view()

output = model(data)[0]                               # shape: (100, 321, 481)

output = output.permute(1, 2, 0)                      # permute dimensions, shape: (321, 481, 100)

output = output.contiguous().view(-1, args.nChannel)  # shape: (321*481, 100)


view() reshapes a tensor to a new size, and this requires the tensor's data to sit in one contiguous block of memory. permute() only rearranges the strides without moving any data, so the permuted tensor is no longer contiguous; calling contiguous() first copies the data into a contiguous layout, after which view() can be applied.
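
A minimal, self-contained sketch of this behavior (the shapes and the channel count 100 below are illustrative assumptions standing in for the model output, not values taken from the original training code):

import torch

x = torch.randn(100, 321, 481)     # stand-in for the model output
y = x.permute(1, 2, 0)             # shape: (321, 481, 100), no data is copied
print(y.is_contiguous())           # False: strides no longer match a contiguous layout

# y.view(-1, 100)                  # would raise a RuntimeError because y is not contiguous
z = y.contiguous().view(-1, 100)   # copy into contiguous memory, then reshape
print(z.shape)                     # torch.Size([154401, 100]), i.e. (321*481, 100)

An alternative is reshape(), which returns a view when possible and falls back to copying when the tensor is not contiguous.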


Reposted from blog.csdn.net/jizhidexiaoming/article/details/82454432