PyTorch Basics: Getting Started with Basic Operations

After TensorFlow and Keras, I am now learning a new deep learning framework, PyTorch. These notes are organized simply so I can look things up later.
The material comes from a deep learning tutorial on GitHub written by an Indian author; the learning link is attached.

Compiled by: Chen Zhenting
QQ: 2621336811

Tensor basics

import numpy as np
import torch

Create an (uninitialized) tensor:

x = torch.Tensor(3, 4)
print("Type: {}".format(x.type()))
print("Size: {}".format(x.shape))
print("Values: \n{}".format(x))

Output:

Type: torch.FloatTensor
Size: torch.Size([3, 4])
Values: 
tensor([[1.1744e-35, 0.0000e+00, 2.8026e-44, 0.0000e+00],
        [       nan, 0.0000e+00, 1.3733e-14, 4.7429e+30],
        [1.9431e-19, 4.7429e+30, 5.0938e-14, 0.0000e+00]])
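
Note that torch.Tensor(3, 4) allocates memory without initializing it, which is why the values above are arbitrary. More explicit factory functions can be used instead; a minimal sketch (torch.empty for uninitialized memory, torch.full for a constant fill):

x = torch.empty(3, 4)        # uninitialized, equivalent to torch.Tensor(3, 4)
x = torch.full((3, 4), 7.0)  # every element set to 7.0
print (x)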

Create a random tensor:

x = torch.randn(2, 3) # normal distribution (rand(2,3) -> uniform distribution)
print (x)

Output:

tensor([[ 0.7434, -1.0611, -0.3752],
        [ 0.2613, -1.7051,  0.9118]])

Create an all-zeros or all-ones tensor:

x = torch.zeros(2, 3)
print (x)
x = torch.ones(2, 3)
print (x)

Output:

tensor([[0., 0., 0.],
        [0., 0., 0.]])
tensor([[1., 1., 1.],
        [1., 1., 1.]])
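
If you already have a tensor and want zeros or ones in the same shape and dtype, the *_like variants avoid repeating the shape; a short sketch:

x = torch.randn(2, 3)
print (torch.zeros_like(x)) # zeros with x's shape and dtype
print (torch.ones_like(x))  # ones with x's shape and dtype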

Convert a list to a tensor:

x = torch.Tensor([[1, 2, 3],[4, 5, 6]])
print("Size: {}".format(x.shape)) 
print("Values: \n{}".format(x))

Output:

Size: torch.Size([2, 3])
Values: 
tensor([[1., 2., 3.],
        [4., 5., 6.]])

Convert a NumPy array to a tensor:

x = torch.from_numpy(np.random.rand(2, 3))
print("Size: {}".format(x.shape)) 
print("Values: \n{}".format(x))

Output:

Size: torch.Size([2, 3])
Values: 
tensor([[0.0372, 0.6757, 0.9554],
        [0.5651, 0.2336, 0.8303]], dtype=torch.float64)
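
One caveat worth knowing: torch.from_numpy shares memory with the source array, so mutating the array changes the tensor, and .numpy() shares memory the other way for CPU tensors. A quick demonstration:

a = np.ones(3)
t = torch.from_numpy(a)  # t shares a's memory, no copy is made
a[0] = 5.0
print (t)                # the change to a is visible in t
b = t.numpy()            # the reverse conversion also shares the same buffer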

Change a tensor's type:

x = torch.Tensor(3, 4)
print("Type: {}".format(x.type()))
x = x.long()
print("Type: {}".format(x.type()))

Output:

Type: torch.FloatTensor
Type: torch.LongTensor
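
Besides shortcut methods like .long() and .float(), .to() accepts a target dtype directly, which reads more explicitly; a brief sketch:

x = torch.Tensor(3, 4)
x = x.to(torch.int64)   # equivalent to x.long()
print("Type: {}".format(x.type()))
x = x.to(torch.float32) # equivalent to x.float()
print("Type: {}".format(x.type()))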

Tensor operations

Tensor addition:

x = torch.randn(2, 3)
y = torch.randn(2, 3)
z = x + y
print("Size: {}".format(z.shape)) 
print("Values: \n{}".format(z))

Output:

Size: torch.Size([2, 3])
Values: 
tensor([[ 0.5650, -0.0173,  1.1263],
        [ 3.4274,  1.3610, -0.9262]])
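
Addition (and other elementwise ops) also broadcasts when shapes are compatible, following NumPy's rules; a short sketch where a (2, 3) tensor meets a (3,) tensor:

x = torch.randn(2, 3)
b = torch.randn(3) # shape (3,) broadcasts across x's rows
z = x + b          # result has shape (2, 3)
print("Size: {}".format(z.shape))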

Matrix multiplication:

x = torch.randn(2, 3)
y = torch.randn(3, 2)
z = torch.mm(x, y)
print("Size: {}".format(z.shape)) 
print("Values: \n{}".format(z))

Output:

Size: torch.Size([2, 2])
Values: 
tensor([[ 1.3294, -2.4559],
        [-0.4337,  4.9667]])
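
For 2-D tensors, the @ operator and torch.matmul give the same result as torch.mm; torch.matmul additionally handles batched inputs. A quick sketch:

x = torch.randn(2, 3)
y = torch.randn(3, 2)
print (torch.equal(x @ y, torch.mm(x, y))) # True: @ is matrix multiplication
xb = torch.randn(5, 2, 3)                  # batch of five (2, 3) matrices
yb = torch.randn(5, 3, 2)
print (torch.matmul(xb, yb).shape)         # torch.Size([5, 2, 2])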

Tensor transpose:

x = torch.randn(2, 3)
print("Size: {}".format(x.shape)) 
print("Values: \n{}".format(x))
y = torch.t(x)
print("Size: {}".format(y.shape)) 
print("Values: \n{}".format(y))

Output:

Size: torch.Size([2, 3])
Values: 
tensor([[ 0.0257, -0.5716, -0.9207],
        [-1.0590,  0.2942, -0.7114]])
Size: torch.Size([3, 2])
Values: 
tensor([[ 0.0257, -1.0590],
        [-0.5716,  0.2942],
        [-0.9207, -0.7114]])
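
torch.t only works for tensors with at most two dimensions; for higher-dimensional tensors use torch.transpose to swap two axes, or permute for an arbitrary reordering. A small sketch:

x = torch.randn(2, 3, 4)
print (torch.transpose(x, 0, 1).shape) # torch.Size([3, 2, 4])
print (x.permute(2, 0, 1).shape)       # torch.Size([4, 2, 3])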

Reshape a tensor:

z = x.view(3, 2)
print("Size: {}".format(z.shape)) 
print("Values: \n{}".format(z))

Output:

Size: torch.Size([3, 2])
Values: 
tensor([[ 0.0257, -0.5716],
        [-0.9207, -1.0590],
        [ 0.2942, -0.7114]])

Be careful when changing a tensor's shape; careless reshaping can produce unexpected results:

x = torch.tensor([
    [[1,1,1,1], [2,2,2,2], [3,3,3,3]],
    [[10,10,10,10], [20,20,20,20], [30,30,30,30]]
])
print("Size: {}".format(x.shape)) 
print("Values: \n{}\n".format(x))
a = x.view(x.size(1), -1)
print("Size: {}".format(a.shape)) 
print("Values: \n{}\n".format(a))
b = x.transpose(0,1).contiguous()
print("Size: {}".format(b.shape)) 
print("Values: \n{}\n".format(b))
c = b.view(b.size(0), -1)
print("Size: {}".format(c.shape)) 
print("Values: \n{}".format(c))

Output:

Size: torch.Size([2, 3, 4])
Values: 
tensor([[[ 1,  1,  1,  1],
         [ 2,  2,  2,  2],
         [ 3,  3,  3,  3]],

        [[10, 10, 10, 10],
         [20, 20, 20, 20],
         [30, 30, 30, 30]]])

Size: torch.Size([3, 8])
Values: 
tensor([[ 1,  1,  1,  1,  2,  2,  2,  2],
        [ 3,  3,  3,  3, 10, 10, 10, 10],
        [20, 20, 20, 20, 30, 30, 30, 30]])

Size: torch.Size([3, 2, 4])
Values: 
tensor([[[ 1,  1,  1,  1],
         [10, 10, 10, 10]],

        [[ 2,  2,  2,  2],
         [20, 20, 20, 20]],

        [[ 3,  3,  3,  3],
         [30, 30, 30, 30]]])

Size: torch.Size([3, 8])
Values: 
tensor([[ 1,  1,  1,  1, 10, 10, 10, 10],
        [ 2,  2,  2,  2, 20, 20, 20, 20],
        [ 3,  3,  3,  3, 30, 30, 30, 30]])
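
The reason for the difference: view never moves data, it just reads the underlying memory in order, while transpose changes the logical layout without touching memory, and contiguous() then copies the data into the new order. A small sketch of how to check and handle this with is_contiguous and reshape:

x = torch.tensor([[1, 2, 3], [4, 5, 6]])
print (x.is_contiguous())       # True: memory order matches logical order
y = x.transpose(0, 1)
print (y.is_contiguous())       # False: only the strides changed
print (y.contiguous().view(-1)) # contiguous() copies, so view is safe
print (y.reshape(-1))           # reshape copies automatically when needed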

Dimension operations, i.e., reducing a tensor along a specified axis:

x = torch.randn(2, 3)
print("Values: \n{}".format(x))
y = torch.sum(x, dim=0) # add each row's value for every column
print("Values: \n{}".format(y))
z = torch.sum(x, dim=1) # add each column's value for every row
print("Values: \n{}".format(z))

Output:

Values: 
tensor([[ 0.4295,  0.2223,  0.1772],
        [ 2.1602, -0.8891, -0.5011]])
Values: 
tensor([ 2.5897, -0.6667, -0.3239])
Values: 
tensor([0.8290, 0.7700])
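
By default the reduced dimension is dropped; pass keepdim=True to keep it with size 1, which is handy for broadcasting the result back against x. A short sketch:

x = torch.randn(2, 3)
y = torch.sum(x, dim=0, keepdim=True)          # shape (1, 3) instead of (3,)
print("Size: {}".format(y.shape))
print (x - torch.mean(x, dim=1, keepdim=True)) # subtract each row's mean via broadcasting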

Indexing, Splicing and Joining

Tensor indexing:

x = torch.randn(3, 4)
print("x: \n{}".format(x))
print ("x[:1]: \n{}".format(x[:1]))
print ("x[:1, 1:3]: \n{}".format(x[:1, 1:3]))

Output:

x: 
tensor([[-1.0305,  0.0368,  1.2809,  1.2346],
        [-0.8837,  1.3678, -0.0971,  1.2528],
        [ 0.3382, -1.4948, -0.7058,  1.3378]])
x[:1]: 
tensor([[-1.0305,  0.0368,  1.2809,  1.2346]])
x[:1, 1:3]: 
tensor([[0.0368, 1.2809]])

Tensor slicing: selecting by dimension:

x = torch.randn(2, 3)
print("Values: \n{}".format(x))
col_indices = torch.LongTensor([0, 2])
chosen = torch.index_select(x, dim=1, index=col_indices) # values from column 0 & 2
print("Values: \n{}".format(chosen)) 
row_indices = torch.LongTensor([0, 1])
chosen = x[row_indices, col_indices] # values from (0, 0) & (1, 2)
print("Values: \n{}".format(chosen)) 

Output:

Values: 
tensor([[ 0.0720,  0.4266, -0.5351],
        [ 0.9672,  0.3691, -0.7332]])
Values: 
tensor([[ 0.0720, -0.5351],
        [ 0.9672, -0.7332]])
Values: 
tensor([ 0.0720, -0.7332])
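
A related selection tool is boolean masking: torch.masked_select pulls out the elements where a condition holds, returning a flattened 1-D tensor. A brief sketch:

x = torch.randn(2, 3)
mask = x > 0                         # boolean tensor with x's shape
print (torch.masked_select(x, mask)) # 1-D tensor of the positive entries
print (x[mask])                      # equivalent indexing shorthand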

Tensor concatenation:

x = torch.randn(2, 3)
print("Values: \n{}".format(x))
y = torch.cat([x, x], dim=0) # concatenate along rows (dim=1 to concatenate along columns)
print("Values: \n{}".format(y))

Output:

Values: 
tensor([[-0.8443,  0.9883,  2.2796],
        [-0.0482, -0.1147, -0.5290]])
Values: 
tensor([[-0.8443,  0.9883,  2.2796],
        [-0.0482, -0.1147, -0.5290],
        [-0.8443,  0.9883,  2.2796],
        [-0.0482, -0.1147, -0.5290]])
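
torch.cat joins along an existing dimension; torch.stack instead creates a new dimension, so stacking two (2, 3) tensors gives (2, 2, 3) rather than (4, 3). A quick comparison:

x = torch.randn(2, 3)
print (torch.cat([x, x], dim=0).shape)   # torch.Size([4, 3])
print (torch.stack([x, x], dim=0).shape) # torch.Size([2, 2, 3])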

Gradients

Compute gradients of a tensor:

x = torch.rand(3, 4, requires_grad=True)
y = 3*x + 2
z = y.mean()
z.backward() # z has to be scalar
print("Values: \n{}".format(x))
print("x.grad: \n", x.grad)

Output:

Values: 
tensor([[0.7014, 0.2477, 0.5928, 0.5314],
        [0.2832, 0.0825, 0.5684, 0.3090],
        [0.1591, 0.0049, 0.0439, 0.7602]], requires_grad=True)
x.grad: 
 tensor([[0.2500, 0.2500, 0.2500, 0.2500],
        [0.2500, 0.2500, 0.2500, 0.2500],
        [0.2500, 0.2500, 0.2500, 0.2500]])
  • $y = 3x + 2$
  • $z = \sum y / N$
  • $\frac{\partial z}{\partial x} = \frac{\partial z}{\partial y} \frac{\partial y}{\partial x} = \frac{1}{N} \cdot 3 = \frac{1}{12} \cdot 3 = 0.25$
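
Two practical points about autograd worth keeping in mind: gradients accumulate across backward() calls, so they are usually reset between steps, and torch.no_grad() turns off graph tracking when only values are needed. A minimal sketch:

x = torch.rand(3, 4, requires_grad=True)
(3*x + 2).mean().backward()
(3*x + 2).mean().backward() # gradients accumulate: x.grad is now 0.50 everywhere
print (x.grad[0, 0])
x.grad.zero_()              # reset the accumulated gradients in place
with torch.no_grad():       # no graph is built inside this block
    y = 3*x + 2
print (y.requires_grad)     # False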

CUDA tensors

Is CUDA available?

print (torch.cuda.is_available())

Output:

True

Create a tensor on the CPU:

x = torch.Tensor(3, 4).to("cpu")
print("Type: {}".format(x.type()))

Output:

Type: torch.FloatTensor

Create a tensor on the GPU (CUDA):

x = torch.Tensor(3, 4).to("cuda")
print("Type: {}".format(x.type()))

Output:

Type: torch.cuda.FloatTensor
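
A common device-agnostic pattern is to pick the device once and move tensors (and models) with .to(device), so the same script runs with or without a GPU; a minimal sketch:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(3, 4).to(device) # lands on GPU when available, CPU otherwise
print (x.device)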

Reposted from blog.csdn.net/qq_30883339/article/details/86498046