PyTorch Neural Network Implementations
An introduction to CNNs, RNNs, LSTMs, and other common neural networks, together with their PyTorch implementations.
Convolutional Neural Network (CNN)
Convolution
The two key words are "convolution" and "neural network". Convolution means the network no longer processes the input pixel by pixel; instead it processes a small patch of pixels at a time, which preserves the continuity of the image information and lets the network see shapes rather than isolated points, deepening its understanding of the picture. Concretely, a convolutional neural network has a batch filter that keeps sliding across the image and collecting information, each time from only a small patch of pixels, and then summarizes what it has collected. After this summary the information starts to take concrete form, for example the network can now see some edge information in the picture. Repeating the same step, a similar batch filter sweeps over these edge features, and the network summarizes higher-level structures from them, for example edges that combine into eyes or a nose. After one more pass of filtering, face-level information is summarized from the eyes and nose. Finally, this information is fed into a few ordinary fully connected layers for classification, which gives us the class that the input picture belongs to.
Convolution kernel (filter) -> increases the depth (channel) dimension of the feature maps
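A minimal sketch (not part of the original tutorial, shapes chosen for illustration) of this idea in PyTorch: a single Conv2d layer with a 5x5 kernel scans a one-channel 28x28 image patch by patch and produces 16 feature maps, so the depth (channel) dimension grows while the spatial layout is preserved.
import torch
import torch.nn as nn
x = torch.randn(1, 1, 28, 28)  # (batch, channels, height, width): one grayscale image
conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2)
feature_maps = conv(x)  # each of the 16 kernels slides over the image, one small patch at a time
print(feature_maps.shape)  # torch.Size([1, 16, 28, 28]): depth grows from 1 to 16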
Pooling
Research has found that the network may inadvertently lose some information at each convolution. Pooling solves this problem nicely: it is a screening and filtering step that picks out the useful information in a layer and passes it on to the next layer for analysis, while also reducing the computational load of the network (see the referenced material for details). In other words, we avoid compressing the width and height during convolution so that as much information as possible is kept, and leave the compression to the pooling step; this extra step improves accuracy quite effectively. With these techniques in hand, we can build a convolutional neural network of our own.
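As a quick sketch (again with assumed shapes, not code from the original), max pooling with a 2x2 window halves the width and height while keeping only the strongest response in each window, which is exactly the compression step described above:
import torch
import torch.nn as nn
feature_maps = torch.randn(1, 16, 28, 28)  # output of a convolution layer
pool = nn.MaxPool2d(kernel_size=2)  # keep the max value in every 2x2 area
pooled = pool(feature_maps)
print(pooled.shape)  # torch.Size([1, 16, 14, 14]): width and height halved, depth unchanged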
A popular CNN architecture
A popular way to stack the layers, from bottom to top, is as follows: the input image goes through a convolution layer, the convolved information is then processed with pooling (max pooling here), and the same convolution-plus-pooling step is applied once more. The result is fed into two fully connected layers, an ordinary two-layer neural network, and finally a classifier makes the prediction; the sketch below summarizes this stack. This is only a brief introduction to convolutional networks for image processing; the next section shows how to build such a network in Python.
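The stack just described can be written as a compact nn.Sequential sketch (a simplified illustration only; it assumes a PyTorch version that provides nn.Flatten, while the full class in the next section flattens manually with view()):
import torch.nn as nn
model = nn.Sequential(
    nn.Conv2d(1, 16, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2),   # image -> convolution -> max pooling
    nn.Conv2d(16, 32, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2),  # same processing once more
    nn.Flatten(),                                            # flatten the feature maps
    nn.Linear(32 * 7 * 7, 128), nn.ReLU(),                   # fully connected layers
    nn.Linear(128, 10),                                      # classifier over the 10 digit classes
)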
Building a CNN with PyTorch
Convolutional neural networks are now widely used in image recognition, with an endless stream of applications. If you are not yet particularly familiar with them, the animated introduction to convolutional neural networks explains in a few minutes what a CNN is. Next, let's build a CNN that classifies handwritten digits, step by step.
# library
# standard library
import os
# third-party library
import torch
import torch.nn as nn
import torch.utils.data as Data
import torchvision
import matplotlib.pyplot as plt
# torch.manual_seed(1) # reproducible
# Hyper Parameters
EPOCH = 1 # train the training data n times,to save time,we just train 1 epoch
BATCH_SIZE = 50
LR = 0.001 # learning rate
DOWNLOAD_MNIST = False
# Mnist digits dataset
if not(os.path.exists('./mnist/')) or not os.listdir('./mnist/'):
    # no mnist dir, or the mnist dir is empty
DOWNLOAD_MNIST = True
train_data = torchvision.datasets.MNIST(
root='./mnist/',
train=True, # this is training data
transform=torchvision.transforms.ToTensor(), # Converts a PIL.Image or numpy.ndarray to
# torch.FloatTensor of shape (C x H x W) and normalize in the range [0.0,1.0]
download=DOWNLOAD_MNIST,
)
# plot one example
print(train_data.train_data.size()) # (60000,28,28)
print(train_data.train_labels.size()) # (60000)
plt.imshow(train_data.train_data[0].numpy(),cmap='gray')
plt.title('%i' % train_data.train_labels[0])
plt.show()
# Data Loader for easy mini-batch return in training,the image batch shape will be (50,1,28,28)
train_loader = Data.DataLoader(dataset=train_data,batch_size=BATCH_SIZE,shuffle=True)
# pick 2000 samples to speed up testing
test_data = torchvision.datasets.MNIST(root='./mnist/',train=False)
test_x = torch.unsqueeze(test_data.test_data,dim=1).type(torch.FloatTensor)[:2000]/255. # shape from (2000,28,28) to (2000,1,28,28),value in range(0,1)
test_y = test_data.test_labels[:2000]
class CNN(nn.Module):
def __init__(self):
super(CNN,self).__init__()
        # convolution block 1
self.conv1 = nn.Sequential( # input shape (1,28,28)
nn.Conv2d(
in_channels=1, # input height
out_channels=16, # n_filters
kernel_size=5, # filter size
stride=1, # filter movement/step
                padding=2, # to keep the same image width and height after Conv2d, use padding=(kernel_size-1)/2 when stride=1
), # output shape (16,28,28)
nn.ReLU(), # activation
nn.MaxPool2d(kernel_size=2), # choose max value in 2x2 area,output shape (16,14,14)
)
        # convolution block 2
self.conv2 = nn.Sequential( # input shape (16,14,14)
nn.Conv2d(16,32,5,1,2), # output shape (32,14,14)
nn.ReLU(), # activation
nn.MaxPool2d(2), # output shape (32,7,7)
)
self.out = nn.Linear(32 * 7 * 7,10) # fully connected layer,output 10 classes
def forward(self,x):
x = self.conv1(x)
x = self.conv2(x)
x = x.view(x.size(0),-1) # flatten the output of conv2 to (batch_size,32 * 7 * 7)
output = self.out(x)
return output,x # return x for visualization
cnn = CNN()
print(cnn) # net architecture
optimizer = torch.optim.Adam(cnn.parameters(),lr=LR) # optimize all cnn parameters
loss_func = nn.CrossEntropyLoss() # the target label is not one-hotted
# following function (plot_with_labels) is for visualization,can be ignored if not interested
from matplotlib import cm
try: from sklearn.manifold import TSNE; HAS_SK = True
except ImportError: HAS_SK = False; print('Please install sklearn for layer visualization')
def plot_with_labels(lowDWeights,labels):
plt.cla()
X,Y = lowDWeights[:,0],lowDWeights[:,1]
for x,y,s in zip(X,Y,labels):
c = cm.rainbow(int(255 * s / 9)); plt.text(x,y,s,backgroundcolor=c,fontsize=9)
plt.xlim(X.min(),X.max()); plt.ylim(Y.min(),Y.max()); plt.title('Visualize last layer'); plt.show(); plt.pause(0.01)
plt.ion()
# training and testing
for epoch in range(EPOCH):
for step,(b_x,b_y) in enumerate(train_loader): # gives batch data,normalize x when iterate train_loader
output = cnn(b_x)[0] # cnn output
loss = loss_func(output,b_y) # cross entropy loss
optimizer.zero_grad() # clear gradients for this training step
loss.backward() # backpropagation,compute gradients
optimizer.step() # apply gradients
if step % 50 == 0:
test_output,last_layer = cnn(test_x)
pred_y = torch.max(test_output,1)[1].data.numpy()
accuracy = float((pred_y == test_y.data.numpy()).astype(int).sum()) / float(test_y.size(0))
print('Epoch: ',epoch,'| train loss: %.4f' % loss.data.numpy(),'| test accuracy: %.2f' % accuracy)
if HAS_SK:
# Visualization of trained flatten layer (T-SNE)
tsne = TSNE(perplexity=30,n_components=2,init='pca',n_iter=5000)
plot_only = 500
low_dim_embs = tsne.fit_transform(last_layer.data.numpy()[:plot_only,:])
labels = test_y.numpy()[:plot_only]
plot_with_labels(low_dim_embs,labels)
plt.ioff()
# print 10 predictions from test data
test_output,_ = cnn(test_x[:10])
pred_y = torch.max(test_output,1)[1].data.numpy()
print(pred_y,'prediction number')
print(test_y[:10].numpy(),'real number')
RNN (Recurrent Neural Network)
Sequence data
Dependencies between successive data points
LSTM RNN
Problems it solves:
1. Vanishing gradients (gradient dispersion)
2. Exploding gradients
Main line: long-term memory (the cell state)
Branch line: short-term memory (the hidden state); see the sketch below
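A minimal sketch (shapes assumed for illustration) of how PyTorch exposes these two memories: nn.LSTM returns the per-step outputs together with the hidden state h_n (the short-term branch line) and the cell state c_n (the long-term main line).
import torch
import torch.nn as nn
lstm = nn.LSTM(input_size=28, hidden_size=64, num_layers=1, batch_first=True)
x = torch.randn(50, 28, 28)  # (batch, time_step, input_size)
r_out, (h_n, c_n) = lstm(x)  # initial hidden and cell states default to zeros
print(r_out.shape)  # torch.Size([50, 28, 64]): output at every time step
print(h_n.shape, c_n.shape)  # both torch.Size([1, 50, 64]): short-term hidden state, long-term cell state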
Implementing an RNN with PyTorch
We treat the image as data that is continuous in time: each row of pixels is the input at one time step, and reading the whole image from top to bottom means reading every row in turn. We can then use the RNN's output at the last step to decide which class the image belongs to. This example uses the MNIST dataset, where each sample is 28x28: 28 pixel values are fed in at each step, and 28 rows are fed in altogether, forming the time series.
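As a quick shape check (illustrative only), one 28x28 MNIST image becomes a batch of one sequence with 28 time steps, each carrying one row of 28 pixel values:
import torch
img = torch.randn(28, 28)  # one MNIST image: 28 rows of 28 pixels
seq = img.unsqueeze(0)  # (batch=1, time_step=28, input_size=28); row t is the input at time step t
print(seq.shape)  # torch.Size([1, 28, 28])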
- classification
import torch
from torch import nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
# torch.manual_seed(1) # reproducible
# Hyper Parameters
EPOCH = 1 # train the training data n times,to save time,we just train 1 epoch
BATCH_SIZE = 64
TIME_STEP = 28 # rnn time step / image height (number of time steps, one per row)
INPUT_SIZE = 28 # rnn input size / image width (number of pixel values fed in per step)
LR = 0.01 # learning rate
DOWNLOAD_MNIST = False # set to True if haven't download the data
# Mnist digital dataset
train_data = dsets.MNIST(
root='./mnist/',
train=True, # this is training data
transform=transforms.ToTensor(), # Converts a PIL.Image or numpy.ndarray to
# torch.FloatTensor of shape (C x H x W) and normalize in the range [0.0,1.0]
download=DOWNLOAD_MNIST, # download it if you don't have it
)
# plot one example
print(train_data.train_data.size()) # (60000,28,28)
print(train_data.train_labels.size()) # (60000)
plt.imshow(train_data.train_data[0].numpy(),cmap='gray')
plt.title('%i' % train_data.train_labels[0])
plt.show()
# Data Loader for easy mini-batch return in training
train_loader = torch.utils.data.DataLoader(dataset=train_data,batch_size=BATCH_SIZE,shuffle=True)
# convert test data into Variable,pick 2000 samples to speed up testing
test_data = dsets.MNIST(root='./mnist/',train=False,transform=transforms.ToTensor())
test_x = test_data.test_data.type(torch.FloatTensor)[:2000]/255. # shape (2000,28,28) value in range(0,1)
test_y = test_data.test_labels.numpy()[:2000] # convert to numpy array
class RNN(nn.Module):
def __init__(self):
super(RNN,self).__init__()
self.rnn = nn.LSTM( # if use nn.RNN(),it hardly learns
input_size=INPUT_SIZE,
hidden_size=64, # rnn hidden unit
num_layers=1, # number of rnn layer
            batch_first=True, # input & output will have batch size as the 1st dimension, e.g. (batch, time_step, input_size)
)
self.out = nn.Linear(64,10)
def forward(self,x):
# x shape (batch,time_step,input_size)
# r_out shape (batch,time_step,output_size)
# h_n shape (n_layers,batch,hidden_size)
# h_c shape (n_layers,batch,hidden_size)
r_out,(h_n,h_c) = self.rnn(x,None) # None represents zero initial hidden state
# choose r_out at the last time step
out = self.out(r_out[:,-1,:])
return out
rnn = RNN()
print(rnn)
optimizer = torch.optim.Adam(rnn.parameters(),lr=LR) # optimize all cnn parameters
loss_func = nn.CrossEntropyLoss() # the target label is not one-hotted
# training and testing
for epoch in range(EPOCH):
for step,(b_x,b_y) in enumerate(train_loader): # gives batch data
        b_x = b_x.view(-1,28,28) # reshape x to (batch, time_step, input_size); -1 lets the batch dimension be inferred automatically
output = rnn(b_x) # rnn output
loss = loss_func(output,b_y) # cross entropy loss
optimizer.zero_grad() # clear gradients for this training step
loss.backward() # backpropagation,compute gradients
optimizer.step() # apply gradients
if step % 50 == 0:
test_output = rnn(test_x) # (samples,time_step,input_size)
pred_y = torch.max(test_output,1)[1].data.numpy()
accuracy = float((pred_y == test_y).astype(int).sum()) / float(test_y.size)
print('Epoch: ',epoch,'| train loss: %.4f' % loss.data.numpy(),'| test accuracy: %.2f' % accuracy)
# print 10 predictions from test data
test_output = rnn(test_x[:10].view(-1,28,28))
pred_y = torch.max(test_output,1)[1].data.numpy()
print(pred_y,'prediction number')
print(test_y[:10],'real number')
- regression
import torch
from torch import nn
import numpy as np
import matplotlib.pyplot as plt
# torch.manual_seed(1) # reproducible
# Hyper Parameters
TIME_STEP = 10 # rnn time step
INPUT_SIZE = 1 # rnn input size
LR = 0.02 # learning rate
# show data
steps = np.linspace(0,np.pi*2,100,dtype=np.float32) # float32 for converting torch FloatTensor
x_np = np.sin(steps)
y_np = np.cos(steps)
plt.plot(steps,y_np,'r-',label='target (cos)')
plt.plot(steps,x_np,'b-',label='input (sin)')
plt.legend(loc='best')
plt.show()
class RNN(nn.Module):
def __init__(self):
super(RNN,self).__init__()
self.rnn = nn.RNN(
input_size=INPUT_SIZE,
hidden_size=32, # rnn hidden unit
num_layers=1, # number of rnn layer
            batch_first=True, # input & output will have batch size as the 1st dimension, e.g. (batch, time_step, input_size)
)
self.out = nn.Linear(32,1)
def forward(self,x,h_state):
# x (batch,time_step,input_size)
# h_state (n_layers,batch,hidden_size)
# r_out (batch,time_step,hidden_size)
r_out,h_state = self.rnn(x,h_state)
outs = [] # save all predictions
for time_step in range(r_out.size(1)): # calculate output for each time step
outs.append(self.out(r_out[:,time_step,:]))
return torch.stack(outs,dim=1),h_state
# instead, for simplicity, you can replace the above code with the following
# r_out = r_out.view(-1,32)
# outs = self.out(r_out)
# outs = outs.view(-1,TIME_STEP,1)
# return outs,h_state
# or even simpler,since nn.Linear can accept inputs of any dimension
# and returns outputs with same dimension except for the last
# outs = self.out(r_out)
# return outs
rnn = RNN()
print(rnn)
optimizer = torch.optim.Adam(rnn.parameters(),lr=LR) # optimize all cnn parameters
loss_func = nn.MSELoss()
h_state = None # for initial hidden state
plt.figure(1,figsize=(12,5))
plt.ion() # continuously plot
for step in range(100):
start,end = step * np.pi,(step+1)*np.pi # time range
# use sin predicts cos
steps = np.linspace(start,end,TIME_STEP,dtype=np.float32,endpoint=False) # float32 for converting torch FloatTensor
x_np = np.sin(steps)
y_np = np.cos(steps)
x = torch.from_numpy(x_np[np.newaxis,:,np.newaxis]) # shape (batch,time_step,input_size)
y = torch.from_numpy(y_np[np.newaxis,:,np.newaxis])
prediction,h_state = rnn(x,h_state) # rnn output
# !! next step is important !!
h_state = h_state.data # repack the hidden state,break the connection from last iteration
loss = loss_func(prediction,y) # calculate loss
optimizer.zero_grad() # clear gradients for this training step
loss.backward() # backpropagation,compute gradients
optimizer.step() # apply gradients
# plotting
plt.plot(steps,y_np.flatten(),'r-')
plt.plot(steps,prediction.data.numpy().flatten(),'b-')
plt.draw(); plt.pause(0.05)
plt.ioff()
plt.show()
Autoencoder
Compress the data, then decompress it
Sometimes a neural network has to take in a very large amount of input information. When the input is a high-resolution picture, for example, it can carry tens of millions of values, and learning directly from that many sources is hard work for the network. So why not compress it first: extract the most representative information from the original picture, reduce the amount of input, and then feed the reduced information into the network, which makes learning much easier. This is where the autoencoder comes in. The original data (the white X in the tutorial figure) is compressed and then decompressed into a reconstruction (the black X); comparing the two gives the prediction error, which is back-propagated to gradually improve the autoencoder. The middle part of a trained autoencoder is what summarizes the essence of the original data. Notice that from start to finish we only use the input data X and never its labels, so the autoencoder can be regarded as a form of unsupervised learning. When an autoencoder is actually put to use, usually only its first half (the encoder) is needed.
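A tiny sketch of the idea (made-up layer sizes, not the MNIST code below): compress the input with an encoder, reconstruct it with a decoder, and use the mean squared error between the input and its reconstruction as the training signal; no labels are involved.
import torch
import torch.nn as nn
encoder = nn.Sequential(nn.Linear(784, 32), nn.Tanh())  # compress 784 inputs down to 32 features
decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())  # decompress 32 features back to 784 values
x = torch.rand(8, 784)  # a batch of fake flattened "images"
reconstruction = decoder(encoder(x))
loss = nn.MSELoss()(reconstruction, x)  # reconstruction error drives the learning
print(loss.item())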
Hands-on with MNIST
We compress and then decompress the MNIST handwritten digit data, and use the compressed features for unsupervised learning.
An autoencoder only needs the training set, and only the training images; the labels are not used.
import torch
import torch.nn as nn
import torch.utils.data as Data
import torchvision
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import numpy as np
# torch.manual_seed(1) # reproducible
# Hyper Parameters
EPOCH = 10
BATCH_SIZE = 64
LR = 0.005 # learning rate
DOWNLOAD_MNIST = False
N_TEST_IMG = 5
# Mnist digits dataset
train_data = torchvision.datasets.MNIST(
root='./mnist/',
train=True,
# this is training data
transform=torchvision.transforms.ToTensor(),
# Converts a PIL.Image or numpy.ndarray to
# torch.FloatTensor of shape (C x H x W) and normalize in the range [0.0, 1.0]
download=DOWNLOAD_MNIST,
# download it if you don't have it
)
# plot one example
print(train_data.train_data.size()) # (60000, 28, 28)
print(train_data.train_labels.size()) # (60000)
plt.imshow(train_data.train_data[2].numpy(), cmap='gray')
plt.title('%i' % train_data.train_labels[2])
plt.show()
# Data Loader for easy mini-batch return in training, the image batch shape will be (50, 1, 28, 28)
train_loader = Data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
class AutoEncoder(nn.Module):
def __init__(self):
super(AutoEncoder, self).__init__()
self.encoder = nn.Sequential(
nn.Linear(28*28, 128),
nn.Tanh(),
nn.Linear(128, 64),
nn.Tanh(),
nn.Linear(64, 12),
nn.Tanh(),
nn.Linear(12, 3), # compress to 3 features which can be visualized in plt
)
self.decoder = nn.Sequential(
nn.Linear(3, 12),
nn.Tanh(),
nn.Linear(12, 64),
nn.Tanh(),
nn.Linear(64, 128),
nn.Tanh(),
nn.Linear(128, 28*28),
nn.Sigmoid(), # compress to a range (0, 1)
)
def forward(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return encoded, decoded
autoencoder = AutoEncoder()
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=LR)
loss_func = nn.MSELoss()
# initialize figure
f, a = plt.subplots(2, N_TEST_IMG, figsize=(5, 2))
plt.ion() # continuously plot
# original data (first row) for viewing
view_data = train_data.train_data[:N_TEST_IMG].view(-1, 28*28).type(torch.FloatTensor)/255.
for i in range(N_TEST_IMG):
a[0][i].imshow(np.reshape(view_data.data.numpy()[i], (28, 28)), cmap='gray'); a[0][i].set_xticks(()); a[0][i].set_yticks(())
for epoch in range(EPOCH):
for step, (x, b_label) in enumerate(train_loader):
b_x = x.view(-1, 28*28) # batch x, shape (batch, 28*28)
b_y = x.view(-1, 28*28) # batch y, shape (batch, 28*28)
encoded, decoded = autoencoder(b_x)
loss = loss_func(decoded, b_y) # mean square error
optimizer.zero_grad() # clear gradients for this training step
loss.backward() # backpropagation, compute gradients
optimizer.step() # apply gradients
if step % 100 == 0:
print('Epoch: ', epoch, '| train loss: %.4f' % loss.data.numpy())
# plotting decoded image (second row)
_, decoded_data = autoencoder(view_data)
for i in range(N_TEST_IMG):
a[1][i].clear()
a[1][i].imshow(np.reshape(decoded_data.data.numpy()[i], (28, 28)), cmap='gray')
a[1][i].set_xticks(()); a[1][i].set_yticks(())
plt.draw()
plt.pause(0.05)
plt.ioff()
plt.show()
# visualize in 3D plot
view_data = train_data.train_data[:200].view(-1, 28*28).type(torch.FloatTensor)/255.
encoded_data, _ = autoencoder(view_data)
fig = plt.figure(2); ax = Axes3D(fig)
X, Y, Z = encoded_data.data[:, 0].numpy(), encoded_data.data[:, 1].numpy(), encoded_data.data[:, 2].numpy()
values = train_data.train_labels[:200].numpy()
for x, y, z, s in zip(X, Y, Z, values):
c = cm.rainbow(int(255*s/9)); ax.text(x, y, z, s, backgroundcolor=c)
ax.set_xlim(X.min(), X.max()); ax.set_ylim(Y.min(), Y.max()); ax.set_zlim(Z.min(), Z.max())
plt.show()
Generative Adversarial Network (GAN)
"""
View more, visit my tutorial page: https://morvanzhou.github.io/tutorials/
My Youtube Channel: https://www.youtube.com/user/MorvanZhou
Dependencies:
torch: 0.4
numpy
matplotlib
"""
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
# torch.manual_seed(1) # reproducible
# np.random.seed(1)
# Hyper Parameters
BATCH_SIZE = 64
LR_G = 0.0001 # learning rate for generator
LR_D = 0.0001 # learning rate for discriminator
N_IDEAS = 5 # think of this as number of ideas for generating an art work (Generator)
ART_COMPONENTS = 15 # the total number of points G can draw on the canvas
PAINT_POINTS = np.vstack([np.linspace(-1, 1, ART_COMPONENTS) for _ in range(BATCH_SIZE)])
# show our beautiful painting range
# plt.plot(PAINT_POINTS[0], 2 * np.power(PAINT_POINTS[0], 2) + 1, c='#74BCFF', lw=3, label='upper bound')
# plt.plot(PAINT_POINTS[0], 1 * np.power(PAINT_POINTS[0], 2) + 0, c='#FF9359', lw=3, label='lower bound')
# plt.legend(loc='upper right')
# plt.show()
def artist_works_with_labels(): # painting from the famous artist (real target)
    a = np.random.uniform(1, 2, size=BATCH_SIZE)[:, np.newaxis]  # add a new axis so a becomes a column vector
paintings_np = a * np.power(PAINT_POINTS, 2) + (a-1)
labels_np = ((a-1) > 0.5) # upper paintings (1), lower paintings (0), two classes
paintings = torch.from_numpy(paintings_np).float()
labels = torch.from_numpy(labels_np.astype(np.float32))
return paintings, labels
G = nn.Sequential( # Generator
nn.Linear(N_IDEAS+1, 128), # random ideas (could from normal distribution) + class label
nn.ReLU(),
nn.Linear(128, ART_COMPONENTS), # making a painting from these random ideas
)
D = nn.Sequential( # Discriminator
nn.Linear(ART_COMPONENTS+1, 128), # receive art work either from the famous artist or a newbie like G with label
nn.ReLU(),
nn.Linear(128, 1),
nn.Sigmoid(), # tell the probability that the art work is made by artist
)
opt_D = torch.optim.Adam(D.parameters(), lr=LR_D)
opt_G = torch.optim.Adam(G.parameters(), lr=LR_G)
plt.ion() # something about continuous plotting
for step in range(10000):
artist_paintings, labels = artist_works_with_labels() # real painting, label from artist
G_ideas = torch.randn(BATCH_SIZE, N_IDEAS) # random ideas (BATCH_SIZE*N_IDEAS)
G_inputs = torch.cat((G_ideas, labels), 1) # ideas with labels
G_paintings = G(G_inputs) # fake painting w.r.t label from G
D_inputs0 = torch.cat((artist_paintings, labels), 1) # all have their labels
D_inputs1 = torch.cat((G_paintings, labels), 1)
prob_artist0 = D(D_inputs0) # D try to increase this prob
prob_artist1 = D(D_inputs1) # D try to reduce this prob
    D_score0 = torch.log(prob_artist0) # maximise this for D (natural log)
    D_score1 = torch.log(1. - prob_artist1) # maximise this for D
    D_loss = - torch.mean(D_score0 + D_score1) # minimise the negative of the two scores above for D
G_loss = torch.mean(D_score1) # minimise D score w.r.t G
opt_D.zero_grad()
D_loss.backward(retain_graph=True) # reusing computational graph
opt_D.step()
opt_G.zero_grad()
G_loss.backward()
opt_G.step()
if step % 200 == 0: # plotting
plt.cla()
plt.plot(PAINT_POINTS[0], G_paintings.data.numpy()[0], c='#4AD631', lw=3, label='Generated painting',)
bound = [0, 0.5] if labels.data[0, 0] == 0 else [0.5, 1]
plt.plot(PAINT_POINTS[0], 2 * np.power(PAINT_POINTS[0], 2) + bound[1], c='#74BCFF', lw=3, label='upper bound')
plt.plot(PAINT_POINTS[0], 1 * np.power(PAINT_POINTS[0], 2) + bound[0], c='#FF9359', lw=3, label='lower bound')
plt.text(-.5, 2.3, 'D accuracy=%.2f (0.5 for D to converge)' % prob_artist0.data.numpy().mean(), fontdict={
'size': 13})
plt.text(-.5, 2, 'D score= %.2f (-1.38 for G to converge)' % -D_loss.data.numpy(), fontdict={
'size': 13})
plt.text(-.5, 1.7, 'Class = %i' % int(labels.data[0, 0]), fontdict={
'size': 13})
plt.ylim((0, 3));plt.legend(loc='upper right', fontsize=10);plt.draw();plt.pause(0.1)
plt.ioff()
plt.show()
# plot a generated painting for upper class
z = torch.randn(1, N_IDEAS)
label = torch.FloatTensor([[1.]]) # for upper class
G_inputs = torch.cat((z, label), 1)
G_paintings = G(G_inputs)
plt.plot(PAINT_POINTS[0], G_paintings.data.numpy()[0], c='#4AD631', lw=3, label='G painting for upper class',)
bound = [0.5, 1]  # bounds for the upper class (label 1), fixed here instead of reusing the loop's last random value
plt.plot(PAINT_POINTS[0], 2 * np.power(PAINT_POINTS[0], 2) + bound[1], c='#74BCFF', lw=3, label='upper bound (class 1)')
plt.plot(PAINT_POINTS[0], 1 * np.power(PAINT_POINTS[0], 2) + bound[0], c='#FF9359', lw=3, label='lower bound (class 1)')
plt.ylim((0, 3)); plt.legend(loc='upper right', fontsize=10); plt.show()