Experiment 4: Linear Classification


Contents

Part 1: Logistic Regression

1. Dataset construction and splitting

2. Model construction

3. Model optimization

4. Evaluation metric

5. Completing the Runner class

6. Model training

7. Model evaluation

Part 2: Multi-class Classification with Softmax Regression

1. Dataset construction and splitting

2. Model construction

3. Model optimization

4. Model training

5. Model evaluation

Part 3: Practice: Iris Classification with Softmax Regression

1. Data processing

2. Model construction

3. Model training

4. Model evaluation

Part 4: Summary

Part 1: Logistic Regression


1. Dataset construction and splitting

The dataset contains n_samples (1000) samples in total: half are points on an outer moon (labeled 0) and half on an inner moon (labeled 1).

Outer moon: angles drawn uniformly from 0 to π, with features given by the cos and sin of each angle. Inner moon: angles drawn the same way, but with the features computed so that the points form a half circle nested inside the outer moon.

The dataset is split into a training set of 640 samples, a validation set of 160 samples, and a test set of 200 samples.

import math
import torch
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt

def make_moons(n_samples=1000, shuffle=True, noise=None):
    """
    Generate noisy moon-shaped data.
    Inputs:
        - n_samples: number of samples, int
        - shuffle: whether to shuffle the data, bool
        - noise: amount of noise to add, None or float; None means no noise
    Outputs:
        - X: features, shape=[n_samples, 2]
        - y: labels, shape=[n_samples]
    """
    n_samples_out = n_samples // 2
    n_samples_in = n_samples - n_samples_out
    # Sample the first class, with features (x, y)
    # Use 'torch.linspace' to take n_samples_out evenly spaced values in [0, pi]
    # Use 'torch.cos' of those values as feature 1 and 'torch.sin' as feature 2
    outer_circ_x = torch.cos(torch.linspace(0, math.pi, n_samples_out))
    outer_circ_y = torch.sin(torch.linspace(0, math.pi, n_samples_out))
    inner_circ_x = 1 - torch.cos(torch.linspace(0, math.pi, n_samples_in))
    inner_circ_y = 0.5 - torch.sin(torch.linspace(0, math.pi, n_samples_in))
    print('Outer moon x shape:', outer_circ_x.shape, 'outer moon y shape:', outer_circ_y.shape)
    print('Inner moon x shape:', inner_circ_x.shape, 'inner moon y shape:', inner_circ_y.shape)

    # Use 'torch.cat' to concatenate feature 1 and feature 2 of both classes along dim 0
    # Use 'torch.stack' to stack the two features along dim 1
    X = torch.stack(
        [torch.cat([outer_circ_x, inner_circ_x]),
         torch.cat([outer_circ_y, inner_circ_y])],
        dim=1
    )

    print('Shape after concatenation:', torch.cat([outer_circ_x, inner_circ_x]).shape)
    print('Shape of X:', X.shape)

    # Use 'torch.zeros' to set all first-class labels to 0
    # Use 'torch.ones' to set all second-class labels to 1
    y = torch.cat(
        [torch.zeros(size=[n_samples_out]), torch.ones(size=[n_samples_in])]
    )

    print('Shape of y:', y.shape)

    # If shuffle is True, shuffle all the data
    if shuffle:
        # Use 'torch.randperm' to generate a random permutation of 0..X.shape[0]-1
        # as indices for shuffling the data
        idx = torch.randperm(X.shape[0])
        X = X[idx]
        y = y[idx]

    # If noise is not None, add noise to the features
    if noise is not None:
        # Use 'torch.normal' to draw Gaussian noise and add it to the original features
        X += torch.normal(mean=0.0, std=noise, size=X.shape)

    return X, y

# Sample 1000 points
n_samples = 1000
X, y = make_moons(n_samples=n_samples, shuffle=True, noise=0.2)

# Visualize the generated dataset; colors indicate classes
plt.figure(figsize=(5,5))
plt.scatter(x=X[:, 0].tolist(), y=X[:, 1].tolist(), marker='*', c=y.tolist())
plt.xlim(-3,4)
plt.ylim(-3,4)
plt.savefig('线性数据集可视化.pdf')
plt.show()

num_train = 640
num_dev = 160
num_test = 200
X_train, y_train = X[:num_train], y[:num_train]
X_dev, y_dev = X[num_train:num_train + num_dev], y[num_train:num_train + num_dev]
X_test, y_test = X[num_train + num_dev:], y[num_train + num_dev:]
y_train = y_train.reshape([-1,1])
y_dev = y_dev.reshape([-1,1])
y_test = y_test.reshape([-1,1])
# Print the shapes of X_train and y_train
print("X_train shape: ", X_train.shape, "y_train shape: ", y_train.shape)

Output:

Outer moon x shape: torch.Size([500]) outer moon y shape: torch.Size([500])
Inner moon x shape: torch.Size([500]) inner moon y shape: torch.Size([500])
Shape after concatenation: torch.Size([1000])
Shape of X: torch.Size([1000, 2])
Shape of y: torch.Size([1000])
X_train shape:  torch.Size([640, 2]) y_train shape:  torch.Size([640, 1])

2. Model construction

The Logistic regression model is simply a linear layer followed by the Logistic (sigmoid) function. The weights and bias are usually initialized to zero, though random initialization also works.

Build a Logistic regression operator as follows:

import torch
import torch.nn as nn

class ModelLR(nn.Module):
    def __init__(self, input_dim):
        super(ModelLR, self).__init__()
        # Define and initialize the model parameters.
        # Note: a plain dict does not register parameters with nn.Module;
        # that is fine here because we update them with a hand-written optimizer.
        self.params = {}
        # Initialize the weights to zero, shape [input_dim, 1]
        self.params['w'] = nn.Parameter(torch.zeros(input_dim, 1))
        # Optional: initialize the weights from a normal distribution instead
        # self.params['w'] = nn.Parameter(torch.normal(0, 0.01, (input_dim, 1)))
        # Initialize the bias to zero, shape [1]
        self.params['b'] = nn.Parameter(torch.zeros(1))

    # nn.Module.__call__ already dispatches to forward,
    # so no separate __call__ method is needed.
    def forward(self, inputs):
        """
        Forward pass.
        Inputs:
            - inputs: shape=[N, D], N is the number of samples, D the feature dimension
        Outputs:
            - outputs: predicted probability of label 1, shape=[N, 1]
        """
        # Linear transformation with the initialized weights and bias
        score = torch.matmul(inputs, self.params['w']) + self.params['b']
        # Squash the linear output into a probability with the sigmoid function
        outputs = torch.sigmoid(score)
        return outputs

Model test:

# Fix the random seed so results are reproducible
torch.manual_seed(0)
# Randomly generate 3 samples of length 4
inputs = torch.randn(size=[3,4])
print('Input is:', inputs)
# Instantiate the model
model = ModelLR(4)
outputs = model(inputs)
print('Output is:', outputs)
Input is: tensor([[ 1.5410, -0.2934, -2.1788,  0.5684],
        [-1.0845, -1.3986,  0.4033,  0.8380],
        [-0.7193, -0.4033, -0.5966,  0.1820]])
Output is: tensor([[0.5000],
        [0.5000],
        [0.5000]], grad_fn=<SigmoidBackward0>)

With all parameters initialized to zero, the linear transformation outputs 0 for every input, so the Logistic function always returns 0.5, since sigmoid(0) = 0.5.
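This can be verified in one line:

```python
import torch

# With all-zero weights and bias, the logit is 0 for any input,
# and sigmoid(0) = 1 / (1 + e^0) = 0.5.
prob = torch.sigmoid(torch.tensor(0.0))
print(prob.item())  # 0.5
```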

If the weights are instead initialized from a normal distribution,

self.params['w'] = nn.Parameter(torch.normal(0, 0.01, (input_dim, 1)))

the model test outputs:

Input is: tensor([[ 1.5410, -0.2934, -2.1788,  0.5684],
        [-1.0845, -1.3986,  0.4033,  0.8380],
        [-0.7193, -0.4033, -0.5966,  0.1820]])
Output is: tensor([[0.5019],
        [0.4977],
        [0.5021]], grad_fn=<SigmoidBackward0>)
  • The first sample's output is 0.5019: the model puts its probability of belonging to the positive class slightly above 50%.
  • The second sample's output is 0.4977: slightly below 50%.
  • The third sample's output is 0.5021: again slightly above 50%.

All outputs still hover around 0.5, so an untrained model predicts poorly.

Define the model's loss function, the binary cross-entropy loss:

import torch
import torch.nn as nn


class BinaryCrossEntropyLoss(nn.Module):
    def __init__(self):
        super(BinaryCrossEntropyLoss, self).__init__()
        self.predicts = None
        self.labels = None
        self.num = None

    def forward(self, predicts, labels):
        """
        Inputs:
            - predicts: predictions, shape=[N, 1], N is the number of samples
            - labels: ground-truth labels, shape=[N, 1]
        Outputs:
            - loss value, shape=[1]
        """
        self.predicts = predicts
        self.labels = labels
        self.num = self.predicts.shape[0]

        # Binary cross-entropy loss in matrix form:
        # -1/N * (y^T log(p) + (1 - y)^T log(1 - p))
        loss = -1. / self.num * (
            torch.matmul(self.labels.t(), torch.log(self.predicts))
            + torch.matmul((1 - self.labels.t()), torch.log(1 - self.predicts)))
        loss = torch.squeeze(loss, dim=1)
        return loss

Test the cross-entropy loss:

# Quick test
# Generate 3 labels, all equal to 1
labels = torch.ones(size=[3, 1])
# Pretend outputs is the model's output
outputs = torch.rand(size=[3, 1])  # random stand-in for the model output
# Compute the loss
bce_loss = BinaryCrossEntropyLoss()
loss = bce_loss(outputs, labels)
print(loss.item())  # print the loss value

Output: 0.8084673881530762
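The matrix form above should agree with PyTorch's built-in F.binary_cross_entropy, which averages the per-element loss by default. A quick sketch to check (the seed and random outputs here are illustrative):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
labels = torch.ones(3, 1)
outputs = torch.rand(3, 1)

# Manual binary cross-entropy, matching the matrix form above
n = outputs.shape[0]
manual = -1.0 / n * (labels.t() @ torch.log(outputs)
                     + (1 - labels.t()) @ torch.log(1 - outputs))

# Built-in version: should give the same number
builtin = F.binary_cross_entropy(outputs, labels)
print(torch.allclose(manual.squeeze(), builtin))  # True
```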

3. Model optimization

Gradient computation:

In the __init__ method, define self.grads to hold the parameter gradients, and implement the partial derivatives in the Logistic regression operator's backward function.
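For reference, the binary cross-entropy risk averaged over $N$ samples has the closed-form gradients that backward implements:

```latex
\frac{\partial \mathcal{R}}{\partial \mathbf{w}}
  = -\frac{1}{N}\,\mathbf{X}^\top\!\left(\mathbf{y} - \hat{\mathbf{y}}\right),
\qquad
\frac{\partial \mathcal{R}}{\partial b}
  = -\frac{1}{N}\sum_{n=1}^{N}\left(y_n - \hat{y}_n\right)
```

where $\hat{\mathbf{y}} = \sigma(\mathbf{X}\mathbf{w} + b)$ are the model outputs.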

import torch
import torch.nn as nn

class ModelLR(nn.Module):
    def __init__(self, input_dim):
        super(ModelLR, self).__init__()
        # Holds the linear layer's parameters
        self.params = {}
        # Initialize all weights of the linear layer to zero
        self.params['w'] = nn.Parameter(torch.zeros(input_dim, 1))
        # Uncomment the next line to use a different initialization
        # self.params['w'] = nn.Parameter(torch.normal(0, 0.01, (input_dim, 1)))
        # Initialize the bias to zero
        self.params['b'] = nn.Parameter(torch.zeros(1))
        # Holds the parameter gradients
        self.grads = {}
        self.X = None
        self.outputs = None

    def forward(self, inputs):
        self.X = inputs
        # Linear transformation
        score = torch.matmul(inputs, self.params['w']) + self.params['b']
        # Logistic function
        self.outputs = torch.sigmoid(score)
        return self.outputs

    def backward(self, labels):
        """
        Inputs:
            - labels: ground-truth labels, shape=[N, 1]
        """
        N = labels.shape[0]
        # Closed-form partial derivatives
        self.grads['w'] = -1 / N * torch.matmul(self.X.t(), (labels - self.outputs))
        self.grads['b'] = -1 / N * torch.sum(labels - self.outputs)
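The closed-form gradients in backward can be sanity-checked against autograd. This is a minimal sketch with made-up data (the sizes N, D and the random labels are illustrative, not part of the experiment):

```python
import torch

torch.manual_seed(0)
N, D = 8, 2
X = torch.randn(N, D)
labels = (torch.rand(N, 1) > 0.5).float()

w = torch.zeros(D, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

# Forward pass and mean binary cross-entropy, tracked by autograd
outputs = torch.sigmoid(X @ w + b)
loss = -(labels * torch.log(outputs) + (1 - labels) * torch.log(1 - outputs)).mean()
loss.backward()

# Manual gradients, exactly as computed in backward()
grad_w = -1 / N * X.t() @ (labels - outputs.detach())
grad_b = -1 / N * torch.sum(labels - outputs.detach())

print(torch.allclose(w.grad, grad_w), torch.allclose(b.grad, grad_b))  # True True
```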

Parameter update:

Wrap the update step in an optimizer. Define an optimizer base class Optimizer (so that later optimizers share the same interface) which stores the initial learning rate init_lr and the model whose parameters it optimizes, then implement a simple batch gradient descent optimizer SimpleBatchGD:

from abc import abstractmethod

# Optimizer base class
class Optimizer(object):
    def __init__(self, init_lr, model):
        """
        Optimizer initialization.
        """
        # Initial learning rate, used when updating parameters
        self.init_lr = init_lr
        # The model whose parameters this optimizer updates
        self.model = model

    @abstractmethod
    def step(self):
        """
        Defines how parameters are updated at each iteration.
        """
        pass

class SimpleBatchGD(Optimizer):
    def __init__(self, init_lr, model):
        super(SimpleBatchGD, self).__init__(init_lr=init_lr, model=model)

    def step(self):
        # Parameter update: iterate over all parameters and update them
        # following Eqs. (3.8) and (3.9)
        if isinstance(self.model.params, dict):
            for key in self.model.params.keys():
                self.model.params[key] = self.model.params[key] - self.init_lr * self.model.grads[key]

abc (Abstract Base Classes) is a module in the Python standard library that supports defining abstract base classes.

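One detail worth noting: @abstractmethod only blocks instantiation when the base class uses the ABCMeta metaclass, e.g. by inheriting from abc.ABC. The Optimizer above inherits from object, so the decorator there serves only as documentation. A standalone sketch of the enforced behavior:

```python
from abc import ABC, abstractmethod

class Base(ABC):
    @abstractmethod
    def step(self):
        ...

class NoStep(Base):       # forgets to implement step
    pass

class WithStep(Base):
    def step(self):
        return "updated"

try:
    NoStep()              # abstract method not overridden -> TypeError
except TypeError as e:
    print("TypeError:", e)

print(WithStep().step())  # updated
```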


4. Evaluation metric

Classification tasks are usually evaluated with accuracy: a prediction is correct when the predicted class matches the true class, and accuracy is the number of correct predictions divided by the total number of predictions.

import torch


def accuracy(preds, labels):
    """
    Inputs:
        - preds: predictions; shape=[N, 1] for binary classification (N samples),
                 shape=[N, C] for multi-class classification (C classes)
        - labels: ground-truth labels, shape=[N, 1]
    Outputs:
        - accuracy, shape=[1]
    """
    # preds.shape[1] == 1 means binary classification; > 1 means multi-class
    if preds.shape[1] == 1:
        # Binary: class 1 if the probability is greater than 0.5, else class 0
        # 'torch.round' rounds the probabilities to binary labels
        preds = torch.round(preds)
    else:
        # Multi-class: 'torch.argmax' picks the index of the largest score as the class
        preds = torch.argmax(preds, dim=1)

    # Compute accuracy (preds and labels must broadcast to the same shape)
    correct = (preds == labels).sum().item()
    accuracy = correct / len(labels)
    return accuracy

Test:

# Suppose the predictions are [[0.],[1.],[1.],[0.]] and the true classes are [[1.],[1.],[0.],[0.]]
preds = torch.tensor([[0.], [1.], [1.], [0.]])
labels = torch.tensor([[1.], [1.], [0.], [0.]])
print("accuracy is:", accuracy(preds, labels))
accuracy is: 0.5


5. Completing the Runner class

The Runner class wraps: 1. initialization, 2. train, 3. evaluate, 4. predict, and 5. model management:

Save model (save_model): saves the model parameters to the given path.

Load model (load_model): loads the model parameters from the given path.

Wrapping these steps into one class lets later experiments call it directly.

import torch
# RunnerV2 wraps the whole training process
class RunnerV2(object):
    def __init__(self, model, optimizer, metric, loss_fn):
        self.model = model
        self.optimizer = optimizer
        self.loss_fn = loss_fn
        self.metric = metric
        # Track the evaluation metric during training
        self.train_scores = []
        self.dev_scores = []
        # Track the loss during training
        self.train_loss = []
        self.dev_loss = []

    def train(self, train_set, dev_set, **kwargs):
        # Number of training epochs; defaults to 0 if not given
        num_epochs = kwargs.get("num_epochs", 0)
        # Logging frequency; defaults to 100 if not given
        log_epochs = kwargs.get("log_epochs", 100)
        # Model save path; defaults to "best_model.pdparams" if not given
        save_path = kwargs.get("save_path", "best_model.pdparams")
        # Gradient printing function; defaults to None if not given
        print_grads = kwargs.get("print_grads", None)
        # Best metric seen so far
        best_score = 0
        # Train for num_epochs epochs
        for epoch in range(num_epochs):
            X, y = train_set
            # Get the model predictions
            logits = self.model(X)
            # Compute the cross-entropy loss
            trn_loss = self.loss_fn(logits, y).item()
            self.train_loss.append(trn_loss)
            # Compute the evaluation metric
            trn_score = self.metric(logits, y)
            self.train_scores.append(trn_score)
            # Compute the parameter gradients
            self.model.backward(y)
            if print_grads is not None:
                # Print the gradients of each layer
                print_grads(self.model)
            # Update the model parameters
            self.optimizer.step()
            dev_score, dev_loss = self.evaluate(dev_set)
            # Save the model whenever the dev metric improves
            if dev_score > best_score:
                self.save_model(save_path)
                print(f"best accuracy performance has been updated: {best_score:.5f} --> {dev_score:.5f}")
                best_score = dev_score
            if epoch % log_epochs == 0:
                print(f"[Train] epoch: {epoch}, loss: {trn_loss}, score: {trn_score}")
                print(f"[Dev] epoch: {epoch}, loss: {dev_loss}, score: {dev_score}")

    def evaluate(self, data_set):
        X, y = data_set
        # Compute the model output
        logits = self.model(X)
        # Compute the loss
        loss = self.loss_fn(logits, y).item()
        self.dev_loss.append(loss)
        # Compute the evaluation metric
        score = self.metric(logits, y)
        self.dev_scores.append(score)
        return score, loss

    def predict(self, X):
        return self.model(X)

    def save_model(self, save_path):
        torch.save(self.model.params, save_path)

    def load_model(self, model_path):
        self.model.params = torch.load(model_path)


6. Model training

Optimize with the cross-entropy loss and gradient descent. Train on the training set while validating on the validation set, for 500 epochs in total, printing the training metrics every 50 epochs.

Set the hyperparameters --> instantiate the model --> choose the optimizer, loss function, and evaluation metric --> instantiate RunnerV2 with the model, optimizer, metric, and loss function, then call train with the training and validation sets to start training.

# Fix the random seed so results are reproducible
torch.manual_seed(102)
# Feature dimension
input_dim = 2
# Learning rate
lr = 0.2
# Instantiate the model
model = ModelLR(input_dim=input_dim)
# Choose the optimizer
optimizer = SimpleBatchGD(init_lr=lr, model=model)
# Choose the loss function
loss_fn = BinaryCrossEntropyLoss()
# Choose the evaluation metric
metric = accuracy

# Instantiate RunnerV2 with the training configuration
runner = RunnerV2(model, optimizer, metric, loss_fn)
runner.train([X_train, y_train], [X_dev, y_dev], num_epochs=500, log_epochs=50, save_path="best_model.pdparams")

Visualize the accuracy and loss on the training and validation sets, along with the decision boundary:

# Visualize how the metrics evolve on the training and validation sets
def plot(runner,fig_name):
    plt.figure(figsize=(10,5))
    plt.subplot(1,2,1)
    epochs = [i for i in range(len(runner.train_scores))]
    # Training loss curve
    plt.plot(epochs, runner.train_loss, color='#e4007f', label="Train loss")
    # Validation loss curve
    plt.plot(epochs, runner.dev_loss, color='#f19ec2', linestyle='--', label="Dev loss")
    # Axes and legend
    plt.ylabel("loss", fontsize='large')
    plt.xlabel("epoch", fontsize='large')
    plt.legend(loc='upper right', fontsize='x-large')
    plt.subplot(1,2,2)
    # Training accuracy curve
    plt.plot(epochs, runner.train_scores, color='#e4007f', label="Train accuracy")
    # Validation accuracy curve
    plt.plot(epochs, runner.dev_scores, color='#f19ec2', linestyle='--', label="Dev accuracy")
    # Axes and legend
    plt.ylabel("score", fontsize='large')
    plt.xlabel("epoch", fontsize='large')
    plt.legend(loc='lower right', fontsize='x-large')
    plt.tight_layout()
    plt.savefig(fig_name)
    plt.show()

plot(runner,fig_name='linear-acc.pdf')

def decision_boundary(w, b, x1):
    w1, w2 = w.flatten()  # flatten the weights to a 1-D array
    x2 = (- w1 * x1 - b) / w2  # solve for the corresponding x2 values
    return x2
# Plot the decision boundary on the training set
plt.figure(figsize=(5, 5))
# Plot the training data
plt.scatter(X_train[:, 0].tolist(), X_train[:, 1].tolist(), marker='*', c=y_train.tolist(), label='Training Data')

# Fetch the model parameters
w = model.params['w'].detach().numpy()  # convert to a numpy array
b = model.params['b'].detach().numpy()  # convert to a numpy array

# Range of x1 values
x1 = torch.linspace(-2, 3, 1000).detach().numpy()  # convert to a numpy array
x2 = decision_boundary(w, b, x1)  # x2 values on the decision boundary

# Draw the decision boundary
plt.plot(x1, x2, color="red", label='Decision Boundary')
plt.xlim(-2, 3)
plt.ylim(-2, 3)
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('Decision Boundary on Training Data')
plt.legend()
plt.show()
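For reference, decision_boundary solves for the line on which the model's score is zero, i.e. the 0.5-probability contour:

```latex
w_1 x_1 + w_2 x_2 + b = 0
\quad\Longrightarrow\quad
x_2 = \frac{-w_1 x_1 - b}{w_2}
```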

Results:

best accuracy performance has been updated: 0.00000 --> 0.77500
[Train] epoch: 0, loss: 0.6931460499763489, score: 0.4890625
[Dev] epoch: 0, loss: 0.6747446060180664, score: 0.775
best accuracy performance has been updated: 0.77500 --> 0.78125
best accuracy performance has been updated: 0.78125 --> 0.78750
best accuracy performance has been updated: 0.78750 --> 0.80000
best accuracy performance has been updated: 0.80000 --> 0.80625
[Train] epoch: 50, loss: 0.408275842666626, score: 0.8171875
[Dev] epoch: 50, loss: 0.41150161623954773, score: 0.8
best accuracy performance has been updated: 0.80625 --> 0.81250
best accuracy performance has been updated: 0.81250 --> 0.81875
best accuracy performance has been updated: 0.81875 --> 0.82500
best accuracy performance has been updated: 0.82500 --> 0.83125
[Train] epoch: 100, loss: 0.3701808452606201, score: 0.828125
[Dev] epoch: 100, loss: 0.36325883865356445, score: 0.83125
best accuracy performance has been updated: 0.83125 --> 0.83750
[Train] epoch: 150, loss: 0.3553125858306885, score: 0.8359375
[Dev] epoch: 150, loss: 0.3403373956680298, score: 0.8375
best accuracy performance has been updated: 0.83750 --> 0.84375
[Train] epoch: 200, loss: 0.3471297323703766, score: 0.8421875
[Dev] epoch: 200, loss: 0.3259211480617523, score: 0.84375
[Train] epoch: 250, loss: 0.3419327437877655, score: 0.84375
[Dev] epoch: 250, loss: 0.31580424308776855, score: 0.84375
best accuracy performance has been updated: 0.84375 --> 0.85000
[Train] epoch: 300, loss: 0.33840587735176086, score: 0.846875
[Dev] epoch: 300, loss: 0.30832257866859436, score: 0.85
[Train] epoch: 350, loss: 0.33592724800109863, score: 0.8484375
[Dev] epoch: 350, loss: 0.3026159703731537, score: 0.85
[Train] epoch: 400, loss: 0.33414754271507263, score: 0.8484375
[Dev] epoch: 400, loss: 0.29816707968711853, score: 0.85
best accuracy performance has been updated: 0.85000 --> 0.85625
best accuracy performance has been updated: 0.85625 --> 0.86250
[Train] epoch: 450, loss: 0.33284980058670044, score: 0.8484375
[Dev] epoch: 450, loss: 0.29463812708854675, score: 0.8625

  • Over the training epochs, both the training loss and the validation loss decrease steadily, showing that the model keeps improving as it learns.
  • Accuracy climbs from an initial 0.489 to a final 0.8625, a substantial improvement in model performance.


7. Model evaluation

#=========== Model evaluation ======================
score, loss = runner.evaluate([X_test, y_test])
print("[Test] score/loss: {:.4f}/{:.4f}".format(score, loss))

plt.figure(figsize=(5,5))
# Plot the raw data
plt.scatter(X[:, 0].tolist(), X[:, 1].tolist(), marker='*', c=y.tolist())

w = model.params['w']
b = model.params['b']
x1 = torch.linspace(-2, 3, 1000)
x2 = decision_boundary(w, b, x1)
# Draw the decision boundary
plt.plot(x1.tolist(), x2.tolist(), color="red")
plt.show()

[Test] score/loss: 0.8900/0.2834

The final test score is 0.8900 with a loss of 0.2834, so the model generalizes well to the test set.

I then trained for 1000 epochs with a step size of 0.3 and observed the test score and loss:

[Test] score/loss: 0.9050/0.2356

The final test score is 0.905 with a loss of 0.2356: more training iterations together with a suitably larger step size improve the model's performance.


The complete code (excluding the RunnerV2 class, which is imported from Runner2) is below and runnable:

import torch
import torch.nn as nn
from Runner2 import RunnerV2  # import the RunnerV2 class

class ModelLR(nn.Module):
    def __init__(self, input_dim):
        super(ModelLR, self).__init__()
        # Holds the linear layer's parameters
        self.params = {}
        # Initialize all weights of the linear layer to zero
        self.params['w'] = nn.Parameter(torch.zeros(input_dim, 1))
        # Uncomment the next line to use a different initialization
        # self.params['w'] = nn.Parameter(torch.normal(0, 0.01, (input_dim, 1)))
        # Initialize the bias to zero
        self.params['b'] = nn.Parameter(torch.zeros(1))
        # Holds the parameter gradients
        self.grads = {}
        self.X = None
        self.outputs = None

    def forward(self, inputs):
        self.X = inputs
        # Linear transformation
        score = torch.matmul(inputs, self.params['w']) + self.params['b']
        # Logistic function
        self.outputs = torch.sigmoid(score)
        return self.outputs

    def backward(self, labels):
        """
        Inputs:
            - labels: ground-truth labels, shape=[N, 1]
        """
        N = labels.shape[0]
        # Closed-form partial derivatives
        self.grads['w'] = -1 / N * torch.matmul(self.X.t(), (labels - self.outputs))
        self.grads['b'] = -1 / N * torch.sum(labels - self.outputs)

class BinaryCrossEntropyLoss(nn.Module):
    def __init__(self):
        super(BinaryCrossEntropyLoss, self).__init__()
        self.predicts = None
        self.labels = None
        self.num = None

    def forward(self, predicts, labels):
        """
        Inputs:
            - predicts: predictions, shape=[N, 1], N is the number of samples
            - labels: ground-truth labels, shape=[N, 1]
        Outputs:
            - loss value, shape=[1]
        """
        self.predicts = predicts
        self.labels = labels
        self.num = self.predicts.shape[0]

        # Binary cross-entropy loss in matrix form
        loss = -1. / self.num * (
            torch.matmul(self.labels.t(), torch.log(self.predicts))
            + torch.matmul((1 - self.labels.t()), torch.log(1 - self.predicts)))
        loss = torch.squeeze(loss, dim=1)
        return loss

from abc import abstractmethod

# Optimizer base class
class Optimizer(object):
    def __init__(self, init_lr, model):
        """
        Optimizer initialization.
        """
        # Initial learning rate, used when updating parameters
        self.init_lr = init_lr
        # The model whose parameters this optimizer updates
        self.model = model

    @abstractmethod
    def step(self):
        """
        Defines how parameters are updated at each iteration.
        """
        pass

class SimpleBatchGD(Optimizer):
    def __init__(self, init_lr, model):
        super(SimpleBatchGD, self).__init__(init_lr=init_lr, model=model)

    def step(self):
        # Parameter update: iterate over all parameters and update them
        # following Eqs. (3.8) and (3.9)
        if isinstance(self.model.params, dict):
            for key in self.model.params.keys():
                self.model.params[key] = self.model.params[key] - self.init_lr * self.model.grads[key]

def accuracy(preds, labels):
    """
    Inputs:
        - preds: predictions; shape=[N, 1] for binary classification (N samples),
                 shape=[N, C] for multi-class classification (C classes)
        - labels: ground-truth labels, shape=[N, 1]
    Outputs:
        - accuracy, shape=[1]
    """
    # preds.shape[1] == 1 means binary classification; > 1 means multi-class
    if preds.shape[1] == 1:
        # Binary: class 1 if the probability is greater than 0.5, else class 0
        # 'torch.round' rounds the probabilities to binary labels
        preds = torch.round(preds)
    else:
        # Multi-class: 'torch.argmax' picks the index of the largest score as the class
        preds = torch.argmax(preds, dim=1)

    # Compute accuracy
    correct = (preds == labels).sum().item()
    accuracy = correct / len(labels)
    return accuracy


import math
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt

def make_moons(n_samples=1000, shuffle=True, noise=None):
    """
    Generate noisy moon-shaped data.
    Inputs:
        - n_samples: number of samples, int
        - shuffle: whether to shuffle the data, bool
        - noise: amount of noise to add, None or float; None means no noise
    Outputs:
        - X: features, shape=[n_samples, 2]
        - y: labels, shape=[n_samples]
    """
    n_samples_out = n_samples // 2
    n_samples_in = n_samples - n_samples_out
    # Sample the first class, with features (x, y):
    # take n_samples_out evenly spaced angles in [0, pi] and use their
    # cosine as feature 1 and sine as feature 2
    outer_circ_x = torch.cos(torch.linspace(0, math.pi, n_samples_out))
    outer_circ_y = torch.sin(torch.linspace(0, math.pi, n_samples_out))
    inner_circ_x = 1 - torch.cos(torch.linspace(0, math.pi, n_samples_in))
    inner_circ_y = 0.5 - torch.sin(torch.linspace(0, math.pi, n_samples_in))

    # Concatenate the two classes along dim 0, then stack the two features along dim 1
    X = torch.stack(
        [torch.cat([outer_circ_x, inner_circ_x]),
         torch.cat([outer_circ_y, inner_circ_y])],
        dim=1
    )

    # Labels: 0 for the first class, 1 for the second
    y = torch.cat(
        [torch.zeros(size=[n_samples_out]), torch.ones(size=[n_samples_in])]
    )

    # If shuffle is True, shuffle all the data
    if shuffle:
        idx = torch.randperm(X.shape[0])
        X = X[idx]
        y = y[idx]

    # If noise is not None, add Gaussian noise to the features
    if noise is not None:
        X += torch.normal(mean=0.0, std=noise, size=X.shape)

    return X, y

# Sample 1000 points
n_samples = 1000
X, y = make_moons(n_samples=n_samples, shuffle=True, noise=0.2)

# Visualize the generated dataset; colors indicate classes
plt.figure(figsize=(5,5))
plt.scatter(x=X[:, 0].tolist(), y=X[:, 1].tolist(), marker='*', c=y.tolist())
plt.xlim(-3,4)
plt.ylim(-3,4)
plt.savefig('线性数据集可视化.pdf')
plt.show()

num_train = 640
num_dev = 160
num_test = 200
X_train, y_train = X[:num_train], y[:num_train]
X_dev, y_dev = X[num_train:num_train + num_dev], y[num_train:num_train + num_dev]
X_test, y_test = X[num_train + num_dev:], y[num_train + num_dev:]
y_train = y_train.reshape([-1,1])
y_dev = y_dev.reshape([-1,1])
y_test = y_test.reshape([-1,1])
# Print the shapes of X_train and y_train
print("X_train shape: ", X_train.shape, "y_train shape: ", y_train.shape)
# Fix the random seed so results are reproducible
torch.manual_seed(102)
# Feature dimension
input_dim = 2
# Learning rate
lr = 0.3
# Instantiate the model
model = ModelLR(input_dim=input_dim)
# Choose the optimizer
optimizer = SimpleBatchGD(init_lr=lr, model=model)
# Choose the loss function
loss_fn = BinaryCrossEntropyLoss()
# Choose the evaluation metric
metric = accuracy

# Instantiate RunnerV2 with the training configuration
runner = RunnerV2(model, optimizer, metric, loss_fn)
runner.train([X_train, y_train], [X_dev, y_dev], num_epochs=1000, log_epochs=200, save_path="best_model.pdparams")

# Visualize how the metrics evolve on the training and validation sets
def plot(runner,fig_name):
    plt.figure(figsize=(10,5))
    plt.subplot(1,2,1)
    epochs = [i for i in range(len(runner.train_scores))]
    # Training loss curve
    plt.plot(epochs, runner.train_loss, color='#e4007f', label="Train loss")
    # Validation loss curve
    plt.plot(epochs, runner.dev_loss, color='#f19ec2', linestyle='--', label="Dev loss")
    # Axes and legend
    plt.ylabel("loss", fontsize='large')
    plt.xlabel("epoch", fontsize='large')
    plt.legend(loc='upper right', fontsize='x-large')
    plt.subplot(1,2,2)
    # Training accuracy curve
    plt.plot(epochs, runner.train_scores, color='#e4007f', label="Train accuracy")
    # Validation accuracy curve
    plt.plot(epochs, runner.dev_scores, color='#f19ec2', linestyle='--', label="Dev accuracy")
    # Axes and legend
    plt.ylabel("score", fontsize='large')
    plt.xlabel("epoch", fontsize='large')
    plt.legend(loc='lower right', fontsize='x-large')
    plt.tight_layout()
    plt.savefig(fig_name)
    plt.show()

plot(runner,fig_name='linear-acc.pdf')

def decision_boundary(w, b, x1):
    w1, w2 = w.flatten()  # flatten the weights to a 1-D array
    x2 = (- w1 * x1 - b) / w2  # solve for the corresponding x2 values
    return x2
# Plot the decision boundary on the training set
plt.figure(figsize=(5, 5))
# Plot the training data
plt.scatter(X_train[:, 0].tolist(), X_train[:, 1].tolist(), marker='*', c=y_train.tolist(), label='Training Data')

# Fetch the model parameters
w = model.params['w'].detach().numpy()  # convert to a numpy array
b = model.params['b'].detach().numpy()  # convert to a numpy array

# Range of x1 values
x1 = torch.linspace(-2, 3, 1000).detach().numpy()  # convert to a numpy array
x2 = decision_boundary(w, b, x1)  # x2 values on the decision boundary

# Draw the decision boundary
plt.plot(x1, x2, color="red", label='Decision Boundary')
plt.xlim(-2, 3)
plt.ylim(-2, 3)
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('Decision Boundary on Training Data')
plt.legend()
plt.show()


#=========== Model evaluation ======================
score, loss = runner.evaluate([X_test, y_test])
print("[Test] score/loss: {:.4f}/{:.4f}".format(score, loss))

plt.figure(figsize=(5,5))
# Plot the raw data
plt.scatter(X[:, 0].tolist(), X[:, 1].tolist(), marker='*', c=y.tolist())

w = model.params['w']
b = model.params['b']
x1 = torch.linspace(-2, 3, 1000)
x2 = decision_boundary(w, b, x1)
# Draw the decision boundary
plt.plot(x1.tolist(), x2.tolist(), color="red")
plt.show()

Part 2: Multi-class Classification with Softmax Regression

1. Dataset construction and splitting

The data come from 3 distinct clusters, each cluster corresponding to one class: 1000 samples, each with 2 features.

Split the data into a training set (640 samples), a validation set (160 samples), and a test set (200 samples).

import numpy as np
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
import torch

def make_multiclass_classification(n_samples=100, n_features=2, n_classes=3, shuffle=True, noise=0.1):
    """
    Generate noisy multi-class data.
    Inputs:
        - n_samples: number of samples, int
        - n_features: number of features, int
        - shuffle: whether to shuffle the data, bool
        - noise: amount of label noise, float; 0 means no noise
    Outputs:
        - X: features, shape=[n_samples, 2]
        - y: labels, shape=[n_samples]
    """
    # Number of samples per class
    n_samples_per_class = [int(n_samples / n_classes) for k in range(n_classes)]
    for i in range(n_samples - sum(n_samples_per_class)):
        n_samples_per_class[i % n_classes] += 1
    # Initialize features and labels to zero
    X = torch.zeros([n_samples, n_features])
    y = torch.zeros([n_samples], dtype=torch.int32)
    # Randomly pick 3 cluster centers as the class centers
    centroids = torch.randperm(2 ** n_features)[:n_classes]
    centroids_bin = np.unpackbits(centroids.numpy().astype('uint8')).reshape((-1, 8))[:, -n_features:]
    centroids = torch.tensor(centroids_bin, dtype=torch.float32)
    # Control how far apart the cluster centers are
    centroids = 1.5 * centroids - 1
    # Random feature values
    X[:, :n_features] = torch.randn(size=[n_samples, n_features])

    stop = 0
    # Keep each class's features near its cluster center
    for k, centroid in enumerate(centroids):
        start, stop = stop, stop + n_samples_per_class[k]
        # Assign the label
        y[start:stop] = k % n_classes
        X_k = X[start:stop, :n_features]
        # Control the spread of each class's features
        A = 2 * torch.rand(size=[n_features, n_features]) - 1
        X_k[...] = torch.matmul(X_k, A)
        X_k += centroid
        X[start:stop, :n_features] = X_k

    # If noise is non-zero, add label noise
    if noise > 0.0:
        # Noise mask choosing which samples get noisy labels
        noise_mask = torch.rand([n_samples]) < noise
        for i in range(len(noise_mask)):
            if noise_mask[i]:
                # Assign a random label to the noisy sample
                y[i] = torch.randint(0, n_classes, (1,), dtype=torch.int32)
    # If shuffle is True, shuffle all the data
    if shuffle:
        idx = torch.randperm(X.shape[0])
        X = X[idx]
        y = y[idx]

    return X, y
# Fix the random seed so results are reproducible
torch.manual_seed(102)
# Sample 1000 points
n_samples = 1000
X, y = make_multiclass_classification(n_samples=n_samples, n_features=2, n_classes=3, noise=0.2)

# Visualize the generated dataset; colors indicate classes
plt.figure(figsize=(5,5))
plt.scatter(x=X[:, 0].tolist(), y=X[:, 1].tolist(), marker='*', c=y.tolist())
plt.savefig('linear-dataset-vis2.pdf')
plt.show()
num_train = 640
num_dev = 160
num_test = 200

X_train, y_train = X[:num_train], y[:num_train]
X_dev, y_dev = X[num_train:num_train + num_dev], y[num_train:num_train + num_dev]
X_test, y_test = X[num_train + num_dev:], y[num_train + num_dev:]

# Print the shapes of X_train and y_train
print("X_train shape: ", X_train.shape, "y_train shape: ", y_train.shape)
X_train shape:  torch.Size([640, 2]) y_train shape:  torch.Size([640])

2. Model construction

Core idea: a linear transformation maps the input features to one score per class, and the Softmax function turns those scores into a probability distribution.

Model: Softmax regression produces as many outputs as there are classes, k, and each class probability is computed by the Softmax function.
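For reference, the Softmax function maps a score vector $x \in \mathbb{R}^C$ to probabilities:

```latex
\mathrm{softmax}(x)_k = \frac{e^{x_k}}{\sum_{j=1}^{C} e^{x_j}},
\qquad k = 1, \dots, C
```

The outputs are positive and sum to 1, so they form a valid probability distribution over the $C$ classes.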


Define the Softmax regression operator:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ModelSR(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(ModelSR, self).__init__()
        self.params = nn.ParameterDict({
            'W': nn.Parameter(torch.zeros(input_dim, output_dim)),
            'b': nn.Parameter(torch.zeros(output_dim))
        })

    # nn.Module.__call__ already dispatches to forward,
    # so no separate __call__ method is needed.
    def forward(self, inputs):
        """
        Inputs:
            - inputs: shape=[N, D], N is the number of samples, D the feature dimension
        Outputs:
            - outputs: predictions, shape=[N, C], C is the number of classes
        """
        # Linear transformation
        score = torch.matmul(inputs, self.params['W']) + self.params['b']
        # Softmax function
        outputs = F.softmax(score, dim=1)
        return outputs

模型测试:


# 随机生成1条长度为4的数据
inputs = torch.randn(1, 4)
print('Input is:', inputs)
# 实例化模型,这里令输入长度为4,输出类别数为3
model = ModelSR(input_dim=4, output_dim=3)
outputs = model(inputs)
print('Output is:', outputs)
Input is: tensor([[-0.2010,  1.9033, -1.2540,  1.1313]])
Output is: tensor([[0.3333, 0.3333, 0.3333]], grad_fn=<SoftmaxBackward0>)

 All three output values are 0.3333, meaning the model predicts the three classes with equal probability. Next we define a loss function in preparation for optimizing the model.

Define the loss function:

Here the multi-class cross-entropy loss is implemented by hand as MultiCrossEntropyLoss: it takes the Softmax probabilities produced by the model and computes the cross-entropy against the true labels. (PyTorch's built-in nn.CrossEntropyLoss behaves differently: it expects raw logits and applies log-softmax internally, so it must not be fed probabilities.)

import torch
import torch.nn as nn

class MultiCrossEntropyLoss(nn.Module):
    def __init__(self):
        super(MultiCrossEntropyLoss, self).__init__()

    def forward(self, predicts, labels):
        """
        Input:
            - predicts: predicted probabilities, shape=[N, C], N samples, C classes
            - labels: ground-truth labels, shape=[N]
        Output:
            - loss value, shape=[1]
        """
        # Cast labels to long integers
        labels = labels.view(-1).long()
        N = predicts.shape[0]  # number of samples
        loss = 0.0

        # Accumulate the loss
        for i in range(N):
            index = labels[i]  # label of the current sample
            loss -= torch.log(predicts[i][index])  # cross-entropy term

        return loss / N  # return the mean loss

Test:

# Quick test
# Ground-truth labels (3 samples)
labels = torch.tensor([0, 1, 0])
# Hypothetical predictions (3 samples, 2 classes)
outputs = torch.tensor([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])

# Compute the loss
mce_loss = MultiCrossEntropyLoss()
loss = mce_loss(outputs, labels)
print(loss)

Output: tensor(0.2798)
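As a cross-check of this hand-rolled loss (a standalone sketch; the tensors below are made up): because MultiCrossEntropyLoss consumes probabilities, its value coincides with PyTorch's F.cross_entropy applied to the raw scores whose softmax gives those probabilities, and with F.nll_loss applied to the log-probabilities:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 3)           # raw scores: 4 samples, 3 classes (made up)
labels = torch.tensor([0, 2, 1, 2])
probs = F.softmax(logits, dim=1)     # what the model in this post outputs

# Mean cross-entropy over probabilities, as MultiCrossEntropyLoss computes it
manual = -torch.log(probs[torch.arange(4), labels]).mean()

# Built-in equivalents: cross_entropy expects logits, nll_loss expects log-probabilities
assert torch.allclose(manual, F.cross_entropy(logits, labels))
assert torch.allclose(manual, F.nll_loss(torch.log(probs), labels))
print(manual.item())
```

This also makes the earlier point concrete: feed logits to F.cross_entropy, but probabilities to the manual loss.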


3. Model Optimization

Gradient computation

Add a gradient-computation step to the model built above:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ModelSR(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(ModelSR, self).__init__()
        # Initialize all linear-layer weights to zero
        self.params = {
            'W': nn.Parameter(torch.zeros(size=[input_dim, output_dim])),
            'b': nn.Parameter(torch.zeros(size=[output_dim]))
        }
        # Storage for parameter gradients
        self.grads = {}
        self.X = None
        self.outputs = None
        self.output_dim = output_dim

    def forward(self, inputs):
        self.X = inputs
        # Linear transformation
        score = torch.matmul(self.X, self.params['W']) + self.params['b']
        # Softmax function
        self.outputs = F.softmax(score, dim=1)
        return self.outputs

    def backward(self, labels):
        """
        Input:
            - labels: ground-truth labels, shape=[N, 1], where N is the number of samples
        """
        # Compute the partial derivatives
        N = labels.shape[0]
        labels = labels.view(-1).long()  # flatten the labels and cast to long
        one_hot_labels = F.one_hot(labels, num_classes=self.output_dim).float()  # one-hot encoding

        # Compute the gradients
        self.grads['W'] = -1 / N * torch.matmul(self.X.t(), (one_hot_labels - self.outputs))
        self.grads['b'] = -1 / N * torch.sum(one_hot_labels - self.outputs, dim=0)

Test:

# Quick test
if __name__ == "__main__":
    input_dim = 4  # input feature dimension
    output_dim = 3  # number of output classes
    model = ModelSR(input_dim, output_dim)

    # Randomly generate inputs and labels
    inputs = torch.randn(5, input_dim)  # 5 samples
    labels = torch.tensor([0, 1, 2, 0, 1]).view(-1, 1)  # labels

    # Forward pass
    outputs = model(inputs)
    print("Outputs:", outputs)

    # Backward pass
    model.backward(labels)
    print("Gradients W:", model.grads['W'])
    print("Gradients b:", model.grads['b'])
Outputs: tensor([[0.3333, 0.3333, 0.3333],
        [0.3333, 0.3333, 0.3333],
        [0.3333, 0.3333, 0.3333],
        [0.3333, 0.3333, 0.3333],
        [0.3333, 0.3333, 0.3333]], grad_fn=<SoftmaxBackward0>)
Gradients W: tensor([[ 0.2472,  0.0479, -0.2950],
        [-0.3463,  0.3344,  0.0118],
        [-0.0974, -0.2339,  0.3314],
        [ 0.0813, -0.2535,  0.1723]], grad_fn=<MulBackward0>)
Gradients b: tensor([-0.0667, -0.0667,  0.1333], grad_fn=<MulBackward0>)
  • Every sample's output is [0.3333, 0.3333, 0.3333]: since the weights are initialized to zero, the linear scores are identical for all inputs, so after Softmax each class gets probability 1/3.

The computed gradients indicate how the weights and bias should be adjusted to reduce the loss.
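To gain confidence in the closed-form gradients, a standalone sketch (random made-up data, not the experiment's) can compare them against PyTorch autograd:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, D, C = 5, 4, 3
X = torch.randn(N, D)
labels = torch.tensor([0, 1, 2, 0, 1])

# Parameters tracked by autograd
W = torch.zeros(D, C, requires_grad=True)
b = torch.zeros(C, requires_grad=True)

# Forward pass and mean cross-entropy, as in ModelSR + MultiCrossEntropyLoss
probs = F.softmax(X @ W + b, dim=1)
loss = -torch.log(probs[torch.arange(N), labels]).mean()
loss.backward()

# Closed-form gradients, as in ModelSR.backward
one_hot = F.one_hot(labels, num_classes=C).float()
grad_W = -1 / N * X.t() @ (one_hot - probs.detach())
grad_b = -1 / N * (one_hot - probs.detach()).sum(dim=0)

assert torch.allclose(W.grad, grad_W, atol=1e-6)
assert torch.allclose(b.grad, grad_b, atol=1e-6)
print("analytic gradients match autograd")
```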


Parameter update: update the parameters with the gradient descent method implemented in Part (一).

from abc import abstractmethod
# Optimizer base class
class Optimizer(object):
    def __init__(self, init_lr, model):
        """
        Initialize the optimizer
        """
        # Initial learning rate, used in the parameter update
        self.init_lr = init_lr
        # The model whose parameters this optimizer updates
        self.model = model

    @abstractmethod
    def step(self):
        """
        Define how parameters are updated at each iteration
        """
        pass

class SimpleBatchGD(Optimizer):
    def __init__(self, init_lr, model):
        super(SimpleBatchGD, self).__init__(init_lr=init_lr, model=model)

    def step(self):
        # Parameter update
        # Iterate over all parameters and update them following Eqs. (3.8) and (3.9)
        if isinstance(self.model.params, dict):
            for key in self.model.params.keys():
                self.model.params[key] = self.model.params[key] - self.init_lr * self.model.grads[key]
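What one SimpleBatchGD step does can be seen in isolation (a toy sketch with made-up numbers, not the experiment's parameters):

```python
import torch

lr = 0.1
params = {'W': torch.ones(2, 3)}
grads = {'W': torch.full((2, 3), 0.5)}

# One vanilla gradient-descent step: theta <- theta - lr * grad
for key in params.keys():
    params[key] = params[key] - lr * grads[key]

print(params['W'])  # every entry becomes 1 - 0.1 * 0.5 = 0.95
```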

4. Model Training

        Instantiate the RunnerV2 class and pass in the training configuration, just as in the Logistic regression part above. Train on the training and validation sets for 500 epochs, printing the training-set metrics every 50 epochs.

# Fix the random seed so results are reproducible across runs
torch.manual_seed(102)

# Feature dimension
input_dim = 2
# Number of classes
output_dim = 3
# Learning rate
lr = 0.1

# Instantiate the model
model = ModelSR(input_dim=input_dim, output_dim=output_dim)
# Choose the optimizer
optimizer = SimpleBatchGD(init_lr=lr, model=model)
# Choose the loss function
loss_fn = MultiCrossEntropyLoss()
# Choose the evaluation metric
metric = accuracy
# Instantiate the RunnerV2 class
runner = RunnerV2(model, optimizer, metric, loss_fn)

# Train the model
runner.train([X_train, y_train], [X_dev, y_dev], num_epochs=500, log_epochs=50, eval_epochs=1,
             save_path="best_model.pdparams")

 Visualize how the accuracy and loss on the training and validation sets evolve:

best accuracy performence has been updated: 0.00000 --> 0.70625
[Train] epoch: 0, loss: 1.0986149311065674, score: 0.321875
[Dev] epoch: 0, loss: 1.0805636644363403, score: 0.70625
best accuracy performence has been updated: 0.70625 --> 0.71250
best accuracy performence has been updated: 0.71250 --> 0.71875
best accuracy performence has been updated: 0.71875 --> 0.72500
best accuracy performence has been updated: 0.72500 --> 0.73125
best accuracy performence has been updated: 0.73125 --> 0.73750
best accuracy performence has been updated: 0.73750 --> 0.74375
best accuracy performence has been updated: 0.74375 --> 0.75000
best accuracy performence has been updated: 0.75000 --> 0.75625
best accuracy performence has been updated: 0.75625 --> 0.76875
best accuracy performence has been updated: 0.76875 --> 0.77500
best accuracy performence has been updated: 0.77500 --> 0.78750
[Train] epoch: 100, loss: 0.7155234813690186, score: 0.76875
[Dev] epoch: 100, loss: 0.7977758049964905, score: 0.7875
best accuracy performence has been updated: 0.78750 --> 0.79375
best accuracy performence has been updated: 0.79375 --> 0.80000
[Train] epoch: 200, loss: 0.6921818852424622, score: 0.784375
[Dev] epoch: 200, loss: 0.8020225763320923, score: 0.79375
best accuracy performence has been updated: 0.80000 --> 0.80625
[Train] epoch: 300, loss: 0.684037983417511, score: 0.790625
[Dev] epoch: 300, loss: 0.81141597032547, score: 0.80625
best accuracy performence has been updated: 0.80625 --> 0.81250
[Train] epoch: 400, loss: 0.680213987827301, score: 0.8078125
[Dev] epoch: 400, loss: 0.819807231426239, score: 0.80625

As training proceeds, the best validation accuracy climbs steadily, from 0.70625 at epoch 0 up to 0.81250, showing that the model gradually learns effective features.

5. Model Evaluation

 Accuracy and loss on the test set:

score, loss = runner.evaluate([X_test, y_test])
print("[Test] score/loss: {:.4f}/{:.4f}".format(score, loss))
[Test] score/loss: 0.8400/0.7014

        The final model reaches an accuracy of 0.8400 and a loss of 0.7014 on the test set, indicating it still performs well on unseen data and generalizes reasonably.

Further test the model and visualize the decision boundary:

# Uniformly generate 40000 grid points
x1, x2 = torch.meshgrid(torch.linspace(-3.5, 2, 200), torch.linspace(-4.5, 3.5, 200), indexing='ij')
x = torch.stack([torch.flatten(x1), torch.flatten(x2)], dim=1)
# Predict the class of each point
y = runner.predict(x)
y = torch.argmax(y, dim=1)
# Plot the class regions
plt.ylabel('x2')
plt.xlabel('x1')
plt.scatter(x[:, 0].tolist(), x[:, 1].tolist(), c=y.tolist(), cmap=plt.cm.Spectral)

Full code:

import torch
import torch.nn as nn
import torch.nn.functional as F
from Runner2 import RunnerV2  # import the RunnerV2 class
class ModelSR(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(ModelSR, self).__init__()
        # Initialize all linear-layer weights to zero
        self.params = {
            'W': nn.Parameter(torch.zeros(size=[input_dim, output_dim])),
            'b': nn.Parameter(torch.zeros(size=[output_dim]))
        }
        # Storage for parameter gradients
        self.grads = {}
        self.X = None
        self.outputs = None
        self.output_dim = output_dim

    def forward(self, inputs):
        self.X = inputs
        # Linear transformation
        score = torch.matmul(self.X, self.params['W']) + self.params['b']
        # Softmax function
        self.outputs = F.softmax(score, dim=1)
        return self.outputs

    def backward(self, labels):
        """
        Input:
            - labels: ground-truth labels, shape=[N, 1], where N is the number of samples
        """
        # Compute the partial derivatives
        N = labels.shape[0]
        labels = labels.view(-1).long()  # flatten the labels and cast to long

        one_hot_labels = F.one_hot(labels, num_classes=self.output_dim).float()  # one-hot encoding

        # Compute the gradients
        self.grads['W'] = -1 / N * torch.matmul(self.X.t(), (one_hot_labels - self.outputs))
        self.grads['b'] = -1 / N * torch.sum(one_hot_labels - self.outputs, dim=0)

class MultiCrossEntropyLoss(nn.Module):
    def __init__(self):
        super(MultiCrossEntropyLoss, self).__init__()

    def forward(self, predicts, labels):
        """
        Input:
            - predicts: predicted probabilities, shape=[N, C], N samples, C classes
            - labels: ground-truth labels, shape=[N]
        Output:
            - loss value, shape=[1]
        """
        # Cast labels to long integers
        labels = labels.view(-1).long()
        N = predicts.shape[0]  # number of samples
        loss = 0.0

        # Accumulate the loss
        for i in range(N):
            index = labels[i]  # label of the current sample
            loss -= torch.log(predicts[i][index])  # cross-entropy term

        return loss / N  # return the mean loss
from abc import abstractmethod
# Optimizer base class
class Optimizer(object):
    def __init__(self, init_lr, model):
        """
        Initialize the optimizer
        """
        # Initial learning rate, used in the parameter update
        self.init_lr = init_lr
        # The model whose parameters this optimizer updates
        self.model = model

    @abstractmethod
    def step(self):
        """
        Define how parameters are updated at each iteration
        """
        pass
class SimpleBatchGD(Optimizer):
    def __init__(self, init_lr, model):
        super(SimpleBatchGD, self).__init__(init_lr=init_lr, model=model)

    def step(self):
        # Parameter update
        # Iterate over all parameters and update them following Eqs. (3.8) and (3.9)
        if isinstance(self.model.params, dict):
            for key in self.model.params.keys():
                self.model.params[key] = self.model.params[key] - self.init_lr * self.model.grads[key]
def accuracy(preds, labels):
    """
    Input:
        - preds: predictions; shape=[N, 1] for binary classification (N samples),
                 shape=[N, C] for multi-class classification (C classes)
        - labels: ground-truth labels, shape=[N, 1]
    Output:
        - accuracy, shape=[1]
    """
    # Decide between binary and multi-class: preds.shape[1]==1 means binary, >1 means multi-class
    if preds.shape[1] == 1:
        # Binary: probabilities above 0.5 map to class 1, otherwise class 0
        # 'torch.round' rounds the probabilities to binary labels
        preds = torch.round(preds)
    else:
        # Multi-class: 'torch.argmax' takes the index of the largest score as the class
        preds = torch.argmax(preds, dim=1)

    # Compute the accuracy
    correct = (preds == labels).sum().item()
    accuracy = correct / len(labels)
    return accuracy

#=================== Data generation ===================
import numpy as np
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt

def make_multiclass_classification(n_samples=100, n_features=2, n_classes=3, shuffle=True, noise=0.1):
    """
    Generate noisy multi-class data
    Input:
        - n_samples: number of samples, int
        - n_features: number of features, int
        - shuffle: whether to shuffle the data, bool
        - noise: how strongly to add noise, None or float; None means no noise is added
    Output:
        - X: feature data, shape=[n_samples, 2]
        - y: label data, shape=[n_samples, 1]
    """
    # Number of samples in each class
    n_samples_per_class = [int(n_samples / n_classes) for k in range(n_classes)]
    for i in range(n_samples - sum(n_samples_per_class)):
        n_samples_per_class[i % n_classes] += 1
    # Initialize features and labels to zero
    X = torch.zeros([n_samples, n_features])
    y = torch.zeros([n_samples], dtype=torch.int32)
    # Randomly generate 3 cluster centers as class centers
    centroids = torch.randperm(2 ** n_features)[:n_classes]
    centroids_bin = np.unpackbits(centroids.numpy().astype('uint8')).reshape((-1, 8))[:, -n_features:]
    centroids = torch.tensor(centroids_bin, dtype=torch.float32)
    # Control how far apart the cluster centers are
    centroids = 1.5 * centroids - 1
    # Randomly generate the feature values
    X[:, :n_features] = torch.randn(size=[n_samples, n_features])

    stop = 0
    # Keep each class's feature values near its cluster center
    for k, centroid in enumerate(centroids):
        start, stop = stop, stop + n_samples_per_class[k]
        # Assign the label
        y[start:stop] = k % n_classes
        X_k = X[start:stop, :n_features]
        # Control how spread out each class's features are
        A = 2 * torch.rand(size=[n_features, n_features]) - 1
        X_k[...] = torch.matmul(X_k, A)
        X_k += centroid
        X[start:stop, :n_features] = X_k

    # If noise is not None, add noise to the data
    if noise > 0.0:
        # Build a noise mask that decides which samples get noisy labels
        noise_mask = torch.rand([n_samples]) < noise
        for i in range(len(noise_mask)):
            if noise_mask[i]:
                # Give each noisy sample a random label
                y[i] = torch.randint(0, n_classes, (1,), dtype=torch.int32)
    # If shuffle is True, shuffle all the data
    if shuffle:
        idx = torch.randperm(X.shape[0])
        X = X[idx]
        y = y[idx]

    return X, y
# Fix the random seed so results are reproducible across runs
torch.manual_seed(102)
# Sample 1000 data points
n_samples = 1000
X, y = make_multiclass_classification(n_samples=n_samples, n_features=2, n_classes=3, noise=0.2)

# Visualize the generated dataset; different colors indicate different classes
plt.figure(figsize=(5,5))
plt.scatter(x=X[:, 0].tolist(), y=X[:, 1].tolist(), marker='*', c=y.tolist())
plt.savefig('linear-dataset-vis2.pdf')
plt.show()
num_train = 640
num_dev = 160
num_test = 200

X_train, y_train = X[:num_train], y[:num_train]
X_dev, y_dev = X[num_train:num_train + num_dev], y[num_train:num_train + num_dev]
X_test, y_test = X[num_train + num_dev:], y[num_train + num_dev:]

# Print the shapes of X_train and y_train
print("X_train shape: ", X_train.shape, "y_train shape: ", y_train.shape)
# Fix the random seed so results are reproducible across runs
torch.manual_seed(102)

# Feature dimension
input_dim = 2
# Number of classes
output_dim = 3
# Learning rate
lr = 0.1

# Instantiate the model
model = ModelSR(input_dim=input_dim, output_dim=output_dim)
# Choose the optimizer
optimizer = SimpleBatchGD(init_lr=lr, model=model)
# Choose the loss function
loss_fn = MultiCrossEntropyLoss()
# Choose the evaluation metric
metric = accuracy
# Instantiate the RunnerV2 class
runner = RunnerV2(model, optimizer, metric, loss_fn)

# Train the model
runner.train([X_train, y_train], [X_dev, y_dev], num_epochs=500, log_epochs=50, eval_epochs=1,
             save_path="best_model.pdparams")


# Visualize metric curves on the training and validation sets
def plot(runner,fig_name):
    plt.figure(figsize=(10,5))
    plt.subplot(1,2,1)
    epochs = [i for i in range(len(runner.train_scores))]
    # Plot the training loss curve
    plt.plot(epochs, runner.train_loss, color='#e4007f', label="Train loss")
    # Plot the validation loss curve
    plt.plot(epochs, runner.dev_loss, color='#f19ec2', linestyle='--', label="Dev loss")
    # Axes and legend
    plt.ylabel("loss", fontsize='large')
    plt.xlabel("epoch", fontsize='large')
    plt.legend(loc='upper right', fontsize='x-large')
    plt.subplot(1,2,2)
    # Plot the training accuracy curve
    plt.plot(epochs, runner.train_scores, color='#e4007f', label="Train accuracy")
    # Plot the validation accuracy curve
    plt.plot(epochs, runner.dev_scores, color='#f19ec2', linestyle='--', label="Dev accuracy")
    # Axes and legend
    plt.ylabel("score", fontsize='large')
    plt.xlabel("epoch", fontsize='large')
    plt.legend(loc='lower right', fontsize='x-large')
    plt.tight_layout()
    plt.savefig(fig_name)
    plt.show()
# Visualize accuracy curves on the training and validation sets
plot(runner,fig_name='linear-acc2.pdf')

#================== Model evaluation ==================
score, loss = runner.evaluate([X_test, y_test])
print("[Test] score/loss: {:.4f}/{:.4f}".format(score, loss))
# Uniformly generate 40000 grid points
x1, x2 = torch.meshgrid(torch.linspace(-3.5, 2, 200), torch.linspace(-4.5, 3.5, 200), indexing='ij')
x = torch.stack([torch.flatten(x1), torch.flatten(x2)], dim=1)
# Predict the class of each point
y = runner.predict(x)
y = torch.argmax(y, dim=1)
# Plot the class regions
plt.ylabel('x2')
plt.xlabel('x1')
plt.scatter(x[:, 0].tolist(), x[:, 1].tolist(), c=y.tolist(), cmap=plt.cm.Spectral)

torch.manual_seed(102)
n_samples = 1000
X, y = make_multiclass_classification(n_samples=n_samples, n_features=2, n_classes=3, noise=0.2)

plt.scatter(X[:, 0].tolist(), X[:, 1].tolist(), marker='*', c=y.tolist())
plt.show()

(三) Practice: Iris Classification Based on Softmax Regression


1. Data Processing

Missing-value check (sum() shows no missing values) --> outlier check (box plots) --> dataset split (80% training set + 10% test set + 10% validation set)

from sklearn.datasets import load_iris
import pandas
import numpy as np

iris_features = np.array(load_iris().data, dtype=np.float32)
iris_labels = np.array(load_iris().target, dtype=np.int32)
print(pandas.isna(iris_features).sum())
print(pandas.isna(iris_labels).sum())
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt  # visualization toolkit

# Box plots to inspect the distribution of outliers
def boxplot(features):
    feature_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']

    # Draw several panels in one figure
    plt.figure(figsize=(5, 5), dpi=200)
    # Adjust the subplot spacing
    plt.subplots_adjust(wspace=0.6)
    # One box plot per feature
    for i in range(4):
        plt.subplot(2, 2, i+1)
        # Draw the box plot
        plt.boxplot(features[:, i],
                    showmeans=True,
                    whiskerprops={"color":"#E20079", "linewidth":0.4, 'linestyle':"--"},
                    flierprops={"markersize":0.4},
                    meanprops={"markersize":1})
        # Panel title
        plt.title(feature_names[i], fontdict={"size":5}, pad=2)
        # y-axis ticks
        plt.yticks(fontsize=4, rotation=90)
        plt.tick_params(pad=0.5)
        # x-axis ticks
        plt.xticks([])
    #plt.savefig('ml-vis.pdf')
    plt.show()

boxplot(iris_features)

#=============== Dataset split ==================
import copy
import torch

# Load the dataset
def load_data(shuffle=True):
    """
    Load the Iris data
    Input:
        - shuffle: whether to shuffle the data, bool
    Output:
        - X: feature data, shape=[150, 4]
        - y: label data, shape=[150]
    """
    # Load the raw data
    X = np.array(load_iris().data, dtype=np.float32)
    y = np.array(load_iris().target, dtype=np.int32)

    X = torch.tensor(X)
    y = torch.tensor(y)

    # Min-max normalization of the data
    X_min, _ = torch.min(X, axis=0)
    X_max, _ = torch.max(X, axis=0)
    X = (X - X_min) / (X_max - X_min)

    # If shuffle is True, randomly shuffle the data
    if shuffle:
        idx = torch.randperm(X.shape[0])
        X = X[idx]
        y = y[idx]
    return X, y

# Fix the random seed
torch.manual_seed(102)

num_train = 120
num_dev = 15
num_test = 15

X, y = load_data(shuffle=True)
print("X shape: ", X.shape, "y shape: ", y.shape)
X_train, y_train = X[:num_train], y[:num_train]
X_dev, y_dev = X[num_train:num_train + num_dev], y[num_train:num_train + num_dev]
X_test, y_test = X[num_train + num_dev:], y[num_train + num_dev:]

Output:

0
0
X shape:  torch.Size([150, 4]) y shape:  torch.Size([150])

So the data contains no missing values and no notable outliers.
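load_data also min-max normalizes the features; this can be checked in isolation. A standalone sketch with made-up measurements shows every column ends up spanning [0, 1]:

```python
import torch

# Made-up measurements (3 samples, 2 features)
X = torch.tensor([[5.1, 3.5], [4.9, 3.0], [7.0, 3.2]])

# Column-wise min-max scaling, as done inside load_data
X_min, _ = torch.min(X, dim=0)
X_max, _ = torch.max(X, dim=0)
X_scaled = (X - X_min) / (X_max - X_min)

print(X_scaled)  # each column now has min 0 and max 1
```

Scaling per column keeps all four Iris features on a comparable range, which helps gradient descent converge.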


2. Model Construction

Use the Softmax regression model for the Iris classification experiment, with input dimension 4 and output dimension 3.

Since this is also a multi-class task, I directly reuse the ModelSR class defined in Part (二).



# Input dimension
input_dim = 4
# Number of classes
output_dim = 3
# Instantiate the model
model = ModelSR(input_dim=input_dim, output_dim=output_dim)
# Suppose we have an input tensor x
x = torch.randn(1, input_dim)  # example input
# Forward pass through the model
output = model(x)
# The output is the model's prediction for the input
print(output)


3. Model Training

        Use the training pipeline wrapped by the RunnerV2 class: call the model's forward method to compute outputs, compute the loss with the loss function, and update the model parameters with the gradient descent optimizer.

        Training records: log the training loss and accuracy each epoch, together with the validation loss and accuracy (200 epochs in total, printing every 10 epochs), and visualize how the training- and validation-set accuracy evolve.

# Learning rate
lr = 0.2
# Input dimension
input_dim = 4
# Number of classes
output_dim = 3
# Instantiate the model
model = ModelSR(input_dim=input_dim, output_dim=output_dim)
# Gradient descent
optimizer = SimpleBatchGD(init_lr=lr, model=model)
# Cross-entropy loss
loss_fn = MultiCrossEntropyLoss()
# Accuracy metric
metric = accuracy

# Instantiate RunnerV2
runner = RunnerV2(model, optimizer, metric, loss_fn)

# Start training
runner.train([X_train, y_train], [X_dev, y_dev], num_epochs=200, log_epochs=10, save_path="best_model.pdparams")

Visualize the metric curves on the training and validation sets

# Visualize metric curves on the training and validation sets
def plot(runner,fig_name):
    plt.figure(figsize=(10,5))
    plt.subplot(1,2,1)
    epochs = [i for i in range(len(runner.train_scores))]
    # Plot the training loss curve
    plt.plot(epochs, runner.train_loss, color='#e4007f', label="Train loss")
    # Plot the validation loss curve
    plt.plot(epochs, runner.dev_loss, color='#f19ec2', linestyle='--', label="Dev loss")
    # Axes and legend
    plt.ylabel("loss", fontsize='large')
    plt.xlabel("epoch", fontsize='large')
    plt.legend(loc='upper right', fontsize='x-large')
    plt.subplot(1,2,2)
    # Plot the training accuracy curve
    plt.plot(epochs, runner.train_scores, color='#e4007f', label="Train accuracy")
    # Plot the validation accuracy curve
    plt.plot(epochs, runner.dev_scores, color='#f19ec2', linestyle='--', label="Dev accuracy")
    # Axes and legend
    plt.ylabel("score", fontsize='large')
    plt.xlabel("epoch", fontsize='large')
    plt.legend(loc='lower right', fontsize='x-large')
    plt.tight_layout()
    plt.savefig(fig_name)
    plt.show()
# Visualize accuracy curves on the training and validation sets
plot(runner,fig_name='linear-acc2.pdf')
best accuracy performence has been updated: 0.00000 --> 0.46667
[Train] epoch: 0, loss: 1.09861159324646, score: 0.375
[Dev] epoch: 0, loss: 1.089357614517212, score: 0.4666666666666667
[Train] epoch: 10, loss: 0.9777260422706604, score: 0.7
[Dev] epoch: 10, loss: 1.023618221282959, score: 0.4666666666666667
[Train] epoch: 20, loss: 0.8894370794296265, score: 0.7
[Dev] epoch: 20, loss: 0.9739664793014526, score: 0.4666666666666667
[Train] epoch: 30, loss: 0.8196598887443542, score: 0.7
[Dev] epoch: 30, loss: 0.9317176342010498, score: 0.4666666666666667
[Train] epoch: 40, loss: 0.7635203003883362, score: 0.7
[Dev] epoch: 40, loss: 0.8957117199897766, score: 0.4666666666666667
[Train] epoch: 50, loss: 0.7176517248153687, score: 0.725
[Dev] epoch: 50, loss: 0.8649960160255432, score: 0.4666666666666667
[Train] epoch: 60, loss: 0.679577648639679, score: 0.7416666666666667
[Dev] epoch: 60, loss: 0.8386644721031189, score: 0.4666666666666667
[Train] epoch: 70, loss: 0.6474865078926086, score: 0.7583333333333333
[Dev] epoch: 70, loss: 0.8159361481666565, score: 0.4666666666666667
[Train] epoch: 80, loss: 0.6200525760650635, score: 0.7666666666666667
[Dev] epoch: 80, loss: 0.7961668372154236, score: 0.4666666666666667
[Train] epoch: 90, loss: 0.5962967276573181, score: 0.7833333333333333
[Dev] epoch: 90, loss: 0.7788369655609131, score: 0.4666666666666667
[Train] epoch: 100, loss: 0.5754876732826233, score: 0.8166666666666667
[Dev] epoch: 100, loss: 0.7635290622711182, score: 0.4666666666666667
best accuracy performence has been updated: 0.46667 --> 0.53333
[Train] epoch: 110, loss: 0.5570722818374634, score: 0.825
[Dev] epoch: 110, loss: 0.7499087452888489, score: 0.5333333333333333
best accuracy performence has been updated: 0.53333 --> 0.60000
[Train] epoch: 120, loss: 0.5406264066696167, score: 0.825
[Dev] epoch: 120, loss: 0.7377070188522339, score: 0.6
[Train] epoch: 130, loss: 0.525819718837738, score: 0.85
[Dev] epoch: 130, loss: 0.726706862449646, score: 0.6
[Train] epoch: 140, loss: 0.5123931169509888, score: 0.8583333333333333
[Dev] epoch: 140, loss: 0.7167316675186157, score: 0.6
[Train] epoch: 150, loss: 0.5001395344734192, score: 0.875
[Dev] epoch: 150, loss: 0.7076371312141418, score: 0.6
best accuracy performence has been updated: 0.60000 --> 0.66667
[Train] epoch: 160, loss: 0.48889240622520447, score: 0.875
[Dev] epoch: 160, loss: 0.6993042826652527, score: 0.6666666666666666
[Train] epoch: 170, loss: 0.47851642966270447, score: 0.875
[Dev] epoch: 170, loss: 0.6916343569755554, score: 0.6666666666666666
[Train] epoch: 180, loss: 0.46889936923980713, score: 0.875
[Dev] epoch: 180, loss: 0.6845447421073914, score: 0.6
[Train] epoch: 190, loss: 0.45994898676872253, score: 0.875
[Dev] epoch: 190, loss: 0.6779664158821106, score: 0.6

  • Training (Train): the training loss falls steadily, showing the model is learning to fit the data, and the accuracy climbs from the initial 0.375 to about 0.875, showing steadily improving performance.

  • Validation (Dev): the validation accuracy barely moves, staying at a low 0.46667 for a long stretch before slowly rising to 0.66667, which may indicate the model is overfitting the training set.

4. Model Evaluation

Evaluate the best model saved during training on the test data and observe its accuracy on the test set.

#================= Model evaluation =================
# Load the best model
runner.load_model('best_model.pdparams')
# Model evaluation
score, loss = runner.evaluate([X_test, y_test])
print("[Test] score/loss: {:.4f}/{:.4f}".format(score, loss))

Final output: [Test] score/loss: 0.7333/0.5928. Overall the model performs reasonably well on the test set, but accuracy could still be improved (and the loss reduced), so I adjusted the hyperparameters based on experience:

lr = 0.5, 200 epochs: [Test] score/loss: 0.8667/0.4477 ------> generalization improves

lr = 0.7, 200 epochs: [Test] score/loss: 0.8667/0.4478

lr = 0.2, 500 epochs: [Test] score/loss: 0.8667/0.4483

lr = 0.2, 5000 epochs: [Test] score/loss: 0.9333/0.2399 ----> the model now generalizes very well

Tuning the hyperparameters raised the accuracy.
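The manual tuning above can be organized as a small grid search. The sketch below is only illustrative: train_and_eval is a hypothetical stand-in for "train with RunnerV2 under this configuration and return the evaluation score", implemented here as a toy 1-D gradient descent so the sketch is runnable:

```python
import itertools

def train_and_eval(lr, num_epochs):
    # Hypothetical stand-in for "train with RunnerV2 and return the dev score".
    # Toy objective: minimize (w - 3)^2 by gradient descent; report -loss as the score.
    w = 0.0
    for _ in range(num_epochs):
        w -= lr * 2 * (w - 3)
    return -(w - 3) ** 2

# Try every (learning rate, epochs) combination and keep the best score
grid = list(itertools.product([0.2, 0.5, 0.7], [200, 500]))
best = max(grid, key=lambda cfg: train_and_eval(*cfg))
print("best (lr, epochs):", best)
```

In practice the configurations should be compared on the validation split, keeping the test set untouched until the final evaluation.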

Full code:

import torch
import torch.nn as nn
from Runner2 import RunnerV2
import torch.nn.functional as F

class ModelSR(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(ModelSR, self).__init__()
        # Initialize all linear-layer weights to zero
        self.params = {
            'W': nn.Parameter(torch.zeros(size=[input_dim, output_dim])),
            'b': nn.Parameter(torch.zeros(size=[output_dim]))
        }
        # Storage for parameter gradients
        self.grads = {}
        self.X = None
        self.outputs = None
        self.output_dim = output_dim

    def forward(self, inputs):
        self.X = inputs
        # Linear transformation
        score = torch.matmul(self.X, self.params['W']) + self.params['b']
        # Softmax function
        self.outputs = F.softmax(score, dim=1)
        return self.outputs

    def backward(self, labels):
        """
        Input:
            - labels: ground-truth labels, shape=[N, 1], where N is the number of samples
        """
        # Compute the partial derivatives
        N = labels.shape[0]
        labels = labels.view(-1).long()  # flatten the labels and cast to long

        one_hot_labels = F.one_hot(labels, num_classes=self.output_dim).float()  # one-hot encoding

        # Compute the gradients
        self.grads['W'] = -1 / N * torch.matmul(self.X.t(), (one_hot_labels - self.outputs))
        self.grads['b'] = -1 / N * torch.sum(one_hot_labels - self.outputs, dim=0)


class MultiCrossEntropyLoss(nn.Module):
    def __init__(self):
        super(MultiCrossEntropyLoss, self).__init__()

    def forward(self, predicts, labels):
        """
        Input:
            - predicts: predicted probabilities, shape=[N, C], N samples, C classes
            - labels: ground-truth labels, shape=[N]
        Output:
            - loss value, shape=[1]
        """
        # Cast labels to long integers
        labels = labels.view(-1).long()
        N = predicts.shape[0]  # number of samples
        loss = 0.0

        # Accumulate the loss
        for i in range(N):
            index = labels[i]  # label of the current sample
            loss -= torch.log(predicts[i][index])  # cross-entropy term

        return loss / N  # return the mean loss
from abc import abstractmethod
# Optimizer base class
class Optimizer(object):
    def __init__(self, init_lr, model):
        """
        Initialize the optimizer
        """
        # Initial learning rate, used in the parameter update
        self.init_lr = init_lr
        # The model whose parameters this optimizer updates
        self.model = model

    @abstractmethod
    def step(self):
        """
        Define how parameters are updated at each iteration
        """
        pass
class SimpleBatchGD(Optimizer):
    def __init__(self, init_lr, model):
        super(SimpleBatchGD, self).__init__(init_lr=init_lr, model=model)

    def step(self):
        # Parameter update
        # Iterate over all parameters and update them following Eqs. (3.8) and (3.9)
        if isinstance(self.model.params, dict):
            for key in self.model.params.keys():
                self.model.params[key] = self.model.params[key] - self.init_lr * self.model.grads[key]
def accuracy(preds, labels):
    """
    Input:
        - preds: predictions; shape=[N, 1] for binary classification (N samples),
                 shape=[N, C] for multi-class classification (C classes)
        - labels: ground-truth labels, shape=[N, 1]
    Output:
        - accuracy, shape=[1]
    """
    # Decide between binary and multi-class: preds.shape[1]==1 means binary, >1 means multi-class
    if preds.shape[1] == 1:
        # Binary: probabilities above 0.5 map to class 1, otherwise class 0
        # 'torch.round' rounds the probabilities to binary labels
        preds = torch.round(preds)
    else:
        # Multi-class: 'torch.argmax' takes the index of the largest score as the class
        preds = torch.argmax(preds, dim=1)

    # Compute the accuracy
    correct = (preds == labels).sum().item()
    accuracy = correct / len(labels)
    return accuracy

#=============== Dataset ===============
from sklearn.datasets import load_iris
import pandas
import numpy as np

iris_features = np.array(load_iris().data, dtype=np.float32)
iris_labels = np.array(load_iris().target, dtype=np.int32)
print(pandas.isna(iris_features).sum())
print(pandas.isna(iris_labels).sum())
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt  # visualization toolkit

# Box plots to inspect the distribution of outliers
def boxplot(features):
    feature_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']

    # Draw several panels in one figure
    plt.figure(figsize=(5, 5), dpi=200)
    # Adjust the subplot spacing
    plt.subplots_adjust(wspace=0.6)
    # One box plot per feature
    for i in range(4):
        plt.subplot(2, 2, i+1)
        # Draw the box plot
        plt.boxplot(features[:, i],
                    showmeans=True,
                    whiskerprops={"color":"#E20079", "linewidth":0.4, 'linestyle':"--"},
                    flierprops={"markersize":0.4},
                    meanprops={"markersize":1})
        # Panel title
        plt.title(feature_names[i], fontdict={"size":5}, pad=2)
        # y-axis ticks
        plt.yticks(fontsize=4, rotation=90)
        plt.tick_params(pad=0.5)
        # x-axis ticks
        plt.xticks([])
    #plt.savefig('ml-vis.pdf')
    plt.show()

boxplot(iris_features)

#=============== Dataset split ==================
import copy
import torch

# Load the dataset
def load_data(shuffle=True):
    """
    Load the Iris data
    Input:
        - shuffle: whether to shuffle the data, bool
    Output:
        - X: feature data, shape=[150, 4]
        - y: label data, shape=[150]
    """
    # Load the raw data
    X = np.array(load_iris().data, dtype=np.float32)
    y = np.array(load_iris().target, dtype=np.int32)

    X = torch.tensor(X)
    y = torch.tensor(y)

    # Min-max normalization of the data
    X_min, _ = torch.min(X, axis=0)
    X_max, _ = torch.max(X, axis=0)
    X = (X - X_min) / (X_max - X_min)

    # If shuffle is True, randomly shuffle the data
    if shuffle:
        idx = torch.randperm(X.shape[0])
        X = X[idx]
        y = y[idx]
    return X, y
# Fix the random seed
torch.manual_seed(102)

num_train = 120
num_dev = 15
num_test = 15

X, y = load_data(shuffle=True)
print("X shape: ", X.shape, "y shape: ", y.shape)
X_train, y_train = X[:num_train], y[:num_train]
X_dev, y_dev = X[num_train:num_train + num_dev], y[num_train:num_train + num_dev]
X_test, y_test = X[num_train + num_dev:], y[num_train + num_dev:]
# Start training
# Learning rate
lr = 0.2
# Input dimension
input_dim = 4
# Number of classes
output_dim = 3
# Instantiate the model
model = ModelSR(input_dim=input_dim, output_dim=output_dim)
# Gradient descent
optimizer = SimpleBatchGD(init_lr=lr, model=model)
# Cross-entropy loss
loss_fn = MultiCrossEntropyLoss()
# Accuracy metric
metric = accuracy

# Instantiate RunnerV2
runner = RunnerV2(model, optimizer, metric, loss_fn)

# Start training (assumed num_epochs=5000, the best configuration reported above)
runner.train([X_train, y_train], [X_dev, y_dev], num_epochs=5000, log_epochs=1000, save_path="best_model.pdparams")

# Visualize metric curves on the training and validation sets
def plot(runner,fig_name):
    plt.figure(figsize=(10,5))
    plt.subplot(1,2,1)
    epochs = [i for i in range(len(runner.train_scores))]
    # Plot the training loss curve
    plt.plot(epochs, runner.train_loss, color='#e4007f', label="Train loss")
    # Plot the validation loss curve
    plt.plot(epochs, runner.dev_loss, color='#f19ec2', linestyle='--', label="Dev loss")
    # Axes and legend
    plt.ylabel("loss", fontsize='large')
    plt.xlabel("epoch", fontsize='large')
    plt.legend(loc='upper right', fontsize='x-large')
    plt.subplot(1,2,2)
    # Plot the training accuracy curve
    plt.plot(epochs, runner.train_scores, color='#e4007f', label="Train accuracy")
    # Plot the validation accuracy curve
    plt.plot(epochs, runner.dev_scores, color='#f19ec2', linestyle='--', label="Dev accuracy")
    # Axes and legend
    plt.ylabel("score", fontsize='large')
    plt.xlabel("epoch", fontsize='large')
    plt.legend(loc='lower right', fontsize='x-large')
    plt.tight_layout()
    plt.savefig(fig_name)
    plt.show()
# Visualize accuracy curves on the training and validation sets
plot(runner,fig_name='linear-acc2.pdf')

#================= Model evaluation =================
# Load the best model
runner.load_model('best_model.pdparams')
# Model evaluation
score, loss = runner.evaluate([X_test, y_test])
print("[Test] score/loss: {:.4f}/{:.4f}".format(score, loss))

(四) Summary

        I now have a general grasp of building models with the PyTorch framework: defining a model class with an initializer, a forward method, and a backward pass, and wrapping parameters in nn.Parameter so their gradients can be computed automatically. I still need more practice, though; how several parts are actually implemented remains fuzzy to me.

        Overall, compared with the previous regression experiment, this one went relatively smoothly. For one thing, I have basically mastered the Runner class (last time that is where most of my time went), so this experiment was completed more easily and the code structure is clear to me. For another, time was more plentiful this round, about a day and a half, so I could calmly debug errors bit by bit and learn as I went.

        This experiment also gave me a real feel for choosing hyperparameters: suitable adjustments to the learning rate and the number of training epochs can raise the model's accuracy. And I genuinely appreciated having GPU acceleration on my machine!

Below are some odd errors I ran into while debugging:

1. The indexing parameter of torch.meshgrid: passing indexing='ij' keeps the original matrix-style index ordering (newer PyTorch versions warn when the argument is omitted).

2. labels = labels.view(-1).long()  # flatten the labels and cast to long — the error here shows that F.one_hot only works on index tensors (i.e. long/int64 tensors).
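Both pitfalls can be reproduced in a few lines (a standalone sketch):

```python
import torch
import torch.nn.functional as F

# torch.meshgrid: indexing='ij' keeps matrix-style ordering, shape = (len(a), len(b))
a, b = torch.meshgrid(torch.arange(3), torch.arange(2), indexing='ij')
print(a.shape)  # torch.Size([3, 2])

# F.one_hot only accepts long (int64) index tensors; int32 raises a RuntimeError
labels = torch.tensor([0, 2, 1], dtype=torch.int32)
try:
    F.one_hot(labels, num_classes=3)
except RuntimeError as err:
    print("one_hot on int32 failed:", err)
one_hot = F.one_hot(labels.long(), num_classes=3)
print(one_hot.shape)  # torch.Size([3, 3])
```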


Reposted from blog.csdn.net/qq_73704268/article/details/142636530