[PyTorch Programming] Installing and Getting Started with PyTorch-Ignite v0.4.8

About PyTorch-Ignite

PyTorch-Ignite is a high-level library for PyTorch, playing a role similar to the one Keras plays for TensorFlow. Its official website is:

In short, the library makes it more convenient to train, evaluate, and use deep learning models written in PyTorch.

Installation

PyTorch-Ignite depends on PyTorch, and installation involves the following steps:
1. Create a Python environment:

conda create -n py36_ignite_048 python=3.6
conda activate py36_ignite_048

2. Install PyTorch:
The following command installs PyTorch 1.9.0 built against CUDA 10.2:

pip install torch==1.9.0+cu102 torchvision==0.10.0+cu102 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html

3. Install PyTorch-Ignite:

pip install pytorch-ignite

4. Install Jupyter Notebook (convenient for testing):

pip install notebook

5. Install TensorFlow (so that TensorBoard is available; installing the standalone tensorboard package with pip install tensorboard also works):

pip install tensorflow

Basic Usage

Launching Jupyter Notebook

For details on using Jupyter Notebook, see [Python Programming] Keeping a Remote Jupyter Notebook Service Running on a Server.

conda activate py36_ignite_048
jupyter notebook --port=1234 # specify the port

The --port=1234 option above sets the port for the remote Jupyter Notebook service. Most server ports sit behind a firewall, so ask your administrator which ports are open for your use; the administrator can check with a firewall tool such as ufw.
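Before asking about the firewall, you can at least verify that nothing on the machine is already listening on your candidate port. A minimal standard-library sketch (the function name port_is_free is my own, not part of Jupyter or ufw):

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is currently listening on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 on a successful connection, i.e. port in use
        return s.connect_ex((host, port)) != 0

# Check a candidate port before launching jupyter notebook on it
print(port_is_free(1234))
```

Note that a free port can still be blocked by the firewall for remote clients; this only rules out local collisions.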

Checking the Installed Versions

Create a new notebook file, add a first cell, paste in the following code, and run it:

import torch
print(torch.__file__) # installation path
print(torch.__version__) # version number
print(torch.cuda.is_available()) # whether CUDA is available

Output like the following confirms that PyTorch is installed correctly:

/home/XXXXXX/anaconda3/envs/py36_ignite_048/lib/python3.6/site-packages/torch/__init__.py
1.9.0+cu102
True

Add another cell to check Ignite:

import ignite
print(ignite.__file__)  # installation path
print(ignite.__version__) # version number

Output like the following confirms a successful installation:

/home/XXXXXX/anaconda3/envs/py36_ignite_048/lib/python3.6/site-packages/ignite/__init__.py
0.4.8

Code Skeleton

A typical Ignite training script has the following structure (Net, get_data_loaders, train_batch_size, val_batch_size, and log_interval are placeholders you define yourself):

from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy, Loss

model = Net()
train_loader, val_loader = get_data_loaders(train_batch_size, val_batch_size)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.8)
criterion = nn.NLLLoss()

trainer = create_supervised_trainer(model, optimizer, criterion)

val_metrics = {
    "accuracy": Accuracy(),
    "nll": Loss(criterion)
}
evaluator = create_supervised_evaluator(model, metrics=val_metrics)

@trainer.on(Events.ITERATION_COMPLETED(every=log_interval))
def log_training_loss(trainer):
    print(f"Epoch[{trainer.state.epoch}] Loss: {trainer.state.output:.2f}")

@trainer.on(Events.EPOCH_COMPLETED)
def log_training_results(trainer):
    evaluator.run(train_loader)
    metrics = evaluator.state.metrics
    print(f"Training Results - Epoch: {trainer.state.epoch}  Avg accuracy: {metrics['accuracy']:.2f} Avg loss: {metrics['nll']:.2f}")

@trainer.on(Events.EPOCH_COMPLETED)
def log_validation_results(trainer):
    evaluator.run(val_loader)
    metrics = evaluator.state.metrics
    print(f"Validation Results - Epoch: {trainer.state.epoch}  Avg accuracy: {metrics['accuracy']:.2f} Avg loss: {metrics['nll']:.2f}")

trainer.run(train_loader, max_epochs=100)
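What trainer.run(...) does under the hood is a plain nested loop that fires events at fixed points. The following dependency-free sketch (MiniEngine is a hypothetical toy, not Ignite's actual implementation) shows the control flow that the @trainer.on(...) handlers above hook into:

```python
# A toy re-implementation of the Engine/event pattern, hypothetical and
# dependency-free; it only illustrates the control flow that
# trainer.run() drives and that the @trainer.on(...) handlers hook into.
class MiniEngine:
    def __init__(self, process_fn):
        self.process_fn = process_fn                  # called once per batch
        self.handlers = {"EPOCH_COMPLETED": [], "ITERATION_COMPLETED": []}
        self.state = type("State", (), {"epoch": 0, "iteration": 0, "output": None})()

    def on(self, event, handler):
        self.handlers[event].append(handler)

    def run(self, data, max_epochs=1):
        for _ in range(max_epochs):
            self.state.epoch += 1
            for batch in data:
                self.state.iteration += 1
                self.state.output = self.process_fn(self, batch)
                for h in self.handlers["ITERATION_COMPLETED"]:
                    h(self)
            for h in self.handlers["EPOCH_COMPLETED"]:
                h(self)
        return self.state

losses = []
engine = MiniEngine(lambda e, batch: sum(batch) / len(batch))  # stand-in "loss"
engine.on("ITERATION_COMPLETED", lambda e: losses.append(e.state.output))
state = engine.run([[1, 2], [3, 5]], max_epochs=2)
print(state.epoch, state.iteration, losses)  # → 2 4 [1.5, 4.0, 1.5, 4.0]
```

The real Engine adds event filters (e.g. every=log_interval), state checkpointing, and error handling, but the loop-plus-handlers shape is the same.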

Usage Example

Reference: https://pytorch-ignite.ai/tutorials/beginner/01-getting-started/

Import the dependencies

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision.models import resnet18
from torchvision.transforms import Compose, Normalize, ToTensor

from ignite.engine import Engine, Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy, Loss
from ignite.handlers import ModelCheckpoint
from ignite.contrib.handlers import TensorboardLogger, global_step_from_engine

(Running this cell may print a harmless TqdmWarning about IProgress not being found; it can be ignored, or silenced by updating jupyter and ipywidgets.)

Check whether CUDA is available

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
cuda

Define the Model Class

This part is exactly the same as in plain PyTorch; nothing changes!

# Define the deep learning model class
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Changed the output layer to output 10 classes instead of 1000 classes
        self.model = resnet18(num_classes=10)
        # Changed the input layer to take grayscale images for MNIST instead of RGB images
        self.model.conv1 = nn.Conv2d(
            1, 64, kernel_size=3, padding=1, bias=False
        )
    def forward(self, x):
        return self.model(x)

# Create an instance of the Net class
model = Net().to(device)
# Print the model
print(model)
Net(
  (model): ResNet(
    (conv1): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (layer1): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer2): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer3): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer4): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
    (fc): Linear(in_features=512, out_features=10, bias=True)
  )
)

Loading the Dataset

Download the dataset into the same directory as this file; this part is also identical to plain PyTorch.

data_transform = Compose([ToTensor(), Normalize((0.1307,), (0.3081,))])

train_loader = DataLoader(
    MNIST(download=True, root=".", transform=data_transform, train=True), batch_size=128, shuffle=True
)

val_loader = DataLoader(
    MNIST(download=True, root=".", transform=data_transform, train=False), batch_size=256, shuffle=False
)
(The first run may print a harmless UserWarning from torchvision about a non-writeable NumPy array; it can be ignored.)
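For reference, Normalize((0.1307,), (0.3081,)) simply standardizes each pixel with the MNIST training-set mean and standard deviation, after ToTensor has scaled values into [0, 1]. A small dependency-free sketch of the arithmetic:

```python
# What Normalize((0.1307,), (0.3081,)) does to each pixel value x
# (already in [0, 1] after ToTensor): x -> (x - mean) / std.
MEAN, STD = 0.1307, 0.3081   # MNIST training-set statistics

def normalize(pixels, mean=MEAN, std=STD):
    return [(x - mean) / std for x in pixels]

print([round(v, 3) for v in normalize([0.0, 0.1307, 1.0])])  # → [-0.424, 0.0, 2.821]
```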

Create the Optimizer and the Loss Function

This part is also identical to plain PyTorch.

# Optimizer and learning-rate settings
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.005)
# Loss function
criterion = nn.CrossEntropyLoss()
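As a reminder of what nn.CrossEntropyLoss computes for one sample: it applies log-softmax to the raw logits and takes the negative log-probability of the true class. A dependency-free, single-sample sketch (for intuition only; the real loss works on batches of tensors):

```python
import math

def cross_entropy(logits, target):
    """Cross-entropy for one sample: -log(softmax(logits)[target])."""
    m = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    log_softmax = [math.log(e / sum(exps)) for e in exps]
    return -log_softmax[target]

print(round(cross_entropy([0.0, 0.0, 0.0], 1), 4))  # uniform logits → ln(3) ≈ 1.0986
# A confident, correct prediction costs less than a wrong one:
print(cross_entropy([5.0, 0.1, 0.2], 0) < cross_entropy([5.0, 0.1, 0.2], 1))  # → True
```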

Defining the trainer, train_evaluator, and val_evaluator

This part is essentially a high-level wrapper around the traditional training loop.

# Metric dictionary; Accuracy() and Loss(criterion) both come from ignite.metrics
val_metrics = {
    "accuracy": Accuracy(),
    "loss": Loss(criterion)
}

# Custom training step
def train_step(engine, batch):
    model.train()
    optimizer.zero_grad()
    x, y = batch[0].to(device), batch[1].to(device)
    y_pred = model(x)
    loss = criterion(y_pred, y)
    loss.backward()
    optimizer.step()
    return loss.item()

# trainer, train_evaluator, and val_evaluator are all instances of Engine
trainer = Engine(train_step) # the training loop

# Custom validation step
def validation_step(engine, batch):
    model.eval()
    with torch.no_grad():
        x, y = batch[0].to(device), batch[1].to(device)
        y_pred = model(x)
        return y_pred, y

train_evaluator = Engine(validation_step) # evaluation loop over the training set
val_evaluator = Engine(validation_step) # evaluation loop over the validation set

# Attach every metric in the dictionary to both evaluators
for name, metric in val_metrics.items():
    metric.attach(train_evaluator, name)

for name, metric in val_metrics.items():
    metric.attach(val_evaluator, name)
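What metric.attach(engine, name) wires up is a simple life cycle: reset the metric at the start of a run, update it on every batch, and compute the final value at the end, storing it under engine.state.metrics[name]. A hypothetical dependency-free sketch of that life cycle for an accuracy metric:

```python
# Hypothetical sketch of Ignite's metric life cycle (reset/update/compute),
# which is what metric.attach(engine, name) hooks into engine events for you.
class RunningAccuracy:
    def reset(self):
        # called once when the evaluator run starts
        self.correct = 0
        self.total = 0

    def update(self, y_pred, y):
        # called once per batch; y_pred/y are predicted and true class indices
        self.correct += sum(p == t for p, t in zip(y_pred, y))
        self.total += len(y)

    def compute(self):
        # called once when the run completes
        return self.correct / self.total

acc = RunningAccuracy()
acc.reset()
for y_pred, y in [([0, 1, 1], [0, 1, 0]), ([2, 2], [2, 0])]:
    acc.update(y_pred, y)
print(acc.compute())  # 3 correct out of 5 → 0.6
```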

Configure logging output and attach it to the trainer

log_interval = 100

@trainer.on(Events.ITERATION_COMPLETED(every=log_interval))
def log_training_loss(engine):
    print(f"Epoch[{engine.state.epoch}], Iter[{engine.state.iteration}] Loss: {engine.state.output:.2f}")

@trainer.on(Events.EPOCH_COMPLETED)
def log_training_results(trainer):
    train_evaluator.run(train_loader)
    metrics = train_evaluator.state.metrics
    print(f"Training Results - Epoch[{trainer.state.epoch}] Avg accuracy: {metrics['accuracy']:.2f} Avg loss: {metrics['loss']:.2f}")


@trainer.on(Events.EPOCH_COMPLETED)
def log_validation_results(trainer):
    val_evaluator.run(val_loader)
    metrics = val_evaluator.state.metrics
    print(f"Validation Results - Epoch[{trainer.state.epoch}] Avg accuracy: {metrics['accuracy']:.2f} Avg loss: {metrics['loss']:.2f}")

Configuring Model Checkpointing

# The score function returns the current value of any metric we defined in val_metrics
def score_function(engine):
    return engine.state.metrics["accuracy"]

# Checkpoint to store n_saved best models wrt score function
model_checkpoint = ModelCheckpoint(
    "checkpoint",
    n_saved=2,
    filename_prefix="best",
    score_function=score_function,
    score_name="accuracy",
    global_step_transform=global_step_from_engine(trainer), # helps fetch the trainer's state
)
  
# Save the model after every completed run of val_evaluator
val_evaluator.add_event_handler(Events.COMPLETED, model_checkpoint, {"model": model})

<ignite.engine.events.RemovableEventHandle at 0x7ffa680bc198>
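The n_saved=2 / score_function combination means ModelCheckpoint keeps only the two checkpoints with the highest accuracy seen so far. A dependency-free sketch of that selection policy (update_best and the file names are illustrative, not Ignite's internals):

```python
# Illustrative sketch of the "keep the n_saved best" policy that
# ModelCheckpoint applies using score_function (not Ignite's actual code).
def update_best(saved, score, name, n_saved=2):
    """Keep at most n_saved (score, name) pairs with the highest scores."""
    return sorted(saved + [(score, name)], reverse=True)[:n_saved]

saved = []
for epoch, acc in enumerate([0.86, 0.99, 0.97, 0.995], start=1):
    saved = update_best(saved, acc, f"best_model_{epoch}_accuracy={acc}.pt")

# Only the epoch-4 and epoch-2 checkpoints survive:
print([name for _, name in saved])
```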

Logging Results to TensorBoard

This makes it convenient to inspect training with TensorBoard.

# Create a TensorboardLogger object
tb_logger = TensorboardLogger(log_dir="tb-logger")

# Attach handler to plot trainer's loss every 100 iterations
tb_logger.attach_output_handler(
    trainer,
    event_name=Events.ITERATION_COMPLETED(every=100),
    tag="training",
    output_transform=lambda loss: {"batch_loss": loss},
)

# Attach handler for plotting both evaluators' metrics after every epoch completes
for tag, evaluator in [("training", train_evaluator), ("validation", val_evaluator)]:
    tb_logger.attach_output_handler(
        evaluator,
        event_name=Events.EPOCH_COMPLETED,
        tag=tag,
        metric_names="all",
        global_step_transform=global_step_from_engine(trainer),
    )

Start Training the Model

trainer.run(train_loader, max_epochs=5)

tb_logger.close()
(A harmless UserWarning from PyTorch about named tensors being experimental may be printed; it can be ignored.)


Epoch[1], Iter[100] Loss: 0.12
Epoch[1], Iter[200] Loss: 0.08
Epoch[1], Iter[300] Loss: 0.09
Epoch[1], Iter[400] Loss: 0.10
Training Results - Epoch[1] Avg accuracy: 0.86 Avg loss: 0.53
Validation Results - Epoch[1] Avg accuracy: 0.86 Avg loss: 0.56
Epoch[2], Iter[500] Loss: 0.07
Epoch[2], Iter[600] Loss: 0.03
Epoch[2], Iter[700] Loss: 0.04
Epoch[2], Iter[800] Loss: 0.02
Epoch[2], Iter[900] Loss: 0.16
Training Results - Epoch[2] Avg accuracy: 0.99 Avg loss: 0.04
Validation Results - Epoch[2] Avg accuracy: 0.99 Avg loss: 0.04
Epoch[3], Iter[1000] Loss: 0.05
Epoch[3], Iter[1100] Loss: 0.07
Epoch[3], Iter[1200] Loss: 0.01
Epoch[3], Iter[1300] Loss: 0.06
Epoch[3], Iter[1400] Loss: 0.04
Training Results - Epoch[3] Avg accuracy: 0.99 Avg loss: 0.03
Validation Results - Epoch[3] Avg accuracy: 0.99 Avg loss: 0.04
Epoch[4], Iter[1500] Loss: 0.04
Epoch[4], Iter[1600] Loss: 0.07
Epoch[4], Iter[1700] Loss: 0.03
Epoch[4], Iter[1800] Loss: 0.03
Training Results - Epoch[4] Avg accuracy: 0.99 Avg loss: 0.03
Validation Results - Epoch[4] Avg accuracy: 0.99 Avg loss: 0.04
Epoch[5], Iter[1900] Loss: 0.04
Epoch[5], Iter[2000] Loss: 0.07
Epoch[5], Iter[2100] Loss: 0.04
Epoch[5], Iter[2200] Loss: 0.03
Epoch[5], Iter[2300] Loss: 0.04
Training Results - Epoch[5] Avg accuracy: 0.99 Avg loss: 0.02
Validation Results - Epoch[5] Avg accuracy: 0.99 Avg loss: 0.03

Viewing the Results in TensorBoard

conda activate py36_ignite_048
cd [directory containing your code]
tensorboard --logdir=./tb-logger --bind_all --port=6666

The --port=6666 option above sets the port. Note that some browsers block port 6666 as unsafe, so if the page will not load, try another open port such as 6006.


Reposted from blog.csdn.net/m0_37201243/article/details/123662556