Huawei's open-source, self-developed AI framework MindSpore application case: instance segmentation with Mask-RCNN

Mask R-CNN
Mask R-CNN is a conceptually simple, flexible, and general framework for object instance segmentation. It detects the objects in an image while simultaneously generating a high-quality mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing bounding-box detection branch. Mask R-CNN is simple to train, runs at 5 fps, and adds only a small overhead compared with Faster R-CNN. Moreover, Mask R-CNN generalizes easily to other tasks, for example, estimating human poses within the same framework. Mask R-CNN performs well on all three key tracks of the COCO challenge, including instance segmentation, bounding-box object detection, and person keypoint detection. Without bells and whistles, Mask R-CNN outperforms all existing single-model entries on every task, including the winners of the COCO 2016 challenge.

Model Introduction
Mask R-CNN is a two-stage object detection network. As an extension of Faster R-CNN, it adds a branch that predicts object masks on top of the existing bounding-box detection branch. The network uses a Region Proposal Network (RPN), which shares the full-image convolutional features with the detection network, so region proposals can be computed almost for free. By sharing convolutional features, the RPN and the mask branch are merged into a single network. The backbone can also be swapped for the lightweight MobileNet.
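As a conceptual sketch only (not this case's actual network code), the two-stage structure with its parallel mask branch can be summarized as follows; every argument is a placeholder callable:

# Conceptual sketch of the Mask R-CNN forward pass.
def mask_rcnn_forward(image, backbone, rpn, roi_align, box_head, mask_head):
    features = backbone(image)             # convolutional features shared by all branches
    proposals = rpn(features)              # region proposals, nearly free thanks to sharing
    rois = roi_align(features, proposals)  # fixed-size feature map per proposal
    classes, boxes = box_head(rois)        # existing Faster R-CNN detection branches
    masks = mask_head(rois)                # new mask branch, parallel to box detection
    return classes, boxes, masks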

If you are interested in MindSpore, you can follow the MindSpore community.


I. Environment Setup

1. Go to the ModelArts website

The cloud platform helps users quickly create and deploy models and manage the full-lifecycle AI workflow. Use the cloud platform below to get started with MindSpore: obtain the installation command and install the MindSpore 2.0.0-alpha version. You can reach the ModelArts website from the MindSpore tutorials.


Select CodeLab below for an instant hands-on experience.


Wait for the environment setup to complete.


2. Use CodeLab to run the Notebook instance

Download the sample notebook Mask-RCNN实现实例分割.ipynb as the sample code.


Choose ModelArts Upload Files and upload the .ipynb file.


Select the kernel environment.


Switch to the GPU environment by selecting the first option (free for a limited time).


Go to the MindSpore website and click Install at the top.


Get the installation command.


Back in the Notebook, add the following command before the first code block.

conda update -n base -c defaults conda


Install the MindSpore 2.0 GPU version.

conda install mindspore=2.0.0a0 -c mindspore -c conda-forge


Install mindvision.

pip install mindvision


Install the download package.

pip install download
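
After installation, you can optionally verify that the environment works. mindspore.run_check() is an official helper that prints the installed version and runs a small sanity computation:

import mindspore

# Print the MindSpore version and verify the installation.
mindspore.run_check()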


II. Experiment Preparation

Importing Official and Third-Party Libraries

We first import the official and third-party libraries that this case depends on.

import time
import os

import numpy as np
import mindspore.nn as nn
import mindspore.common.dtype as mstype
from mindspore.ops import operations as P
from mindspore.ops import functional as F
from mindspore.ops import composite as C
from mindspore.nn import layer as L
from mindspore.common.initializer import initializer
from mindspore import context, Tensor, Parameter
from mindspore import ParameterTuple
from mindspore.train.callback import Callback
from mindspore.nn.wrap.grad_reducer import DistributedGradReducer
from mindspore.train.callback import CheckpointConfig, ModelCheckpoint, TimeMonitor
from mindspore.train import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from mindspore.nn import Momentum
from mindspore.common import set_seed

from src.utils.config import config
# The network and training wrappers used later in this case (MaskRcnnResnet50,
# LossNet, WithLossCell, TrainOneStepCell, LossCallBack) are assumed to live in
# the sample's src package; the module paths below are assumptions, so adjust
# them to your local source layout.
from src.model.mask_rcnn_r50 import MaskRcnnResnet50
from src.model.network_define import LossNet, WithLossCell, TrainOneStepCell, LossCallBack

Data Processing

Before starting the experiment, make sure that a local Python environment with the MindSpore Vision suite is installed.

Data Preparation

COCO2017 is a widely used dataset with bounding-box and pixel-level annotations. These annotations can be used for scene-understanding tasks such as semantic segmentation, object detection, and image captioning. The training and evaluation splits contain 118K and 5K images respectively.

Dataset size: 19G

Training: 18G, 118,000 images

Evaluation: 1G, 5,000 images

Annotations: 241M; includes instances, captions, person keypoints, etc.

Data format: images and JSON files

Note: the data is processed in dataset.py.

First, you need to download the coco2017 dataset.

COCO 2017 download: https://cocodataset.org/#download


After the download completes, make sure your dataset is stored in the following layout.

!cat datasets.md
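
For reference, a typical coco2017 layout looks like the sketch below (the exact directory names depend on the paths set in config.py, so treat this as an assumption):

coco2017/
├── annotations/
│   ├── instances_train2017.json
│   └── instances_val2017.json
├── train2017/        # 118,000 training images
└── val2017/          # 5,000 validation images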

Data Preprocessing
The images in the raw dataset vary in size, which makes unified loading and detection inconvenient, so we first resize the images to a uniform size. The annotation information is stored in JSON files, which we need to read in order to attach labels to the image data.
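
The case does this reading inside dataset.py. As a minimal illustration of what comes out of the annotation JSON, the sketch below reads it with pycocotools directly (the annotation path is an assumption; adjust it to your local dataset):

from pycocotools.coco import COCO

# Hypothetical path; point this at your local annotation file.
coco = COCO("coco2017/annotations/instances_val2017.json")
img_id = coco.getImgIds()[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
print("image", img_id, "has", len(anns), "annotated instances")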

Data Augmentation
Before you start training the model, data augmentation is needed for your dataset and for creating the training and test data. For the coco dataset, you can use dataset.py to add labels to the images and convert them to MindRecord. MindRecord is a data format specified by MindSpore that can optimize MindSpore's performance in certain scenarios.

First, we create the path where the MindRecord dataset will be saved and read.

from dataset.dataset import create_coco_dataset, data_to_mindrecord_byte_image

def create_mindrecord_dir(prefix, mindrecord_dir):
    """Create the MindRecord directory and convert the dataset into MindRecord files."""
    if not os.path.isdir(mindrecord_dir):
        os.makedirs(mindrecord_dir)
    if config.dataset == "coco":
        if os.path.isdir(config.data_root):
            print("Create Mindrecord.")
            data_to_mindrecord_byte_image("coco", True, prefix)
            print("Create Mindrecord Done, at {}".format(mindrecord_dir))
        else:
            raise Exception("coco_root does not exist.")
    else:
        if os.path.isdir(config.IMAGE_DIR) and os.path.exists(config.ANNO_PATH):
            print("Create Mindrecord.")
            data_to_mindrecord_byte_image("other", True, prefix)
            print("Create Mindrecord Done, at {}".format(mindrecord_dir))
        else:
            raise Exception("IMAGE_DIR or ANNO_PATH does not exist.")
    # Wait until the index (.db) of the first MindRecord file has been written.
    mindrecord_file = os.path.join(mindrecord_dir, prefix + "0")
    while not os.path.exists(mindrecord_file + ".db"):
        time.sleep(5)

Then, load the dataset and call the create_coco_dataset function from dataset.py to perform data preprocessing and augmentation.

# Allocating memory Environment
device_target = config.device_target
rank = 0
device_num = 1
context.set_context(mode=context.GRAPH_MODE, device_target=device_target)

print("Start create dataset!")
# Call the interface for data processing
# It will generate mindrecord file in config.mindrecord_dir,
# and the file name is MaskRcnn.mindrecord0, 1, ... file_num.
prefix = "MaskRcnn.mindrecord"
mindrecord_dir = config.mindrecord_dir
mindrecord_file = os.path.join(mindrecord_dir, prefix + "0")
if rank == 0 and not os.path.exists(mindrecord_file):
    create_mindrecord_dir(prefix, mindrecord_dir)
# When creating the MindDataset, use the first mindrecord file,
# such as MaskRcnn.mindrecord0.
dataset = create_coco_dataset(mindrecord_file, batch_size=config.batch_size, device_num=device_num, rank_id=rank)
dataset_size = dataset.get_dataset_size()
print("total images num: ", dataset_size)
print("Create dataset done!")


Dataset Visualization

Run the following code to look at the augmented images. You can see that the images have been rotated, and that their shape has been converted to the (N, C, H, W) format expected by the network, where N is the number of samples, C the number of channels, and H and W the image height and width.

import numpy as np
import matplotlib.pyplot as plt

show_data = next(dataset.create_dict_iterator())

show_images = show_data["image"].asnumpy()
print(f'Image shape: {show_images.shape}')

plt.figure()

# Show 2 images for reference
for i in range(1, 3):
    plt.subplot(1, 2, i)

    # Convert the image to HWC format
    image_trans = np.transpose(show_images[i - 1], (1, 2, 0))
    image_trans = np.clip(image_trans, 0, 1)

    plt.imshow(image_trans[:, :], cmap=None)
    plt.xticks(rotation=180)
    plt.axis("off")


Training

Model Training Parameters
Here we list some of the important training parameters. You can check the configuration file config.py for the full details.

Parameter        Default       Description
workers          1             Number of parallel workers
device_target    GPU           Device type
learning_rate    0.002         Learning rate
weight_decay     1e-4          Controls the weight decay speed
total_epoch      13            Number of epochs
batch_size       2             Batch size
dataset          coco          Dataset name
pre_trained      ./checkpoint  Path of the pretrained model
checkpoint_path  ./ckpt_0      Path to save checkpoints
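
These values live on the config object imported from src.utils.config, and the case itself mutates that object elsewhere (for example, config.mindrecord_dir in the evaluation code), so individual fields can be overridden in the notebook before training. A minimal sketch; whether dynamic_lr reads the learning_rate field by this exact name is an assumption based on the table above:

# Override a couple of training parameters before calling train_maskrcnn().
config.batch_size = 2         # used when creating the dataset
config.learning_rate = 0.002  # assumed to be read when building the LR schedule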
Training the Model
Model training requires defining the optimizer, the loss function, and so on. A pretrained model can also be loaded to speed up training.

We therefore define a function that loads the pretrained weight file.

def load_pretrained_ckpt(net, load_path, device_target):
    """
    Load pretrained checkpoint.

    Args:
        net(Cell): The network to load weights into.
        load_path(string): The path of the checkpoint file.
        device_target(string): The target device type.

    Returns:
        Cell, the network with pretrained weights.
    """
    param_dict = load_checkpoint(load_path)
    if config.pretrain_epoch_size == 0:
        for item in list(param_dict.keys()):
            if not (item.startswith('backbone') or item.startswith('rcnn_mask')):
                param_dict.pop(item)

        if device_target == 'GPU':
            for key, value in param_dict.items():
                tensor = Tensor(value, mstype.float32)
                param_dict[key] = Parameter(tensor, key)

    load_param_into_net(net, param_dict)
    return net
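
A hedged usage sketch follows; the checkpoint path here is hypothetical (the training function below passes config.pre_trained instead):

# Hypothetical usage: load backbone and mask-branch weights into a fresh network.
net = MaskRcnnResnet50(config=config)
net = load_pretrained_ckpt(net=net, load_path="./checkpoint/pretrained_r50.ckpt",
                           device_target=config.device_target)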

In this case, for ease of demonstration, a subset of the dataset was selected and trained for 1 epoch. Because a pretrained model is loaded, the loss value quickly stabilizes and fluctuates around 1, which can serve as one criterion for judging that the model has converged.

The ckpt files produced by training are saved in the checkpoint folder and can be loaded for subsequent fine-tuning and inference.

from src.utils.lr_schedule import dynamic_lr

set_seed(1)

def train_maskrcnn():
    """Construct the traning function"""
    # Allocating memory Environment
    device_target = config.device_target
    rank = 0
    device_num = 1
    context.set_context(mode=context.GRAPH_MODE, device_target=device_target)

    print("Start create dataset!")
    # Call the interface for data processing
    # It will generate mindrecord file in config.mindrecord_dir,
    # and the file name is MaskRcnn.mindrecord0, 1, ... file_num.
    prefix = "MaskRcnn.mindrecord"
    mindrecord_dir = config.mindrecord_dir
    mindrecord_file = os.path.join(mindrecord_dir, prefix + "0")
    if rank == 0 and not os.path.exists(mindrecord_file):
        create_mindrecord_dir(prefix, mindrecord_dir)
    # When creating the MindDataset, use the first mindrecord file,
    # such as MaskRcnn.mindrecord0.
    dataset = create_coco_dataset(mindrecord_file, batch_size=config.batch_size, device_num=device_num, rank_id=rank)
    dataset_size = dataset.get_dataset_size()
    print("total images num: ", dataset_size)
    print("Create dataset done!")
    # Net Instance
    net = MaskRcnnResnet50(config=config)

    net = net.set_train()
    # load pretrained model
    load_path = config.pre_trained
    if load_path != "":
        print("Loading pretrained resnet50 checkpoint")
        net = load_pretrained_ckpt(net=net, load_path=load_path, device_target=device_target)

    loss = LossNet()
    lr = Tensor(dynamic_lr(config, rank_size=device_num, start_steps=config.pretrain_epoch_size * dataset_size),
                mstype.float32)
    opt = Momentum(params=net.trainable_params(), learning_rate=lr, momentum=config.momentum,
                   weight_decay=config.weight_decay, loss_scale=config.loss_scale)
    # wrap the loss function
    net_with_loss = WithLossCell(net, loss)
    # Use TrainOneStepCell set the training pipeline.
    net = TrainOneStepCell(net_with_loss, opt, sens=config.loss_scale)
    # Monitor the training process.
    time_cb = TimeMonitor(data_size=dataset_size)
    loss_cb = LossCallBack(rank_id=rank)
    cb = [time_cb, loss_cb]
    # save the trained model
    if config.save_checkpoint:
        # set saved weights.
        ckpt_step = config.save_checkpoint_epochs * dataset_size
        ckptconfig = CheckpointConfig(save_checkpoint_steps=ckpt_step, keep_checkpoint_max=config.keep_checkpoint_max)
        save_checkpoint_path = os.path.join(config.save_checkpoint_path, 'ckpt_' + str(rank) + '/')
        # apply saved weights.
        ckpoint_cb = ModelCheckpoint(prefix='mask_rcnn', directory=save_checkpoint_path, config=ckptconfig)
        cb += [ckpoint_cb]
    # start training.
    model = Model(net)
    model.train(config.epoch_size, dataset, callbacks=cb, dataset_sink_mode=False)

if __name__ == '__main__':
    train_maskrcnn()


Evaluation

After training is complete, the trained model is saved in the checkpoint directory.

On the COCO validation dataset, we can evaluate the accuracy of the trained model.

from pycocotools.coco import COCO

from src.utils.util import coco_eval, bbox2result_1image, results2json, get_seg_masks

set_seed(1)


def maskrcnn_eval(dataset_path, ckpt_path, ann_file):
    """
    MaskRcnn evaluation.

    Args:
        dataset_path(str): Dataset file path.
        ckpt_path(str): Checkpoint file path.
        ann_file(str): Annotations file path.
    """
    ds = create_coco_dataset(dataset_path, batch_size=config.test_batch_size, is_training=False)

    net = MaskRcnnResnet50(config)
    param_dict = load_checkpoint(ckpt_path)
    load_param_into_net(net, param_dict)
    net.set_train(False)

    eval_iter = 0
    total = ds.get_dataset_size()
    outputs = []
    dataset_coco = COCO(ann_file)

    print("total images num: ", total)
    print("Processing, please wait a moment.")
    max_num = 128
    start = time.time()
    for data in ds.create_dict_iterator(output_numpy=True, num_epochs=1):
        eval_iter = eval_iter + 1

        img_data = data['image']
        img_metas = data['image_shape']
        gt_bboxes = data['box']
        gt_labels = data['label']
        gt_num = data['valid_num']
        gt_mask = data["mask"]

        # run net
        output = net(Tensor(img_data), Tensor(img_metas), Tensor(gt_bboxes),
                     Tensor(gt_labels), Tensor(gt_num), Tensor(gt_mask))

        # output
        all_bbox = output[0]
        all_label = output[1]
        all_mask = output[2]
        all_mask_fb = output[3]

        for j in range(config.test_batch_size):
            all_bbox_squee = np.squeeze(all_bbox.asnumpy()[j, :, :])
            all_label_squee = np.squeeze(all_label.asnumpy()[j, :, :])
            all_mask_squee = np.squeeze(all_mask.asnumpy()[j, :, :])
            all_mask_fb_squee = np.squeeze(all_mask_fb.asnumpy()[j, :, :, :])

            all_bboxes_tmp_mask = all_bbox_squee[all_mask_squee, :]
            all_labels_tmp_mask = all_label_squee[all_mask_squee]
            all_mask_fb_tmp_mask = all_mask_fb_squee[all_mask_squee, :, :]

            if all_bboxes_tmp_mask.shape[0] > max_num:
                inds = np.argsort(-all_bboxes_tmp_mask[:, -1])
                inds = inds[:max_num]
                all_bboxes_tmp_mask = all_bboxes_tmp_mask[inds]
                all_labels_tmp_mask = all_labels_tmp_mask[inds]
                all_mask_fb_tmp_mask = all_mask_fb_tmp_mask[inds]

            bbox_results = bbox2result_1image(all_bboxes_tmp_mask, all_labels_tmp_mask, config.num_classes)
            segm_results = get_seg_masks(all_mask_fb_tmp_mask, all_bboxes_tmp_mask, all_labels_tmp_mask,
                                         img_metas[j], True, config.num_classes)
            outputs.append((bbox_results, segm_results))

    end = time.time()
    print("Evaluation cost time {}".format(end - start))
    eval_types = ["bbox", "segm"]
    result_files = results2json(dataset_coco, outputs, "./results.pkl")
    coco_eval(result_files, eval_types, dataset_coco, single_result=False)

def eval_():
    """Execute the Evaluation."""
    device_target = config.device_target
    context.set_context(mode=context.GRAPH_MODE, device_target=device_target)

    config.mindrecord_dir = os.path.join(config.data_root, config.mindrecord_dir)

    prefix = "MaskRcnn_eval.mindrecord"
    mindrecord_dir = config.mindrecord_dir
    mindrecord_file = os.path.join(mindrecord_dir, prefix)

    if not os.path.exists(mindrecord_file):
        if not os.path.isdir(mindrecord_dir):
            os.makedirs(mindrecord_dir)
        if config.dataset == "coco":
            if os.path.isdir(config.data_root):
                print("Create Mindrecord.")
                data_to_mindrecord_byte_image("coco", False, prefix, file_num=1)
                print("Create Mindrecord Done, at {}".format(mindrecord_dir))
            else:
                print("data_root not exits.")
        else:
            if os.path.isdir(config.IMAGE_DIR) and os.path.exists(config.ANNO_PATH):
                print("Create Mindrecord.")
                data_to_mindrecord_byte_image("other", False, prefix, file_num=1)
                print("Create Mindrecord Done, at {}".format(mindrecord_dir))
            else:
                print("IMAGE_DIR or ANNO_PATH not exits.")

    print("Start Eval!")
    maskrcnn_eval(mindrecord_file, config.checkpoint_path, config.ann_file)
    print("ckpt_path=", config.checkpoint_path)

if __name__ == '__main__':
    eval_()


Inference

Finally, you can use your own dataset to test the trained model and perform object detection.

import random
import colorsys

import matplotlib.pyplot as plt
import matplotlib.patches as patches


set_seed(1)

def get_ax(rows=1, cols=1, size=16):
    """
    Set axis

    Return a Matplotlib Axes array to be used in all visualizations in the notebook. Provide a central
    point to control graph sizes.
    Adjust the size attribute to control how big to render images.

    Args:
        rows(int): Row size. Default: 1.
        cols(int): Column size. Default: 1.
        size(int): Size of each subplot in inches. Default: 16.

    Returns:
        Array, array of Axes
    """
    _, axis = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
    return axis

def mindrecord_to_rgb(img_data):
    """
    Return an RGB image recovered from a normalized image batch.

    Args:
        img_data(Array): An image batch in NCHW format.

    Returns:
        Array, an RGB image.
    """
    index = 0
    convert_img = (-np.min(img_data[index, :, :, :])+img_data[index, :, :, :]) *\
        255/(np.max(img_data[index, :, :, :])-np.min(img_data[index, :, :, :]))
    temp_img = convert_img.astype(np.uint8)
    image = np.zeros([config.img_height, config.img_width, 3])
    image[:, :, 0] = temp_img[0, :, :]
    image[:, :, 1] = temp_img[1, :, :]
    image[:, :, 2] = temp_img[2, :, :]
    return image

def random_colors(num, bright=True):
    """
    Generate random colors.

    To get visually distinct colors, generate them in HSV space then
    convert to RGB.

    Args:
        num(int): The color number.

    Returns:
        List, a list of different colors.
    """
    brightness = 1.0 if bright else 0.7
    hsv = [(i / num, 1, brightness) for i in range(num)]
    colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv))
    random.shuffle(colors)
    return colors

def infer():
    """
    Return Mask RCNN evaluated results.

    Returns:
        - output, tensor, Mask RCNN evaluated result.
                  [Tensor[2,80000,5], Tensor[2,80000,1], Tensor[2,80000,1], Tensor[2,80000,28,28]]
        - img, tensor, RGB image.
        - img_metas, list, shape (height, width, 3).
    """
    # load image
    device_target = config.device_target
    context.set_context(mode=context.GRAPH_MODE, device_target=device_target)

    mindrecord_dir = os.path.join(config.data_root, config.mindrecord_dir)

    prefix = "MaskRcnn_eval.mindrecord"

    mindrecord_file = os.path.join(mindrecord_dir, prefix)

    dataset = create_coco_dataset(mindrecord_file, batch_size=config.test_batch_size, is_training=False)

    total = dataset.get_dataset_size()
    image_id = np.random.choice(total, 1)

    # load model
    ckpt_path = config.checkpoint_path
    net = MaskRcnnResnet50(config)
    param_dict = load_checkpoint(ckpt_path)
    load_param_into_net(net, param_dict)
    net.set_train(False)

    data = list(dataset.create_dict_iterator(output_numpy=True, num_epochs=1))[image_id[0]]
    print("Image ID: ", image_id[0])
    img_data = data['image']
    img_metas = data['image_shape']
    gt_bboxes = data['box']
    gt_labels = data['label']
    gt_num = data['valid_num']
    gt_mask = data["mask"]

    img = mindrecord_to_rgb(img_data)

    start = time.time()
    # run net
    output = net(Tensor(img_data), Tensor(img_metas), Tensor(gt_bboxes),
                 Tensor(gt_labels), Tensor(gt_num), Tensor(gt_mask))
    end = time.time()
    print("Cost time of detection: {:.2f}".format(end - start))
    return output, img, img_metas

def detection(output, img, img_metas):
    """Mask RCNN Detection.

    Args:
        output(Tensor): Results evaluated by Mask RCNN.
                        [Tensor[2,80000,5], Tensor[2,80000,1], Tensor[2,80000,1], Tensor[2,80000,28,28]]
        img(Tensor): RGB image.
        img_metas(List): Image shape.
    """
    # scaling ratio
    ratio = img_metas[0, 2]

    # output
    all_bbox = output[0][0].asnumpy()
    all_label = output[1][0].asnumpy()
    all_mask = output[2][0].asnumpy()

    num = 0
    mask_id = -1
    type_ids = []
    # Keep valid detections whose confidence score exceeds 0.8.
    for bool_ in all_mask:
        mask_id += 1
        if np.equal(bool_, True) and all_bbox[mask_id, 4] > 0.8:
            type_ids.append(mask_id)
            num += 1
    print("Detected instance num:", num)

    # Generate random colors
    colors = random_colors(num)

    # Show area outside image boundaries.
    height = config.img_height
    width = config.img_width
    ax = get_ax(1)
    ax.set_ylim(height + 10, -10)
    ax.set_xlim(-10, width + 10)
    ax.axis('off')
    ax.set_title("Precision")

    masked_image = img.astype(np.uint32).copy()
    for j in range(num):
        color = colors[j]
        i = type_ids[j]
        # Bounding box
        x1, y1, x2, y2, _ = all_bbox[i]*ratio
        score = all_bbox[i, 4]

        p = patches.Rectangle((x1, y1), x2 - x1, y2 - y1, linewidth=2, alpha=0.7,
                              linestyle="dashed", edgecolor=color, facecolor='none')
        ax.add_patch(p)

        # Label
        class_names = config.data_classes
        class_id = all_label[i, 0].astype(np.uint8) + 1
        label = class_names[class_id]

        caption = "{} {:.3f}".format(label, score)
        ax.text(x1, y1 + 8, caption, color='w', size=11, backgroundcolor="none")

    ax.imshow(masked_image.astype(np.uint8))
    plt.show()

if __name__ == '__main__':
    out, img_rgb, img_shape = infer()
    detection(out, img_rgb, img_shape)


Reposted from blog.csdn.net/qq_46207024/article/details/140254824