WholeBody Estimation Based on YOLOv8



Preface

YOLO is powerful: powerful enough to handle several joint detection tasks at once, and it is not only accurate but also fast. YOLO-Pose, for example, is hard to beat. So can we build more tasks on top of YOLO-Pose? For instance, YOLO-based whole-body keypoint detection that covers all the keypoints of the face, body, hands, and feet in a single pass? This post explores whether that is feasible.


The whole-body keypoint detection results look like this:
[figure: whole-body keypoint detection demo]

1. Creating a YOLO-Format Dataset

Following the COCO-WholeBody annotations, we need to convert them into a YOLO-format dataset.
If you use the original coco2yolo script as-is, you run into warnings like these:

val: WARNING ⚠️ /home/wqt/Datasets/coco/images/val2017/000000369503.jpg: ignoring corrupt image/label: non-normalized or out of bounds coordinate labels
......
......
val: WARNING ⚠️ /home/wqt/Datasets/coco/images/val2017/000000497344.jpg: ignoring corrupt image/label: non-normalized or out of bounds coordinate labels

In other words, a large number of annotated keypoints fall outside the image, while YOLO expects normalized coordinates in [0, 1]. So the conversion needs a fix: whenever a coordinate lies outside the image width/height, we clamp it to the border and set visibility = 0, i.e. not visible.
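For orientation, each line of a converted label file describes one person: a class id, a normalized bbox, and then 133 (x, y, visibility) triplets, i.e. 1 + 4 + 133 × 3 = 404 numbers per line. The values below are hypothetical and only illustrate the layout:

0 0.512 0.434 0.310 0.720 0.471 0.188 2 0.495 0.176 2 ... (133 triplets in total)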
The full coco2yolo.py conversion script is as follows:

"""
2021/1/24
COCO 格式的数据集转化为 YOLO 格式的数据集,源代码采取遍历方式,太慢,
这里改进了一下时间复杂度,从O(nm)改为O(n+m),但是牺牲了一些内存占用
--json_path 输入的json文件路径
--save_path 保存的文件夹名字,默认为当前目录下的labels。
"""

import os
import json
import argparse
from tqdm import tqdm

parser = argparse.ArgumentParser()
parser.add_argument('--json_path', default='./instances_val2017.json', type=str, help="input: coco format(json)")
parser.add_argument('--save_path', default='./labels', type=str, help="specify where to save the output dir of labels")
arg = parser.parse_args()

def convert(size, box):
    """Convert a COCO bbox (x_min, y_min, w, h) in pixels to a YOLO bbox
    (x_center, y_center, w, h) normalized by the image size."""
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[2] / 2.0) * dw
    y = (box[1] + box[3] / 2.0) * dh
    w = box[2] * dw
    h = box[3] * dh
    return (x, y, w, h)

def convertKpts(size, kpts):
    """Normalize a flat [x1, y1, v1, x2, y2, v2, ...] keypoint list by the
    image size. Points falling outside the image are clamped to the border
    and marked not visible (v = 0), so YOLO's label check accepts them."""
    kpts_str = ''
    for i in range(0, len(kpts), 3):
        w_factor = kpts[i] / size[0]
        h_factor = kpts[i + 1] / size[1]
        conf_factor = kpts[i + 2]
        # Clamp out-of-bounds coordinates to [0, 1] and zero the visibility
        if not (0.0 <= w_factor <= 1.0) or not (0.0 <= h_factor <= 1.0):
            w_factor = min(max(w_factor, 0.0), 1.0)
            h_factor = min(max(h_factor, 0.0), 1.0)
            conf_factor = 0
        kpts_str += f'{w_factor} {h_factor} {conf_factor} '
    return kpts_str

if __name__ == '__main__':
    json_file = arg.json_path          # COCO object-instance style annotation file
    ana_txt_save_path = arg.save_path  # output directory

    data = json.load(open(json_file, 'r'))
    if not os.path.exists(ana_txt_save_path):
        os.makedirs(ana_txt_save_path)
    
    id_map = {}  # COCO category ids are not contiguous; remap them to 0..n-1
    for i, category in enumerate(data['categories']):
        id_map[category['id']] = i

    # Build a per-image annotation index up front to cut the time complexity
    max_id = 0
    for img in data['images']:
        max_id = max(max_id, img['id'])
    # Note: do not write [[]] * (max_id + 1); all the inner empty lists
    # would share the same object
    img_ann_dict = [[] for i in range(max_id + 1)]
    for i, ann in enumerate(data['annotations']):
        img_ann_dict[ann['image_id']].append(i)

    for img in tqdm(data['images']):
        filename = img["file_name"]
        img_width = img["width"]
        img_height = img["height"]
        img_id = img["id"]
        head, tail = os.path.splitext(filename)
        ana_txt_name = head + ".txt"  # the label file shares the image's base name
        f_txt = open(os.path.join(ana_txt_save_path, ana_txt_name), 'w')
        '''The original O(n*m) version scanned every annotation per image:
        for ann in data['annotations']:
            if ann['image_id'] == img_id:
                box = convert((img_width, img_height), ann["bbox"])
                f_txt.write("%s %s %s %s %s\n" % (id_map[ann["category_id"]], box[0], box[1], box[2], box[3]))'''
        # With the index built above we can look the annotations up directly
        for ann_id in img_ann_dict[img_id]:
            ann = data['annotations'][ann_id]
            if ann["bbox"] is None:
                continue
            box = convert((img_width, img_height), ann["bbox"])
            # id (1) + bbox (4)
            f_txt.write("%s %s %s %s %s " % (id_map[ann["category_id"]], box[0], box[1], box[2], box[3]))
            # 133 keypoints per person, written in the order body (17x3=51)
            # + feet (6x3=18) + face (68x3=204) + left hand (21x3=63)
            # + right hand (21x3=63) = 399 values
            keypoints = convertKpts((img_width, img_height), ann['keypoints'])
            face_kpts = convertKpts((img_width, img_height), ann['face_kpts'])
            lefthand_kpts = convertKpts((img_width, img_height), ann['lefthand_kpts'])
            righthand_kpts = convertKpts((img_width, img_height), ann['righthand_kpts'])
            foot_kpts = convertKpts((img_width, img_height), ann['foot_kpts'])
            f_txt.write(keypoints + foot_kpts + face_kpts + lefthand_kpts + righthand_kpts)
            f_txt.write('\n') 
        f_txt.close()
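To generate labels for both splits, run the script once per annotation file. The JSON filenames below follow the official COCO-WholeBody release and are an assumption; adjust the paths to your local layout:

python coco2yolo.py --json_path annotations/coco_wholebody_train_v1.0.json --save_path /home/wqt/Datasets/coco/labels/train2017
python coco2yolo.py --json_path annotations/coco_wholebody_val_v1.0.json --save_path /home/wqt/Datasets/coco/labels/val2017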

2. Starting Training

Training is unfriendly to anyone short on GPU memory: one epoch takes almost half an hour, and the largest batch size that fits is 16.

Here is what a training sample looks like:
[figure: a training sample with dense whole-body annotations]
The annotations are very dense, and even tiny targets are included.

Then comes the long training process.
The training configuration is as follows:

task=pose, mode=train, model=/home/wqt/NewProjects/ultralyticsWholeBody/runs/pose/train9/weights/best.pt, data=coco8-pose.yaml, epochs=100, patience=50, batch=16, imgsz=640, save=True, save_period=20, cache=False, device=, workers=8, project=None, name=/home/wqt/NewProjects/ultralyticsWholeBody/runs/pose/train, exist_ok=False, pretrained=True, optimizer=SGD, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=0, resume=False, amp=True, fraction=1.0, profile=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, show=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, vid_stride=1, line_width=None, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, boxes=True, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0, cfg=None, v5loader=False, tracker=botsort.yaml, save_dir=/home/wqt/NewProjects/ultralyticsWholeBody/runs/pose/train10
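Note that data above still points at coco8-pose.yaml; whichever dataset YAML is used, it has to declare the 133-point keypoint shape that the Pose head below is built with. A minimal sketch, where the paths are taken from the logs in this post and everything else is an assumption (the project's actual YAML is not shown):

# hypothetical coco-wholebody dataset config, sketch only
path: /home/wqt/Datasets/coco
train: images/train2017
val: images/val2017
kpt_shape: [133, 3]  # 133 keypoints per person, stored as (x, y, visibility)
# the fliplr augmentation additionally needs a 133-entry flip_idx table
# that swaps left/right keypoints; it is omitted here
names:
  0: person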

The model configuration is as follows:


                   from  n    params  module                                       arguments                     
  0                  -1  1       928  ultralytics.nn.modules.conv.Conv             [3, 32, 3, 2]                 
  1                  -1  1     18560  ultralytics.nn.modules.conv.Conv             [32, 64, 3, 2]                
  2                  -1  1     29056  ultralytics.nn.modules.block.C2f             [64, 64, 1, True]             
  3                  -1  1     73984  ultralytics.nn.modules.conv.Conv             [64, 128, 3, 2]               
  4                  -1  2    197632  ultralytics.nn.modules.block.C2f             [128, 128, 2, True]           
  5                  -1  1    295424  ultralytics.nn.modules.conv.Conv             [128, 256, 3, 2]              
  6                  -1  2    788480  ultralytics.nn.modules.block.C2f             [256, 256, 2, True]           
  7                  -1  1   1180672  ultralytics.nn.modules.conv.Conv             [256, 512, 3, 2]              
  8                  -1  1   1838080  ultralytics.nn.modules.block.C2f             [512, 512, 1, True]           
  9                  -1  1    656896  ultralytics.nn.modules.block.SPPF            [512, 512, 5]                 
 10                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 11             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 12                  -1  1    591360  ultralytics.nn.modules.block.C2f             [768, 256, 1]                 
 13                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 14             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 15                  -1  1    148224  ultralytics.nn.modules.block.C2f             [384, 128, 1]                 
 16                  -1  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]              
 17            [-1, 12]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 18                  -1  1    493056  ultralytics.nn.modules.block.C2f             [384, 256, 1]                 
 19                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]              
 20             [-1, 9]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 21                  -1  1   1969152  ultralytics.nn.modules.block.C2f             [768, 512, 1]                 
 22        [15, 18, 21]  1  10115986  ultralytics.nn.modules.head.Pose             [1, [133, 3], [128, 256, 512]]
YOLOv8s-pose summary: 250 layers, 19135538 parameters, 19135522 gradients

The dataset scan and training start-up logs are as follows:

Transferred 397/397 items from pretrained weights
AMP: running Automatic Mixed Precision (AMP) checks with YOLOv8n...
AMP: checks passed ✅
optimizer: SGD(lr=0.01) with parameter groups 63 weight(decay=0.0), 73 weight(decay=0.0005), 72 bias
train: Scanning /home/wqt/Datasets/coco/labels/train2017.cache... 64115 images, 0 backgrounds, 0 corrupt: 
val: Scanning /home/wqt/Datasets/coco/labels/val2017.cache... 2693 images, 0 backgrounds, 0 corrupt
Plotting labels to /home/wqt/NewProjects/ultralyticsWholeBody/runs/pose/train10/labels.jpg... 
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to /home/wqt/NewProjects/ultralyticsWholeBody/runs/pose/train10
Starting training for 100 epochs...
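For reference, an equivalent run can be launched from the ultralytics Python API. The checkpoint path mirrors the configuration dump above; 'demo.jpg' is a placeholder image:

from ultralytics import YOLO

# fine-tune from the previous run's best checkpoint, as in the config above
model = YOLO('/home/wqt/NewProjects/ultralyticsWholeBody/runs/pose/train9/weights/best.pt')
model.train(data='coco8-pose.yaml', epochs=100, batch=16, imgsz=640, save_period=20)

# after training, every detected person carries 133 (x, y, conf) keypoints
results = model('demo.jpg')
print(results[0].keypoints.shape)  # -> (num_persons, 133, 3)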

Source: blog.csdn.net/wqthaha/article/details/131639252