YOLOv8 Series [4]: YOLOv8 Model Deployment

Jetson platform

0. Environment setup

To download torch and torchvision, refer to the collection of official PyTorch installation commands.
The versions I used are:
torch-1.10.0-cp37-cp37m-linux_aarch64.whl
torchvision-0.11.0-cp37-cp37m-linux_aarch64.whl
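For reference, a minimal install sketch, assuming the two wheels above have already been downloaded to the working directory on the Jetson and that pip matches the Python version (cp37) they were built for:

# install the prebuilt aarch64 wheels
pip3 install torch-1.10.0-cp37-cp37m-linux_aarch64.whl
pip3 install torchvision-0.11.0-cp37-cp37m-linux_aarch64.whl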

1. Download the source code

Download: DeepStream-Yolo
Download: ultralytics
Copy DeepStream-Yolo/utils/export_yoloV8.py into the ultralytics root directory:

cp DeepStream-Yolo/utils/export_yoloV8.py ultralytics
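For completeness, a sketch of fetching both projects, assuming the upstream GitHub repositories are the ones intended here (pin a specific release if you need reproducibility):

git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
git clone https://github.com/ultralytics/ultralytics.git
pip3 install -e ultralytics    # install the ultralytics package so the export script can import YOLO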

2. Convert the .pt model to a .onnx model

  • Conversion script
python export_yoloV8.py -w drone_yolov8m_best.pt --opset=12

Running the script above produces labels.txt and drone_yolov8m_best.onnx (a quick sanity check of the exported model is shown after this list).

  • Issue encountered: converting with the script below raises an error; adding --opset=12 resolves it
python export_yoloV8.py -w drone_yolov8m_best.pt
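To sanity-check the exported ONNX before moving on to DeepStream, a quick check with the onnx Python package (assuming it is installed) validates the graph and lists its inputs and outputs:

python3 -c "import onnx; m = onnx.load('drone_yolov8m_best.onnx'); onnx.checker.check_model(m); print([i.name for i in m.graph.input], [o.name for o in m.graph.output])"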

3. Configure DeepStream-Yolo

  1. Build the custom lib
    Run the command below to generate the library (CUDA_VER must match the CUDA toolkit on the device; see the version check after the config below):
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
  2. Configure config_infer_primary_yoloV8
    Modify the relevant settings in config_infer_primary_yoloV8.txt:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=drone_yolov8m_best.onnx
model-engine-file=drone_yolov8m.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels_drone.txt
batch-size=1
network-mode=0
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
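A note on the CUDA_VER value used in the build step above: it must match the CUDA toolkit that JetPack installed on the device. A quick way to check, assuming the standard /usr/local/cuda install path:

/usr/local/cuda/bin/nvcc --version    # prints the toolkit release, e.g. 11.4
ls -d /usr/local/cuda-*               # lists the installed toolkit directories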

4. Run

deepstream-app -c deepstream_app_config_yolov8_drone.txt
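The run command above references deepstream_app_config_yolov8_drone.txt. As a rough guide, here is a minimal sketch of such a deepstream-app pipeline config, modeled on the deepstream_app_config.txt sample that ships with DeepStream-Yolo; the video URI is a hypothetical placeholder, and config-file points at the inference config edited in step 3:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720

[source0]
enable=1
# type=3 selects a URI (e.g. local file) source
type=3
uri=file:///path/to/drone_test.mp4
num-sources=1
gpu-id=0

[sink0]
enable=1
# type=2 renders the output to screen
type=2
sync=0

[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
# inference config edited in step 3
config-file=config_infer_primary_yoloV8.txt

[tests]
file-loop=0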

Reference: Deploy YOLOv8 on NVIDIA Jetson using TensorRT and DeepStream SDK

Reposted from blog.csdn.net/qq122716072/article/details/130930158