Using libtorch: Calling a PyTorch Model from C++

Ubuntu

1. Convert the trained model to TorchScript and save it as a .pt file, following:
https://pytorch.apachecn.org/docs/1.0/cpp_export.html
Caution: if the model has parameters whose values can only be determined from the input, tracing will freeze those parameters into constants in the generated TorchScript. Keep this in mind.

2. Install the matching version of libtorch
(Note: the libtorch version should ideally match your installed PyTorch version.
For example, my machine has PyTorch 1.4.0, so I downloaded libtorch 1.4.0.)
You can build libtorch from source or download a prebuilt package; the prebuilt one is recommended:
https://download.pytorch.org/libtorch/cu101/libtorch-shared-with-deps-1.4.0.zip
After downloading, simply unzip it.
3. Load the .pt model and verify its output
The code files are as follows. BrickDetect.h:

//
// Created by mzy on 2021/2/28.
//

#ifndef TEST3_BRICKDETECT_H
#define TEST3_BRICKDETECT_H

#include <iostream>
#include "opencv2/opencv.hpp"
#include <torch/torch.h>
#include <torch/script.h>

class BrickDetect
{
public:
    BrickDetect(std::string path);
    torch::jit::script::Module module;
    std::vector<cv::Point> Brick_Detect(cv::Mat image);
private:
    std::vector<cv::Point> get_peak_points(torch::Tensor heatmaps);
};

#endif // TEST3_BRICKDETECT_H

BrickDetect.cpp

#include "BrickDetect.h"

BrickDetect::BrickDetect(std::string model_path)
{
    module = torch::jit::load(model_path);
    module.to(torch::kCUDA);
    module.eval();  // inference mode: disables dropout / batch-norm training behavior
}

std::vector<cv::Point> BrickDetect::get_peak_points(torch::Tensor heatmaps)
{
    const int C = 4;  // number of heatmap channels (keypoints)
    heatmaps = heatmaps.squeeze();
    std::vector<cv::Point> all_peak_points;

    for (int j = 0; j < C; j++)
    {
        torch::Tensor tmp = heatmaps[j];
        cv::Point point_tmp(-1, -1);  // (-1,-1) marks "no peak found"
        float max_value = torch::max(tmp).item<float>();
        if (max_value > 0.5)
        {
            std::vector<torch::Tensor> index = torch::where(tmp >= max_value);
            int y = index[0][0].item<long>();
            int x = index[1][0].item<long>();
            point_tmp.x = x;
            point_tmp.y = y;
        }
        all_peak_points.push_back(point_tmp);
    }

    return all_peak_points;
}
std::vector<cv::Point> BrickDetect::Brick_Detect(cv::Mat image)
{
    // Pad the right edge by 12 pixels so the width matches the network input
    cv::copyMakeBorder(image, image, 0, 0, 0, 12, cv::BORDER_CONSTANT, 0);
    torch::Tensor tensor_image = torch::from_blob(image.data,
                                                  { image.rows, image.cols, 3 },
                                                  torch::kByte);
    tensor_image = tensor_image.permute({ 2, 0, 1 });  // HWC -> CHW
    tensor_image = tensor_image.toType(torch::kFloat);
    tensor_image = tensor_image.div(255);              // scale to [0, 1]
    tensor_image = tensor_image.unsqueeze(0);          // add batch dimension
    tensor_image = tensor_image.to(torch::kCUDA);
    // Forward pass
    torch::Tensor output = module.forward({ tensor_image }).toTensor();
    std::vector<cv::Point> results = get_peak_points(output.to(torch::kCPU));
    return results;
}

main.cpp

#include <iostream>
#include "opencv2/opencv.hpp"
#include "BrickDetect.h"

int main() {
    std::string model_path = "/home/mzy/MultiBick/hourglass/model.pt";
    BrickDetect brickdetect_(model_path);
    cv::Mat image = cv::imread("/home/mzy/MultiText/AllImage/c52_2020_11_18_10_25_39.jpg");
    cv::Mat origin_image = image;
    std::vector<cv::Point> results = brickdetect_.Brick_Detect(image);

    for (size_t i = 0; i < results.size(); i++)
    {
        if (results[i].x > 0)  // skip the (-1,-1) "not found" markers
            cv::circle(origin_image, results[i], 5, cv::Scalar(255, 0, 0), 5);
    }
    cv::imshow("result", origin_image);
    cv::waitKey();
    return 0;
}

The CMakeLists.txt is as follows:

cmake_minimum_required(VERSION 3.0)
project(test3)

set(CMAKE_CXX_STANDARD 11)
set(CMAKE_BUILD_TYPE Release)
set(OpenCV_DIR /home/mzy/workspace/opencv-3.4/build)
find_package(OpenCV 3 REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
set(Torch_DIR /home/mzy/Downloads/libtorch/share/cmake/Torch)  # path to my libtorch

find_package(Torch REQUIRED)
message(STATUS "OpenCV_LIBS = ${OpenCV_LIBS}")
message(STATUS "OpenCV_INCLUDE_DIRS = ${OpenCV_INCLUDE_DIRS}")
message(STATUS "TORCH_LIBRARIES = ${TORCH_LIBRARIES}")
message(STATUS "CUDA_LIBRARIES = ${CUDA_LIBRARIES}")
add_library(BrickDetect SHARED BrickDetect.cpp BrickDetect.h)
add_executable(test3 main.cpp BrickDetect.cpp BrickDetect.h)
link_directories(
        /home/mzy/Downloads/libtorch/lib
)
target_link_libraries(test3
        #${BrickDetect}
        ${OpenCV_LIBS}
        ${CUDA_LIBRARIES}
        ${TORCH_LIBRARIES}
        )

A few errors I hit, and how I resolved them:
1. "cannot detect CUDA"
Even with the CUDA path correctly set in CMakeLists.txt, this runtime error persisted; I resolved it by upgrading from Ubuntu 16.04 to Ubuntu 18.04.
2. The project uses imread, and the linker reported "imread unreferenced", meaning the corresponding library was not found. The OpenCV paths in CMakeLists.txt were correct, yet the error remained; my fix was to downgrade OpenCV 4.1 to 3.4, see: https://blog.csdn.net/weixin_39326879/article/details/114277105
3. "CUDA out of memory" errors can be resolved as follows:

{
    torch::NoGradGuard no_grad;
    module->weight += 1;
}  // Note that anything out of this scope will still record gradients

4. Namespace clashes: libtorch's own code is not especially hygienic and can conflict with other libraries, so be disciplined in your own code and avoid using namespace.
5. Mind the order of commands in CMakeLists.txt.

Finally, in my tests on the same GPU, libtorch was roughly 5x slower than PyTorch.
On how to benchmark CPU speed from C++, see: https://blog.csdn.net/weixin_39326879/article/details/114277027

Windows

This again builds on libtorch, the C++ distribution of PyTorch.
1. Convert the trained model to TorchScript and save it as a .pt file, following:
https://pytorch.apachecn.org/docs/1.0/cpp_export.html
Caution: if the model has parameters whose values can only be determined from the input, tracing will freeze those parameters into constants in the generated TorchScript.
2. Install the matching version of libtorch
(Note: the libtorch version should ideally match your installed PyTorch version.
For example, my machine has PyTorch 1.4.0, so I downloaded libtorch 1.4.0.)
You can build libtorch from source or download a prebuilt package; the prebuilt one is recommended:
https://download.pytorch.org/libtorch/cu101/libtorch-win-shared-with-deps-1.4.0.zip
After unzipping, add the include and lib folders to the VS2017 project properties just as you would for OpenCV, then link only the static import libraries:
torch.lib
c10.lib
caffe2.lib
Once added, libtorch is ready to use.
I tried the following code:

#include <iostream>
#include <torch/script.h>

int main()
{
    torch::Tensor tensor = torch::rand({ 5, 3 });
    std::cout << tensor << std::endl;
    return 0;
}

Error:
"std is ambiguous"
Fix:
Set Properties > C/C++ > Language > Conformance Mode to No, and the problem goes away.
Next error:
"The code execution cannot proceed because c10.dll was not found" (or some other .dll from libtorch/lib)
Fix:
Add the libtorch library path under Properties > Debugging > Environment:
PATH=D:\Code_Lib\libtorch\lib;%PATH%
Reference:
https://blog.csdn.net/zzz_zzz12138/article/details/109138805
3. Load the .pt model and verify its output
This final step did not succeed on Windows.

Reposted from blog.csdn.net/weixin_39326879/article/details/114001179