Series Article Directory
Chapter 1: Creating a Dynamic Link Library (DLL) in Visual Studio 2019
Chapter 2: Debugging a DLL in VS
Chapter 3: Configuring the OpenCV Environment in VS2019
Chapter 4: Deploying a PyTorch Model in C++
Foreword
Environment: Visual Studio 2019; OpenCV 4.5.5; PyTorch 1.8; Libtorch 10.2
1. How to deploy PyTorch in C++?
I know of two ways to deploy a PyTorch model from C++. One is to convert the model to ONNX and run it with OpenCV's DNN module; the other is to use the Libtorch build that matches your PyTorch version. Testing the ONNX route showed that the converted model's semantic-segmentation accuracy deviated too much from the original, so Libtorch was chosen for deployment.
2. Libtorch configuration
Note: 1. The Libtorch version must match the PyTorch version.
2. Libtorch and PyTorch must both be CPU builds or both be GPU builds.
1. Download Libtorch
Download it from the PyTorch official website; both Release and Debug builds are available.
2. VS2019 configures Libtorch
2.1 Configure VC++ directory
First configure the include directory and library directory, in the same way as for OpenCV.
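As a sketch, assuming Libtorch was unpacked to D:\libtorch (a hypothetical path; use wherever you extracted it), the VC++ Directories entries would look like:

```
Include Directories:  D:\libtorch\include
                      D:\libtorch\include\torch\csrc\api\include
Library Directories:  D:\libtorch\lib
```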
2.2 Configure linker
Add all the .lib files as additional dependencies. Open cmd in the lib directory and run the command dir /b *.lib>1.txt to generate the list, then copy its contents into the linker's Additional Dependencies.
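For reference, the generated 1.txt is simply one library name per line. A sketch of what it might contain for a CPU build (the exact set varies by Libtorch version):

```
asmjit.lib
c10.lib
fbgemm.lib
torch.lib
torch_cpu.lib
```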
2.3 Libtorch environment variable configuration
You can either add the lib directory to the system PATH environment variable, or simply copy all the DLLs into the Release or Debug output directory and skip the environment configuration.
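For the copy-DLL route, a sketch of the cmd command (D:\libtorch and the x64\Release output directory are hypothetical paths; substitute your own):

```
copy D:\libtorch\lib\*.dll x64\Release\
```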
3. Convert the PyTorch model to a .pt file for Libtorch
# -*- coding:utf-8 -*-
import torch
model = torch.load("red_model.pth", map_location='cpu')
model.eval()
# Run an example input through the model so tracing can record its operations
example = torch.rand(1, 3, 512, 512)  # N*C*H*W
traced_script_module = torch.jit.trace(model, example)
# Save the traced model
traced_script_module.save("red_models_trace.pt")
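Before handing the .pt file to C++, it is worth sanity-checking that the traced module reproduces the original model's output on the same input. A minimal sketch, where `verify_trace` is a hypothetical helper (not part of the script above):

```python
import torch

def verify_trace(model, traced, example, tol=1e-5):
    """Compare the original model and its traced version on the same input."""
    model.eval()
    with torch.no_grad():
        out_ref = model(example)
        out_traced = traced(example)
    return torch.allclose(out_ref, out_traced, atol=tol)
```

For the script above you would call it as `verify_trace(model, traced_script_module, example)`; a False result usually means the model has data-dependent control flow that tracing did not capture.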
4. Using Libtorch in C++
/****************************************
@brief : semantic segmentation
@input : source image
@output : mask (returned through imgSrc)
*****************************************/
void SegmentAI(Mat& imgSrc, int width, int height)
{
cv::Mat transImg;
cv::resize(imgSrc, transImg, cv::Size(512, 512));
//Deserialize the ScriptModule
torch::jit::script::Module Module = torch::jit::load("models_trace.pt");
Module.eval();
//Module.to(at::kCUDA); // uncomment for the GPU build
//preprocessing
//cv::cvtColor(transImg, transImg, cv::COLOR_BGR2RGB); // convert BGR to RGB if the model was trained on RGB input
//Mat to tensor
torch::Tensor tensorImg = torch::from_blob(transImg.data, { transImg.rows, transImg.cols,3 }, torch::kByte);
tensorImg = tensorImg.permute({ 2,0,1 });
tensorImg = tensorImg.toType(torch::kFloat);
tensorImg = tensorImg.div(255);
tensorImg = tensorImg.unsqueeze(0);
//Execute the model
torch::Tensor output = Module.forward({ tensorImg }).toTensor();
//tensor to Mat
torch::Tensor output_max = output.argmax(1);
output_max = output_max.squeeze();
output_max = output_max == 1;
output_max = output_max.mul(255).to(torch::kU8);
output_max = output_max.to(torch::kCPU);
Mat conjMask(Size(512, 512), CV_8UC1);
memcpy(conjMask.data, output_max.data_ptr(), output_max.numel() * sizeof(uint8_t));
//keep only the largest connected component
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
double largest_area = 0;
int largest_contour_index = 0;
findContours(conjMask, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); i++) // iterate through each contour.
{
double area = contourArea(contours[i]); // Find the area of contour
if (area > largest_area)
{
largest_area = area;
largest_contour_index = i;
}
}
Mat conjMaskMax = Mat(512, 512, CV_8UC1, cv::Scalar::all(0));
if (contours.size() != 0)
{
drawContours(conjMaskMax, contours, largest_contour_index, Scalar(255), FILLED);
}
resize(conjMaskMax, conjMaskMax, cv::Size(width, height));
conjMaskMax.convertTo(imgSrc, CV_8UC1, 255, 0); // scaling by 255 saturates interpolated edge pixels back to 255, re-binarizing the resized mask
}