YOLOV5 running code RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM

1. Check that the input data is in the correct format: its dimensions and dtype must match what the model expects.
2. Check the model's parameter settings and make sure they are consistent with the input data.
3. Check whether GPU memory is sufficient. If it is running low, try reducing the batch size or using a smaller model.
4. Update cuDNN and the GPU driver to their latest versions.
5. If none of the above solves the problem, run the code on the CPU to get a more detailed error message.
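As a minimal sketch of the first two checks, the snippet below aligns an input tensor's device and dtype with the model's parameters before the forward pass (the `Conv2d` layer and the 640x640 input size are placeholder assumptions, not YOLOv5's actual code):

```python
import torch
import torch.nn as nn

# Placeholder stand-in for a model layer; YOLOv5 itself expects (N, 3, H, W) input
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Conv2d(3, 16, kernel_size=3).to(device)
if device == "cuda":
    model = model.half()  # match whatever precision the model was configured with

img = torch.randn(1, 3, 640, 640)  # raw input: float32 on CPU

# Align the input's device and dtype with the model's parameters
param = next(model.parameters())
img = img.to(device=param.device, dtype=param.dtype)

out = model(img)
print(out.shape)  # torch.Size([1, 16, 638, 638])
```

Mismatched dtypes (e.g. a float32 input fed to a half-precision model) are one of the common triggers for CUDNN_STATUS_BAD_PARAM.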

Problem Description


Solution

The environment itself is fine. The error message suggests reproducing the exception with the following standalone snippet:

import torch
# Backend flags printed alongside the cuDNN error
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
# Half-precision 1x1 convolution over a 1x32x800x800 input on the GPU
data = torch.randn([1, 32, 800, 800], dtype=torch.half, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(32, 16, kernel_size=[1, 1], padding=[0, 0], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().half()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()

The input image is too large, which exhausts GPU memory.

You can run the code on the CPU instead to see a clearer error message.
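A sketch of rerunning the same reproduction on the CPU. Note that half-precision convolutions are generally a CUDA-only path, so the dtype is switched to float32 here; on the CPU, a genuine shape or parameter problem raises a readable Python error instead of an opaque cuDNN status code:

```python
import torch

# Same shapes as the GPU repro, but on the CPU in float32
data = torch.randn(1, 32, 800, 800, requires_grad=True)
net = torch.nn.Conv2d(32, 16, kernel_size=1)
out = net(data)
out.backward(torch.randn_like(out))
print(out.shape)  # torch.Size([1, 16, 800, 800])
```

If this CPU version runs cleanly, the problem is specific to the GPU path (memory, driver, or cuDNN version) rather than the model or the data.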


Another option is to reduce the number of dataloader workers.
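In PyTorch terms, reducing the workers and batch size looks like the sketch below; the dummy dataset and sizes are illustrative assumptions, not YOLOv5's actual loader:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset standing in for the training images
dataset = TensorDataset(torch.randn(64, 3, 64, 64))

# num_workers=0 loads data in the main process (fewer worker-related failures);
# a smaller batch_size lowers peak GPU memory per step.
loader = DataLoader(dataset, batch_size=4, num_workers=0, pin_memory=False)

batch, = next(iter(loader))
print(batch.shape)  # torch.Size([4, 3, 64, 64])
```

In YOLOv5 these correspond to the `--workers` and `--batch-size` command-line flags of `train.py`.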

Reference article

The problem persisted after these attempts. I am setting it aside for now; something urgent came up.

Similar questions

Solved

Commenting out the offending code fixed it.

Origin blog.csdn.net/qq_41701723/article/details/133266387