Invertible image denoising - InvDN model inference test


Performance: InvDN outperforms most existing competing models, achieving new state-of-the-art results on the SIDD dataset while also reducing runtime. This shows that the method is both efficient and accurate in handling real-world noise.

Model size: In addition, InvDN is much smaller than DANet, with only 4.2% of its number of parameters. The model therefore maintains high performance while keeping a small footprint, which is very beneficial for deployment on resource-constrained devices.

Generating noise: By manipulating the latent representation of the noise, InvDN can also generate new noise that closely resembles the original noise. The method can therefore not only remove noise but also synthesize it, which increases the flexibility of its applications.

1. Source code package

Official website address: InvDN
Paper address: Paper
There are some problems when running the official source code package directly. I modified and debugged the code myself, so it is recommended that readers use the source code package I provide for testing: network disk source code package, extraction code: 4qs2

2. Feeding data into the network

The official script reads .mat data by default. You can also modify the reading code yourself to load images with OpenCV. The source code package I provide contains two script files, one for each reading method.

2.1 Read .mat data

To use this method, you first need to convert the test image (.png or .jpg) into .mat format.

For a tutorial on producing .mat data, readers can refer to my other blog post: .mat data production

The script to convert data into .mat format is also in the source code package I provided:

[Screenshot: the .mat conversion script in the source code package]
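
For reference, below is a minimal sketch of what such a conversion might look like using OpenCV and SciPy. The file names and the 'noisy' key are my own assumptions; check the conversion script and the reading code in the package for the exact key that is expected.

```python
import cv2
import numpy as np
import scipy.io as sio

# Read the test image (OpenCV loads it as BGR) and convert to RGB.
img = cv2.imread('noisy.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Store the image array under a key the test script can look up.
# The key name 'noisy' is only an assumption - use whatever key the
# reading code in the package actually expects.
sio.savemat('noisy.mat', {'noisy': img.astype(np.uint8)})
```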

In the source code package I provided, there is a .mat file that I have produced, as follows:

[Screenshot: the prepared .mat file]

2.1.1 Modify configuration file

The configuration file location is as follows:

[Screenshot: configuration file location]

[Screenshot: configuration file contents, including the pretrained weights path]

The above pretrained weights are in the pretrained folder in the source code package I provided:

[Screenshot: the pretrained folder]
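
If you want to quickly confirm the checkpoint can be loaded before running the test, a small sanity check like the one below works; the file name InvDN.pth is only an assumption, so point it at the .pth file that is actually in the pretrained folder.

```python
import torch

# Load the checkpoint on the CPU and list a few parameter tensors,
# just to confirm the file is readable before running the test script.
# 'pretrained/InvDN.pth' is an assumed name - use the actual .pth file.
state_dict = torch.load('pretrained/InvDN.pth', map_location='cpu')
print(len(state_dict), 'parameter tensors')
for name in list(state_dict)[:5]:
    print(name, tuple(state_dict[name].shape))
```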

2.1.2 Parameter modification

For the convenience of debugging, I added the configuration file path directly to the test_Real_Single.py file, as follows:

[Screenshots: configuration file path hard-coded in test_Real_Single.py]
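
Roughly speaking, the modification amounts to giving the option argument a default value so the script can run without command-line flags. A minimal sketch is shown below; the '-opt' argument name and the .yml path are assumptions based on how BasicSR-style test scripts are usually written, so adapt them to the actual script.

```python
import argparse

parser = argparse.ArgumentParser()
# Give the configuration-file argument a default value so the script
# can simply be run from the IDE without any command-line arguments.
# Both the '-opt' flag and the path below are assumptions.
parser.add_argument('-opt', type=str,
                    default='options/test/test_InvDN.yml',
                    help='Path to the test configuration file.')
args = parser.parse_args()
print('Using configuration:', args.opt)
```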
Run the script directly after modifying the above parameters.


After running, the test results will be saved in the resurt_imags folder:

[Screenshot: test results in the resurt_imags folder]

2.2 Reading data with cv2

The previous method requires converting images to .mat format before they can be read. Here the reading code has been changed to use OpenCV (cv2) to read images directly for the inference test.

2.2.1 Modify parameters

When using it, readers only need to modify the path of the input noisy image, as follows:

[Screenshot: noisy-image path parameter]
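
For orientation, a minimal sketch of how a noisy image can be read with cv2 and turned into a network input tensor is given below. The path, normalization, and NCHW layout are assumptions; the actual preprocessing in the script I provide may differ slightly.

```python
import cv2
import numpy as np
import torch

# Read the noisy image directly with OpenCV and convert BGR -> RGB.
img = cv2.imread('noisy.png')            # path to the noisy test image
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Scale to [0, 1] and rearrange to the usual PyTorch NCHW layout.
tensor = torch.from_numpy(img.astype(np.float32) / 255.0)
tensor = tensor.permute(2, 0, 1).unsqueeze(0)   # 1 x 3 x H x W
```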

After the run is completed, the inference results will also be saved in the resurt_imags folder.

2.2.2 Inference speed test

Code for measuring the inference time has been added to the script file, as shown below, along with the results of a test on the GPU:

[Screenshots: inference-timing code and GPU timing result]
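
The important detail when timing on the GPU is to synchronize CUDA before reading the clock, otherwise the measurement only reflects kernel launches. Below is a minimal sketch of that pattern; the stand-in convolution and random input are placeholders for the loaded InvDN network and the preprocessed noisy image.

```python
import time
import torch
import torch.nn as nn

# Stand-in model and input, purely to illustrate the timing pattern.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1).cuda().eval()
tensor = torch.rand(1, 3, 256, 256, device='cuda')

with torch.no_grad():
    _ = model(tensor)               # warm-up run, not timed

torch.cuda.synchronize()            # wait for any pending GPU work
start = time.time()
with torch.no_grad():
    _ = model(tensor)
torch.cuda.synchronize()            # make sure inference has finished
print('Inference time: %.4f s' % (time.time() - start))
```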

3. Test results

[Test result images]

4. Summary

The above is the whole process of the InvDN model inference test for invertible image denoising. I have not trained the model; for now I have only tested the effect of this method.

Writing this up was not easy; thank you for your support!

Original article: blog.csdn.net/qq_40280673/article/details/134692335