21 Days of Caffe in Practice - Day 3: testing pictures of my own handwriting

I followed the step-by-step procedure from the internet that worked last time, but the recognition accuracy is still not good enough.

1. First, prepare your own handwriting picture: it must be a single-channel, black-and-white image of 28 × 28 pixels (OpenCV can do this; it isn't installed on my computer, so the Windows Paint program works too).
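If OpenCV is available, a minimal Python sketch like the following does the conversion (the input file name is a placeholder for your own photo; the output path simply matches the image used in the test command later):

# Minimal sketch, assuming OpenCV's Python bindings (cv2) are installed.
# 'my_digit.jpg' is a placeholder for your own photo of a handwritten digit.
import cv2

img = cv2.imread('my_digit.jpg', cv2.IMREAD_GRAYSCALE)  # load as a single channel
img = cv2.resize(img, (28, 28))                         # scale to 28 x 28 pixels
# MNIST digits are white on black; invert if your strokes are dark on a light background.
img = cv2.bitwise_not(img)
cv2.imwrite('examples/images/3.jpg', img)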

The specific process is as follows:

2. Next, generate a deploy.prototxt from the lenet_train_test.prototxt training file. The former is used at test (deployment) time, while the latter is the network configuration used for training.

So we can copy the training file into the same location, rename the copy deploy.prototxt, and then edit it: the data/label layers are replaced by a single Input layer, and the loss and accuracy layers are dropped in favour of a Softmax layer called "prob".

It reads as follows:

 

name: "LeNet"  
 
   
layer {  
  name:"data"  
 type: "Input"  
 top: "data"  
 input_param { shape: { dim: 1 dim: 1 dim: 28 dim: 28 } }  
}  
   
  
layer {  
  name:"conv1"  
 type: "Convolution"  
 bottom: "data"  
 top: "conv1"  
 convolution_param {  
   num_output: 20  
   kernel_size: 5  
   stride: 1  
   weight_filler {  
     type: "xavier"  
   }  
 }  
}  
layer {  
  name:"pool1"  
 type: "Pooling"  
 bottom: "conv1"  
 top: "pool1"  
 pooling_param {  
   pool: MAX  
   kernel_size: 2  
   stride: 2  
  }  
}  
layer {  
  name:"conv2"  
 type: "Convolution"  
 bottom: "pool1"  
 top: "conv2"  
 convolution_param {  
   num_output: 50  
   kernel_size: 5  
   stride: 1  
   weight_filler {  
     type: "xavier"  
   }  
 }  
}  
layer {  
  name:"pool2"  
 type: "Pooling"  
 bottom: "conv2"  
 top: "pool2"  
 pooling_param {  
   pool: MAX  
   kernel_size: 2  
   stride: 2  
  }  
}  
layer {  
  name:"ip1"  
 type: "InnerProduct"  
 bottom: "pool2"  
 top: "ip1"  
 inner_product_param {  
   num_output: 500  
   weight_filler {  
     type: "xavier"  
   }  
 }  
}  
layer {  
  name:"relu1"  
 type: "ReLU"  
 bottom: "ip1"  
 top: "ip1"  
}  
layer {  
  name:"ip2"  
 type: "InnerProduct"  
 bottom: "ip1"  
 top: "ip2"  
 inner_product_param {  
   num_output: 10  
   weight_filler {  
     type: "xavier"  
   }  
 }  
}  
   
 
layer {  
  name:"prob"  
 type: "Softmax"  
 bottom: "ip2"  
 top: "prob"  
}  

3. Generate a label file in the current directory, named synset_words.txt, with the following content (a small sketch for writing it follows the list):

0
1
2
3
4
5
6
7
8
9
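If you prefer not to type the ten lines by hand, a small Python sketch (run from the Caffe root, writing to the path used by the test command below) can generate the file:

# Write one label per line (digits 0-9) to the label file.
with open('examples/mnist/synset_words.txt', 'w') as f:
    f.write('\n'.join(str(d) for d in range(10)) + '\n')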

4. Use the compute_image_mean tool that Caffe provides (compute_image_mean.cpp) to compute the image mean and generate the mean.binaryproto binary mean file.

The command is as follows:

sudo build/tools/compute_image_mean examples/mnist/mnist_train_lmdb examples/mnist/mean.binaryproto

5. Use classification.bin, the classifier that comes with Caffe, to do the classification.
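classification.bin takes the deploy prototxt, the trained weights, the mean file, the label file and an image (see the command in step 6). If you would rather poke at the model from Python, here is a rough pycaffe sketch of the same forward pass; it assumes the Python bindings are built and is run from the Caffe root, and it skips the mean subtraction from mean.binaryproto for brevity, so the scores may differ slightly from classification.bin:

# Rough pycaffe sketch of classifying one 28 x 28 grayscale digit picture.
import caffe

caffe.set_mode_cpu()
net = caffe.Net('examples/mnist/deploy.prototxt',
                'examples/mnist/lenet_iter_10000.caffemodel',
                caffe.TEST)

# Load the already 28 x 28, single-channel picture as floats in [0, 1].
img = caffe.io.load_image('examples/images/3.jpg', color=False)  # shape (28, 28, 1)

# Fill the (1, 1, 28, 28) "data" blob declared in deploy.prototxt and run the net.
net.blobs['data'].data[...] = img.transpose(2, 0, 1).reshape(1, 1, 28, 28)
out = net.forward()

prob = out['prob'][0]                      # softmax scores for digits 0-9
print('predicted digit:', prob.argmax())
print('top-1 score:', prob.max())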

6. Start the test.

Enter the following command in the Caffe root directory:

./build/examples/cpp_classification/classification.bin examples/mnist/deploy.prototxt examples/mnist/lenet_iter_10000.caffemodel examples/mnist/mean.binaryproto examples/mnist/synset_words.txt examples/images/3.jpg  
The result is as follows:

Haha, this was the wrong result. My 10000-iteration model may have been modified at some point; in any case that model has a problem, so I switched to the 5000-iteration model and it ran successfully, but the classification was still not accurate.

I checked, and the two model files really do have different timestamps even though they were trained on the same day, so I asked Brother Ho about it; he said the model hadn't been trained for long and told me to retrain it.

 

Well, after retraining, although the reported accuracy of the model hasn't changed, the 5000-iteration model must have changed, because it now recognizes the same picture successfully.

Well, that's about it for now. I've managed to get the model running and tested; next I need to work harder on understanding the underlying principles. Bye.

 


Origin www.cnblogs.com/stt-ac/p/10578603.html