1. Model Download
First, download the facenet model. The GitHub repository is: https://github.com/davidsandberg/facenet
Pre-trained models
| Model name | LFW accuracy | Training dataset | Architecture |
|---|---|---|---|
| 20180408-102900 | 0.9905 | CASIA-WebFace | Inception ResNet v1 |
| 20180402-114759 | 0.9965 | VGGFace2 | Inception ResNet v1 |
NOTE: If you use any of the models, please do not forget to give proper credit to those providing the training dataset as well.
2. Converting facenet to an RKNN Model and Running Inference
```python
import numpy as np
import cv2
from rknn.api import RKNN

INPUT_SIZE = 160

if __name__ == '__main__':
    # Create RKNN object
    rknn = RKNN(verbose=False, verbose_file='./test1.log')

    # Config for model input preprocessing
    # rknn.config(channel_mean_value='0 0 0 1', reorder_channel='0 1 2', target_platform=['rv1126'])
    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
                reorder_channel='0 1 2', target_platform='rv1126',
                quantized_dtype='asymmetric_affine-u8',
                optimization_level=3, output_optimize=1)
    print('config done')

    # Load the TensorFlow model
    print('--> Loading model')
    rknn.load_tensorflow(tf_pb='./20180402-114759.pb',
                         # inputs=['input', 'phase_train'],
                         inputs=['input'],
                         outputs=['InceptionResnetV1/Bottleneck/BatchNorm/Reshape_1'],
                         # outputs=['embeddings'],
                         input_size_list=[[INPUT_SIZE, INPUT_SIZE, 3]])
    print('done')

    # Build the model (quantized; dataset.txt lists the calibration images)
    print('--> Building model')
    # rknn.build(do_quantization=False)
    rknn.build(do_quantization=True, dataset='dataset.txt')
    print('done')

    # Export the RKNN model
    rknn.export_rknn('./facenet_Reshape_1.rknn')

    # Set inputs
    img = cv2.imread('./ldh.jpg')
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (INPUT_SIZE, INPUT_SIZE))

    # Init runtime environment
    print('--> Init runtime environment')
    # ret = rknn.init_runtime(target='rv1126')
    ret = rknn.init_runtime()
    if ret != 0:
        print('Init runtime environment failed')
        exit(ret)
    print('done')

    # Inference
    print('--> Running model')
    outputs = rknn.inference(inputs=[img])
    print('len(outputs[0][0])::', len(outputs[0][0]))
    print('outputs::', outputs)
    print('done')

    # Evaluate model performance
    print('--> Begin evaluate model performance')
    perf_results = rknn.eval_perf(inputs=[img])
    print('done')
    print('perf_results:', perf_results)

    rknn.release()
```
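The `rknn.build(...)` call above quantizes the model and needs a `dataset.txt` calibration file. In the RKNN-Toolkit convention this is a plain text file listing one calibration image path per line. A minimal sketch for generating it (the `./calib_images` folder name is an assumption; point it at your own face images):

```python
import os

# Assumed folder of calibration face images; adjust to your own data.
CALIB_DIR = './calib_images'

def write_dataset_list(image_dir, list_path='dataset.txt'):
    """Write one image path per line, the format RKNN-Toolkit expects."""
    exts = ('.jpg', '.jpeg', '.png')
    paths = sorted(
        os.path.join(image_dir, name)
        for name in os.listdir(image_dir)
        if name.lower().endswith(exts)
    )
    with open(list_path, 'w') as f:
        f.write('\n'.join(paths) + '\n')
    return paths

# write_dataset_list(CALIB_DIR)  # run once before rknn.build(...)
```

A few dozen representative images are usually enough for calibration; using faces similar to the deployment data tends to give better quantization accuracy.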
The output layer taken here is outputs=['InceptionResnetV1/Bottleneck/BatchNorm/Reshape_1'], which yields 512 float values; a partial screenshot is shown below.
If the output is instead taken as outputs=['embeddings'], the result obtained is shown in the screenshot below.
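In the facenet graph, the `embeddings` node is simply the L2-normalized version of the Bottleneck output, so when the model is exported at `Reshape_1` the final embedding can be recovered on the host, and two faces can then be compared by Euclidean distance. A minimal numpy sketch (the 1.1 threshold is an assumption often quoted for facenet; tune it on your own data):

```python
import numpy as np

def l2_normalize(v, eps=1e-10):
    """Replicates the facenet 'embeddings' node: v / ||v||_2."""
    v = np.asarray(v, dtype=np.float32).ravel()
    return v / np.maximum(np.linalg.norm(v), eps)

def face_distance(raw_a, raw_b):
    """Euclidean distance between two L2-normalized 512-D embeddings."""
    return float(np.linalg.norm(l2_normalize(raw_a) - l2_normalize(raw_b)))

# Usage with the RKNN output from the script above:
# emb = l2_normalize(outputs[0][0])
# same_person = face_distance(raw_1, raw_2) < 1.1  # threshold is an assumption
```

Because both vectors are normalized first, the comparison is invariant to any scaling the quantized output may carry.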
3. Inspecting the Network Model
We use netron to open the converted RKNN model and check where the output layer we selected sits in the graph.
References:
Toybrick Open Source Community - AI - facenet model conversion