A First Taste of STM32CubeMX AI

  While learning TensorFlow 2.0 I naturally wanted to run models on embedded devices eventually. Because of the epidemic I was living far from the office, and the only development board on hand was an STM32F407ZG, a fairly modest board. I wondered whether a model could run on it, and since I had just come across Cube-AI, I gave it a try.
  Thanks also to this video, which can serve as a getting-started guide (the picture quality is not great) -> Getting Started Video

X-Cube-AI Introduction

  X-CUBE-AI is an STM32Cube expansion package and part of the STM32Cube.AI ecosystem. It extends STM32CubeMX by automatically converting pre-trained neural networks and integrating the resulting optimized library into the user's project.
The official link is here; if you have never heard of it, go take a look: ST X-Cube-AI

  To sum it up in one sentence: the X-Cube-AI extension converts models from the currently popular AI frameworks into C code so they can be used on embedded devices. Using X-Cube-AI currently requires STM32CubeMX 5.0 or above (I did not check STM8CubeMX). The supported model formats include Keras, TF Lite, ONNX, Lasagne, Caffe and ConvNetJS, which is pretty impressive. Cube-AI turns the model into a pile of arrays; the contents of those arrays are later parsed back into the model, the same principle TensorFlow itself uses when a model is serialized into arrays.
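
  Just to make the "pile of arrays" idea concrete, here is a purely illustrative C fragment. The symbol names are invented for this sketch; the file X-Cube-AI actually generates (e.g. network_data.c) and its naming depend on the tool version. The point is only that the trained parameters end up as constant byte arrays in flash, which the runtime parses back into the network at init time.

#include <stdint.h>

/* Purely illustrative sketch, not the generated file: trained weights
 * stored as a constant byte array that lives in flash. */
static const uint8_t example_weights[] = {
    0xcd, 0xcc, 0x4c, 0x3e,   /* 0.2f, little-endian IEEE-754 */
    0x00, 0x00, 0x80, 0x3f,   /* 1.0f */
    /* ... one entry per weight/bias byte ... */
};
static const uint32_t example_weights_len = sizeof(example_weights);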

Development Prerequisites

  1. It is assumed you already know how to use STM32CubeMX. If you have never learned it, you can take a look at the 微雪 (Waveshare) tutorials; that is where I started as well. Note that the tutorials I watched covered versions before CubeMX 5.0, so there are small differences, but most of them can be sorted out with a quick search;
  2. It is assumed you can install the X-Cube-AI expansion package;
  3. Python 3.7;
  4. TensorFlow 2.0;
  5. The various packages that TensorFlow 2.0 depends on.

Create a model

  The model is built on a PC. Mine is a power-level detector: the input is a voltage and the output is the corresponding level, so I made it a classification model. Here comes the code:

'''
Power level detection test
Model training thresholds:
Level 1  ->  v >= 8.0
Level 2  ->  7.8 <= v < 8.0
Level 3  ->  v < 7.8

Input layer -> hidden layer -> output layer

'''

# Import packages
import tensorflow as tf
import pandas as pd
import numpy as np

# Read the data: column 0 is the voltage, columns 1-3 are the one-hot level
data = pd.read_csv('data/voltage.csv', sep=',', header=None)
voltage = data.iloc[:,0]
level = data.iloc[:,1:]
level = level.astype(int)

# Build the model
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(units=20, activation='relu', input_shape=(1,)))
model.add(tf.keras.layers.Dense(units=10, activation='relu'))
model.add(tf.keras.layers.Dense(units=3, activation='softmax'))
model.summary()

model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss=tf.keras.losses.categorical_crossentropy,
              metrics=[tf.keras.metrics.mse])
history = model.fit(x=voltage, y=level, epochs=40000)

print(model.evaluate(voltage, level))

# Save the model
model.save('level_check.h5')

  Then, still on the PC, load the saved model, test it, and convert it to TF Lite format (PS: using the .h5 file directly is also possible; just select Keras as the model type in CubeMX), as follows:

'''
Power level detection test
Model training thresholds:
Level 1  ->  v >= 8.0
Level 2  ->  7.8 <= v < 8.0
Level 3  ->  v < 7.8
'''
# Import packages
import tensorflow as tf
import numpy as np

import time
import datetime

# Output helper: turns the one-hot prediction into a level number (1-3)
def level_output(level=np.zeros((1, 3))):
    for i in range(level.shape[1]):
        if level[0,i] == 1.0:
            return i+1

# Test voltage
test_v = 7.78

t1 = time.time()

# Load the model and run inference
load_model = tf.keras.models.load_model('level_check.h5')
out = load_model.predict([test_v])
print(out)

# Round the softmax output to a one-hot vector
cal_level = np.around(out).astype(int)

t2 = time.time()

# Output the power level and the elapsed time in milliseconds
level = level_output(cal_level)
print(level)
print((int(t2*1000)-int(t1*1000)))

# Convert the model to TF Lite format (no quantization)
converter = tf.lite.TFLiteConverter.from_keras_model(load_model)
tflite_model = converter.convert()

# Save it to disk
open("level_check.tflite", "wb").write(tflite_model)

  Don't forget the training data; it is also needed for the testing later. Just create a new txt file (each row is the voltage followed by the one-hot level) and save it as a CSV:

7.61,0,0,1
7.62,0,0,1
7.63,0,0,1
7.64,0,0,1
7.65,0,0,1
7.66,0,0,1
7.75,0,0,1
7.78,0,0,1
7.71,0,0,1
7.72,0,0,1
7.8,0,1,0
7.83,0,1,0
7.92,0,1,0
7.85,0,1,0
7.81,0,1,0
7.81,0,1,0
7.84,0,1,0
7.89,0,1,0
7.98,0,1,0
7.88,0,1,0
8.02,1,0,0
8.12,1,0,0
8.05,1,0,0
8.15,1,0,0
8.11,1,0,0
8.01,1,0,0
8.22,1,0,0
8.12,1,0,0
8.14,1,0,0
8.07,1,0,0

Create Project

  You can first follow the model-usage guide on the official website to create a learning project, or of course you can just follow the steps here directly.

  1. Create a new project and import the model, then click Analyze; CubeMX will tell us which MCUs can support it. Of course you can also directly choose the STM32F407ZGTX: the model is very small, so it is definitely supported;
    Selected MCU
  2. Add the model. After the analysis you can run a desktop-level validation. By default it feeds random numbers as input and has no reference outputs, so there is nothing to compare against; you can use your own data set instead by splitting the training data into two .csv files, one for the inputs and one for the outputs;
  3. Enable the external clock and a USART;
  4. Set the code output location and format;
  5. Generate the code;

Modify the project

  1. After the tedious steps above, the project code for running the TF Lite model on the STM32F407 has been generated. Now we modify our program: initialize the serial port and retarget the console output so that printf can be used for printing (see the retargeting sketch after this list);
  2. Add the code that uses the model (see the inference sketch after this list);
    Creating the activations buffer
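
  For reference, here is a minimal printf-retargeting sketch using the STM32 HAL. It assumes the CubeMX-generated UART handle is named huart1 (use whichever USART you enabled); with GCC-based toolchains such as STM32CubeIDE you override _write, while with Keil / ARM Compiler you override fputc.

/* Minimal printf retargeting sketch; huart1 is assumed to be the
 * CubeMX-generated handle declared in usart.h. */
#include <stdio.h>
#include "usart.h"

#ifdef __GNUC__
/* GCC / newlib: stdout ends up in _write() */
int _write(int fd, char *ptr, int len)
{
  (void)fd;
  HAL_UART_Transmit(&huart1, (uint8_t *)ptr, (uint16_t)len, HAL_MAX_DELAY);
  return len;
}
#else
/* Keil / ARM Compiler: the C library calls fputc() per character */
int fputc(int ch, FILE *f)
{
  (void)f;
  HAL_UART_Transmit(&huart1, (uint8_t *)&ch, 1, HAL_MAX_DELAY);
  return ch;
}
#endif

  Note that with newlib-nano you may also need to enable float support for printf (the -u _printf_float linker option) if you want to print the voltage as a float.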
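
  And here is a rough sketch of how the generated network is typically driven. It assumes the network was named "network" in CubeMX (so the generated files are network.h / network_data.h) and uses the ai_network_* / AI_NETWORK_* symbols that the X-Cube-AI code generator of this era emits; the exact macro and function names vary between X-Cube-AI versions, so check your own generated headers. The flow is: reserve the activations (working-memory) buffer, create and initialize the network once, then wrap the input/output buffers and call run for each inference.

/* Rough inference sketch for the X-Cube-AI generated code.
 * ASSUMPTION: the network is named "network" in CubeMX and the symbols
 * below match the generated network.h / network_data.h (verify the names
 * against your own generated files). */
#include <stdio.h>
#include "network.h"
#include "network_data.h"

static ai_handle network = AI_HANDLE_NULL;

/* Working memory ("activations") for the network, sized by the generator */
AI_ALIGNED(4) static ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];

/* One float in (the voltage), three floats out (the softmax scores) */
AI_ALIGNED(4) static ai_float in_data[AI_NETWORK_IN_1_SIZE];
AI_ALIGNED(4) static ai_float out_data[AI_NETWORK_OUT_1_SIZE];

static void ai_bootstrap(void)
{
  ai_network_params params = AI_NETWORK_PARAMS_INIT(
      AI_NETWORK_DATA_WEIGHTS(ai_network_data_weights_get()),
      AI_NETWORK_DATA_ACTIVATIONS(activations));

  ai_error err = ai_network_create(&network, AI_NETWORK_DATA_CONFIG);
  if (err.type != AI_ERROR_NONE) {
    printf("ai_network_create failed: type %d code %d\r\n",
           (int)err.type, (int)err.code);
    return;
  }
  if (!ai_network_init(network, &params)) {
    printf("ai_network_init failed\r\n");
  }
}

/* Runs one inference and returns the level (1..3), or -1 on error */
static int ai_level(float voltage)
{
  ai_buffer inputs[AI_NETWORK_IN_NUM]   = AI_NETWORK_IN;
  ai_buffer outputs[AI_NETWORK_OUT_NUM] = AI_NETWORK_OUT;
  int best = 0;

  in_data[0] = voltage;
  inputs[0].data  = AI_HANDLE_PTR(in_data);
  outputs[0].data = AI_HANDLE_PTR(out_data);

  if (ai_network_run(network, inputs, outputs) != 1) {
    return -1;
  }

  /* Highest softmax score wins; output index 0..2 maps to level 1..3,
   * matching the column order of the training CSV */
  for (int i = 1; i < AI_NETWORK_OUT_1_SIZE; i++) {
    if (out_data[i] > out_data[best]) best = i;
  }
  return best + 1;
}

  In main() you would call ai_bootstrap() once after the peripherals are initialized, then do something like printf("level %d\r\n", ai_level(7.78f)); in the loop.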

Output

Result
  Isn't that simple? The more painful part is that the low-level code supporting the neural network is a library file, but that does not affect our use. On a whim I looked through the code, and it does not seem to enable the STM32's dedicated CRC unit; so as long as the ROM and RAM are sufficient, couldn't any embedded device use it directly? Alright, time to get moving!



Origin blog.csdn.net/bigmaxPP/article/details/104500092