Gadget Renesas

Linking Ideas and Electronics


e-AI translator tutorial

Let's try it on a GR board!

Overview

This tutorial walks through outputting a file for the e-AI translator from the "MNIST For ML Beginners" example in TensorFlow and executing it on a GR board.
The MNIST example is a compact AI with a single fully connected layer.
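The model that example trains can be sketched in NumPy: a single fully connected layer followed by softmax, mapping a flattened 28 x 28 image to 10 class probabilities. The parameters below are random placeholders, not trained values.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D vector
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Placeholder parameters; in the tutorial W and b are learned on MNIST
W = rng.standard_normal((784, 10)).astype(np.float32)  # weights
b = np.zeros(10, dtype=np.float32)                     # biases

x = rng.random(784).astype(np.float32)  # a flattened 28x28 image
y = softmax(x @ W + b)                  # 10 class probabilities

digit = int(np.argmax(y))  # predicted digit, 0-9
print("predicted digit:", digit)
```

This is the inference that the e-AI translator later converts into the C function executed on the GR board.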


Preparation

Hardware

Prepare one of the GR boards. Note, however, that this tutorial cannot run on GR-KURUMI, GR-COTTON, or GR-ADZUKI, because their ROM capacity is insufficient for the C source generated in this tutorial.


Python, Tensorflow

Follow the steps on the TensorFlow installation page to set up an environment in which Python and TensorFlow can run. Upgrading to the latest version with pip is recommended:
pip3 install --upgrade tensorflow


e-AI translator

Because this tutorial uses the e-AI translator built into the web compiler, no installation is necessary.
You can also use the e-AI translator as an e2 studio plug-in; in that case, refer to the e-AI translator manual to install it.


Compile (build) environment

No compiler installation is needed for this tutorial, because the binary (bin file) can be built by the web compiler and executed on the GR board.
If you prefer to build with e2 studio, that environment works as well.

 

Execution of machine learning

To output the trained AI model for the e-AI translator, run the following Python code for MNIST. See the end of this page for the differences from the original code.

Click the link below to start the download, and save the file to any folder.

mnist_softmax_for_e-ai.py

In a terminal, change to the folder where you downloaded the file and run the following:
python mnist_softmax_for_e-ai.py

Four files are generated in the tf_LearnedModel folder. Together, these are the trained AI model for the e-AI translator.


Execution of e-AI translator

Log in to the web compiler and create a new project, then press the "e-AI Translator" button shown below.



Next, press the "Upload" button and specify the tf_LearnedModel folder containing the trained AI model. The four files will be uploaded to the e-AI_Model folder. Leave the other settings as they are and press the "Translate" button.



If translation succeeds, "Translation Success" is displayed as shown below, along with a sample program. We will use the sample program as is, so copy the text and then close the window.
The e-AI translator plug-in for e2 studio can also translate a model specified in the same way.


Running AI

Copy the displayed sample program into gr_sketch.cpp and build it. When it runs on the GR board, the inference time and inference result are displayed as shown below.



You can download the sample headers by clicking the images below. This tutorial reads handwritten digit data prepared in advance, but if you convert a camera image into a 28 x 28 float grayscale image and feed it in, you can recognize digits the same way. Incidentally, when run on GR-PEACH the inference takes about 0.4 ms, so the displayed time is 0.
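That camera-to-input conversion can be sketched as follows. The frame below is random placeholder data; a real pipeline would also need cropping, resizing to 28 x 28, and possibly inverting the image so the digit is light on a dark background, as in MNIST.

```python
import numpy as np

# Placeholder for an 8-bit grayscale camera frame, already cropped
# and resized to 28x28 (a real pipeline would do that first)
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(28, 28), dtype=np.uint8)

# MNIST-style input: a flat vector of 784 floats scaled to [0, 1]
input_vec = (frame.astype(np.float32) / 255.0).reshape(784)
print(input_vec.shape)
```

The resulting 784-element float vector has the same layout as the sample data in input_image_0.h.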


Reference: About input file of e-AI translator

The input file adds code for the e-AI translator to the Python code at the following URL.

https://github.com/tensorflow/tensorflow/blob/r1.3/tensorflow/examples/tutorials/mnist/mnist_softmax.py


The added code is lines 81 to 89, highlighted below; it outputs the trained AI as a graph structure after training completes. Use this sample as a reference when outputting other AI models.

The commented-out part at lines 41 to 46 removes the dependence on the e-AI translator's "Input shape size" setting. TensorFlow cannot determine the shape of the input variable when the input placeholder is fed directly into the next layer; by first receiving the input in another variable, the shape is fixed so that it does not depend on the "Input shape size" setting.
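The two modifications can be sketched roughly as below, using the TensorFlow 1.x API of the r1.3 example. This is an illustrative reconstruction, not the actual downloaded file: the variable names, line placement, and checkpoint path are assumptions, and only the general pattern (fixing the input shape via an intermediate variable, then saving the trained variables with tf.train.Saver) follows the description above.

```python
import tensorflow as tf  # TensorFlow 1.x API, as in the r1.3 example

# --- around lines 41-46 (assumption): receive the input in another
# variable so its shape is fixed and does not depend on the
# "Input shape size" setting of the e-AI translator
x = tf.placeholder(tf.float32, [None, 784])
x_fixed = tf.reshape(x, [1, 784])        # shape fixed for translation
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x_fixed, W) + b

# --- around lines 81-89 (assumption): after training, save the learned
# variables and graph so the e-AI translator can read them
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training loop runs here in the real script ...
    saver.save(sess, "./tf_LearnedModel/tf_LearnedModel")
```

Saving with tf.train.Saver is what produces the four checkpoint files in tf_LearnedModel mentioned earlier.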


 

Reference: About output file of e-AI translator

The files output by the e-AI translator are outlined below. You can inspect each file by double-clicking it in the web compiler.

dnn_compute.c: Inference execution function for the converted neural network
network.c: Neural network function library
layer_graph.h: Prototype declarations of the library functions used in the converted neural network
layer_shapes.h: Variable definitions used in the converted neural network
weights.h: Weights and bias values of the converted neural network
Typedef.h: Type definitions used by the library
input_image_0.h: Sample character data in MNIST format
network_description.txt: The layer structure and configuration recognized by analyzing the neural network
checker_log_output.txt: A formula-based estimate, derived from the analyzed structure, of the required ROM/RAM capacity and amount of computation, as a guideline. The speed-priority and RAM-consumption-reduction options are not yet taken into account, and because node values are redefined as arguments, the actually required capacity is 2 to 3 times this estimate.
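As a rough sanity check on such estimates, the parameter storage of this tutorial's single-fully-connected-layer model can be computed by hand. This assumes float32 parameters and counts only weights and biases; the tool's actual estimation formula is not reproduced here.

```python
# Back-of-the-envelope ROM estimate for the single-layer MNIST model
n_weights = 784 * 10          # input pixels x output classes
n_biases = 10                 # one bias per output class
rom_bytes = (n_weights + n_biases) * 4  # 4 bytes per float32
print(rom_bytes)  # 31400 bytes, about 31 KB
```

At 2 to 3 times this figure in practice, the result is consistent with the note above that the smallest GR boards lack sufficient ROM for this tutorial.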
 
