Because they are human-readable and can be easily edited for a given application.
In this tutorial, we will use uTensor with Mbed and TensorFlow. It covers tool installation, training the neural network, generating C++ files, setting up the Mbed project, and deployment.
Although these instructions are written for macOS, they are applicable to other operating systems.
Requirements

- Mbed Command Line Tool (mbed-cli)
- uTensor-cli (graph-to-source compiler)
- TensorFlow (comes with uTensor-cli)
- An Mbed board (or the Mbed Simulator)
- CoolTerm (for serial communication)
- VS Code (optional)

Installing the Requirements

These tools let you compile MCU projects with your favorite IDE/editor on Windows, Mac, or Linux.
Homebrew is handy for installing software packages on your Mac. In your terminal:

$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
$ brew update

Next, install a working version of the GCC Arm cross-compiler:

$ brew install https://raw.githubusercontent.com/….rb

If you already have Python installed, skip this step.
However, rather than working with your system Python directly, it is a good idea to use a virtual environment.

$ brew install python3

Mbed-cli will need Mercurial and Git:

$ brew install mercurial git

Next, install Mbed-cli; check out the official doc for more details.

$ pip3 install mbed-cli
$ mbed --version
1.3

Getting uTensor-cli:

$ pip3 install utensor_cgen

Finally, grab CoolTerm from Roger Meier's website.
Setting Up the MCU Project

Let's start by creating a new Mbed project. This may take a couple of minutes, as the entire Mbed OS source history will be downloaded.

$ mbed new my_uTensor
$ cd my_uTensor/
$ ls
mbed-os  mbed-os.lib  mbed_settings.py

We will need the uTensor runtime library.
It contains all the function implementations that will be linked in at compile time.

$ mbed add https://github.com/uTensor/uTensor
$ ls
mbed-os  mbed-os.lib  mbed_settings.py  uTensor  uTensor.lib

Finally, we will need some input data to feed the neural network, to verify that it is working.
For the purpose of demonstration, we will use a generated header file that contains the data of a hand-written digit 7.
The input-data file has been prepared for you. Download it and place it in your project root:

$ wget https://gist.…
We will revisit these files later.
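For context, such a header typically just defines a static C array of pixel values. Here is a minimal sketch of how one could be generated from a 28×28 image; the function and variable names are illustrative, not taken from the actual gist:

```python
import numpy as np

def make_input_header(pixels, var_name="input_data"):
    """Serialize a 28x28 greyscale image into a C header defining a static array."""
    flat = np.asarray(pixels, dtype=np.float32).ravel()  # row-major, 784 values
    rows = [", ".join(f"{v:.6f}f" for v in flat[i:i + 8]) for i in range(0, flat.size, 8)]
    body = ",\n  ".join(rows)
    return (f"// Auto-generated input data ({flat.size} floats)\n"
            f"static const float {var_name}[{flat.size}] = {{\n  {body}\n}};\n")

img = np.zeros((28, 28))  # stand-in for a real digit-7 image
header = make_input_header(img)
print(header.splitlines()[1])  # the array declaration line
```

The generated array lives in flash alongside the code, which is why a plain header file is a convenient way to ship test inputs to a board with no file system.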
Training the Model

[Figure: MLP for handwritten digit recognition, from the mxnet documentation]

For simplicity, we will train a multi-layer perceptron (MLP) on the handwritten-digit dataset, MNIST. The network architecture is shown above. It takes a 28-by-28 greyscale image of a handwritten digit and flattens it into a linear 784-element input. The rest of the network consists of:

- 1 input layer
- 2 hidden layers (128 and 64 hidden units respectively), with ReLU activation functions
- 1 output layer with softmax

The Jupyter notebook contains the code, or you may prefer a plain old Python file here: deep_mlp.py.
The script defines the MLP and its training parameters.
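The forward pass of this 784-128-64-10 architecture can be sketched in plain NumPy; random weights stand in for the trained parameters here, so only the shapes and operations mirror the real model:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
# Random stand-ins for the trained parameters: 784 -> 128 -> 64 -> 10
W1, b1 = rng.standard_normal((784, 128)) * 0.05, np.zeros(128)
W2, b2 = rng.standard_normal((128, 64)) * 0.05, np.zeros(64)
W3, b3 = rng.standard_normal((64, 10)) * 0.05, np.zeros(10)

x = rng.random(784)             # a flattened 28x28 greyscale image
h1 = relu(x @ W1 + b1)          # first hidden layer (128 units)
h2 = relu(h1 @ W2 + b2)         # second hidden layer (64 units)
y_pred = softmax(h2 @ W3 + b3)  # class probabilities for digits 0-9

print(y_pred.shape)
```

The output is a 10-element probability vector; the predicted digit is simply its argmax, which is exactly what the deployed model will report later.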
Running the script, you should see something like:

$ python3 deep_mlp.py
…
step 19000, training accuracy 0.92
step 20000, training accuracy 0.94
test accuracy 0.9274
saving checkpoint: chkps/mnist_model
Converted 6 variables to const ops.
written graph to: mnist_model/deep_mlp.pb
the output nodes: ['y_pred']

A protocol buffer that contains the trained model will be saved to the file system.
It is what we will supply to uTensor-cli for C++ code generation in the next step.

$ ls mnist_model/
deep_mlp.pb
Generating the C++ Files

Here's the fun part: turning a graph, deep_mlp.pb, into C++ files:

- deep_mlp.cpp: contains the implementation of the model.
- deep_mlp.hpp: the interface to the embedded program; in this case, a function signature, get_deep_mlp_ctx(…). We shall see how to use this in the main.cpp.
- deep_mlp_weight.hpp: contains the weights of the neural network.

The command for generating the C++ files is:

$ utensor-cli convert mnist_model/deep_mlp.pb --output-nodes=y_pred
Transforming graph: mnist_model/deep_mlp.pb
Graph transformation done.
Generate weight file: models/deep_mlp_weight.hpp
Generate header file: models/deep_mlp.hpp
Generate source file: models/deep_mlp.cpp

Specifying the output node helps uTensor-cli traverse the graph and apply optimisations. The name of the output node is shown in the training message in the previous section; it depends on how the network is set up.
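To see why the output node matters, here is a toy sketch (not uTensor-cli's actual code) of how a generator can walk backward from the output node and keep only the ops the result depends on; the op names are made up:

```python
# Toy dependency graph: op name -> list of its input ops
graph = {
    "y_pred":   ["add_3"],
    "add_3":    ["matmul_3"],
    "matmul_3": ["relu_2"],
    "relu_2":   ["add_2"],
    "add_2":    ["matmul_2"],
    "matmul_2": ["relu_1"],
    "relu_1":   ["add_1"],
    "add_1":    ["matmul_1"],
    "matmul_1": ["x"],
    "x":        [],
    "dead_op":  ["x"],  # not reachable from y_pred: gets pruned
}

def reachable(graph, output_node):
    """Collect every op the output node depends on (depth-first walk)."""
    keep, stack = set(), [output_node]
    while stack:
        node = stack.pop()
        if node not in keep:
            keep.add(node)
            stack.extend(graph[node])
    return keep

kept = reachable(graph, "y_pred")
print(sorted(kept))
```

Everything not in the kept set (training-only ops, summaries, dead branches) never makes it into the generated C++, which is one reason the output node name must be supplied.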
Compiling the Program

At this stage, we should have an Mbed project containing:

- Mbed OS
- the uTensor library
- the generated C++ model
- the input-data header file

All we need is a main.cpp to tie everything together. The Context class is the playground where the inference takes place. get_deep_mlp_ctx(…) is a generated function: it automatically populates a Context object with the inference graph and takes a Tensor as input. The Context object, now holding the inference graph, can be evaluated to produce an output tensor containing the inference result. The name of the output tensor is the same as the output node's, as specified by your training script.
In this example, the static array defined in input_data.h is used as the input for inference. In practice, this would be buffered sensor data or any memory block containing the input data. The data is arranged in row-major layout in memory (the same as any C array). The application has to keep the input memory block valid for the duration of inference.
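As a quick illustration of that row-major layout, NumPy's default C ordering matches what the C array expects:

```python
import numpy as np

img = np.arange(28 * 28, dtype=np.float32).reshape(28, 28)
flat = img.ravel(order="C")  # row-major flatten: same memory layout as a C array

# Pixel (r, c) of the image lands at index r * 28 + c in the flat buffer
r, c = 3, 5
print(flat[r * 28 + c] == img[r, c], flat.shape)
```

So whatever produces the 784-element input buffer, on the host or from a sensor, it must emit row 0's 28 pixels first, then row 1's, and so on.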
Now, we can compile the whole project by issuing:

$ mbed compile -m K66F -t GCC_ARM --profile=uTensor/build_profile/release.json

Mbed-cli needs to know which board it is compiling for, in this case the K66F; you may want to change this to the target name of your board. We are also using a custom build profile here to enable C++11 support. Expect to see a similar compilation message:
…
Compile [ 99.…]
Link: my_uTensor
Elf2Bin: my_uTensor

| Module             | .text           | .data       | .bss          |
|--------------------|-----------------|-------------|---------------|
| CMSIS_5/CMSIS      | 68(+68)         | 0(+0)       | 0(+0)         |
| [fill]             | 505(+505)       | 11(+11)     | 22(+22)       |
| [lib]/c.a          | 69431(+69431)   | 2548(+2548) | 127(+127)     |
| [lib]/gcc.a        | 7456(+7456)     | 0(+0)       | 0(+0)         |
| [lib]/m.a          | 788(+788)       | 0(+0)       | 0(+0)         |
| [lib]/misc         | 248(+248)       | 8(+8)       | 28(+28)       |
| [lib]/nosys.a      | 32(+32)         | 0(+0)       | 0(+0)         |
| [lib]/stdc++.a     | 173167(+173167) | 141(+141)   | 5676(+5676)   |
| main.o             | 4457(+4457)     | 0(+0)       | 105(+105)     |
| mbed-os/components | 16(+16)         | 0(+0)       | 0(+0)         |
| mbed-os/drivers    | 844(+844)       | 0(+0)       | 0(+0)         |
| mbed-os/hal        | 1472(+1472)     | 4(+4)       | 68(+68)       |
| mbed-os/platform   | 3494(+3494)     | 260(+260)   | 221(+221)     |
| mbed-os/rtos       | 8313(+8313)     | 168(+168)   | 6057(+6057)   |
| mbed-os/targets    | 7035(+7035)     | 12(+12)     | 301(+301)     |
| models/deep_mlp.o  | 148762(+148762) | 0(+0)       | 1(+1)         |
| uTensor/uTensor    | 7995(+7995)     | 0(+0)       | 10(+10)       |
| Subtotals          | 434083(+434083) | 3152(+3152) | 12616(+12616) |

Total Static RAM memory (data + bss): 15768(+15768) bytes
Total Flash memory (text + data): 437235(+437235) bytes
Image: ….bin
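The reported totals follow directly from the Subtotals row: flash holds .text plus .data (initialized data is stored in flash and copied to RAM at boot), while static RAM holds .data plus .bss:

```python
text, data, bss = 434083, 3152, 12616  # Subtotals row from the memory map above

flash = text + data  # code + initialized data, stored in flash
ram = data + bss     # initialized + zero-initialized data, resident in RAM

print(f"Flash: {flash} bytes, static RAM: {ram} bytes")  # 437235 and 15768
```

Note that the generated model object file dominates the flash budget, since the network weights are compiled in as constant data.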
Flashing the Binary

A typical Mbed board comes with a USB interface called DAPLink. Its job is to provide drag-and-drop programming from your desktop, debugging, and serial communication. Once you've plugged in your Mbed board, you should see it appear as a DAPLink drive. To flash:

1. Connect your board
2. Locate the binary (the .bin file) under the build output directory
3. Drag and drop it onto the Mbed DAPLink mount point (shown in the picture)
4. Wait for the transfer to complete

Getting the Output

By default, all standard output on Mbed (printf()) is directed to the serial terminal. The DAPLink interface enables us to view the serial communication over USB.
We are going to use CoolTerm for this.
1. Fire up CoolTerm
2. Go to Options
3. Click on Re-Scan Serial Ports
4. Select the Port: usbmodem1234 (this may vary every time you reconnect the board)
5. Set the baud rate to 115200, reflecting the configuration in the main.cpp
6. Click OK
7. Click Connect

Press the reset button on your board. You should see the following message:

Simple MNIST end-to-end uTensor cli example (device)
Predicted label: 7

Congratulations! You have successfully deployed a simple neural network on a microcontroller!

Closing Remarks

uTensor is designed to be the interface between embedded engineers and data scientists alike.
It currently supports:

- Fully connected layer (MatMul & Add)
- Convolutional layer
- Pooling
- ReLU
- Softmax
- ArgMax
- Dropout

I believe edge computing is a new paradigm for applied machine learning.
Here, we are presented with unique opportunities and constraints.
In many cases, machine learning models may have to be built from scratch, tailored toward low power and specific use cases.
We are constantly looking into ways to drive the advancement in this field.
On behalf of the uTensor team, thank you for checking this tutorial out! My Twitter handle is @neil_the_1. Send us an email at utensor@googlegroups.com for an uTensor Slack invitation link. Special thanks go to Kazami Hsieh, Dboy Liao, Michael Bartling, Jan Jongboom, and everyone who has helped the project along the way!