Integrating the Edge Impulse Inferencing SDK

Edge Impulse is a leading development platform for machine learning on embedded devices. This page provides a step-by-step guide to deploying your Edge Impulse Studio project (also known as an Impulse) onto a Microchip Arm® Cortex®-based 32-bit microcontroller using the MPLAB X IDE.

Impulse Deployment

The Edge Impulse Inferencing SDK is an open-source C++ library that utilizes TensorFlow Lite for Microcontrollers. It can be downloaded along with the model files from the Deployment step in Edge Impulse Studio: select the C++ library deployment option, choose the desired optimizations, and then click Build to download the source files.

[Figure: Deployment page in Edge Impulse Studio showing the C++ library export option]

The files downloaded from the Edge Impulse Studio can be added to a project in the MPLAB® X Integrated Development Environment (IDE) and then integrated into the application firmware. Typically this is the same firmware that was used to collect the training data; instead of logging the data, it now feeds it into the edge-impulse-sdk for live embedded inference.

This guide uses the "Create a Smart Dumbbell with Edge Impulse" example to illustrate the steps involved in porting the Edge Impulse Inferencing SDK.


Aside: Important Header Files

There are a few especially relevant header files you should inspect to familiarize yourself with the Edge Impulse SDK; they are detailed in the sections below.

model-parameters/model_metadata.h

This file contains various definitions related to the Impulse configuration, such as the following (a brief usage sketch follows the list):

  • ei_classifier_inferencing_categories[]
  • EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE
  • EI_CLASSIFIER_LABEL_COUNT
  • EI_CLASSIFIER_HAS_ANOMALY
  • EI_CLASSIFIER_FREQUENCY
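
For example, a short sketch like the one below shows how these definitions are typically used to size buffers and iterate over the model's labels. The raw_window buffer and print_labels() helper are illustrative names only, not part of the SDK.

/* Illustrative use of the Impulse metadata definitions.
 * raw_window and print_labels() are hypothetical names. */
#include "edge-impulse-sdk/porting/ei_classifier_porting.h" /* declares ei_printf() */
#include "model-parameters/model_metadata.h"

static float raw_window[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE]; /* holds one full input window */

static void print_labels(void) {
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("label %u: %s\r\n", (unsigned)ix, ei_classifier_inferencing_categories[ix]);
    }
}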

classifier/ei_run_classifier.h

Contains the high-level APIs to be called by the main application.

dsp/config.hpp

This file sets the EIDSP_USE_CMSIS_DSP macro based on the target platform and also contains other definitions related to the DSP block, such as EIDSP_QUANTIZE_FILTERBANK, EIDSP_TRACK_ALLOCATIONS, etc.

classifier/ei_classifier_config.h

This file sets the EI_CLASSIFIER_TFLITE_ENABLE_CMSIS_NN macro based on the target platform.


Adding Your Impulse to an Existing MPLAB X Project

Once you have downloaded your Impulse's C++ source file archive, extract the files into the src folder of your MPLAB X project. For a standard Impulse that uses a neural network classifier, there should be three source folders: edge-impulse-sdk, model-parameters, and tflite-model.

[Figure: Extracted source folders (edge-impulse-sdk, model-parameters, tflite-model) in the project's src directory]

Some of the C++ source files in the TensorFlow Lite library have the .cc extension. In order to build in MPLAB X, all of the files with the .cc extension must be renamed to .cpp. This can be accomplished from the terminal or command line by navigating to the edge-impulse-sdk/tensorflow/lite folder and executing the command below for your platform. Note that the Windows ren command only renames files in the folder where it is run, so repeat it in any subfolders that contain .cc files; the Mac/Linux find command handles subfolders automatically.

  • Windows: ren *.cc *.cpp
  • Mac/Linux: find . -name "*.cc" -exec sh -c 'mv "$1" "${1%.cc}.cpp"' _ {} \;

After the files have been copied into the project directory, they must also be added to the project from within MPLAB X by right-clicking on the source folders and selecting Add files from folder. Be sure to add both C and C++ source files, as the library contains both.

[Figure: Adding the library source files to the MPLAB X project]

Since the main application will be written in C++, the run_classifier_c.h and run_classifier_c.c files must be excluded from the project in order to avoid multiple function definitions. Right-click on the files and select Exclude file(s) from current configuration. These files are only needed when the Edge Impulse Inferencing SDK is called from a C application.

[Figure: Excluding the C wrapper files from the current build configuration]

Finally, the include paths must be updated so that the compiler can find the newly added library source files. The smart dumbbell example project uses the include directories shown below for xc32-gcc and xc32-g++.

1. xc32-gcc

[Figure: xc32-gcc include directories]

One-line copy/paste:

../src/edge-impulse-sdk;../src/edge-impulse-sdk/CMSIS/NN/Include;../src/edge-impulse-sdk/CMSIS/DSP/Include;../src/edge-impulse-sdk/CMSIS/DSP/PrivateInclude

2. xc32-g++

[Figure: xc32-g++ include directories]

One-line copy/paste:

../src/edge-impulse-sdk;../src/edge-impulse-sdk/third_party/ruy;../src/edge-impulse-sdk/third_party/gemmlowp;../src/edge-impulse-sdk/third_party/flatbuffers/include

Implementing Platform-Specific Functions

The Edge Impulse Inferencing SDK requires hardware specific implementations of the following functions:

  • EI_IMPULSE_ERROR ei_sleep(int32_t time_ms)
  • uint64_t ei_read_timer_ms()
  • uint64_t ei_read_timer_us()
  • void ei_printf(const char *format, ...)

Refer to the Smart Dumbbell Firmware for an example of how the functions can be implemented on the SAMD21 ML Evaluation Kit with code generated by MPLAB® Harmony Configurator (MHC). The function implementations can be found in src/edge-impulse-sdk/porting/ei_classifier_porting.cpp.
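
As a rough illustration, a minimal porting layer might look like the sketch below. The function names and signatures come from the SDK's porting header; the app_tick_ms counter and app_uart_write() helper are hypothetical placeholders for whatever your board support or MHC-generated code provides, and are not part of the SDK or the smart dumbbell firmware.

/* Sketch of a porting layer for a generic Cortex-M target.
 * app_tick_ms and app_uart_write() are hypothetical application-provided helpers. */
#include <cstdarg>
#include <cstdint>
#include <cstdio>

#include "edge-impulse-sdk/porting/ei_classifier_porting.h"

extern volatile uint64_t app_tick_ms;                    /* incremented from a 1 ms timer interrupt */
extern void app_uart_write(const char *buf, size_t len); /* blocking UART/console write */

EI_IMPULSE_ERROR ei_sleep(int32_t time_ms) {
    uint64_t start = app_tick_ms;
    while ((app_tick_ms - start) < (uint64_t)time_ms) {
        /* busy-wait; replace with a low-power wait if one is available */
    }
    return EI_IMPULSE_OK;
}

uint64_t ei_read_timer_ms() {
    return app_tick_ms;
}

uint64_t ei_read_timer_us() {
    /* Coarse approximation; use a microsecond-resolution timer if one is available */
    return app_tick_ms * 1000;
}

void ei_printf(const char *format, ...) {
    char buffer[256];
    va_list args;
    va_start(args, format);
    int length = vsnprintf(buffer, sizeof(buffer), format, args);
    va_end(args);
    if (length > 0) {
        app_uart_write(buffer, (size_t)length);
    }
}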

Support for printing float values with printf() can be enabled in Project Properties within XC32 (Global Options) by adding -mno-newlib-nano to the Additional options: field.


Implementing Main Application

Compiling Impulse with C or C++

Since the Inferencing SDK is a C++ library, it is simplest to write the main application that calls the run_classifier() function in C++ as well; that is the approach this article covers. The Smart Dumbbell example project also takes this approach, similar to the standalone inferencing C++ example from Edge Impulse.

The Inferencing SDK can also be linked to from C applications if desired. This is done by compiling the impulse as a shared library with the EIDSP_SIGNAL_C_FN_POINTER=1 and EI_C_LINKAGE=1 macros defined and then linking to it from a C application. The run_classifier() function can then be called in your application as shown in this standalone inferencing C example from Edge Impulse.

Inferencing API: run_classifier() vs run_classifier_continuous()

Edge Impulse provides both a basic inferencing API that classifies non-overlapping sample windows and a continuous inferencing API that can classify multiple times per window. Inferencing with the basic API consists of first buffering a window of samples corresponding to the window size configured in Edge Impulse Studio, then passing the buffer to run_classifier() (a minimal sketch follows the list below). This is the simplest method to perform inferencing but has some drawbacks:

  • Inferencing accuracy can be degraded when events are split across input windows
  • Latency may be unacceptable
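
The following minimal sketch illustrates the basic API. It assumes the application has already filled a features buffer with one complete window of raw samples; the buffer, how it gets filled, and the classify_window() wrapper are hypothetical, while the SDK calls and macros are those described elsewhere on this page.

/* Minimal sketch of the basic (non-continuous) inferencing API.
 * `features` and classify_window() are hypothetical; filling the buffer is application specific. */
#include <cstring>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE]; /* one full window of raw samples */

/* Callback used by the SDK to read slices of the input signal */
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

static void classify_window(void) {
    ei::signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_feature_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR ei_status = run_classifier(&signal, &result, false);
    if (ei_status != EI_IMPULSE_OK) {
        ei_printf("run_classifier returned: %d\r\n", ei_status);
        return;
    }

    /* Print the classification probability for each label */
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("%s: %.5f\r\n", result.classification[ix].label,
                  result.classification[ix].value);
    }
}

With the basic API, the application simply buffers a full window and calls a wrapper like classify_window() once per window; printing the float probabilities relies on the printf float support described earlier.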

By comparison, the continuous inferencing API allows you to partition the input window into a number of equal-sized slices that evenly divides the window, reducing the time between inferences and reducing the possibility that events will be split across windows. This approach has the additional benefit of time-averaging the output classification probabilities. The downside, of course, is that running multiple inferences per window increases the overall computational load, so you will have to find a balance that works best for your application.

For illustration, pseudocode for using the continuous API is provided below. Note that in a real implementation, you would want to define the EI_CLASSIFIER_SLICE_SIZE and EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW macros at the project level, not in your source code.

/* run_classifier_continuous() pseudo-code */
#include <cstdio>                                           /* printf() */
#include <cstdlib>                                          /* EXIT_FAILURE */
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"
#include "edge-impulse-sdk/dsp/numpy.hpp"
#include "model-parameters/model_metadata.h"
 
// #define EI_CLASSIFIER_RAW_SAMPLE_COUNT 1000 /* This is pre-defined in model_metadata.h */
 
/* Note that in a real implementation, you must define these macros at the project level, not in your source code. */
#define EI_CLASSIFIER_SLICE_SIZE 100
#define EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW 10 /* 1000 samples-per-window / 100 samples-per-slice = 10 slices-per-window */
 
/* Function to give the Edge Impulse Inferencing SDK access to the input signal data:
 * copy `length` samples, starting at `offset`, from the most recent slice buffer into out_ptr. */
int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    /* Implement get_feature_data */
 
    return EI_IMPULSE_OK;
}
 
int main ( void )
{
    /* Set up our Impulse input signal which will always be of size EI_CLASSIFIER_SLICE_SIZE */
    ei::signal_t signal;
    signal.total_length = EI_CLASSIFIER_SLICE_SIZE;
    signal.get_data = &get_feature_data;
 
    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR ei_status = EI_IMPULSE_OK;
 
    /* Initialize the internal state used by the continuous inferencing API */
    run_classifier_init();
 
    /* Main loop */
    while ( true )
    {
        /* Wait here until a new slice of EI_CLASSIFIER_SLICE_SIZE samples has been collected */
        ei_status = run_classifier_continuous(&signal, &result, false);
        if (ei_status != EI_IMPULSE_OK) {
            printf("run_classifier returned: %d\r\n", ei_status);
            break;
        }
    }
 
    /* Execution should not come here during normal operation */
    return ( EXIT_FAILURE );
}

To learn more about using the continuous inferencing API, check out the continuous audio sampling example on the Edge Impulse website.


Project Configuration

Optimization Level

For best performance, the optimization level should be set to 2 or 3 for both the xc32-gcc and xc32-g++ compilers. (Optimization level 3 is not available with the XC32 free license.) The optimization-level parameter can be set in the Optimization menu of the xc32-gcc and xc32-g++ settings.

Heap Size

The Edge Impulse Inferencing SDK utilizes dynamic memory allocation, so the application's heap size will need to be adjusted in the project settings to provide the required memory. The heap size must be increased substantially due to the page size used by the compiler for Arm devices. For the smart dumbbell example project, the heap size is set to 14336 bytes.

[Figure: Heap size setting in the project's linker options]

If you are re-generating code with MPLAB Harmony Configurator, make sure to change the Heap Size field there instead as the configurator will overwrite this option. This field can be accessed in MPLAB Harmony Configurator under Configuration Options -> System -> Device & Project Configuration -> Project Configuration -> Tool Chain Selections -> XC32 Global Options -> Linker -> General.

Build Options

The build option macros can be set in the Preprocessing and messages menu of the xc32-g++ options. The following table provides a summary of the most relevant macros and their descriptions.

Macro | Default Value | Description
EIDSP_USE_CMSIS_DSP | Undefined (disabled) | Enable CMSIS DSP functions for accelerated DSP calculations
EI_CLASSIFIER_TFLITE_ENABLE_CMSIS_NN | Undefined (disabled) | Enable CMSIS NN functions for accelerated neural network calculations in TensorFlow Lite for Microcontrollers
EIDSP_TRACK_ALLOCATIONS | 0 (disabled) | Track dynamic memory allocations; helpful for profiling and determining peak heap usage
EIDSP_PRINT_ALLOCATIONS | 0 (disabled) | Print memory allocations to stdout for analysis and debugging
EIDSP_SIGNAL_C_FN_POINTER | 0 | Set to 1 when linking the Impulse to a C application
EI_C_LINKAGE | 0 | Set to 1 when linking the Impulse to a C application
EI_CLASSIFIER_SLICE_SIZE | Undefined | Only relevant when using the continuous inferencing API; should be set to EI_CLASSIFIER_RAW_SAMPLE_COUNT / EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW
EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW | Undefined | Only relevant when using the continuous inferencing API; defines how many slices to subdivide the input window into. Must be chosen such that the window is evenly divided into equal-sized slices

The smart dumbbell example project only uses the EIDSP_USE_CMSIS_DSP macro to enable the CMSIS DSP functions, because the CMSIS NN functions are not available on the SAMD21, which has a Cortex-M0+ core.

[Figure: Preprocessor macro definitions for the smart dumbbell example project]
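
For illustration, a hypothetical project that enables CMSIS acceleration and uses the continuous inferencing API might add macro definitions like the following in the Preprocessing and messages settings (the values shown are assumptions for a 1000-sample window, not the smart dumbbell project's settings):

EIDSP_USE_CMSIS_DSP=1
EI_CLASSIFIER_SLICE_SIZE=100
EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW=10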

Building in Windows

Due to the large number of files in the CMSIS pack, the Windows command-line length limit may be exceeded during compilation. To get around this issue, we recommend enabling the Use response file to link option within the xc32-ld settings, as shown below.

[Figure: Enabling the response file option in the xc32-ld settings]

Conclusions

You should now have a general understanding of how to deploy the Edge Impulse Inferencing SDK on a Microchip device. To learn more about the SDK, visit the "Inferencing SDK" page.
