Bench Talk for Design Engineers | The Official Blog of Mouser Electronics


Democratizing AI Development with Edge Impulse: Edge Impulse 101

Mike Parks

(Source: nexusby -- stock.adobe.com)

Canadian philosopher Marshall McLuhan once said, “We become what we behold. We shape our tools, and thereafter our tools shape us.” If so, then artificial intelligence (AI) is unique in that the object we behold is ourselves, specifically our brains. And if that is true, it will be interesting to see how the tool that is artificial intelligence recursively shapes us and our future. Democratizing the development tools that allow us to create objects imbued with AI-based capabilities will be crucial to building a bright, positive future for humanity. A company known as Edge Impulse is doing its part to ensure just that.

The paradigm shift to neural networks can be daunting for those of us embedded systems developers who cut our teeth in the heyday of procedural or even object-oriented programming. For some, it feels like giving up a bit of absolute control of the design to something that, on the surface, seems unproven, if not downright magic. Still, the promises of machine learning on the edge (that is, moving AI algorithms out of the cloud and pushing them down to the microcontrollers found in billions of IoT devices) are too tempting to ignore. Fortunately, Edge Impulse provides an incredibly straightforward and, more importantly, well-documented path for embedded systems engineers to successfully navigate the relatively new waters of AI, machine learning, and neural networks (NN).

Chances are that at some point in an embedded design, an engineer will sketch out a flowchart to understand the various states a machine will pass through during its operational lifecycle. In the same spirit, it is helpful to understand the steps one will encounter while using Edge Impulse to develop a customized neural network for a unique embedded application. The following summarizes those steps from the perspective of an embedded electronics engineer rather than that of a computer scientist specializing in artificial intelligence.

Step 1: Acquiring Training Data

The development of a neural network requires access to data. Lots and lots of data. In short, the more data, the more accurate the future NN model will be when predicting an output based on real-world operations. Edge Impulse provides a variety of easy-to-use tools to get data from the real world to their servers to develop a custom neural network. First and foremost, they provide pre-built firmware for many popular development boards (such as the TI CC1352P LaunchPad, SiLabs Thunderboard Sense 2, and the Arduino Portenta) that can access the various onboard sensors and stream the data back to Edge Impulse. For other boards, Edge Impulse provides a suite of tools under the umbrella of their Command Line Interface (CLI) toolset, available for macOS, Windows, and Linux distros such as Ubuntu and Raspbian. The CLI requires Python 3 and Node.js to be installed on your desktop. Three key tools of the CLI are:

  • EI Daemon
  • Impulse Uploader
  • Data Forwarder

These tools are especially useful for getting sensor data from a development board that lacks direct Internet connectivity. They act as a proxy, taking in data via a serial port and forwarding it on to the Edge Impulse servers via the host's internet connection. Edge Impulse also provides a browser-based mechanism to collect data (such as voice samples or accelerometer data) from a smartphone.
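To make that data path concrete, the following is a minimal host-side sketch in Python (using pyserial) that plays a similar role to the data forwarder: it reads comma-separated accelerometer samples from a hypothetical board on /dev/ttyACM0 and logs them to a CSV file that could later be uploaded with the CLI uploader. The port name, baud rate, sample count, and filename are all assumptions for illustration; in practice the data forwarder handles this proxying for you.

```python
# Minimal host-side logger: reads comma-separated accelerometer samples
# from a dev board's serial port and stores them in a CSV file for later
# upload. The port name, baud rate, and sample count are assumptions.
import csv
import serial  # pyserial

PORT = "/dev/ttyACM0"   # hypothetical serial port of the dev board
BAUD = 115200
SAMPLES = 2000          # e.g., 20 s of data at 100 Hz

with serial.Serial(PORT, BAUD, timeout=1) as ser, \
        open("machine_normal.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["accX", "accY", "accZ"])
    count = 0
    while count < SAMPLES:
        line = ser.readline().decode("utf-8", errors="ignore").strip()
        parts = line.split(",")
        if len(parts) == 3:          # expect one "x,y,z" sample per line
            writer.writerow(parts)
            count += 1
```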

From a practical perspective, carefully think through all possible states your embedded device will encounter during its operation. For example, in a recent project involving industrial machinery and identifying machine failure from accelerometer data, the development team collected plenty of data while the machine was operating under load as intended and while it was in a failure mode. Initially, however, it failed to collect data while the machine was idling. As a result, the first NN model had difficulty distinguishing between failure and idling. The NN was then retrained with data collected while the machine was idling, and the accuracy of the predictions (i.e., the NN performance) improved dramatically. Bottom line: if real estate is all about location, location, location, then machine learning is all about data, data, data.

Step 2: Labeling and Chunking Up the Raw Data

Once the training data is on the Edge Impulse servers, the remainder of the work to train a NN model (aka an “Impulse”) occurs on the Edge Impulse website via a web browser. First, the datasets we collected must be labeled with the output state that each particular dataset represents. This is accomplished by simply editing the “Label” tag for each individual dataset that was collected. Using the aforementioned industrial machinery example, a third of the data was labeled “Failure,” another third “Normal,” and the last third was labeled “Idle.” Recall that the output of a neural network is not absolute; instead, it is a percentage of certainty ascribed to each possible outcome.

With time-series data (such as accelerometer readings collected over time), it is necessary to “chunk” up the data within each dataset. As with all good problem-solving techniques, breaking a problem into smaller, more manageable chunks makes a seemingly insurmountable problem tractable. During this initial phase of NN training, you can adjust several attributes of how the data will be analyzed, including window size, window increase, sampling frequency, and whether or not the data should be zero-padded. These attributes can be adjusted to balance the tradeoff between the resolution of the analysis and the time it takes to complete.
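To make those windowing attributes concrete, here is a small Python sketch that slices a stream of 3-axis accelerometer samples into overlapping windows, zero-padding the final window if it comes up short. The 2000ms window size, 500ms window increase, and 100Hz sampling frequency are placeholder values; Edge Impulse performs this chunking on its servers, so the sketch is purely illustrative.

```python
import numpy as np

def window_data(samples, sampling_hz, window_ms, increase_ms, zero_pad=True):
    """Slice a (n_samples, n_axes) array into overlapping windows."""
    win_len = int(sampling_hz * window_ms / 1000)   # samples per window
    step = int(sampling_hz * increase_ms / 1000)    # samples between window starts
    windows = []
    for start in range(0, len(samples), step):
        chunk = samples[start:start + win_len]
        if len(chunk) < win_len:
            if not zero_pad:
                break
            pad = np.zeros((win_len - len(chunk), samples.shape[1]))
            chunk = np.vstack([chunk, pad])         # zero-pad the last window
        windows.append(chunk)
        if start + win_len >= len(samples):
            break
    return np.array(windows)

# Example: 10 s of 3-axis data at 100 Hz, 2000 ms windows, 500 ms increase
stream = np.random.randn(1000, 3)
chunks = window_data(stream, sampling_hz=100, window_ms=2000, increase_ms=500)
print(chunks.shape)   # (n_windows, 200, 3)
```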

Step 3: Analyze and Convert the Raw Data Chunks

After the data has been appropriately chunked up, it is time to analyze it by applying an appropriate analysis technique, which Edge Impulse packages as a “Processing Block.” A processing block takes the raw data and converts it into a format that can be used by the NN classifiers downstream in the training process. Edge Impulse offers several different processing blocks depending on the type of data to be analyzed (a minimal sketch of the spectral-analysis idea follows the list below).

  • Spectral Analysis: Great for analyzing repetitive motion, such as data from accelerometers. Extracts the frequency and power characteristics of a signal over time.
  • Flatten: Flattens an axis into a single value, useful for slow-moving averages like temperature data, in combination with other blocks.
  • Mel-Filterbank Energy (MFE): Extracts a spectrogram from non-voice audio signals.
  • Mel Frequency Cepstral Coefficients (MFCC): Extracts a spectrogram from human voice audio files.
  • Image: Used to identify objects in static images.
  • Custom Processing Block: For those with a background in AI-based computer science, it is also possible to upload a custom processing block tailored to your specific application.
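As promised above, here is a rough Python sketch of the spectral-analysis idea: taking one chunk of accelerometer data and reducing it to frequency and power characteristics. The specific features (per-axis RMS, dominant frequency, and total spectral power) are simplifications chosen for illustration; the actual Edge Impulse spectral block extracts a richer, configurable feature set.

```python
import numpy as np

def spectral_features(window, sampling_hz):
    """Rough stand-in for a spectral-analysis processing block:
    per-axis RMS, dominant frequency, and total spectral power."""
    feats = []
    for axis in range(window.shape[1]):
        signal = window[:, axis] - np.mean(window[:, axis])  # remove DC offset
        rms = np.sqrt(np.mean(signal ** 2))
        spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sampling_hz)
        peak_freq = freqs[np.argmax(spectrum)]               # dominant frequency
        feats.extend([rms, peak_freq, np.sum(spectrum)])
    return np.array(feats)

# One 2-second, 3-axis window sampled at 100 Hz
window = np.random.randn(200, 3)
print(spectral_features(window, sampling_hz=100))  # 9 features: 3 per axis
```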

Step 4: Classify the Data Chunks, Run the NN Classifier

Once we have the raw data converted into a usable format and understand how to extract characteristics from our datasets, it is necessary to train the NN to learn from those characteristics so it can classify test and operational datasets appropriately. In other words, all datasets that represent a system failure should be classified as such. Likewise, all datasets representing normal operations should be classified similarly. This is accomplished with the application of a so-called learning block. As with processing blocks, there are various learning blocks that can be applied depending on the type of data (a minimal sketch of the anomaly-detection idea follows the list below). For example, for rapidly fluctuating time-varying data, such as the datasets from our example, the following learning blocks are available:

  • Classification (Keras): Learns patterns from data and can apply these to new data. Great for categorizing movement or recognizing audio.
  • Anomaly Detection (K-means): Finds outliers in new data. Good for recognizing unknown states and complementing classifiers.
  • Regression (Keras): Learns patterns from data and can apply these to new data. Great for predicting continuous numeric values.
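As an illustration of the anomaly-detection learning block mentioned above, the Python sketch below fits K-means on feature vectors from known-good data and flags new windows whose distance to the nearest cluster center exceeds a threshold. The cluster count, feature size, and threshold are assumptions; Edge Impulse trains and tunes its own K-means block for you.

```python
import numpy as np
from sklearn.cluster import KMeans

# Feature vectors (e.g., spectral features) computed from windows of
# known-good "Normal" and "Idle" data. Random data stands in here.
train_features = np.random.randn(500, 9)

# Fit K-means on known-good data only; the cluster centers describe
# what "normal" looks like in feature space.
kmeans = KMeans(n_clusters=32, n_init=10, random_state=0).fit(train_features)

def anomaly_score(features):
    """Distance from each feature vector to its nearest cluster center."""
    return kmeans.transform(features).min(axis=1)

new_windows = np.random.randn(10, 9) * 3        # pretend operational data
scores = anomaly_score(new_windows)
THRESHOLD = np.percentile(anomaly_score(train_features), 99)  # assumed cutoff
print(scores > THRESHOLD)                       # True = flag as anomalous
```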

Various parameters of the signal processing algorithms can be tweaked to fine-tune the performance of the learning block. By adjusting the parameters such as cutoff frequency and Fast Fourier Transform (FFT) length, a balance can be achieved between processing time and peak Random Access Memory (RAM) usage. Edge Impulse even provides a performance estimate of processing time and RAM usage when running on the target embedded platform.
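As a back-of-the-envelope illustration of that tradeoff (the numbers below are assumptions, not Edge Impulse's estimates), a longer FFT buys finer frequency resolution at the cost of a larger working buffer:

```python
SAMPLING_HZ = 100          # assumed accelerometer sampling rate
BYTES_PER_SAMPLE = 4       # 32-bit float

for fft_len in (64, 128, 256, 512):
    resolution_hz = SAMPLING_HZ / fft_len        # width of each frequency bin
    buffer_bytes = fft_len * BYTES_PER_SAMPLE    # per-axis input buffer alone
    print(f"FFT {fft_len:4d}: {resolution_hz:5.2f} Hz/bin, "
          f">= {buffer_bytes} bytes/axis of working RAM")
```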

Lastly, the settings used to control the output of the NN classifier can be altered before finally generating the neural network model (aka impulse) itself. Parameters that can be adjusted include the number of training cycles, learning rate, validation set size, and the number of neurons for the intermediate layers of the network between the input and output layers. The ability to alter these parameters is crucial in preventing a common data science problem known as overfitting, which occurs when a model works perfectly with the training data but fails miserably when exposed to new data.
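The sketch below mirrors those knobs in plain Keras so they are easier to picture; the layer sizes, learning rate, and synthetic data are placeholders rather than Edge Impulse defaults. The number of training cycles maps to epochs, the validation set size to validation_split, and a widening gap between training and validation accuracy is the classic symptom of overfitting.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 9 spectral features per window, 3 output classes
X = np.random.randn(600, 9).astype("float32")
y = np.random.randint(0, 3, size=600)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(9,)),
    tf.keras.layers.Dense(20, activation="relu"),    # intermediate-layer neurons
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # one output per label
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),  # "learning rate"
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
history = model.fit(
    X, y,
    epochs=30,              # "number of training cycles"
    validation_split=0.2,   # "validation set size" (20%)
    verbose=0,
)
# A large gap between these two numbers suggests overfitting.
print("train acc:", history.history["accuracy"][-1])
print("val   acc:", history.history["val_accuracy"][-1])
```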

Step 5: Test the NN Model

Overfitting is a common concern for developers of machine learning algorithms. To ensure that the model generalizes sufficiently, it is necessary to test the neural network that Edge Impulse has generated against independent test data. The same techniques that Edge Impulse offers to collect training data can be used to collect test data. In addition to classifying previously recorded test data, it is also possible to have data streamed from the test device and classified in real-time on Edge Impulse servers. Designers can use either a direct connection in the firmware powered by the Edge Impulse Application Programming Interface (API) or the data forwarder proxy to get data from the sensor to the cloud.
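However the test data reaches the model, the evaluation itself comes down to comparing predicted labels against the labels assigned during collection. A minimal sketch of that bookkeeping, with made-up labels, looks like this (Edge Impulse Studio presents a similar per-class breakdown in its model testing view):

```python
from sklearn.metrics import accuracy_score, confusion_matrix

labels = ["Normal", "Failure", "Idle"]
# Hypothetical results on independent test windows
y_true = ["Normal", "Normal", "Failure", "Idle", "Failure", "Idle", "Normal"]
y_pred = ["Normal", "Idle",   "Failure", "Idle", "Failure", "Idle", "Normal"]

print("accuracy:", accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred, labels=labels))  # rows = true, cols = predicted
```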

Step 6: Deploy the NN Model

After the neural network achieves satisfactory results against test data, it’s time to package the NN model into a software library that can be deployed on microcontroller-based systems. Edge Impulse makes this an incredibly straightforward process. First, the model can be placed under version control so that future refinements can be compared to past models should that be needed. Next, the model can be turned into “turnkey ready” firmware for various embedded system development boards.

For development boards that Edge Impulse does not directly support, it is still possible to generate generic libraries, including the model files, for targets based on C++, Arduino, WebAssembly, TensorRT, and STM32Cube.MX CMSIS-PACK. Before the library or firmware is generated, it is also possible to run optimizers to favor either speed or memory usage, depending on the hardware specifications of the platform the NN model (aka impulse) will run on. In addition, the impulse can be built to process sensor data as either 8-bit (quantized) integers or 32-bit floating-point numbers.
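The 8-bit versus 32-bit option reflects the standard quantization tradeoff in TensorFlow Lite, which Edge Impulse builds on. The sketch below, using a throwaway Keras model as a stand-in for a trained impulse, shows how an int8-quantized conversion typically shrinks the model compared to a float32 one, usually at the cost of a small amount of accuracy.

```python
import numpy as np
import tensorflow as tf

# Placeholder model standing in for a trained impulse
model = tf.keras.Sequential([
    tf.keras.Input(shape=(9,)),
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Float32 conversion
converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_model = converter.convert()

# Int8 (quantized) conversion needs representative input samples
def representative_data():
    for _ in range(100):
        yield [np.random.randn(1, 9).astype("float32")]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
int8_model = converter.convert()

print("float32 size:", len(float_model), "bytes")
print("int8    size:", len(int8_model), "bytes")
```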

Impulses can also be run on embedded systems running Linux, as Edge Impulse provides Software Development Kits (SDKs) for C++, Go, Node.js, and Python. It is also possible to run impulses on Windows and macOS with a C++ library.
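A minimal classification loop with the Python SDK might look like the following sketch. The module, class, and result fields shown are recalled from the edge_impulse_linux package and should be verified against the SDK documentation; the .eim model path and feature vector are placeholders.

```python
# Sketch only: class/method names are assumptions based on the
# edge_impulse_linux Python SDK and should be verified against its docs.
from edge_impulse_linux.runner import ImpulseRunner

MODEL_PATH = "modelfile.eim"   # placeholder: impulse exported for Linux
features = [0.0] * 33          # placeholder: one window of processed features

runner = ImpulseRunner(MODEL_PATH)
try:
    model_info = runner.init()          # load the impulse and report its metadata
    result = runner.classify(features)  # run one inference on the feature vector
    print(result["result"]["classification"])  # e.g., {'Normal': 0.93, 'Failure': 0.04, ...}
finally:
    runner.stop()
```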

Lastly, the impulse can be deployed to a smartphone directly without the need for any additional application being installed on the target device.

Summary

For those looking to integrate AI technologies into their next embedded system project, looking through the documentation and forums of Edge Impulse is a free and easy way to start understanding ML on the edge. A limited, free version is available for testing out the Edge Impulse ecosystem. The key constraints of the free tier are a single developer seat, a maximum job processing time of 20 minutes, and a cloud storage limit of 4GB or 4 hours of data. In addition, an enterprise version is available, priced on a per-project basis, which removes the restrictions of the free tier and provides access to a private cloud and five seats per project.





Michael Parks, P.E. is the co-founder of Green Shoe Garage, a custom electronics design studio and embedded security research firm located in Western Maryland. He produces the Gears of Resistance Podcast to help raise public awareness of technical and scientific matters. Michael is also a licensed Professional Engineer in the state of Maryland and holds a Master’s degree in systems engineering from Johns Hopkins University.

