(Source: RedlineVector/Shutterstock.com)
Automation has long fascinated innovative minds, as far back as the ancient Greeks. The 20th century saw rapid adoption of automation technologies, thanks to mass electrification in the early part of the century and the invention of the semiconductor in the latter half. However, automation has typically been restricted to tightly controlled spaces, such as factories, where every scenario can be planned and accounted for in the design of the associated systems. The real world tends to be far less predictable, so the adoption of autonomous systems has been relatively limited because of safety concerns. That said, automation promises benefits too significant to ignore. For example, autonomous vehicles could give the freedom of mobility back to people with paraplegia.
Machine-learning (ML) algorithms are poised to be critical players in bending the curve of autonomous system adoption. Of keen interest to embedded systems developers is combining these efficient, brain-like algorithms with inexpensive yet powerful microcontrollers and sensors. This technological union has given rise to so-called edge computing, which promises billions of affordable embedded electronic systems that interact with the physical world nearly instantaneously, without the latency of a round trip over an internet connection. Because inference runs locally, edge computing can bring uncompromised ML-powered capabilities to even the most remote locations with no connectivity. The bottom line is that edge computing represents a revolution in automation, both in scale and capability.
With this revolution, embedded systems developers are being challenged to reimagine a wide range of consumer and industrial products, leveraging ML technologies to make them safer, easier to use, or more efficient. Thankfully, companies such as Microchip Technology offer inexpensive yet powerful development boards that allow developers to explore and integrate ML-centric technologies into product prototypes quickly. We will explore how rapid prototyping can be accomplished with Microchip Technology’s MPLAB X integrated development environment (IDE) and their family of 32-bit microcontrollers and microprocessors (Figure 1).
For humans, our entire experience with the physical world is processed by the hundred billion neurons that comprise the brain. Its ability to learn and adapt, coupled with its extraordinary energy efficiency, makes the biological brain a triumph of nature’s engineering. Reproducing the entire brain’s functionality artificially (a true general-purpose artificial intelligence, or AI) remains decades away. However, certain subsets of the brain’s capabilities can be reproduced today, thanks to emerging machine-learning technologies. For example, machine-vision algorithms can give electronic devices the ability to identify and classify objects in a camera’s field of view.
Why is this important? Widespread adoption of automation means humans and technology will interact more frequently and potentially in increasingly risky ways. To mitigate these risks, machines must become more adept at sensing and understanding their environment. Machine vision is one such mechanism for giving devices the ability to see and comprehend physical 3D space. From a practical perspective, detecting the presence of a human in a physical space has widespread implications across numerous use cases related to safety, security, and elderly/childcare, to name a few.
Powerful ML algorithms require equally powerful hardware. Microchip offers a wide array of 32-bit microprocessors and microcontrollers to meet nearly all performance and cost requirements for developers looking to build AI-at-the-edge product lines. Microchip makes it easy to develop and test these solutions using their ML evaluation kits, such as the EV18H79A or EV45Y33A. The VectorBlox™ Accelerator Software Development Kit (SDK) enables the design of low-power, small-form-factor AI/ML applications on Microchip’s PolarFire® Field Programmable Gate Arrays (FPGAs). FPGAs are well-suited for edge AI applications, including inferencing in power-constrained compute environments, because they can deliver more giga operations per second (GOPS) per watt than central processing units (CPUs) or graphics processing units (GPUs). Designers can implement their algorithms on PolarFire FPGAs to meet the growing demand for power-efficient inferencing in edge applications. What’s more, working with PolarFire FPGAs requires no prior FPGA design experience: the VectorBlox Accelerator SDK is designed to let developers code in C/C++ and program power-efficient neural networks.
Integrating machine-vision algorithms with microcontroller hardware requires embedded systems developers to expand their knowledge and skills. To aid in that education, Microchip has partnered with various AI-focused startups to integrate their AI training solutions right into the MPLAB X IDE. First is the NanoEdge AI suite from Cartesiam. NanoEdge AI Library is a tool for finding and integrating C-language AI libraries into your embedded firmware project, while NanoEdge AI Studio lets an embedded developer abstract away the details of signal processing and ML model training. The end result is a static library that can be linked into the main .c file and run on any of Microchip’s Arm Cortex-based microcontrollers.
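To make that concrete, here is a minimal sketch of what calling such a static library from firmware could look like. The function names are modeled on Cartesiam’s published NanoEdge AI Library interface (an init call, a learn call, and a detect call that returns a similarity score), but treat them as illustrative: the exact header name, signatures, and buffer sizes come from the library that NanoEdge AI Studio generates for your project, and read_accelerometer() is a hypothetical board-specific helper.

```c
#include <stdint.h>
#include <stdio.h>
#include "NanoEdgeAI.h"  /* header generated by NanoEdge AI Studio; name may vary */

#define AXIS_COUNT  3    /* e.g., a 3-axis accelerometer */
#define SAMPLES     256  /* buffer length chosen in NanoEdge AI Studio */

static float input_buffer[AXIS_COUNT * SAMPLES];

/* Hypothetical board-specific sensor read; replace with your driver */
extern void read_accelerometer(float *buf, unsigned int len);

int main(void)
{
    uint8_t similarity = 0;

    neai_anomalydetection_init();  /* one-time library setup */

    /* Learning phase: show the library examples of "normal" signals */
    for (int i = 0; i < 100; i++) {
        read_accelerometer(input_buffer, AXIS_COUNT * SAMPLES);
        neai_anomalydetection_learn(input_buffer);
    }

    /* Detection phase: flag signals that deviate from what was learned */
    for (;;) {
        read_accelerometer(input_buffer, AXIS_COUNT * SAMPLES);
        neai_anomalydetection_detect(input_buffer, &similarity);
        if (similarity < 90) {  /* threshold is application-specific */
            printf("Anomaly detected (similarity %u%%)\n", similarity);
        }
    }
}
```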
Edge Impulse is a complete TinyML training and deployment pipeline, including dataset collection, digital signal processing (DSP), ML model training, testing, and generation of highly efficient inference code across a wide range of sensor, audio, and vision applications. Thanks to an MPLAB X IDE plugin, training data can be sent quickly to Edge Impulse from nearly all of Microchip’s 32-bit Arm microcontrollers.
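For a sense of what the deployed side looks like, here is a sketch based on Edge Impulse’s documented inferencing SDK, which exports your trained model as a library you compile into the firmware (as C/C++ source, built as C++). The macros and types (EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, signal_t, run_classifier) come from the exported library, so verify names against your own generated project; filling the features buffer from a camera or sensor is left as an assumption here.

```c
#include <string.h>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

/* Raw feature buffer, filled from your sensor or camera frame */
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

/* Callback the SDK uses to pull slices of the feature buffer */
static int get_feature_data(size_t offset, size_t length, float *out_ptr)
{
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int run_inference(void)
{
    signal_t signal;              /* wraps the raw feature buffer */
    ei_impulse_result_t result;

    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_feature_data;

    /* run_classifier() performs the DSP step plus neural-network inference */
    if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
        return -1;
    }

    /* Print the confidence score for each trained label, e.g., "person" */
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        ei_printf("%s: %.3f\n", result.classification[i].label,
                  result.classification[i].value);
    }
    return 0;
}
```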
Lastly, Microchip has also partnered with Motion Gestures to provide a unique mechanism for delivering gesture detection capabilities to embedded systems. Motion Gestures’ tools give developers pattern recognition for gestures based on motion, touch, and vision. Developers can leverage Motion Gestures’ prebuilt library of gestures or use a smartphone app to train their own. A plugin for MPLAB X IDE even lets developers easily integrate the Motion Gestures software library with libraries for a variety of Microchip sensors (e.g., capacitive touch, inertial measurement units or IMUs).
MPLAB X IDE is a powerful and highly expandable development suite for many of Microchip’s microcontrollers and digital signal processors. It is available for Windows, macOS, and Linux. It offers numerous features of keen interest to embedded developers, including a data visualizer, an I/O pin viewer, and even a web-based version that allows developers to access their source code from any computer in the world.
Figure 1: Infographic of Microchip Technology’s AI and Machine Learning Solutions for Smart Applications (Source: Mouser Electronics)
Here is a basic project that should give you the confidence and skills to develop more sophisticated machine-vision projects of your own by leveraging Microchip Technology’s 32-bit microprocessors and microcontrollers. As mentioned before, computer vision can be useful in numerous safety and security applications. Instead of illuminating an LED, a general-purpose input/output (GPIO) pin could instead trigger a relay to break the current flow to heavy machinery should a person enter a location where they are not supposed to be, as sketched below. Or a security device could sound an alarm should a person be detected after hours.
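Here is a minimal sketch of that relay-control logic, assuming an MPLAB Harmony-generated project for a Microchip 32-bit microcontroller. If a pin is named RELAY_PIN in Harmony’s pin configurator, the framework generates RELAY_PIN_Set()/RELAY_PIN_Clear() macros; the person_detected() helper is a hypothetical stand-in for whatever your ML model’s classification output provides.

```c
#include <stdbool.h>
#include "definitions.h"  /* MPLAB Harmony-generated peripheral declarations */

/* Hypothetical stand-in for the ML model's person-detection result */
extern bool person_detected(void);

int main(void)
{
    SYS_Initialize(NULL);       /* Harmony system and peripheral init */

    while (true)
    {
        if (person_detected())
        {
            /* De-energize the relay to break current to the machinery */
            RELAY_PIN_Clear();
        }
        else
        {
            RELAY_PIN_Set();    /* normal operation: relay energized */
        }

        SYS_Tasks();            /* service Harmony middleware */
    }
    return 0;
}
```

The same structure applies to the alarm use case: swap the relay pin for one driving a buzzer or siren.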
Of course, developers are not limited to identifying humans. ML algorithms can be trained to identify and classify any number of object types. Some use cases may call for something other than visual identification; audio-identification ML algorithms could be substituted to trigger outputs based on sounds instead of images. Regardless of the type of input, the hardware and software tools from Microchip and their AI-startup partners offer a quick and easy workflow to bring ML capability to the edge.
The bottom line: ML algorithms coupled with powerful, low-cost embedded systems are ushering more robust and intelligent automation into the world. Embedded systems developers now have access to numerous tools to help them embed machine-learning technology into their products quickly and inexpensively. Prudent product developers should be asking how, not if, machine-learning technologies can be adapted to their products to provide additional value to potential customers. Hopefully, this project stirred the imagination, and you are asking yourself: How can I leverage machine learning to bring artificial intelligence to the edge in my products?
Michael Parks, P.E. is the co-founder of Green Shoe Garage, a custom electronics design studio and embedded security research firm located in Western Maryland. He produces the Gears of Resistance Podcast to help raise public awareness of technical and scientific matters. Michael is also a licensed Professional Engineer in the state of Maryland and holds a Master’s degree in systems engineering from Johns Hopkins University.